To see the other types of publications on this topic, follow the link: Univariate and multivariate analysis.

Dissertations / Theses on the topic 'Univariate and multivariate analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Univariate and multivariate analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zhou, Feifei, and 周飞飞. "Cure models for univariate and multivariate survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45700977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ley, Christophe. "Univariate and multivariate symmetry: statistical inference and distributional aspects." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210029.

Full text
Abstract:
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, both in a univariate and multivariate context, and is divided into three distinct parts.

The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of those problems, which has led to the aforementioned two conjectures. This thesis completely solves these two problems.

The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing very general skewing mechanisms. In Chapter 4, we make use of the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) and very flexible tests for univariate symmetry. Because their mechanism can turn a given symmetric distribution into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms that share all the nice properties of their competitors in Ferreira and Steel (2006) but can, moreover, be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k>1 and study the principal characteristics of these new multivariate mechanisms.

Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry based on the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyze its asymptotic behavior (especially in dimension k=2) under the null hypothesis and its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests.
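As an aside for readers unfamiliar with the univariate test being generalised here, the McWilliams (1990) runs test can be sketched in a few lines. This is only an illustrative implementation of the standard formulation (symmetry about a known centre, normal approximation to the run-count distribution), not code from the thesis.

```python
import numpy as np
from scipy.stats import norm

def mcwilliams_runs_test(x, center=0.0):
    """Runs test for symmetry about a known centre (McWilliams, 1990).

    Observations are ranked by absolute deviation from the centre; the signs
    taken in that order should look like a random binary sequence under
    symmetry, so unusually few runs indicate asymmetry.
    """
    d = np.asarray(x, dtype=float) - center
    d = d[d != 0]                                   # drop exact ties with the centre
    signs = np.sign(d[np.argsort(np.abs(d))])
    runs = 1 + np.sum(signs[1:] != signs[:-1])
    n_pos, n_neg = np.sum(signs > 0), np.sum(signs < 0)
    n = n_pos + n_neg
    mean = 1 + 2 * n_pos * n_neg / n                # Wald-Wolfowitz run-count moments
    var = 2 * n_pos * n_neg * (2 * n_pos * n_neg - n) / (n ** 2 * (n - 1))
    z = (runs - mean) / np.sqrt(var)
    return runs, norm.cdf(z)                        # small run counts give small p-values

rng = np.random.default_rng(0)
print(mcwilliams_runs_test(rng.normal(size=500)))            # symmetric: large p expected
print(mcwilliams_runs_test(rng.exponential(size=500) - 1.0)) # skewed: small p expected
```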

Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
3

McGeehan, Lawrence T. "Multivariate and Univariate Analyses of the Geographic Variation within Etheostoma Flabellare (Pisces: Percidae) of Eastern North America." Connect to resource, 1985. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1218739588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Jessica. "Evaluating Restoration Success of a Southern California Wetland Comparing Univariate Analysis to Multivariate and Equivalence Analyses." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10752347.

Full text
Abstract:

Loss of wetland habitat and its associated services and functions during the past century has been extensive. As a solution, managers have turned to restoration, but even regionally, researchers lack agreement on monitoring criteria and analytical methods for defining restoration success. This study investigated the recovery trajectory of two recently restored wetlands in southern California compared to a reference site using univariate, multivariate and equivalence analyses. Important abiotic and biotic parameters in the two restored marshes, such as salinity and invertebrate abundance, were equal to or higher than those in the reference marsh when assessed with traditional hypothesis-based statistics such as ANOVAs, indicating potential restoration success after 4 years. However, invertebrate community composition remained significantly different when assessed with multivariate analysis. Inequivalence tests (an interval-based approach with a reversed null hypothesis) indicated that fewer parameters achieved restoration success, representing a more conservative approach. Overall, this demonstrates the need for long-term comprehensive monitoring that includes novel approaches to statistical analysis.
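The "reversed null hypothesis" equivalence approach mentioned above can be illustrated with a two one-sided tests (TOST) sketch. The ±20% margin, the simple pooled-degrees-of-freedom approximation and the synthetic data are assumptions made purely for illustration and do not reflect the thesis's actual criteria.

```python
import numpy as np
from scipy import stats

def tost_equivalence(restored, reference, margin):
    """Two one-sided t-tests: the null hypothesis is |mean difference| >= margin.

    Equivalence (restoration success) is concluded only if BOTH one-sided
    tests reject, i.e. the larger of the two p-values is small.
    """
    diff = np.mean(restored) - np.mean(reference)
    se = np.sqrt(np.var(restored, ddof=1) / len(restored)
                 + np.var(reference, ddof=1) / len(reference))
    df = len(restored) + len(reference) - 2            # rough df approximation for the sketch
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return diff, max(p_lower, p_upper)

rng = np.random.default_rng(1)
restored = rng.normal(10.2, 2.0, 40)    # e.g. invertebrate abundance per sample (synthetic)
reference = rng.normal(10.0, 2.0, 40)
print(tost_equivalence(restored, reference, margin=0.2 * 10.0))
```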

APA, Harvard, Vancouver, ISO, and other styles
5

Stevens, James G. "An investigation of multivariate adaptive regression splines for modeling and analysis of univariate and semi-multivariate time series systems." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Acres, Daniel Nigel Gerard. "The behaviour of style anomalies in worldwide sector indices : a univariate and multivariate analysis." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/8909.

Full text
Abstract:
Includes bibliographical references.
The aim of this thesis is to explain the cross-section of International Classification Benchmark (ICB) level 4 (sector) index returns. A worldwide study of 48 developed and emerging countries is conducted, considering up to 38 sector indices per country. In cluster and factor analyses of the sector returns all the developed markets are found to cluster together, as are the emerging markets, suggesting diversification benefits from investing across the two. The one-month-ahead return forecasting power of 35 sector-specific attributes is investigated over an in-sample period from 31 January 1995 to 31 December 2001 and an out-of-sample period from 31 January 2002 to 31 December 2005. The data is adjusted for look-ahead bias, outliers, influential observations and non-uniformity across markets. Monthly sector returns are cross-sectionally regressed on the attributes in a similar fashion to Fama and MacBeth (1973). Sector returns are considered both before and after risk adjustment with the Capital Asset Pricing Model (CAPM), the Arbitrage Pricing Theory (APT) model and Solnik's (2000) version of the International CAPM (ICAPM). The ICAPM is found to be the best performing model but, in general, the evidence does not support covariance-based models of asset pricing. Nine attributes are found to be significant and robust over the two sample periods, namely cash earnings per share to price (CP), dividend yield (DY), cash earnings to book value (CB), 6 and 12-month growth in cash earnings, to price (C-6P & C-12P), 12 and 24-month growth in dividends, to price (D-12P & D-24P), the payout ratio (PO) and 12-month prior return (MOM-12). All the significant attributes from the univariate regression tests are found to pay off consistently in the positive direction when tested with the nonparametric Sign Test. Nine of the significant attributes, namely book value per share to price (BP), dividend yield (DY), earnings yield (EY), 6-month growth in cash earnings, to price (C-6P), cash earnings to book value (CB), 24-month growth in dividends, to price (D-24P), 24-month growth in earnings, to price (E-24P), and 12-month and 18-month prior return (MOM-12 & MOM-18), are also found to have significantly low frequencies of changes in payoff direction when assessed with the nonparametric Runs Test. Seven style timing models are developed, all of which produce significantly accurate payoff direction forecasts for most of the significant attributes. The timing models are, however, generally inaccurate in forecasting the magnitude of the payoffs. Very little seasonality is observed in the payoffs to the significant attributes. Two sets of seven 'stepwise optimal' and 'control' multivariate models are constructed from the significant univariate in-sample attributes in order to forecast the payoffs to the factors in a controlled multifactor setting. The stepwise optimal models are derived from a stepwise procedure, whilst the 'control' models comprise all the attributes which are found to be significant in one or more of the 'optimal' models. The forecasting power of all the models is found to be below an exploitable level; of the 'control' models the single exponential smoothing model is the most accurate out-of-sample performer. Weighted Least Squares (WLS) models are used to allow for the possibility of heteroskedasticity, which may exist in the cross-section of worldwide sector returns.
The WLS models are ineffective in improving forecasting power when the inverse of the 12-month rolling standard deviation of the residuals is used as the weight series.
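A minimal sketch of the Fama and MacBeth (1973)-style procedure described in this abstract: each month the cross-section of sector returns is regressed on lagged attributes, and the mean of the resulting payoff series is tested against zero. The column names, data layout and synthetic usage example are illustrative assumptions, not the thesis's data.

```python
import numpy as np
import pandas as pd

def fama_macbeth_payoffs(panel, attribute_cols, return_col="fwd_return"):
    """Month-by-month cross-sectional OLS of forward returns on attributes.

    `panel` is a long DataFrame with one row per sector per month. Returns the
    time series of payoffs (slope coefficients) and the t-statistic of each
    mean payoff, as in Fama and MacBeth (1973).
    """
    payoffs = []
    for _, month in panel.groupby("date"):
        X = np.column_stack([np.ones(len(month)), month[attribute_cols].to_numpy()])
        y = month[return_col].to_numpy()
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        payoffs.append(beta[1:])                       # drop the intercept
    payoffs = pd.DataFrame(payoffs, columns=attribute_cols)
    t_stats = payoffs.mean() / (payoffs.std(ddof=1) / np.sqrt(len(payoffs)))
    return payoffs, t_stats

# Illustrative usage on a synthetic panel of 3 sectors over 24 months.
dates = pd.date_range("2000-01-31", periods=24, freq="M")
rng = np.random.default_rng(0)
panel = pd.DataFrame({"date": np.repeat(dates, 3),
                      "DY": rng.normal(size=72),
                      "fwd_return": rng.normal(size=72)})
print(fama_macbeth_payoffs(panel, ["DY"])[1])
```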
APA, Harvard, Vancouver, ISO, and other styles
7

ARAÚJO, Adalberto Gomes de. "Comparação entre métodos univariados e multivariados na seleção de variáveis independentes, na construção de tabelas volumétricas para Leucaena leicocephala (Lam) de Wit." Universidade Federal Rural de Pernambuco, 2005. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/4450.

Full text
Abstract:
The objective of this work was to use multivariate and univariate statistical methods for the selection of independent variables in mathematical models for the construction of volume tables for Leucaena leucocephala, seeking a reduction in time and costs without loss of precision. The data came from an experiment carried out at the Experimental Station of the Agricultural Research Institute (IPA), Caruaru-PE. A total of 201 leucaena trees had their volumes (the dependent variable) measured by Smalian's method, and 20 independent variables were measured on the same trees. For the selection of the independent variables the following methods were used: Principal Components, Cluster Analysis, Maximum and Minimum R2, Stepwise, Forward, Backward and the Akaike Criterion. In general, the univariate and multivariate methods used in the selection of independent variables for volume models showed similar responses, even though they had different structures in relation to the independent variables, provided the number of those variables is high. Besides the applied statistical tests, the researcher's judgment about the relevance of the selected independent variables in the final equations is of great importance, mainly in the reduction of costs and sampling errors.
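Smalian's cubing method mentioned above computes each stem section's volume as the mean of its two end cross-sectional areas times the section length; a minimal sketch, assuming diameters measured in centimetres at equally spaced positions along the stem:

```python
import math

def smalian_volume(diameters_cm, section_length_m):
    """Stem volume (m^3) by Smalian's formula: each section's volume is the
    mean of its two end cross-sectional areas times the section length."""
    areas = [math.pi * (d / 100.0) ** 2 / 4.0 for d in diameters_cm]
    return sum((a1 + a2) / 2.0 * section_length_m
               for a1, a2 in zip(areas[:-1], areas[1:]))

# Illustrative only: diameters taken every metre along a 4 m stem.
print(smalian_volume([12.0, 10.5, 9.0, 7.2, 5.0], section_length_m=1.0))
```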
APA, Harvard, Vancouver, ISO, and other styles
8

Bradshaw, Steve. "Style anomalies on the London Stock Exchange : an analysis of univariate, multivariate and timing strategies." Thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/6691.

Full text
Abstract:
According to Dimson (1998), modern financial theory is founded on the assumption that markets are highly efficient. The presence of anomalous stock market behaviour has therefore attracted a great amount of research internationally. This thesis investigates the presence and exploitability of style anomalies on the London Stock Exchange (LSE) and is divided into three main branches of research.
APA, Harvard, Vancouver, ISO, and other styles
9

Janari, Emile. "The behaviour of style anomalies on the Australian Stock Exchange : a univariate and multivariate analysis." Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/15905.

Full text
Abstract:
Includes bibliographical references.
Recent attempts to empirically verify the Sharpe (1964), Lintner (1965), Mossin (1966), and Black (1972) Capital Asset Pricing Model (CAPM) have identified numerous inconsistencies with the model's predictions. A number of variables have displayed evidence of the ability to explain the cross-sectional variation in share returns beyond that explained by beta. These anomalous effects have become known as "style effects" or "style characteristics". This thesis sets out to examine the existence and behaviour of these style characteristics over the period June 1994 to May 2004. A data set of 207 firm-specific attributes is created for all Australian Stock Exchange (ASX) All Ordinaries stocks listed on 1 September 2004. The data are adjusted for both thin trading and look-ahead bias. The study largely follows the tests of van Rensburg and Robertson (2003), who adopt the characteristic-based approach of Fama and MacBeth (1973). Attributes are tested for the ability to explain the cross-sectional variation in ASX share returns beyond that explained by the CAPM and a principal-components-derived APT model. Similar significant characteristics are found when unadjusted and both risk-adjusted returns sets are examined. The set of significant characteristics derived from the unadjusted returns test is then simplified using correlation analysis and an agglomerative hierarchical clustering algorithm, resulting in a list of 27 variables that are not highly correlated with each other. These characteristics are divided into nine interpretation groups or combinations thereof, namely: (1) Liquidity; (2) Momentum; (3) Performance; (4) Size; (5) Value; (6) Change in Liquidity; (7) Change in Performance; (8) Change in Size; and (9) Change in Value. While the existence of the anomalies found in prior Australian literature (size, price-per-share, M/B, cashflow-to-price, and short- to medium-term momentum) is confirmed, the P/E effect is not found to be significant in this study. As these previously documented anomalies only cover five of the final 27 characteristics, this paper identifies 22 new Australian anomalies. Six style-timing models are evaluated for the ability to forecast the monthly payoffs to the 27 characteristics. A twelve-lag autoregressive model convincingly displays the best performance against moving average and historic mean models. Parametric and nonparametric tests find inconclusive evidence of seasonality in the monthly payoffs to the attributes. The 27 significant style characteristics are then used to construct a multifactor style-characteristics model which comprises a set of factors that are significant when simultaneously cross-sectionally regressed on share returns. The employed construction method yields a five-factor style model for the ASX comprising: (1) prior twelve-month momentum; (2) book-to-market value; (3) two-year percentage change in dividends paid; (4) cashflow-to-price; and (5) two-year percentage change in market-to-book value. Finally, a stepwise procedure is performed using six style-timing models. Five dynamic multifactor expected return models are created and contrasted with a static multifactor expected return model similar to that used in van Rensburg and Robertson (2003). The derived expected return models have between three and thirteen factors. While all six models display good forecasting ability, the dynamic (trailing moving average) models all perform better than the static (historic mean) model.
This is convincing evidence that the asset pricing relationship follows a dynamic model.
APA, Harvard, Vancouver, ISO, and other styles
10

Schwartz, Michael. "Optimized Forecasting of Dominant U.S. Stock Market Equities Using Univariate and Multivariate Time Series Analysis Methods." Chapman University Digital Commons, 2017. http://digitalcommons.chapman.edu/comp_science_theses/3.

Full text
Abstract:
This dissertation documents an investigation into forecasting U.S. stock market equities via two very different time series analysis techniques: 1) autoregressive integrated moving average (ARIMA), and 2) singular spectrum analysis (SSA). Approximately 40% of the S&P 500 stocks are analyzed. Forecasts are generated for one and five days ahead using daily closing prices. Univariate and multivariate structures are applied and results are compared. One objective is to explore the hypothesis that a multivariate model produces superior performance over a univariate configuration. Another objective is to compare the forecasting performance of ARIMA to SSA, as SSA is a relatively recent development and has shown much potential. Stochastic characteristics of stock market data are analyzed and found to be definitely not Gaussian, but instead better fit to a generalized t-distribution. Probability distribution models are validated with goodness-of-fit tests. For analysis, stock data is segmented into non-overlapping time “windows” to support unconditional statistical evaluation. Univariate and multivariate ARIMA and SSA time series models are evaluated for independence. ARIMA models are found to be independent, but SSA models are not able to reach independence. Statistics for out-of-sample forecasts are computed for every stock in every window, and multivariate-univariate confidence interval shrinkages are examined. Results are compared for univariate, bivariate, and trivariate combinations of highly-correlated stocks. Effects are found to be mixed. Bivariate modeling and forecasting with three different covariates are investigated. Examination of results with covariates of trading volume, principal component analysis (PCA), and volatility reveal that PCA exhibits the best overall forecasting accuracy in the entire field of investigated elements, including univariate models. Bivariate-PCA structures are applied in a back-testing environment to evaluate economic significance and robustness of the methods. Initial results of back-testing yielded similar results to those from earlier independent testing. Inconsistent performance across test intervals inspired the development of a second technique that yields improved results and positive economic significance. Robustness is validated through back-testing across multiple market trends.
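The univariate ARIMA side of the comparison can be sketched with statsmodels. The (1, 1, 1) order and the synthetic price series are placeholders chosen for illustration; they are not the orders or data fitted in the dissertation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 500)))  # synthetic daily closes

train = prices[:-5]
model = ARIMA(train, order=(1, 1, 1))     # placeholder order, not a fitted specification
fit = model.fit()
print(fit.forecast(steps=1))              # one-day-ahead forecast
print(fit.forecast(steps=5))              # five-day-ahead forecasts
```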
APA, Harvard, Vancouver, ISO, and other styles
11

Dunn, Bryan. "Style anomalies on the Toronto Stock Exchange : a univariate, multivariate, style timing and portfolio sorting analysis." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/10429.

Full text
Abstract:
Includes bibliographical references.
A growing body of empirical evidence has found inconsistencies in the Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965), and Black (1972) and Ross's (1976) Arbitrage Pricing Theory (APT). Numerous attempts to explore the validity of these theories of modern finance have led to the identification of various firm-specific attributes that explain the cross-sectional variation of returns. These attributes have appropriately been termed 'style anomalies'. This thesis investigates the existence and exploitability of style anomalies for the shares comprising the Toronto Stock Exchange (TSX) for the period 31 January 1989 to 31 July 2005. The investigation is divided into four areas of research. First, a methodology similar to Fama and MacBeth (1973) is used to explore the cross-sectional relationships between some 904 firm-specific attributes and the unadjusted and risk-adjusted monthly returns of equities constituting the S&P TSX Composite Index. A myriad of uncorrelated style anomalies are found to persist before and after controlling for systematic risk, and are categorized as size, growth, momentum, value, liquidity or bankruptcy (risk) effects. The most significant attributes from each respective style group include: price, eighteen-month change in net tangible asset value, price change over twelve months, twelve-month change in price to net tangible asset value, three-month change in the absolute volume ratio and interest cover before tax. Multivariate testing confirms the ability of anomalies to explain excess returns. In-sample and out-of-sample cross-sectional tests show inconsistent anomaly persistence, raising the question of whether they are perhaps perennial in nature. Second, the predictability of style payoffs is examined through the analysis of autocorrelation and six style timing models. Strong positive autocorrelation at lower orders for the majority of style payoffs suggests that timing payoffs is possible. The six-month moving average timing model shows the best forecasting skill, followed by the twelve-month and eighteen-month moving average models. Third, the presence of firm-specific attributes is compared across three classified sectors, namely Basic Materials, Cyclicals and Non-Cyclicals. Risk, value and liquidity based anomalies dominate the Basic Materials shares. Liquidity effects stand out within the Cyclicals group, and the Non-Cyclicals sectors exhibit value and size effects. The ability to exploit all style-based anomalies after accounting for transaction costs is evaluated using a portfolio sorting methodology. The tests illustrate that increased exposure to the anomalies has delivered substantially higher returns with lower volatility than a buy-and-hold approach using an equally weighted all-share benchmark. These abnormal returns are confirmed after adjusting for systematic risk. Further testing shows that the attributes, rather than loadings on those attributes, are better at explaining share returns. Finally, the seasonal nature of Canadian equity returns is investigated. A six-month strategy of "selling in June and going away till December" provides the optimal returns. The calendar month tests find January, February and December to be the strongest months of the year. Attribute payoffs seem to show vague seasonal tendencies.
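The portfolio sorting methodology referred to above can be illustrated by ranking shares on an attribute each month and comparing the extreme quantile portfolios. The quintile choice, column names and equal weighting are assumptions made for this sketch only.

```python
import pandas as pd

def quintile_sort_returns(panel, attribute, return_col="fwd_return", q=5):
    """Assign shares to attribute quantiles each month and compute the equally
    weighted forward return of each quantile portfolio, plus the high-minus-low
    spread between the top and bottom portfolios."""
    panel = panel.copy()
    panel["bucket"] = (panel.groupby("date")[attribute]
                            .transform(lambda s: pd.qcut(s, q, labels=False,
                                                         duplicates="drop")))
    by_month = panel.groupby(["date", "bucket"])[return_col].mean().unstack()
    spread = by_month.iloc[:, -1] - by_month.iloc[:, 0]
    return by_month, spread
```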
APA, Harvard, Vancouver, ISO, and other styles
12

Speer, William D. "Systematics of Eastern North American Bracken Fern." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36715.

Full text
Abstract:
The cosmopolitan Pteridium aquilinum (L.) Kuhn is widespread throughout eastern North America, where it is represented primarily by Tryon's (1941) var. latiusculum (Desv.) Underw. and var. pseudocaudatum (Clute) Heller. The taxonomy of Pteridium is controversial. Fourteen isozyme loci and 12 morphological characters were used to assess the taxonomic relationship of these two varieties. Isozyme data indicated a high mean genetic identity (I = 0.976) among eleven bracken populations. Strong patterns of geographic variation for isozyme allele frequencies were also observed. The isozyme results did not separate the two taxa. Numerical analysis of the morphology distinguished the two taxa when the qualitative characters were used alone or in conjunction with some of the quantitative traits. All qualitative characters differed significantly between the two taxa. No perceptible geographic pattern of variation was observed. Morphological distinctiveness was maintained even in those localities where both taxa were present, with few or no intermediates being found. Isozyme evidence suggestive of gene flow between the two varieties was found at Greensboro, NC, where the two morphotypes were easily recognizable. The isozyme evidence strongly indicates conspecificity, while the morphological evidence supports their status at the varietal level.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
13

Salami, Alireza. "Decoding the complex brain : multivariate and multimodal analyses of neuroimaging data." Doctoral thesis, Umeå universitet, Institutionen för integrativ medicinsk biologi (IMB), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-51842.

Full text
Abstract:
Functional brain images are extraordinarily rich data sets that reveal distributed brain networks engaged in a wide variety of cognitive operations. It is a substantial challenge both to create models of cognition that mimic behavior and underlying cognitive processes and to choose a suitable analytic method to identify underlying brain networks. Most of the contemporary techniques used in analyses of functional neuroimaging data are based on univariate approaches in which single image elements (i.e. voxels) are considered to be computationally independent measures. Beyond univariate methods (e.g. statistical parametric mapping), multivariate approaches, which identify a network across all regions of the brain rather than a tessellation of regions, are potentially well suited for analyses of brain imaging data. A multivariate method (e.g. partial least squares) is a computational strategy that determines time-varying distributed patterns of the brain (as a function of a cognitive task). Compared to its univariate counterparts, a multivariate approach provides greater levels of sensitivity and reflects cooperative interactions among brain regions. Thus, by considering information across more than one measuring point, additional information on brain function can be revealed. Similarly, by considering information across more than one measuring technique, the nature of underlying cognitive processes becomes better understood. Cognitive processes have been investigated in conjunction with multiple neuroimaging modalities (e.g. fMRI, sMRI, EEG, DTI), whereas the typical method has been to analyze each modality separately. Accordingly, little work has been carried out to examine the relation between different modalities. Indeed, due to the interconnected nature of brain processing, it is plausible that changes in one modality locally or distally modulate changes in another modality. This thesis focuses on multivariate and multimodal methods of image analysis applied to various cognitive questions. These methods are used in order to extract features that are inaccessible using univariate / unimodal analytic approaches. To this end, I implemented multivariate partial least squares analysis in studies I and II in order to identify neural commonalities and differences between the available and accessible information in memory (study I), and also between episodic encoding and episodic retrieval (study II). Study I provided evidence of qualitative differences between availability and accessibility signals in memory by linking memory access to modality-independent brain regions, and availability in memory to elevated activity in modality-specific brain regions. Study II provided evidence in support of general and specific memory operations during encoding and retrieval by linking general processes to the joint demands on attentional, executive, and strategic processing, and a process-specific network to core episodic memory function. In studies II, III, and IV, I explored whether the age-related changes/differences in one modality were driven by age-related changes/differences in another modality. To this end, study II investigated whether age-related functional differences in the hippocampus during an episodic memory task could be accounted for by age-related structural differences. I found that age-related local structural deterioration could partially but not entirely account for age-related diminished hippocampal activation.
In study III, I sought to explore whether age-related changes in the prefrontal and occipital cortex during a semantic memory task were driven by local and/or distal gray matter loss. I found that age-related diminished prefrontal activation was driven, at least in part, by local gray matter atrophy, whereas the age-related decline in occipital cortex was accounted for by distal gray matter atrophy. Finally, in study IV, I investigated whether white matter (WM) microstructural differences mediated age-related decline in different cognitive domains. The findings implicated WM as one source of age-related decline on tasks measuring processing speed, but they did not support the view that age-related differences in episodic memory, visuospatial ability, or fluency were strongly driven by age-related differences in white-matter pathways. Taken together, the architecture of different aspects of episodic memory (e.g. encoding vs. retrieval; availability vs. accessibility) was characterized using multivariate partial least squares. This finding highlights the usefulness of multivariate techniques in guiding cognitive theories of episodic memory. Additionally, competing theories of cognitive aging were investigated by multimodal integration of age-related changes in brain structure, function, and behavior. The structure-function relationships were specific to brain regions and cognitive domains. Finally, we urged that contemporary theories on cognitive aging need to be extended to longitudinal measures to be further validated.
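As a rough illustration of the partial least squares idea (latent variables that link a wide brain-data matrix to behaviour or task structure), the following sketch uses scikit-learn's generic PLS regression on synthetic data; the neuroimaging PLS used in the thesis differs in detail, so this is an analogy rather than a reproduction.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 2000
brain = rng.normal(size=(n_subjects, n_voxels))        # stand-in for voxel activity
behaviour = brain[:, :5].sum(axis=1, keepdims=True) + rng.normal(size=(n_subjects, 1))

pls = PLSRegression(n_components=2)
pls.fit(brain, behaviour)
scores = pls.transform(brain)        # latent brain scores per subject
saliences = pls.x_weights_           # voxel weights defining each latent variable
print(scores.shape, saliences.shape)
```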
APA, Harvard, Vancouver, ISO, and other styles
14

Shang, Jing [Verfasser], and Nikolaos [Akademischer Betreuer] Koutsouleris. "Univariate and multivariate pattern analysis of preterm subjects: a multimodal neuroimaging study / Jing Shang ; Betreuer: Nikolaos Koutsouleris." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2019. http://d-nb.info/1213245745/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Deo, Harsukhjit Singh. "Analysis of a quantitative trait locus for twin data using univariate and multivariate linear mixed effects models." Thesis, University of Reading, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Baars, Monique. "The existence and behaviour of style anomalies in the global equity market : a univariate and multivariate analysis." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13081.

Full text
Abstract:
Includes bibliographical references.
Style anomalies comprise patterns and relationships found in the cross-section of stock returns data, which contradict the existing asset-pricing models. They have proven to be reasonably effective at explaining the return-generating process of ordinary shares, and have broad uses within modern finance. Empirically, style anomalies are found to have statistically significant rewards in individual markets and small market groupings, and are found to be significant at a sector level on a global scale, but have not been tested at a firm level on a global scale. The aim of this study is to explain the cross-section of returns of the 1468 largest global firms by market capitalisation. The worldwide study considers stocks from 53 different countries and 112 industries, and investigates the end-of-month return forecasting power of 44 different firm-specific attributes over the period August 2003 to August 2013. A univariate analysis is performed through a cross-sectional regression of the forward stock returns on the firm-specific attributes in a similar manner to Fama and MacBeth (1973). A 'Full Data' regression is also conducted, and results are presented both before and after a beta-adjustment for market risk. Following this, a multivariate analysis is conducted and a forward stepwise procedure is used to construct a multi-factor model. According to the results of this study, style anomalies exist and have a statistically significant reward at a firm level on a global scale. In a univariate setting there are 25 firm-specific style factors that have a significant return payoff at a 5% level of significance. The specific style groups containing significant firm-specific attributes are the Value, Growth, Momentum, Size and Liquidity, Leverage, and Emerging Market groupings. Ten attributes within these style groupings are found to be robust as they are highly significant both before and after beta-adjustment, and within both a univariate and multivariate setting, namely: EBITDA to Share Price (EBP), Emerging Market (EM), CAPEX to Sales (CXS), Sales to Total Assets (STA), Payout Ratio (PR), 24-month growth in Turnover by Volume (TVO24), Sales to Share Price (SP), 6-month growth in Earnings (E6), 1-month prior return (MOM1), and 3-month prior return (MOM3). This confirms that style effects exist both independently, in a univariate setting, and in a multi-factor model. The results of this study show that the Value and Emerging Market styles have the highest cumulative payoffs over the 10-year period, and the evidence of strong correlation between attributes within specific styles gives further validation to the traditional style groupings. The behaviour of, and relationships between, the firm-specific style factors give great insight into the payoffs to investing in different style factors over time, and are key to the construction of a multi-factor model. The fifteen firm-specific style factors that are significant in a multivariate setting form the core of a multi-factor style model, which can potentially be used to explain a degree of unexplained returns, predict returns, give insight into global market behaviour, and price global assets for use within a global portfolio.
These firm-specific attributes include: EBITDA to Share Price (EBP), Emerging Market (EM), CAPEX to Sales (CXS), Sales to Total Assets (STA), Payout Ratio (PR), 24-month growth in Turnover by Volume (TVO24), Sales to Share Price (SP), 6-month growth in Earnings (E6), 1-month prior return (MOM1), 3-month prior return (MOM3), the natural log of Enterprise Value (LNEV), Interest Cover before Tax (ITBT), 6-month prior return (MOM6), Price-to-Book value (PTB), and Cash Flow-to-Price (CFP).
APA, Harvard, Vancouver, ISO, and other styles
17

Ruoff, Erin. "An Analysis of the Relationship between Socioeconomic Status and Skin Cancer Using the Health Information National Trends Survey, 2005." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/iph_theses/193.

Full text
Abstract:
Background: Skin cancer is one of the most preventable forms of cancer, yet certain types can be fatal if they go untreated. While ultraviolet radiation is the main cause of skin cancer, there are several other risk factors, including sunburn history, smoking, environmental pollutants, family history, personal history, and skin color. Practicing sun protection behaviors and receiving regular skin cancer screenings can prevent the cancer from ever developing. This study examines the demographic and socioeconomic status risk factors for skin cancer. Methods: Data from the 2005 Health Information National Trends Survey were used. Using this secondary dataset, chi-square analysis was performed to determine the prevalence of skin cancer within the demographic categories of age and race/ethnicity as well as the socioeconomic status indicators of educational attainment, annual household income, employment status, and marital status. Univariate and multivariate analyses were performed to determine the correlations of the variables with skin cancer. A p-value of 0.05 and a 95% confidence interval were maintained throughout the analyses to determine statistical significance. Results: Of the 3,804 respondents who answered the question related to cancer diagnosis, 226 indicated they had a positive skin cancer diagnosis, which was 5.94% of the total sample. Skin cancer and increased age were consistently associated (χ2 (2) = 171.5, p<.001). Peak skin cancer prevalence occurred among respondents aged 65 and older. Higher educational attainment and higher annual household income were associated with a greater likelihood of skin cancer. Conclusions: This study revealed that skin cancer is significantly associated with increased age, higher educational attainment, and higher annual household income. Implementing consistent screening practices and targeted behavioral interventions are important areas for health focus in the future.
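The chi-square association test and the multivariate (regression-based) step described in the methods can be sketched as follows. The contingency table, covariates and effect sizes are synthetic placeholders, not the HINTS 2005 data.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Illustrative 2x3 table: skin cancer diagnosis (rows) by age group (columns).
table = np.array([[10, 60, 156],
                  [1200, 1300, 1078]])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, dof, p)

# Multivariate step: logistic regression of diagnosis on age and income (synthetic).
rng = np.random.default_rng(3)
df = pd.DataFrame({"age": rng.integers(18, 90, 1000),
                   "income": rng.integers(1, 6, 1000)})
true_p = 1 / (1 + np.exp(-(-6 + 0.05 * df["age"] + 0.2 * df["income"])))
df["skin_cancer"] = rng.binomial(1, true_p)
model = sm.Logit(df["skin_cancer"], sm.add_constant(df[["age", "income"]])).fit(disp=0)
print(model.params)   # log-odds per year of age and per income category
```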
APA, Harvard, Vancouver, ISO, and other styles
18

Kruse, Britta. "Fuzzy-Technologie versus multivariate Statistik versus univariate Statistik ein Verfahrensvergleich am Beispiel der geotechnischen Datenanalyse von Geschiebemergel." Berlin mbv, Mensch-und-Buch-Verl, 2009. http://d-nb.info/995878218/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lanzini, Justine. "Recherche de biomarqueurs et études lipidomiques à travers diverses applications en santé." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB126.

Full text
Abstract:
A biomarker has been defined as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic intervention". Scientific interest in biomarkers is steadily growing. In particular, they allow a better understanding of pathogenic processes and can be used to diagnose, and even predict, pathologies. "Omics" studies, such as lipidomics, play an essential role in the discovery of new biomarkers. Lipidomics consists of exploring the lipidome of a biological sample and detecting the impact of pathology on it. Lipids form a large and important family of metabolites found in all living cells; their number is estimated at more than 100,000 species in mammals. They are involved, in particular, in energy storage and signal transduction. My PhD thesis involved carrying out LC-MS lipidomics approaches in various health applications, such as severe combined immunodeficiency associated with alopecia and nail dystrophy, infantile nystagmus syndrome and renal graft rejection. For this purpose, multivariate and univariate statistical analyses were employed to detect potential lipid biomarkers.
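The combination of multivariate overview and univariate screening used in such lipidomics workflows can be sketched on a synthetic samples-by-features table; the group sizes, effect sizes and FDR threshold are illustrative assumptions, not results from this thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_per_group, n_features = 20, 300
control = rng.normal(0, 1, (n_per_group, n_features))
patient = rng.normal(0, 1, (n_per_group, n_features))
patient[:, :10] += 1.5            # 10 features shifted to mimic candidate biomarkers

X = np.vstack([control, patient])
scores = PCA(n_components=2).fit_transform(X)   # multivariate overview of group separation

t, p = ttest_ind(patient, control, axis=0)      # univariate screen, one test per feature
reject, p_fdr, *_ = multipletests(p, alpha=0.05, method="fdr_bh")
print(scores.shape, int(reject.sum()), "features pass the FDR-corrected univariate screen")
```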
APA, Harvard, Vancouver, ISO, and other styles
20

Lahm, Rudinei Luis da Fonseca. "Análise dos papéis de compra no processo de aquisição de interruptores por clientes finais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/158357.

Full text
Abstract:
The present study aims to identify the purchase roles of final consumers buying light switches and to identify the influences on those roles and how they occur. The method was divided into two phases, the first with a qualitative approach and the second with a quantitative approach. First, in-depth interviews were conducted with consumers and professionals from the sector, with the objective of identifying the types of roles that occur during the purchase of switches and the influencers. Next, a survey was conducted with 1,013 respondents distributed across the five regions of the country: Midwest, Northeast, South, North and Southeast. Five purchase roles and seven influencers of this process were identified. The quantitative research results were analysed with univariate and multivariate analyses. The analyses indicate that buyers do purchase switches for other users, but the great majority acquire the product for their own use and are responsible for both the purchase and the payment. The results also indicate that people close to the buyer are the main influencers. It is hoped that these results will help executives and companies in the electrical sector with their decision-making.
APA, Harvard, Vancouver, ISO, and other styles
21

Karimpour, Masoumeh. "Multi-platform metabolomics assays to study the responsiveness of the human plasma and lung lavage metabolome." Doctoral thesis, Umeå universitet, Kemiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-120591.

Full text
Abstract:
Metabolomics as a field has been used to track changes and perturbations in the human body by investigating metabolite profiles indicating the change of metabolite levels over time and in response to different challenges. In this thesis work, the main focus was on applying multi-platform metabolomics to study the human metabolome following exposure to perturbations, such as diet (in the form of a challenge meal) and exhaust emissions (air pollution exposure in a controlled setting). The cutting-edge analytical platforms used for this purpose were nuclear magnetic resonance (NMR), as well as gas chromatography (GC) and liquid chromatography (LC) coupled to mass spectrometry (MS). Each platform offered unique characterization features, allowing detection and identification of a specific range of metabolites. The use of multi-platform metabolomics was found to enhance the metabolome coverage and to provide complementary findings that enabled a better understanding of the biochemical processes reflected by the metabolite profiles. Using non-targeted analysis, a wide range of unknown metabolites in plasma were identified during the postprandial stage after a well-defined challenge meal (in Paper I). In addition, a considerable number of metabolites were detected and identified in lung lavage fluid after biodiesel exhaust exposure compared to filtered air exposure (in Paper II). In parallel, using targeted analysis, both lung lavage and plasma fatty acid metabolites were detected and quantified in response to filtered air and biodiesel exhaust exposure (in Papers III and IV). Data processing of raw data followed by data analysis, using both univariate and multivariate methods, enabled changes occurring in metabolite levels to be screened and investigated. For the initial pilot postprandial study, the aim was to investigate the plasma metabolome response after a well-defined meal during the postprandial stage for two types of diet. It was found that, independent of the background diet type, levels of metabolites returned to their baseline levels after three hours. This finding was taken into consideration for the biodiesel exhaust exposure studies, designed to limit the impact of dietary effects. Both targeted and non-targeted approaches resulted in important findings. For instance, different metabolite profiles were detected in bronchial wash (BW) compared to bronchoalveolar lavage (BAL) fluid, mainly with NMR and LC-MS. Furthermore, biodiesel exhaust exposure resulted in different metabolite profiles as observed by GC-MS, especially in BAL. In addition, fatty acid metabolites in BW, BAL, and plasma were shown to be responsive to biodiesel exhaust exposure, as measured by a targeted LC-MS/MS protocol. In summary, the new analytical methods developed to investigate the responsiveness of the human plasma and lung lavage metabolome proved to be useful from an analytical perspective, and provided important biological findings. However, further studies are needed to validate these results.
APA, Harvard, Vancouver, ISO, and other styles
22

Lasmar, Nour-Eddine. "Modélisation stochastique pour l’analyse d’images texturées : approches Bayésiennes pour la caractérisation dans le domaine des transformées." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14639/document.

Full text
Abstract:
In this thesis we study the statistical modeling of textured images using multi-scale and multi-orientation representations. Based on the results of studies in neuroscience assimilating the human perception mechanism to a selective spatial-frequency scheme, we propose to characterize textures by probabilistic models of subband coefficients. Our first contribution in this context is the proposition of probabilistic models taking into account the leptokurtic nature and the asymmetry of the marginal distributions associated with textured content. First, to model the marginal statistics of subbands analytically, we introduce the asymmetric generalized Gaussian model. Second, we propose two families of multivariate models to take into account the dependencies between subband coefficients. The first family includes the spherically invariant processes, which we characterize using a Weibull distribution. The second family is that of copula-based multivariate models. After determining the copula characterizing the dependence structure adapted to the texture, we propose a multivariate extension of the asymmetric generalized Gaussian distribution using a Gaussian copula. All proposed models are compared quantitatively using both univariate and multivariate statistical goodness-of-fit tests. Finally, the last part of our study concerns the experimental validation of the performance of the proposed models through texture-based image retrieval. To do this, we derive closed-form metrics measuring the similarity between the introduced probabilistic models, which we believe is the third contribution of this work. A comparative study is conducted to compare the proposed probabilistic models to those of the state of the art.
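The Gaussian-copula construction used for the multivariate extension can be sketched as follows: map each subband's coefficients to normal scores through its empirical CDF, which strips away the marginal shape, and estimate the correlation of the scores. The simulated subbands below stand in for real wavelet coefficients.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_correlation(subbands):
    """Estimate the Gaussian-copula correlation matrix of subband coefficients.

    Each column is mapped to normal scores via its empirical CDF, which removes
    the marginal distribution and leaves only the dependence structure.
    """
    n = subbands.shape[0]
    u = rankdata(subbands, axis=0) / (n + 1)   # empirical CDF values in (0, 1)
    z = norm.ppf(u)                            # normal scores
    return np.corrcoef(z, rowvar=False)

rng = np.random.default_rng(0)
shared = rng.normal(size=(5000, 1))
subbands = np.concatenate([shared + rng.normal(size=(5000, 1)),
                           shared + rng.normal(size=(5000, 1)),
                           rng.normal(size=(5000, 1))], axis=1)
print(gaussian_copula_correlation(subbands).round(2))
```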
APA, Harvard, Vancouver, ISO, and other styles
23

Zelelew, Mulugeta. "Improving Runoff Estimation at Ungauged Catchments." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for vann- og miljøteknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19675.

Full text
Abstract:
Water infrastructures have been implemented to support the vital activities of human society. At the same time, these infrastructure developments have interrupted the natural catchment response characteristics, challenging society to implement effective water resources planning and management strategies. The Telemark area in southern Norway has seen a large number of water infrastructure developments, particularly hydropower, over more than a century. Recent developments in decision support tools for flood control and reservoir operation have raised the need to compute inflows from local catchments, most of which are regulated or have no observed data. This has contributed to the motivation of this PhD thesis, which aims to improve runoff estimation at ungauged catchments; the research results are presented in four scientific manuscripts. Inverse distance weighting, inverse distance squared weighting, ordinary kriging, universal kriging and kriging with external drift were applied to analyse precipitation variability and estimate daily precipitation in the study area. Geostatistical univariate and multivariate map-correlation concepts were applied to analyse and physically understand regional hydrological response patterns. The Sobol variance-based sensitivity analysis (VBSA) method was used to investigate the significance of the HBV hydrological model parameterization for the model response variations and to evaluate the model's reliability as a prediction tool. The transferability of the HBV hydrological model space to ungauged catchments was also studied. The analyses showed that the inverse distance weighting variants are the preferred spatial interpolation methods in areas with a relatively dense precipitation station network. In mountainous areas and in areas where the precipitation station network is relatively sparse, the kriging variants are preferred. The regional hydrological response correlation analyses suggested that geographic proximity alone cannot explain all of the hydrological response correlations in the study area. Moreover, when the multivariate map-correlation analysis was applied, two distinct regional hydrological response patterns, radial and elliptical, were identified. The presence of these patterns influenced the location of the reference streamgauges best correlated with the ungauged catchments. As a result, the nearest streamgauge was found to be the best correlated in areas where the radial-type response pattern dominates, whereas in areas where the elliptical-type pattern dominates the nearest reference streamgauge was not necessarily the best correlated. The VBSA verified that varying a minimum of four to six influential HBV model parameters can sufficiently simulate the catchments' response characteristics when the emphasis is on fitting high flows. Varying a minimum of six influential model parameters is necessary to sufficiently simulate the catchments' responses and maintain model performance when the emphasis is on fitting low flows. However, varying more than nine of the fifteen HBV model parameters does not make any significant change to model performance. The model space transfer study indicated that estimation of representative runoff at ungauged catchments cannot be guaranteed by transferring model parameter sets from a single donor catchment.
On the other hand, applying the ensemble-based model space transfer approach and utilizing model parameter sets from multiple donor catchments improved model performance at the ungauged catchments. The results also suggested that high model performance can be achieved by integrating model parameter sets from two to six donor catchments. Objectively minimizing the HBV model parametric dimensionality and sampling only the sensitive model parameters maintained the model performance and limited the prediction uncertainty.
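For readers unfamiliar with the interpolation methods compared above, the following minimal sketch shows inverse distance weighting (power 1) and inverse distance squared weighting (power 2); station coordinates and precipitation values are hypothetical, and the kriging variants used in the thesis are not reproduced here.

```python
# Minimal inverse-distance-weighting sketch (illustrative only).
import numpy as np

def idw(xy_stations, values, xy_target, power=2.0, eps=1e-12):
    """Interpolate a value at xy_target from surrounding station observations."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    w = 1.0 / (d + eps) ** power          # power=1 -> IDW, power=2 -> IDW squared
    return np.sum(w * values) / np.sum(w)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical coords (km)
precip = np.array([12.0, 8.0, 15.0])                          # daily precipitation (mm)
print(idw(stations, precip, np.array([3.0, 4.0]), power=2.0))
```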
APA, Harvard, Vancouver, ISO, and other styles
24

Tian, Wen Jing. "A multivariate control chart for monitoring univariate processes." Thesis, University of Macau, 2006. http://umaclib3.umac.mo/record=b1675975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

MOEINZADEH, BEHRAD. "AN INTEGRATED UNIVARIATE AND MULTIVARIATE QUALITY CONTROL SYSTEM." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1078350395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ferguson, Claire Ann. "Univariate and multivariate statistical methodologies for lake ecosystem modelling." Thesis, University of Glasgow, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Bostrom, Aaron. "Shapelet transforms for univariate and multivariate time series classification." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/67270/.

Full text
Abstract:
Time Series Classification (TSC) is a growing field of machine learning research. One particular algorithm from the TSC literature is the Shapelet Transform (ST). Shapelets are phase-independent subsequences extracted from time series to form discriminatory features. It has been shown that using the shapelets to transform the datasets into a new space can improve performance. One of the major problems with ST is that the algorithm is O(n²m⁴), where n is the number of time series and m is the length of the series. As a problem increases in size, or additional dimensions are added, the algorithm quickly becomes computationally infeasible. The research question addressed is whether the shapelet transform can be improved in terms of accuracy and speed. Making algorithmic improvements to shapelets will enable the development of multivariate shapelet algorithms that can attempt to solve much larger problems in realistic time frames. In support of this thesis a new distance early-abandon method is proposed. A class-balancing algorithm is implemented, which uses a one-vs-all multi-class information gain that enables heuristics originally developed for two-class problems. To support these improvements, a large-scale analysis of the best shapelet algorithms is conducted as part of a larger experimental evaluation. ST is shown to be one of the most accurate algorithms in TSC on the UCR-UEA datasets. Contract classification is proposed for shapelets, where a fixed run time is set and the number of shapelets is bounded. Four search algorithms are evaluated with fixed run times of one hour and one day, three of which are not significantly worse than a full enumeration. Finally, three multivariate shapelet algorithms are developed and compared to benchmark results and multivariate dynamic time warping.
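The core shapelet operation, and the early-abandon idea mentioned above, can be sketched as the minimum Euclidean distance between a candidate shapelet and all subsequences of a series, abandoning an alignment as soon as its running cost exceeds the best distance found so far. This is an illustrative simplification (per-subsequence z-normalisation, used in practice, is omitted), not the shapelet transform implementation evaluated in the thesis.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum squared-distance match of `shapelet` along `series`, with early abandon."""
    m = len(shapelet)
    best = np.inf
    for start in range(len(series) - m + 1):
        acc = 0.0
        for j in range(m):
            acc += (series[start + j] - shapelet[j]) ** 2
            if acc >= best:            # early abandon: this alignment cannot win
                break
        else:
            best = acc                 # completed without abandoning -> new best
    return np.sqrt(best)

ts = np.sin(np.linspace(0, 6 * np.pi, 200))
print(shapelet_distance(ts, ts[40:60] + 0.01))
```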
APA, Harvard, Vancouver, ISO, and other styles
28

Maiorano, Alexandre Cristovão. "Avaliação esportiva utilizando técnicas multivariadas: construção de indicadores e sistemas online." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/7596.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The main objective of this research is to provide statistical tools that allow the comparison of individuals in a specified sports category. In particular, the present study focuses on performance evaluation in football using univariate and multivariate methods. The univariate approach is given by the Z-CELAFISCS methodology, which was developed with the purpose of identifying talent in sport. The multivariate approaches are given by the construction of indicators, specifically by means of principal component analysis, factor analysis and copulas. These indicators allow the dimensionality of the data under study to be reduced, providing better interpretation of the results and better comparability between the performance and ranking of individuals. To facilitate the use of the methodology studied here, an online statistical system called i-Sports was built.
O principal objetivo do trabalho é apresentar ferramentas estatísticas que permitam a comparação de indivíduos em uma determinada modalidade esportiva. Particularmente, o estudo exposto é voltado à avaliação de desempenho em futebol, utilizando métodos univariados e multivariados. A abordagem univariada é dada pela metodologia Z-CELAFISCS, desenvolvida com o propósito de identificar talentos no esporte. As abordagens multivariadas são dadas pela construção de indicadores, mais especificamente por meio da análise de componentes principais, análise fatorial e cópulas. A obtenção desses indicadores possibilita a redução da dimensionalidade do estudo, fornecendo melhor interpretação dos resultados e melhor comparabilidade entre o desempenho e rankeamento dos indivíduos. Para facilitar a utilização da metodologia aqui estudada foi construído um sistema estatístico online chamado de i-Sports.
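A minimal sketch of the indicator idea described above, assuming the first principal component of standardized test scores is used as a composite performance index (variables, sample size and the ranking rule are hypothetical; the factor-analysis and copula-based constructions are not shown):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
scores = rng.normal(size=(30, 4))              # 30 athletes x 4 hypothetical fitness tests
z = StandardScaler().fit_transform(scores)     # univariate Z-scores
pc1 = PCA(n_components=1).fit_transform(z).ravel()   # composite indicator
# The sign of a principal component is arbitrary; orient it so that higher = better.
ranking = np.argsort(-pc1)
print(ranking[:5])                             # indices of the top-ranked athletes
```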
APA, Harvard, Vancouver, ISO, and other styles
29

Srivastava, Arunima. "Univariate and Multivariate Representation and Modeling of Cancer Biomedical Data." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1577717365850367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Joner, Michael D. Jr. "Univariate and Multivariate Surveillance Methods for Detecting Increases in Incidence Rates." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26773.

Full text
Abstract:
It is often important to detect an increase in the frequency of some event. Particular attention is given to medical events such as mortality or the incidence of a given disease, infection or birth defect. Observations are regularly taken in which either an incidence occurs or one does not. This dissertation contains the result of an investigation of prospective monitoring techniques in two distinct surveillance situations. In the first situation, the observations are assumed to be the results of independent Bernoulli trials. Some have suggested adapting the scan statistic to monitor such rates and detect a rate increase as soon as possible after it occurs. Other methods could be used in prospective surveillance, such as the Bernoulli cumulative sum (CUSUM) technique. Issues involved in selecting parameters for the scan statistic and CUSUM methods are discussed, and a method for computing the expected number of observations needed for the scan statistic method to signal a rate increase is given. A comparison of these methods shows that the Bernoulli CUSUM method tends to be more effective in detecting increases in the rate. In the second situation, the incidence information is available at multiple locations. In this case the individual sites often report a count of incidences on a regularly scheduled basis. It is assumed that the counts are Poisson random variables which are independent over time, but the counts at any given time are possibly correlated between regions. Multivariate techniques have been suggested for this situation, but many of these approaches have shortcomings which have been demonstrated in the quality control literature. In an attempt to remedy some of these shortcomings, a new control chart is recommended based on a multivariate exponentially weighted moving average. The average run-length performance of this chart is compared with that of the existing methods.
Ph. D.
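A minimal sketch of a Bernoulli CUSUM of the kind discussed in this abstract is given below: log-likelihood-ratio increments for a shift from an in-control rate p0 to an out-of-control rate p1, reset at zero, signalling when the statistic crosses a threshold h. The rates, threshold and simulated data are arbitrary choices, not values from the dissertation.

```python
import numpy as np

def bernoulli_cusum(x, p0, p1, h):
    """Return the index of the first signal, or None if the chart never signals."""
    up = np.log(p1 / p0)                    # increment when an incidence occurs (x = 1)
    down = np.log((1 - p1) / (1 - p0))      # increment when no incidence occurs (x = 0)
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + (up if xt else down))
        if s >= h:
            return t
    return None

rng = np.random.default_rng(2)
obs = np.concatenate([rng.random(500) < 0.01,   # in-control rate 1%
                      rng.random(200) < 0.05])  # rate increases to 5%
print(bernoulli_cusum(obs, p0=0.01, p1=0.05, h=3.0))
```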
APA, Harvard, Vancouver, ISO, and other styles
31

Cox, Steven. "Simulation and control of univariate and multivariate set-up dominant process." Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11383/.

Full text
Abstract:
This thesis explores the use of statistically valid process improvement tools in low-volume applications, setting out the following research questions: How can the Six Sigma Measure and Analyse phases of a chronic quality problem be statistically validated in a low-volume process? How can a statistically valid approach for process control be implemented in a low-volume process? And how can this tool be extended to fit multivariate processes, and can the calculation of control parameter adjustments be automated? In answer, the thesis presents an enhanced PROcess VAriation Diagnosis Tool (PROVADT) method, driving a Six Sigma improvement project through the Measure and Analyse phases. PROVADT provides a structured sampling plan to perform a Multi-Vari study, Isoplot, Gage R&R and Provisional Process Capability in as few as twenty samples and eighty measurements, making the technique suited to low-volume applications. The enhanced PROVADT method provides a Gage R&R without confounded variation sources, as was the case in the original method, and its practical application was demonstrated through two case studies. Process control tools for low-volume, high-variety manufacturing applications were developed. An adjustable traffic-light chart, with control limits linked to tolerance and simple decision rules, was used for monitoring univariate processes. This tool, the Set-Up Process Algorithm (SUPA), uses probability theory to provide 98% confidence that the process is operating at a pre-specified minimum level of Cp in as few as five samples. SUPA was extended to deal with high-complexity applications, resulting in multivariate SUPA (mSUPA). mSUPA maintains SUPA's principles, but presents the information about multiple process features on one chart, rather than multiple univariate charts. To supplement the mSUPA tool, a theoretical method for calculating optimal process adjustment when a multivariate process is off-target was introduced, combining discrete-event simulation and numerical optimisation to calculate adjustments.
APA, Harvard, Vancouver, ISO, and other styles
32

Ahmad, Shafiq, and Shafiq ahmad@rmit edu au. "Process capability assessment for univariate and multivariate non-normal correlated quality characteristics." RMIT University. Mathematical and Geospatial Sciences, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091127.121556.

Full text
Abstract:
In today's competitive business and industrial environment, it is becoming more crucial than ever to assess precisely the process losses due to non-compliance with customer specifications. To assess these losses, industry makes extensive use of process capability indices (PCIs) for performance evaluation of its processes. Determination of the performance capability of a stable process using standard process capability indices requires that the underlying quality characteristics data follow a normal distribution. However, it is an undisputed fact that real processes very often produce non-normal quality characteristics data, and these quality characteristics are very often correlated with each other. For such non-normal and correlated multivariate quality characteristics, application of standard capability measures using conventional methods can lead to erroneous results. The research undertaken in this PhD thesis presents several capability assessment methods to estimate process performance more precisely and accurately, based on univariate as well as multivariate quality characteristics. The proposed capability assessment methods also take into account the correlation, variance and covariance as well as the non-normality of the quality characteristics data. A comprehensive review of existing univariate and multivariate PCI estimation methods is provided. We propose fitting Burr XII distributions to continuous positively skewed data; the proportion of nonconformance (PNC) for process measurements is then obtained from the fitted Burr XII distribution, rather than through the traditional practice of fitting different distributions to the real data. The maximum likelihood method is deployed to improve the accuracy of PCIs based on the Burr XII distribution, and numerical methods such as Evolutionary and Simulated Annealing algorithms are used to estimate the parameters of the fitted Burr XII distribution. We also introduce a new transformation method, called the Best Root Transformation approach, to transform non-normal data to normal data and then apply the traditional PCI method to estimate the proportion of non-conforming data. Another approach introduced in this thesis is to deploy the Burr XII cumulative distribution function for PCI estimation using a Cumulative Density Function technique. This is in contrast to the approach adopted in the research literature, i.e. the use of a best-fitting density function from known distributions for PCI estimation with non-normal data. The proposed CDF technique is also extended to estimate process capability for bivariate non-normal quality characteristics data. A new multivariate capability index based on the Generalized Covariance Distance (GCD) is proposed. This novel approach reduces the dimension of multivariate data by transforming correlated variables into univariate ones through a metric function, and evaluates process capability for correlated non-normal multivariate quality characteristics. Unlike the Geometric Distance approach, the GCD approach takes into account the scaling effect of the variance-covariance matrix and produces a covariance distance variable that is based on the Mahalanobis distance. A further novelty introduced in this research is to approximate the distribution of these distances by a Burr XII distribution and then estimate its parameters using a numerical search algorithm. It is demonstrated that the proportion of nonconformance (PNC) obtained using the proposed method is very close to the actual PNC value.
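As a hedged illustration of the Burr XII route to the proportion of nonconformance, the sketch below fits scipy.stats.burr12 to synthetic positively skewed data by generic maximum likelihood and reads the PNC off the fitted CDF. The specification limits and data are made up, and the dedicated parameter-search algorithms described in the thesis are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = stats.lognorm.rvs(s=0.4, size=500, random_state=rng)   # skewed "process" data

c, d, loc, scale = stats.burr12.fit(data, floc=0)             # Burr XII fit (MLE)
lsl, usl = 0.2, 3.0                                           # hypothetical spec limits
pnc = 1.0 - (stats.burr12.cdf(usl, c, d, loc=loc, scale=scale)
             - stats.burr12.cdf(lsl, c, d, loc=loc, scale=scale))
print(f"estimated proportion of nonconformance: {pnc:.4%}")
```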
APA, Harvard, Vancouver, ISO, and other styles
33

Roberts, Gwendolyn Rose 1963. "A comparison of multiple univariate and multivariate geometric moving average control charts." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276779.

Full text
Abstract:
This study utilizes a Monte Carlo simulation to examine the performance of multivariate geometric moving average control chart schemes for controlling the mean of a multivariate normal process. The study compares the performance of the proposed method with a multivariate Shewhart chart, a multiple univariate cumulative sum (CUSUM) control chart, a multivariate CUSUM control chart and a multiple univariate geometric moving average control chart.
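A multivariate geometric moving average chart is essentially a multivariate EWMA; a minimal sketch of its T² monitoring statistic is given below, with an arbitrary smoothing constant and an assumed known in-control covariance (not the settings simulated in the thesis).

```python
import numpy as np

def mewma_t2(X, sigma, lam=0.2):
    """T^2 statistic of a multivariate EWMA (geometric moving average) at each step."""
    z = np.zeros(X.shape[1])
    cov_z = (lam / (2.0 - lam)) * sigma          # asymptotic covariance of the EWMA vector
    cov_z_inv = np.linalg.inv(cov_z)
    t2 = []
    for x in X:
        z = lam * x + (1.0 - lam) * z
        t2.append(z @ cov_z_inv @ z)
    return np.array(t2)

rng = np.random.default_rng(4)
in_control = rng.multivariate_normal(np.zeros(2), np.eye(2), size=100)
shifted = rng.multivariate_normal([1.0, 0.5], np.eye(2), size=50)    # mean shift
stat = mewma_t2(np.vstack([in_control, shifted]), sigma=np.eye(2))
print(stat[95:105].round(2))      # the statistic rises after the shift at t = 100
```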
APA, Harvard, Vancouver, ISO, and other styles
34

Blagojevic, Milica. "Univariate and multivariate non-PH frailty models with application to trauma data." Thesis, Keele University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Evans, S. D. "The anterior pathway for intelligible speech : insights from univariate and multivariate methods." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1348320/.

Full text
Abstract:
Whilst there is broad agreement concerning the existence of an anterior processing stream in the human brain concerned with extracting meaning from speech, there is an ongoing controversy as to whether intelligible speech is first resolved in left anterior or bilateral posterior temporal fields (Hickok and Poeppel, 2007; Rauschecker and Scott, 2009). Proponents of the bilateral processing model argue that bilateral responses are driven by the acoustic properties of the speech signal, whilst proponents of the left lateralised model suggest that left lateralisation is driven by access to linguistic representations. This thesis directly addresses these controversies using Functional Magnetic Resonance Imaging (fMRI) and univariate and multivariate analysis methods. Two main questions are addressed: (1) where are responses to intelligible, and intelligible but degraded speech, separated from responses to acoustic complexity, and (2) does the resulting pattern of lateralisation, or otherwise, derive from the acoustic properties or the linguistic status of speech? The results of this thesis reconcile, to some degree, the two theoretical positions. I show that the most consistent and largest amplitude responses to intelligible, and degraded but intelligible speech, are found in the left anterior Superior Temporal Sulcus (STS). Additional responses were also found in right anterior and left posterior STS; however, these were less consistently identified. Regions of the left posterior STS showed sensitivity to resolved intelligible speech and also showed a response likely to reflect acoustic-phonetic processing supporting the resolving of intelligibility. Right posterior STS responses to intelligible speech were noticeably absent across all studies. No evidence was found for a relative acoustic basis for hemispheric lateralisation in the case of speech-derived manipulations of spectrum and amplitude, but evidence was found in support of a left hemisphere specialism for resolving intelligible speech, supporting a relative left lateralisation to speech driven by linguistic rather than acoustic factors.
APA, Harvard, Vancouver, ISO, and other styles
36

Feng, Gang [Verfasser], and Jens-Peter [Akademischer Betreuer] Kreiß. "Bootstrap Methods for Univariate and Multivariate Volatility / Gang Feng ; Betreuer: Jens-Peter Kreiß." Braunschweig : Technische Universität Braunschweig, 2015. http://d-nb.info/117581962X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, Wei. "Quantitative methods in high-frequency financial econometrics modeling univariate and multivariate time series /." [S.l. : s.n.], 2007. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000007344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Xie, Weiyi. "A Geometric Approach to Visualization of Variability in Univariate and Multivariate Functional Data." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500348052174345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hodis, Flaviu-Adrian. "Simulating univariate and multivariate nonnormal distributions based on a system of power method distributions /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1594480491&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph.D.)--Southern Illinois University Carbondale, 2008.
"Department of Educational Psychology and Special Education." Includes bibliographical references (p. 132-138). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
40

Anver, Haneef Mohamed. "Mean Hellinger Distance as an Error Criterion in Univariate and Multivariate Kernel Density Estimation." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/dissertations/161.

Full text
Abstract:
Ever since the pioneering work of Parzen, the mean square error (MSE) and its integrated form (MISE) have been used as the error criteria in choosing the bandwidth matrix for multivariate kernel density estimation. More recently other criteria have been advocated as competitors to the MISE, such as the mean absolute error. In this study we define a weighted version of the Hellinger distance for multivariate densities and show that it has an asymptotic form which is one-fourth of the asymptotic MISE under weak smoothness conditions on the multivariate density f. In addition, the proposed criteria give rise to a new data-dependent bandwidth matrix selector. The performance of the new data-dependent bandwidth matrix selector is compared with that of other well-known bandwidth matrix selectors, such as least squares cross-validation (LSCV) and the plug-in (HPI) selector, through simulation. We derived a closed-form formula for the mean Hellinger distance (MHD) in the univariate case. We also compared via simulation the mean weighted Hellinger distance (MWHD) and the asymptotic MWHD, and the MISE and the asymptotic MISE, for both univariate and bivariate cases for various densities and sample sizes.
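A minimal numerical sketch of a (squared) Hellinger distance between a kernel density estimate and a reference density is shown below; the default Gaussian KDE bandwidth and the normal reference are illustrative stand-ins, not the weighted criterion or the bandwidth selectors analysed in the dissertation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.normal(size=300)

grid = np.linspace(-5, 5, 2001)
f_hat = stats.gaussian_kde(sample)(grid)      # KDE with the default bandwidth
f_ref = stats.norm.pdf(grid)                  # reference density

h2 = 1.0 - np.trapz(np.sqrt(f_hat * f_ref), grid)   # squared Hellinger distance
print(f"H^2 = {h2:.4f}")
```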
APA, Harvard, Vancouver, ISO, and other styles
41

Mavrakakis, Miltiadis C. "State space models : univariate representation of a multivariate model, partial interpolation and periodic convergence." Thesis, London School of Economics and Political Science (University of London), 2008. http://etheses.lse.ac.uk/2341/.

Full text
Abstract:
This thesis examines several issues that arise from the state space representation of a multivariate time series model. Original proofs of the algorithms for obtaining interpolated estimates of the state and observation vectors from the Kalman filter smoother (KFS) output are presented, particularly for the formulae for which rigorous proofs do not appear in the existing literature. The notion of partially interpolated estimates is introduced and algorithms for constructing these estimates are established. An existing method for constructing a univariate representation (UR) of a multivariate model is developed further, and applied to a wider class of state space models. The computational benefits of filtering and smoothing with the UR, rather than the original multivariate model, are discussed. The UR KFS recursions produce useful quantities that cannot be obtained from the original multivariate model. The mathematical properties of these quantities are examined and the process of reconstructing the original multivariate KFS output is demonstrated. By reversing the UR process, a time-invariant state space form (SSF) is proposed for models with periodic system matrices. This SSF is used to explore the novel concept of periodic convergence of the KFS. Necessary and sufficient conditions for periodic convergence are asserted and proved. The techniques developed are then applied to the problem of missing-value estimation in long multivariate temperature series, which can arise due to gaps in the historical records. These missing values are a hindrance to the study of weather risk and pricing of weather derivatives, as well as the development of climate-dependent models. The proposed model-based techniques are compared to existing methods in the field, as well as an original ad hoc approach. The relative performance of these methods is assessed by their application to data from weather stations in the state of Texas, for daily maximum temperatures from 1950 to 2001.
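To give a flavour of the univariate-representation idea, the sketch below filters the elements of a multivariate observation one at a time as scalar Kalman updates, assuming a diagonal measurement-error covariance; the general formulation, the smoothing recursions and the interpolation formulae treated in the thesis are not reproduced.

```python
import numpy as np

def univariate_kalman_step(a, P, y, Z, H_diag, T, Q):
    """One time step: sequential scalar updates over the elements of y, then prediction."""
    for i in range(len(y)):
        z = Z[i]                        # i-th row of the observation matrix
        f = z @ P @ z + H_diag[i]       # scalar innovation variance
        k = P @ z / f                   # gain vector
        v = y[i] - z @ a                # scalar innovation
        a = a + k * v
        P = P - np.outer(k, z @ P)
    return T @ a, T @ P @ T.T + Q       # state prediction for the next period

# Hypothetical two-dimensional observation of a one-dimensional local-level state
a, P = np.zeros(1), np.eye(1)
Z = np.array([[1.0], [1.0]])
H_diag = np.array([0.5, 1.0])           # diagonal measurement-error variances
T, Q = np.eye(1), 0.1 * np.eye(1)
a, P = univariate_kalman_step(a, P, np.array([0.8, 1.1]), Z, H_diag, T, Q)
print(a, P)
```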
APA, Harvard, Vancouver, ISO, and other styles
42

Phan, Thi-Thu-Hong. "Elastic matching for classification and modelisation of incomplete time series." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0483/document.

Full text
Abstract:
Les données manquantes constituent un challenge commun en reconnaissance de forme et traitement de signal. Une grande partie des techniques actuelles de ces domaines ne gère pas l'absence de données et devient inutilisable face à des jeux incomplets. L'absence de données conduit aussi à une perte d'information, des difficultés à interpréter correctement le reste des données présentes et des résultats biaisés notamment avec de larges sous-séquences absentes. Ainsi, ce travail de thèse se focalise sur la complétion de larges séquences manquantes dans les séries monovariées puis multivariées peu ou faiblement corrélées. Un premier axe de travail a été une recherche d'une requête similaire à la fenêtre englobant (avant/après) le trou. Cette approche est basée sur une comparaison de signaux à partir d'un algorithme d'extraction de caractéristiques géométriques (formes) et d'une mesure d'appariement élastique (DTW - Dynamic Time Warping). Un package R CRAN a été développé, DTWBI pour la complétion de série monovariée et DTWUMI pour des séries multidimensionnelles dont les signaux sont non ou faiblement corrélés. Ces deux approches ont été comparées aux approches classiques et récentes de la littérature et ont montré leur faculté de respecter la forme et la dynamique du signal. Concernant les signaux peu ou pas corrélés, un package DTWUMI a aussi été développé. Le second axe a été de construire une similarité floue capable de prender en compte les incertitudes de formes et d'amplitude du signal. Le système FSMUMI proposé est basé sur une combinaison floue de similarités classiques et un ensemble de règles floues. Ces approches ont été appliquées à des données marines et météorologiques dans plusieurs contextes : classification supervisée de cytogrammes phytoplanctoniques, segmentation non supervisée en états environnementaux d'un jeu de 19 capteurs issus d'une station marine MAREL CARNOT en France et la prédiction météorologique de données collectées au Vietnam
Missing data are a prevalent problem in many domains of pattern recognition and signal processing. Most of the existing techniques in the literature suffer from one major drawback, which is their inability to process incomplete datasets. Missing data produce a loss of information and thus yield inaccurate data interpretation, biased results or unreliable analysis, especially for large missing sub-sequence(s). This thesis therefore focuses on dealing with large consecutive missing values in univariate and low/un-correlated multivariate time series. We begin by investigating an imputation method to overcome these issues in univariate time series. This approach is based on the combination of a shape-feature extraction algorithm and the Dynamic Time Warping method. A new R package, namely DTWBI, is then developed. In the following work, the DTWBI approach is extended to complete large successive missing data in low/un-correlated multivariate time series (called DTWUMI), and a DTWUMI R package is also established. The key idea of these two proposed methods is to use elastic matching to retrieve similar values in the series before and/or after the missing values. This preserves as much as possible the dynamics and shape of the known data, while applying the shape-feature extraction algorithm reduces the computing time. Subsequently, we introduce a new method for filling large successive missing values in low/un-correlated multivariate time series, namely FSMUMI, which enables a high level of uncertainty to be managed. In this approach, we propose to use novel fuzzy grades of basic similarity measures and fuzzy logic rules. Finally, we employ DTWBI to (i) complete the MAREL Carnot dataset, on which we then perform detection of rare/extreme events, and (ii) forecast various meteorological univariate time series collected in Vietnam.
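The elastic matching at the heart of DTWBI/DTWUMI can be sketched as a plain dynamic-time-warping cost between a query window taken next to the gap and a candidate window elsewhere in the series; the shape-feature extraction, window selection and the actual imputation steps of the packages are omitted here.

```python
import numpy as np

def dtw(a, b):
    """Basic dynamic-time-warping cost between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

query = np.sin(np.linspace(0, np.pi, 30))              # window just before a gap
candidate = np.sin(np.linspace(0.1, np.pi + 0.1, 35))  # candidate window elsewhere
print(dtw(query, candidate))                           # low cost -> similar dynamics
```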
APA, Harvard, Vancouver, ISO, and other styles
43

Troschke, Sven-Oliver. "Enhanced approaches to the combination of forecasts : univariate linear plus quadratic and multivariate linear methods /." Lohmar [u. a.] : Eul, 2002. http://www.gbv.de/dms/zbw/358295025.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Vollenbröker, Bernd Karl [Verfasser], and Alexander [Akademischer Betreuer] Lindner. "Strictly Stationary Solutions of Multivariate ARMA and Univariate ARIMA Equations / Bernd Karl Vollenbröker ; Betreuer: Alexander Lindner." Braunschweig : Technische Universität Braunschweig, 2011. http://d-nb.info/1175824860/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Deshpande, Harshawardhan Umakant. "Univariate and Multivariate fMRI Investigations of Delay Discounting and Episodic Future Thinking in Alcohol Use Disorder." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/101551.

Full text
Abstract:
Alcohol use disorder (AUD) remains a major public health concern globally, with substantially increased mortality and a significant economic burden. The low rates of treatment and the high rates of relapse mean that excessive alcohol consumption detrimentally affects many aspects of the user's life and the lives of those around them. One reason for the low efficacy of treatments for AUD could be an unclear understanding of the neural correlates of the disease. As such, the studies in this dissertation aim at elucidating the neural mechanisms undergirding AUD, which could lead to more efficacious treatment and rehabilitation strategies. The propensity for impulsive decision making (choosing smaller, sooner rewards over larger, later ones), also known as delay discounting (DD), is an established risk factor for a variety of substance abuse disorders, including AUD. Brain mapping of DD routinely uses modalities such as blood-oxygenation-level-dependent functional magnetic resonance imaging (BOLD fMRI). However, the extent to which these brain activation maps reflect the characteristics of impulsive behavior has not been directly studied. To examine this, we used multi-voxel pattern analysis (MVPA) methods such as multivariate classification with Support Vector Machine (SVM) algorithms and trained accurate classifiers of high vs. low impulsivity on individual fMRI brain maps. Our results demonstrate that brain regions in the prefrontal cortex encode the neuroeconomic decision making characterizing DD behavior and help classify individuals with low impulsivity from individuals with high impulsivity. Individuals suffering from addictive afflictions such as AUD are often unable to plan for the future and are trapped in a narrow temporal window, resulting in short-term, impulsive decision making. Episodic future thinking (EFT), the ability to project oneself into the future and pre-experience an event, is a rapidly growing area of addiction research, and individuals suffering from addictive disorders are often poor at it. However, it has been shown across healthy individuals and disease populations (addiction, obesity) that practicing EFT reduces impulsive decision making. We provided real-time fMRI neurofeedback to alcohol users while they performed EFT inside the MR scanner to aid them in successfully modulating their thoughts between the present and the future. After the scanning session, participants made more restrained choices when performing a behavioral task outside the scanner, demonstrating an improvement in impulsivity. These two neuroimaging studies interrogate the brain mechanisms of delay discounting and episodic future thinking in alcohol use disorder. Successful classification of impulsive behavior, as demonstrated in the first study, could lead to accurate prediction of treatment outcomes in AUD. The second study suggests that real-time fMRI provides direct access to brain mechanisms regulating EFT and highlights its potential as an intervention for impulsivity in the context of AUD. The work in this dissertation thus investigates important cognitive processes for the treatment of alcohol use disorder that could pave the way for novel therapeutic interventions not only for AUD, but also for a wide spectrum of other addictive disorders.
Doctor of Philosophy
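A hedged sketch of the MVPA-style classification analysis mentioned in the abstract is given below: a linear support vector machine with cross-validated accuracy on entirely synthetic "voxel" features, standing in for the real brain maps and preprocessing pipeline used in the dissertation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_subjects, n_voxels = 60, 500
X = rng.normal(size=(n_subjects, n_voxels))      # synthetic stand-in "brain maps"
y = np.repeat([0, 1], n_subjects // 2)           # low vs. high impulsivity labels
X[y == 1, :20] += 0.8                            # inject a weak group difference

acc = cross_val_score(SVC(kernel='linear', C=1.0), X, y, cv=5)
print(f"mean cross-validated accuracy: {acc.mean():.2f}")
```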
APA, Harvard, Vancouver, ISO, and other styles
46

Malherbe, Chanel. "Fourier method for the measurement of univariate and multivariate volatility in the presence of high frequency data." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Pant, Mohan Dev. "Simulating Univariate and Multivariate Burr Type III and Type XII Distributions Through the Method of L-Moments." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/401.

Full text
Abstract:
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and heavy-tailed non-normal distributions. Conventional moment-based estimators (i.e., the mean, variance, skew, and kurtosis) are traditionally used to characterize the distribution of a random variable or in the context of fitting data. However, conventional moment-based estimators can (a) be substantially biased, (b) have high variance, or (c) be influenced by outliers. In view of these concerns, a characterization of the Burr Type III and Type XII distributions through the method of L-moments is introduced. Specifically, systems of equations are derived for determining the shape parameters associated with user specified L-moment ratios (e.g., L-Skew and L-Kurtosis). A procedure is also developed for the purpose of generating non-normal Burr Type III and Type XII distributions with arbitrary L-correlation matrices. Numerical examples are provided to demonstrate that L-moment based Burr distributions are superior to their conventional moment based counterparts in the context of estimation, distribution fitting, and robustness to outliers. Monte Carlo simulation results are provided to demonstrate that L-moment-based estimators are nearly unbiased, have relatively small variance, and are robust in the presence of outliers for any sample size. Simulation results are also provided to show that the methodology used for generating correlated non-normal Burr Type III and Type XII distributions is valid and efficient. Specifically, Monte Carlo simulation results are provided to show that the empirical values of L-correlations among simulated Burr Type III (and Type XII) distributions are in close agreement with the specified L-correlation matrices.
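For orientation, the first four sample L-moments (and the L-skew and L-kurtosis ratios that the Burr shape parameters are matched to) can be computed from probability-weighted moments as sketched below; the step of solving the systems of equations for the Burr parameters is not shown.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2      # mean, L-scale, L-skew, L-kurtosis

rng = np.random.default_rng(7)
print(sample_lmoments(rng.lognormal(size=1000)))   # heavy-tailed example data
```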
APA, Harvard, Vancouver, ISO, and other styles
48

Wolting, Duane. "MULTIVARIATE SYSTEMS ANALYSIS." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615760.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada
In many engineering applications, a systems analysis is performed to study the effects of random error propagation throughout a system. Often these errors are not independent, and have joint behavior characterized by arbitrary covariance structure. The multivariate nature of such problems is compounded in complex systems, where overall system performance is described by a q-dimensional random vector. To address this problem, a computer program was developed which generates Taylor series approximations for multivariate system performance in the presence of random component variability. A summary of an application of this approach is given in which an analysis was performed to assess simultaneous design margins and to ensure optimal component selection.
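A minimal sketch of the underlying idea, first-order (Taylor series) propagation of correlated component errors through a performance function via var(f) ≈ J Σ Jᵀ, is given below with a made-up performance function and covariance matrix; it is not the program described in the paper.

```python
import numpy as np

def propagate(f, x0, cov, h=1e-6):
    """First-order propagation: returns f(x0) and the approximate output covariance."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.atleast_1d(f(x0))
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):                 # finite-difference Jacobian
        dx = np.zeros_like(x0)
        dx[j] = h
        J[:, j] = (np.atleast_1d(f(x0 + dx)) - f0) / h
    return f0, J @ cov @ J.T

perf = lambda x: np.array([x[0] * x[1], x[0] + 2.0 * x[2]])   # toy 2-output system model
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.00],
                [0.00, 0.00, 0.01]])                           # correlated component errors
print(propagate(perf, [1.0, 2.0, 0.5], cov))
```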
APA, Harvard, Vancouver, ISO, and other styles
49

Ahmed, Mosabber Uddin. "Multivariate multiscale complexity analysis." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10204.

Full text
Abstract:
Established dynamical complexity analysis measures operate at a single scale and thus fail to quantify inherent long-range correlations in real-world data, a key feature of complex systems. They are designed for scalar time series; however, multivariate observations are common in modern real-world scenarios and their simultaneous analysis is a prerequisite for understanding the underlying signal generating model. To that end, this thesis first introduces a notion of multivariate sample entropy and thus extends current univariate complexity analysis to the multivariate case. The proposed multivariate multiscale entropy (MMSE) algorithm is shown to be capable of addressing the dynamical complexity of such data directly in the domain where they reside, and at multiple temporal scales, thus making full use of all the available information, both within and across the multiple data channels. Next, the intrinsic multivariate scales of the input data are generated adaptively via the multivariate empirical mode decomposition (MEMD) algorithm. This allows both for generating comparable scales from multiple data channels and for temporal scales of the same length as the input signal, thus removing the critical limitation on input data length in current complexity analysis methods. The resulting MEMD-enhanced MMSE method is also shown to be suitable for non-stationary multivariate data analysis owing to the data-driven nature of the MEMD algorithm, as non-stationarity is the biggest obstacle to meaningful complexity analysis. This thesis presents a significant step forward in this area by introducing robust and physically meaningful complexity estimates of real-world systems, which are typically multivariate, finite in duration, and noisy and heterogeneous in nature. This also allows us to gain a better understanding of the complexity of the underlying multivariate model, with more degrees of freedom and rigor in the analysis. Simulations on both synthetic and real-world multivariate data sets support the analysis.
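Only the univariate core of the multiscale idea is sketched below: coarse-grain the series at each temporal scale, then compute a (simplified) sample entropy. The multivariate sample entropy and the MEMD-based scale generation introduced in the thesis are not reproduced, and the tolerance and scales are arbitrary.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """Simplified sample entropy with tolerance r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templ)          # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4)):
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-graining at scale s
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(8)
print(multiscale_entropy(rng.normal(size=600)))
```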
APA, Harvard, Vancouver, ISO, and other styles
50

Alashwali, Fatimah Salem. "Robustness and multivariate analysis." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/5299/.

Full text
Abstract:
Invariant coordinate selection (ICS) is a method for finding structures in multivariate data using the eigenvalue-eigenvector decomposition of two different scatter matrices. The performance of ICS depends on the structure of the data and the choice of the scatter matrices. The main goal of this thesis is to understand how ICS works in some situations and does not in others. In particular, we look at ICS under three different structures: two-group mixtures, long-tailed distributions, and a parallel-line structure. Under two-group mixtures, we explore ICS based on the fourth-order moment matrix K̂ and the covariance matrix S. We find the explicit form of K̂ and the ICS criterion under this model. We also explore the projection pursuit (PP) method, a variant of ICS, based on the univariate kurtosis. A comparison is made between PP, based on kurtosis, and ICS, based on K̂ and S, through a simulation study. The results show that PP is more accurate than ICS. The asymptotic distributions of the ICS and PP estimates of the group separation direction are derived. We explore ICS and PP based on two robust measures of spread under two-group mixtures. The use of common location measures and pairwise differencing of the data in robust ICS and PP is investigated using simulations. The simulation results suggest that using a common location measure can sometimes be useful. The second structure considered in this thesis, the long-tailed distribution, is modelled by a two-dimensional errors-in-variables model, where the signal can have a non-normal distribution. ICS based on K̂ and S is explored. We gain insight into how ICS finds the signal direction in the errors-in-variables problem. We also compare the accuracy of the ICS estimate of the signal direction and Geary's fourth-order cumulant-based estimates through simulations. The results suggest that some of the cumulant-based estimates are more accurate than ICS, but ICS has the advantage of affine equivariance. The third structure considered is the parallel-line structure. We explore ICS based on the W-estimate V̂, built from pairwise differencing of the data, and S. We give a detailed analysis of the effect of the separation between points, overall and conditional on the horizontal separation, on the power of ICS based on V̂ and S.
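A minimal ICS sketch in the spirit of the first setting above: with the covariance matrix S and a fourth-order moment scatter matrix K, the invariant coordinates are the generalized eigenvectors of the pair (K, S). The mixture proportions and the rule for picking the candidate separation direction are arbitrary illustrations, not the analysis carried out in the thesis.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(9)
n = 1000
X = np.vstack([rng.multivariate_normal([0.0, 0.0], np.eye(2), int(0.8 * n)),
               rng.multivariate_normal([4.0, 0.0], np.eye(2), int(0.2 * n))])

Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
S_inv = np.linalg.inv(S)
# Fourth-order moment scatter: average of (x' S^{-1} x) * x x' / (p + 2)
w = np.einsum('ij,jk,ik->i', Xc, S_inv, Xc)          # squared Mahalanobis distances
K = (Xc * w[:, None]).T @ Xc / (len(Xc) * (X.shape[1] + 2))

evals, evecs = linalg.eigh(K, S)                     # generalized eigenproblem K v = lambda S v
print(evals)                                         # eigenvalues far from 1 flag structure
print(evecs[:, np.argmax(np.abs(evals - 1.0))])      # candidate group-separation direction
```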
APA, Harvard, Vancouver, ISO, and other styles