
Dissertations on the topic "Statistical model"


Consult the top 50 dissertations for research on the topic "Statistical model".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Rackham, Edward J. „Statistical model of reaction dynamics“. Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408683.

2

Sassoon, Isabel Karen. „Argumentation for statistical model selection“. Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/argumentation-for-statistical-model-selection(79168e3a-2903-43dc-ac60-97a7c87f94f0).html.

Annotation:
The increased availability of clinical data, in particular case data collected routinely, provides a valuable opportunity for analysis with a view to supporting evidence-based decision making. In order to confidently leverage this data in support of decision making, it is essential to analyse it with rigour by employing the most appropriate statistical method. It can be difficult for a clinician to choose the appropriate statistical method, and indeed the choice is not always straightforward, even for a statistician. The considerations as to what model to use depend on the research question, the data and at times background information from the clinician, and will vary from model to model. This thesis develops an intelligent decision support method that supports the clinician by recommending the most appropriate statistical model approach given the research question and the available data. The main contributions of this thesis are: identification of the requirements from real-world collaboration with clinicians; development of an argumentation-based approach to recommend statistical models based on a research question and data features; an argumentation scheme for proposing possible models; a statistical knowledge base designed to support the argumentation scheme, critical questions and preferences; and a method of reasoning with the generated arguments and preference arguments. The approach is evaluated through case studies and a prototype.
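As a toy illustration of the argumentation-based recommendation idea described in this abstract, one can sketch rules that propose candidate models from data features, and critical questions that defeat a proposal. All rule content, feature names and function names below are invented for illustration and are not taken from the thesis:

```python
# Hypothetical sketch: rules propose statistical models from data
# features; critical questions can defeat a proposal. Illustrative only.

def propose_models(features):
    """Return (model, reason) proposals for the given data features."""
    proposals = []
    if features.get("outcome") == "binary":
        proposals.append(("logistic regression", "binary outcome"))
    if features.get("outcome") == "time-to-event":
        proposals.append(("Cox proportional hazards", "survival outcome"))
    if features.get("outcome") == "continuous":
        proposals.append(("linear regression", "continuous outcome"))
    return proposals

def critical_questions(features, proposal):
    """Return objections that defeat a proposal (empty if none)."""
    model, _ = proposal
    objections = []
    if model == "linear regression" and not features.get("residuals_normal", True):
        objections.append("residual normality assumption violated")
    return objections

def recommend(features):
    """Keep only proposals that survive all critical questions."""
    return [p for p in propose_models(features)
            if not critical_questions(features, p)]

print(recommend({"outcome": "binary"}))
```

A real knowledge base would of course carry many more rules, preferences between arguments, and clinician input, as the abstract describes.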
3

Barbot, Benoît. „Acceleration for statistical model checking“. Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0041/document.

Annotation:
In the past decades, the analysis of complex critical systems subject to uncertainty has become more and more important. In particular, the quantitative analysis of these systems is necessary to guarantee that their probability of failure is very small. As their state space is extremely large and the probability of interest is very small, typically less than one in a billion, classical methods do not apply to such systems. Model checking algorithms are used for the analysis of probabilistic systems; they take as input the system and its expected behaviour, and compute the probability with which the system behaves as expected. These algorithms have been broadly studied and can be divided into two main families: numerical model checking and statistical model checking. The former computes small probabilities accurately by solving linear equation systems, but does not scale to very large systems due to the state-space explosion problem. The latter is based on Monte Carlo simulation and scales well to big systems, but cannot deal with small probabilities. The main contribution of this thesis is the design and implementation of a method combining the two approaches and returning a confidence interval for the probability of interest. This method applies to systems in both continuous and discrete time settings, for time-bounded and time-unbounded properties. All the variants of this method rely on an abstraction of the model; this abstraction is analysed by a numerical model checker and the result is used to steer Monte Carlo simulations on the initial model. The abstraction should be small enough to be analysed by numerical methods and precise enough to improve the simulation. It can be built by the modeller; alternatively, a class of systems has been identified in which the abstraction can be computed automatically. The approach has been implemented in the tool Cosmos and successfully applied to classical benchmarks and a case study, which show the efficiency of the method.
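The statistical half of the combination described above rests on Monte Carlo estimation with confidence intervals. A minimal sketch (unrelated to the Cosmos implementation; the toy "system" below is made up) shows how an event probability and a 95% normal-approximation interval are obtained from simulation, and why tiny probabilities defeat plain Monte Carlo: the half-width shrinks only as 1/sqrt(n).

```python
# Plain Monte Carlo estimation of an event probability with a 95%
# normal-approximation confidence interval. Illustrative sketch only.
import math
import random

def estimate_probability(simulate, n, seed=0):
    """Run `simulate(rng)` n times; return (p_hat, 95% half-width)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if simulate(rng))
    p_hat = hits / n
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, half

# Toy "system": a failure occurs when a uniform draw falls below 0.05.
p_hat, half = estimate_probability(lambda rng: rng.random() < 0.05, 20_000)
print(f"p = {p_hat:.4f} +/- {half:.4f}")
```

For a failure probability of one in a billion, the same half-width calculation shows that an astronomical number of traces would be needed, which is exactly what the thesis's numerically guided simulation is designed to avoid.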
4

Crampton, Raymond J. „A nonlinear statistical MESFET model using low order statistics of equivalent circuit model parameter sets“. Thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-03032009-040420/.

5

Chang, Chia-Jung. „Statistical and engineering methods for model enhancement“. Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44766.

Annotation:
Models which describe the performance of physical processes are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce a discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely from the data generated by the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it.
This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications that demonstrate the proposed methodologies. The first application focuses on polymer composite quality, where nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission electron microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, quantitatively representing the nanomaterial quality presented through image data. The model parameters are estimated with Bayesian MCMC techniques to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, a force prediction interval is derived by incorporating the variability in the runout parameters as well as in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval.
To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities developed engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring and real-time control.
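The data-driven enhancement idea, a physics model adjusted minimally toward observations, can be caricatured with a single constant bias term fitted by least squares. This is a deliberately simplified sketch with made-up numbers, not the GP-based Minimal Adjustment procedure itself:

```python
# Toy sketch: a physics-based model plus a minimal data-driven
# correction (a constant bias fitted by least squares). Illustrative
# model form and data; not from the dissertation.

def physics_model(x):
    """Deterministic engineering model (illustrative form)."""
    return 2.0 * x

def fit_bias(xs, ys):
    """Least-squares constant bias between data and the physics model."""
    residuals = [y - physics_model(x) for x, y in zip(xs, ys)]
    return sum(residuals) / len(residuals)

def enhanced_model(x, bias):
    """Physics trend plus the fitted minimal adjustment."""
    return physics_model(x) + bias

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.5, 2.4, 4.6, 6.5]   # observations with a systematic offset
bias = fit_bias(xs, ys)
print(round(bias, 2))        # recovers the ~0.5 offset
```

The dissertation's actual procedure replaces the constant bias with a GP approximated by linear regression and selects adjustment terms jointly, but the division of labour (physics for the trend, statistics for the discrepancy) is the same.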
6

Shi, Jianqiang. „A trust model with statistical foundation“. Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/27038.

Annotation:
The widespread use of the Internet signals the need for a better understanding of trust as a basis for secure on-line interaction. In the face of increasing uncertainty and risk, users and machines must be allowed to reason effectively about the trustworthiness of other entities. In this thesis, we propose a trust model that assists users and machines with decision-making in online interactions by using past behavior as a predictor of likely future behavior. We develop a general method to automatically compute trust based on self-experience and the recommendations of others. Our trust model solves the problem of recommendation combination and detection of unfair recommendations. Our approach involves data analysis methods (Bayesian estimation, Dirichlet distribution), and machine learning methods (Weighted Majority Algorithm). Furthermore, we apply our trust model to several utility models to increase the accuracy of decision-making in different contexts of Web Services. We describe simulation experiments to illustrate its effectiveness, robustness and the evolution of trust.
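The Bayesian ingredient of such trust models can be illustrated with the simplest special case: a Beta-Bernoulli update (the two-outcome case of the Dirichlet estimation mentioned above), where trust is the expected probability of a good interaction. This is a generic textbook sketch, not the thesis's full model with recommendations:

```python
# Beta-Bernoulli trust sketch: past behaviour updates a Beta(alpha, beta)
# belief, and trust is its mean. Illustrative only.

def update_trust(alpha, beta, outcome):
    """Bayesian update: outcome True means a good interaction."""
    return (alpha + 1, beta) if outcome else (alpha, beta + 1)

def trust(alpha, beta):
    """Expected probability of good behaviour under Beta(alpha, beta)."""
    return alpha / (alpha + beta)

a, b = 1, 1                      # uniform prior: no interaction history
for outcome in [True, True, True, False]:
    a, b = update_trust(a, b, outcome)
print(trust(a, b))               # 4/6, about 0.667
```

The thesis generalises this by combining self-experience with others' recommendations and by weighting recommenders (e.g. via the Weighted Majority Algorithm) to detect unfair recommendations.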
7

Jabbari, Sanaz. „A Statistical Model of Lexical Context“. Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.521960.

8

屠烈偉 and Lit-wai Tao. „Statistical inference on a mixture model“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977480.

9

Keogh-Brown, Marcus R. „A statistical model of internet traffic“. Thesis, Queen Mary, University of London, 2003. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1811.

Annotation:
We present a method to extract a time series (Number of Active Requests (NAR)) from web cache logs which serves as a transport level measurement of internet traffic. This series also reflects the performance or Quality of Service of a web cache. Using time series modelling, we interpret the properties of this kind of internet traffic and its effect on the performance perceived by the cache user. Our preliminary analysis of NAR concludes that this dataset is suggestive of a long-memory self-similar process but is not heavy-tailed. Having carried out more in-depth analysis, we propose a three stage modelling process of the time series: (i) a power transformation to normalise the data, (ii) a polynomial fit to approximate the general trend and (iii) a modelling of the residuals from the polynomial fit. We analyse the polynomial and show that the residual dataset may be modelled as a FARIMA(p, d, q) process. Finally, we use Canonical Variate Analysis to determine the most significant defining properties of our measurements and draw conclusions to categorise the differences in traffic properties between the various caches studied. We show that the strongest illustration of differences between the caches is shown by the short memory parameters of the FARIMA fit. We compare the differences revealed between our studied caches and draw conclusions on them. Several programs have been written in Perl and S programming languages for this analysis including totalqd.pl for NAR calculation, fullanalysis for general statistical analysis of the data and armamodel for FARIMA modelling.
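The three-stage modelling process described above can be sketched in miniature. The example below uses a square-root power transform and a straight-line trend on a synthetic series standing in for NAR, and omits the FARIMA stage for the residuals; it is a sketch of the pipeline shape, not the thesis's code:

```python
# Three-stage sketch: (i) power transform, (ii) trend fit,
# (iii) residuals left for a time-series model. Synthetic data.

def transform(series, lam=0.5):
    """Stage (i): power transform to normalise the data."""
    return [x ** lam for x in series]

def fit_trend(series):
    """Stage (ii): least-squares straight-line trend y = a + b*t."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
         / sum((t - t_mean) ** 2 for t in range(n)))
    return y_mean - b * t_mean, b

def residuals(series, a, b):
    """Stage (iii): what is left for the FARIMA model to explain."""
    return [y - (a + b * t) for t, y in enumerate(series)]

data = [(2 + 0.5 * t) ** 2 for t in range(10)]  # synthetic NAR-like series
y = transform(data)            # exactly linear after the sqrt transform
a, b = fit_trend(y)
res = residuals(y, a, b)
print(round(a, 6), round(b, 6))
```

On real cache data the residuals are of course not negligible; modelling them as FARIMA(p, d, q) is where the long-memory structure enters.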
10

Predoehl, Andrew. „A Statistical Model of Recreational Trails“. Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612599.

Annotation:
We present a statistical model of recreational trails, and a method to infer trail routes from geophysical data, namely aerial imagery and terrain elevation. We learn a set of textures (textons) that characterize the imagery, and use the textons to segment each image into super-pixels. We also model each texton's probability of generating trail pixels, and the direction of such trails. From terrain elevation, we model the magnitude and direction of terrain gradient on-trail and off-trail. These models lead to a likelihood function for image and elevation. Consistent with Bayesian reasoning, we combine the likelihood with a prior model of trail length and smoothness, yielding a posterior distribution for trails, given an image. We search for good values of this posterior using both a novel stochastic variation of Dijkstra's algorithm, and an MCMC-inspired sampler. Our experiments, on trail images and ground truth collected in the western continental USA, show substantial improvement over those of the previous best trail-finding methods.
11

Hauer, Michael. „Statistical distributions in the thermal model“. Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/11672.

Annotation:
Includes abstract.
Includes bibliographical references.
An attempt is made to use the thermal model to determine statistical particle number fluctuations in the presence of exact conservation laws. A basis is provided, which will be useful to extend the range of applications of the thermal model with both a large number of conserved charges as well as quantum statistics. The central limit theorem and its related expansions provide a flexible mathematical tool for calculation of statistical fluctuations, and allows for application of the canonical ensemble to high energy particle collision data. A first analysis of the NA49 CC data suggests that statistical multiplicity fluctuations can be understood within the statistical hadronization model.
12

Dehdari, Jonathan. „A Neurophysiologically-Inspired Statistical Language Model“. The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399071363.

13

Tao, Lit-wai. „Statistical inference on a mixture model“. [Hong Kong] : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13781479.

14

Schivo, Stefano. „Statistical Model Checking of Web Services“. Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368768.

Annotation:
In recent years, the increasing interest in the service-oriented paradigm has given rise to a series of supporting tools and languages. In particular, COWS (Calculus for Orchestration of Web Services) has been attracting the attention of part of the scientific community for its effort in formalising the semantics of the de facto standard Web Services orchestration language WS-BPEL. The purpose of the present work is to provide the tools for representing and evaluating the performance of Web Services modelled through COWS. To this end, a stochastic version of COWS is proposed: such a language allows us to describe the performance of the modelled systems and thus to represent Web Services from both the qualitative and quantitative points of view. In particular, we provide COWS with an extension which maintains the polyadic matching mechanism: this way, the language still provides the capability to explicitly model the use of session identifiers. The resulting Scows is then equipped with a software tool which allows us to perform model checking effectively without incurring the problem of state-space explosion, which would otherwise thwart the computation even when checking relatively small models. To obtain this result, the proposed tool relies on the statistical analysis of simulation traces, which allows us to deal with large state spaces without the need to explore them completely. Such an improvement in model checking performance comes at the price of accuracy in the answers provided: for this reason, users can trade off speed against accuracy by modifying a series of parameters. To assess the efficiency of the proposed technique, our tool is compared with a number of existing model checking tools.
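The speed/accuracy trade-off mentioned above is typically governed by statistical bounds on the number of simulation traces. As a hedged illustration (a standard bound, not necessarily the criterion Scows uses), the Chernoff-Hoeffding inequality gives the sample size needed so the estimated probability is within `eps` of the truth with confidence `1 - delta`:

```python
# Chernoff-Hoeffding sample-size bound for statistical model checking:
# n >= ln(2/delta) / (2 * eps^2). Illustrative of the accuracy/speed
# trade-off: halving eps quadruples the number of traces.
import math

def required_samples(eps, delta):
    """Traces needed for |p_hat - p| <= eps with probability 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(required_samples(0.01, 0.05))   # tight precision: many traces
print(required_samples(0.1, 0.05))    # loose precision: few traces
```

This is exactly the dial the abstract describes: loosening `eps` or `delta` makes checking fast but the answer less accurate.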
15

Schivo, Stefano. „Statistical Model Checking of Web Services“. Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/231/1/PhD-Thesis.pdf.

Annotation:
In recent years, the increasing interest in the service-oriented paradigm has given rise to a series of supporting tools and languages. In particular, COWS (Calculus for Orchestration of Web Services) has been attracting the attention of part of the scientific community for its effort in formalising the semantics of the de facto standard Web Services orchestration language WS-BPEL. The purpose of the present work is to provide the tools for representing and evaluating the performance of Web Services modelled through COWS. To this end, a stochastic version of COWS is proposed: such a language allows us to describe the performance of the modelled systems and thus to represent Web Services from both the qualitative and quantitative points of view. In particular, we provide COWS with an extension which maintains the polyadic matching mechanism: this way, the language still provides the capability to explicitly model the use of session identifiers. The resulting Scows is then equipped with a software tool which allows us to perform model checking effectively without incurring the problem of state-space explosion, which would otherwise thwart the computation even when checking relatively small models. To obtain this result, the proposed tool relies on the statistical analysis of simulation traces, which allows us to deal with large state spaces without the need to explore them completely. Such an improvement in model checking performance comes at the price of accuracy in the answers provided: for this reason, users can trade off speed against accuracy by modifying a series of parameters. To assess the efficiency of the proposed technique, our tool is compared with a number of existing model checking tools.
16

STEGEMAN, ROY. „STATISTICAL LEARNING FOR STANDARD MODEL PHENOMENOLOGY“. Doctoral thesis, Università degli Studi di Milano, 2022. https://hdl.handle.net/2434/953993.

Annotation:
The focus of this thesis is the accurate determination of parton distribution functions (PDFs), with a particular emphasis on modern machine learning tools used within the NNPDF approach. We first present NNPDF4.0, currently the most recent and most precise set of PDFs based on a global dataset. We then provide suggestions for improvements to the machine learning tools used for the NNPDF4.0 determination, both in terms of parametrization and model selection. We discuss different sources of PDF uncertainty. First, we elucidate the nontrivial aspects of averaging over the space of PDF determinations by explicitly calculating the data-driven correlation between different sets of PDFs. Then, we lay out certain fundamental properties of the sampling as performed within NNPDF methodology through explicit examples, and discuss how one may gain insight into the results of a neural network fit despite it being a black box model. Finally, we show how the flexibility of the NNPDF methodology allows for it to be applied to problems other than PDF determination, in particular we present a determination of neutrino inelastic structure functions.
17

Kahle, Thomas. „On Boundaries of Statistical Models“. Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-37952.

Annotation:
In the thesis "On Boundaries of Statistical Models" problems related to a description of probability distributions with zeros, lying in the boundary of a statistical model, are treated. The distributions considered are joint distributions of finite collections of finite discrete random variables. Owing to this restriction, statistical models are subsets of finite dimensional real vector spaces. The support set problem for exponential families, the main class of models considered in the thesis, is to characterize the possible supports of distributions in the boundaries of these statistical models. It is shown that this problem is equivalent to a characterization of the face lattice of a convex polytope, called the convex support. The main tool for treating questions related to the boundary are implicit representations. Exponential families are shown to be sets of solutions of binomial equations, connected to an underlying combinatorial structure, called oriented matroid. Under an additional assumption these equations are polynomial and one is placed in the setting of commutative algebra and algebraic geometry. In this case one recovers results from algebraic statistics. The combinatorial theory of exponential families using oriented matroids makes the established connection between an exponential family and its convex support completely natural: Both are derived from the same oriented matroid. The second part of the thesis deals with hierarchical models, which are a special class of exponential families constructed from simplicial complexes. The main technical tool for their treatment in this thesis are so called elementary circuits. After their introduction, they are used to derive properties of the implicit representations of hierarchical models. 
Each elementary circuit gives an equation holding on the hierarchical model, and these equations are shown to be the "simplest", in the sense that the smallest degree among the equations corresponding to elementary circuits gives a lower bound on the degree of all equations characterizing the model. Translating this result back to polyhedral geometry yields a neighborliness property of marginal polytopes, the convex supports of hierarchical models. Elementary circuits of small support are related to independence statements holding between the random variables whose joint distributions the hierarchical model describes. Models for which the complete set of circuits consists of elementary circuits are shown to be described by totally unimodular matrices. The thesis also contains an analysis of the case of binary random variables. In this special situation, marginal polytopes can be represented as the convex hulls of linear codes. Among the results here is a classification of full-dimensional linear code polytopes in terms of their subgroups. If represented by polynomial equations, exponential families are the varieties of binomial prime ideals. The third part of the thesis describes tools to treat models defined by not necessarily prime binomial ideals. It follows from Eisenbud and Sturmfels' results on binomial ideals that these models are unions of exponential families, and apart from solving the support set problem for each of these, one is faced with finding the decomposition. The thesis discusses algorithms for specialized treatment of binomial ideals, exploiting their combinatorial nature. The provided software package Binomials.m2 is shown to be able to compute very large primary decompositions, yielding a counterexample to a recent conjecture in algebraic statistics.
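The binomial equations that cut out exponential families can be made concrete with the smallest example: for two binary random variables, the independence model is defined by the single binomial p00*p11 - p01*p10 = 0 (the 2x2 table has rank 1). A short check of this well-known fact:

```python
# For two binary variables, independence is cut out by the binomial
# p00*p11 - p01*p10 = 0. Standard example, not code from the thesis.

def independence_residual(p):
    """p maps (i, j) to a probability; zero iff the 2x2 table has rank 1."""
    return p[(0, 0)] * p[(1, 1)] - p[(0, 1)] * p[(1, 0)]

# An independent joint distribution: p(i, j) = p1(i) * p2(j).
p1, p2 = {0: 0.3, 1: 0.7}, {0: 0.6, 1: 0.4}
joint = {(i, j): p1[i] * p2[j] for i in (0, 1) for j in (0, 1)}
print(independence_residual(joint))   # zero up to rounding
```

Hierarchical models on larger complexes are described by many such circuit binomials, and the thesis's degree bounds say how simple those equations can possibly be.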
18

Pirozhkova, Daria. „Statistical models for an MTPL portfolio“. Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359373.

Der volle Inhalt der Quelle
Annotation:
In this thesis, we consider several statistical techniques applicable to claim frequency models of an MTPL portfolio, with a focus on overdispersion. The practical part of the work applies and compares the models on real data represented by an MTPL portfolio; the comparison is based on goodness-of-fit measures. Furthermore, the predictive power of selected models is tested on the given dataset using simulation. Hence, this thesis combines an analysis of goodness-of-fit results with an assessment of the models' predictive power.
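The overdispersion that motivates moving beyond a plain Poisson claim-frequency model is usually diagnosed with the dispersion index (variance over mean), which equals 1 for Poisson data. A minimal sketch with made-up claim counts, not the thesis's portfolio data:

```python
# Dispersion index: variance/mean is about 1 for Poisson counts;
# values well above 1 indicate overdispersion and suggest e.g. a
# negative binomial model. Counts below are illustrative only.
from statistics import mean, pvariance

def dispersion_index(counts):
    """Variance-to-mean ratio of observed claim counts."""
    return pvariance(counts) / mean(counts)

claims = [0, 0, 0, 1, 0, 4, 0, 0, 7, 0, 1, 0]   # heavy right tail
print(round(dispersion_index(claims), 2))
```

An index this far above 1 is the usual signal that a quasi-Poisson or negative binomial specification will fit claim frequencies better.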
19

KIM, NAMHEE. „A semiparametric statistical approach to Functional MRI data“. The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1262295445.

20

Jeng, Tian-Tzer. „Some contributions to asymptotic theory on hypothesis testing when the model is misspecified“. The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487332636473942.

21

Rivers, Derick Lorenzo. „Dynamic Bayesian Approaches to the Statistical Calibration Problem“. VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3599.

Annotation:
The problem of statistical calibration of a measuring instrument can be framed both in a statistical context as well as in an engineering context. In the first, the problem is dealt with by distinguishing between the "classical" approach and the "inverse" regression approach. Both of these models are static models and are used to estimate "exact" measurements from measurements that are affected by error. In the engineering context, the variables of interest are considered to be taken at the time at which you observe the measurement. The Bayesian time series analysis method of Dynamic Linear Models (DLM) can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented employs Bayesian methodology to perform statistical calibration. The DLM framework is used to capture the time-varying parameters that may be changing or drifting over time. Dynamic approaches to the linear, nonlinear, and multivariate calibration problem are presented in this dissertation. Simulation studies are conducted where the dynamic models are compared to some well-known "static" calibration approaches in the literature from both the frequentist and Bayesian perspectives. Applications to microwave radiometry are given.
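The "classical" versus "inverse" distinction for static linear calibration can be sketched on synthetic instrument data; all numbers below are illustrative, and the dissertation's contribution is to replace these static fits with dynamic (DLM-based) ones:

```python
# Static linear calibration two ways. Classical: regress readings on
# reference values, then invert. Inverse: regress reference values on
# readings directly. Synthetic data for illustration.

def least_squares(x, y):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    return ym - b * xm, b

truth = [0.0, 1.0, 2.0, 3.0, 4.0]       # known reference values
reading = [0.1, 1.9, 4.1, 5.9, 8.0]     # instrument measurements

# Classical approach: fit reading = a + b*truth, then invert.
a, b = least_squares(truth, reading)
new_reading = 5.0
classical = (new_reading - a) / b

# Inverse approach: fit truth = a2 + b2*reading and predict directly.
a2, b2 = least_squares(reading, truth)
inverse = a2 + b2 * new_reading

print(round(classical, 3), round(inverse, 3))
```

On clean data the two estimates nearly coincide; they diverge as measurement error grows, which is one motivation for treating the choice carefully.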
22

Yu, Fu. „On statistical analysis of vehicle time-headways using mixed distribution models“. Thesis, University of Dundee, 2014. https://discovery.dundee.ac.uk/en/studentTheses/d101df63-b7db-45b6-8a03-365b64345e6b.

Annotation:
For decades, vehicle time-headway distribution models have been studied by many researchers and traffic engineers. A good time-headway model can be beneficial to traffic studies and management in many aspects; e.g. with a better understanding of road traffic patterns and road user behaviour, the researchers or engineers can give better estimations and predictions under certain road traffic conditions and hence make better decisions on traffic management and control. The models also help us to implement high-quality microscopic traffic simulation studies to seek good solutions to traffic problems with minimal interruption of the real traffic environment and minimum costs. Compared within previously studied models, the mixed (SPM and GQM) mod- els, especially using the gamma or lognormal distributions to describe followers headways, are probably the most recognized ones by researchers in statistical stud- ies of headway data. These mixed models are reported with good fitting results indicated by goodness-of-fit tests, and some of them are better than others in com- putational costs. The gamma-SPM and gamma-GQM models are often reported to have similar fitting qualities, and they often out-perform the lognormal-GQM model in terms of computational costs. A lognormal-SPM model cannot be formed analytically as no explicit Laplace transform is available with the lognormal dis- tribution. The major downsides of using mixed models are the difficulties and more flexibilities in fitting process as they have more parameters than those single models, and this sometimes leads to unsuccessful fitting or unreasonable fitted pa- rameters despite their success in passing GoF tests. Furthermore, it is difficult to know the connections between model parameters and realistic traffic situations or environments, and these parameters have to be estimated using headway samples. Hence, it is almost impossible to explain any traffic phenomena with the param- eters of a model. 
Moreover, with the gamma distribution as the only common well-known followers' headway model, it is hard to judge whether it describes the headway process appropriately. This creates a barrier to a better understanding of how drivers follow their preceding vehicles. This study first proposes a framework developed using MATLAB, which helps researchers quickly implement any headway distributions of interest. This framework uses common methods to manage and prepare headway samples to meet the requirements of data analysis. It also provides common structures and methods for implementing existing or new models, fitting models, testing their performance and reporting results. This simplifies the development work involved in headway analysis, avoids unnecessary repetition of work done by others and provides results in formats that are more comparable with those reported by others. Secondly, this study focuses on the implementation of existing mixed models, i.e. the gamma-SPM, gamma-GQM and lognormal-GQM, using the proposed framework. The lognormal-SPM is also tested for the first time, with the recently developed approximation method for the Laplace transform of lognormal distributions. The parameters of these mixed models are specially discussed, as a means of restriction to simplify the fitting process of these models. Three ways of parameter pre-determination are attempted on the gamma-SPM and gamma-GQM models. A couple of response-time (RT) distributions are the focus of the later part of this study. Two RT models, i.e. ex-Gaussian (EMG) and inverse Gaussian (IVG), are used for the first time as single models to describe headway data. Their fitting performance is comparable to the best-known single lognormal model. Extending this work further, these two models are tested as followers' headway distributions in both SPM and GQM mixed models. The test results show excellent fitting performance. 
These results give researchers more alternatives when using mixed models in headway analysis, and will help to compare the behaviours of different models when they are used to describe followers' headway data. Again, similar parameter restrictions are attempted for these new mixed models; the results show acceptable performance and also correct some unreasonable fittings caused by the over-flexibility of 4- or 5-parameter models.
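As a rough sketch of the kind of gamma headway fitting this annotation discusses (the shape/scale values, sample size and function names below are invented for illustration, not taken from the thesis), a quick method-of-moments fit needs only a few lines of Python:

```python
import random
from statistics import mean, pvariance

def fit_gamma_moments(samples):
    """Method-of-moments estimates for a gamma distribution:
    shape k = mean^2 / variance, scale theta = variance / mean.
    A common quick alternative to maximum likelihood."""
    m = mean(samples)
    v = pvariance(samples, m)
    return m * m / v, v / m

random.seed(42)
# Synthetic followers' headways: gamma with shape 4.0, scale 0.5 (mean 2 s).
headways = [random.gammavariate(4.0, 0.5) for _ in range(20000)]
k_hat, theta_hat = fit_gamma_moments(headways)
```

With enough samples the moment estimates land close to the generating parameters; a real study would follow this with a goodness-of-fit test, as the annotation describes.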
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Kahle, Thomas. „On Boundaries of Statistical Models“. Doctoral thesis, Max-Planck-Institut für Mathematik in den Naturwissenschaften, 2009. https://ul.qucosa.de/id/qucosa%3A11003.

Der volle Inhalt der Quelle
Annotation:
In the thesis "On Boundaries of Statistical Models" problems related to a description of probability distributions with zeros, lying in the boundary of a statistical model, are treated. The distributions considered are joint distributions of finite collections of finite discrete random variables. Owing to this restriction, statistical models are subsets of finite dimensional real vector spaces. The support set problem for exponential families, the main class of models considered in the thesis, is to characterize the possible supports of distributions in the boundaries of these statistical models. It is shown that this problem is equivalent to a characterization of the face lattice of a convex polytope, called the convex support. The main tool for treating questions related to the boundary are implicit representations. Exponential families are shown to be sets of solutions of binomial equations, connected to an underlying combinatorial structure, called oriented matroid. Under an additional assumption these equations are polynomial and one is placed in the setting of commutative algebra and algebraic geometry. In this case one recovers results from algebraic statistics. The combinatorial theory of exponential families using oriented matroids makes the established connection between an exponential family and its convex support completely natural: Both are derived from the same oriented matroid. The second part of the thesis deals with hierarchical models, which are a special class of exponential families constructed from simplicial complexes. The main technical tool for their treatment in this thesis are so called elementary circuits. After their introduction, they are used to derive properties of the implicit representations of hierarchical models. 
Each elementary circuit gives an equation holding on the hierarchical model, and these equations are shown to be the "simplest", in the sense that the smallest degree among the equations corresponding to elementary circuits gives a lower bound on the degree of all equations characterizing the model. Translating this result back to polyhedral geometry yields a neighborliness property of marginal polytopes, the convex supports of hierarchical models. Elementary circuits of small support are related to independence statements holding between the random variables whose joint distributions the hierarchical model describes. Models for which the complete set of circuits consists of elementary circuits are shown to be described by totally unimodular matrices. The thesis also contains an analysis of the case of binary random variables. In this special situation, marginal polytopes can be represented as the convex hulls of linear codes. Among the results here is a classification of full-dimensional linear code polytopes in terms of their subgroups. If represented by polynomial equations, exponential families are the varieties of binomial prime ideals. The third part of the thesis describes tools to treat models defined by not necessarily prime binomial ideals. It follows from Eisenbud and Sturmfels' results on binomial ideals that these models are unions of exponential families, and apart from solving the support set problem for each of these, one is faced with finding the decomposition. The thesis discusses algorithms for specialized treatment of binomial ideals, exploiting their combinatorial nature. The provided software package Binomials.m2 is shown to be able to compute very large primary decompositions, yielding a counterexample to a recent conjecture in algebraic statistics.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

AvRuskin, Gillian. „Towards A Spatial Model of Rurality“. Fogler Library, University of Maine, 2000. http://www.library.umaine.edu/theses/pdf/AvRuskinG2000.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Bakir, Mehmet Emin. „Automatic selection of statistical model checkers for analysis of biological models“. Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/20216/.

Der volle Inhalt der Quelle
Annotation:
Statistical Model Checking (SMC) blends the speed of simulation with the rigorous analytical capabilities of model checking, and its success has prompted researchers to implement a number of SMC tools whose availability provides flexibility and fine-tuned control over model analysis. However, each tool has its own practical limitations, and different tools have different requirements and performance characteristics. The performance of different tools may also depend on the specific features of the input model or the type of query to be verified. Consequently, choosing the most suitable tool for verifying any given model requires a significant degree of experience, and in most cases, it is challenging to predict the right one. The aim of our research has been to simplify the model checking process for researchers in biological systems modelling by simplifying and rationalising the model selection process. This has been achieved through delivery of the various key contributions listed below. • We have developed a software component for verification of kernel P (kP) system models, using the NuSMV model checker. We integrated it into a larger software platform (www.kpworkbench.org). • We surveyed five popular SMC tools, comparing their modelling languages, external dependencies, expressibility of specification languages, and performance. To the best of our knowledge, this is the first known attempt to categorise the performance of SMC tools based on the commonly used property specifications (property patterns) for model checking. • We have proposed a set of model features which can be used for predicting the fastest SMC for biological model verification, and have shown, moreover, that the proposed features both reduce computation time and increase predictive power. 
• We used machine learning algorithms for predicting the fastest SMC tool for verification of biological models, and have shown that this approach can successfully predict the fastest SMC tool with over 90% accuracy. • We have developed a software tool, SMC Predictor, that predicts the fastest SMC tool for a given model and property query, and have made this freely available to the wider research community (www.smcpredictor.com). Our results show that using our methodology can generate significant savings in the amount of time and resources required for model verification.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Liu, Xiang. „A Multi-Indexed Logistic Model for Time Series“. Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3140.

Der volle Inhalt der Quelle
Annotation:
In this thesis, we explore a multi-indexed logistic regression (MILR) model, with particular emphasis on its application to time series. MILR includes simple logistic regression (SLR) as a special case, and the hope is that it will in some instances also produce significantly better results. To motivate the development of MILR, we consider its application to the analysis of both simulated sine wave data and stock data. We first look at the well-studied SLR model and its application to the analysis of time series data. Using a more sophisticated representation of sequential data, we then detail the implementation of MILR. We compare their performance using forecast accuracy and an area-under-the-curve score on simulated sine waves with various intensities of Gaussian noise and on Standard & Poor's 500 historical data. Overall, the finding that MILR outperforms SLR is validated on both real and simulated data. Finally, some possible future directions of research are discussed.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Dave, Kedar Himanshu. „Inferential model predictive control using statistical tools“. College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2585.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Chemical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Fu, Wenjiang J. „A statistical shrinkage model and its applications“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ35158.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Stark, J. Alex. „Statistical model selection techniques for data analysis“. Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390190.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Su, Z. „Statistical shape modelling : automatic shape model building“. Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1213097/.

Der volle Inhalt der Quelle
Annotation:
Statistical Shape Models (SSMs) have wide applications in image segmentation, surface registration and morphometry. This thesis deals with an important issue in SSMs, which is establishing correspondence between a set of shape surfaces in either 2D or 3D. Current methods involve either manual annotation of the data (the current ‘gold standard’); establishing correspondences using segmentation or registration algorithms; or using an information-theoretic criterion, Minimum Description Length (MDL), as an objective function that measures the utility of a model (the state of the art). This thesis presents, in principle, another framework for establishing correspondences completely automatically, by treating it as a learning process. Shannon theory is used extensively to develop an objective function, which measures the performance of a model along each eigenvector direction, and a proper weighting is automatically calculated for each energy component. Correspondence finding can then be treated as optimizing the objective function. An efficient optimization method is also incorporated by deriving the gradient of the cost function. Experimental results on various data are presented in both 2D and 3D. Finally, a quantitative evaluation between the proposed algorithm and MDL shows that the proposed model has better Generalization Ability and Specificity, and similar Compactness. It also shows good potential to solve the so-called “Pile Up” problem that exists in MDL. In terms of application, I used the proposed algorithm to help build a facial contour classifier. First, correspondence points across facial contours are found automatically, and classifiers are trained using the correspondence points found by MDL, by the proposed method and by a direct human observer. These classification schemes are then used to perform gender prediction on facial contours. 
The final conclusion of the experiments is that the classification scheme built from correspondence points found by MEM gives a relatively more accurate gender prediction. Although we have explored the potential of our proposed method to some extent, this is not the end of the research on this topic. Future work is also clearly stated, including more validation on various 3D datasets; discrimination analysis between normal and abnormal subjects as a direct application of the proposed algorithm; extension to model building using appearance information; etc.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Marion, Glenn. „The statistical mechanics of Bayesian model selection“. Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15264.

Der volle Inhalt der Quelle
Annotation:
In this thesis we examine the question of model selection in systems which learn input-output mappings from a data set of examples. The models we consider are inspired by feed-forward architectures used within the artificial neural networks community. The approach taken here is to elucidate the properties of various model selection criteria by calculation of relevant quantities derived in a Bayesian framework. These calculations make the assumption that examples are generated from some underlying rule or teacher by randomly sampling the input space, and are performed using techniques borrowed from statistical mechanics. Such an approach allows for the comparison of different approaches on the basis of the resultant ability of the system to generalize to novel examples. Broadly stated, the model selection problem is the following. Given only a limited set of examples, which model, or student, should one choose from a set of candidates in order to achieve the highest level of generalization? We consider four model selection criteria: a penalty-based method utilising a quantity derived from Bayesian statistics termed the evidence, and two methods based on estimates of the generalization performance, namely the test error and the cross-validation error. The fourth method, less widely used, is based on the noise sensitivity of the models. In a simple scenario we demonstrate that model selection based on the evidence is susceptible to misspecification of the student. Our analysis is conducted in the thermodynamic limit where the system size is taken to be arbitrarily large. In particular we examine the evidence procedure assignments of the hyperparameters which control the learning algorithm. We find that, where the student is not sufficiently powerful to fully model the teacher, this procedure, despite being sub-optimal, is remarkably robust towards such misspecifications. 
In a scenario in which the student is more than able to represent the teacher we find the evidence procedure is optimal.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Jegourel, Cyrille. „Rare event simulation for statistical model checking“. Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.

Der volle Inhalt der Quelle
Annotation:
In this thesis, we consider two problems that statistical model checking must cope with. The first problem concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second problem concerns rare properties, which are difficult to observe and therefore to quantify. On the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows recourse to the stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present an original importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different: the first tackles the problem through the model and the second through the properties.
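Importance splitting, as motivated in this annotation, can be sketched on a toy rare event: the probability that a down-biased random walk started at 1 reaches a high level before returning to 0 (the function names, levels and parameters below are invented for the sketch, not taken from the thesis):

```python
import random

def splitting_estimate(n_particles, top_level, p_up, seed=1):
    """Fixed-level importance splitting: the rare hitting probability is
    estimated as a product of conditional level-crossing probabilities."""
    rng = random.Random(seed)

    def run_stage(start):
        # Simulate from `start` until hitting start+1 (success) or 0 (failure).
        x = start
        while 0 < x <= start:
            x += 1 if rng.random() < p_up else -1
        return x == start + 1

    estimate = 1.0
    for level in range(1, top_level):
        successes = sum(run_stage(level) for _ in range(n_particles))
        if successes == 0:
            return 0.0
        estimate *= successes / n_particles
    return estimate

# Analytic gambler's-ruin value for comparison: (1 - r) / (1 - r**N), r = q/p.
p_up, N = 0.4, 8
r = (1 - p_up) / p_up
exact = (1 - r) / (1 - r ** N)
est = splitting_estimate(5000, N, p_up)
```

Each stage multiplies in a conditional probability that is far easier to estimate than the rare event itself, which is the variance-reduction idea behind importance splitting.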
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Lymp, James Francis. „A statistical model for fluorescence image cytometry /“. Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9539.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Ettlich, Daniel W. „Therminator : configuring the underlying statistical mechanics model“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FEttlich.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Electrical Engineering and M.S. in Computer Science)--Naval Postgraduate School, December 2003.
Thesis advisor(s): John C. McEachen, Chris S. Eagle. Includes bibliographical references (p. 71-72). Also available online.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Passarelli, Luigi <1981&gt. „A new statistical model for eruption forecasting“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2824/1/tesi_dottorato_geofisica_Luigi_Passarelli.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Passarelli, Luigi <1981&gt. „A new statistical model for eruption forecasting“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2824/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Cheng, Dunlei Stamey James D. „Topics in Bayesian sample size determination and Bayesian model selection“. Waco, Tex. : Baylor University, 2007. http://hdl.handle.net/2104/5039.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Seid, Hamid Jemila. „New residuals in multivariate bilinear models : testing hypotheses, diagnosing models and validating model assumptions /“. Uppsala : Dept. of Biometry and Engineering, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200583.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Zang, Yong, und 臧勇. „Robust tests under genetic model uncertainty in case-control association studies“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46419123.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Bartošík, Tomáš. „Metody simulace dodávky výkonu z větrných elektráren“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217592.

Der volle Inhalt der Quelle
Annotation:
The theme of this Master's thesis is the study of wind energy power supply, comparing the character of wind power supply in the Czech Republic with power supply abroad. The thesis begins with a short introduction to historical wind applications. It continues with the theory of wind engines, their construction and facilities. The next part describes wind energy characteristics and physics: the influence of wind speed on the power supplied by a wind turbine, and the physical limits of wind engine efficiency. Meteorological forecasting possibilities are then mentioned. The following chapter classifies wind power plants by geographical location and characterizes them. It presents and explains individual cases of wind energy business growth in the Czech Republic and other countries, and mentions many locations in the Czech Republic suitable for wind parks. Chapter 5 describes data analysis methods, and analysis results for daily and yearly period graphs are shown. An unsophisticated forecast model is sketched out and created in the following chapter. Here, regression analysis methods are described, such as the autoregressive moving average (ARMA) model, which can bring satisfactory results; another example is the Markov switching autoregressive (MSAR) model. The next step beyond statistical forecast models is sophisticated large forecasting systems. Those systems require meteorological forecast data and historical wind power data, which are analysed by statistical models. They have been developed recently and are commonly used nowadays.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Gumedze, Freedom Nkhululeko. „A variance shift model for outlier detection and estimation in linear and linear mixed models“. Doctoral thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/4381.

Der volle Inhalt der Quelle
Annotation:
Includes abstract.
Includes bibliographical references.
Outliers are data observations that fall outside the usual conditional ranges of the response data. They are common in experimental research data, for example, due to transcription errors or faulty experimental equipment. Often outliers are quickly identified and addressed, that is, corrected, removed from the data, or retained for subsequent analysis. However, in many cases they are completely anomalous and it is unclear how to treat them. Case deletion techniques are established methods for detecting outliers in linear fixed effects analysis. The extension of these methods to detecting outliers in linear mixed models has not been entirely successful in the literature. This thesis focuses on a variance shift outlier model as an approach to detecting and assessing outliers in both linear fixed effects and linear mixed effects analysis. A variance shift outlier model assumes a variance shift parameter, wi, for the ith observation, where wi is unknown and estimated from the data. Estimated values of wi indicate observations with possibly inflated variances relative to the remainder of the observations in the data set and hence outliers. When outliers lurk within anomalous elements in the data set, a variance shift outlier model offers an opportunity to include anomalies in the analysis, but down-weighted using the variance shift estimate wi. This down-weighting might be considered preferable to omitting data points (as in case-deletion methods). For very large values of wi a variance shift outlier model is approximately equivalent to the case deletion approach.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Gumedze, Freedom Nkhululeko. „A variance shift model for outlier detection and estimation in linear and linear mixed models“. Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/4380.

Der volle Inhalt der Quelle
Annotation:
Outliers are data observations that fall outside the usual conditional ranges of the response data. They are common in experimental research data, for example, due to transcription errors or faulty experimental equipment. Often outliers are quickly identified and addressed, that is, corrected, removed from the data, or retained for subsequent analysis. However, in many cases they are completely anomalous and it is unclear how to treat them. Case deletion techniques are established methods for detecting outliers in linear fixed effects analysis. The extension of these methods to detecting outliers in linear mixed models has not been entirely successful in the literature. This thesis focuses on a variance shift outlier model as an approach to detecting and assessing outliers in both linear fixed effects and linear mixed effects analysis. A variance shift outlier model assumes a variance shift parameter, wi, for the ith observation, where wi is unknown and estimated from the data. Estimated values of wi indicate observations with possibly inflated variances relative to the remainder of the observations in the data set and hence outliers. When outliers lurk within anomalous elements in the data set, a variance shift outlier model offers an opportunity to include anomalies in the analysis, but down-weighted using the variance shift estimate ŵi. This down-weighting might be considered preferable to omitting data points (as in case-deletion methods). For very large values of wi a variance shift outlier model is approximately equivalent to the case deletion approach. We commence with a detailed review of parameter estimation and inferential procedures for the linear mixed model. The review is necessary for the development of the variance shift outlier model as a method for detecting outliers in linear fixed and linear mixed models. This review is followed by a discussion of the status of current research into linear mixed model diagnostics. 
Different types of residuals in the linear mixed model are defined. A decomposition of the leverage matrix for the linear mixed model leads to interpretable leverage measures. A detailed review of a variance shift outlier model in linear fixed effects analysis is given. The purpose of this review is firstly to gain insight into the general case (the linear mixed model) and secondly to develop the model further in linear fixed effects analysis. A variance shift outlier model can be formulated as a linear mixed model, so that the calculations required to estimate the parameters of the model are those associated with fitting a linear mixed model, and hence the model can be fitted using standard software packages. Likelihood ratio and score test statistics are developed as objective measures for the variance shift estimates. The proposed test statistics initially assume balanced longitudinal data with a Gaussian distributed response variable. The dependence of the proposed test statistics on the second derivatives of the log-likelihood function is also examined. For the single-case outlier in linear fixed effects analysis, analytical expressions for the proposed test statistics are obtained. A resampling algorithm is proposed for assessing the significance of the proposed test statistics and for handling the problem of multiple testing. A variance shift outlier model is then adapted to detect a group of outliers in a fixed effects model. Properties and performance of the likelihood ratio and score test statistics are also investigated. A variance shift outlier model for detecting single-case outliers is also extended to linear mixed effects analysis under Gaussian assumptions for the random effects and the random errors. The variance parameters are estimated using the residual maximum likelihood method. Likelihood ratio and score tests are also constructed for this extended model. 
Two distinct computing algorithms, which constrain the variance parameter estimates to be positive, are given. Properties of the resulting variance parameter estimates from each computing algorithm are also investigated. A variance shift outlier model for detecting single-case outliers in linear mixed effects analysis is extended to detect groups of outliers, or subjects with outlying profiles whose random intercepts and random slopes are inconsistent with the corresponding model elements for the remaining subjects in the data set. The issue of influence on the fixed effects under a variance shift outlier model is also discussed.
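The down-weighting idea described in these annotations (treating an estimated variance shift as a reduced weight rather than deleting the case) can be illustrated with a toy weighted least-squares fit; the data, weights and function name below are invented for the sketch:

```python
def wls_line(x, y, v):
    """Weighted least-squares fit of y ≈ a + b*x with per-point weights v.
    A down-weighted point mimics a case with a large estimated variance shift."""
    sw = sum(v)
    mx = sum(vi * xi for vi, xi in zip(v, x)) / sw
    my = sum(vi * yi for vi, yi in zip(v, y)) / sw
    b = (sum(vi * (xi - mx) * (yi - my) for vi, xi, yi in zip(v, x, y))
         / sum(vi * (xi - mx) ** 2 for vi, xi in zip(v, x)))
    return my - b * mx, b

x = list(range(10))
y = [2.0 * xi for xi in x]   # true line: y = 2x
y[9] += 20.0                  # one gross outlier

_, b_ols = wls_line(x, y, [1.0] * 10)    # ordinary least squares: slope pulled off
weights = [1.0] * 9 + [0.01]              # outlier heavily down-weighted
_, b_wls = wls_line(x, y, weights)        # slope recovered near 2
```

Setting the outlier's weight near zero reproduces case deletion, matching the abstracts' remark that very large variance shifts are approximately equivalent to the case deletion approach.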
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Kim, Seong W. „Bayesian model selection using intrinsic priors for commonly used models in reliability and survival analysis /“. free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9841159.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Rogers, Brandon Lamar. „A Statistical Performance Model of Homogeneous Raidb Clusters“. Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd709.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Zhang, Shiju. „Statistical Inferences under a semiparametric finite mixture model“. See Full Text at OhioLINK ETD Center (Requires Adobe Acroba Reader for viewing), 2005. http://www.ohiolink.edu/etd/view.cgi?toledo1135779503.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--University of Toledo, 2005.
Typescript. "A dissertation [submitted] as partial fulfillment of the requirements of the Doctor of Philosophy degree in Mathematics." Bibliography: leaves 100-105.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Thesen, Michael. „Quantum statistical physics of a microscopic glass model“. [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968960014.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Seligman, T. H., Thomas Gorin, Frank-Michael Dittes, Markus Müller und Ingrid Rotter. „Correlations between resonances in a statistical scattering model“. Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-31261.

Full text of the source
Annotation:
The distortion of regular motion in a quantum system by its coupling to the continuum of decay channels is investigated. The regular motion is described by means of a Poissonian ensemble. We focus on the case of only a few channels, K < 10. The coupling to the continuum induces two main effects that distinguish the distorted system from a chaotic system (described by a Gaussian ensemble): (1) for large coupling, the width distribution becomes broader than the corresponding χ²_K distribution of the GOE case; (2) the coupling to the continuum induces correlations not only between the positions of the resonances but also between positions and widths, and these correlations persist even in the strong-coupling limit. To explain these results, an asymptotic expression for the width distribution is derived for the one-channel case. It relates the width of a trapped resonance state to the distance between its two neighbouring levels.
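The GOE reference case this annotation compares against can be sketched numerically (an illustrative baseline, not code from the paper): if each of K partial-width amplitudes is an independent Gaussian, the total resonance width is chi-squared with K degrees of freedom (Porter-Thomas for K = 1); K = 4 and the sample size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4          # number of open decay channels (the paper treats K < 10)
n = 200_000    # number of sampled resonance widths

# GOE baseline: width = sum of K squared Gaussian partial-width amplitudes,
# i.e. a chi-squared variate with K degrees of freedom.
widths = (rng.normal(size=(n, K)) ** 2).sum(axis=1)

# chi^2_K has mean K and variance 2K; the paper's finding is that strong
# continuum coupling makes the true width distribution broader than this.
print(widths.mean(), widths.var())   # close to (4.0, 8.0)
```

The paper's distorted (Poissonian, strongly coupled) ensemble deviates from this baseline; the sketch only fixes the reference shape being deviated from.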
APA, Harvard, Vancouver, ISO and other citation styles
48

Ahame, Edmund. „Statistical model for risk diversification in renewable energy“. Thesis, Nelson Mandela Metropolitan University, 2013. http://hdl.handle.net/10948/d1008399.

Full text of the source
Annotation:
The growth of South Africa's industry and population creates a need for new sources of electric power, and hence for alternative power sources. Power output from some renewable energy sources is highly volatile: output from wind turbines or photovoltaic solar panels, for instance, fluctuates between zero and the maximum rated output. To optimise overall power output, a model was designed to determine the best trade-off between production from two or more renewable energy sources, with emphasis on wind and solar. Different measures of risk, such as the coefficient of variation (CV) and value at risk (VaR), were used to determine the best hybrid renewable energy system (HRES) configuration. Depending on their expected returns (demand) and risk averseness, investors can use the model to choose the configuration that suits their needs. In general, it was found that investing in a diversified HRES is better than investing in individual power sources.
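The diversification effect described in this annotation can be illustrated with the two risk measures it names, CV and VaR. The sketch below is entirely synthetic (the series, the anti-correlation between wind and solar, and the 50/50 mix are assumptions, not the thesis's data or model):

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 10_000
t = np.linspace(0, 2 * np.pi * hours / 24, hours)

# Toy power-output series as fractions of rated capacity; wind is made
# anti-correlated with the solar cycle, as it often is diurnally.
sun = np.clip(np.sin(t), 0, None)
solar = np.clip(sun + rng.normal(scale=0.10, size=hours), 0, 1)
wind = np.clip(0.6 - 0.5 * sun + rng.normal(scale=0.15, size=hours), 0, 1)

def cv(x):        # coefficient of variation: risk per unit of mean output
    return x.std() / x.mean()

def var_5pct(x):  # 5% value-at-risk: output level exceeded 95% of the time
    return np.quantile(x, 0.05)

mix = 0.5 * solar + 0.5 * wind
print(cv(mix) < cv(solar), cv(mix) < cv(wind))  # → True True
```

With anti-correlated sources the mixed portfolio's CV falls below either source alone, which is the qualitative conclusion the thesis reaches; the thesis itself searches over configurations rather than fixing a 50/50 split.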
APA, Harvard, Vancouver, ISO and other citation styles
49

Ellis, William Joseph. „Application of statistical mechanics to a model neuron /“. Title page, contents and abstract only, 1993. http://web4.library.adelaide.edu.au/theses/09PH/09phe479.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Blume, Christian [Verfasser]. „Statistical learning to model stratospheric variability / Christian Blume“. Berlin : Freie Universität Berlin, 2012. http://d-nb.info/1027815650/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
