
Dissertations / Theses on the topic 'Parametric and nonparametric tests'

Listed below are the top 50 dissertations / theses on the topic 'Parametric and nonparametric tests.'


1. Wang, Sejong. "Three nonparametric specification tests for parametric regression models: the kernel estimation approach." Connect to resource, 1994. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1261492759.

2. Chen, Andrew H. (Andrew Hwa-Fen). "Robustness of Parametric and Nonparametric Tests When Distances between Points Change on an Ordinal Measurement Scale." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278300/.

Abstract:
The purpose of this research was to evaluate the effect on parametric and nonparametric tests using ordinal data when the distances between points changed on the measurement scale. The research examined the performance of Type I and Type II error rates using selected parametric and nonparametric tests.
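The robustness question above can be made concrete with a small sketch (the ordinal data and re-spacing below are invented for illustration, not taken from the thesis): a rank-based statistic such as the Mann-Whitney U is unchanged when the distances between points on an ordinal scale change, while a mean-based comparison is not.

```python
# Illustrative sketch: rank statistics ignore the spacing of ordinal scale
# points; mean-based statistics do not. Data are made up.

def mann_whitney_u(x, y):
    """Count of pairs (xi, yj) with xi > yj, plus half for ties."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Two groups measured on a 1-5 ordinal scale.
a = [1, 2, 2, 3, 4]
b = [2, 3, 4, 4, 5]

# Re-space the scale: order is preserved, but distances between points change.
respace = {1: 1, 2: 2, 3: 5, 4: 9, 5: 20}
a2 = [respace[v] for v in a]
b2 = [respace[v] for v in b]

mean_diff = sum(b) / len(b) - sum(a) / len(a)
mean_diff2 = sum(b2) / len(b2) - sum(a2) / len(a2)

print(mann_whitney_u(a, b) == mann_whitney_u(a2, b2))  # True: ranks unchanged
print(mean_diff == mean_diff2)                          # False: means shift
```

The rank statistic depends only on the ordering, which is exactly why the spacing of the scale points can affect parametric but not nonparametric error rates.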

3. Shadat, Wasel Bin. "Specification testing of GARCH regression models." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/specification-testing-of-garch-regression-models(56c218db-9b91-4d8c-bf26-8377ab185c71).html.

Abstract:
This thesis analyses, derives and evaluates specification tests of Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) regression models, both univariate and multivariate. Of particular interest, in the first half of the thesis, is the derivation of robust test procedures designed to assess the Constant Conditional Correlation (CCC) assumption often employed in multivariate GARCH (MGARCH) models. New asymptotically valid conditional moment tests are proposed which are simple to construct, easily implementable following full or partial Quasi Maximum Likelihood (QML) estimation, and robust to non-normality. In doing so, a non-normality-robust version of Tse's (2000) LM test is provided. In addition, new and easily programmable expressions for the expected Hessian matrix associated with the QMLE are obtained. The finite-sample performances of these tests are investigated in an extensive Monte Carlo study, programmed in GAUSS. In the second half of the thesis, attention is devoted to nonparametric testing of GARCH regression models. First, simultaneous consistent nonparametric tests of the conditional mean and conditional variance structure of univariate GARCH models are considered. The approach is developed from the Integrated Generalized Spectral (IGS) and Projected Integrated Conditional Moment (PICM) procedures proposed recently by Escanciano (2008 and 2009, respectively) for time series models. Extending Escanciano (2008), a new and simple wild bootstrap procedure is proposed to implement these tests. A Monte Carlo study compares the performance of these nonparametric tests and four parametric tests of nonlinearity and/or asymmetry under a wide range of alternatives. Although the proposed bootstrap scheme does not strictly satisfy the asymptotic requirements, the simulation results demonstrate its ability to control the size extremely well, and the power comparison therefore seems justified. Furthermore, this suggests there may exist weaker conditions under which the tests are implementable. The simulation exercise also presents new evidence of the effect of conditional mean misspecification on various parametric tests of conditional variance. The testing procedures are also illustrated with the help of the S&P 500 data. Finally, the PICM and IGS approaches are extended to the MGARCH case. The procedure is illustrated with a bivariate CCC-GARCH model, but can be generalized to other MGARCH specifications. A simulation exercise shows that these tests have satisfactory size and are robust to non-normality. The marginal mean and variance tests have excellent power; however, the covariance marginal tests lack power for some alternatives.

4. Georgii Hellberg, Kajsa-Lotta, and Andreas Estmark. "Fisher's Randomization Test versus Neyman's Average Treatment Test." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385069.

Abstract:
The following essay describes and compares Fisher's randomization test and Neyman's average treatment test, with the intention of producing an easily understood blueprint for the practical execution of the tests and the conditions surrounding them. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay is structured so that the tests are first presented and evaluated; their respective advantages and limitations are then weighed against each other before the tests are applied to a data set as a practical example. Lastly, the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after treating one group with nicotine patches and another with fake nicotine patches, shows a decrease in cigarette consumption under both tests. The tests differ, however, in that the result of the Neyman test can be made valid for the population of interest. Fisher's test, on the other hand, only identifies the effect within the sample, so it cannot draw conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect: one could first use the Fisher test to check whether any effect at all exists in the experiment, and then use the Neyman test to complement the Fisher test's findings, for example by estimating an average treatment effect.
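The randomization logic summarized above can be sketched in a few lines; the cigarette counts and group sizes below are invented for illustration and are not the essay's data.

```python
# Hedged sketch of a Fisher randomization test for the sharp null of no
# treatment effect. All numbers are made up.
import itertools

treated = [18, 15, 12, 14]   # cigarettes/day after nicotine patches (toy data)
control = [20, 21, 17, 19]   # after fake patches (toy data)

observed = sum(treated) / len(treated) - sum(control) / len(control)

pooled = treated + control
n_t = len(treated)

# Enumerate every way the 8 subjects could have been assigned to treatment.
diffs = []
for idx in itertools.combinations(range(len(pooled)), n_t):
    t = [pooled[i] for i in idx]
    c = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diffs.append(sum(t) / len(t) - sum(c) / len(c))

# Two-sided p-value: share of assignments at least as extreme as observed.
p_value = sum(abs(d) >= abs(observed) for d in diffs) / len(diffs)
print(observed, p_value)
```

Because the reference distribution is generated by the randomization itself, the inference applies to the experimental sample, which is exactly the limitation on external validity the essay contrasts with Neyman's approach.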

5. Chroboček, Michal. "Případové studie pro statistickou analýzu dat." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217911.

Abstract:
This thesis deals with questions related to creating case studies for statistical data analysis using applied computer technology. The main aim is to show the solution of statistical case studies in the field of electrical engineering. Each case study includes the task, an exemplary solution and a conclusion. Emphasis is placed on the clarity of the theory explained and on understanding and interpreting the results. The thesis can be used for practical education in applied statistical methods and is supplemented with commented outputs from Minitab; a trial version of Minitab was used to solve the case studies.

6. Lopez, Gabriel E. "Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A Comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4131.

Abstract:
The purpose of this investigation was to compare the efficacy of three methods for detecting differential item functioning (DIF). The performance of the crossing simultaneous item bias test (CSIBTEST), the item response theory likelihood ratio test (IRT-LR), and logistic regression (LOGREG) was examined across a range of experimental conditions including different test lengths, sample sizes, DIF and differential test functioning (DTF) magnitudes, and mean differences in the underlying trait distributions of comparison groups, herein referred to as the reference and focal groups. In addition, each procedure was implemented using both an all-other anchor approach, in which the IRT-LR baseline model, CSIBTEST matching subtest, and LOGREG trait estimate were based on all test items except for the one under study, and a constant anchor approach, in which the baseline model, matching subtest, and trait estimate were based on a predefined subset of DIF-free items. Response data for the reference and focal groups were generated using known item parameters based on the three-parameter logistic item response theory model (3-PLM). Various types of DIF were simulated by shifting the generating item parameters of select items to achieve desired DIF and DTF magnitudes based on the area between the groups' item response functions. Power, Type I error, and Type III error rates were computed for each experimental condition based on 100 replications, and effects were analyzed via ANOVA. Results indicated that the procedures varied in efficacy, with LOGREG implemented using an all-other approach providing the best balance of power and Type I error rate. However, none of the procedures was effective at identifying the type of DIF that was simulated.

7. Cícha, Martin. "Extrakce informací o pravděpodobnosti a riziku výnosů z cen opcí." Doctoral thesis, Vysoká škola ekonomická v Praze, 2004. http://www.nusl.cz/ntk/nusl-77098.

Abstract:
The issue of forecasting the future price of risky financial assets has attracted academia and business practice since the inception of the stock exchange. In light of the recently ended financial crisis, the worst since the Great Depression, it is clear that research in this area is far from finished; on the contrary, new challenges have been raised. The main goal of the thesis is to demonstrate the significant information potential hidden in option market prices. These prices contain information on the probability distribution of the underlying asset returns and the risk connected with these returns. Further objectives of the thesis are the forecast of the underlying asset price distribution using parametric and nonparametric estimates, the improvement of this forecast using the utility function of a representative investor, the description of the current market sentiment, and the determination of the risk premium, especially on the Czech market. The thesis deals with the forecast of the underlying asset price probability distribution implied by current option market prices using parametric and nonparametric estimates. The resulting distribution is described by its moment characteristics, which represent a valuable tool for analyzing the current market sentiment. According to theory, the probability distribution of the underlying asset price implied by option prices is risk neutral, i.e. it applies only to risk-neutral investors. The theory further implies that the real-world distribution can be derived from the risk-neutral distribution using the utility function of a representative investor, and including such a utility function improves the forecast of the underlying asset price distribution. Three different utility functions from traditional risk theory are used in the thesis, ranging from the simple power function to the general hyperbolic absolute risk aversion (HARA) function. Further, the Friedman-Savage utility function is used; this function accommodates both risk-averse and risk-loving investors. The thesis also answers the question: are current asset prices at such a high level that purchasing the asset amounts to a gamble? The risk premium associated with investing in the risky asset is derived in the thesis; it can be understood as the premium demanded by investors for investing in a risky asset rather than a riskless one. All the theoretical methods introduced in the thesis are demonstrated on real data from two different markets: a developing market represented by shares of CEZ and a developed market represented by S&P 500 futures. The demonstrations cover both a single point in time and the available history of the data, over which forecasts of the underlying asset price distribution and the related risk premium are constructed. The goals and objectives of the thesis have been achieved. The contributions of the thesis are the development of parametric and nonparametric methodology for estimating the underlying asset price probability distribution implied by option market prices so that the nature of the particular market and instrument is captured; the construction of forecasts of the underlying asset price distribution and of the market sentiment over the available history of the data; the construction of the market risk premium over that history; and the establishment of the hypothesis that markets gamble before a crisis.

8. Dong, Lei. "Nonparametric tests for longitudinal data." Manhattan, Kan.: Kansas State University, 2009. http://hdl.handle.net/2097/2295.

9. Millen, Brian A. "Nonparametric tests for umbrella alternatives." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488205318508038.

10. Ho, Pak-kei. "Parametric and non-parametric inference for Geometric Process." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B31483859.

11. Ho, Pak-kei, and 何柏基. "Parametric and non-parametric inference for Geometric Process." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B31483859.

12. Villasante Tezanos, Alejandro G. "Composite Nonparametric Tests in High Dimension." UKnowledge, 2019. https://uknowledge.uky.edu/statistics_etds/42.

Abstract:
This dissertation focuses on the problem of making high-dimensional inference for two or more groups. High-dimensional means that both the sample size (n) and the dimension (p) tend to infinity, possibly at different rates. Classical approaches for group comparisons fail in the high-dimensional situation, in the sense that they have incorrect sizes and low power. Much has been done in recent years to overcome these problems, but these recent works make restrictive assumptions about the number of treatments to be compared and/or the distribution of the data. This research aims to (1) propose and investigate refined small-sample approaches for high-dimensional data in the multi-group setting, (2) propose and study a fully nonparametric approach, and (3) conduct an extensive simulation comparison of the proposed methods with some existing ones. When treatment effects can meaningfully be formulated in terms of means, a semiparametric approach under equal and unequal covariance assumptions is investigated. Composites of F-type statistics are used to construct two tests: a moderate-p version, in which the test statistic is centered by its asymptotic mean, and a large-p version, an asymptotic-expansion-based finite-sample correction for the mean of the test statistic. These tests do not make any distributional assumptions and are therefore nonparametric in a sense; the theory only requires mild assumptions to regulate the dependence. Simulation results show that, for moderately small samples, the large-p version yields a substantial gain in size with a small power tradeoff. In some situations mean-based inference is not appropriate, for example for data on an ordinal scale or with heavy tails. For these situations, a high-dimensional fully nonparametric test is proposed; in the two-sample situation, a composite of a Wilcoxon-Mann-Whitney type test is investigated. The assumptions needed are weaker than those of the semiparametric approach. Numerical comparisons with the moderate-p version of the semiparametric approach show that the nonparametric test has very similar size but achieves superior power, especially for skewed data with some amount of dependence between variables. Finally, we conduct an extensive simulation to compare our proposed methods with other nonparametric tests and rank-transformation methods. A wide spectrum of simulation settings is considered, including a variety of heavy-tailed and skewed data distributions, homoscedastic and heteroscedastic covariance structures, various amounts of dependence, and choices of tuning (smoothing window) parameter for the asymptotic variance estimators. The fully nonparametric and the rank-transformation methods behave similarly in terms of Type I and Type II errors; however, the two approaches fundamentally differ in their hypotheses. Although there are no formal mathematical proofs for the rank transformations, they have a tendency to provide immunity against the effects of outliers. From a theoretical standpoint, our nonparametric method essentially uses variable-by-variable ranking, which naturally arises from estimating the nonparametric effect of interest; as a result, our method is invariant under any monotone marginal transformation. For a more practical comparison, real data from an electroencephalogram (EEG) experiment are analyzed.

13. Kolossiatis, Michalis. "Modelling via normalisation for parametric and nonparametric inference." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/2769/.

Abstract:
Bayesian nonparametric modelling has recently attracted a lot of attention, mainly due to the advancement of various simulation techniques, especially Markov chain Monte Carlo (MCMC) methods. In this thesis I propose some Bayesian nonparametric models for grouped data, which make use of dependent random probability measures. These probability measures are constructed by normalising infinitely divisible probability measures and exhibit nice theoretical properties. Implementation of these models is also easy, using mainly MCMC methods; an additional step in these algorithms is proposed in order to improve mixing. The proposed models are applied to both simulated and real-life data, and the posterior inference for the parameters of interest is investigated, as well as the effect of the corresponding simulation algorithms. A new n-dimensional distribution on the unit simplex, containing many known distributions as special cases, is also proposed. The univariate version of this distribution is used as the underlying distribution for modelling binomial probabilities. Using simulated and real data, it is shown that this proposed model is particularly successful in modelling overdispersed count data.

14. Brion, Vladislav. "Nonparametric tests of hypotheses for umbrella alternatives." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27571.

Abstract:
A general method for nonparametric hypothesis testing for umbrella alternatives is proposed. Such alternatives arise in situations where the treatment effect changes in direction after reaching a peak. The approach consists of defining two sets of rankings, one corresponding to the observations and the other to the alternative. The test statistic measures the distance between the two sets. The limiting distributions are obtained under the null hypothesis when the location of the peak is known. The simulation study shows good power of the test. A method is proposed for estimating the location of the peak, if it is unknown.

15. Sydor, Kevin. "Comparison of parametric and nonparametric streamflow record extension techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0011/MQ32261.pdf.

16. Ng, Hon Keung Tony Balakrishnan N. "Contributions to parametric and nonparametric inference in life testing." *McMaster only, 2002.

17. Worden, Keith. "Parametric and nonparametric identification of nonlinearity in structural dynamics." Thesis, Heriot-Watt University, 1989. http://hdl.handle.net/10399/1033.

Abstract:
The work described in this thesis is concerned with procedures for the identification of nonlinearity in structural dynamics. It begins with a diagnostic method which uses the Hilbert transform for detecting nonlinearity and describes the necessary conditions for obtaining a valid Hilbert transform. The transform is shown to be incapable of producing a model with predictive power, so a method based on the identification of nonlinear restoring forces is adopted for extracting a nonlinear model. The method is critically examined; various caveats, modifications and improvements are obtained, and the method is demonstrated on time data obtained from computer simulations. It is shown that a parameter estimation approach to restoring force identification based on direct least-squares estimation theory is a fast and accurate procedure. In addition, this approach allows one to obtain the equations of motion for a multi-degree-of-freedom system even if the system is only excited at one point. The data processing methods for the restoring force identification, including integration and differentiation of sampled time data, are developed and discussed in some detail. A comparative study is made of several of the best-known least-squares estimation procedures, and the direct least-squares approach is applied to data from several experiments, where it is shown to correctly identify nonlinearity in both single- and multi-degree-of-freedom systems. Finally, using both simulated and experimental data, it is shown that the recursive least-squares algorithm, modified by the inclusion of a data forgetting factor, can be used to identify time-dependent structural parameters.
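The direct least-squares idea described in the abstract can be sketched as follows; the mass, Duffing-type restoring force and signals below are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of restoring-force identification by direct least squares:
# given displacement, velocity, acceleration and force samples and a known
# mass, regress the restoring force on a chosen basis. All values are made up.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1.5                      # samples, known mass
c, k, k3 = 2.0, 50.0, 900.0          # "true" damping, stiffness, cubic term

x = rng.uniform(-1, 1, n)            # displacement samples
v = rng.uniform(-2, 2, n)            # velocity samples
F = rng.uniform(-10, 10, n)          # applied force samples

# Equation of motion: m*a + c*v + k*x + k3*x**3 = F
a = (F - c * v - k * x - k3 * x**3) / m

# Restoring force g(x, v) = F - m*a; regress it on the basis [v, x, x**3].
g = F - m * a
A = np.column_stack([v, x, x**3])
coef, *_ = np.linalg.lstsq(A, g, rcond=None)
print(coef)                          # recovers [c, k, k3] up to float error
```

With noise-free synthetic signals the regression recovers the parameters exactly, which is why the thesis can use such fits to diagnose the form of the nonlinearity.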

18. Mays, James Edward. "Model robust regression: combining parametric, nonparametric, and semiparametric methods." Diss., Virginia Polytechnic Institute and State University, 1995. http://hdl.handle.net/10919/49937.

Abstract:
In obtaining a regression fit to a set of data, ordinary least squares regression depends directly on the parametric model formulated by the researcher. If this model is incorrect, a least squares analysis may be misleading. Alternatively, nonparametric regression (kernel or local polynomial regression, for example) has no dependence on an underlying parametric model, but instead depends entirely on the distances between regressor coordinates and the prediction point of interest. This procedure avoids the necessity of a reliable model, but in using no information from the researcher, may fit to irregular patterns in the data. The proper combination of these two regression procedures can overcome their respective problems. Considered is the situation where the researcher has an idea of which model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two methods of model robust regression are developed and compared in this context. These methods involve parametric fits to the data and nonparametric fits to either the data or residuals. The two fits are then combined in the most efficient proportions via a mixing parameter. Performance is based on bias and variance considerations.
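The combination described above can be sketched roughly as follows; the data-generating model, Gaussian-kernel smoother, bandwidth and mixing parameter are illustrative choices, not the dissertation's actual estimators.

```python
# Hedged sketch of the "model robust" idea: blend an OLS fit to a tentative
# linear model with a kernel smooth of its residuals via a mixing parameter.
import numpy as np

def kernel_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
# True curve departs from the researcher's straight-line model.
y = 1.0 + 2.0 * x + 0.5 * np.sin(6 * x) + rng.normal(0, 0.05, x.size)

# Parametric stage: the tentative linear model.
beta = np.polyfit(x, y, 1)
parametric = np.polyval(beta, x)

# Nonparametric stage: smooth the residuals the line leaves behind.
residual_fit = kernel_smooth(x, y - parametric, x, h=0.05)

lam = 0.8                                 # mixing parameter in [0, 1]
combined = parametric + lam * residual_fit

sse = lambda f: float(np.sum((y - f) ** 2))
print(sse(parametric), sse(combined))     # the blend tracks the curvature
```

With lam = 0 the fit is purely parametric and with lam = 1 it fully trusts the residual smooth; choosing lam by an efficiency criterion is the bias-variance tradeoff the abstract describes.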
Ph. D.

19. Bock, Mitchum T. "Methods of inference for nonparametric curves and surfaces." Thesis, University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301780.

20. Capanu, Marinela. "Tests of misspecification for parametric models." [Gainesville, Fla.]: University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0010943.

21. Walker, Stephen Graham. "Bayesian parametric and nonparametric methods with applications in medical statistics." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307519.

22. Kössler, Wolfgang. "Nonparametric Location Tests Against Restricted Alternatives." Aachen: Shaker, 2006. http://d-nb.info/1170529119/34.

23. Branscum, Adam Jacob. "Epidemiologic modeling and data analysis: Bayesian parametric, nonparametric, and semiparametric approaches." For electronic version search Digital dissertations database (restricted to UC campuses; access is free to UC campus dissertations), 2005. http://uclibs.org/PID/11984.

24. Lee, Hwang-Jaw. "Nonparametric and parametric analyses of food demand in the United States." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487685204967625.

25. Piri, Sepehr. "Parametric, Nonparametric and Semiparametric Approaches in Profile Monitoring of Poisson Data." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/5023.

Abstract:
Profile monitoring is a relatively new approach in quality control, best used when the process data follow a profile (or curve). The majority of previous studies in profile monitoring focused on parametric modeling of either linear or nonlinear profiles under the assumption of correct model specification. Our work considers cases where the parametric model for the family of profiles is unknown or, at least, uncertain. Consequently, we consider monitoring Poisson profiles via three methods: a nonparametric (NP) method using penalized splines, an NP method using wavelets, and a semiparametric (SP) procedure that combines both parametric and NP profile fits. Our simulation results show that the SP method is robust to the common problem of misspecification of the user's proposed parametric model. We also show that Haar wavelets are a better choice than penalized splines when a sudden or sharp jump occurs in the profile, and that penalized splines are better than wavelets when the profiles are smooth. The proposed techniques are applied to a real data set and compared with some state-of-the-art methods.

26. Su, Liangjun. "Nonparametric tests for conditional independence." Diss., Connect to a 24 p. preview or request complete full text in PDF format (access restricted to UC campuses), 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3130206.

27. Hatzinger, Reinhold, and Walter Katzenbeisser. "A Combination of Nonparametric Tests for Trend in Location." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/1298/1/document.pdf.

Abstract:
A combination of some well-known nonparametric tests for detecting trend in location is considered. Simulation results show that the power of this combination is remarkably increased. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
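One classical ingredient of such a combination can be sketched as follows: a Mann-Kendall style trend count, one of the well-known nonparametric trend statistics of the kind the report combines. The data are invented for illustration.

```python
# Sketch of a rank-based trend-in-location statistic (toy data).

def mann_kendall_s(y):
    """S = #(increasing pairs) - #(decreasing pairs) over time order."""
    s = 0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if y[j] > y[i]:
                s += 1
            elif y[j] < y[i]:
                s -= 1
    return s

print(mann_kendall_s([1, 2, 3, 5, 4]))   # 8: strong upward trend
print(mann_kendall_s([5, 4, 3, 2, 1]))   # -10: monotone decrease
```

A combined test would aggregate several such statistics (or their standardized versions) into one decision rule, which is where the power gain reported above comes from.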

28. Kang, Qing. "Nonparametric tests of median for a size-biased sample." Search for this dissertation online, 2005. http://wwwlib.umi.com/cr/ksu/main.

29. Liu, Shuangquan. "Nonparametric tests for change-point problems with random censorship." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0003/NQ34803.pdf.

30. Ruth, David M. "Applications of assignment algorithms to nonparametric tests for homogeneity." Monterey, California: Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/dissert/2009/Sep/09Sep%5FRuth%5FPhD.pdf.
Dissertation (Ph.D. in Operations Research), Naval Postgraduate School, September 2009. Dissertation supervisor: Robert Koyak. Description based on title screen as viewed on November 5, 2009. Author's subject terms: nonparametric test, distribution-free test, non-bipartite matching, bipartite matching, change point. Includes bibliographical references (p. 121-126). Also available in print.

31. Zhao, Jingxin. "Application of partial consistency for the semi-parametric models." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/441.

Abstract:
The semi-parametric model enjoys a relatively flexible structure and keeps some of the simplicity in the statistical analysis. Hence, there are abundance discussions on semi-parametric models in the literature. The concept of partial consistency was firstly brought up in Neyman and Scott (1948). It was said the in cases where infinite parameters are involved, consistent estimators are always attainable for those "structural" parameters. The "structural' parameters are finite and govern infinite samples. Since the nonparametric model can be regarded as a parametric model with infinite parameters, then the semi-parametric model can be easily transformed into a infinite-parametric model with some "structural" parameters. Therefore, based on this idea, we develop several new methods for the estimating and model checking problems in semi-parametric models. The implementation of applying partial consistency is through the method "local average". We consider the nonparametric part as piecewise constant so that infinite parameters are created. The "structural" parameters shall be the parametric part, the model residual variance and so on. Due to the partial consistency phenomena, classical statistic tools can then be applied to obtain consistent estimators for those "structural" parameters. Furthermore, we can take advantage of the rest of parameters to estimate the nonparametric part. In this thesis, we take the varying coefficient model as the example. The estimation of the functional coefficient is discussed and relative model checking methods are presented. The proposed new methods, no matter for the estimation or the test, have remarkably lessened the computation complexity. At the same time, the estimators and the tests get satisfactory asymptotic statistical properties. The simulations we conducted for the new methods also support the asymptotic results, giving a relatively efficient and accurate performance. 
What's more, the local average method is easy to understand and can be flexibly applied to other type of models. Further developments could be done on this potential method. In Chapter 2, we introduce a local average method to estimate the functional coefficients in the varying coefficient model. As a typical semi-parametric model, the varying coefficient model is widely applied in many areas. The varying coefficient model could be seen as a more flexible version of classical linear model, while it explains well when the regression coefficients do not stay constant. In addition, we extend this local average method to the semi-varying coefficient model, which consists of a linear part and a varying coefficient part. The procedures of the estimations are developed, and their statistical properties are investigated. Plenty of simulations and a real data application are conducted to study the performance of the proposed method. Chapter 3 is about the local average method in variance estimation. Variance estimation is a fundamental problem in statistical modeling and plays an important role in the inferences in model selection and estimation. In this chapter, we have discussed the problem in several nonparametric and semi-parametric models. The proposed method has the advantages of avoiding the estimation of the nonparametric function and reducing the computational cost, and can be easily extended to more complex settings. Asymptotic normality is established for the proposed local average estimators. Numerical simulations and a real data analysis are presented to illustrate the finite sample performance of the proposed method. Naturally, we move to the model checking problem in Chapter 4, still taking varying coefficient models as an example. One important and frequently asked question is whether an estimated coefficient is significant or really "varying". 
In the literature, the relevant hypothesis tests usually require fitting the whole model, including the nuisance coefficients. Consequently, the estimation procedure can be computationally intensive and time-consuming. We therefore propose several tests that avoid unnecessary function estimation. The proposed tests are easy to implement, and their asymptotic distributions under the null hypothesis are derived. Simulation studies are also presented to show the properties of the tests.
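The variance estimation idea of Chapter 3 can be illustrated with a minimal difference-based sketch (the function name and data below are illustrative, not from the thesis). Treating the mean function as locally constant, squared differences of responses at neighboring design points cancel the mean and estimate twice the error variance, so no nonparametric fit of the mean is ever needed; this Rice-type estimator is the simplest instance of the local-average principle.

```python
import random

def local_average_variance(y):
    """Estimate the residual variance of y_i = m(x_i) + e_i for sorted
    design points, assuming m is smooth.  Neighboring differences satisfy
    y[i+1] - y[i] ~ e[i+1] - e[i], so E[(y[i+1] - y[i])^2] ~ 2 * sigma^2."""
    diffs = [(y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1)]
    return sum(diffs) / (2 * len(diffs))

random.seed(0)
n = 2000
x = sorted(random.uniform(0, 1) for _ in range(n))
# Smooth mean function plus N(0, 0.5^2) noise, so the true sigma^2 is 0.25.
y = [xi ** 2 + random.gauss(0, 0.5) for xi in x]
print(round(local_average_variance(y), 3))  # typically close to 0.25
```

Note the estimator never touches the mean function x², which is exactly the computational saving the abstract describes.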
APA, Harvard, Vancouver, ISO, and other styles
32

Harati, Nejad Torbati Amir Hossein. "Nonparametric Bayesian Approaches for Acoustic Modeling." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/338396.

Full text
Abstract:
Electrical Engineering
Ph.D.
The goal of Bayesian analysis is to reduce the uncertainty about unobserved variables by combining prior knowledge with observations. A fundamental limitation of a parametric statistical model, including a Bayesian approach, is the inability of the model to learn new structures. The goal of the learning process is to estimate the correct values for the parameters. The accuracy of these parameters improves with more data but the model’s structure remains fixed. Therefore, new observations will not affect the overall complexity (e.g., the number of parameters in the model). Recently, nonparametric Bayesian methods have become a popular alternative to parametric Bayesian approaches because the model structure is learned simultaneously with the parameter distributions in a data-driven manner. The goal of this dissertation is to apply nonparametric Bayesian approaches to the acoustic modeling problem in continuous speech recognition. Three important problems are addressed: (1) statistical modeling of sub-word acoustic units; (2) semi-supervised training algorithms for nonparametric acoustic models; and (3) automatic discovery of sub-word acoustic units. We have developed a Doubly Hierarchical Dirichlet Process Hidden Markov Model (DHDPHMM) with a non-ergodic structure that can be applied to problems involving sequential modeling. DHDPHMM shares mixture components between states using two Hierarchical Dirichlet Processes (HDP). An inference algorithm for this model has been developed that enables DHDPHMM to outperform both its hidden Markov model (HMM) and HDP HMM (HDPHMM) counterparts. This inference algorithm is shown to also be computationally less expensive than a comparable algorithm for HDPHMM. In addition to sharing data, the proposed model can learn non-ergodic structures and non-emitting states, something that HDPHMM does not support. This extension to the model is used to model finite length sequences. 
We have also developed a generative model for semi-supervised training of DHDPHMMs. Semi-supervised learning is an important practical requirement for many machine learning applications including acoustic modeling in speech recognition. The relative improvement in error rates on classification and recognition tasks is shown to be 22% and 7% respectively. Semi-supervised training results are slightly better than supervised training (29.02% vs. 29.71%). Context modeling was also investigated and results show a modest improvement of 1.5% relative over the baseline system. We also introduce a nonparametric Bayesian transducer based on an ergodic HDPHMM/DHDPHMM that automatically segments and clusters the speech signal using an unsupervised approach. This transducer was used in several applications including speech segmentation, acoustic unit discovery, spoken term detection and automatic generation of a pronunciation lexicon. For the segmentation problem, an F-score of 76.62% was achieved which represents a 9% relative improvement over the baseline system. On the spoken term detection tasks, an average precision of 64.91% was achieved, which represents a 20% improvement over the baseline system. Lexicon generation experiments also show automatically discovered units (ADU) generalize to new datasets. In this dissertation, we have established the foundation for applications of non-parametric Bayesian modeling to problems such as speech recognition that involve sequential modeling. These models allow a new generation of machine learning systems that adapt their overall complexity in a data-driven manner and yet preserve meaningful modalities in the data. As a result, these models improve generalization and offer higher performance at lower complexity.
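The Dirichlet process prior underlying the HDP-based models above can be sketched through its stick-breaking construction, in which the mixture weights are obtained by repeatedly breaking off Beta-distributed fractions of a unit stick. This is a generic illustration of how such priors let complexity grow with the data, not the DHDPHMM inference algorithm itself; the function name and parameter values are illustrative.

```python
import random

def stick_breaking(alpha, k, rng):
    """First k mixture weights of a Dirichlet process prior:
    w_j = v_j * prod_{i<j} (1 - v_i), with v_j ~ Beta(1, alpha).
    Smaller alpha concentrates mass on fewer components, so the
    effective number of components is learned from data rather
    than fixed in advance."""
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

rng = random.Random(42)
w = stick_breaking(alpha=2.0, k=50, rng=rng)
print(round(sum(w), 6))  # the truncated weights sum to nearly 1
```

In an HDP the same atoms are shared across several such stick-breaking draws, which is what allows DHDPHMM states to share mixture components.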
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
33

Caron, Emmanuel. "Comportement des estimateurs des moindres carrés du modèle linéaire dans un contexte dépendant : Étude asymptotique, implémentation, exemples." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0036.

Full text
Abstract:
Dans cette thèse, nous nous intéressons au modèle de régression linéaire usuel dans le cas où les erreurs sont supposées strictement stationnaires. Nous utilisons un résultat de Hannan (1973) qui a prouvé un Théorème Limite Central pour l’estimateur des moindres carrés sous des conditions très générales sur le design et le processus des erreurs. Pour un design et un processus d’erreurs vérifiant les conditions d’Hannan, nous définissons un estimateur de la matrice de covariance asymptotique de l’estimateur des moindres carrés et nous prouvons sa consistance sous des conditions très générales. Ensuite nous montrons comment modifier les tests usuels sur le paramètre du modèle linéaire dans ce contexte dépendant. Nous proposons différentes approches pour estimer la matrice de covariance afin de corriger l’erreur de première espèce des tests. Le paquet R slm que nous avons développé contient l’ensemble de ces méthodes statistiques. Les procédures sont évaluées à travers différents ensembles de simulations et deux exemples particuliers de jeux de données sont étudiés. Enfin, dans le dernier chapitre, nous proposons une méthode non-paramétrique par pénalisation pour estimer la fonction de régression dans le cas où les erreurs sont gaussiennes et corrélées
In this thesis, we consider the usual linear regression model in the case where the error process is assumed strictly stationary. We use a result from Hannan (1973), who proved a Central Limit Theorem for the usual least squares estimator under general conditions on the design and on the error process. For any design and error process satisfying Hannan’s conditions, we define an estimator of the asymptotic covariance matrix of the least squares estimator and we prove its consistency under very mild conditions. Then we show how to modify the usual tests on the parameter of the linear model in this dependent context. We propose various methods to estimate the covariance matrix in order to correct the type I error rate of the tests. The R package slm that we have developed contains all of these statistical methods. The procedures are evaluated through different sets of simulations and two particular examples of datasets are studied. Finally, in the last chapter, we propose a non-parametric method by penalization to estimate the regression function in the case where the errors are Gaussian and correlated.
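The covariance correction described above can be sketched in the simple one-regressor case with a Newey-West (Bartlett kernel) variance for the OLS slope. This is a generic illustration of a kernel-type estimator for dependent errors, not the slm package's exact procedure; all names and the AR(1) error choice are illustrative.

```python
import random

def ols_slope(x, y):
    """OLS slope (with intercept) plus quantities reused below."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return b, mx, sxx

def hac_slope_variance(x, y, b, mx, sxx, lags):
    """Newey-West style variance of the slope: the scores
    u_i = (x_i - mx) * e_i are autocorrelated when the errors are
    dependent, so the sandwich sums Bartlett-weighted autocovariances
    of u up to `lags` instead of assuming independence."""
    n = len(x)
    my = sum(y) / n
    e = [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]
    u = [(xi - mx) * ei for xi, ei in zip(x, e)]
    s = sum(ui * ui for ui in u)
    for lag in range(1, lags + 1):
        w = 1.0 - lag / (lags + 1.0)  # Bartlett kernel weight
        s += 2.0 * w * sum(u[i] * u[i - lag] for i in range(lag, n))
    return s / sxx ** 2

random.seed(1)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
e, prev = [], 0.0
for _ in range(n):  # AR(1) errors: strictly stationary, dependent
    prev = 0.7 * prev + random.gauss(0, 1)
    e.append(prev)
y = [2.0 * xi + ei for xi, ei in zip(x, e)]
b, mx, sxx = ols_slope(x, y)
se = hac_slope_variance(x, y, b, mx, sxx, lags=8) ** 0.5
print(round(b, 3), round(se, 3))
```

A t-test of the slope built from `se` rather than the i.i.d. formula is what restores the correct type I error rate under dependence.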
APA, Harvard, Vancouver, ISO, and other styles
34

Human, S. W. "Univariate parametric and nonparametric statistical quality control techniques with estimated process parameters." Pretoria : [s.l.], 2009. http://upetd.up.ac.za/thesis/available/etd-10172009-093912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Burghart, Ryan A. "Do Economic Factors Help Forecast Political Turnover? Comparing Parametric and Nonparametric Approaches." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1619021610240619.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Royeen, Charlotte Brasic. "An exploration of parametric versus nonparametric statistics in occupational therapy clinical research." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/71181.

Full text
Abstract:
Data sets from research in clinical practice professions often do not meet assumptions necessary for appropriate use of parametric statistics (Lezak and Gray, 1984). When assumptions underlying the use of the parametric tests are violated or cannot be documented, the power of the parametric test may be invalidated and consequently, the significance levels inaccurate (Gibbons, 1976). Much research has investigated the relative merits of parametric versus nonparametric procedures using simulation studies, but little has been done using actual data sets from a particular discipline. This study compared the application of parametric and nonparametric statistics using a body of literature in clinical occupational therapy. The most common parametric procedures in occupational therapy research literature from 1980 - 1984 were identified using methodology adapted from Goodwin and Goodwin (1985). Five small sample size data sets from published occupational therapy research articles typifying the most commonly used univariate parametric procedures were obtained, and subjected to exploratory data analyses (Tukey, 1977) in order to evaluate whether or not assumptions underlying appropriate use of the respective parametric procedures had been met. Subsequently, the nonparametric analogue test was identified and computed. Results revealed that in three of the five cases (paired t-test, one factor ANOVA and Pearson Correlation Coefficient) assumptions underlying the use of the parametric test were not met. In one case (independent t-test) the assumptions were met with a minor qualification. In only one case (simple linear regression) were assumptions clearly met. It was also found that in each of the two cases where parametric assumptions were met, no significant differences in p values between the parametric and the nonparametric tests were found. 
And conversely, in each of the three cases where parametric assumptions were not met, significant differences between the parametric and nonparametric results were found. These findings indicate that if cases were considered as a whole, there was a one hundred percent agreement between whether or not parametric assumptions were violated and whether or not differences were discovered regarding parametric versus nonparametric results. Other findings regarding (a) non-normality, (b) outliers, (c) multiple violation of assumptions for a given procedure, and (d) research designs employed are discussed and implications identified. Suggestions for future research are put forth.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Wahlström, Helen. "Nonparametric tests for comparing two treatments by using ordinal data /." Örebro : Örebro universitetsbibliotek, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Gharaibeh, Mohammed Mahmoud. "Nonparametric lack-of-fit tests in presence of heteroscedastic variances." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18116.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Haiyan Wang
It is essential to test the adequacy of a specified regression model in order to have correct statistical inferences. In addition, ignoring the presence of heteroscedastic errors in regression models will lead to unreliable and misleading inferences. In this dissertation, we consider nonparametric lack-of-fit tests in the presence of heteroscedastic variances. First, we consider testing the constant regression null hypothesis based on a test statistic constructed using a k-nearest neighbor augmentation. Then a lack-of-fit test of the nonlinear regression null hypothesis is proposed. For both cases, the asymptotic distribution of the test statistic is derived under the null and local alternatives for the case of a fixed number of nearest neighbors. Numerical studies and real data analyses are presented to evaluate the performance of the proposed tests. Advantages of our tests compared to classical methods include: (1) the response variable can be discrete or continuous and can have variance that depends on the predictor, which gives our tests broad applicability to data from many practical fields; (2) using a fixed number of k-nearest neighbors avoids the slow convergence problem, a common drawback of nonparametric methods that often leads to low power for moderate sample sizes; (3) we obtained the parametric standardizing rate for our test statistics, which gives more power than smoothing-based nonparametric methods for intermediate sample sizes. The numerical simulation studies show that our tests are powerful and perform noticeably better than some well-known tests when the data were generated from high frequency alternatives. Based on the idea of the Least Squares Cross-Validation (LSCV) procedure of Hardle and Mammen (1993), we also propose a method to estimate the number of nearest neighbors for data augmentation that works with both continuous and discrete response variables.
APA, Harvard, Vancouver, ISO, and other styles
39

Herrera, Catalán Pedro, and Oscar Millones. "Estimating the Cost of Mining Pollution on Water Resources: Parametric and Nonparametric Resources." Economía, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/117289.

Full text
Abstract:
This study estimates the economic costs of mining pollution on water resources for the years 2008 and 2009 based on the conceptual framework of Environmental Efficiency. This framework identifies such costs as the mining companies’ trade-off between increasing production that is saleable at market prices (desirable output) and reducing the environmental pollution that emerges from the production process (undesirable output). These economic costs were calculated from parametric and nonparametric production possibility frontiers for 28 and 37 mining units in 2008 and 2009, respectively, which were under the purview of the National Campaign for Environmental Monitoring of Effluent and Water Resources, conducted by the Energy and Mining Investment Supervisory Agency (Osinergmin) in those years. The results show that the economic cost of mining pollution on water resources rose to US$ 814.7 million and US$ 448.8 million for 2008 and 2009, respectively. These economic costs were highly concentrated in a few mining units, within a few pollution parameters, and were also higher in mining units with average/low mineral production. Taking into consideration that at present the fine and penalty system in the mining sector is based on administrative criteria, this study proposes a System of Environmentally Efficient Sanctions based on economic criteria so as to establish a preventive mechanism for pollution. It is hoped that this mechanism will generate the necessary incentives for mining companies to address the negative externalities that emerge from their production process.
En este estudio se aproximan los costos económicos de la contaminación ambiental minera sobre los recursos hídricos para 2008 y 2009 en el marco conceptual de la Eficiencia Medioambiental, que interpreta dichos costos como el trade-off de los empresarios mineros entre incrementar su producción que es vendible a precios de mercado (output deseable) yreducir la contaminación ambiental que se desprende de su proceso productivo (output no deseable). Dichos costos económicos fueron calculados a partir de fronteras de posibilidades de producción paramétricas y no paramétricas para 28 y 37 unidades mineras en los años 2008 y 2009 respectivamente, las que estuvieron bajo el ámbito de la Campaña Nacional deMonitoreo Ambiental de Efluentes y Recursos Hídricos que realizó el Organismo Supervisor de Inversión Energía y Minería (Osinergmin) en dichos años. Los resultados indican que los costos económicos de la contaminación ambiental minera sobre los recursos hídricos ascendieron, en promedio, para los años 2008 y 2009, a US$ 814,7 millones,y US$ 448,8 millones, respectivamente. Dichos costos estuvieron altamente concentrados en pocas unidades productivas, así como en pocos parámetros de contaminación, y fueron mayores en unidades mineras con producción media/baja de minerales. Dado que en la actualidad el sistema de multas y sanciones en el sector minero se basa en criterios administrativos, el estudio propone un Sistema de Sanciones Ambientalmente Eficiente basado en criterios económicos
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Anquan. "An Empirical Comparison of Four Data Generating Procedures in Parametric and Nonparametric ANOVA." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/329.

Full text
Abstract:
The purpose of this dissertation was to empirically investigate the Type I error and power rates of four data transformations that produce a variety of non-normal distributions. Specifically, the transformations investigated were (a) the g-and-h, (b) the generalized lambda distribution (GLD), (c) the power method, and (d) the Burr families of distributions in the context of between-subjects and within-subjects analysis of variance (ANOVA). The traditional parametric F tests and their nonparametric counterparts, the Kruskal-Wallis (KW) and Friedman (FR) tests, were selected to be used in this investigation. The four data transformations produce non-normal distributions that have either valid or invalid probability density functions (PDFs). Specifically, the data generating procedures will produce distributions with valid PDFs if and only if the transformations are strictly increasing - otherwise the distributions are considered to be associated with invalid PDFs. As such, the primary objective of this study was to isolate and investigate the behaviors of the four data transformation procedures themselves while holding all other conditions constant (i.e., sample sizes, effect sizes, correlation levels, skew, kurtosis, random seed numbers, etc. all remain the same). The overall results of the Monte Carlo study generally suggest that when the distributions have valid probability density functions (PDFs) that the Type I error and power rates for the parametric (or nonparametric) tests were similar across all four data transformations. It is noted that there were some dissimilar results when the distributions were very skewed and near their associated boundary conditions for a valid PDF. These dissimilarities were most pronounced in the context of the KW and FR tests. 
In contrast, when the four transformations produced distributions with invalid PDFs, the Type I error and power rates were more frequently dissimilar for both the parametric F and nonparametric (KW, FR) tests. The dissimilarities were most pronounced when the distributions were skewed and heavy-tailed. For example, in the context of a parametric between subjects design, four groups of data were generated with (a) sample sizes of 10, (b) standardized effect size of 0.50 between groups, (c) skew of 2.5 and kurtosis of 60, (d) power method transformations generating distributions with invalid PDFs, and (e) g-and-h and GLD transformations both generating distributions with valid PDFs. The power results associated with the power method transformation showed that the F-test (KW test) was rejecting at a rate of .32 (.86). On the other hand, the power results associated with both the g-and-h and GLD transformations showed that the F-test (KW test) was rejecting at a rate of approximately .19 (.26). The primary recommendation of this study is that researchers conducting Monte Carlo studies in the context described herein should use data transformation procedures that produce valid PDFs. This recommendation is important to the extent that researchers using transformations that produce invalid PDFs increase the likelihood of limiting their study to the data generating procedure being used (i.e., Type I error and power results may be substantially disparate between different procedures). Further, it is also recommended that g-and-h, GLD, Burr, and fifth-order power method transformations be used if it is desired to generate distributions with extreme skew and/or heavy tails, whereas third-order polynomials should be avoided in this context.
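One of the families discussed above, Tukey's g-and-h, is simple enough to sketch directly: a standard normal Z is mapped through a transformation in which g controls skew and h controls tail weight, and the map is strictly increasing (hence yields a valid PDF) whenever h is nonnegative. The parameter values below are illustrative, not the study's simulation settings.

```python
import math
import random

def g_and_h(z, g, h):
    """Tukey's g-and-h transform of a standard normal z:
    T(z) = ((exp(g*z) - 1) / g) * exp(h * z^2 / 2), with the g = 0
    limit T(z) = z * exp(h * z^2 / 2).  For h >= 0 the map is strictly
    increasing in z, so the transformed variable has a valid PDF."""
    skew = z if g == 0 else (math.exp(g * z) - 1.0) / g
    return skew * math.exp(h * z * z / 2.0)

random.seed(7)
sample = [g_and_h(random.gauss(0, 1), g=0.5, h=0.1) for _ in range(10000)]
# Positive g produces right skew: the sample mean exceeds the median,
# and the median is near g_and_h(0) = 0 because the map is monotone.
mean = sum(sample) / len(sample)
median = sorted(sample)[len(sample) // 2]
print(mean > median)  # True
```

Monotonicity is the crux of the validity question the dissertation raises: once a transformation ceases to be strictly increasing, the "distribution" it generates no longer has a proper density.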
APA, Harvard, Vancouver, ISO, and other styles
41

Elkhafifi, Faiza F. "Nonparametric Predictive Inference for ordinal data and accuracy of diagnostic tests." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3914/.

Full text
Abstract:
This thesis considers Nonparametric Predictive Inference (NPI) for ordinal data and accuracy of diagnostic tests. We introduce NPI for ordinal data, which are categorical data with an ordering of the categories. Such data occur in many application areas, for example medical and social studies. The method uses a latent variable representation of the observations and categories on the real line. Lower and upper probabilities for events involving the next observation are presented, with specific attention to comparison of multiple groups of ordinal data. We introduce NPI for accuracy of diagnostic tests with ordinal outcomes, with the inferences based on data for a disease group and a non-disease group. We introduce empirical and NPI lower and upper Receiver Operating Characteristic (ROC) curves and the corresponding areas under the curves. We discuss the use of the Youden index related to the NPI lower and upper ROC curves in order to determine the optimal cut-off point for the test. Finally, we present NPI for assessment of accuracy of diagnostic tests involving three groups of real-valued data. This is achieved by developing NPI lower and upper ROC surfaces and the corresponding volumes under these surfaces, and we also consider the choice of cut-off points for classifications based on such diagnostic tests.
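The empirical area under the ROC curve mentioned above has a simple concrete form: it equals the Mann-Whitney estimate of the probability that a randomly chosen diseased score exceeds a randomly chosen non-diseased score, with ties counted as one half. A minimal sketch with made-up scores (the data are illustrative, not from the thesis):

```python
def empirical_auc(nondisease, disease):
    """Empirical AUC as the Mann-Whitney probability estimate:
    P(disease score > non-disease score) + 0.5 * P(tie), averaged
    over all (non-disease, disease) pairs."""
    pairs = len(nondisease) * len(disease)
    wins = sum(
        1.0 if d > h else 0.5 if d == h else 0.0
        for h in nondisease for d in disease
    )
    return wins / pairs

healthy = [1, 2, 2, 3, 4, 5]
diseased = [3, 4, 5, 5, 6, 7]
print(empirical_auc(healthy, diseased))  # 31/36 ≈ 0.861
```

NPI's lower and upper ROC curves bound such empirical quantities from below and above, reflecting the imprecision of inference about the next observation.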
APA, Harvard, Vancouver, ISO, and other styles
42

Al-Thubaiti, Samah Abdullah. "Proposed Nonparametric Tests for the Umbrella Alternative in a Mixed Design." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31775.

Full text
Abstract:
Several nonparametric tests are proposed for a mixed design consisting of a randomized complete block design (RCBD) and a completely randomized design (CRD) under the umbrella hypothesis with a known and an unknown peak. The two statistics are combined using two different methods. A simulation study was conducted to investigate the performance of the proposed mixed design tests in many different cases. For both the known and the unknown peak umbrella hypothesis, the estimated power of the first combination method is better than that of the second method in all situations. We use a squared distance as a weight when assessing the power of the proposed test statistics for the known peak umbrella hypothesis. The squared distance modification increases the test's power, in particular if the peak is indistinct with the first location parameter for four and five treatments, or if the location parameter on the left side of the umbrella hypothesis (upside) is greater than all the location parameters on the right side of the umbrella hypothesis (downside), such as (0.8, 1.0, 0.75, 0.2) or (0.75, 0.8, 0.6, 0.4, 0.2). The modification also improves the test's power for five treatments with peak at 3 when the underlying distribution is symmetric, as long as the peak of the umbrella hypothesis is distinct. In general, for the unknown peak umbrella hypothesis, the test's power differs only slightly between the modified and unmodified cases. However, some cases can be distinguished based on the type of underlying distribution: when the distribution is symmetric, the squared distance modification is much better than the unmodified test statistics in some cases with four and five treatments. 
For the case of three treatments, the estimated power of the proposed test statistics with the squared distance modification (3.3.15), (3.3.16) differs only slightly from that of the test statistics without modification (3.3.13), (3.3.14) for both symmetric and skewed underlying distributions.
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Yangyi. "Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/71285.

Full text
Abstract:
We provide a Frequentist-Bayesian hybrid test statistic in this dissertation for two testing problems. The first is to design a test for significant differences between non-parametric functions, and the second is to design a test allowing any departure of the predictors of a high-dimensional X from constant. The construction of the proposed test statistics for both problems is also given. For the first testing problem, the statistical difference among massive outcomes or signals is of interest in many diverse fields including neurophysiology, imaging, engineering, and other related fields. However, such data often come from nonlinear systems, with row/column patterns, non-normal distributions, and other hard-to-identify internal relationships, which lead to difficulties in testing the significance of differences between them under both unknown relationships and high dimensionality. In this dissertation, we propose an Adaptive Bayes Sum Test capable of testing the significance of differences between two nonlinear systems based on universal non-parametric mathematical decomposition/smoothing components. Our approach is developed by adapting the Bayes sum test statistic of Hart (2009). Any internal pattern is treated through Fourier transformation. Resampling techniques are applied to construct the empirical distribution of the test statistic to reduce the effect of non-normal distributions. A simulation study suggests our approach performs better than the alternative method, the Adaptive Neyman Test of Fan and Lin (1998). The usefulness of our approach is demonstrated with an application to the identification of electronic chips as well as an application to testing changes in precipitation patterns. For the second testing problem, numerous statistical methods have been developed for analyzing high-dimensional data. 
These methods mainly focus on variable selection and are limited for testing purposes with high-dimensional data, often requiring explicit derivative likelihood functions. In this dissertation, we propose a "Hybrid Omnibus Test" for high-dimensional data with much weaker requirements. Our Hybrid Omnibus Test is developed in a semi-parametric framework where a likelihood function is no longer necessary. It is a Frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single index model, whose link is a functional of the predictors through a generalized partially linear single index. We propose an efficient score based on an estimating equation to circumvent the mathematical difficulty of likelihood derivation and use it to construct our Hybrid Omnibus Test. We compare our approach with an empirical likelihood ratio test and with Bayesian inference based on the Bayes factor in a simulation study, in terms of false positive rate and true positive rate. Our simulation results suggest that our approach outperforms the alternatives in terms of false positive rate, true positive rate, and computation cost in both the high-dimensional and the low-dimensional case. The advantage of our approach is also demonstrated by published biological results with an application to a genetic pathway dataset for type II diabetes.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Cenci, Simone. "Parametric and nonparametric approaches to explain and predict nonlinear population dynamics in changing environments." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123224.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 195-211).
Many aspects of human societies, from the sustenance of national economies to the control of population health, depend on the dynamics of biological populations within a given environment. Therefore, understanding and predicting the effects of changing environments on the dynamics of biological populations evolving in a continuously changing world is, nowadays, one of the most important challenges in biology. In this thesis we have addressed this challenge using two different approaches. The first approach, called the structural approach, is deductive, i.e., the effects of changing environments on population dynamics are studied using parametric models under equilibrium assumptions. In this context, we first showed that, while the approach was originally introduced to investigate the structural stability of the classic Lotka-Volterra dynamics, it can be applied to a much larger class of nonlinear models and to stochastic systems.
Then, we used the approach to analyze empirical data to investigate how the structure and dynamics of species interactions regulate species coexistence under fast environmental changes. The generalizability of this approach, however, is limited because equilibrium dynamics are seldom observed in nature and exact equations for population dynamics are rarely known. Therefore, in the second part of the thesis we took an inductive approach. Specifically, we proposed a nonparametric framework to estimate the tolerance of nonequilibrium population dynamics to environmental variability. To apply the framework to empirical data we improved and developed nonparametric computational methods to infer biotic interactions and their uncertainty from nonlinear time series data. Using our approach we were able to recover important ecological insights without the explicit formulation of parametric models.
That is, we have shown that it is possible to build ecological theories inductively from observational data with minimal assumptions on the data-generating processes. Overall, we believe that the increasing amount of biological data available nowadays paves the way for moving theoretical population biology from being a deductive, assumption-driven science towards an inductive, data-driven science. In this context, this study is a step towards the foundations of nonparametric data-driven research to monitor and anticipate the response of populations to the increasing rate of environmental change.
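The Lotka-Volterra dynamics named in the abstract can be illustrated with a short Euler integration of the two-species competition model; structural stability asks how robust the feasible coexistence equilibrium reached below is to perturbations of the parameters. This is a generic textbook illustration, not the thesis's analysis; all parameter values are illustrative.

```python
def lv_competition(r, a, x0, dt=0.001, steps=50000):
    """Euler integration of two-species Lotka-Volterra competition:
    dx_i/dt = x_i * (r_i - sum_j a_ij * x_j).  Returns the state after
    `steps` steps; with weak interspecific competition the trajectory
    settles at the feasible (all-positive) coexistence equilibrium."""
    x = list(x0)
    for _ in range(steps):
        dx = [x[i] * (r[i] - sum(a[i][j] * x[j] for j in range(2)))
              for i in range(2)]
        x = [max(xi + dt * dxi, 0.0) for xi, dxi in zip(x, dx)]
    return x

# Off-diagonal (interspecific) competition weaker than the diagonal
# (intraspecific) terms, so coexistence is stable at x1 = x2 = 2/3,
# solving 1 = x + 0.5 * x.
r = [1.0, 1.0]
a = [[1.0, 0.5],
     [0.5, 1.0]]
x = lv_competition(r, a, x0=[0.1, 0.2])
print([round(xi, 3) for xi in x])  # both abundances approach 2/3
```

Structural stability then quantifies how large a change in r or a the system tolerates before the interior equilibrium loses feasibility, i.e., before one abundance is driven to zero.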
by Simone Cenci.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Civil and Environmental Engineering
APA, Harvard, Vancouver, ISO, and other styles
45

Bender, Mary. "Monte Carlo simulation with parametric and nonparametric analysis of covariance for nonequivalent control groups." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/82902.

Full text
Abstract:
There are many parametric statistical models that have been designed to measure change in nonequivalent control group studies, but because of assumption violations and potential artifacts, there is no one form of analysis that always appears to be appropriate. While the parametric analysis of covariance and parametric ANCOVAs with a covariate correction are among the more frequently used forms of analysis in nonequivalent control group research, comparative studies with nonparametric counterparts should be completed and their results compared with those of the more commonly used analyses. The current investigation studied and compared the application of four ANCOVA models: the parametric, the covariate-corrected parametric, the rank transform, and the covariate-corrected rank transform. Population parameters were established; sample parameter intervals determined by Monte Carlo simulation were examined; and a best ANCOVA model was systematically and theoretically determined in light of population assumption violations, reliability of the covariate correction, the width of the sample probability level intervals, true parent population parameters, and results of robust regression. Results of data exploration on the parent population revealed that, based on assumptions, the covariate-corrected ANCOVAs are preferred over both the parametric and rank analyses. A reliability coefficient of r = .83 also indicated that a covariate-corrected ANCOVA is effective in error reduction. Robust regression indicated that the outliers in the data set impacted the regression equation for both parametric models, making selection of either model questionable. The tightest probability level interval for the samples serves to delineate the model with the greatest convergence of probability levels, and, theoretically, the most stable model. Results of the study indicated that, because the covariate-corrected rank ANCOVA had by far the tightest interval, it is the preferred model. 
In addition, the probability level interval of the covariate-corrected rank model is the only model interval that contained the true population parameter. Results of the investigation clearly indicate that the covariate-corrected rank ANCOVA is the model of choice for this nonequivalent control group study. While its use has yet to be reported in the literature, the covariate-corrected rank analysis of covariance provides a viable alternative for researchers who must rely upon intact groups for the answers to their research questions.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
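The rank-transform ANCOVA compared in the abstract above can be illustrated with a minimal sketch: outcome and covariate are replaced by their pooled ranks, and the group effect is then tested with an ordinary ANCOVA F test. The function below is a generic illustration under our own naming, not the thesis's implementation, and it omits the reliability-based covariate correction the study also examines.

```python
import numpy as np
from scipy.stats import rankdata

def rank_ancova_f(y, x, group):
    """Rank-transform ANCOVA: replace outcome and covariate by their
    pooled ranks, then test the group effect with an ordinary F test."""
    ry, rx = rankdata(y), rankdata(x)
    g = np.asarray(group, dtype=float)
    X_full = np.column_stack([np.ones_like(ry), rx, g])  # intercept + covariate + group
    X_red = X_full[:, :2]                                # intercept + covariate only

    def rss(X):
        beta = np.linalg.lstsq(X, ry, rcond=None)[0]
        return float(np.sum((ry - X @ beta) ** 2))

    df1 = 1                               # one group parameter tested
    df2 = len(ry) - X_full.shape[1]
    return ((rss(X_red) - rss(X_full)) / df1) / (rss(X_full) / df2)
```

Comparing this F statistic against its parametric counterpart (the same model fitted to the raw scores) is the kind of head-to-head comparison the study carries out by Monte Carlo simulation.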
46

Serasinghe, Shyamalee Kumary. "A simulation comparison of parametric and nonparametric estimators of quantiles from right censored data." Kansas State University, 2010. http://hdl.handle.net/2097/4318.

Full text
Abstract:
Master of Science
Department of Statistics
Paul I. Nelson
Quantiles are useful in describing distributions of component lifetimes. Data consisting of the lifetimes of sample units, used to estimate quantiles, are often censored. Right censoring, the setting investigated here, occurs, for example, when some test units are still functioning when the experiment is terminated. This study investigated and compared the performance of parametric and nonparametric estimators of quantiles from right-censored data generated from Weibull and Lognormal distributions, models commonly used in analyzing lifetime data. Parametric quantile estimators based on these assumed models were compared via simulation to each other and to quantile estimators obtained from the nonparametric Kaplan-Meier estimator of the survival function. Various combinations of quantiles, censoring proportions, sample sizes, and distributions were considered. Our simulations show that the larger the sample size and the lower the censoring rate, the better the performance of the estimates of the 5th percentile of Weibull data. The Lognormal data are very sensitive to the censoring rate, and we observed that for higher censoring rates the incorrect parametric estimates perform best. If the underlying distribution of the data is unknown, it is risky to use parametric estimates of quantiles close to one. A limitation in using the nonparametric estimator of large quantiles is its instability when the censoring rate is high and the largest observations are censored. Key Words: Quantiles, Right Censoring, Kaplan-Meier estimator
APA, Harvard, Vancouver, ISO, and other styles
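The nonparametric quantile estimator evaluated in this study inverts the Kaplan-Meier survival estimate: the p-th quantile is the first observed time at which the estimated survival probability falls to 1 - p. A minimal sketch, assuming distinct event times and using our own function name, is:

```python
import numpy as np

def km_quantile(times, events, p):
    """Estimate the p-th quantile from right-censored data by inverting
    the Kaplan-Meier survival estimate.
    times : observed times; events : 1 = failure, 0 = right-censored."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events)[order]
    n = len(t)
    surv = 1.0
    for i in range(n):
        if d[i]:
            surv *= 1.0 - 1.0 / (n - i)  # n - i units still at risk
        if surv <= 1.0 - p:              # first time S(t) falls to 1 - p
            return t[i]
    return None                          # quantile not reached: too much censoring
```

The `None` branch makes the limitation noted in the abstract concrete: when the censoring rate is high and the largest observations are censored, the survival curve never drops low enough, and large quantiles are simply not estimable nonparametrically.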
47

Sarker, Md Shah Jalal. "Tests for Weibull based proportional hazards frailty models." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/1046/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Olet, Susan. "Proposed Nonparametric Tests for the Simple Tree Alternative in a Mixed Design." Diss., North Dakota State University, 2014. http://hdl.handle.net/10365/24783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pretorius, Wesley Byron. "Non-parametric regression modelling of in situ fCO2 in the Southern Ocean." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71630.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: The Southern Ocean is a complex system, where the relationship between CO2 concentrations and its drivers varies intra- and inter-annually. Due to the lack of readily available in situ data in the Southern Ocean, a model approach was required which could predict the CO2 concentration proxy variable, fCO2. This must be done using predictor variables available via remote measurements to ensure the usefulness of the model in the future. These predictor variables were sea surface temperature, log transformed chlorophyll-a concentration, mixed layer depth and at a later stage altimetry. Initial exploratory analysis indicated that a non-parametric approach to the model should be taken. A parametric multiple linear regression model was developed to use as a comparison to previous studies in the North Atlantic Ocean as well as to compare with the results of the non-parametric approach. A non-parametric kernel regression model was then used to predict fCO2 and finally a combination of the parametric and non-parametric regression models was developed, referred to as the mixed regression model. The results indicated, as expected from exploratory analyses, that the non-parametric approach produced more accurate estimates based on an independent test data set. These more accurate estimates, however, were coupled with zero estimates, caused by the curse of dimensionality. It was also found that the inclusion of salinity (not available remotely) improved the model and therefore altimetry was chosen to attempt to capture this effect in the model. The mixed model displayed reduced errors as well as removing the zero estimates and hence reducing the variance of the error rates. The results indicated that the mixed model is the best approach to use to predict fCO2 in the Southern Ocean and that altimetry's inclusion did improve the prediction accuracy.
AFRIKAANSE OPSOMMING: The Southern Ocean is a complex system in which the relationship between CO2 concentrations and their drivers varies intra- and inter-annually. A shortage of readily obtainable in situ data from the Southern Ocean meant that a modelling approach was needed to predict the CO2 concentration proxy variable, fCO2. This must be done using predictor variables available through remote measurements to ensure the usefulness of the model in the future. These predictor variables included sea surface temperature, log-transformed chlorophyll-a concentration, mixed layer depth and, at a later stage, altimetry. An initial exploratory analysis indicated that a non-parametric approach to the data should be taken. A parametric multiple linear regression model was developed for comparison with previous studies in the North Atlantic Ocean as well as with the results of the non-parametric approach. A non-parametric kernel regression model was then employed to predict fCO2, and finally a combination of the parametric and non-parametric regression models, referred to as the mixed regression model, was developed for the same purpose. The results showed, as expected from the exploratory analysis, that the non-parametric approach yields more accurate estimates, based on an independent test data set. These more accurate estimates were, however, accompanied by zero estimates caused by the curse of dimensionality. It was also found that the inclusion of salinity (not available via satellite) improves the model, and altimetry was therefore chosen in an attempt to capture this effect in the model. The mixed model showed smaller errors, removed the zero estimates and thereby reduced the variance of the error rates. The results thus showed that the mixed model is the best approach for estimating fCO2 in the Southern Ocean and that the inclusion of altimetry improves the accuracy of this estimate.
APA, Harvard, Vancouver, ISO, and other styles
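A minimal sketch of the kind of non-parametric predictor described above is Nadaraya-Watson kernel regression with a Gaussian product kernel. It also makes the abstract's "zero estimates" concrete: when no training point lies within the kernel's effective support, every weight underflows to zero. Names and the bandwidth choice are illustrative, not the thesis's actual model.

```python
import numpy as np

def nw_kernel_regression(X_train, y_train, X_new, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian product kernel."""
    X_train = np.atleast_2d(np.asarray(X_train, dtype=float))
    X_new = np.atleast_2d(np.asarray(X_new, dtype=float))
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in X_new:
        d2 = np.sum((X_train - x) ** 2, axis=1) / bandwidth ** 2
        w = np.exp(-0.5 * d2)
        s = w.sum()
        # A "zero estimate" occurs when every kernel weight underflows to 0,
        # i.e. no training point lies near the query point -- the curse of
        # dimensionality effect noted in the abstract.
        preds.append(w @ y_train / s if s > 0 else 0.0)
    return np.array(preds)
```

A mixed model in the spirit of the abstract would fall back on a parametric linear prediction in exactly those regions where the kernel estimate degenerates to zero.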
50

Thawornkaiwong, Supachoke. "Statistical inference on linear and partly linear regression with spatial dependence : parametric and nonparametric approaches." Thesis, London School of Economics and Political Science (University of London), 2012. http://etheses.lse.ac.uk/620/.

Full text
Abstract:
The typical assumption made in regression analysis with cross-sectional data is that of independent observations. However, this assumption can be questionable in some economic applications where spatial dependence of observations may arise, for example, from local shocks in an economy, interaction among economic agents and spillovers. The main focus of this thesis is on regression models under three different models of spatial dependence. First, a multivariate linear regression model with the disturbances following the Spatial Autoregressive process is considered. It is shown that the Gaussian pseudo-maximum likelihood estimate of the regression and the spatial autoregressive parameters can be root-n-consistent under strong spatial dependence or explosive variances, given that they are not too strong, without making restrictive assumptions on the parameter space. To achieve efficiency improvement, adaptive estimation, in the sense of Stein (1956), is also discussed where the unknown score function is nonparametrically estimated by power series estimation. A large section is devoted to an extension of power series estimation for random variables with unbounded supports. Second, linear and semiparametric partly linear regression models with the disturbances following a generalized linear process for triangular arrays proposed by Robinson (2011) are considered. It is shown that instrumental variables estimates of the unknown slope parameters can be root-n-consistent even under some strong spatial dependence. A simple nonparametric estimate of the asymptotic variance matrix of the slope parameters is proposed. An empirical illustration of the estimation technique is also conducted. Finally, linear regression where the random variables follow a marked point process is considered. The focus is on a family of random signed measures, constructed from the marked point process, that are second-order stationary and their spectral properties are discussed.
Asymptotic normality of the least squares estimate of the regression parameters is derived from the associated random signed measures under mixing assumptions. Nonparametric estimation of the asymptotic variance matrix of the slope parameters is discussed, and an algorithm to obtain a positive definite estimate, with faster rates of convergence than the traditional ones, is proposed.
APA, Harvard, Vancouver, ISO, and other styles
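The instrumental variables estimates mentioned in the abstract take, in their generic form, the two-stage least squares shape beta_hat = (X' P_Z X)^{-1} X' P_Z y. The sketch below shows only that textbook formula; it is not the thesis's estimator for spatially dependent triangular arrays, nor its nonparametric variance estimate.

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: beta_hat = (X' P_Z X)^{-1} X' P_Z y,
    with P_Z the projection onto the column space of the instruments Z."""
    X = np.asarray(X, dtype=float)
    Z = np.asarray(Z, dtype=float)
    y = np.asarray(y, dtype=float)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection matrix onto span(Z)
    XtPzX = X.T @ Pz @ X
    return np.linalg.solve(XtPzX, X.T @ Pz @ y)
```

When Z = X (all regressors exogenous), P_Z X = X and the formula collapses to ordinary least squares, which is a convenient sanity check.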