To see the other types of publications on this topic, follow the link: Non-normality.

Dissertations / Theses on the topic 'Non-normality'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Non-normality.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hristova-Bojinova, Daniela. "Non-normality and non-linearity in univariate standard models of inflation." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30141.

Full text
Abstract:
The empirical evidence presented in a vast number of recent publications has given rise to debates in the literature regarding the stationarity of inflation. Inflation is sometimes treated as a unit root process and sometimes as a stationary process, and in most studies inflationary time series are modelled assuming normality and linearity. The present thesis relaxes the frequently used assumptions of linearity in price processes and of normality in the distribution of inflation, and suggests two ways of modelling inflationary data. Firstly, the distribution of inflation is assumed to be a stable Paretian distribution and, under this assumption, the stationarity of inflation is examined with an appropriate test. Secondly, price time series are modelled as a unit root bilinear process, which in turn induces non-normality in the distribution of inflation. A recently proposed test of the null hypothesis of no bilinearity is then applied. If bilinearity is detected, the bilinear coefficient is estimated by the Kalman filter method. Subsequently, the finite sample properties of this estimator are evaluated using Monte Carlo simulation experiments. A series of Monte Carlo simulations yields the t-statistic critical values for testing whether the estimated bilinear coefficients differ significantly from zero. The methodologies explained above are then applied to a large set of worldwide price and inflation data for 107 different countries. Assuming that the distribution of inflation is a stable Paretian distribution, 75% of the inflationary time series are classified as integrated of order zero; under the assumption of normality this can be inferred for only 11.11% of the series. It is also shown that 71.03% of the price time series exhibit unit root bilinearity, and analysis of the inflationary time series reveals the presence of bilinearity in 9.35% of them.
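For intuition about the stable Paretian assumption, a minimal Python sketch (not the author's code; the index alpha = 1.7 and the series length are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import levy_stable, kurtosis

rng = np.random.default_rng(0)
# Inflation-like innovations: Gaussian versus stable Paretian with alpha < 2.
# Stable draws have heavy tails (infinite variance for alpha < 2), which is why
# normal-theory unit root inference can be misleading for such series.
pi_normal = rng.standard_normal(1000)
pi_stable = levy_stable.rvs(alpha=1.7, beta=0.0, size=1000, random_state=rng)
for name, x in [("normal", pi_normal), ("stable, alpha=1.7", pi_stable)]:
    print(name, "sample excess kurtosis:", round(float(kurtosis(x)), 2))
```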
2

Gordon, Carol J. (Carol Jean). "The Robustness of O'Brien's r Transformation to Non-Normality." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc332002/.

Full text
Abstract:
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population were 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust in all instances to non-normality, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to have a non-normal distribution, and the size of the equal samples is at least twelve.
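The study's simulation design can be sketched in Python (not the original code; the standard O'Brien transformation with w = 0.5 is assumed, and the uniform parent, k = 4 groups and n = 12 are one cell of the design described above):

```python
import numpy as np
from scipy import stats

def obrien_r(groups):
    """O'Brien's r transformation: the group means of r estimate the group
    variances, so homogeneity of variance is tested by one-way ANOVA on r."""
    out = []
    for g in groups:
        g, n = np.asarray(g, float), len(g)
        m, v = g.mean(), g.var(ddof=1)
        out.append(((n - 1.5) * n * (g - m) ** 2 - 0.5 * v * (n - 1))
                   / ((n - 1) * (n - 2)))
    return out

rng = np.random.default_rng(1)
reps, k, n = 1000, 4, 12
rej_r = rej_b = 0
for _ in range(reps):
    groups = [rng.uniform(size=n) for _ in range(k)]  # equal variances: H0 true
    rej_r += stats.f_oneway(*obrien_r(groups)).pvalue < 0.05
    rej_b += stats.bartlett(*groups).pvalue < 0.05
print(rej_r / reps, rej_b / reps)   # actual vs nominal .05 significance level
```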
3

Wu, Yinkai. "Non-normality, uncertainty and inflation forecasting : an analysis of China's inflation." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37175.

Full text
Abstract:
Economic forecasting is important because it can affect the decision-making processes, and hence the behaviour, of individuals, firms and governments. In this thesis, I discuss different methodologies for forecasting and forecast evaluation, as well as the role of the normality assumption and of uncertainty in economic forecasting. The first chapter introduces the thesis. In the second chapter, I conduct a Monte Carlo simulation to investigate the performance of forecast combination and of the forecast encompassing test when forecast errors are non-normal. In the third chapter, I examine the relationship between inflation forecast uncertainties and macroeconomic uncertainty for China, using different measures of uncertainty; I also investigate the relationship between inflation forecast uncertainties and inflation itself. In the fourth chapter, I compute the probabilities of deflation for China by applying density forecasts based on the theories and methodologies of the previous two chapters. In particular, I construct density forecasts for different forecast horizons from a joint distribution built with a Student-t copula. The fifth chapter concludes.
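A toy Monte Carlo of forecast combination under non-normal forecast errors (a sketch under stated assumptions, not the thesis's design: Student-t errors and simple Bates-Granger variance weights):

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps, ratio = 200, 2000, []
for _ in range(reps):
    y = rng.standard_t(df=3, size=T)             # target with heavy-tailed shocks
    f1 = y + 0.8 * rng.standard_t(df=3, size=T)  # two unbiased forecasts whose
    f2 = y + 1.0 * rng.standard_t(df=3, size=T)  # errors are non-normal
    e1, e2 = y - f1, y - f2
    w = np.var(e2) / (np.var(e1) + np.var(e2))   # Bates-Granger weight on f1
    ec = y - (w * f1 + (1 - w) * f2)
    ratio.append(np.mean(ec**2) / np.mean(e1**2))
print(np.mean(ratio))  # < 1: on average the combination beats the better forecast
```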
4

Chuenpibal, Tanitpong. "If I pick up non-normality, can robust models make it better?" Thesis, University of Exeter, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434877.

Full text
5

Sun, Qi. "Finite sample distributions and non-normality in second generation panel unit root tests." Thesis, University of Leicester, 2010. http://hdl.handle.net/2381/8929.

Full text
Abstract:
A remarkable advantage of panel unit root test statistics is that, compared with traditional single time series tests, they are Gaussian in the limit rather than following complicated functionals of Wiener processes. As a consequence, the asymptotic critical values are used directly and the finite sample performance is not given proper attention. In addition, the unit root test literature relies heavily on the normality assumption; when this condition fails, the asymptotic results are no longer valid. This thesis analyzes and finds serious finite sample bias in panel unit root tests, together with a systematic impact of non-normality on the tests. Using Monte Carlo simulations, in particular response surface analysis with newly designed functional forms of response surface regressions, the thesis demonstrates that the trend patterns of finite sample bias and test bias vary closely with the sample size and the degree of non-normality, respectively. Finite sample critical values are then proposed; more importantly, these critical values are augmented by the David-Johnson estimate of the percentile standard deviation to account for the randomness incurred by stochastic simulations. Non-normality is modeled by the Lévy-Paretian stable distribution. A certain degree of non-normality is found to cause such severe test distortion that the finite sample critical values computed under normality are no longer valid. This provides important indications of the reliability of panel unit root test results when empirical data exhibit non-normality. Finally, a panel of OECD country inflation rates is examined for stationarity, taking into account its structural breaks. Instead of constructing structural breaks in panel unit root tests, an alternative and new approach is proposed that treats the breaks as a type of non-normality. With the help of the earlier results in the thesis, the study supports the presence of a unit root in inflation rates.
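The response-surface idea can be sketched for the plain Dickey-Fuller statistic (a univariate stand-in for the thesis's panel statistics; the 1/T functional form is an assumption in the spirit of MacKinnon-type surfaces):

```python
import numpy as np

rng = np.random.default_rng(3)

def df_t(y):
    # Dickey-Fuller t-statistic for rho in: dy_t = rho * y_{t-1} + e_t (no constant)
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

sizes = np.array([25, 50, 100, 200, 400])
crit = [np.quantile([df_t(np.cumsum(rng.standard_normal(T)))
                     for _ in range(5000)], 0.05) for T in sizes]

# response surface: cv(T) = a + b/T + c/T^2, fitted to the simulated quantiles
X = np.column_stack([np.ones(sizes.size), 1 / sizes, 1 / sizes**2])
coef, *_ = np.linalg.lstsq(X, np.array(crit), rcond=None)
print(coef.round(3))   # interpolates finite-sample 5% critical values in T
```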
6

Shutes, Karl. "Non-normality in asset pricing- extensions and applications of the skew-normal distribution." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419870.

Full text
7

Lau, Christian [Verfasser], Jörg [Akademischer Betreuer] Laitenberger, and Claudia [Akademischer Betreuer] Becker. "Non-normality in financial markets and the measurement of risk / Christian Lau. Betreuer: Jörg Laitenberger ; Claudia Becker." Halle, Saale : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2015. http://d-nb.info/1078505004/34.

Full text
8

Tano, Bask Andreas, and Johan Jaurin. "Det elliptiska säkerhetsområdets robusthet : hur robust är metoden med de elliptiska säkerhetsområdena förett symmetriskt men icke normalfördelat datamaterial?" Thesis, Umeå University, Department of Statistics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34821.

Full text
Abstract:

Quality Control is a term often used within production, referring to the management of processes so that they produce capable products. Within Quality Control, the process capability index is a common measure for overseeing processes, and Safety Region Plots were introduced to do this graphically. In Albing & Vännman (2010) the concept of Safety Region Plots is expanded to incorporate an elliptical shape. The method of Elliptical Safety Region Plots assumes normally distributed data. In this paper we examine the robustness of the Elliptical Safety Region Plots when the data can be assumed symmetric but non-normal. The results show that an adjustment is required for symmetric but non-normal data if the method in Albing & Vännman (2010) is to be used; a possible adjustment is discussed. To make the Elliptical Safety Region Plots of Albing & Vännman (2010) easy to use, we have developed a program in RExcel.
9

Rogers, Catherine Jane. "Power comparisons of four post-MANOVA tests under variance-covariance heterogeneity and non-normality in the two group case." Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40171.

Full text
10

Joo, Seang-Hwane. "Robustness of the Within- and Between-Series Estimators to Non-Normal Multiple-Baseline Studies: A Monte Carlo Study." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6715.

Full text
Abstract:
In single-case research, the multiple-baseline (MB) design is the most widely used design in practical settings. It provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment phase to baseline phase observations, but also on time-specific between-series comparisons of observations from participants who have started treatment to those still in baseline. In MB studies, the average treatment effect and the variation of these effects across multiple participants can be estimated using various statistical modeling methods. Recently, two types of statistical modeling methods were proposed for analyzing MB studies: (a) the within-series model and (b) the between-series model. The within-series model is a typical two-level multilevel modeling approach analyzing the measurement occasions within a participant, whereas the between-series model is an alternative modeling approach analyzing participants' measurement occasions at certain time points, where some participants are in the baseline phase and others are in the treatment phase. Parameters of both within- and between-series models are generally estimated with restricted maximum likelihood (ReML) estimation, and ReML was developed under the assumption of normality (Hox et al., 2010; Raudenbush & Bryk, 2002). However, in practical educational and psychological settings, observed data may not easily be assumed normal. Therefore, the purpose of this study is to investigate the robustness of analyzing MB studies with the within- and between-series models when level-1 errors are non-normal. A Monte Carlo study was conducted under conditions where level-1 errors were generated from non-normal distributions in which the skewness and kurtosis of the distribution were manipulated. Four statistical approaches were considered for comparison based on theoretical and/or empirical rationales. The approaches were defined by crossing two analytic decisions: (a) whether to use a within- or between-series estimate of effect, and (b) whether to use ReML estimation with the Kenward-Roger adjustment for inferences or Bayesian estimation and inference. The accuracy of parameter estimation, statistical power, and Type I error were systematically analyzed. The results show that the within- and between-series models are robust to non-normality of the level-1 errors: both models estimated the treatment effect accurately, statistical inferences were acceptable, and ReML and Bayesian estimation gave similar results. Applications and implications for applied and methodological researchers are discussed based on the findings of the study.
11

Donmez, Ayca. "Adaptive Estimation And Hypothesis Testing Methods." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611724/index.pdf.

Full text
Abstract:
For statistical estimation of population parameters, Fisher's maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku's modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. For calculating MLEs and MMLEs, the functional form of the underlying distribution has to be known. For machine data processing, however, such is not the case. Instead, what is reasonable to assume for machine data processing is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so-called M-estimators. It is very desirable for an estimator to be robust and to have a bounded influence function. M-estimators, however, implicitly censor certain sample observations, which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku's MMLEs. The new MMLEs are robust and have bounded influence functions. In fact, these new estimators are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we propose a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We also show that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We use the proposed modification in the context of experimental design and linear regression, and show that the resulting estimators and the hypothesis testing procedures based on them are indeed superior to earlier such estimators and tests.
12

Pospíšil, Tomáš. "STOCHASTIC MODELING OF COMPOSITE MATERIALS." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233889.

Full text
Abstract:
The thesis is devoted to the generation of random structures of two-component fibre composite materials and to statistical methods for analysing the randomness of these structures. Four algorithms were developed, and the generated structures were statistically compared with real data.
13

Uddin, Mohammad Moin. "ROBUST STATISTICAL METHODS FOR NON-NORMAL QUALITY ASSURANCE DATA ANALYSIS IN TRANSPORTATION PROJECTS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/153.

Full text
Abstract:
The American Association of State Highway and Transportation Officials (AASHTO) and the Federal Highway Administration (FHWA) require the use of statistically based quality assurance (QA) specifications for construction materials. As a result, many state highway agencies (SHAs) have implemented QA specifications for highway construction. In these statistically based QA specifications, the quality characteristics of most construction materials are assumed normally distributed; however, the normality assumption can be violated in several ways. The distribution of the data can be skewed, affected by excess kurtosis, or bimodal. If the process shows evidence of a significant departure from normality, the quality measures calculated may be erroneous. In this research study, an extended QA data analysis model is proposed which significantly improves the Type I error and power of the F-test and t-test, and removes bias in Percent Within Limits (PWL) based pay factor calculation. For the F-test, three alternative tests are proposed when the sampling distribution is non-normal: (1) Levene's test; (2) Brown and Forsythe's test; and (3) O'Brien's test. One alternative is proposed for the t-test: the non-parametric Wilcoxon-Mann-Whitney rank-sum test. For PWL-based pay factor calculation when lot data suffer from non-normality, three schemes were investigated: (1) simple transformation methods; (2) the Clements method; and (3) a modified Box-Cox transformation using the golden section search method. The Monte Carlo simulation study revealed that both Levene's test and Brown and Forsythe's test are robust alternative tests of variances when the underlying sample population distribution is non-normal. Between the t-test and the Wilcoxon test, the t-test was found to be robust even when the sample population distribution was severely non-normal. Among the data transformations for the PWL-based pay factor, the modified Box-Cox transformation using the golden section search method was found to be the most effective in minimizing or removing pay bias. Field QA data were analyzed to validate the model, and Microsoft Excel macro-based software was developed which can adjust any pay consequences due to non-normality.
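The golden-section Box-Cox step can be sketched with SciPy (a hypothetical skewed lot; boxcox_llf plays the role of the profile log-likelihood being searched):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
lot = rng.lognormal(mean=3.0, sigma=0.5, size=50)  # hypothetical skewed QA lot data

# golden-section search for the lambda maximizing the Box-Cox log-likelihood
lam = optimize.golden(lambda lmb: -stats.boxcox_llf(lmb, lot), brack=(-2.0, 2.0))
z = stats.boxcox(lot, lmbda=lam)                   # transformed, near-normal lot
print(round(float(lam), 3), stats.shapiro(z).pvalue)
```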
14

Yilmaz, Yildiz Elif. "Experimental Design With Short-tailed And Long-tailed Symmetric Error Distributions." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605191/index.pdf.

Full text
Abstract:
One-way and two-way classification models in experimental design, for both balanced and unbalanced cases, are considered when the errors have a Generalized Secant Hyperbolic distribution. Efficient and robust estimators for main and interaction effects are obtained by using the modified maximum likelihood (MML) estimation technique. Test statistics analogous to the normal-theory F statistics are defined for testing main and interaction effects, and a test statistic for testing linear contrasts is defined. It is shown that the test statistics based on MML estimators are efficient and robust. The methodology is also generalized to situations where the error distributions are non-identical from block to block.
15

Kolli, Kranthi Kumar. "Domain Effects in the Finite / Infinite Time Stability Properties of a Viscous Shear Flow Discontinuity." Connect to this title, 2008. http://scholarworks.umass.edu/theses/204/.

Full text
16

Yilmaz, Yildiz Elif. "Bayesian Learning Under Nonnormality." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605582/index.pdf.

Full text
Abstract:
The naive Bayes classifier and maximum likelihood hypotheses in Bayesian learning are considered when the errors have a non-normal distribution. For location and scale parameters, efficient and robust estimators obtained by the modified maximum likelihood (MML) estimation technique are used. In the naive Bayes classifier, the error distributions are assumed to be non-identical from class to class and from feature to feature, and the Generalized Secant Hyperbolic (GSH) and Generalized Logistic (GL) distribution families are used instead of the normal distribution. It is shown that the non-normal naive Bayes classifier obtained in this way classifies the data more accurately than the one based on the normality assumption. Furthermore, the maximum likelihood (ML) hypotheses obtained under the assumption of non-normality also produce better results than the conventional ML approach.
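A minimal sketch of a naive Bayes classifier with non-normal class-conditionals (SciPy's plain logistic distribution stands in for the GL/GSH families, which SciPy does not provide, and the fit is ordinary ML rather than MML):

```python
import numpy as np
from scipy.stats import logistic   # stand-in for the GL/GSH families

class LogisticNB:
    """Naive Bayes with a logistic class-conditional fitted per class and feature."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.prior_ = {c: float(np.mean(y == c)) for c in self.classes_}
        # (loc, scale) fitted for every class/feature pair
        self.param_ = {c: [logistic.fit(X[y == c, j]) for j in range(X.shape[1])]
                       for c in self.classes_}
        return self

    def predict(self, X):
        scores = np.vstack([
            np.log(self.prior_[c])
            + sum(logistic.logpdf(X[:, j], *p) for j, p in enumerate(self.param_[c]))
            for c in self.classes_])
        return self.classes_[scores.argmax(axis=0)]
```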
17

Herrington, Richard S. "Simulating Statistical Power Curves with the Bootstrap and Robust Estimation." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2846/.

Full text
Abstract:
Power and effect size analysis are important methods in the psychological sciences. It is well known that classical statistical tests are not robust with respect to power and Type II error. However, relatively little attention has been paid in the psychological literature to the effect that non-normality and outliers have on the power of a given statistical test (Wilcox, 1998). Robust measures of location exist that provide much more powerful tests of statistical hypotheses, but their usefulness in power estimation for sample size selection, with real data, is largely unknown. Furthermore, practical approaches to power planning (Cohen, 1988) usually focus on normal theory settings and in general do not make available nonparametric approaches to power and effect size estimation. Beran (1986) proved that it is possible to nonparametrically estimate power for a given statistical test using bootstrap methods (Efron, 1993). However, this method is not widely known or utilized in data analysis settings. This research study examined the practical importance of combining robust measures of location with nonparametric power analysis, using simulation and analysis of real world data sets. The present study found that: (1) bootstrap confidence intervals using M-estimators were shorter than their normal theory counterparts whenever the data had heavy-tailed distributions; (2) bootstrap empirical power was higher for M-estimators than for the normal theory counterpart when the data had heavy-tailed distributions; (3) the smoothed bootstrap controlled the Type I error rate (below 6%) under the null hypothesis for small sample sizes; and (4) robust effect sizes can be used in conjunction with Cohen's (1988) power tables to obtain more realistic sample sizes when the data distribution has heavy tails.
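A Beran-style bootstrap power sketch (not the study's code; the 20% trimmed mean stands in for the robust location measures, and the pilot sample, effect size and replication counts are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pilot = rng.standard_t(df=3, size=40)          # hypothetical heavy-tailed pilot data

def reject(sample, shift, B=599, alpha=0.05):
    # percentile-bootstrap test of H0: 20% trimmed mean = 0, under effect `shift`
    x = sample - stats.trim_mean(sample, 0.2) + shift
    tm = np.array([stats.trim_mean(rng.choice(x, size=x.size, replace=True), 0.2)
                   for _ in range(B)])
    lo, hi = np.quantile(tm, [alpha / 2, 1 - alpha / 2])
    return not (lo <= 0.0 <= hi)

# outer resampling of the pilot data estimates power at effect size 0.5
power = np.mean([reject(rng.choice(pilot, size=pilot.size, replace=True), 0.5)
                 for _ in range(200)])
print(power)
```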
18

Khalil, Nathalie. "Conditions d'optimalité pour des problèmes en contrôle optimal et applications." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0095/document.

Full text
Abstract:
The project of this thesis is twofold. The first part concerns the extension of previous results on necessary optimality conditions for state constrained problems in optimal control and in the calculus of variations. The second aim consists in working along two new research lines: deriving viability results for a class of control systems with state constraints in which the 'standard inward pointing conditions' are violated, and establishing necessary optimality conditions for average cost minimization problems possibly perturbed by unknown parameters. In the first part, we examine necessary optimality conditions, which play an important role in finding candidate optimal solutions among all admissible solutions. In dynamic optimization problems with state constraints, however, some pathological situations might arise. For instance, it might occur that the multiplier associated with the objective function (to minimize) vanishes; in this case, the objective function does not intervene in the first order necessary conditions, which is referred to as the abnormal case. A worse phenomenon, called the degenerate case, shows that in some circumstances the set of admissible trajectories coincides with the set of candidate minimizers, so the necessary conditions give no information on the possible minimizers. To overcome these difficulties, new additional hypotheses have to be imposed, known as constraint qualifications. We investigate these two issues (normality and non-degeneracy) for optimal control problems involving state constraints and dynamics expressed as a differential inclusion, when the minimizer has its left end-point in a region where the state constraint set is nonsmooth. We prove that, under an additional assumption involving mainly the Clarke tangent cone, necessary conditions in the form of the Extended Euler-Lagrange condition hold in normal and non-degenerate form for two different classes of state constrained optimal control problems. The normality result is also applied to the calculus of variations problem subject to a state constraint. In the second part of the thesis, we first consider a class of state constrained control systems for which standard 'first order' constraint qualifications are not satisfied, but a higher (second) order constraint qualification is satisfied. We propose a new construction for feasible trajectories (a viability result) and we investigate examples (such as the Brockett nonholonomic integrator), providing in addition a nonlinear estimate result. The other topic of the second part concerns the study of a class of optimal control problems in which uncertainties appear in the data in terms of unknown parameters. Considering an average cost criterion, a crucial issue is to characterize optimal controls independently of the unknown parameter action: this allows one to find a sort of 'best compromise' among all the possible realizations of the control system as the parameter varies. For this type of problem, we derive necessary optimality conditions in the form of a Maximum Principle (possibly nonsmooth).
19

Hafsa, Houda. "Modèles d'évaluation et d'allocations des actifs financiers dans le cadre de non normalité des rendements : essais sur le marché français." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM1015.

Full text
Abstract:
This dissertation is part of ongoing research looking for an adequate model to apprehend the behavior of financial asset returns. Through this research, we analyze the relevance of risk measures that take non-normality into account in asset pricing and portfolio allocation models on the French market. The dissertation comprises three articles. The first proposes to revisit the asset pricing model, taking higher-order moments into account in a downside framework. The results indicate that the downside higher-order co-moments are relevant in explaining the cross-sectional variations of returns. The second paper examines the relation between expected returns and the VaR or CVaR. A cross-sectional analysis provides evidence that the VaR is a superior measure of risk when compared to the CVaR. We also find that the normal estimation approach gives better results than the approach based on the Cornish-Fisher (1937) expansion. Both results contradict the theoretical predictions, but we show that they are inherent to the French market. In the third paper, we revisit the mean-CVaR model in a dynamic framework and take transaction costs into account. The results indicate that an asset allocation model that accounts for non-normality can improve portfolio performance relative to the mean-variance model, in terms of average return and return-to-CVaR ratio. Through these three studies, we argue that the risk management framework can be modified to better apprehend the risk of loss associated with non-normality.
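For intuition about the Cornish-Fisher adjustment compared in the second paper, a minimal sketch (hypothetical Student-t returns standing in for French market data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
r = rng.standard_t(df=4, size=2500) * 0.01     # hypothetical daily returns

p = 0.01
z = stats.norm.ppf(p)
s, k = stats.skew(r), stats.kurtosis(r)        # sample skewness, excess kurtosis
z_cf = (z + (z**2 - 1) * s / 6                 # Cornish-Fisher quantile expansion
          + (z**3 - 3 * z) * k / 24
          - (2 * z**3 - 5 * z) * s**2 / 36)
var_normal = -(r.mean() + z * r.std(ddof=1))
var_cf = -(r.mean() + z_cf * r.std(ddof=1))
var_hist = -np.quantile(r, p)
print(var_normal, var_cf, var_hist)            # three 99% VaR estimates
```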
20

Ferrani, Yacine. "Sur l'estimation non paramétrique de la densité et du mode dans les modèles de données incomplètes et associées." Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0370/document.

Full text
Abstract:
This thesis deals with the asymptotic properties of the kernel (Parzen-Rosenblatt) density estimator under a right-censored model satisfying an association-type dependence structure. In this setting, we first recall in detail the existing results, studied in both the i.i.d. and the strong mixing (α-mixing) cases. Under mild standard conditions, it is established that the uniform almost sure convergence rate of the estimator is optimal. Two main original results are then presented. The first concerns the uniform almost sure convergence of the estimator under the association hypothesis; the main tool for achieving the optimal rate is an adaptation of the theorem of Doukhan and Neumann (2007) to the study of the fluctuation term (random part) of the gap between the estimator and the density. As an application, the almost sure convergence of the kernel mode estimator is established. These results were accepted for publication in Communications in Statistics - Theory and Methods. The second result establishes the asymptotic normality of the estimator under the same model, and thus constitutes an extension to the censored case of the result of Roussas (2000); this result is submitted for publication.
21

Benelmadani, Djihad. "Contribution à la régression non paramétrique avec un processus erreur d'autocovariance générale et application en pharmacocinétique." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM034/document.

Full text
Abstract:
In this thesis, we consider the fixed design regression model with repeated measurements, where the errors form a process with a general autocovariance function, i.e. a second order process (stationary or nonstationary) with a non-differentiable covariance function along the diagonal. We are interested, among other problems, in the nonparametric estimation of the regression function of this model. We first consider the well-known kernel regression estimator proposed by Gasser and Müller. We study its asymptotic performance when the number of experimental units and the number of observations tend to infinity. For a regular sequence of designs, we improve the higher-order rates of convergence of the variance and the bias, and we prove the asymptotic normality of this estimator in the case of correlated errors. Second, we propose a new kernel estimator of the regression function based on a projection property. This estimator is constructed through the autocovariance function of the errors and a specific function belonging to the Reproducing Kernel Hilbert Space (RKHS) associated with the autocovariance function. We study its asymptotic performance using RKHS properties, which yield the optimal convergence rate of the variance, and we prove its asymptotic normality. We show that this new estimator has a smaller asymptotic variance than that of Gasser and Müller; a simulation study confirms this theoretical result. Third, we propose a new kernel estimator for the regression function, constructed through the trapezoidal numerical approximation of the kernel regression estimator based on continuous observations. We study its asymptotic performance and prove its asymptotic normality. Moreover, this estimator allows us to obtain the asymptotically optimal sampling design for the estimation of the regression function. We run a simulation study to test the performance of the proposed estimator in finite samples, where we observe its good behavior in terms of Integrated Mean Squared Error (IMSE), and we show the reduction in IMSE obtained by using the optimal sampling design instead of the uniform design. Finally, we consider an application of regression function estimation to pharmacokinetics. We propose to use nonparametric kernel methods for the estimation of the concentration-time curve, instead of the classical parametric ones, and we demonstrate their good performance via a simulation study and real data analysis. We also investigate the problem of estimating the Area Under the concentration Curve (AUC), for which we introduce a new kernel estimator obtained by integrating the regression function estimator. We show, in a simulation study, that the proposed estimator outperforms the classical one in terms of Mean Squared Error. The crucial problem of finding the optimal sampling design for AUC estimation is investigated using the Generalized Simulated Annealing algorithm.
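To fix ideas, the Gasser-Müller estimator the thesis starts from admits a short sketch (Gaussian kernel, i.i.d. errors and a scalar evaluation point for simplicity, whereas the thesis treats correlated errors):

```python
import numpy as np
from scipy.stats import norm

def gasser_muller(t, y, x, h):
    """Gasser-Muller estimate of m(x) for a fixed design t (sorted, in [0, 1])."""
    # cell boundaries: s_0 = 0, s_i = (t_i + t_{i+1})/2, s_n = 1
    s = np.concatenate(([0.0], (t[:-1] + t[1:]) / 2, [1.0]))
    # with a Gaussian kernel the cell integrals of K_h(x - u) are closed-form
    w = norm.cdf((x - s[:-1]) / h) - norm.cdf((x - s[1:]) / h)
    return float(w @ y)

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(size=100))
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(100)
print(gasser_muller(t, y, x=0.5, h=0.05))   # estimate of sin(pi) = 0 at x = 0.5
```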
22

Manandhar, Binod. "Bayesian Models for the Analyzes of Noisy Responses From Small Areas: An Application to Poverty Estimation." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/188.

Full text
Abstract:
We implement techniques of small area estimation (SAE) to study consumption, a welfare indicator used to assess poverty, in the 2003-2004 Nepal Living Standards Survey (NLSS-II) and the 2001 census. NLSS-II has detailed information on consumption, but it can give estimates only at the stratum level or higher. While population variables are available for all households in the census, they do not include information on consumption; the survey has the 'population' variables nonetheless. We combine these two sets of data to provide estimates of poverty indicators (incidence, gap and severity) for small areas (wards, village development committees and districts). Consumption is the aggregate of all food and all non-food items consumed. In the welfare survey the respondents are asked to recall all information about consumption throughout the reference year; such data are therefore likely to be noisy, possibly due to response or recall errors. The consumption variable is continuous and positively skewed, so a statistician might use a logarithmic transformation, which can reduce skewness and help meet the normality assumption required for model building. However, this could be problematic, since back-transformation may produce inaccurate estimates and there are difficulties in interpretation. Without using the logarithmic transformation, we develop hierarchical Bayesian models to link the survey to the census. In our models for consumption, we incorporate the 'population' variables as covariates. First, we assume that consumption is noiseless, and it is modeled using three scenarios: the exponential distribution, the gamma distribution and the generalized gamma distribution. Second, we assume that consumption is noisy, and we fit the generalized beta distribution of the second kind (GB2) to consumption. We consider three more scenarios for the GB2: a mixture of exponential and gamma distributions, a mixture of two gamma distributions, and a mixture of two generalized gamma distributions. We note that there are difficulties in fitting the models for noisy responses because these models have non-identifiable parameters. For each scenario, after fitting two hierarchical Bayesian models (with and without area effects), we show how to select the most plausible model and we perform a Bayesian data analysis on Nepal's poverty data. We show how to predict the poverty indicators for all wards, village development committees and districts of Nepal (a big data problem) by combining the survey data with the census. This is computationally intensive because Nepal has about four million households, with about four thousand households in the survey, and there is no record linkage between households in the survey and the census. Finally, we perform empirical studies to assess the quality of our survey-census procedure.
23

Chandler, Gary James. "Sensitivity analysis of low-density jets and flames." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/246531.

Full text
Abstract:
This work represents the initial steps in a wider project that aims to map out the sensitive areas in fuel injectors and combustion chambers. Direct numerical simulation (DNS) using a Low-Mach-number formulation of the Navier–Stokes equations is used to calculate direct-linear and adjoint global modes for axisymmetric low-density jets and lifted jet diffusion flames. The adjoint global modes provide a map of the most sensitive locations to open-loop external forcing and heating. For the jet flows considered here, the most sensitive region is at the inlet of the domain. The sensitivity of the global-mode eigenvalues to force feedback and to heat and drag from a hot-wire is found using a general structural sensitivity framework. Force feedback can occur from a sensor-actuator in the flow or as a mechanism that drives global instability. For the lifted flames, the most sensitive areas lie between the inlet and flame base. In this region the jet is absolutely unstable, but the close proximity of the flame suppresses the global instability seen in the non-reacting case. The lifted flame is therefore particularly sensitive to outside disturbances in the non-reacting zone. The DNS results are compared to a local analysis. The most absolutely unstable region for all the flows considered is at the inlet, with the wavemaker slightly downstream of the inlet. For lifted flames, the region of largest sensitivity to force feedback is near to the location of the wavemaker, but for the non-reacting jet this region is downstream of the wavemaker and outside of the pocket of absolute instability near the inlet. Analysing the sensitivity of reacting and non-reacting variable-density shear flows using the low-Mach-number approximation has up until now not been done. By including reaction, a large forward step has been taken in applying these techniques to real fuel injectors.
24

El, Heda Khadijetou. "Choix optimal du paramètre de lissage dans l'estimation non paramétrique de la fonction de densité pour des processus stationnaires à temps continu." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0484/document.

Full text
Abstract:
The work in this thesis focuses on the choice of the smoothing parameter in the context of non-parametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends greatly on the choice of this parameter. The main goal of this work is to build an automatic window-selection procedure and to establish its asymptotic properties under a general dependency framework that can easily be used in practice. The manuscript is divided into three parts. The first part reviews the literature on the subject, sets out the state of the art and situates our contribution within it. In the second part, we design an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method; this choice, derived from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties of the cross-validated bandwidth, given by almost sure convergence results with rates.
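The cross-validation criterion at the heart of the second part can be sketched in its classical i.i.d. form (the thesis adapts the idea to dependent, continuous-time data; a Gaussian kernel is assumed):

```python
import numpy as np
from scipy.stats import norm

def lscv(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel density estimate."""
    n = x.size
    d = x[:, None] - x[None, :]
    # integral of fhat^2: two Gaussian kernels convolve to a N(0, 2h^2) kernel
    term1 = norm.pdf(d, scale=np.sqrt(2.0) * h).sum() / n**2
    K = norm.pdf(d, scale=h)
    np.fill_diagonal(K, 0.0)                 # leave-one-out sum
    term2 = 2.0 * K.sum() / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(9)
x = rng.standard_normal(300)
hs = np.linspace(0.05, 1.0, 40)
print(hs[np.argmin([lscv(h, x) for h in hs])])   # CV-selected bandwidth
```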
25

Lacombe, Jean-Pierre. "Analyse statistique de processus de poisson non homogènes. Traitement statistique d'un multidétecteur de particules." Phd thesis, Grenoble 1, 1985. http://tel.archives-ouvertes.fr/tel-00318875.

Full text
Abstract:
The first part of this thesis is devoted to the statistical study of non-homogeneous, spatial Poisson processes. We define a Neyman-Pearson-type test concerning the intensity measure of these processes, state conditions under which the consistency of the test is guaranteed, and others ensuring the asymptotic normality of the test statistic. In the second part of this work, we study statistical techniques for processing Poisson fields and their applications to the analysis of a particle multidetector; in particular, we propose quality tests for the apparatus as well as signal extraction methods.
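On the simulation side, a non-homogeneous Poisson process is classically generated by Lewis-Shedler thinning; a minimal sketch with a hypothetical bounded intensity:

```python
import numpy as np

def thinning(rate, rate_max, horizon, rng):
    """Simulate a non-homogeneous Poisson process on [0, horizon] by thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)    # candidate from dominating HPP
        if t > horizon:
            return np.array(events)
        if rng.uniform() < rate(t) / rate_max:  # keep with prob lambda(t)/max
            events.append(t)

rng = np.random.default_rng(2)
lam = lambda t: 5.0 * (1.0 + np.sin(t))         # hypothetical intensity <= 10
ev = thinning(lam, 10.0, 20.0, rng)
print(len(ev))  # should be near the integral of lam over [0, 20] (about 103)
```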
26

Purutcuoglu, Vilda. "Unit Root Problems In Time Series Analysis." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12604701/index.pdf.

Full text
Abstract:
In time series models, autoregressive processes are among the most popular stochastic processes, and they are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one with iid random errors. One of the important nonstationary time series models is the unit root process in AR(1), which implies that a shock to the system has a permanent effect through time; testing for a unit root is therefore a very important problem. However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality, and the first four moments of the estimator under normality have been worked out for large n. In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we extend this work to skewed distributions, namely the gamma and generalized logistic.
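The finite-sample moments question can be illustrated by simulating the normalized LSE under normal and skewed (centred gamma) errors (a sketch, not the thesis's computations; T, the gamma shape and the replication count are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def norm_lse(errors):
    # normalized LSE T*(rho_hat - 1) for an AR(1) with a unit root
    y = np.cumsum(errors)
    ylag, ynow = y[:-1], y[1:]
    return (len(errors) - 1) * (ylag @ ynow / (ylag @ ylag) - 1.0)

T, reps = 200, 10000
z_n = np.array([norm_lse(rng.standard_normal(T)) for _ in range(reps)])
z_g = np.array([norm_lse(rng.gamma(2.0, size=T) - 2.0) for _ in range(reps)])
for name, z in [("normal", z_n), ("centred gamma", z_g)]:
    print(name, z.mean(), z.var(), stats.skew(z), stats.kurtosis(z))
```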
27

Servien, Rémi. "Estimation de régularité locale." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00730491.

Full text
Abstract:
The objective of this thesis is to study the local behaviour of a probability measure, notably through a local regularity index. In the first part, we establish the asymptotic normality of the k_n-nearest-neighbour estimator of the density and of the histogram. In the second, we define a mode estimator under weakened hypotheses. We show that the regularity index plays a role in both problems. Finally, in the third part, we construct several estimators of the regularity index based on estimators of the distribution function, of which we provide a literature review.
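A one-dimensional sketch of the k_n-nearest-neighbour density estimator analysed in the first part (k of order the square root of n is an arbitrary but common choice):

```python
import numpy as np

def knn_density(x_eval, data, k):
    """k-NN density estimate in one dimension: f(x) ~ k / (2 n R_k(x))."""
    n = data.size
    dist = np.abs(data[None, :] - np.asarray(x_eval, float)[:, None])
    rk = np.sort(dist, axis=1)[:, k - 1]  # distance to the k-th nearest observation
    return k / (2.0 * n * rk)

rng = np.random.default_rng(12)
data = rng.standard_normal(500)
grid = np.linspace(-3.0, 3.0, 7)
print(knn_density(grid, data, k=int(np.sqrt(data.size))).round(3))
```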
28

Caron, Emmanuel. "Comportement des estimateurs des moindres carrés du modèle linéaire dans un contexte dépendant : Étude asymptotique, implémentation, exemples." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0036.

Full text
Abstract:
In this thesis, we consider the usual linear regression model in the case where the error process is assumed strictly stationary. We use a result of Hannan (1973), who proved a Central Limit Theorem for the usual least squares estimator under very general conditions on the design and on the error process. For any design and error process satisfying Hannan's conditions, we define an estimator of the asymptotic covariance matrix of the least squares estimator and we prove its consistency under very mild conditions. We then show how to modify the usual tests on the parameters of the linear model in this dependent context, and we propose various methods to estimate the covariance matrix in order to correct the type I error rate of the tests. The R package slm that we have developed contains all of these statistical methods. The procedures are evaluated through several sets of simulations, and two particular examples of datasets are studied. Finally, in the last chapter, we propose a non-parametric method by penalization to estimate the regression function in the case where the errors are Gaussian and correlated.
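The "correct the covariance" idea in this abstract can be illustrated outside R. The numpy sketch below is an assumption-laden illustration, not the slm package's exact estimator: it fits least squares with AR(1) errors and replaces the iid variance formula with a Bartlett-kernel (Newey-West type) estimate of the asymptotic covariance.

```python
# Illustrative sketch (assumptions: AR(1) errors, Bartlett kernel,
# rule-of-thumb bandwidth L): ordinary least squares with a Newey-West
# type "sandwich" covariance, in the spirit of the corrections the R
# package slm implements for stationary, dependent errors.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
e = np.empty(n); e[0] = rng.standard_normal()
for t in range(1, n):                     # AR(1) errors: dependent, stationary
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
y = x @ np.array([1.0, 2.0]) + e

beta = np.linalg.solve(x.T @ x, x.T @ y)  # least squares estimator
u = y - x @ beta
L = int(4 * (n / 100) ** (2 / 9))         # a common bandwidth rule
s = (x * u[:, None]).T @ (x * u[:, None])
for l in range(1, L + 1):
    w = 1 - l / (L + 1)                   # Bartlett weights
    g = (x[:-l] * u[:-l, None]).T @ (x[l:] * u[l:, None])
    s += w * (g + g.T)
xtx_inv = np.linalg.inv(x.T @ x)
cov = xtx_inv @ s @ xtx_inv               # "sandwich" covariance estimate
print(beta, np.sqrt(np.diag(cov)))
```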
APA, Harvard, Vancouver, ISO, and other styles
29

Kabui, Ali. "Value at risk et expected shortfall pour des données faiblement dépendantes : estimations non-paramétriques et théorèmes de convergences." Phd thesis, Université du Maine, 2012. http://tel.archives-ouvertes.fr/tel-00743159.

Full text
Abstract:
Quantifying and measuring risk in a partially or totally uncertain environment is probably one of the major challenges of applied research in financial mathematics. It concerns economics and finance, but also other fields such as health, via insurance for example. One of the fundamental difficulties of this risk management process is to model the underlying assets and then to approximate the risk from observations or simulations. Since randomness and uncertainty play a fundamental role in the evolution of the assets, the use of stochastic processes and statistical methods becomes crucial. In practice, the parametric approach is widely used: it consists of choosing a model within a parametric family, quantifying the risk as a function of the parameters, and estimating the risk by replacing the parameters with their estimates. This approach carries a major risk, that of misspecifying the model and thus under- or over-estimating the risk. Starting from this observation, and with a view to minimizing model risk, we have chosen to address the question of risk quantification with a non-parametric approach that applies to models as general as possible. We focus on two risk measures widely used in practice and sometimes imposed by national or international regulations: the Value at Risk (VaR), which quantifies the maximum level of loss at a high confidence level (95% or 99%), and the Expected Shortfall (ES), which gives the average loss beyond the VaR.
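A minimal nonparametric sketch of the two risk measures defined above follows (the loss sample and confidence level are illustrative assumptions).

```python
# Minimal nonparametric sketch: the empirical Value at Risk is a high
# quantile of the loss distribution, and the Expected Shortfall is the
# average loss beyond the VaR.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.standard_t(df=4, size=10_000)   # heavy-tailed losses
alpha = 0.99

var = np.quantile(losses, alpha)             # empirical VaR at level alpha
es = losses[losses >= var].mean()            # empirical Expected Shortfall
print(f"VaR({alpha}) = {var:.3f}, ES({alpha}) = {es:.3f}")
```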
APA, Harvard, Vancouver, ISO, and other styles
30

Boyer, Germain. "Étude de stabilité et simulation numérique de l’écoulement interne des moteurs à propergol solide simplifiés." Thesis, Toulouse, ISAE, 2012. http://www.theses.fr/2012ESAE0029/document.

Full text
Abstract:
The current work deals with the modeling of the hydrodynamic instabilities that play a major role in the triggering of the Pressure Oscillations occurring in large segmented solid rocket motors. These instabilities are responsible for the emergence of Parietal Vortex Shedding (PVS) and they interact with the motor acoustics. They are first modeled as eigenmodes of the internal steady flowfield of a cylindrical duct with sidewall injection, within the global linear stability theory framework. Assuming that the related parietal structures emerge from a baseflow disturbance, discrete mesh-independent eigenmodes are computed. For this purpose, a multi-domain spectral collocation technique is implemented in a parallel solver to tackle numerical issues such as the polynomial axial amplification of the eigenfunctions and the existence of boundary layers. The resulting eigenvalues explicitly depend on the location of the boundaries, namely those of the baseflow disturbance and the duct exit, and are then validated by performing Direct Numerical Simulations. First, they successfully describe the flow response to an initial disturbance with a sidewall velocity injection break. Then, the simulated forced response to acoustics consists of vortical structures whose discrete frequencies are in good agreement with those of the eigenmodes. These structures are reflected into upstream pressure waves with identical frequencies. Finally, thanks to both numerical simulation and stability theory, the PVS, whose response to a compressible forcing such as acoustics is linear, is understood as the driving phenomenon of the Pressure Oscillations.
APA, Harvard, Vancouver, ISO, and other styles
31

Ahmad, Ali. "Contribution à l'économétrie des séries temporelles à valeurs entières." Thesis, Lille 3, 2016. http://www.theses.fr/2016LIL30059/document.

Full text
Abstract:
The framework of this PhD dissertation is conditional mean count time series models. We propose the Poisson quasi-maximum likelihood estimator (PQMLE) for the conditional mean parameters. We show that, under quite general regularity conditions, this estimator is consistent and asymptotically normal for a wide class of count time series models. Since the conditional mean parameters of some models are positively constrained, as, for example, in the integer-valued autoregressive (INAR) and in the integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) models, we study the asymptotic distribution of this estimator when the parameter lies at the boundary of the parameter space. We deduce a Wald-type test for the significance of the parameters and another Wald-type test for the constancy of the conditional mean. Subsequently, we propose a robust and general goodness-of-fit test for count time series models. We derive the joint distribution of the PQMLE and of the empirical residual autocovariances, and then deduce the asymptotic distribution of the estimated residual autocovariances and of a portmanteau test statistic. Finally, we propose the PQMLE for estimating, equation by equation (EbE), the conditional mean parameters of a multivariate time series of counts. Using slightly different assumptions from those given for the PQMLE, we show the consistency and the asymptotic normality of this estimator for a considerable variety of multivariate count time series models.
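To make the PQMLE idea concrete, here is a hedged sketch in which a static Poisson regression stands in for the thesis's dynamic count models; the data-generating process and parameter values are assumptions, not material from the dissertation.

```python
# Hedged sketch: the Poisson quasi-maximum likelihood estimator maximizes
# the Poisson log-likelihood of y given the conditional mean
# lambda_t = exp(x_t' beta), and remains consistent for the mean
# parameters even when y is not truly Poisson (here: over-dispersed).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(13)
n = 1000
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
lam_true = np.exp(x @ np.array([0.5, 0.3]))
y = rng.negative_binomial(5, 5 / (5 + lam_true))  # counts, mean lam_true

def neg_quasi_loglik(beta):
    lam = np.exp(x @ beta)
    return -(y * np.log(lam) - lam).sum()

fit = minimize(neg_quasi_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)   # close to (0.5, 0.3) despite the misspecified distribution
```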
APA, Harvard, Vancouver, ISO, and other styles
32

Reding, Lucas. "Contributions au théorème central limite et à l'estimation non paramétrique pour les champs de variables aléatoires dépendantes." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMR049.

Full text
Abstract:
This thesis deals with the central limit theorem for dependent random fields and its applications to nonparametric statistics. In the first part, we establish quenched central limit theorems for random fields satisfying a projective condition à la Hannan (1973). Functional versions of these theorems are also considered. In the second part, we prove the asymptotic normality of kernel density and regression estimators for strongly mixing random fields in the sense of Rosenblatt (1956) and for weakly dependent random fields in the sense of Wu (2005). First, we establish the result for the kernel regression estimator introduced by Elizbar Nadaraya (1964) and Geoffrey Watson (1964). Then, we extend these results to a large class of recursive estimators introduced by Peter Hall and Prakash Patil (1994).
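A minimal sketch of the Nadaraya-Watson estimator mentioned above (the Gaussian kernel, bandwidth and data are illustrative assumptions):

```python
# Sketch of the Nadaraya-Watson kernel regression estimator:
# m_hat(x) = sum_i K((x - X_i)/h) Y_i / sum_i K((x - X_i)/h).
import numpy as np

rng = np.random.default_rng(4)
x_obs = rng.uniform(-2, 2, 400)
y_obs = np.sin(x_obs) + 0.3 * rng.standard_normal(400)

def nadaraya_watson(x, xs, ys, h=0.2):
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)   # Gaussian kernel weights
    return (w * ys).sum() / w.sum()

print([round(nadaraya_watson(x, x_obs, y_obs), 3) for x in (-1.0, 0.0, 1.0)])
```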
APA, Harvard, Vancouver, ISO, and other styles
33

Bassene, Aladji. "Contribution à la modélisation spatiale des événements extrêmes." Thesis, Lille 3, 2016. http://www.theses.fr/2016LIL30039/document.

Full text
Abstract:
In this thesis, we investigate nonparametric modeling of spatial extremes. Our results are based on the main framework of extreme value theory, thereby encompassing Pareto-type laws. This framework allows the study of extreme events to be extended to the spatial case, provided that the asymptotic properties of the proposed estimators satisfy the standard conditions of Extreme Value Theory (EVT) in addition to local conditions on the data structure itself. In the literature, there exists a vast panorama of extreme event models adapted to the structures of the data of interest. However, in the case of extreme spatial data, apart from max-stable models, few if any models address the nonparametric estimation of the tail index or of extreme quantiles. We therefore extend existing work on estimating the tail index and quantiles under independent or time-dependent data. The specificity of the methods studied resides in the fact that the asymptotic results of the proposed estimators take into account the spatial dependence structure of the data, which is far from trivial. This thesis is thus written in the context of spatial statistics of extremes, and it makes three main contributions. • In the first contribution, which brings the study of real-valued spatial variables into the extreme value setting, we propose an estimator of the tail index of a heavy-tailed distribution. Our approach relies on the estimator of Hill (1975). The asymptotic properties of the introduced estimator are established when the spatial process is adequately approximated by a spatial M-dependent process or a spatial linear causal process, or when the process satisfies a strong mixing condition (α-mixing). • In practice, it is often useful to link the variable of interest Y with a covariate X. In this situation, the tail index depends on the observed value x of the covariate X, and the unknown function will be called the conditional tail index. In most applications, the tail index is not the main object of interest but is used, for instance, to estimate extreme quantiles. The contribution of this chapter is to adapt the tail index estimator introduced in the first part to the conditional framework and to use it to propose an estimator of conditional extreme quantiles. We examine the so-called "fixed design" models, which correspond to the situation where the explanatory variable is deterministic; since the covariate is deterministic, we use the moving window approach to handle it. We study the asymptotic behavior of the proposed estimators and give numerical results based on data simulated with the software "R". • In the third part of this thesis, we extend the work of the second part to the so-called "random design" models, for which the data are spatial observations of a pair (Y, X) of real random variables. For this last model, we propose an estimator of the heavy tail index using the kernel method to handle the covariate. We use an estimator of the conditional tail index belonging to the family of estimators introduced by Goegebeur et al. (2014b).
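The Hill (1975) estimator that anchors the first contribution can be sketched in a few lines (the Pareto sample and the choice of k are assumptions, and the spatial dependence structure central to the thesis is ignored here):

```python
# Sketch of the Hill (1975) estimator of the tail index gamma of a
# heavy-tailed sample: the mean log-excess of the k largest observations
# over the (k+1)-th largest.
import numpy as np

rng = np.random.default_rng(5)
gamma_true = 0.5
x = rng.pareto(1 / gamma_true, size=5000) + 1.0   # Pareto, tail index 0.5

def hill(sample, k):
    xs = np.sort(sample)
    top = xs[-k:]                                  # k largest observations
    return np.mean(np.log(top)) - np.log(xs[-k - 1])

print(hill(x, k=200))   # should be close to 0.5
```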
APA, Harvard, Vancouver, ISO, and other styles
34

Bouhadjera, Feriel. "Estimation non paramétrique de la fonction de régression pour des données censurées : méthodes locale linéaire et erreur relative." Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0561.

Full text
Abstract:
In this thesis, we are interested in developing robust and efficient methods for the nonparametric estimation of the regression function. The model considered here is the right randomly censored model, which is the most used in various practical fields. First, we propose a new estimator of the regression function using the local linear method. We study its almost sure uniform convergence with rate and improve the order of the bias term. We then compare its performance with that of the classical kernel regression estimator using simulations. In the second step, we consider the regression function estimator based on the minimization of the mean squared relative error (the relative regression estimator, RER). We establish the uniform almost sure consistency with rate of the estimator defined for independent and identically distributed observations, prove its asymptotic normality, and give the explicit expression of the variance term. We conduct a simulation study to confirm our theoretical results and apply our estimator to real data. Then, we study the almost sure uniform convergence (on a compact set) with rate of the relative regression estimator for observations subject to an α-mixing dependency structure. A simulation study shows the good behaviour of the studied estimator, and predictions on generated data illustrate its robustness. Finally, we establish the asymptotic normality of the relative regression estimator for α-mixing data; we construct confidence intervals and perform a simulation study to validate our theoretical results. In addition to the analysis of censored data, the common thread of this modest contribution is the proposal of two prediction methods as alternatives to classical regression: the first approach corrects the boundary effects created by classical kernel estimators and reduces the bias, while the second is more robust and less affected by the presence of outliers in the sample.
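The relative-error idea above has a closed kernel-weighted form; the sketch below illustrates it for positive responses (censoring is ignored, and the kernel, bandwidth and data are assumptions, not the thesis's setting).

```python
# Hedged sketch of the relative-error regression estimator: minimizing
# sum_i K_i ((Y_i - t)/Y_i)^2 over t gives
# m_hat(x) = sum_i K_i / Y_i  /  sum_i K_i / Y_i^2  (positive responses).
import numpy as np

rng = np.random.default_rng(6)
x_obs = rng.uniform(0, 2, 400)
y_obs = np.exp(0.5 * x_obs) * rng.lognormal(0.0, 0.2, 400)  # Y > 0

def rer(x, xs, ys, h=0.2):
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)   # Gaussian kernel weights
    return (w / ys).sum() / (w / ys**2).sum()

print([round(rer(x, x_obs, y_obs), 3) for x in (0.5, 1.0, 1.5)])
```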
APA, Harvard, Vancouver, ISO, and other styles
35

Esstafa, Youssef. "Modèles de séries temporelles à mémoire longue avec innovations dépendantes." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD021.

Full text
Abstract:
We first consider, in this thesis, the problem of the statistical analysis of FARIMA (Fractionally AutoRegressive Integrated Moving-Average) models endowed with uncorrelated but non-independent error terms. These models are called weak FARIMA models and can be used to fit long-memory processes with very general nonlinear dynamics. Relaxing the independence assumption on the noise, a standard assumption usually imposed in the literature, allows weak FARIMA models to cover a large class of nonlinear long-memory processes. The weak FARIMA models are dense in the set of purely non-deterministic stationary processes, so the class formed by these models encompasses that of FARIMA processes with independent and identically distributed (iid) noise. We hereafter call strong FARIMA models those in which the error term is assumed to be iid. We establish procedures for estimating and validating weak FARIMA models. We show, under weak regularity assumptions on the noise, that the least squares estimator of the parameters of weak FARIMA(p,d,q) models is strongly consistent and asymptotically normal. The asymptotic variance matrix of the least squares estimator of weak FARIMA(p,d,q) models has the "sandwich" form. This matrix can be very different from the asymptotic variance obtained in the strong case (i.e. when the noise is assumed to be iid). We propose, by two different methods, a consistent estimator of this matrix. An alternative method based on a self-normalization approach is also proposed to construct confidence intervals for the parameters of weak FARIMA(p,d,q) models; this technique allows us to bypass the problem of estimating the asymptotic variance matrix of the least squares estimator. We then pay particular attention to the problem of the validation of weak FARIMA(p,d,q) models. We show that the residual autocorrelations have a normal asymptotic distribution with a covariance matrix different from the one obtained in the strong FARIMA case. This allows us to deduce the exact asymptotic distribution of portmanteau statistics and thus to propose modified versions of the standard Box-Pierce and Ljung-Box portmanteau tests. It is well known that the asymptotic distribution of portmanteau tests is correctly approximated by a chi-squared distribution when the error term is assumed to be iid. In the general case, we show that this asymptotic distribution is a mixture of chi-squared distributions, which can be very different from the usual chi-squared approximation of the strong case. We adopt the same self-normalization approach used for constructing the confidence intervals of weak FARIMA model parameters to test the adequacy of weak FARIMA(p,d,q) models. This method has the advantage of avoiding the problem of estimating the asymptotic variance matrix of the joint vector of the least squares estimator and the empirical autocovariances of the noise. Secondly, we deal in this thesis with the problem of estimating autoregressive models of order 1 endowed with fractional Gaussian noise when the Hurst parameter H is assumed to be known. More precisely, we study the convergence and the asymptotic normality of the generalized least squares estimator of the autoregressive parameter of these models.
APA, Harvard, Vancouver, ISO, and other styles
36

Liao, Hung-Neng, and 廖宏能. "Evaluation of Tukey’s Control Chart to Non-normality." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/28347091111825982109.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
94
Recently, Alemi (2004) proposed an individuals control chart, the Tukey control chart, in the Journal of Quality Management in Health Care. The proposed control chart is easy to construct and has several advantages: (1) it works well with small sample sizes, (2) it does not require parameter estimation, (3) it is unaffected by non-normal data distributions, and (4) it is not negatively impacted by outliers. This control chart has attracted scholarly attention since it was proposed, but that research focused on the power of Tukey's control chart when data have various levels of autocorrelation. Consequently, this research investigates the effect of non-normality on Tukey's control chart and examines its properties.
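For reference, the Tukey control chart's limits come from the quartiles and the interquartile range; a minimal sketch follows (the skewed sample is an illustrative stand-in, and the 1.5 multiplier is the usual fourth-spread convention).

```python
# Sketch of Alemi's (2004) Tukey control chart for individual observations:
# control limits are Q1 - 1.5*IQR and Q3 + 1.5*IQR.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=1.0, size=30)     # skewed individual data

q1, q3 = np.percentile(obs, [25, 75])
iqr = q3 - q1
lcl, ucl = q1 - 1.5 * iqr, q3 + 1.5 * iqr          # Tukey control limits
signals = np.flatnonzero((obs < lcl) | (obs > ucl))
print(f"LCL = {lcl:.2f}, UCL = {ucl:.2f}, out-of-control points: {signals}")
```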
APA, Harvard, Vancouver, ISO, and other styles
37

Zheng, Bing Hong, and 鄭炳宏. "Control charts for process variability under non-normality." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/87894584813549733622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zheng, Bing-Hong, and 鄭炳宏. "Control Charts for Process Variability Under Non-Normality." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/69749030505398333717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Shiau, Wu-Fu, and 蕭武夫. "Non-normality and the Control Chart for Median." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/09868522092875983900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Guo, Hong-Ying, and 郭虹纓. "Robustness Assessment of Process Capability Indices Under Non-Normality." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/70978286438328625531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

CHEN, WEI-ZHI, and 陳偉智. "Design of Run Sum Control Chart Under Non-Normality." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/5c2m9j.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Industrial Engineering and Management
107
Owing to the rise of consumer awareness, the improvement of quality has become a primary goal of companies, and the stability of the process has a direct impact on quality. The control chart, the most effective and widely used tool of Statistical Process Control (SPC), is often used to monitor processes. The Shewhart control charts perform well when the process shifts greatly, but poorly when the process shifts only slightly. Roberts (1966) proposed the Run Sum control chart, whose performance is clearly better than the Shewhart control chart for medium or small process shifts; it is also easy and convenient to operate, so it provides a better option for controlling a process. Control charts are usually constructed under the assumption that the data follow a normal distribution. In reality, the measurement data of many processes do not obey the normal distribution: Chang and Bai (2001) mentioned that the distributions of measurements from chemical processes, semiconductor processes and cutting-tool wear processes are often skewed. When data skewness increases and control charts established under the normal distribution are used to monitor the skewed data, the false alarm rate of the control chart increases. In this study, different Burr distributions are used to represent non-normal distributions with different degrees of skewness, and an asymmetric Run Sum control chart design suited to the data distribution is established for the non-normal case. A genetic algorithm is used to solve for the optimal parameter combination of the control chart. Finally, its performance is compared with the asymmetric Shewhart control chart under non-normal distributions. The results show that the asymmetric Run Sum control chart proposed in this study detects process shifts more quickly under non-normal distributions.
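A hedged sketch of a run sum scheme of the kind described above follows; the zone scores, signal limit and data are illustrative assumptions, not the asymmetric chart designed in the thesis.

```python
# Hedged sketch of a symmetric run sum scheme: each point scores 0/1/2/3
# according to the sigma band it falls in; scores accumulate while points
# stay on one side of the centre line, reset when the side changes, and a
# signal is raised when the cumulative score reaches a limit (5 here).
import numpy as np

rng = np.random.default_rng(12)
x = rng.standard_normal(100); x[60:] += 0.8     # small sustained shift
mu, sigma = 0.0, 1.0

total, side = 0, 0
for i, xi in enumerate(x, start=1):
    z = (xi - mu) / sigma
    s = min(int(abs(z)), 3)                     # score 0..3 by sigma zone
    cur = 1 if z >= 0 else -1
    total = s if cur != side else total + s     # reset on side change
    side = cur
    if total >= 5:
        print(f"run sum signal at observation {i}")
        break
```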
APA, Harvard, Vancouver, ISO, and other styles
42

Kang, Fu-Sen, and 康富森. "The Research on Non-normality of individual EWMA Control Chart." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/20742848115769457358.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Statistics
89
On variables control charts, the quality characteristic is usually assumed to be normally distributed. Nelson (1976) discussed the x-bar control chart under non-normality; his study indicates that data from a gamma distribution with a small value of r (r = 0.5 and 1) cause a large difference in the α-risk compared with normality. In this paper, I discuss the individual EWMA control chart under non-normality, using charting, Gaussian quadrature and simulations to examine the dependence of the two control charts on normality. The results show that the x control chart is more sensitive to departures from normality than the individual EWMA control chart.
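For reference, a minimal sketch of the individuals EWMA chart discussed above (lambda, L and the in-control parameters are conventional illustrative choices; in practice they would be estimated):

```python
# Sketch of an EWMA chart for individual observations:
# z_i = lambda*x_i + (1-lambda)*z_{i-1}, with exact time-varying limits
# mu +/- L*sigma*sqrt(lambda/(2-lambda)*(1-(1-lambda)^(2i))).
import numpy as np

rng = np.random.default_rng(8)
x = rng.standard_normal(50); x[30:] += 1.0   # mean shift after point 30
mu, sigma, lam, L = 0.0, 1.0, 0.2, 3.0

z = mu
for i, xi in enumerate(x, start=1):
    z = lam * xi + (1 - lam) * z                         # EWMA recursion
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    if abs(z - mu) > width:
        print(f"signal at observation {i}, z = {z:.3f}")
        break
```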
APA, Harvard, Vancouver, ISO, and other styles
43

Pan, Peng-Yuan, and 潘鵬元. "Application of Bootstrap Method in Process Incapability Index under Non-Normality." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/03771201419133603230.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
92
One of the most important tools for controlling quality is process capability index analysis. Traditionally, it has been assumed that the underlying distribution of the measurements is normal; however, this is often not the case in real manufacturing processes. In this paper, the distribution function of the Burr distribution and the nonparametric bootstrap method are employed to examine confidence intervals of the process incapability index defined by Greenwich and Jahr-Schaffrath (1995). According to the results, the kurtosis coefficient has a more significant effect than the skewness coefficient. Therefore, when estimating the process incapability index with the bootstrap method, one should examine the effect of the kurtosis of the sample distribution departing from normality.
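A hedged sketch of the percentile-bootstrap interval for the process incapability index follows; the index is commonly written as Cpp = ((mu - T)/(D/3))^2 + (sigma/(D/3))^2 with D = (USL - LSL)/2, after Greenwich and Jahr-Schaffrath (1995), and the specification limits, target and sample below are assumptions.

```python
# Hedged sketch: percentile-bootstrap confidence interval for the process
# incapability index Cpp on skewed (gamma) data.
import numpy as np

rng = np.random.default_rng(9)
data = rng.gamma(4.0, 0.5, size=100)         # skewed process data
usl, lsl = 4.5, 0.5
target, d = (usl + lsl) / 2, (usl - lsl) / 2

def cpp(x):
    # inaccuracy term + imprecision term
    return ((x.mean() - target) / (d / 3)) ** 2 + (x.std(ddof=1) / (d / 3)) ** 2

boot = np.array([cpp(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])
print(cpp(data), np.percentile(boot, [2.5, 97.5]))
```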
APA, Harvard, Vancouver, ISO, and other styles
44

Hsu, Ya-Chen, and 徐雅甄. "Robustness of the EWMA Control Chart to Non-normality for Autocorrelated Processes." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/73662462652940348438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Weng, Tzu-Ying, and 翁慈霙. "Generalized Uniform Integrability and Its Applications to Asymptotic Normality for Non-i.i.d. Sequences." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/30913090148719235013.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Mathematics
96
When one proves central limit theorems for dependent sequences of random variables, two prerequisites should be verified: stochastic convergence of the normalized sample second moment and the renowned Lindeberg condition. A host of references can be found in Chow and Teicher (1997) and Hall and Heyde (1980). In this thesis, we introduce a general concept of uniform integrability and then exploit it to prove asymptotic normality for non-i.i.d. sequences.
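For reference, the Lindeberg condition named above takes the following standard form (background notation, not a statement from the thesis):

```latex
% Lindeberg condition for a mean-zero array with s_n^2 = \sum_{i=1}^n \operatorname{Var}(X_i):
% for every \varepsilon > 0,
\[
  \frac{1}{s_n^2} \sum_{i=1}^{n}
    \mathbb{E}\!\left[ X_i^2 \,
      \mathbf{1}\{ |X_i| > \varepsilon s_n \} \right]
  \xrightarrow[n \to \infty]{} 0 .
\]
```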
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Pin-Hao, and 王品皓. "The Economic Design of Average Control chart Under Non-normality and Correlated Subgroups." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/15690059983602600548.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
88
Since 1924, when Dr. Shewhart presented the first control chart, statistical methods have provided useful applications in industrial process control. Duncan (1956) proposed the first model for determining the sample size (n), the interval between successive samples (h), and the control limits of a control chart that minimize the average cost when a single out-of-control state (assignable cause) exists. Traditionally, when designing control charts, one usually assumes that the measurements in the sample are normally distributed and independent. However, this assumption may not be tenable. If the measurements are asymmetrically distributed and correlated, the sample statistic will be approximately normally distributed only when the sample size n is sufficiently large, which may reduce the ability of a control chart to detect assignable causes. In this paper, the economic design of the x-bar control chart under non-normality and correlated samples is developed using the Burr distribution. This research has three parts: the economic statistical design of the chart using Duncan's and Alexander's cost models for non-normal data; the economic design of the chart using Duncan's cost model under normality and correlated data; and the economic design of the chart using Duncan's and Alexander's cost models for non-normal and correlated data. The comparisons show that an increase in the correlation coefficient leads to increases in both the sample size and the sampling interval, and to wider control limits, under Duncan's cost model with correlated data. An increase in the correlation coefficient leads to decreases in the sample size and the sampling interval, and to wider control limits, under Alexander's cost model with correlated data. The sample size is not significantly affected by non-normality under either Duncan's or Alexander's cost model. Increases in the skewness and kurtosis coefficients result in an increase in the sampling interval; the control limits are robust to both the skewness and kurtosis coefficients under non-normal data. A slight effect may be observed when non-normal correlated data are considered.
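This design, like several below, relies on the Burr distribution to represent skewed process data; a minimal sampling sketch follows (the shape parameters c and k are illustrative assumptions).

```python
# Sketch of inverse-CDF sampling from the Burr XII distribution:
# F(x) = 1 - (1 + x^c)^(-k), so F^{-1}(u) = ((1 - u)^(-1/k) - 1)^(1/c).
import numpy as np

rng = np.random.default_rng(10)

def burr_sample(c, k, size):
    u = rng.uniform(size=size)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

x = burr_sample(c=3.0, k=6.0, size=10_000)
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3   # sample skewness
print(x.mean(), x.std(), skew)
```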
APA, Harvard, Vancouver, ISO, and other styles
47

楊俊輝. "Economic model of X-chart under non-normality and measurement errors: a sensitivity study." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/11288781492404754176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Junaidi, S. Si. "Meta-analysis adjusting for heterogeneity, dependence and non-normality: a Bayesian parametric approach." Thesis, 2015. http://hdl.handle.net/1959.13/1296543.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Independence and dependence between studies in meta-analysis are assumptions imposed on the structure of hierarchical Bayesian meta-analytic models. Whilst independence assumes that two or more studies have no correlation in meta-analysis, dependence can occur as a result of study reports using the same data or authors (Stevens & Taylor, 2009). In this thesis, the relaxation of the assumption of independence, and/or of a normal distribution for the true study effects, is investigated. A variety of statistical meta-analytic models were developed and extended in this thesis. These include the DuMouchel (DuMouchel, 1990) model and the hierarchical Bayesian meta-regression (HBMR) (Jones et al., 2009) model, which assume independence within and between studies or between subgroups. Also investigated were the hierarchical Bayesian linear model (HBLM) (Stevens, 2005) and the hierarchical Bayesian delta-splitting (HBDS) (Stevens & Taylor, 2009) model, which allow for dependence between studies and sub-studies, introducing dependency at the sampling and hierarchical levels. Overall, the General Bayesian Linear Model (GBLM) theorems, the Gibbs sampler, and the Metropolis-Hastings and Metropolis-within-Gibbs algorithms were shown to produce good estimates for specific models. The analytical forms of the joint posterior distributions of all parameters for the DuMouchel and HBMR models were derived using the GBLM theorems, with the models presented in the form of matrices to which the theorems could be directly applied. The GBLM theorems were shown to be useful alternative meta-analytic approaches. The Gibbs sampler algorithm was demonstrated to be an appropriate approach for approximating the parameters of the DuMouchel model, for which sensitivity analyses were conducted by imposing different prior distributions at the study level. In contrast, in the HBMR model, different prior specifications were imposed at the subgroup level. An extended GBLM theorem was used to approximate the joint posterior distribution of parameters in the HBMR, given that the analytical derivation of the posterior distribution for the HBMR model can be computationally intractable due to the integration of multiple functions. The DuMouchel and HBMR models developed were demonstrated on a data set related to the incidence of Ewing's sarcoma (Honoki et al., 2007) and on a study relating to exposure to certain chemicals and reproductive health (Jones et al., 2009), respectively. Consistency of the results suggested that the GBLM theorems and the Gibbs sampler algorithm are good alternative approaches to parameter estimation for the DuMouchel and HBMR models. Parameter estimates were generally not sensitive to the imposition of different prior distributions on the mean and variance for the DuMouchel model, and were close to the true values when different values were specified for the hyper-parameters of the HBMR model, indicating robust models. The HBLM and HBDS models were introduced to allow for dependency at the sampling and hierarchical levels. The Gibbs sampler and Metropolis-within-Gibbs algorithms were used to estimate the joint posterior distributions of all parameters for the HBLM and HBDS models, respectively. The Gibbs sampler algorithm was shown to successfully approximate the joint posterior distribution of parameters in the HBLM. The analytical form of the HBLM for the l-dependence group was derived by calculating the conditional posterior distribution of each parameter, as the distributions were in standard form. The joint posterior distribution of all parameters for the HBDS model, however, was derived using the Metropolis-within-Gibbs algorithm, chosen because the conditional posterior distributions of some parameters were in non-standard form. The formula for the joint posterior distribution was tested successfully on studies assessing the effects of native-language vocabulary aids on second-language reading. Non-normal analogues of the independent and dependent DuMouchel model and HBLM were developed in the thesis, with the multivariate normal distribution for the true study effects replaced by the non-central multivariate t distribution. The joint posterior distributions of all parameters for the non-normal DuMouchel model and the non-normal HBLM were approximated using the Metropolis-Hastings algorithm, due to its ability to deal with the non-standard form of the conditional posterior distributions of the parameters. Estimation of the parameters of the non-normal models was successfully conducted using R. The Metropolis-Hastings algorithm was demonstrated to be a useful approach for estimating the joint posterior distribution of a hierarchical Bayesian model when a non-standard form of the joint posterior is encountered. It is shown that conducting a meta-analysis which allows for dependency and/or a non-normal distribution for the true study effects in hierarchical Bayesian models can lead to good overall conclusions.
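As a concrete illustration of the Gibbs sampling step described above, here is a hedged sketch for the basic normal-normal hierarchical meta-analysis model; the priors, hyper-parameters and data are assumptions, and this is a simplified relative of the thesis's models, not its exact samplers.

```python
# Hedged sketch: Gibbs sampler for y_i ~ N(theta_i, s_i^2),
# theta_i ~ N(mu, tau^2), with a flat prior on mu and an
# inverse-gamma(a, b) prior on tau^2.
import numpy as np

rng = np.random.default_rng(11)
y = np.array([0.28, 0.10, 0.41, 0.16, 0.33])   # study effect estimates
s2 = np.array([0.04, 0.03, 0.05, 0.02, 0.04])  # known within-study variances
k, a, b = y.size, 2.0, 0.1

mu, tau2 = y.mean(), 0.05
draws = []
for it in range(5000):
    prec = 1 / s2 + 1 / tau2                   # full conditional precisions
    theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
    tau2 = 1 / rng.gamma(a + k / 2, 1 / (b + ((theta - mu) ** 2).sum() / 2))
    if it >= 1000:                             # discard burn-in draws
        draws.append((mu, tau2))
mu_draws, tau2_draws = np.array(draws).T
print(mu_draws.mean(), tau2_draws.mean())
```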
APA, Harvard, Vancouver, ISO, and other styles
49

Liou, Jia-Hueng, and 劉家宏. "Non-Normality of the Joint Economic Design of X-bar and R Control Chart." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/24656584001280333879.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
87
Since Duncan's pioneering work on the economic design of the X-bar control chart, there has been much work on the economic design of different control charts. Saniga was the first to propose, in 1977, the joint economically optimal design of X-bar and R control charts. In his research, the quality characteristic is assumed to be normally distributed, but in practice there are cases where the quality characteristic is not normally distributed. In this research, the Burr distribution is used to represent the distribution of a non-normally distributed quality characteristic, and Saniga's joint economic design model is used as the basis for developing the joint economic design of the X-bar and R control chart. A genetic algorithm procedure is employed to search for the optimal values of the economic design parameters of the X-bar and R control chart, and a computer program is developed to help the practitioner search for the optimal design parameters. Two points must be considered before making use of this study: 1. The distribution of the quality characteristic must be one that can be approximated by the Burr distribution. 2. The non-normality of the quality characteristic should be understood in advance, and the skewness and kurtosis coefficients of the non-normal distribution obtained before making use of this study. Twelve categories of non-normal distributions, each including 81 examples, are presented and solved for the optimum in this research. This research found that if the normal model is applied when the distribution of the quality characteristic is in fact non-normal, the false alarm rate and the expected cost per unit of output under the normal model are higher than those obtained in this research.
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Kung-Hong, and 林昆宏. "Non-Normality of the Joint Economic Design of X-bar and S Control Chart." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/56180967728341392283.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
91
Traditionally, the quality characteristic of the observations is assumed to be normally distributed when a control chart is applied for statistical process control. If the observed values are not normally distributed, the traditional design methods for the control chart may reduce the chart's ability to detect non-chance causes. Based on the Burr distribution, the Hooke and Jeeves optimal search rule and computer simulation, this research develops a joint economic design model of the X-bar and S control chart under non-normality. The thesis discusses X-bar and S control charts that control the mean and variance of process quality at the same time, using Knappenberger and Grandage's (1969) cost model; it also proposes an economic design that maximizes the profit per unit. The purposes of this research are: 1. To apply non-normal distributions to the joint economic design of the X-bar and S control chart. 2. To develop control limits for the joint economic design of the X-bar and S control chart under non-normal distributions. 3. To find optimal solutions for different (c, k) and perform sensitivity analyses.
APA, Harvard, Vancouver, ISO, and other styles