
Dissertations / Theses on the topic 'Analysis of variance'


Consult the top 50 dissertations / theses for your research on the topic 'Analysis of variance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ramazi, Pouria. "Variance Analysis of Parallel Hammerstein Models." Thesis, KTH, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102169.

Full text
Abstract:
In this thesis we generalize some recent results on the variance analysis of Hammerstein models. A variance formula for an arbitrary number of parallel blocks is derived. This expression shows that the variance in one block increases due to the estimation of parameters in other blocks, but levels off when the number of parameters in the other blocks reaches the number of parameters in that block. As a second contribution, the problem of how to design the input so that the identification process leads to a more accurate model is considered. In other words, we study how to choose the input signal so that the model error described above is minimized. The investigations show that the optimal input probability density function has a surprisingly simple form. In summary, some of the derived results can be used directly in practice, while others might be used for further research.
APA, Harvard, Vancouver, ISO, and other styles
2

Zoglat, Abdelhak. "Analysis of variance for functional data." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/10136.

Full text
Abstract:
In this dissertation we present an extension to the well-known theory of multivariate analysis of variance. In various situations data are continuous stochastic functions of time or space. The speed of pollutants diffusing through a river, the real amplitude of a signal received from a broadcasting satellite, or the hydraulic conductivity rates in a given region are examples of such processes. After the mathematical background, we develop tools for analyzing such data. Namely, we develop estimators, tests, and confidence sets for the parameters of interest. We extend these results, obtained under the normality assumption, and show that they are still valid if this assumption is relaxed. Some examples of applications of our techniques are given. We also outline how the latter can apply to random and mixed models for continuous data. In the appendix, we give some programs which we use to compute the distributions of some of our test statistics.
APA, Harvard, Vancouver, ISO, and other styles
3

Di, Gessa Giorgio. "Simple strategies for variance uncertainty in meta-analysis." Connect to e-thesis, 2007. http://theses.gla.ac.uk/128/.

Full text
Abstract:
Thesis (M.Sc.(R)) - University of Glasgow, 2007.
M.Sc.(R) thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
4

Nisa, Khoirin. "On multivariate dispersion analysis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2025.

Full text
Abstract:
This thesis examines the multivariate dispersion of normal stable Tweedie (NST) models. Three generalized variance estimators of some NST models are discussed. Then, within the framework of the natural exponential family, two characterizations of the normal Poisson model, which is a special case of NST models with a discrete component, are shown: first by the variance function and then by the generalized variance function. The latter provides a solution to a particular Monge-Ampère equation problem. Finally, to illustrate the application of the generalized variance of normal stable Tweedie models, examples from real data are provided.
APA, Harvard, Vancouver, ISO, and other styles
5

Khalilzadeh, Amir Hossein. "Variance Dependent Pricing Kernels in GARCH Models." Thesis, Uppsala universitet, Analys och tillämpad matematik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nagarajan, Balaji. "Analytic Evaluation of the Expectation and Variance of Different Performance Measures of a Schedule under Processing Time Variability." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31264.

Full text
Abstract:
The realm of manufacturing is replete with instances of uncertainties in job processing times, machine statuses (up or down), demand fluctuations, due dates of jobs and job priorities. These uncertainties stem from the inability to gather accurate information about the various parameters (e.g., processing times, product demand) or to gain complete control over the different manufacturing processes that are involved. Hence, it becomes imperative on the part of a production manager to take into account the impact of uncertainty on the performance of the system at hand. This uncertainty, or variability, is of considerable importance in the scheduling of production tasks. A scheduling problem primarily involves allocating the jobs and determining their start times for processing on a single machine or multiple machines (resources) with the objective of optimizing a performance measure of interest. If the problem parameters of interest (e.g., processing times, due dates, release dates) are deterministic, the scheduling problem is relatively easier to solve than when the information about these parameters is uncertain. From a practical point of view, the knowledge of these parameters is, more often than not, uncertain, and it becomes necessary to develop a stochastic model of the scheduling system in order to analyze its performance. Investigation of the stochastic scheduling literature reveals that the preponderance of the work reported has dealt with optimizing the expected value of the performance measure. By focusing only on the expected value and ignoring the variance of the measure used, the scheduling problem becomes purely deterministic and the significant ramifications of schedule variability are essentially neglected. In many practical cases, a scheduler would prefer a stable schedule with minimum variance to a schedule that has a lower expected value and unknown (and possibly high) variance. Hence, it becomes apparent that schedule efficiency should be defined in terms of both the expectation and variance of the performance measure used. It could easily be perceived that the primary reasons for neglecting variance are the complications arising out of variance considerations and the difficulty of solving the underlying optimization problem. Moreover, research work to develop closed-form expressions or methodologies to determine the variance of the performance measures is very limited in the literature. However, conceivably, such an evaluation or analysis can only help a scheduler in making appropriate decisions in the face of an uncertain environment. Additionally, these expressions and methodologies can be incorporated in various scheduling algorithms to determine efficient schedules in terms of both the expectation and variance. In our research work, we develop such analytic expressions and methodologies to determine the expectation and variance of different performance measures of a schedule. The performance measures considered are both completion-time and tardiness-based measures. The scheduling environments considered in our analysis involve a single machine, parallel machines, flow shops and job shops. The processing times of the jobs are modeled as independent random variables with known probability density functions. With the schedule given a priori, we develop closed-form expressions or devise methodologies to determine the expectation and variance of the performance measures of interest.
We also describe in detail the approaches that we used for the various scheduling environments mentioned earlier. The developed expressions and methodologies were programmed in MATLAB R12 and illustrated with a few sample problems. It is our understanding that knowing the variance of the performance measure in addition to its expected value would aid in determining the appropriate schedule to use in practice. A scheduler would be in a better position to make decisions knowing the variability of the schedules and, consequently, could strike a balance between the expected value and the variance.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
7

Meterelliyoz, Kuyzu Melike. "Variance parameter estimation methods with re-use of data." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26490.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Alexopoulos, Christos; Committee Co-Chair: Goldsman, David; Committee Member: Kim, Seong-Hee; Committee Member: Shapiro, Alexander; Committee Member: Spruill, Carl. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
8

Chao, Jackson Sheng-Kuang. "Analysis of variance impact on manufacturing flow time." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13339.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1991, and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1991.
Includes bibliographical references (leaves 119-120).
by Jackson Sheng-Kuang Chao.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
9

Pathiravasan, Chathurangi Heshani Karunapala. "Generalized Semiparametric Approach to the Analysis of Variance." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/dissertations/1702.

Full text
Abstract:
The one-way analysis of variance (ANOVA) is mainly based on several assumptions and can be used to compare the means of two or more independent groups of a factor. To relax the normality assumption in one-way ANOVA, recent studies have considered exponential distortion or tilt of a reference distribution. The reason for the exponential distortion was not investigated before; thus the main objective of this study is to closely examine the reason behind it. In doing so, a new generalized semi-parametric approach for one-way ANOVA is introduced. The proposed method compares not only the means but also the variances of distributions of any type. Simulation studies show that the proposed method performs more favorably than classical ANOVA. The method is demonstrated on meteorological radar data and credit limit data. The asymptotic distribution of the proposed estimator was determined in order to test the hypothesis of equality of one-sample multivariate distributions. The power comparison for one-sample multivariate distributions reveals that there is a significant power improvement in the proposed chi-square test compared to Hotelling's T-squared test for non-normal distributions. A bootstrap paradigm is incorporated for testing equidistribution of multiple samples. As far as power comparison simulations for multiple large samples are concerned, the proposed test outperforms other existing parametric, nonparametric and semi-parametric approaches for non-normal distributions.
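For a concrete baseline, the classical one-way ANOVA that this generalized semi-parametric approach relaxes can be run in a few lines. The sketch below uses hypothetical group samples and SciPy's f_oneway; it is not the author's method.

```python
# Classical one-way ANOVA baseline (the procedure the thesis generalizes).
# Data are hypothetical; f_oneway compares group means only, whereas the
# semiparametric approach described above also compares variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.0, scale=2.0, size=30)
group_c = rng.normal(loc=10.5, scale=2.0, size=30)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```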
APA, Harvard, Vancouver, ISO, and other styles
10

Prosser, Robert James. "Robustness of multivariate mixed model ANOVA." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25511.

Full text
Abstract:
In experimental or quasi-experimental studies in which a repeated measures design is used, it is common to obtain scores on several dependent variables on each measurement occasion. Multivariate mixed model (MMM) analysis of variance (Thomas, 1983) is a recently developed alternative to the MANOVA procedure (Bock, 1975; Timm, 1980) for testing multivariate hypotheses concerning effects of a repeated factor (called occasions in this study) and interaction between repeated and non-repeated factors (termed group-by-occasion interaction here). If a condition derived by Thomas (1983), multivariate multi-sample sphericity (MMS), regarding the equality and structure of orthonormalized population covariance matrices is satisfied (given multivariate normality and independence for distributions of subjects' scores), valid likelihood-ratio MMM tests of group-by-occasion interaction and occasions hypotheses are possible. To date, no information has been available concerning actual (empirical) levels of significance of such tests when the MMS condition is violated. This study was conducted to begin to provide such information. Departure from the MMS condition can be classified into three types— termed departures of types A, B, and C respectively: (A) the covariance matrix for population ℊ (ℊ = 1,...G), when orthonormalized, has an equal-diagonal-block form but the resulting matrix for population ℊ is unequal to the resulting matrix for population ℊ' (ℊ ≠ ℊ'); (B) the G populations' orthonormalized covariance matrices are equal, but the matrix common to the populations does not have equal-diagonal-block structure; or (C) one or more populations has an orthonormalized covariance matrix which does not have equal-diagonal-block structure and two or more populations have unequal orthonormalized matrices. In this study, Monte Carlo procedures were used to examine the effect of each type of violation in turn on the Type I error rates of multivariate mixed model tests of group-by-occasion interaction and occasions null hypotheses. For each form of violation, experiments modelling several levels of severity were simulated. In these experiments: (a) the number of measured variables was two; (b) the number of measurement occasions was three; (c) the number of populations sampled was two or three; (d) the ratio of average sample size to number of measured variables was six or 12; and (e) the sample size ratios were 1:1 and 1:2 when G was two, and 1:1:1 and 1:1:2 when G was three. In experiments modelling violations of types A and C, the effects of negative and positive sampling were studied. When type A violations were modelled and samples were equal in size, actual Type I error rates did not differ significantly from nominal levels for tests of either hypothesis except under the most severe level of violation. In type A experiments using unequal groups in which the largest sample was drawn from the population whose orthogonalized covariance matrix has the smallest determinant (negative sampling), actual Type I error rates were significantly higher than nominal rates for tests of both hypotheses and for all levels of violation. In contrast, empirical levels of significance were significantly lower than nominal rates in type A experiments in which the largest sample was drawn from the population whose orthonormalized covariance matrix had the largest determinant (positive sampling). Tests of both hypotheses tended to be liberal in experiments which modelled type B violations. 
No strong relationships were observed between actual Type I error rates and any of: severity of violation, number of groups, ratio of average sample size to number of variables, and relative sizes of samples. In equal-groups experiments modelling type C violations in which the orthonormalized pooled covariance matrix departed at the more severe level from equal-diagonal-block form, actual Type I error rates for tests of both hypotheses tended to be liberal. Findings were more complex under the less severe level of structural departure. Empirical significance levels did not vary with the degree of interpopulation heterogeneity of orthonormalized covariance matrices. In type C experiments modelling negative sampling, tests of both hypotheses tended to be liberal. Degree of structural departure did not appear to influence actual Type I error rates but degree of interpopulation heterogeneity did. Actual Type I error rates in type C experiments modelling positive sampling were apparently related to the number of groups. When two populations were sampled, both tests tended to be conservative, while for three groups, the results were more complex. In general, under all types of violation the ratio of average group size to number of variables did not greatly affect actual Type I error rates. The report concludes with suggestions for practitioners considering use of the MMM procedure based upon the findings and recommends four avenues for future research on Type I error robustness of MMM analysis of variance. The matrix pool and computer programs used in the simulations are included in appendices.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
11

Curran, Thomas Schimpff Joshua. "An analysis of the factors generating the variance between the budgeted and actual operating results of the Naval Aviation Depot at North Island, California." Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483507.

Full text
Abstract:
"Submitted in partial fulfillment of the requirements for the degree of Master of Business Administration from the Naval Postgraduate School, June 2008."
Advisor(s): Euske, Kenneth J. ; Mutty, John E. "June 2008." "MBA professional report"--Cover. Description based on title screen as viewed on August 8, 2008. Includes bibliographical references (p. 69). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
12

Yang, Yani. "Dimension reduction in the regressions through weighted variance estimation." HKBU Institutional Repository, 2009. http://repository.hkbu.edu.hk/etd_ra/1073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Du, Jichang. "Covariate-matched estimator of the error variance in nonparametric regression." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
14

馮子豪 and Tze-ho Fung. "Bootstrap estimation of variance in survey sampling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31208198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fung, Tze-ho. "Bootstrap estimation of variance in survey sampling /." [Hong Kong] : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12362694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wiedemann, Eric A. "Reducing variance between two systems by inducing correlation." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/23345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Newton, Wesley E. "Data Analysis Using Experimental Design Model Factorial Analysis of Variance/Covariance (DMAOVC.BAS)." DigitalCommons@USU, 1985. https://digitalcommons.usu.edu/etd/6378.

Full text
Abstract:
DMAOVC.BAS is a computer program, written in the compiler version of Microsoft BASIC, which performs factorial analysis of variance/covariance with expected mean squares. The program accommodates factorial and other hierarchical experimental designs with balanced sets of data. The program is written for use on most modest-sized microprocessors for which the compiler is available. The program is parameter-file driven; the parameter file consists of the response variable structure, the experimental design model expressed in a structure similar to that seen in most textbooks, information concerning the factors (i.e. fixed or random, and the number of levels), and the information necessary to perform covariance analysis. The results of the analysis are written to separate files in a format that can be used for reporting purposes and further computations if needed.
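The BASIC source itself is not reproduced here. As a rough modern analogue, a balanced two-factor factorial ANOVA of the kind DMAOVC.BAS handles might be computed as in the following sketch, using hypothetical data and statsmodels in place of the original program.

```python
# Sketch of a balanced two-factor factorial ANOVA, analogous in spirit to
# what DMAOVC.BAS computes; the data and factor names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for a in range(3):                 # factor A with 3 levels
    for b in range(2):             # factor B with 2 levels
        for _ in range(4):         # 4 replicates per cell (balanced)
            rows.append({"A": f"a{a}", "B": f"b{b}",
                         "y": 5 + a + 0.5 * b + rng.normal(scale=1.0)})
df = pd.DataFrame(rows)

model = ols("y ~ C(A) * C(B)", data=df).fit()   # main effects + interaction
print(anova_lm(model, typ=2))                   # ANOVA table with F tests
```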
APA, Harvard, Vancouver, ISO, and other styles
18

Lin, Yu-pin. "Multiple-point variance analysis for groundwater monitoring network design." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/19511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Yuh-Ing. "Nonparametric procedures for structured effects in analysis of variance /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487671108307997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Harrison, David. "The development of analysis of variance techniques for angular data." Thesis, Sheffield Hallam University, 1987. http://shura.shu.ac.uk/19759/.

Full text
Abstract:
In many areas of research, such as medical statistics, biology and geostatistics, problems arise requiring the analysis of angular (or directional) data. Many involve experimental design problems and require analysis of variance techniques for suitable analysis of the angular data. These techniques have been developed for very limited cases, and the sensitivity of such techniques to the violation of the assumptions made, and their possible extension to larger experimental models, has yet to be investigated. The general aim of this project is therefore to develop suitable experimental design models and analysis of variance type techniques for the analysis of directional data. Initially, a generalised linear modelling approach is used to derive parameter estimates for one-way classification designs, leading to maximum likelihood methods. This approach, however, is shown to be intractable when applied to larger experimental designs, due to optimization problems. The limited analysis of variance techniques presently available for angular data are reviewed and extended to take account of the possible addition of further factors within an experimental design. These are shown to break down under varying conditions and to call into question basic underlying assumptions regarding the components within the original approach. A new analysis of variance approach is developed which possesses many of the desirable properties held in standard 'linear' statistical analysis of variance. Finally, several data sets are analysed to support the validity of the new techniques.
APA, Harvard, Vancouver, ISO, and other styles
21

Adiga, Nagesh. "Contributions to variable selection for mean modeling and variance modeling in computer experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43592.

Full text
Abstract:
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments. In the first chapter of my thesis, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received quite a bit of attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are conducted first at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of variables, one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly. In this report, we analyze each stage of the method mathematically. Each stage is formulated as a hypothesis test, and its performance expressed in terms of the model parameters. The performance of the VS technique is expressed as a function of the performances in each stage. Based on this, it is possible to compare its performance with the traditional techniques. The second and third chapters of my thesis deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models might be used to explore process behavior if physical experiments (e.g. fabrication of nanoparticles) are costly or time-consuming. Robust parameter design (RPD) is a key technique to improve process repeatability. The absence of replicates in computer experiments (e.g. space-filling designs (SFD)) is a challenge in locating an RPD solution. Recently, there have been studies (e.g. Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010 and 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can be applied in computer simulations. The approaches stated above rely heavily on the estimated mean model because they obtain expressions for variance directly from mean models or by using them for generating replicates. Variance modeling based on some kind of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures to decide groups of pseudo-replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
APA, Harvard, Vancouver, ISO, and other styles
22

Marshall, Williams S. IV. "Robust variance estimation for ranking and selection." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/24263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Caples, Jerry Joseph. "Variance reduction and variable selection methods for Alho's logistic capture recapture model with applications to census data /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Duong, Tamanh Q. Johnson Greg R. Uribe Juan C. "Factors affecting Navy Working Capital Fund (NWCF) net operating result a case study of Naval Facilities Engineering Command Washington, Washington D.C. /." Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/MBAPR/2009/Dec/09Dec%5FDuong%5FMBA.pdf.

Full text
Abstract:
"Submitted in partial fulfillment of the requirements for the degree of Master of Business Administration from the Naval Postgraduate School, December 2009."
Advisor(s): Jones, Lawrence R. ; Potvin, Lisa. "December 2009." "MBA Professional report"--Cover. Description based on title screen as viewed on January 28, 2010. Author(s) subject terms: Navy Working Capital Fund, Net Operating Result (NOR), Accumulative Operating Result (AOR), Naval Facilities Engineering Command (NAVFAC), Public Works Department (PWD), Utilities, Labor Hours, Expenses, Revenues, Break Even, Budget Forecasting, Business Model, Variance Analysis, Stabilized Rate, Financial Management, Operations, Policy, Workload, Industrial Business Information System (IBIS). Includes bibliographical references (p. 71-73). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
25

Cho, Gyo-Young. "Multivariate control charts for the mean vector and variance-covariance matrix with variable sampling intervals." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/37242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Skanke, Björn. "Analysis of Pension Strategies." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143342.

Full text
Abstract:
In a time when people tend to retire earlier and live longer, combined with greater personal responsibility for allocating, or at least choosing, adequately composed pension funds, the importance of a deeper understanding of long-term investment strategies is inevitably accentuated. Against the background of discrepancies in the pension fund strategies suggested by influential fund providers, professional advisers and previous literature, this thesis aims foremost at addressing one particular research question: How should an investor optimally allocate between risky and risk-less assets in a pension fund depending on age? In order to answer the question, the sum of Human wealth, defined as the present value of all expected future incomes, and ordinary Financial wealth is maximized by applying a mean-variance approach and an expected utility approach. The latter, mathematically more sound, method yields a strategy suggesting that 100% of available capital be invested in risky assets until the age of 47, whereafter the portion should be gradually reduced to reach the level of 32% in the last period before retirement. The strategy is clearly preferable to solely holding a risk-free asset, and it just outperforms the commonly applied "100 minus age" strategy.
APA, Harvard, Vancouver, ISO, and other styles
27

Arndt, Carl-Fredrik. "Energy estimates and variance estimation for hyperbolic stochastic partial differentialequations." Thesis, Linköpings universitet, Beräkningsvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70355.

Full text
Abstract:
In this thesis the connections between the boundary conditions and the variance of the solution to a stochastic partial differential equation (PDE) are investigated. In particular, a hyperbolic system of PDEs with stochastic initial and boundary data is considered. The problem is shown to be well-posed on a class of boundary conditions through the energy method. Stability is shown by using summation-by-parts operators coupled with simultaneous approximation terms. By using the energy estimates, the relative variance of the solutions for different boundary conditions is analyzed. It is concluded that some types of boundary conditions yield a lower variance than others. This is verified by numerical computations.
APA, Harvard, Vancouver, ISO, and other styles
28

Chan, Iok Ip. "Analysis of variance and its applications in Macao educational researches." Thesis, University of Macau, 2007. http://umaclib3.umac.mo/record=b1687709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Langan, Dean. "Estimating the heterogeneity variance in a random-effects meta-analysis." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/13507/.

Full text
Abstract:
In a meta-analysis, differences in the design and conduct of studies may cause variation in effects beyond what is expected from chance alone. This additional variation is commonly known as heterogeneity, which is incorporated into a random-effects model. The heterogeneity variance parameter in this model is commonly estimated by the DerSimonian-Laird method, despite being shown to produce negatively biased estimates in simulated data. Many other methods have been proposed, but there has been less research into their properties. This thesis compares all methods to estimate the heterogeneity variance in both empirical and simulated meta-analysis data. First, methods are compared in 12,894 empirical meta-analyses from the Cochrane Database of Systematic Reviews (CDSR). These results showed high discordance in estimates of the heterogeneity variance between methods, so investigating their properties in simulated meta-analysis data is worthwhile. A systematic review of relevant simulation studies was then conducted and identified 12 studies, but there was little consensus between them and conclusions could only be considered tentative. A new simulation study was conducted in collaboration with other statisticians. Results confirmed that the DerSimonian-Laird method is negatively biased in scenarios where within-study variances are imprecise and/or biased. On the basis of these results, the REML approach to heterogeneity variance estimation is recommended. A secondary analysis combines simulated and empirical meta-analysis data and shows all methods usually have poor properties in practice; only marginal improvements are possible using REML. In conclusion, caution is advised when interpreting estimates of the heterogeneity variance and confidence intervals should always be presented to express its uncertainty. More promisingly, the Hartung-Knapp confidence interval method is robust to poor heterogeneity variance estimates, so sensitivity analysis is not usually required for inference on the mean effect.
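As context for the abstract, the DerSimonian-Laird estimator it discusses has a simple closed form. A minimal sketch with hypothetical study effects and within-study variances follows; in practice a REML fit would normally come from a meta-analysis package instead.

```python
# Minimal sketch of the DerSimonian-Laird heterogeneity variance estimator
# (hypothetical effect estimates and within-study variances).
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect estimates
v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])   # within-study variances

w = 1.0 / v                                     # fixed-effect weights
y_bar = np.sum(w * y) / np.sum(w)               # weighted mean effect
Q = np.sum(w * (y - y_bar) ** 2)                # Cochran's Q statistic
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2_dl = max(0.0, (Q - (k - 1)) / c)           # DL estimate, truncated at 0
print(f"tau^2 (DerSimonian-Laird) = {tau2_dl:.4f}")
```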
APA, Harvard, Vancouver, ISO, and other styles
30

Seppala, Christopher Toomath. "Dynamic analysis of variance methods for monitoring control system performance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/NQ42975.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hall, Nathan E. "A Radiographic Analysis of Variance in Lower Incisor Enamel Thickness." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/887.

Full text
Abstract:
The purpose of this study was to help predict the enamel thickness of mandibular incisors. At least two direct digital periapical radiographs were made for each of the 80 subjects. Radiographs were scaled to control for magnification errors using dental study models and computer software. Mesiodistal incisor width and mesial and distal enamel thicknesses were measured. Lateral incisors were determined to be wider than central incisors and distal enamel thicknesses were larger than mesial enamel thicknesses on average. The African American group demonstrated wider incisors and enamel thicknesses than the Caucasian group on average. Enamel thickness positively correlated with tooth width for all incisors. No statistically significant differences were detected between male and female groups. Some conclusions relating to enamel thickness can be made based on race, incisor position, and incisor width, but correlations were not considered strong enough to accurately determine enamel width, without the aid of radiographs.
APA, Harvard, Vancouver, ISO, and other styles
32

Ellington, James Kemp. "Systematic Sources of Variance in Supervisory Job Performance Ratings: A Multilevel Analysis of Between-Rater and Between-Context Variance." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-11022006-194219/.

Full text
Abstract:
The appraisal of job performance is critical for both the practice of human resource management and organizational research. Furthermore, the most frequently used method for measuring performance is a supervisory rating. Given the prevalence of this method, it is crucial to understand the factors which influence rater behaviors. Recent research has indicated that a large portion of the variance in ratings is idiosyncratic to the rater (Scullen, Mount, & Goff, 2000). However, the nature of this idiosyncratic variance remains unclear. Previous models of appraisal have focused either on the cognitive processes involved or, more recently, on the appraisal context. Although this recent focus on contextual issues has shown promise, the extent to which raters are influenced by the context in which they work is unknown. Therefore, the purpose of this research was to contribute to our understanding of supervisory ratings by incorporating a multilevel analytic approach in order to partition the variance between raters from the variance between contexts. This approach allowed for the investigation of several rater and context-level characteristics, in an attempt to explain the variance associated with these two sources. More specifically, a conceptual model was proposed in order to examine rater-level variables including rater tendencies for leniency and halo, along with rater opportunities to observe performance. Contextual factors proposed for study included norms for leniency and halo, opportunities to observe performance at the context level, and the nature of work/activity itself within various contexts. Moreover, this research incorporated a multidimensional performance criterion, in order to provide a more thorough investigation of the relationships of interest here. Results suggested that the rating context accounted for significant variance in both task and citizenship performance ratings. Furthermore, the rater tendency for leniency explained significant between-rater variation in both criteria. The rater tendency for halo was also significant; however, this finding did not recur when analyzing a replication sample of data. At the context level, the norm for leniency consistently predicted variance in citizenship performance, but was only a significant predictor of task performance in one sample. Finally, although these relationships were not consistent across samples, the nature of work/activity and the contextual norm for halo explained significant between-context variation in citizenship performance ratings. The interpretation and implications of these results are discussed, along with limitations of this research and suggestions for future research.
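As an illustration of the multilevel variance partitioning described above, a random-intercept model separates between-context variance from residual (within-context, between-rater) variance. The sketch below uses simulated data and hypothetical column names, not the study's data set.

```python
# Illustrative variance-components sketch: a random intercept for the rating
# context partitions between-context variance from residual variance.
# Data, effect sizes, and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for c in range(20):                                 # 20 rating contexts
    context_effect = rng.normal(scale=0.5)          # between-context variation
    for r in range(10):                             # 10 raters per context
        rating = 3.0 + context_effect + rng.normal(scale=1.0)
        rows.append({"context": c, "rating": rating})
df = pd.DataFrame(rows)

model = smf.mixedlm("rating ~ 1", df, groups=df["context"]).fit()
print(model.summary())   # 'Group Var' is the between-context variance component
```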
APA, Harvard, Vancouver, ISO, and other styles
33

Wardak, Mohammad Alif. "Survival analysis." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2810.

Full text
Abstract:
Survival analysis pertains to a statistical approach designed to take into account the amount of time an experimental unit contributes to a study. A Mayo Clinic study of 418 Primary Biliary Cirrhosis patients over a ten-year period was used. The Kaplan-Meier Estimator, a non-parametric statistic, and the Cox Proportional Hazards method were the tools applied. Kaplan-Meier results include total values/censored values.
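The Kaplan-Meier product-limit estimator named in the abstract can be written out directly. The following sketch uses toy survival times rather than the Mayo Clinic PBC data.

```python
# Minimal Kaplan-Meier product-limit estimator (toy data, not the PBC study).
import numpy as np

times    = np.array([5, 8, 8, 12, 15, 20, 22, 30])   # follow-up times
observed = np.array([1, 1, 0, 1, 0, 1, 1, 0])        # 1 = event, 0 = censored

survival = 1.0
for t in np.unique(times[observed == 1]):             # distinct event times
    at_risk = np.sum(times >= t)                      # subjects still at risk
    deaths = np.sum((times == t) & (observed == 1))   # events at time t
    survival *= 1.0 - deaths / at_risk                # product-limit update
    print(f"t = {t:>2}: S(t) = {survival:.3f}")
```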
APA, Harvard, Vancouver, ISO, and other styles
34

Brien, Christopher J. "Factorial linear model analysis." Title page, table of contents and summary only, 1992. http://thesis.library.adelaide.edu.au/public/adt-SUA20010530.175833.

Full text
Abstract:
"February 1992" Bibliography: leaf 323-344. Electronic publication; Full text available in PDF format; abstract in HTML format. Develops a general strategy for factorial linear model analysis for experimental and observational studies, an iterative, four-stage, model comparison procedure. The approach is applicable to studies characterized as being structure-balanced, multitiered and based on Tjur structures unless the structure involves variation factors when it must be a regular Tjur structure. It covers a wide range of experiments including multiple-error, change-over, two-phase, superimposed and unbalanced experiments. Electronic reproduction.[Australia] :Australian Digital Theses Program,2001.
APA, Harvard, Vancouver, ISO, and other styles
35

Fitzsimmons, Kevin Michael. "Use of saline wastewater for revegetation and creation of wildlife habitat." Diss., The University of Arizona, 1999. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1999_281_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yockey, Ron David. "An investigation of the type I error rates and power of standard and alternative multivariate tests on means under homogeneous and heterogeneous covariance matrices and multivariate normality and nonnormality /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Estolano, Marcial Perez. "Split plot designs." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Talbert, Matthew Brandon. "A column based variance analysis approach to static reservoir model upgridding." Texas A&M University, 2008. http://hdl.handle.net/1969.1/86055.

Full text
Abstract:
The development of coarsened reservoir simulation models from high-resolution geologic models is a critical step in a simulation study. The optimal coarsening sequence becomes particularly challenging in a fluvial channel environment, where channel sinuosity and orientation can result in pay/non-pay juxtaposition in many regions of the geologic model. The optimal coarsening sequence is also challenging in tight gas sandstones, where sharp changes between sandstone and shale beds are predominant and maintaining the pay/non-pay distinction is difficult. Under such conditions, a uniform coarsening will result in mixing of pay and non-pay zones and will likely result in geologically unrealistic simulation models which create erroneous performance predictions. In particular, the upgridding algorithm must keep pay and non-pay zones distinct through a non-uniform coarsening of the geologic model. We present a coarsening algorithm to determine an optimal reservoir simulation grid by grouping fine-scale geologic model cells into effective simulation cells. Our algorithm groups the layers in such a way that the heterogeneity measure of an appropriately defined static property is minimized within the layers and maximized between the layers. The optimal number of layers is then selected based on an analysis resulting in a minimum loss of heterogeneity. We demonstrate the validity of the optimal gridding by applying our method to a history-matched waterflood in a structurally complex and faulted offshore turbiditic oil reservoir. The field is located in a prolific hydrocarbon basin offshore South America. More than 10 years of production data from up to 8 producing wells are available for history matching. We demonstrate that any coarsening beyond the degree indicated by our analysis overly homogenizes the properties on the simulation grid and alters the reservoir response. An application to a tight gas sandstone developed by Schlumberger DCS is also used to verify our algorithm. The specific details of the tight gas reservoir are confidential to Schlumberger's client. Through the use of a reservoir section, we demonstrate the effectiveness of our algorithm by visually comparing the reservoir properties to a Schlumberger fine-scale model.
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Ruiwei. "Genome-wide analysis of DNA methylation variance in healthy human subjects." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52977.

Full text
Abstract:
DNA methylation is a type of epigenetic modification that modulates gene expression by acting as an intermediate between genes and environment; this in turn could trigger phenotypic changes with widespread implications in both disease and population models. Unlike DNA sequence, which is relatively stable and finite, DNA methylation presents itself differently in different tissues, and it is described as the sum of interactions affecting the attachment of methyl groups to DNA, mostly as a result of development and aging, with minor influences from stochastic variability and environmental factors. Most studies involving DNA methylation focus on finding epigenetic changes related to pathogenicity or disease; as a result, certain foundational questions remain unanswered. In order to translate the current knowledge into reliable insights, it is important to answer these questions, then standardize research methods and establish reference epigenomes. Here we begin to address this challenge through two avenues: epigenomic characterization and environmental interaction. To characterize the epigenome, we monitored peripheral blood mononuclear cell DNA methylation levels from healthy subjects over a circadian day, over a month, and under prolonged sample storage. We also investigated tissue-specific variability in DNA methylation by comparing matched peripheral blood mononuclear and buccal epithelial cell samples from healthy subjects. Lastly, we analyzed the impact of diesel exhaust on DNA methylation. We discovered that while overall DNA methylation was stable within a circadian day, certain loci demonstrated significant changes over the course of a month. Prolonged sample storage, on the other hand, had an even larger effect on DNA methylation. When we compared differences across tissues, we found that although both tissues showed extensive probe-wise variability, the specific regions and magnitude of that variability differed strongly between tissues. Lastly, in light of environmental influences, we observed that DNA methylation was sensitive to even short-term exposure to diesel exhaust, and we identified associated CpG sites across the functional genome, as well as in Alu and LINE1 repetitive elements, with most of these exposure-sensitive sites demonstrating loss of DNA methylation.
Science, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
40

Flaspoehler, Timothy Michael. "FW-CADIS variance reduction in MAVRIC shielding analysis of the VHTR." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45743.

Full text
Abstract:
In the following work, the MAVRIC sequence of the Scale6.1 code package was tested for its efficacy in calculating a wide range of shielding parameters with respect to HTGRs. One of the NGNP designs that has gained large support internationally is the VHTR. The development of the Scale6.1 code package at ORNL has been primarily directed towards supporting the current United States reactor fleet of LWR technology. Since plans have been made to build a prototype VHTR, it is important to verify that the MAVRIC sequence can adequately meet the simulation needs of a different reactor technology. This was accomplished by creating a detailed model of the VHTR power plant; identifying important, relevant radiation indicators; and implementing methods using MAVRIC to simulate those indicators in the VHTR model. The graphite moderator used in the design shapes a different flux spectrum than that of water-moderated reactors. The different flux spectrum could lead to new considerations when quantifying shielding characteristics and possibly a different gamma-ray spectrum escaping the core and surrounding components. One key portion of this study was obtaining personnel dose rates in accessible areas within the power plant from both neutron and gamma sources. Additionally, building on professional and regulatory standards, a surveillance capsule monitoring program was designed to mimic those used in the nuclear industry. The high temperatures were designed to supply heat for industrial purposes, not just for power production. Since tritium, a heavier radioactive isotope of hydrogen, is produced in the reactor, it is important to know the distribution of tritium production and the subsequent diffusion from the core to secondary systems to prevent contamination outside of the nuclear island. Accurately modeling indicators using MAVRIC is the main goal. However, it is almost equally important for simulations to be carried out in a timely manner. MAVRIC uses the discrete ordinates method to solve the fixed-source transport equation for both neutrons and gamma rays on a crude geometric representation of the detailed model. This deterministic forward solution is used to solve an adjoint equation with the adjoint source specified by the user. The adjoint solution is then used to create an importance map that can weight particles in a stochastic Monte Carlo simulation. The goal of using this hybrid methodology is to provide complete accuracy with high precision while decreasing overall simulation times by orders of magnitude. The MAVRIC sequence provides a platform to quickly alter inputs so that vastly different shielding studies can be simulated using one model with minimal effort by the user. Each separate shielding study required unique strategies while looking at different regions in the VHTR plant. MAVRIC proved to be effective for each case.
APA, Harvard, Vancouver, ISO, and other styles
41

Fowler, A. M. "Variance Stabilization Revisited: A Case For Analysis Based On Data Pooling." Tree-Ring Society, 2009. http://hdl.handle.net/10150/622607.

Full text
Abstract:
The traditional approach to standardizing tree-ring time series is to divide raw ring widths by a fitted curve. Although the derived ratios are conceptually elegant and have a more homogenous variance through time than simple differences, residual heteroscedasticity associated with variance dependence on local mean ring width may remain. Incorrect inferences about climate forcing may result if this heteroscedasticity is not corrected for, or at least recognized (with appropriate caveats). A new variance stabilization method is proposed that specifically targets this source of heteroscedasticity. It is based on stabilizing the magnitude of differences from standardization curves to a common reference local mean ring width and uses data pooled from multiple radii. Application of the method to a multi-site kauri (Agathis australis (D. Don) Lindley) data set shows that (a) the heteroscedasticity issue addressed may be generic rather than radius-specific, at least for some species, (b) variance stabilization using pooled data works well for standardization curves of variable flexibility, (c) in the case of kauri, simple ratios do not appear to be significantly affected by this cause of heteroscedasticity, and (d) centennial-scale variance trends are highly sensitive to the analytical methods used to build tree-ring chronologies.
APA, Harvard, Vancouver, ISO, and other styles
42

Song, Chenxiao. "Monte Carlo Variance Reduction Methods with Applications in Structural Reliability Analysis." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29801.

Full text
Abstract:
Monte Carlo variance reduction methods have attracted significant interest due to the continuous demand for reducing computational costs in various fields of application. This thesis is based on the content of a collection of six papers contributing to the theory and application of Monte Carlo methods and variance reduction techniques. For theoretical developments, we establish a novel framework for Monte Carlo integration over simplices, spanning everything from sampling to variance reduction. We also investigate the effect of batching for adaptive variance reduction, which aims at running the Monte Carlo simulation simultaneously with the parameter search algorithm using a common sequence of random realizations. Such adaptive variance reduction is moreover employed by strata in a newly proposed stratified sampling framework with dynamic budget allocation. For application to estimating the probability of failure in the context of structural reliability analysis, we formulate adaptive frameworks of stratified sampling with variance reduction by strata as well as stratified directional importance sampling, and survey a variety of numerical approaches employing Monte Carlo methods.
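To make the core idea of variance reduction by stratification concrete, the toy sketch below compares crude Monte Carlo with proportional stratified sampling for a rare-event probability. The integrand, strata, and failure threshold are hypothetical and are not taken from the thesis.

```python
# Crude Monte Carlo vs. proportional stratified sampling for a toy
# "failure" probability P(U > 0.95), U ~ Uniform(0, 1).
import numpy as np

rng = np.random.default_rng(3)
g = lambda u: (u > 0.95).astype(float)        # failure indicator

n, k = 10_000, 10                             # total budget, number of strata
crude = g(rng.random(n))                      # plain i.i.d. sampling
crude_se = crude.std(ddof=1) / np.sqrt(n)

edges = np.linspace(0.0, 1.0, k + 1)          # k equal strata of [0, 1]
per_stratum = n // k
samples = [g(rng.uniform(a, b, per_stratum)) for a, b in zip(edges[:-1], edges[1:])]
strat_est = np.mean([s.mean() for s in samples])                      # weights 1/k
strat_se = np.sqrt(sum(s.var(ddof=1) / per_stratum for s in samples)) / k

print(f"crude MC  : {crude.mean():.4f} +/- {crude_se:.4f}")
print(f"stratified: {strat_est:.4f} +/- {strat_se:.4f}")
```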
APA, Harvard, Vancouver, ISO, and other styles
43

Singh, Gurprit. "Sampling and Variance Analysis for Monte Carlo Integration in Spherical Domain." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10121/document.

Full text
Abstract:
This dissertation introduces a theoretical framework to study different sampling patterns in the spherical domain and their effects on the evaluation of global illumination integrals. Evaluating illumination (light transport) is one of the most essential aspects of achieving realism in image synthesis, and it involves solving multi-dimensional integrals. Monte Carlo based numerical integration schemes are heavily employed to solve these high-dimensional integrals. One of the most important aspects of any numerical integration method is sampling. The way samples are distributed over an integration domain can greatly affect the final result. For example, in images, the effects of various sampling patterns appear in the form of either structural artifacts or completely unstructured noise. In many cases, we may get completely false (biased) results due to the sampling pattern used in integration. The distribution of sampling patterns can be characterized using their Fourier power spectra. It is also possible to use the Fourier power spectrum as input to generate the corresponding sample distribution. This further allows spectral control over the sample distributions. Since this spectral control allows tailoring new sampling patterns directly from the input Fourier power spectrum, it can be used to improve the error in integration. However, a direct relation between the error in Monte Carlo integration and the sampling power spectrum has been missing. In this work, we propose a variance formulation that establishes a direct link between the variance in Monte Carlo integration and the power spectra of both the sampling pattern and the integrand involved. To derive our closed-form variance formulation, we use the notion of homogeneous sample distributions, which allows the error in Monte Carlo integration to be expressed purely as variance. Based on our variance formulation, we develop an analysis tool that can be used to derive theoretical variance convergence rates of various state-of-the-art sampling patterns. Our analysis gives insights into design principles that can be used to tailor new sampling patterns based on the integrand.
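A small numerical experiment conveys the abstract's central point that the sample distribution drives the variance of a Monte Carlo estimate. The sketch below compares i.i.d. random sampling with jittered (stratified) sampling on a hypothetical 1-D integrand rather than the spherical illumination integrals studied in the thesis.

```python
# Empirical variance of a Monte Carlo estimate under two sampling patterns:
# pure random vs. jittered (one sample per stratum). Integrand is a toy choice.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(np.pi * x) ** 2       # smooth test integrand on [0, 1]
n, trials = 64, 2000

def estimate(jittered: bool) -> np.ndarray:
    ests = np.empty(trials)
    for t in range(trials):
        if jittered:                        # one uniform sample per stratum
            x = (np.arange(n) + rng.random(n)) / n
        else:                               # i.i.d. uniform samples
            x = rng.random(n)
        ests[t] = f(x).mean()
    return ests

print("random   variance:", estimate(False).var())
print("jittered variance:", estimate(True).var())
```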
44

Antonini, Claudia. "Folded Variance Estimators for Stationary Time Series." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6931.

Full text
Abstract:
This thesis is concerned with simulation output analysis. In particular, we are interested in estimating the variance parameter of a steady-state output process. The estimation of the variance parameter has immediate applications in problems involving (i) the precision of the sample mean as a point estimator for the steady-state mean μ_X, and (ii) confidence intervals for μ_X. The thesis focuses on new variance estimators arising from Schruben's method of standardized time series (STS). The main idea behind STS is to let such series converge to Brownian bridge processes; their properties are then used to derive estimators for the variance parameter. Following an idea from Shorack and Wellner, we study different levels of folded Brownian bridges. A folded Brownian bridge is obtained from the standard Brownian bridge process by folding it down the middle and then stretching it so that it spans the interval [0,1]. We formulate the folded STS and deduce a simplified expression for it. Similarly, we define the weighted area under the folded Brownian bridge and obtain its asymptotic properties and distribution. We study the square of the weighted area under the folded STS (known as the folded area estimator) and the weighted area under the square of the folded STS (known as the folded Cramér-von Mises, or CvM, estimator) as estimators of the variance parameter of a stationary time series. To obtain results on the bias of the estimators, we provide a complete finite-sample analysis based on their mean-square error. Weights yielding first-order unbiased estimators are found in both the area and CvM cases. Finally, we perform Monte Carlo simulations to test the efficacy of the new estimators on a test bed of stationary stochastic processes, including first-order moving-average and autoregressive processes and the waiting-time process in a single-server Markovian queueing system.
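For readers unfamiliar with standardized time series, the following sketch (Python) shows the classical constant-weight Schruben area estimator that the folded estimators generalize; it is not the folded estimator from the thesis, and the AR(1) process, batch size, and weight choice are illustrative assumptions.

```python
# Minimal sketch (assumption: the classical, unfolded STS area estimator,
# not the thesis's folded variants), applied to batches of an AR(1) process
# whose variance parameter is known in closed form.
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi=0.7, sigma_eps=1.0):
    y = np.empty(n)
    y[0] = rng.normal(scale=sigma_eps / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(scale=sigma_eps)
    return y

def area_estimator(batch):
    # Constant-weight (f = sqrt(12)) area estimator from one batch:
    # A = [ (1/m) * sum_k sqrt(12) * k*(ybar_m - ybar_k)/sqrt(m) ]^2,
    # which is asymptotically sigma^2 * chi^2_1, hence centred on sigma^2.
    m = len(batch)
    k = np.arange(1, m + 1)
    running_means = np.cumsum(batch) / k
    weighted = np.sqrt(12.0) * k * (running_means[-1] - running_means) / np.sqrt(m)
    return weighted.mean() ** 2

phi, sigma_eps = 0.7, 1.0
true_sigma2 = sigma_eps**2 / (1 - phi) ** 2        # variance parameter of AR(1)

y = ar1(n=200_000, phi=phi, sigma_eps=sigma_eps)
batches = y.reshape(200, -1)                       # 200 batches of length 1000
estimate = np.mean([area_estimator(b) for b in batches])
print(f"true sigma^2 = {true_sigma2:.2f}, batched area estimate = {estimate:.2f}")
```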
45

Ma, Tingting. "Isotropy test and variance estimation for high order statistics of spatial point process." HKBU Institutional Repository, 2011. https://repository.hkbu.edu.hk/etd_ra/1297.

Full text
46

Chien, Pao-hua, and 簡寶樺. "Robust Analysis of Variance." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/64999971403528399063.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
Academic year 97 (ROC calendar)
Under the generalized multiple linear regression framework, Tsou (2009) proposed a robust likelihood method based on the normal working model; even if the working model is wrong, it still provides correct inferences for the parameter of interest. We focus on applying this robust method to the analysis of variance and on revising the F statistic and the likelihood ratio statistic accordingly. The robust F statistic allows correct inference about the significance of the regressors, so the robust analysis of variance still provides valid statistical analysis for a regression model even when the normality assumption is inappropriate. The efficacy of the proposed robust method is demonstrated via simulation studies and real data analyses.
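The sketch below (Python) is not Tsou's (2009) robust likelihood adjustment itself; it illustrates the closely related sandwich-covariance idea, a Wald-type test of regressor significance that remains valid when the normal working model's constant-variance assumption fails. The simulated data and design are purely illustrative.

```python
# Minimal sketch (assumption: a generic sandwich-covariance Wald test, not
# Tsou's (2009) method): test regressor significance under a normal working
# model while staying robust to heteroscedastic errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
# Heteroscedastic errors: the noise scale grows with |x1|, violating the
# constant-variance normal model.
y = 1.0 + 0.5 * x1 + 0.0 * x2 + rng.normal(scale=1.0 + np.abs(x1))

beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS point estimates
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
# HC0 sandwich covariance: (X'X)^-1 X' diag(e^2) X (X'X)^-1.
sandwich = XtX_inv @ (X.T * resid**2) @ X @ XtX_inv

# Wald test of H0: both slope coefficients are zero.
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Rb = R @ beta
W = Rb @ np.linalg.inv(R @ sandwich @ R.T) @ Rb      # ~ chi^2_2 under H0
print("Wald statistic:", W, "p-value:", stats.chi2.sf(W, df=R.shape[0]))
```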
47

Huang, Yun-Ru, and 黃韻如. "A Study of Performance Variances of Taiwanese Firms in Mainland China: Using Variance Component Analysis, Hierarchical Linear Model, and Analysis of Variance Method." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/d8x7kp.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Business Administration
Academic year 105 (ROC calendar)
Drawing on industrial organization theory, the resource-based view of the firm, and institutional theory as the theoretical background, this study applies Variance Component Analysis (VCA), Hierarchical Linear Modeling (HLM), and Analysis of Variance (ANOVA) to identify the sources of performance variance among Taiwanese firms in Mainland China. Across all performance measures (ROS, ROA, and ROE), firm effects explained 13.93 to 47.69 percent of the performance variance of Taiwanese firms in Mainland China. Industry effects accounted for 0 to 15.76 percent of the differences, corporate effects for about 7 percent, region effects for 0 to 2.90 percent, and year effects for only 0.7 percent. The main finding is that firm performance is determined chiefly by firms' own specific, idiosyncratic resources and competences. In addition, region effects have a clear impact on the performance of Taiwanese firms in Mainland China because of differences in local environments and resource endowments.
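As a toy illustration of variance component analysis (not the study's actual VCA/HLM pipeline; the data are simulated and the column names 'firm' and 'roa' are hypothetical), the following sketch decomposes a performance measure into a between-firm component and a residual using the classical one-way random-effects, method-of-moments estimators.

```python
# Minimal sketch (assumption: a one-way random-effects decomposition, not the
# study's full VCA/HLM analysis): how much of ROA variance is attributable to
# firm-specific effects vs. residual variation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated panel: 50 firms observed over 6 years with firm-specific effects.
firms = np.repeat(np.arange(50), 6)
firm_effect = rng.normal(scale=2.0, size=50)[firms]     # between-firm sd = 2
roa = 5.0 + firm_effect + rng.normal(scale=1.0, size=firms.size)
df = pd.DataFrame({"firm": firms, "roa": roa})

grand_mean = df["roa"].mean()
groups = df.groupby("firm")["roa"]
n_i = groups.size().to_numpy()                          # balanced here: 6 each
k, N = len(n_i), n_i.sum()

ss_between = (n_i * (groups.mean().to_numpy() - grand_mean) ** 2).sum()
ss_within = groups.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()
ms_between, ms_within = ss_between / (k - 1), ss_within / (N - k)

n0 = (N - (n_i**2).sum() / N) / (k - 1)                 # effective group size
var_firm = max((ms_between - ms_within) / n0, 0.0)      # between-firm component
var_resid = ms_within
share = var_firm / (var_firm + var_resid)
print(f"firm effect explains {100 * share:.1f}% of ROA variance")
```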
48

Yu, Wei. "Variance Analysis for Nonlinear Systems." 2007. http://hdl.handle.net/1974/878.

Full text
Abstract:
In the past decades there has been considerable commercial and academic interest in methods for monitoring control system performance for linear systems; far less has been written on control system performance for nonlinear dynamic/stochastic systems. This thesis presents research results on three control performance monitoring topics for nonlinear systems. (i) Controller assessment for a class of nonlinear systems: the use of autoregressive moving average (ARMA) models to assess control loop performance for linear systems is well known. Classes of nonlinear dynamic/stochastic SISO discrete-time systems for which a similar result can be obtained are established. For these systems, performance lower bounds can be estimated from closed-loop routine operating data using nonlinear autoregressive moving average with exogenous inputs (NARMAX) models. (ii) Variance decomposition of nonlinear systems and time series: we develop a variance decomposition approach to quantify the effects of different disturbance sources on nonlinear dynamic/stochastic systems, using a method called ANOVA-like decomposition. Modifications of the ANOVA-like decomposition are proposed so that it can handle time dependency and initial conditions. (iii) Parameter uncertainty effects on the variance decomposition: the variance decomposition in the second part assumes that model parameters are exactly known, but parameters of empirical or mechanistic models are uncertain, and these uncertainties should be included when the model is used for variance analysis. General solutions for the effect of parameter uncertainty on the variance decomposition are proposed for general nonlinear systems, and analytical solutions are provided for models that are linear in the parameters.
Thesis (Ph.D., Chemical Engineering), Queen's University, 2007-10-17.
This work was sponsored by NSERC Discovery, NSERC Equipment, Shell Global Solutions, OGSST and QGA
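The ANOVA-like decomposition mentioned in point (ii) is, for static independent inputs, the Sobol variance decomposition. The sketch below (Python) is an assumption-laden static illustration rather than the thesis's dynamic-system extension: it estimates first-order Sobol indices of a nonlinear test function with the pick-freeze Monte Carlo design.

```python
# Minimal sketch (assumption: a static illustration, not the thesis's
# extension to nonlinear dynamic systems): the ANOVA-like (Sobol) variance
# decomposition of a nonlinear function, estimated with the pick-freeze
# Monte Carlo design. Each first-order index S_i is the share of output
# variance explained by input disturbance x_i acting alone.
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Ishigami test function of three independent inputs on [-pi, pi].
    x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    A_Bi = A.copy()
    A_Bi[:, i] = B[:, i]                 # "freeze" input i at B's values
    y_ABi = model(A_Bi)
    S_i = np.mean(yB * (y_ABi - yA)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
# Analytically the first-order indices are about 0.31, 0.44 and 0.00,
# i.e. x3 contributes only through its interaction with x1.
```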
49

Lin, Sue-Mei, and 林淑美. "Multi-Product Cost Variance Analysis." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/21794701554280843797.

Full text
50

Huang, Li-Ching, and 黃莉晴. "Analysis of Variance under Heteroscedasticity." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/22044215403919555613.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics (Master's and Doctoral Program)
Academic year 97 (ROC calendar)
Assuming a fixed-effects ANOVA model in which the error terms are independent and normally distributed with unequal and unknown variances, in addition to unknown experimental factor effects, the interest is in testing hypotheses about the various effects at a given level of significance and a given power. A design-oriented two-stage sampling procedure is employed to conduct the tests and simultaneously determine the necessary sample sizes for the fixed level and power; this is possible because the distributions of the test statistics under the null hypotheses are independent of all unknown parameters. Tables of critical values and design constants that meet the given level and power are provided for practitioners. When the two-stage sampling procedure cannot be completed because of budget cuts, time limits, missing experimental units, or other cost factors, a one-stage, data-analysis-oriented procedure can be employed to conduct the statistical tests based on the observations at hand. The distributions of the one-stage test statistics are also independent of all unknown parameters, so the one-stage procedure can supplement the two-stage procedure and allow the statistical analysis to reach a conclusion. Two-way and three-way layouts in ANOVA models with all interactions are studied, and conclusions are drawn accordingly.
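The sketch below (Python) is not the thesis's two-stage or one-stage procedure; it shows the widely used Welch one-way ANOVA, which likewise tests equality of group means without assuming equal error variances, implemented directly from the textbook formulas on simulated heteroscedastic groups.

```python
# Minimal sketch (assumption: Welch's one-way ANOVA, a common alternative for
# heteroscedastic data, not the thesis's sampling procedures).
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's F*-test for equal means across groups with unequal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                        # precision weights n_i / s_i^2
    w_sum = w.sum()
    grand = (w * means).sum() / w_sum        # weighted grand mean
    a = (w * (means - grand) ** 2).sum() / (k - 1)
    lam = ((1 - w / w_sum) ** 2 / (n - 1)).sum()
    b = 1 + 2 * (k - 2) / (k**2 - 1) * lam
    f_star = a / b
    df1, df2 = k - 1, (k**2 - 1) / (3 * lam)
    return f_star, df1, df2, stats.f.sf(f_star, df1, df2)

# Three simulated groups with a common mean but very different variances.
rng = np.random.default_rng(5)
data = [rng.normal(10, s, size=m) for s, m in [(1, 20), (3, 15), (6, 25)]]
print(welch_anova(data))
```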