Dissertations / Theses on the topic 'Analysis of Variance (ANOVA)'

Consult the top 50 dissertations / theses for your research on the topic 'Analysis of Variance (ANOVA).'

1

Prosser, Robert James. "Robustness of multivariate mixed model ANOVA." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25511.

Full text
Abstract:
In experimental or quasi-experimental studies in which a repeated measures design is used, it is common to obtain scores on several dependent variables on each measurement occasion. Multivariate mixed model (MMM) analysis of variance (Thomas, 1983) is a recently developed alternative to the MANOVA procedure (Bock, 1975; Timm, 1980) for testing multivariate hypotheses concerning effects of a repeated factor (called occasions in this study) and interaction between repeated and non-repeated factors (termed group-by-occasion interaction here). If a condition derived by Thomas (1983), multivariate multi-sample sphericity (MMS), regarding the equality and structure of orthonormalized population covariance matrices is satisfied (given multivariate normality and independence for distributions of subjects' scores), valid likelihood-ratio MMM tests of group-by-occasion interaction and occasions hypotheses are possible. To date, no information has been available concerning actual (empirical) levels of significance of such tests when the MMS condition is violated. This study was conducted to begin to provide such information. Departure from the MMS condition can be classified into three types— termed departures of types A, B, and C respectively: (A) the covariance matrix for population ℊ (ℊ = 1,...G), when orthonormalized, has an equal-diagonal-block form but the resulting matrix for population ℊ is unequal to the resulting matrix for population ℊ' (ℊ ≠ ℊ'); (B) the G populations' orthonormalized covariance matrices are equal, but the matrix common to the populations does not have equal-diagonal-block structure; or (C) one or more populations has an orthonormalized covariance matrix which does not have equal-diagonal-block structure and two or more populations have unequal orthonormalized matrices. 
In this study, Monte Carlo procedures were used to examine the effect of each type of violation in turn on the Type I error rates of multivariate mixed model tests of group-by-occasion interaction and occasions null hypotheses. For each form of violation, experiments modelling several levels of severity were simulated. In these experiments: (a) the number of measured variables was two; (b) the number of measurement occasions was three; (c) the number of populations sampled was two or three; (d) the ratio of average sample size to number of measured variables was six or 12; and (e) the sample size ratios were 1:1 and 1:2 when G was two, and 1:1:1 and 1:1:2 when G was three. In experiments modelling violations of types A and C, the effects of negative and positive sampling were studied. When type A violations were modelled and samples were equal in size, actual Type I error rates did not differ significantly from nominal levels for tests of either hypothesis except under the most severe level of violation. In type A experiments using unequal groups in which the largest sample was drawn from the population whose orthogonalized covariance matrix has the smallest determinant (negative sampling), actual Type I error rates were significantly higher than nominal rates for tests of both hypotheses and for all levels of violation. In contrast, empirical levels of significance were significantly lower than nominal rates in type A experiments in which the largest sample was drawn from the population whose orthonormalized covariance matrix had the largest determinant (positive sampling). Tests of both hypotheses tended to be liberal in experiments which modelled type B violations. No strong relationships were observed between actual Type I error rates and any of: severity of violation, number of groups, ratio of average sample size to number of variables, and relative sizes of samples. 
In equal-groups experiments modelling type C violations in which the orthonormalized pooled covariance matrix departed at the more severe level from equal-diagonal-block form, actual Type I error rates for tests of both hypotheses tended to be liberal. Findings were more complex under the less severe level of structural departure. Empirical significance levels did not vary with the degree of interpopulation heterogeneity of orthonormalized covariance matrices. In type C experiments modelling negative sampling, tests of both hypotheses tended to be liberal. Degree of structural departure did not appear to influence actual Type I error rates but degree of interpopulation heterogeneity did. Actual Type I error rates in type C experiments modelling positive sampling were apparently related to the number of groups. When two populations were sampled, both tests tended to be conservative, while for three groups, the results were more complex. In general, under all types of violation the ratio of average group size to number of variables did not greatly affect actual Type I error rates. The report concludes with suggestions for practitioners considering use of the MMM procedure based upon the findings and recommends four avenues for future research on Type I error robustness of MMM analysis of variance. The matrix pool and computer programs used in the simulations are included in appendices.
APA, Harvard, Vancouver, ISO, and other styles
2

Halldestam, Markus. "ANOVA - The Effect of Outliers." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-295864.

Full text
Abstract:
This bachelor's thesis focuses on the effect of outliers on one-way analysis of variance, examining whether the estimates in ANOVA are robust and whether the test itself is robust to the influence of extreme outliers. The robustness of the estimates is examined using the breakdown point, while the robustness of the test is examined by simulating the hypothesis test under some extreme situations. This study finds evidence that the estimates in ANOVA are sensitive to outliers, i.e. that the procedure is not robust. Samples with a larger proportion of extreme outliers have a higher Type I error probability than the nominal level.
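The robustness question in this abstract can be illustrated with a small simulation. The sketch below (Python with SciPy, not the thesis's own code, and with arbitrary contamination settings) estimates the empirical Type I error of one-way ANOVA under a true null, with and without injected gross-error outliers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def type1_error(n_sims=2000, k=3, n=20, outlier_frac=0.0, alpha=0.05):
    """Empirical Type I error of one-way ANOVA under H0 (all group
    means equal), with a fraction of each sample replaced by
    gross-error outliers drawn from a much wider distribution."""
    n_out = int(round(outlier_frac * n))
    rejections = 0
    for _ in range(n_sims):
        groups = [rng.normal(0.0, 1.0, n) for _ in range(k)]
        for g in groups:
            if n_out:
                g[:n_out] = rng.normal(0.0, 10.0, n_out)  # contamination
        _, p = stats.f_oneway(*groups)
        rejections += p < alpha
    return rejections / n_sims

print(type1_error(outlier_frac=0.0))  # should sit near the nominal 0.05
print(type1_error(outlier_frac=0.2))  # contaminated samples
```

The breakdown-point side of the thesis concerns the estimates rather than the test: a single unbounded observation can move a sample mean arbitrarily far, i.e. the mean has breakdown point zero.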
3

Adnan, Arisman. "Analysis of taste-panel data using ANOVA and ordinal logistic regression." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402150.

Full text
4

Liu, Yuan. "Mixed ANOVA model analysis of microarray experiments with locally pooled error." Electronic version (PDF), 2004. http://dl.uncw.edu/etd/2004/liuy/yuanliu.pdf.

Full text
5

Carter, Bruce Jerome. "An ANOVA Analysis of Education Inequities Using Participation and Representation in Education Systems." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4274.

Full text
Abstract:
A problem recognized in the United States is that a K-12 public education in urban communities is more likely to support existing patterns of inequality than to serve as a pathway to opportunity. The specific focus of this research was the poor academic performance in U.S. K-12 urban communities. Using Benet's polarities of democracy theory as the foundation, the purpose of this correlational study was to determine which independent variables (enrollment rates, high school graduation rates, property tax funding rates for schools, teacher quality, and youth literacy rates) are statistically associated with quality education outcomes, using the polarities of democracy participation and representation tenets as proxy variables. Secondary data spanning a 5-year aggregate period, 2010-2015, were compared for both Massachusetts and the United States, using Germany as the benchmark. Data were acquired from the Programme for International Student Assessment of the Organisation for Economic Co-operation and Development. The total sample included 150 cases randomly selected from 240 schools in Massachusetts and 150 schools in Germany. Data were analyzed using ANOVA. The results indicate a statistically significant (p < .001) pairwise association between each of the 5 independent variables and the dependent variable; the 5 independent variables had a positive, statistically significant effect on education quality. The implication for social change from this study includes insight and recommendations to the U.S. Department of Education into best practices for reducing educational inequality and improving educational quality as measured by achievement in the United States.
6

Lind, Ingela. "Regressor and Structure Selection : Uses of ANOVA in System Identification." Doctoral thesis, Linköping : Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7000.

Full text
7

Hammi, Malik, and Ahmet Hakan Akdeve. "Poweranalys : bestämmelse av urvalsstorlek genom linjära mixade modeller och ANOVA." Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149026.

Full text
Abstract:
In research where experiments are performed on humans and animals, it is important to determine in advance how many observations are needed to detect any effects in groups, in order to save time and costs. This can be examined by power analysis, which determines a sample size sufficient to detect an effect in a study, the so-called "power". Power is the probability of rejecting the null hypothesis when the null hypothesis is false. Mälardalen University and the Karolinska Institute have jointly designed a study, CLEAR (Climate Friendly and Ecological Food on Microbiota), based on individuals' dietary intake. Each individual was assigned a specific diet for 8 weeks, with the purpose of examining whether emissions of carbon dioxide, CO2, differ depending on the diet each individual follows. There are two groups, one treatment group and one control group. Individuals assigned to the treatment group follow a climatarian diet, while individuals in the control group follow a conventional diet. Each individual was followed up over 8 weeks in total, with three measurement occasions 4 weeks apart: a baseline assessment, a midline assessment, and an end assessment. The CLEAR study includes a total of 18 individuals, 9 in each group. This number of individuals is not enough to reach statistical significance in a test, and therefore the sample size is examined through power analysis. Since every individual has three measurement occasions, the data are modelled through mixed-design ANOVA and linear mixed models; these two methods take each individual's repeated measurements into account. The models describing the data are applied in the computations of sample sizes and power. All analyses are simulation-based and carried out in the programming language R, with means and standard deviations from the study and the models as a base. 
Sample sizes and power were computed for two linear mixed models and one ANOVA model. The linear mixed models required fewer individuals than ANOVA for a desired power of 80 percent: 24 individuals in total were required by the linear mixed model with the factors group, time, and id and the covariate sex, while 42 individuals were required by the ANOVA including the variables id, group, and time.
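The power computation described above can be sketched in miniature. The thesis works in R with mixed-design ANOVA and linear mixed models; the Python sketch below keeps only the core idea of simulation-based power, using a plain two-group comparison with an assumed effect size and variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n_per_group, effect=0.8, sd=1.0, n_sims=1000, alpha=0.05):
    """Estimate power by simulation: the proportion of simulated
    experiments in which H0 is rejected when the true group
    difference is `effect`."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n_per_group)
        b = rng.normal(effect, sd, n_per_group)
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sims

# Increase n until the estimated power reaches 80 percent.
n = 5
while simulated_power(n) < 0.80:
    n += 1
print(f"approximately {n} subjects per group for 80% power")
```

The same loop structure applies when the per-experiment test is a mixed model fit; only the data-generating step and the test change.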
8

An, Qian. "A Monte Carlo study of several alpha-adjustment procedures used in testing multiple hypotheses in factorial ANOVA." Ohio : Ohio University, 2010. http://www.ohiolink.edu/etd/view.cgi?ohiou1269439475.

Full text
9

Jordaan, Aletta Gertruida. "Empirical Bayes estimation of the extreme value index in an ANOVA setting." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86216.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2014. Extreme value theory (EVT) involves the development of statistical models and techniques to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature; however, these estimators are only applicable in the single-sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting. An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature. The study begins with a literature review, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered. A simulation study is carried out to compare the performance of different estimation methods, applied only in the single-sample setting. First-order and second-order estimation methods are considered. In the case of second-order estimation, possible methods of estimating the second-order parameter are also explored. With regard to obtaining an estimator that is applicable to an ANOVA setting, a first-order EB estimator and a second-order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, compared to viewing the portfolios in isolation. The results showed that the first-order EB estimator performed better than the Hill estimator. However, the second-order EB estimator did not perform better than the "benchmark" second-order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
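For reference, the Hill estimator that serves as the single-sample benchmark in this abstract has a compact closed form: the mean of the log-excesses of the k largest observations over the (k+1)-th largest. A minimal sketch (Python/NumPy, with a simulated heavy-tailed sample standing in for real claims data):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index (EVI) based on the
    k largest order statistics of a positive sample x."""
    xs = np.sort(np.asarray(x, dtype=float))
    top = xs[-k:]             # the k largest observations
    threshold = xs[-k - 1]    # the (k+1)-th largest as threshold
    return np.mean(np.log(top) - np.log(threshold))

# Pareto(alpha) samples have true EVI gamma = 1/alpha.
rng = np.random.default_rng(0)
alpha = 2.0
sample = rng.pareto(alpha, 10_000) + 1.0  # standard Pareto on [1, inf)
print(hill_estimator(sample, k=200))      # should be near 1/alpha = 0.5
```

The choice of k is the threshold-selection problem the thesis discusses: too small a k gives high variance, too large a k biases the estimate.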
10

King, Taylor J. "Power Analysis to Determine the Importance of Covariance Structure Choice in Mixed Model Repeated Measures ANOVA." Thesis, North Dakota State University, 2017. https://hdl.handle.net/10365/28656.

Full text
Abstract:
Repeated measures experiments involve multiple subjects with measurements taken on each subject over time. We used SAS to conduct a simulation study to see how different methods of analysis perform under various simulation parameters (e.g. sample size, autocorrelation, repeated measures). Our goals were to: compare the multivariate analysis of variance method using PROC GLM to the mixed model method using PROC MIXED in terms of power, determine how choosing the incorrect covariance structure for mixed model analysis affects power, and identify sample sizes needed to produce adequate power of 90 percent under different scenarios. The findings support using the mixed model method over the multivariate method because power is generally higher when using the mixed model method. Simpler covariance structures may be preferred when testing the within-subjects effect to obtain high power. Additionally, these results can be used as a guide for determining the sample size needed for adequate power.
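The covariance structures whose choice the thesis studies can be written down directly. Below is a sketch (Python/NumPy rather than the thesis's SAS) of the two structures most often contrasted in repeated measures mixed models, compound symmetry and AR(1), used here to simulate correlated within-subject measurements:

```python
import numpy as np

def ar1_cov(t, sigma2=1.0, rho=0.5):
    """AR(1) structure: cov(y_j, y_k) = sigma2 * rho**|j-k| --
    correlation decays with the distance between occasions."""
    idx = np.arange(t)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def cs_cov(t, sigma2=1.0, rho=0.5):
    """Compound symmetry: the same correlation rho between any
    two measurement occasions."""
    return sigma2 * ((1 - rho) * np.eye(t) + rho * np.ones((t, t)))

# Simulate 30 subjects, each measured on 4 occasions, with AR(1) errors.
rng = np.random.default_rng(7)
y = rng.multivariate_normal(np.zeros(4), ar1_cov(4), size=30)
print(ar1_cov(3))  # [[1, .5, .25], [.5, 1, .5], [.25, .5, 1]]
```

Fitting a model with the wrong structure (e.g. assuming CS when the data are AR(1)) is exactly the mis-specification whose effect on power the simulation study measures.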
11

Patrick, Joshua Daniel. "Simulations to analyze Type I error and power in the ANOVA F test and nonparametric alternatives." [Pensacola, Fla.] : University of West Florida, 2009. http://purl.fcla.edu/fcla/etd/WFE0000158.

Full text
12

Nguyen, Nga. "Multivariate analysis and GIS in generating vulnerability map of acid sulfate soils." Thesis, KTH, Mark- och vattenteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170472.

Full text
Abstract:
The study employed multivariate methods to generate vulnerability maps for acid sulfate (AS) soils in the Norrbotten county of Sweden. The relationships between the reclassified datasets and each biogeochemical element were carefully evaluated with Kruskal-Wallis one-way ANOVA and PLS analysis. The statistical results of the Kruskal-Wallis ANOVA provided useful knowledge of the relationships between the preliminary vulnerability ranks in the classified datasets and the amount of each biogeochemical element. This statistical knowledge, together with expert knowledge, was then used to generate the final vulnerability ranks of AS soils in the classified datasets, which were the input independent variables in the PLS analyses. The results of the Kruskal-Wallis one-way ANOVA and PLS analyses showed a strong correlation between higher levels of total Cu2+, Ni2+ and S and higher vulnerability ranks in the classified datasets. Hence, total Cu2+, Ni2+ and S were chosen as the dependent variables for further PLS analyses. In particular, the Variable Importance in the Projection (VIP) value of each classified dataset was standardized to generate its weight. The vulnerability map of AS soils was the result of a linear combination of the standardized values in the classified datasets and their weights. Seven weight sets were formed from either univariate or multivariate PLS analyses. Accuracy tests were done by testing the classification of measured pH values of 74 soil profiles against the different vulnerability maps and by evaluating the areas that were not AS soil within the groups of medium to high AS soil probability in the land-cover and soil-type datasets. In comparison to the other weight sets, the weight sets from the multivariate PLS analysis of total Ni2+ & S or total Cu2+ & S had the most robust predictive performance. 
Sensitivity analysis was done on the weight set of total Ni2+ & S, and the results showed that the availability of ditches, the change in the terrain surfaces, the altitude level, and the slope had a high influence on the vulnerability map of AS soils. The study showed that multivariate analysis is a very good methodology for predicting the probability of acid sulfate soils.
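As a toy illustration of the Kruskal-Wallis step described above (Python/SciPy; the data are fabricated, standing in for element concentrations grouped by preliminary vulnerability rank):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical concentrations of one element (e.g. total S), grouped
# by three preliminary vulnerability ranks of a classified dataset.
low    = rng.lognormal(0.0, 0.5, 40)
medium = rng.lognormal(0.4, 0.5, 40)
high   = rng.lognormal(0.8, 0.5, 40)

# Kruskal-Wallis one-way ANOVA on ranks: no normality assumption.
h, p = stats.kruskal(low, medium, high)
print(h, p)  # a small p suggests the element level differs across ranks
```

A small p-value here is what would justify carrying that element forward as a dependent variable in the subsequent PLS analyses.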
13

Pedott, Alexandre Homsi. "Análise de dados funcionais aplicada ao estudo de repetitividade e reprodutividade : ANOVA das distâncias." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/24726.

Full text
Abstract:
This work presents a method, adapted from repeatability and reproducibility studies, to analyze the capability and performance of measurement systems in a functional data analysis context. Functional data are a collection of data points organized as a profile or curve. The proposed method contributes to the state of the art on measurement system analysis and is an alternative to traditional methods which, when used mistakenly, can deteriorate the quality of products monitored through functional responses. In the proposed method, hypothesis tests and one-way and two-way ANOVA, as used in population comparisons, are adapted to measurement system analysis. The method is grounded on the use of distances between curves; the Hausdorff distance was chosen as the measure of proximity between curves. Three ANOVA approaches were proposed and applied in a simulated repeatability and reproducibility study. The study was structured to analyze scenarios in which the measurement system was approved or rejected. The proposed method was named ANOVA of the Distances.
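The Hausdorff distance at the heart of the proposed "ANOVA of the Distances" has a standard computational form. A minimal sketch (Python/SciPy; the curves are illustrative, not the dissertation's measurement data):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(curve_a, curve_b):
    """Symmetric Hausdorff distance between two curves sampled as
    (x, y) point sets: the largest distance from a point on one
    curve to its nearest point on the other."""
    d_ab = directed_hausdorff(curve_a, curve_b)[0]
    d_ba = directed_hausdorff(curve_b, curve_a)[0]
    return max(d_ab, d_ba)

x = np.linspace(0.0, 1.0, 100)
curve1 = np.column_stack([x, np.sin(2 * np.pi * x)])
curve2 = np.column_stack([x, np.sin(2 * np.pi * x) + 0.1])  # shifted copy
print(hausdorff(curve1, curve2))  # close to the 0.1 vertical offset
```

Replacing each pairwise distance between response curves with this scalar is what lets the classical ANOVA machinery be reused on functional data.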
14

Zhang, Bairu. "Functional data analysis in orthogonal designs with applications to gait patterns." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/44698.

Full text
Abstract:
This thesis presents a contribution to the active research area of functional data analysis (FDA) and is concerned with the analysis of data from complex experimental designs in which the responses are curves. High resolution, closely correlated data sets are encountered in many research fields, but current statistical methodologies often analyse simplistic summary measures and therefore limit the completeness and accuracy of conclusions drawn. Specifically the nature of the curves and experimental design are not taken into account. Mathematically, such curves can be modelled either as sample paths of a stochastic process or as random elements in a Hilbert space. Despite this more complex type of response, the structure of experiments which yield functional data is often the same as in classical experimentation. Thus, classical experimental design principles and results can be adapted to the FDA setting. More specifically, we are interested in the functional analysis of variance (ANOVA) of experiments which use orthogonal designs. Most of the existing functional ANOVA approaches consider only completely randomised designs. However, we are interested in more complex experimental arrangements such as, for example, split-plot and row-column designs. Similar to univariate responses, such complex designs imply that the response curves for different observational units are correlated. We use the design to derive a functional mixed-effects model and adapt the classical projection approach in order to derive the functional ANOVA. As a main result, we derive new functional F tests for hypotheses about treatment effects in the appropriate strata of the design. The approximate null distribution of these tests is derived by applying the Karhunen-Loève expansion to the covariance functions in the relevant strata. These results extend existing work on functional F tests for completely randomised designs. The methodology developed in the thesis has wide applicability. 
In particular, we consider novel applications of functional F tests to gait analysis. Results are presented for two empirical studies. In the first study, gait data of patients with cerebral palsy were collected during barefoot walking and walking with ankle-foot orthoses. The effects of ankle-foot orthoses are assessed by functional F tests and compared with pointwise F tests and the traditional univariate repeated-measurements ANOVA. The second study is a designed experiment in which a split-plot design was used to collect gait data from healthy subjects. This is commonly done in gait research in order to better understand, for example, the effects of orthoses while avoiding confounded analysis from the high variability observed in abnormal gait. Moreover, from a technical point of view the study may be regarded as a real-world alternative to simulation studies. By using healthy individuals it is possible to collect data which are in better agreement with the underlying model assumptions. The penultimate chapter of the thesis presents a qualitative study with clinical experts to investigate the utility of gait analysis for the management of cerebral palsy. We explore potential pathways by which the statistical analyses in the thesis might influence patient outcomes. The thesis has six chapters. After describing motivation and introduction in Chapter 1, mathematical representations of functional data are presented in Chapter 2. Chapter 3 considers orthogonal designs in the context of functional data analysis. New functional F tests for complex designs are derived in Chapter 4 and applied in two gait studies. Chapter 5 is devoted to a qualitative study. The thesis concludes with a discussion which details the extent to which the research question has been addressed, the limitations of the work and the degree to which it has been answered.
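The pointwise F tests that the thesis uses as a comparator for its functional F tests are straightforward to sketch (Python/SciPy; simulated curves, not the gait data): at each point of the common grid, a one-way ANOVA is computed across the groups of curves.

```python
import numpy as np
from scipy import stats

def pointwise_f(groups):
    """Pointwise one-way ANOVA F statistics for functional data:
    `groups` is a list of (n_i, T) arrays of curves sampled on a
    common grid; returns a length-T array of F statistics."""
    T = groups[0].shape[1]
    return np.array([stats.f_oneway(*[g[:, t] for g in groups]).statistic
                     for t in range(T)])

# Two groups of noisy sine curves; group 2 has a localized bump.
rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 50)
g1 = np.sin(2 * np.pi * grid) + rng.normal(0, 0.2, (15, 50))
bump = 0.5 * np.exp(-((grid - 0.5) ** 2) / 0.01)
g2 = np.sin(2 * np.pi * grid) + bump + rng.normal(0, 0.2, (15, 50))

F = pointwise_f([g1, g2])
print(F.argmax())  # the largest group difference sits near mid-grid
```

Unlike the thesis's functional F tests, this pointwise version tests each grid point separately and so raises the multiple-testing issues that a single functional test avoids.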
15

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
16

Tissot, Jean-yves. "Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM064/document.

Full text
Abstract:
In the fields of modeling and numerical simulation, simulators often depend on a large number of input parameters whose impact on the outputs is not always well known. The main goal of sensitivity analysis is to better understand how the model outputs respond to variations in these parameters. One of the most effective approaches for handling this problem when complex, potentially highly nonlinear models are considered is based on the ANOVA decomposition and the Sobol' indices. In particular, the latter quantify the influence of each parameter on the model response. This thesis addresses the estimation of the Sobol' indices. In the first part, we rigorously revisit existing methods in light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays, which allows us to study the theoretical properties of these methods and to generalize them. In the second part, we consider the Monte Carlo method specific to the estimation of the Sobol' indices and introduce a new approach that improves it. This improvement is built around Latin hypercube sampling and reduces the number of simulations needed to estimate the Sobol' indices with this method. In parallel, we apply these methods to a marine ecosystem model.
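The pick-freeze Monte Carlo estimator of first-order Sobol' indices that the thesis sets out to improve can be sketched in a few lines. The sketch below uses plain random sampling rather than the Latin-hypercube refinement developed in the thesis, and the additive test model (whose analytic indices are 0.2 and 0.8) is purely an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # simple additive test model on [0,1]^2: analytic first-order
    # Sobol' indices are S1 = 0.2 and S2 = 0.8
    return x[:, 0] + 2.0 * x[:, 1]

def first_order_sobol(model, d, n, rng):
    # two independent uniform sample matrices (pick-freeze design)
    a = rng.random((n, d))
    b = rng.random((n, d))
    ya = model(a)
    f0 = ya.mean()
    var = ya.var()
    s = np.empty(d)
    for i in range(d):
        c = b.copy()
        c[:, i] = a[:, i]      # "freeze" column i from the first matrix
        yc = model(c)
        # Cov(ya, yc) estimates Var(E[Y | X_i])
        s[i] = (np.mean(ya * yc) - f0 ** 2) / var
    return s

s = first_order_sobol(model, d=2, n=100_000, rng=rng)
```

With 100,000 model evaluations per matrix the estimates land close to the analytic values; the Latin-hypercube construction studied in the thesis aims to reach comparable accuracy with far fewer simulations.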
APA, Harvard, Vancouver, ISO, and other styles
17

Tissot, Jean-Yves. "Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00762800.

Full text
Abstract:
In the fields of modeling and numerical simulation, simulators often depend on a large number of input parameters whose impact on the outputs is not always well known. The main goal of sensitivity analysis is to better understand how the model outputs respond to variations in these parameters. The best-suited approach for handling this problem in the case of potentially complex and strongly nonlinear models is based on the ANOVA decomposition and the Sobol' indices. In particular, the latter quantify the influence of each parameter on the model response. This thesis addresses the estimation of the Sobol' indices. In the first part, we rigorously revisit existing methods in light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays, which allows us to study the theoretical properties of these methods and to generalize them. In the second part, we consider the Monte Carlo method specific to the estimation of the Sobol' indices and introduce a new approach that improves it. This improvement is built around Latin hypercube sampling and reduces the number of simulations needed to estimate the Sobol' indices with this method. In parallel, we apply these methods to a marine ecosystem model.
APA, Harvard, Vancouver, ISO, and other styles
18

Frisch, Jessica Lynne. "Chromatographic and mass spectral analyses of oligosaccharides and indigo dye extracted from cotton textiles with manova and anova statistical data analyses." Orlando, Fla. : University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Balaguer-Barbosa, Maraida. "Recovery of Nutrients from Anaerobically Digested Enhanced Biological Phosphorus Removal (EBPR) Sludge through Struvite Precipitation." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7471.

Full text
Abstract:
Water resources in Florida have been severely degraded by eutrophic conditions, resulting in toxic algae blooms that negatively affect health and tourism. Eutrophication, the presence of excessive phosphorus (P) and nitrogen (N) in water, overstimulates the production of aquatic plants, depletes dissolved oxygen, and deteriorates the aquatic environment. At the same time, phosphorus is a non-renewable resource essential for all living organisms. In fact, more than half of the global demand for P is to supply the food industry, which has accelerated the depletion of phosphate reserves at a concerning rate. In many wastewater treatment plants (WWTPs), the enhanced biological phosphorus removal (EBPR) approach has been employed to achieve high phosphorus removal from wastewater through phosphate-accumulating organisms (PAOs). However, during either anaerobic or aerobic digestion of EBPR sludge, stored polyphosphates are released and carried into the sidestream, which is typically returned to the headworks of the main treatment facility, thereby recycling phosphorus back into the system. This treatment train is highly inefficient because nutrients are recirculated rather than recovered. Struvite (MgNH4PO4•6H2O) precipitates from supersaturated aqueous solutions containing equal molar concentrations of magnesium, ammonium, and phosphate. The controlled crystallization of struvite can be applied to remove phosphorus and some ammonium from sidestreams, the liquid portion of the digester effluent. Because of its low solubility in water, struvite can be employed as a sustainable slow-release fertilizer, offering the opportunity to market the struvite produced under controlled conditions and create revenue for the utility.
The specific research objectives of this thesis are (1) to investigate different possible operating conditions under which anaerobically digested sludge from EBPR facilities might be treated through struvite precipitation; (2) to quantify the removal of N and P from sidestreams of anaerobically digested EBPR sludge via struvite precipitation and assess the composition of the precipitate obtained; and (3) to generate a cost analysis assessing the trade-offs between the capital and operation and maintenance (O&M) costs of struvite production and benefits such as reduced chemical use and the production of a slow-release fertilizer. The main parameters affecting struvite precipitation are the Mg2+ to PO43- molar ratio, pH, temperature, mixing speed, hydraulic retention time (HRT), and the quantity of seed added to promote nucleation. Different operating conditions for these parameters were batch-tested as part of this study using sidestream from the pilot-scale anaerobic digester (AD) fed with EBPR sludge from the Falkenburg Advanced Wastewater Treatment Plant (FAWWTP). Additionally, the effects of temperature and pH were investigated using Visual MINTEQ simulations. Analysis of variance (ANOVA) was employed to investigate the variance in the removals of phosphate, ammonium, magnesium, and calcium from the centrate. The chemical composition of the solids collected under the selected operating conditions was analyzed by powder X-ray diffraction (PXRD). The results of the batch tests were quantified in terms of the removals of phosphate, ammonium, magnesium, and calcium from the centrate. The greatest phosphate removal was achieved by operating the struvite reactor at 4.0 mmol of Mg2+ per mmol of PO43-; the other molar ratios tested were 1.0, 2.0, and 3.0.
Visual inspection of the data showed significant variability in the removals of ammonium, calcium, and magnesium, which is likely correlated with the highly variable influent concentrations into the struvite reactor. In this case, ANOVA will require larger data sets to accurately analyze variance in the results. The ANOVA results for pH suggest that the main species contributing to struvite precipitation are statistically stable within the tested pH values of 8.5, 9.0, and 9.5. The Visual MINTEQ simulation indicated that maximum saturation as a function of pH occurs at a pH between 9.5 and 10.0. The ANOVA for mixing speed showed that significant amounts of ammonium were removed at higher mixing speeds, likely because ammonium volatilization is enhanced by turbulence. Magnesium and phosphate showed lower removals at higher mixing speeds, suggesting that excessive mixing promotes dissolution of the struvite seed. ANOVA identified NH4+ and Ca2+ as the species significantly impacted by modifying the HRT from 8 to 20 minutes, suggesting that prolonged HRT promotes volatilization of inorganic nitrogen species. It is likely that at higher HRT, tricalcium phosphates (TCP) or other favored calcium species coprecipitated together with struvite. Regarding the struvite seed added for nucleation, the greatest removals of ammonium, magnesium, and phosphate were observed when 1 g/L of struvite seed was added. The results also indicated that 5 and 10 g/L were excessive seed doses, which ended up contributing more nutrients to the centrate than they precipitated. The results further suggested that the struvite crystals formed in the sidestream by secondary nucleation, since removals close to zero were reached when no seed was added. The optimum temperature identified by the Visual MINTEQ simulation was 21°C.
Operating the struvite reactor under the optimal conditions identified in the batch tests resulted in average removals of 99% of total P (TP) and 17% of total N (TN). The molar composition of the precipitate, [Mg2+:NH4+:PO43-], was equal to [2:2:1] based on the concentrations that disappeared from the aqueous solution, suggesting that other minerals coprecipitated with struvite. Visual MINTEQ predicted that CaHPO4 and CaHPO4•2H2O would precipitate together with struvite under the tested conditions. However, given the obtained ratio, it is likely that other species not predicted by Visual MINTEQ, such as magnesium carbonates or magnesium hydroxide, coprecipitated with struvite. PXRD analysis also revealed that the sample was likely contaminated struvite, although the specific contaminants were not identified. A cost analysis was performed to assess the economic feasibility of incorporating a struvite harvesting system to treat the anaerobically digested sidestream from the Biosolids Management Facility (BMF) within the Northwest Regional Water Reclamation Facility (NWRWRF). Three different scenarios were evaluated. In Scenario (1), Ostara® Nutrient Recovery Technologies Inc. (Ostara®) evaluated the production of struvite from anaerobically digested EBPR sidestream using a fluidized bed reactor. In Scenario (2), Ostara® evaluated the production of struvite in a fluidized bed reactor by employing Waste Activated Sludge Stripping to Remove Internal Phosphorus (WASSTRIP™) on a mixture of post-anaerobic-digestion centrate and pre-digester thickener liquor. Scenario (3) was addressed by Schwing Bioset Inc. (SBI) for a continuously stirred reactor followed by a struvite harvesting system. Scenario (2) offers the highest TP and TN recoveries through WASSTRIP™ release because of the additional mass of phosphorus sent to the phosphorus recovery process. Therefore, although Scenario (2) has the highest total capital cost ($5M), it also has the shortest payback period (18 years).
Scenarios (1) and (3) have similar payback periods (22-23 years) but very different total capital costs. The annual savings from producing struvite in Scenario (3) are $40K, about 30% less than in Scenario (1). This is probably because the only savings considered under Scenario (3) were the lower alum usage and the fertilizer revenue; the savings from producing class A biosolids were not accounted for. Consequently, the reduced total capital cost of $960K and the annual payment per interest period of close to $80K positioned Scenario (3) as the most feasible, considering a 20-year expected asset life at a 5% interest rate.
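As a rough illustration of the ANOVA step the abstract describes, a one-way test on phosphate-removal replicates across Mg2+:PO43- molar ratios might look like the following. All numbers are invented for illustration, not the thesis's measurements:

```python
import numpy as np
from scipy.stats import f_oneway

# hypothetical phosphate-removal percentages (three replicates per
# Mg2+:PO43- molar ratio); illustrative data only
removal = {
    1.0: [62.1, 58.4, 60.9],
    2.0: [74.3, 71.8, 76.0],
    3.0: [85.2, 83.9, 86.5],
    4.0: [97.8, 98.6, 99.1],
}

# one-way ANOVA: does mean removal differ across the tested ratios?
f_stat, p_value = f_oneway(*removal.values())
significant = p_value < 0.05  # reject H0 of equal group means at alpha = 0.05
```

With such well-separated group means, the F statistic is large and the null hypothesis of equal mean removal across ratios is rejected, mirroring the thesis's finding that the Mg2+:PO43- ratio drives phosphate removal.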
APA, Harvard, Vancouver, ISO, and other styles
20

Chastaing, Gaëlle. "Indices de Sobol généralisés par variables dépendantes." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM046.

Full text
Abstract:
A mathematical model aims at characterizing a complex system or process that is too expensive to experiment on. In such a model, often strongly nonlinear, the input parameters can be affected by large uncertainties, including measurement error or lack of information. Global sensitivity analysis is a stochastic approach whose objective is to identify and rank the input variables that drive the uncertainty of the model output. Through this analysis, it is possible to reduce the dimension of the model and the variation in its output. To reach this objective, the Sobol indices are commonly used. Based on the functional ANOVA decomposition of the output, also called the Hoeffding decomposition, they rest on the assumption that the inputs are independent. Our contribution is the extension of Sobol indices to models with dependent inputs. On the one hand, we propose a generalized functional decomposition whose components are subject to specific orthogonality constraints. This decomposition leads to the definition of generalized sensitivity indices able to quantify the contribution of dependent inputs to the model variability. On the other hand, we propose two numerical methods to estimate these indices. The first is well suited to models whose inputs are dependent by pairs. It proceeds by solving a linear functional system involving suitable projection operators. The second method can be applied to much more general models; it relies on the recursive construction of systems of functions satisfying the orthogonality properties of the summands of the generalized decomposition. In parallel, we illustrate the two methods on numerical test cases to assess the efficiency of the techniques.
APA, Harvard, Vancouver, ISO, and other styles
21

Yue, Xiaohui. "Detecting Rater Centrality Effect Using Simulation Methods and Rasch Measurement Analysis." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/28423.

Full text
Abstract:
This dissertation illustrates how to detect the rater centrality effect in a simulation study that approximates data collected in large-scale performance assessment settings. It addresses three research questions: (1) which of several centrality-detection indices are most sensitive to the difference between effect raters and non-effect raters; (2) how accurate (and inaccurate), in terms of Type I error rate and statistical power, each centrality-detection index is in flagging effect raters; and (3) how the features of the data collection design (i.e., the independent variables, including the level of centrality strength, the double-scoring rate, and the numbers of raters and ratees) influence the accuracy of rater classifications by these indices. The results reveal that the measure-residual correlation, the expected-residual correlation, and the standardized deviation of assigned scores perform better than the point-measure correlation. The mean-square fit statistics, traditionally viewed as potential indicators of rater centrality, perform poorly at differentiating central raters from normal raters. Along with the rater slope index, the mean-square fit statistics did not appear to be sensitive to the rater centrality effect. All of these indices provided reasonable protection against Type I errors when all responses were double scored, and higher statistical power was achieved when responses were 100% double scored than when only 10% were. Balancing Type I error against statistical power, I recommend the measure-residual correlation and the expected-residual correlation for detecting the centrality effect, and suggest using the point-measure correlation only when responses are 100% double scored. The four parameters evaluated in the experimental simulations had different impacts on the accuracy of rater classification.
The results show that improving the classification accuracy for non-effect raters may come at the cost of reducing the classification accuracy for effect raters. Simple guidelines summarized from the analyses on the expected impact on classification accuracy when a higher-order interaction exists offer a glimpse of the 'pros' and 'cons' of adjusting the magnitude of the parameters when evaluating their effect on the outcomes of rater classification.
Ph. D.
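Several of the indices compared here reduce, at heart, to measuring how much a rater compresses the score scale. A toy version of the standardized-deviation idea on synthetic 1-7 ratings (the compression rule, cut-off, and all numbers below are illustrative assumptions, not the dissertation's design):

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical ratings on a 1-7 scale: four "normal" raters plus one
# central rater who compresses scores toward the scale midpoint of 4
true_quality = rng.integers(1, 8, size=200)
normal = [np.clip(true_quality + rng.integers(-1, 2, 200), 1, 7)
          for _ in range(4)]
central = np.clip((true_quality - 4) // 2 + 4, 1, 7)  # pulls scores toward 4
ratings = np.vstack(normal + [central])               # rows = raters

# flag raters whose score spread is well below the pool's typical spread
sd = ratings.std(axis=1)
flagged = sd < 0.75 * np.median(sd)  # simple cut-off, not the study's criterion
```

The central rater's compressed distribution gives a markedly smaller standard deviation, so it is the only one flagged; the dissertation's residual-based indices aim to do this while also controlling for ratee ability and item difficulty.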
APA, Harvard, Vancouver, ISO, and other styles
22

Williams, Karen. "Key success factors in managing the visitor experience at the Cape Town International Jazz Festival / Williams K." Thesis, North-West University, 2011. http://hdl.handle.net/10394/7611.

Full text
Abstract:
The event tourism industry is one of the fastest-growing tourism industries worldwide. One type of event that is growing immensely is festivals, especially music festivals such as the Cape Town International Jazz Festival. As a result of this fast growth, it has become crucial for a festival to sustain itself in the marketplace and stay competitive. The Cape Town International Jazz Festival (the Jazz Festival) is a fast-growing music festival and hosts numerous well-known local and international jazz artists, as well as young up-and-coming artists. For this exciting Jazz Festival to keep growing, it needs to be sustainable. To achieve this, the organisers and managers of the Jazz Festival need to know what is important to its visitors, so they can fulfil their needs. This in turn leads to satisfied visitors who will return to the Jazz Festival and keep it sustainable. Generally speaking, music festivals have a more professional management approach than other tourism events and are thus more likely to be successful. Key Success Factors (KSFs) are a precondition for the success of any event and influence its competitiveness in the marketplace. It is imperative for organisers to identify the KSFs that are important to visitors so as to provide them with a satisfactory experience. This will also assist in measuring the achievement of the event's goals and objectives. The main purpose of this study was to determine the KSFs in managing the visitor experience at the Cape Town International Jazz Festival. To reach this goal, the study is divided into two articles. Research for both articles was conducted at the Cape Town International Jazz Festival by randomly distributing 400 questionnaires over the two days of the festival, held on 3 and 4 April 2010.
Article 1 is titled: “Key aspects for efficient and effective management of the Cape Town International Jazz Festival: a visitor’s perspective”. The main purpose of this article was to identify the Key Success Factors in managing the Cape Town International Jazz Festival, to determine what visitors deemed as important when attending the Jazz Festival. A factor analysis was done to achieve this goal. Results indicated that Hospitality Factors, Quality Venues, Information Dissemination, Marketing and Sales, and Value and Quality are the KSFs that are of importance when managing the Jazz Festival. The results of this article provided festival managers with valuable information when organising an event such as the Cape Town International Jazz Festival. Article 2 is titled: “The importance of different Key Success Factors to different target markets of the Cape Town International Jazz Festival based on travel motives”. The main purpose of this article was to determine whether different target markets that are visiting the Jazz Festival, deemed different KSFs as important, depending on their travel motives. An analysis of variance (ANOVA) was done to determine if there were statistically significant differences between the three clusters and the KSFs that they deemed important. Results showed that the three clusters, namely, Escapists, Culture Seekers and Jazz Lovers, deemed different KSFs as important when they are visiting the Jazz Festival. The results of this article gave festival organisers and marketing managers insight as to which markets to focus scarce marketing resources on and which markets to keep growing, as they will sustain the festival in the long term. Therefore, this research revealed the KSFs that are of utmost importance when managing the Cape Town International Jazz Festival, and that these aspects differ for certain markets. 
Organisers therefore need to assess the KSFs so as to provide products that satisfy visitors, so that they return each year and keep the festival competitive and sustainable.
Thesis (M.Com. (Tourism))--North-West University, Potchefstroom Campus, 2012.
APA, Harvard, Vancouver, ISO, and other styles
23

Guder, Christopher S. "Exploring the Relationship between Patron Type, Carnegie Classification, and Satisfaction with Library Services: An Analysis of LibQUAL+® Results." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1354726349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Adnan, Muhammad. "Usability Evaluation of Smart Phone Application Store." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-115606.

Full text
Abstract:
In this study, the usability of smartphone application store apps is evaluated. The study was performed on different smartphone operating systems. Data about usability were gathered through surveys and a think-aloud experiment. An ANOVA was also performed on the data to identify significant issues. Many smartphone users reported issues with installing, locating, and searching for apps; many also had issues with uninstalling apps and navigating search results when looking for apps. The smartphone operating systems and app stores do not provide seamless navigation, and a lot of content is not tailored for smartphone users.
APA, Harvard, Vancouver, ISO, and other styles
25

Varanda, Luciano Donizeti. "Produção e avaliação do desempenho de painéis de partículas de Eucalyptus grandis confeccionados com adição de casca de aveia." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/88/88131/tde-09082012-165028/.

Full text
Abstract:
Wood-based panels are widely used around the world in response to the reduced supply of solid wood in several segments of the timber industry, such as furniture, panels, structures, and other building components. The large volume of waste generated by agro-industry enables the development of alternative, sustainable materials, notably particleboard. This work presents a study of the production and evaluation of particleboard made from Eucalyptus grandis and oat hulls, bonded under pressure with two types of resin (castor-oil-based polyurethane and urea-formaldehyde). The physical-mechanical performance of the panels produced was evaluated according to ABNT NBR 14810:2006. Analysis of variance (ANOVA) was used to evaluate the influence of the adopted factors: Eucalyptus grandis wood, at mass proportions of 70, 85, and 100%; oat hulls, at proportions of 15, 30, and 100%; and adhesive, at proportions of 10, 12, and 14%, as well as their combinations, on each of the response variables (physical and mechanical properties) evaluated. The results showed excellent physical-mechanical properties, in some cases well above the requirements of national and international standards. The good performance of the particleboard produced was thus confirmed, along with its suitability for applications in industries such as furniture, panels, packaging, and building construction.
APA, Harvard, Vancouver, ISO, and other styles
26

Nyberg, Karl-Johan. "Performance Analysis of Detection System Design Algorithms." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/41789.

Full text
Abstract:
Detection systems are widely used in industry. Designers, operators and users of these systems need to choose an appropriate design, based on the intended usage and the operating environment. The purpose of this research is to analyze the effect of various system design variables (controllable) and system parameters (uncontrollable) on the performance of detection systems. To optimize system performance one must manage the tradeoff between two errors that can occur. A False Alarm occurs if the detection system falsely indicates a target is present and a False Clear occurs if the detection system falsely fails to indicate a target is present. Given a particular detection system and a pre-specified false clear (or false alarm) rate, there is a minimal false alarm (or false clear) rate that can be achieved. Earlier research has developed methods that address this false alarm, false clear tradeoff problem (FAFCT) by formulating a Neyman-Pearson hypothesis problem, which can be solved as a Knapsack problem. The objective of this research is to develop guidelines that can be of help in designing detection systems. For example, what system design variables must be implemented to achieve a certain false clear standard for a parallel 2-sensor detection system for Salmonella detection? To meet this objective, an experimental design is constructed and an analysis of variance is performed. Computational results are obtained using the FAFCT-methodology and the results are presented and analyzed using ROC (Receiver Operating Characteristic) curves and an analysis of variance. The research shows that sample size (i.e., size of test data set used to estimate the distribution of sensor responses) has very little effect on the FAFCT compared to other factors. The analysis clearly shows that correlation has the most influence on the FAFCT. 
Negatively correlated sensor responses outperform uncorrelated and positively correlated sensor responses by large margins, especially for strict FC-standards (an FC-standard is defined as the maximum allowed False Clear rate). The FC-standard is the second most influential design variable, followed by grid size. Suggestions for future research are also included.
Master of Science
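The correlation finding can be reproduced in miniature: when two sensor responses are fused by averaging, negatively correlated noise partially cancels, widening the separation between target-present and target-absent score distributions and thus improving the ROC. A sketch under assumed Gaussian noise and a unit mean shift (not the thesis's sensor model or fusion rule):

```python
import numpy as np

rng = np.random.default_rng(1)

def fused_auc(rho, n=2000):
    # two sensors with unit-variance noise at correlation rho, fused by
    # averaging; target presence adds a unit mean shift to both sensors
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    clear = (rng.standard_normal((n, 2)) @ L.T).mean(axis=1)
    target = (rng.standard_normal((n, 2)) @ L.T).mean(axis=1) + 1.0
    # empirical AUC: probability a random target score exceeds a clear score
    return (target[:, None] > clear[None, :]).mean()

auc_neg = fused_auc(-0.9)  # negatively correlated sensor responses
auc_pos = fused_auc(+0.9)  # positively correlated sensor responses
```

The fused noise variance is (1 + rho) / 2, so rho = -0.9 shrinks it to 0.05 while rho = +0.9 leaves it near 1, which is why the negatively correlated pair dominates across the ROC curve.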
APA, Harvard, Vancouver, ISO, and other styles
27

Durrande, Nicolas. "Étude de classes de noyaux adaptées à la simplification et à l’interprétation des modèles d’approximation. Une approche fonctionnelle et probabiliste." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0631/document.

Full text
Abstract:
The framework of this thesis is the approximation of functions whose value is known at a limited number of points. More precisely, we consider the so-called kriging models from two points of view: approximation in reproducing kernel Hilbert spaces and Gaussian process regression. When the function to approximate depends on many variables, the required number of points can become very large, and the interpretation of the obtained models remains difficult because the model is still a high-dimensional function. In light of those remarks, the main part of our work addresses the issue of simplified models by studying a key concept of kriging models, the kernel. More precisely, the following aspects are addressed: additive kernels for additive models, and kernel decomposition for sparse modeling. Finally, we propose a class of kernels that is well suited to functional ANOVA representation and global sensitivity analysis.
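A minimal sketch of the additive-kernel idea: the Gram matrix of an additive kernel is the sum of one-dimensional Gram matrices, and it remains symmetric positive semi-definite, so the resulting GP model decomposes into univariate effects. The squared-exponential sub-kernels and all parameter values below are illustrative assumptions, not the thesis's specific kernel class:

```python
import numpy as np

def se_1d(a, b, lengthscale=1.0, variance=1.0):
    # 1-D squared-exponential kernel between two vectors of scalar inputs
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def k_additive(X, Y):
    # additive kernel: sum of univariate kernels over the input dimensions,
    # so the associated GP model is a sum of one-dimensional functions
    return sum(se_1d(X[:, i], Y[:, i]) for i in range(X.shape[1]))

rng = np.random.default_rng(0)
X = rng.random((20, 3))
K = k_additive(X, X)                  # Gram matrix of the additive kernel
eigmin = np.linalg.eigvalsh(K).min()  # should be (numerically) non-negative
```

Because each summand depends on a single input, the posterior mean of the corresponding kriging model inherits the same additive structure, which is what makes such models easier to interpret in high dimension.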
APA, Harvard, Vancouver, ISO, and other styles
28

Bentil, Sarah A. "A Fractional Zener Constitutive Model to Describe the Degradation of Swine Cerebrum with Validation from Experimental Data and Predictions using Finite Element Analysis." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366254988.

Full text
29

Gonsalez, Camila Gianini [UNESP]. "Metodologias para reconhecimento de padrões em sistemas SHM utilizando a técnica da Impedância Eletromecânica (E/M)." Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/94506.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Researchers around the world are engaged in developing techniques for monitoring the structural health of machinery, vehicles and structures, especially systems whose damage or destruction could cause accidents and disasters. In this context, several non-destructive techniques can be used to monitor these systems, allowing repairs and avoiding major economic and social losses. The electromechanical impedance technique is among the techniques based on the use of piezoelectric materials; it relies on a response curve that is sensitive to small variations in the structure, a characteristic that makes it efficient in detecting incipient damage. However, variations in the ambient or test conditions can cause false diagnoses. Therefore, the current challenge is to apply the electromechanical impedance technique under monitoring conditions closer to the real operating conditions of the systems to be monitored. This work presents two methodologies for SHM systems. The first uses Fuzzy c-means clustering to understand and account for the temperature effect on the electromechanical impedance signals. The second uses analysis of variance (ANOVA) to propose a more robust detection methodology, thereby incorporating random variations in the measurement and acquisition systems without compromising the SHM diagnosis.
30

Markusson, Lisa. "Powder Characterization for Additive Manufacturing Processes." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-62683.

Full text
Abstract:
The aim of this master thesis project was to statistically correlate various powder characteristics with the quality of additively manufactured parts. An additional goal of this project was to find a potential second-source supplier of powder for GKN Aerospace Sweden in Trollhättan. Five Inconel® alloy 718 powders from four individual powder suppliers have been analyzed in this project regarding powder characteristics such as morphology, porosity, size distribution, flowability and bulk properties. One of the five powders, Powder C, is currently used in production at GKN and functions as a reference. The five powders were additively manufactured by the process of laser metal deposition according to a pre-programmed model utilized at GKN Aerospace Sweden in Trollhättan. Five plates were produced per powder and each was cut to obtain three area sections to analyze, giving a total of fifteen area sections per powder. The quality of the deposited parts was assessed by means of their porosity content, powder efficiency, geometry and microstructure. The final step was to statistically evaluate the results with the analysis methods of Analysis of Variance (ANOVA) and simple linear regression, using the software Minitab. The ANOVA found a statistically significant difference between the five powders regarding their experimental results, which made it possible to compare the five powders against each other. Statistical correlations by simple linear regression analysis were found between various powder characteristics and the quality of the deposited part. This led to the conclusion that GKN should consider adding powder characteristics such as particle morphology, powder porosity and rheometer flowability measurements to its current powder material specification. One powder, Powder A, was found to have the potential of becoming a second-source supply for GKN.
Powder A had overall good powder properties, such as smooth and spherical particles, high particle density at 99.94% and good flowability. The parts deposited with Powder A also showed fewer pores than those made with Powder C, a total of 78 across all five plates, and a sufficient powder efficiency of 81.6%.
31

Anday, Tekie T. "Application of ANOVA for the analysis of temporal and spatial differences in the length of pelagic goby preyed on by Cape fur seals in the coasts of Namibia." Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/4369.

Full text
Abstract:
Includes bibliographical references (leaves 62-66).
Analysis of variance is a robust technique whereby the total variation present in a set of data is partitioned into two or more components (Wayne, 1999). In this thesis, ANOVA was used to uncover differences in the length of goby preyed on by three different colonies of fur seals on the Namibian coast. Moreover, ANOVA was used to investigate temporal differences in the lengths of goby preyed on by fur seals at each seal-colony location. Results of the analysis are shown in the analysis and results section, and the findings are discussed in the discussion section. These two sections are preceded by three others: the first is a general introduction that explains the general situation and the targets of this thesis; the second gives a general background on the ANOVA technique; and the third explains the nature of the data and gives background information on gobies.
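The variance partitioning that this abstract describes can be sketched numerically. The following Python snippet is a minimal illustration of one-way ANOVA, using hypothetical goby-length samples for three colonies (the group means, spreads and sample sizes are illustrative assumptions, not figures from the thesis): it verifies the identity SS_total = SS_between + SS_within and computes the F statistic from the corresponding mean squares.

```python
import numpy as np

# Hypothetical goby lengths (cm) for three seal colonies; values are
# illustrative only and do not come from the thesis data.
rng = np.random.default_rng(42)
groups = [
    rng.normal(10.0, 1.0, 30),  # colony 1
    rng.normal(10.5, 1.0, 30),  # colony 2
    rng.normal(12.0, 1.0, 30),  # colony 3
]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# Total variation splits into between-group and within-group components:
# SS_total = SS_between + SS_within
ss_total = ((all_obs - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The F statistic is the ratio of the between- and within-group mean squares.
k, n = len(groups), len(all_obs)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that mean goby length differs between colonies.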
32

Marais, Michellé. "Key success factors of managing a wine festival / Michelle Marais." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4266.

Full text
Abstract:
Wine tourism is very much an "experience", be it the wine, the destination or the opportunity to learn and "grow". Competitive positioning of wine tourism regions has become a strategically important issue, as the number of wine festivals has increased considerably and numerous regions are now marketing aggressively to attract high-yield wine tourists. The Wacky Wine Festival is one of the most unique and popular wine festivals in South Africa, and is the biggest regional wine festival. Managers of the Wacky Wine Festival need to know what visitors see as the important key success factors for managing a wine festival. This encourages competitiveness and helps the festival remain sustainable over the long term of its product lifecycle. When managing a wine festival, managers also need to identify whether different visitor groups have different perceptions of the managerial aspects. Key success factors (KSFs) are a prerequisite for the success of any organisation. KSFs concern what every manager within the tourism industry must be competent at doing, or must concentrate on achieving, to be competitively and financially successful. KSFs are aspects which influence the organisation's ability to thrive in the marketplace. It is important to identify key success factors, as these will assist a business in measuring achievements and indicating the progress the business is making towards achieving certain targets. The main purpose of this study was therefore to determine key success factors for managing a wine festival by identifying what visitors to the Wacky Wine Festival view as important managerial aspects (KSFs). To reach the above-mentioned goal, the study is divided into two articles. Research for both articles was undertaken at the Wacky Wine Festival. Questionnaires were interviewer-administered and distributed randomly during the course of the festival at different wine farms.
In total, 424 questionnaires were completed during the visitor survey from 3-7 June 2009. Article 1 is titled "Aspects concerning effective and efficient management of the Wacky Wine Festival". The main purpose of this article was to identify the key success factors in managing the Wacky Wine Festival, in order to determine what people visiting the festival view as important. A factor analysis was used as the instrument for achieving this goal. Results indicated that quality and good management, wine farm attributes, effective marketing, route development, festival attractiveness, entertainment and activities, and accessibility are the key success factors that are important when managing a wine festival. These results generated strategic insights into what managers need to focus on when organising and managing a wine festival such as the Wacky Wine Festival. Article 2 is titled "A management appraisal of the Wacky Wine Festival". The main purpose of this article was to identify why a management appraisal is important when managing a wine festival. An analysis of variance (ANOVA) was used to determine whether significant differences occurred between the different visitor groups of the wine festival in their perceptions of the managerial aspects. Results revealed three different visitor groups that visit the Wacky Wine Festival, namely the festinos, the epicureans and the social adventurers, and each group agreed or disagreed on which managerial aspects they find most important when managing the Wacky Wine Festival. Some of the factors were found significant, namely quality and good management, effective marketing, and entertainment and activities; hence managers of the wine festival need to regard these key success factors as important to focus on. This research therefore revealed the key success factors for efficient management of the festival.
The three types of visitor groups that visit the Wacky Wine Festival were thus identified as the festinos, the epicureans and the social adventurers. Research also indicated that specific markets have different evaluations concerning the importance of management aspects in ensuring success.
Thesis (M.Com. (Tourism))--North-West University, Potchefstroom Campus, 2010.
33

Copeland, Matthew Blair. "Learner Modal Preference and Content Delivery Method Predicting Learner Performance and Satisfaction." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862858/.

Full text
Abstract:
The purpose of the study was to investigate how the online, computer-based learner's personal learning profile (Preference), the content delivery method supplemented with visual content based on Neil Fleming's VARK (visual, aural, read/write, kinesthetic) model (Content), and the interaction of Preference and Content influenced learner performance (Performance) and/or learner self-reported satisfaction (Satisfaction). Participants were drawn from a population of undergraduates enrolled in a large public southwestern research university during the fall 2015 semester. The 165 student participants (13.79% completion rate) comprised 52 (31.5%) females and 113 (68.5%) males age 18-58+ years, with 126 (76.4%) age 18-24 years. For race/ethnicity, participants self-identified as 1 (0.66%) American Indian/Alaska Native, 21 (12.7%) Asian/Pacific Islander, 27 (16.4%) Black, non-Hispanic, 28 (17%) Hispanic, 78 (47.3%) White, non-Hispanic, and 10 (6.1%) other. Reported socioeconomic status was 22 (13.3%) withheld, 53 (32.1%) did not know, 45 (27.3%) low, 13 (7.9%) moderately low, 16 (9.7%) middle, 8 (4.8%) upper middle, and 8 (4.8%) upper. This causal-comparative and quasi-experimental, mixed-method, longitudinal study used researcher-developed web-based modules to measure Performance and Satisfaction, and used the criterion p < .05 for statistical significance. A two-way, 4 x 3 repeated measures (Time) analysis of variance (RM-ANOVA) using Preference and Content was statistically significant for each Performance measure over Time, and for two Satisfaction measures over Time. The RM-ANOVA showed a statistically significant between-subjects main effect on Performance for read/write modality Content compared to aural and kinesthetic Content. There were no statistically significant main effects observed for Satisfaction. A Pearson r correlation analysis showed that participants who were older, married, and of higher socioeconomic status performed better.
The correlation analysis also showed that participants who performed better reported greater likelihood to take online courses in the future, higher motivation, sufficient time and support for studies, and sufficient funding for and access to the Internet. The study results suggested that regardless of Preference, using read/write modality Content based on the VARK model while maintaining the verbal language can yield better Performance outcomes. The study results also suggested that while maintaining the verbal language, Preference, and Content based on the VARK model do not distinguish learner Satisfaction outcomes. However, because Satisfaction has been shown to impact Performance, efficacy, and retention, it matters to educational institutions. Future research should consider more granular models and factorial research methods, because models that utilize a single representative construct score can mask effects when analyzing Performance and Satisfaction.
34

Joubert, Ronel. "Factors influencing the degree of burnout experienced by nurses working in neonatal intensive care units." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20217.

Full text
Abstract:
Thesis (MCur)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Burnout is one of the challenges that nurses are faced with in their stressful and rapidly changing work environment. The vulnerability of nurses to burnout remains a major concern which affects both the individual and the institution. Knowledge about burnout and the associated risk factors which influence its development is vital for early recognition and intervention. The research question which guided this study was: "What are the factors influencing the degree of burnout experienced by nurses working in neonatal intensive care units?" The objectives included determining which physical, psychological, social and occupational factors influenced the degree of burnout experienced by nurses. A descriptive, explorative research design with a quantitative approach was applied. The target population consisted of (n=105) permanent nursing staff members working in the neonatal units of two different hospitals. A convenience sampling method was used. Participants (n=102) who gave voluntary consent to participate were included in the study. Validity and reliability were supported through the use of a validated questionnaire, the Maslach Burnout Inventory – General Survey, including a section based on demographic information and a section based on physical, psychological, social and occupational factors. Validity of the questionnaire was supported by the use of a research methodologist, a nurse expert and a statistician in the particular field. A pilot study was done to test the feasibility of the study and to test the questionnaire for any errors and ambiguities. Ethics approval was obtained from Stellenbosch University, and permission from the heads of the hospitals where the study was conducted. The data were analyzed with the assistance of a statistician and are presented in histograms, tables and frequencies.
The relationship between response variables and nominal input variables was analysed using analysis of variance (ANOVA). Various statistical tests, such as the Spearman test, were applied to determine statistical associations between variables, using a 95% confidence interval. Results showed that participants experienced an average level of emotional exhaustion, a high level of professional efficacy and a low level of cynicism. Further analyses showed a statistically significant difference between emotional exhaustion and the rank of the participant (p<0.01), highest qualification (p=0.05) and a high workload (p=0.01). Furthermore, a statistically significant difference was found between professional efficacy and the rank of participants (p<0.01). In addition, a statistically significant difference was found between cynicism and the number of years participants had been in the profession (p=0.05). Multiple factors that influence the degree of burnout nurses experience were identified in this study. The majority of participants (n=56/55%) experienced decreased job satisfaction and accomplishment, (n=52/51%) of participants experienced their workload as too heavy, and (n=63/62%) received no recognition for their work. Recommendations are based on preventative measures, because preventing burnout is easier and more cost-effective than resolving it once it has occurred. In conclusion, prevention strategies, early recognition of work stress and appropriate interventions are crucial in addressing the problem of burnout.
35

Gonsalez, Camila Gianini. "Metodologias para reconhecimento de padrões em sistemas SHM utilizando a técnica da Impedância Eletromecânica (E/M) /." Ilha Solteira : [s.n.], 2012. http://hdl.handle.net/11449/94506.

Full text
Abstract:
Orientador (advisor): Vicente Lopes Junior. Banca (examining committee): Samuel Silva, Michael John Brennan, Carlos Alberto Cimini Junior.
Researchers around the world are engaged in developing techniques for monitoring the structural health of machinery, vehicles and structures, especially systems whose damage or destruction could cause accidents and disasters. In this context, several non-destructive techniques can be used to monitor these systems, allowing repairs and avoiding major economic and social losses. The electromechanical impedance technique is among the techniques based on the use of piezoelectric materials; it relies on a response curve that is sensitive to small variations in the structure, a characteristic that makes it efficient in detecting incipient damage. However, variations in the ambient or test conditions can cause false diagnoses. Therefore, the current challenge is to apply the electromechanical impedance technique under monitoring conditions closer to the real operating conditions of the systems to be monitored. This work presents two methodologies for SHM systems. The first uses Fuzzy c-means clustering to understand and account for the temperature effect on the electromechanical impedance signals. The second uses analysis of variance (ANOVA) to propose a more robust detection methodology, thereby incorporating random variations in the measurement and acquisition systems without compromising the SHM diagnosis.
Mestre (Master's degree).
36

Höög, Andrée, and Christoffer Wrangenby. "Studenters förhållning till Javas kodkonventioner inom högskoleingenjörsutbildningar i Sverige - En komparativ studie." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20454.

Full text
Abstract:
Getting acquainted with other developers' code can be difficult and time consuming. Code conventions are developed to facilitate maintenance work for other developers working on the same project, since the person who wrote a program is rarely the one who later maintains it. Too little attention is paid to the quality aspect in the development phase of projects, which costs companies money and resources. This study investigates how students studying for a Bachelor of Science in Engineering in Computer Science at various universities in Sweden relate to Java's code conventions, and whether a significant difference in code quality can be discerned between these universities in terms of identifiers, comments, and format and structure. An online survey was conducted whose empirical data was analyzed using ANOVA, a recognized method for statistical hypothesis testing. 21 of 23 ANOVA tests showed that no significant difference existed between the institutions in terms of identifiers, comments, and format and structure. The results also showed that the quality aspects the students consider most important, and prioritize spending time on, are also the aspects of which they appear to have the greatest understanding and knowledge.
37

Forsberg, Niklas. "What affects the tear strength of paperboard? : Consequences of unbalance in a designed experiment." Thesis, Karlstads universitet, Handelshögskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-63985.

Full text
Abstract:
This essay covers a designed experiment on paperboard where the quality under study is tear strength along and across the machine direction. The objective is to examine the consequences that loss of balance in a designed experiment has for the explanatory power of the proposed empirical model. As it happened, the trial plan did not go as planned: the first run caused a disruption of the paperboard in the machine. The company's decision was to raise the low level of one of the design factors to prevent this from happening again. The consequence was an alteration of the design during ongoing experimentation, which in turn affects which analysis approaches are appropriate for the problem. Three different approaches for analyzing the data are presented, each with different propositions on how to deal with the complication that occurred. The answer to the research question is that the ability of the empirical model to discover significant effects is moderately weakened by the loss of one run (out of eight in total). The price paid for retrieving less information from the experiment is that the empirical model for tear strength across does not deem the effects significant at the same level as the candidate model with eight runs. Instead of concluding that the main effect and the interaction effect in question are significant at the 2% and 4% levels, respectively, we must now settle for deeming them significant at the 6% and 7% levels.
38

Varanda, Luciano Donizeti. "Painéis de alta densidade para aplicação em pisos: produção e avaliação de desempenho." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18158/tde-16112016-174123/.

Full text
Abstract:
Matters related to the development of new materials have been increasingly addressed and discussed in a context where issues such as the environment, society, energy savings and waste recovery are becoming relevant. In this scenario, it is necessary to study alternative inputs for the production of wood floors, both to reduce the consumption of tropical species and to meet the growing demand for wood in this industrial segment. The aim of this study was to produce high-density homogeneous particleboard panels from Pinus elliottii wood waste and oat hulls (Avena sativa), bonded under pressure with two types of adhesive, castor oil-based polyurethane and melamine formaldehyde, at contents of 11 and 13%, and to evaluate the physical-mechanical performance of such panels for use in floors. The physical-mechanical performance of the panels (Planning I - 20 treatments) was evaluated based on the ABNT NBR 14810 (2006 and 2013) standards. An analysis of variance (ANOVA) was conducted to test the influence of the individual factors (oat hull content, adhesive content and adhesive type), as well as the interactions between these factors (two by two and three by three), on the physical-mechanical properties of the panels. Performance as flooring was also evaluated, both for the panels (Planning II - 12 treatments) and for three tropical wood species (Angelim Vermelho, Dinizia excelsa; Cumaru, Dipteryx odorata; and Jatobá, Hymenaea sp.), according to various standards related to wood floors. The results showed that, in some treatments, the physical-mechanical properties of the panels exceeded the requirements stipulated by national and international standards. Regarding flooring performance, the panels performed similarly to the three wood species in most of the evaluated properties. Mercury intrusion porosimetry analysis confirmed the similarity between the panels (from Planning II) and the three wood species evaluated, demonstrating the potential of the panels produced for use in the engineered flooring industry.
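The factorial design described above (oat hull content × adhesive content × adhesive type, with two-way and three-way interactions) can be illustrated with a minimal effect-estimation sketch. The sketch below takes one hypothetical 2×2 slice (adhesive type × adhesive content) with invented MOR values; none of the numbers come from the thesis:

```python
# Hypothetical sketch: estimating main effects and the interaction in a
# 2x2 factorial slice of the panel study (adhesive type x adhesive content).
# The MOR values below are invented for illustration, not thesis data.

def cell_mean(values):
    return sum(values) / len(values)

# cells[(adhesive_type, adhesive_pct)] -> replicate measurements (MOR, MPa)
cells = {
    ("castor_PU", 11): [18.2, 17.9, 18.5],
    ("castor_PU", 13): [19.6, 19.9, 19.4],
    ("melamine", 11): [16.8, 17.1, 16.5],
    ("melamine", 13): [17.5, 17.2, 17.8],
}

m = {k: cell_mean(v) for k, v in cells.items()}

# Main effect of adhesive content: mean response at 13% minus mean at 11%
effect_pct = ((m[("castor_PU", 13)] + m[("melamine", 13)]) / 2
              - (m[("castor_PU", 11)] + m[("melamine", 11)]) / 2)

# Main effect of adhesive type: castor PU minus melamine
effect_type = ((m[("castor_PU", 11)] + m[("castor_PU", 13)]) / 2
               - (m[("melamine", 11)] + m[("melamine", 13)]) / 2)

# Interaction: does the content effect differ between adhesive types?
interaction = ((m[("castor_PU", 13)] - m[("castor_PU", 11)])
               - (m[("melamine", 13)] - m[("melamine", 11)])) / 2

print(effect_pct, effect_type, interaction)  # effects in MPa (invented data)
```

A full analysis would fit all three factors and their interactions in an ANOVA (e.g., with statsmodels' `anova_lm`); the contrasts above are just the building blocks of those effect terms.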
APA, Harvard, Vancouver, ISO, and other styles
39

Lundeberg, Kirsten Marie. "A Comparison of Three Groups of Undergraduate College Males--Physically Abusive, Psychologically Abusive, and Non-Abusive: a Quantitative Analysis." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/35155.

Full text
Abstract:
This study compares three groups of undergraduate college males in heterosexual dating relationships: those who are physically and psychologically abusive (n=39), those who are solely psychologically abusive (n=44), and those who are non-abusive (n=34). These three groups are compared along the following variables: self-reported history of experiencing family-of-origin violence; self-reported history of witnessing family-of-origin violence; level of self-reported impulsivity; level of self-reported satisfaction with life; level of self-reported alcohol use; level of self-reported relationship satisfaction; and amount of self-reported anger management skill. An analysis of variance (ANOVA) revealed significant main effects among the three groups of males along several of the variables examined (Wilks' Lambda F = 4.80, df = 10, 220, p < .001). Post hoc tests revealed significant differences among the three groups of males. This study revealed that these three groups differ significantly in their levels of alcohol use (F = 10.16, p < .001), their reported levels of relationship satisfaction (F = 4.23, p < .05), and their levels of anger management skills (F = 14.56, p < .001). This information can be helpful to clinicians and educators who are working with college populations. It would seem that psychoeducation might be useful for some of these men so that they might develop alternatives to violence and decrease the risk factors associated with the perpetration of relationship violence. Intervening early and effectively with these dating relationships can be a substantive step towards preventing the escalation and maintenance of violence in relationships.
Master of Science
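The F statistics reported above (e.g., F = 10.16 for alcohol use) come from one-way ANOVAs across the three groups. As an illustration only, the pure-Python sketch below computes such an F from invented scores; it is not the study's data or code:

```python
# Minimal sketch of a one-way ANOVA F statistic for three independent
# groups, as used to compare the physically abusive / psychologically
# abusive / non-abusive groups. The scores below are invented.

def one_way_anova_f(*groups):
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

f = one_way_anova_f([1, 2, 3], [2, 3, 4], [4, 5, 6])
print(f)  # approximately 7.0 for these toy scores
```

The resulting F is then compared against an F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value.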
APA, Harvard, Vancouver, ISO, and other styles
40

Bach, Sébastien. "Cadres méthodologiques pour la conception innovante d'un Plan énergétique durable à l'échelle d'une organisation : application d'une planification énergétique pour une économie compétitive bas carbone au Sonnenhof." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAD027/document.

Full text
Abstract:
Companies, and more generally organizations, are confronted with climate and economic issues and must comply with a legal framework and orientations defined at larger scales (regional, national and international). An organization usually knows the goal or objective to be achieved; the way to reach it, however, may require learning or even research. The goal of this thesis is to provide a methodology that organizations can use for the strategic management of their energy transition projects. From several states of the art, on energy planning and design in particular, we highlight the methodological deficit an organization has to face: while approaches and tools exist once a problem is clearly identified, how can one or several problems be identified from only a formulation of goals or intentions? The first part proposes an energy planning approach at the organizational scale that brings out, in a structured way, the problems the organization may face. Our approach is based on greenhouse gas emission assessments and energy/GHG management methods, complemented by design approaches and tools. These facilitate the consolidation of the information and data required to formulate and structure the problems to be solved. At the end of this approach, some problems are formulated as contradictions and conflicts. The approach developed is purely qualitative and suited to group work with experts. However, some numerical data describe system behaviors that are poorly mastered by project stakeholders. The second part proposes a method combining simulation and data analysis to identify the objective and cause contradictions that can, or appear to, prevent the objectives from being achieved. These contradictions are formulated so that they can be handled with inventive problem-solving methods. The identification of objective contradictions is based on transforming the experimental or simulation responses of the studied systems into binary qualitative data and on identifying the Pareto-optimal points of the transformed data. Cause contradictions concern the design factors or parameters that induce objective conflicts. We propose to identify them with a binary discriminant analysis method based on supervised learning combined with ANOVA. On a case study, we show how to integrate this approach into the one presented in part 1 of the thesis, and how to use it to obtain solution concepts in a multi-objective context (reduction of energy consumption, GHG emissions, cost, etc.)
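The objective-contradiction step described above (binary "objective satisfied" flags, then identification of the Pareto optima) can be sketched as a simple non-dominated filter. The data below are invented placeholders, not results from the thesis:

```python
# Hedged sketch of the Pareto-filtering step: simulation responses are
# first mapped to binary "objective satisfied" flags (1 = satisfied),
# then only non-dominated configurations are kept. Data are invented.

def dominates(a, b):
    """a dominates b if a is at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Each tuple: (energy objective met, GHG objective met, cost objective met)
configs = [(1, 0, 0), (1, 1, 0), (0, 0, 1), (0, 1, 1), (0, 0, 0)]
front = pareto_front(configs)
print(front)  # [(1, 1, 0), (0, 1, 1)]
```

Configurations that survive the filter but still fail some objective are exactly where conflicting objectives, and hence candidate contradictions, show up.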
APA, Harvard, Vancouver, ISO, and other styles
41

Laird, Daniel T. "Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596393.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
Over the past several years the DoD has imposed constraints on test deliverables, requiring objective measures of test results, i.e., statistically defensible test and evaluation (SDT&E) methods and results. These constraints force the tester to employ statistical hypotheses, analyses and perhaps modeling to assess test results objectively, i.e., based on statistical metrics, probability of confidence and logical inference, to supplement rather than rely solely on expertise, which is too subjective. Experts often disagree on interpretation; numbers, although interpretable, are less variable than opinion. Logic, statistical inference and belief are the bases of testable, repeatable and refutable hypotheses and analyses. In this paper we apply linear regression modeling and analysis of variance (ANOVA) to time-space position information (TSPI) to determine whether a telemetry (TM) antenna control unit (ACU) under test (AUT) tracks statistically, and thus as efficiently, in C-band while receiving both C- and S-band signals. Together, regression and ANOVA compose a method known as analysis of covariance (ANCOVA). In this, the second of three papers, we use data from a range test but make no reference to the systems under test, nor to the causes of error. The intent is to present examples of tools and techniques useful for SDT&E methodologies in testing.
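The ANCOVA idea sketched above (regression plus ANOVA) amounts to comparing a regression on the covariate alone against one that adds a group indicator, and F-testing the improvement in fit. The following pure-Python sketch uses invented tracking-error numbers, not range data, and a generic least-squares solver:

```python
# Hedged ANCOVA sketch: regress tracking error on time (the covariate),
# add a band indicator, and F-test whether the indicator improves the fit.
# All numbers below are invented toy data, not the paper's measurements.

def lstsq(X, y):
    """Solve the normal equations X^T X b = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                      # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def sse(X, y, beta):
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, xi))) ** 2
               for xi, yi in zip(X, y))

t = [0, 1, 2, 3, 0, 1, 2, 3]                 # covariate (time samples)
g = [0, 0, 0, 0, 1, 1, 1, 1]                 # 0 = S-band, 1 = C-band indicator
y = [0.1, 1.0, 2.1, 2.9, 1.2, 2.0, 3.1, 4.0]  # tracking error (toy units)

X_red = [[1.0, ti] for ti in t]               # reduced model: covariate only
X_full = [[1.0, ti, gi] for ti, gi in zip(t, g)]  # full model: + band term
sse_r = sse(X_red, y, lstsq(X_red, y))
sse_f = sse(X_full, y, lstsq(X_full, y))
n, p_full = len(y), 3
F = (sse_r - sse_f) / (sse_f / (n - p_full))
print(F > 1.0)  # True for this toy data: the band indicator clearly helps
```

In practice one would compare F against the F(1, n − 3) distribution to decide, at a chosen confidence level, whether the two bands track differently.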
APA, Harvard, Vancouver, ISO, and other styles
42

Balachandra, P. "Rational Supply Planning In Resource Constrained Electricity Systems." Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/200.

Full text
Abstract:
Electricity is the most preferred source of energy, because of its quality and convenience of use. It is probably one of the most vital infrastructural inputs for the economic development of a country. Indeed, it is the fulcrum which can leverage the future pace of growth and development. These reasons have made the electric power industry one of the fastest growing sectors in most developing countries, and particularly in India. Therefore it is not surprising to observe the economic growth of a country being related to the increase in electricity consumption. In India, the growth rate of demand for power is generally higher than that of Gross Domestic Product (GDP). However, to achieve this kind of growth in electricity supply, the capital investments required are very large. Even though the electricity sector generally gets a major share of the budgetary allocations in India, this is inadequate to add the required quantum of new generation capacity to keep pace with the increase in demand for electricity. Additional constraints like capital scarcity in the public sector, lack of enthusiasm among private and foreign investors, and strong opposition from environmentalists have further contributed to this slow pace of new generating capacity addition. This has resulted in severely constrained systems in India. The main focus of the present research work is on the development of an integrated approach for electricity planning using a mathematical modeling framework in the context of resource constrained systems. There are very few attempts in the literature to integrate short, medium and long term issues in electricity planning. This is understandable from the point of view of unconstrained electricity systems, where this type of integration is unnecessary since such systems have the luxury of surplus capacity to meet the current demand and capacity additions are required only for meeting the predicted future increase in demand.
However, in the case of constrained electricity systems, which are characterized by shortages, this kind of integration is essential. These systems have to manage with inadequate capacity in the present, plan capacity additions to bridge the existing gap and to meet future increases in demand, and always explore the possibility of adding capacity with short gestation periods. The integrated approach is expected to achieve effective supply-demand matching on a continuous basis, encompassing both the short term and long term horizons. To achieve this, we have considered three alternatives: existing supply, new supply and non-supply (rationing) of electricity. The electricity system of the state of Karnataka, which is severely constrained by both limited capital and energy resources, has been selected for this purpose. As a first step, the supply and demand situation has been studied in the context of resource constraints. In terms of supply, both existing and future additions are studied in detail with respect to the potential created, generation types, import potential, technical constraints, energy and power shortages, planned and proposed capacity additions by both public and private sectors, etc. The demand patterns have been studied by introducing a new concept of "Representative Load Curves (RLCs)". These RLCs are used to model the temporal and structural variations in demand for electricity. Also, appropriate non-supply options (rationing measures) for effective management of shortages are identified. Incorporating this information, an integrated mathematical model, which is expected to generate a target plan for detailed generation scheduling exercises and a requirement plan for regular generation expansion planning, has been developed. The other important alternative, "Demand-Side Management (DSM)", which could be considered an effective option to achieve efficient supply-demand matching, has not been included in the present research work.
The major reason for not including the DSM alternatives is the difficulty in integrating them in the modelling approach adopted here. In the present approach we have used typical daily load curves (RLCs) to represent the demand for electricity. These are aggregate load curves and do not contain any sector-wise or end-use-wise details. On the other hand, DSM alternatives are end-use focused. To incorporate DSM alternatives, we would need information on end-use-wise power demand (kW or MW), savings potential, time-of-use, etc. For this purpose it may be required to have end-use-wise daily load curves. This information is not available and a separate detailed survey may be required to generate these load curves. This, we felt, is out of the scope of the present research work and a separate study may be required to do it. Therefore, we restricted our focus to supply planning alone. A detailed literature review was conducted to understand different types of modeling approaches to electricity planning. For the present study, however, the review of literature has been restricted to methods of generation expansion planning and scheduling. In doing so, we attempted to bring out the differences among various approaches in terms of the solution methods adopted, the alternatives included and the modifications suggested. Also, we briefly reviewed the literature on models for power and energy rationing, because the management of shortages is an important aspect of the present study. Subsequently, a separate section is devoted to an overview of the non-supply of electricity and its economic impacts on consumers. We found that the low reliability of the electrical system is an indicator of the existence of severe shortages of power and energy, which cause non-supply of electricity to consumers. The overview also presents a discussion of the reasons for non-supply of electricity, and the types of non-supply options the utilities adopt to overcome these shortages.
We also attempted to explain what we mean by non-supply of electricity, what are its cost implications, and the methods available in the literature to estimate these costs. The first objective of the research pertains to the development of a new approach to model the varying demand for electricity. Using the concept of Representative Load Curves (RLCs) we model the hourly demand for a period of four years, 1993-94, 1994-95, 1995-96 and 1996-97, to understand the demand patterns of both unconstrained and constrained years. Multiple discriminant analysis has been used to cluster the 365 load curves into nine RLCs for each of the four years. The results show that these RLCs adequately model the variations in demand and bring out the distinctions in the demand patterns existed during the unconstrained and constrained years. The demand analysis using RLCs helped to study the differences in demand patterns with and without constraints, impacts of constraints on preferred pattern of electricity consumption, success of non-supply options in both reducing the demand levels and greatly disturbing the electricity usage patterns. Multifactor ANOVA analyses are performed to quantify the statistical significance of the ability of the logically obtained factors in explaining the overall variations in demand. The results of the ANOVA analysis clearly showed that the considered factors accounted for maximum variations in demand at very high significance levels. It also brought out the significant influence of rationing measures in explaining the variations in demand during the constrained years. Concerning the second objective, we explained in detail, the development of an integrated mixed integer-programming model, which we felt is appropriate for planning in the case of resource constrained electricity systems. 
Two types of integrations are attempted (i) existing supply, non-supply and new supply options for dynamically matching supply and demand, (ii) operational and strategic planning in terms of providing target plans for the former and requirement plans for the latter. Broadly, the approach addresses the effective management of existing capacity, optimal rationing plan to effectively manage shortages and rationally decide on the new capacity additions both to bridge the existing gap between supply and demand, and to meet the future increases in demand. There is also an attempt to arrive at an optimal mix of public and private capacity additions for a given situation. Finally, it has been attempted to verify the possibility of integration of captive generation capacity with the grid. Further, we discussed in detail about the data required for the model implementation. The model is validated through the development of a number of scenarios for the state of Karnataka. The base case scenario analyses are carried out for both the unconstrained and constrained years to compare the optimal allocations with actual allocations that were observed, and to find out how sensitive are the results for any change in the values of various parameters. For the constrained years, a few more scenarios are used to compare the optimal practice of managing shortages with to what has been actually followed by the utility. The optimal allocations of the predicted demand to various existing supply and non-supply options clearly showed that the actual practice, reflected by the actual RLCs, are highly ad hoc and sub-optimal. The unit cost comparisons among different scenarios show that the least cost choice of options by the utility does not necessarily lead to good choices from the consumers’ perspective. Further, a number of future scenarios are developed to verify the ability of the model to achieve the overall objective of supply-demand matching both in the short and long term. 
For this purpose both the short horizon annual scenarios (1997-98 to 2000-01) and long horizon terminal year scenarios (2005-06 and 2010-11) are developed assuming capacity additions from only public sector. Overall, the results indicated that with marginal contributions from non-supply options and if the public sector generates enough resources to add the required capacity, optimal matching of supply and demand could be achieved. The scenario analyses also showed that it is more economical to have some level of planned rationing compared to having a more reliable system. The quantum of new capacity additions required and the level of investments associated with it clearly indicated the urgent need of private sector participation in capacity additions. Finally, we made an attempt to verify the applicability of the integrated model to analyse the implications of private sector participation in capacity additions. First, a number of scenarios are developed to study the optimal allocations of predicted hourly demand to private capacity under different situations. Secondly, the impacts of privatisation on the public utility and consumers are analysed. Both short term and long term scenarios are developed for this purpose. The results showed the advantage of marginal non-supply of electricity both in terms of achieving overall effective supply-demand matching and economic benefits that could be generated through cost savings. The results also showed the negative impacts of high guarantees offered to the private sector in terms of the opportunity costs of reduced utilization of both the existing and new public capacity. The estimates of unit cost of supply and effective cost of supply facilitated the relative comparison among various scenarios as well as finding out the merits and demerits of guarantees to private sector and non-supply of electricity. 
The unit cost estimates are also found to be useful in studying the relative increase in electricity prices for consumers on account of privatization, guarantees and reliable supply of electricity. Using the results of the scenario analyses, likely generation expansion plans till the year 2010-11 are generated. The analyses have been useful in providing insights into fixing the availability and plant load factors for the private sector capacity. Based on the analysis, the recommended range for the plant utilization factor is 72.88 - 80.57%. The estimated generation losses and the associated economic impacts of backing down of existing and new public capacity on account of guarantees offered to the private sector are found to be significantly high. The analyses also showed that the backing down might take place mainly during nights and the low demand periods of the monsoon and winter seasons. Other impacts of privatization that were studied are the increased number of alternatives for the utility to buy electricity for distribution and the associated increase in its cost of purchase. Regarding consumers, the major impact could be a significant increase in expected tariffs. The major contributions of this thesis are summarized as follows:
i. The integrated approach to electricity planning reported here is unique in the sense that it considers options available under various alternatives, namely, existing supply, non-supply and new supply. This approach is most suited for severely constrained systems having to manage with both energy and capital resource shortages.
ii. The integration of operational and strategic planning, with coherent target plans for the former and requirement plans for the latter, bridges the prevailing gap in electricity planning approaches.
iii. The concept of Representative Load Curves (RLCs), which is introduced here, captures the hourly, daily and seasonal variations in demand. Together, all the RLCs developed for a given year are expected to model the hourly demand patterns of that year. These RLCs are useful for planning in resource constrained electricity systems and in situations where it is required to know the time variations in demand (e.g. supply-demand matching, seasonal scheduling of hydro plants and maintenance scheduling). RLCs are also useful in identifying the factors influencing variations in demand. This approach overcomes the limitations of the current method of representation in the form of static and aggregate annual load duration curves.
iv. A new term, "non-supply of electricity", has been introduced in this thesis. A brief overview of non-supply presented here includes reasons for non-supply, types of non-supply, methods to estimate the cost of non-supply and factors influencing these estimates.
v. The integrated mixed integer programming model developed in the study has been demonstrated as a planning tool for:
• Optimal hourly and seasonal scheduling of various existing supply, non-supply and new supply options
• Estimation of supply shortages on a representative hourly basis using the information on resource constraints
• Effectively planning non-supply of electricity through appropriate power/energy rationing methods
• Estimation of the need for new capacity additions, both to bridge the existing gap and to take care of increases in future demand levels
• Optimal filling of gaps between demand and supply on a representative hourly basis through new supply of electricity
• Optimally arriving at a judicious mix of public and private capacity additions
• Studying the impacts of private capacity on the existing and new public sector capacity, and on the consumers
• Optimally verifying the feasibility of integrating captive generation with the total system
vi. The demand analysis using RLCs helped to bring out the differences in demand patterns with and without constraints, the impacts of constraints on preferred patterns of electricity consumption, and the success of non-supply options both in reducing demand levels and in greatly disturbing electricity usage patterns. Multifactor ANOVA results showed that the logically obtained factors accounted for maximum variations in demand at very high significance levels.
vii. A comparison of optimal (represented by optimal predicted RLCs) and actual (reflected by actual RLCs) practices, facilitated by the model, showed that the actual practice during constrained years was highly ad hoc and sub-optimal.
viii. The results of the scenario analyses showed that it is more economical to have some amount of planned rationing compared to having a more reliable system which does not allow non-supply of electricity.
ix. The scenarios which analysed the impacts of high guarantees offered to the private sector showed their negative impacts in terms of reduced utilization of both the existing and new public capacity.
x. Generation expansion plans till the year 2010-11 are developed using the results of various kinds of scenario analyses. Two groups of year-wise generation expansion plans are generated, one with only public sector capacity additions and the other with private sector participation.
xi. The impacts of privatization of capacity additions are studied from the point of view of the utility and consumers, in terms of the expected increase in the cost of purchase of electricity and tariffs.
xii. The analyses also provide insights into fixing the availability and plant load factors for the private capacity. Based on the analysis, the recommended range for the plant utilization factor is 72.88 - 80.57%.
We believe that the integrated approach presented and the results obtained in this thesis would help utilities (both suppliers and distributors of electricity) and governments in making rational choices in the context of resource constrained systems. The results reported here may also be used towards rationalization of Government policies vis-a-vis tariff structures in the supply of electricity, planning new generation capacity additions and effective rationing of electricity. It is also hoped that the fresh approach adopted in this thesis would attract further investigations in future research on resource constrained systems.
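As an illustration of the RLC idea only: the thesis derives nine RLCs from 365 daily load curves using multiple discriminant analysis, but a simpler clustering pass conveys the gist of grouping days into representative curves. The sketch below substitutes k-means for the thesis's method and uses invented four-hour "daily curves":

```python
# Illustrative sketch only: the thesis builds Representative Load Curves
# (RLCs) with multiple discriminant analysis; here a simple k-means pass
# groups toy daily load curves and averages each cluster into an "RLC".

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(curves, k, iters=20):
    # Deterministic initialization: pick curves spread across the list
    centers = [curves[(i * (len(curves) - 1)) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for c in curves:
            clusters[min(range(k), key=lambda i: dist2(c, centers[i]))].append(c)
        # Each new center is the hour-by-hour mean of its cluster (the "RLC")
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy "daily load curves" (4 sample hours each) in two obvious regimes:
low = [[10, 12, 11, 10], [11, 12, 12, 10], [10, 11, 12, 11]]
high = [[20, 24, 23, 21], [21, 23, 24, 20], [20, 22, 23, 22]]
rlcs, clusters = kmeans(low + high, k=2)
print(sorted(len(cl) for cl in clusters))  # [3, 3]: one RLC per regime
```

A real application would cluster 365 curves of 24 hourly values into nine RLCs and then, as in the thesis, test the clustering factors (season, day type, rationing regime) with a multifactor ANOVA.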
APA, Harvard, Vancouver, ISO, and other styles
43

Souza, Amós Magalhães de. "Produção e avaliação do desempenho de compósitos à base de madeira a partir de insumos alternativos." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18158/tde-27112017-102235/.

Full text
Abstract:
The use of products from renewable sources that are free of toxic substances is a global trend, as shown by the steadily increasing demand for wood-based products. According to Forest Products Statistics (2015), world production of reconstituted wood panels in 2014 was 388 million m³, an increase of 5.5% over the previous year and of 34% over 2010. For decades, however, the wood-panel industry has faced a major challenge regarding toxic emissions from conventional adhesives. Against this background, the present work tested the feasibility of producing particleboard and OSB panels with significantly reduced formaldehyde emission from Pinus sp. and Tectona grandis (teak) wood residues. It thereby sought to extend scientific knowledge on the use of the natural polymer polyhydroxybutyrate (PHB) and of epoxy-resin paint waste as alternative adhesives. The panels were manufactured with varying production parameters to find the best processing conditions: low, medium, and high density; adhesive content of 20, 30, and 40%; and teak-particle additions of 0, 25, 50, 75, and 100%. The physical and mechanical performance of the panels was evaluated according to standards ABNT NBR 14810 (2013) and ANSI A208.1 (2009). Analysis of variance (ANOVA) was performed to test the influence of the individual factors (density, adhesive percentage, and teak fraction) and of their interactions on the physical and mechanical properties of the panels. The results showed excellent physical and mechanical properties for the medium- and high-density particleboards bonded with paint waste, mainly at the 30 and 40% proportions.
In most cases the properties of these boards exceeded the requirements of the Brazilian and international standards. The feasibility of producing particleboard with at least one of the studied inputs was thus confirmed, along with its potential use for purposes compatible with products of this nature.
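The factor-influence test named in this abstract can be sketched as a minimal one-way ANOVA. The internal-bond strength values below are invented for illustration and are not taken from the thesis; the thesis's full design also tests factor interactions, which would require a factorial ANOVA.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    values = [x for g in groups for x in g]
    grand_mean = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical internal-bond strengths (MPa) for panels made with
# 20%, 30%, and 40% adhesive content:
f_stat = one_way_anova_f([0.42, 0.45, 0.40],
                         [0.55, 0.58, 0.53],
                         [0.60, 0.62, 0.59])
```

Comparing `f_stat` against the critical value of the F distribution with (k − 1, n − k) degrees of freedom then gives the significance decision for the adhesive-content factor.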
APA, Harvard, Vancouver, ISO, and other styles
44

Mankayi, Dolphia Thozama. "An investigation into the relationship between satisfaction with life and sense of coherence amongst the unemployed." University of the Western Cape, 1996. http://hdl.handle.net/11394/7861.

Full text
Abstract:
Magister Commercii (Industrial Psychology) - MCom(IPS)<br>The present study investigated the relationship between Sense of Coherence and Satisfaction With Life amongst the unemployed. It tested the following hypotheses: 1. People with a high Sense of Coherence tend to be satisfied with their lives in general. 2. Demographic variables such as age, gender, race, and level of education influence subjects' scores on the Sense of Coherence and Satisfaction With Life scales. 3. Length of unemployment has an impact on subjects' Sense of Coherence and Satisfaction With Life. Subjects were drawn from the Department of Manpower in the Western Cape region; data were obtained from a sample of 100 participants, 52 of whom were male. Subjects were asked to complete the Sense of Coherence and Satisfaction With Life scales. The statistical procedures used were multiple linear regression analysis, product-moment correlation coefficients, analysis of variance (ANOVA), and Cronbach's alpha for the various scales. Sense of Coherence was found to correlate significantly with Satisfaction With Life, supporting the first hypothesis and leading to the conclusion that a person with a strong Sense of Coherence tends to be more satisfied with his or her life, whereas a person with a weaker Sense of Coherence finds it difficult to make sense of his or her life. Most of the demographic variables did not reach statistical significance. The general trend in this sample was that younger people had more education and had been unemployed for fewer years, while older people had less education and had been unemployed for more years. It was concluded that formal and informal education systems will be necessary to equip both younger and older people with the experience and skills needed at work.
The study concludes with a discussion of the implications of the findings and suggestions for future research.
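Two of the procedures named in this abstract can be sketched briefly: Cronbach's alpha for scale reliability and a product-moment correlation between two scale totals. All scores below are synthetic and purely illustrative, not data from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic totals standing in for Sense of Coherence and Satisfaction
# With Life scores of six respondents (illustrative only):
soc = np.array([52, 61, 45, 70, 58, 49])
swl = np.array([18, 24, 15, 28, 22, 17])
r = np.corrcoef(soc, swl)[0, 1]   # Pearson product-moment correlation
```

For perfectly parallel items (every respondent giving the same score on each item) alpha equals 1, which is a handy sanity check on the implementation.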
APA, Harvard, Vancouver, ISO, and other styles
45

Úlehlová, Eva. "Návrh postupu kontroly vybraných součástí revolveru." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417743.

Full text
Abstract:
The goal of this master's thesis was to design an inspection procedure for the hammer and trigger of a specific revolver model. The thesis was developed in cooperation with the manufacturer of the revolvers. The theoretical part deals with the MSA methodology, which is used to assess the acceptability of measurement systems. The practical part describes the current measurement system and presents a gage repeatability and reproducibility (R&R) study. It was confirmed that the current measurement system requires improvement. Subsequently, coordinate systems were designed based on the functional features of the hammer and trigger, and automated optical measurements based on these coordinate systems were performed. The results of these measurements were again assessed with a gage R&R study, which confirmed the improved acceptability of the designed measurement systems. Based on these results, it is recommended to apply the suggested procedures in practice. The results and recommendations of this thesis can help develop metrology in the company and improve the existing measurement system.
APA, Harvard, Vancouver, ISO, and other styles
46

Partin, Matthew L. "The CLEM Model: Path Analysis of the Mediating Effects of Attitudes and Motivational Beliefs on the Relationship Between Perceived Learning Environment and Course Performance in an Undergraduate Nonmajor Biology Course." Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1213985302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ma, Shuhui. "A methodology to predict the effects of quench rates on mechanical properties of cast aluminum alloys." Link to electronic dissertation, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050106-174639/.

Full text
Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.<br>Keywords: Time-Temperature-Property curve, Jominy End Quench, ANOVA analysis, Quench Factor Analysis, Taguchi design, Polymer quench, Cast Al-Si-Mg alloys, Quenching, Heat treatment. Includes bibliographical references (p. 115-117).
APA, Harvard, Vancouver, ISO, and other styles
48

Strauss, Ashley J. "Distribution of and relationship between medically classified weight and self-perceived body size across sexual orientation: An Add Health analysis." Antioch University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=antioch147993895681102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Yulei. "Computer Experiments with Both Quantitative and Qualitative Inputs." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408042133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Aroge, Olatunde O. "Assessment Of Disruption Risk In Supply Chain The Case Of Nigeria’s Oil Industry." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/17396.

Full text
Abstract:
This study develops a systematic methodology to evaluate disruption risks in the supply chain of petroleum production. The methodology formalises and facilitates the systematic integration and implementation of various models, such as the analytic hierarchy process (AHP), partial least squares structural equation modelling (PLS-SEM), and various statistical tests, and is validated with the case of Nigeria's oil industry. The study revealed the need for a responsive approach to managing the geopolitical risk factors affecting the supply chain in the petroleum production industry; exploration and production risk and geopolitical risk were identified as concomitant risk factors that affect performance in Nigeria's oil industry. The findings show that behavioural-based mechanisms successfully predict the ability of the petroleum industry to manage supply chain risks. A significant implication is that the current theoretical debate on supply chain risk management frames agency theory as a governing mechanism for supply chain risk in the Nigerian oil industry. The results of the systematic approach provide insight and objective information for decision-making in resolving disruption risk in the petroleum supply chain in Nigeria. Furthermore, the study shows stakeholders how to develop supply chain risk management strategies for mitigating risk and building resilience in the supply chain of the Nigerian oil industry. The developed systematic method links supply chain risk management with performance measurement, giving stakeholders an effective way to plan their risk mitigation strategies and helping them to evaluate supply chain risk consistently and respond to disruptions.
This capability will allow efficient management of the supply chain and provide organisations with a quicker response to customer needs, continuity of supply, lower operating costs, and an improved return on investment in the Nigerian oil industry. The methodology therefore provides a new way of implementing good practice for managing disruption risk in supply chains, and the systematic approach offers a simple modelling process for disruption-risk evaluation for researchers and oil industry professionals. This approach constitutes a holistic procedure for monitoring and controlling disruption risk in supply chain practice in Nigeria.
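One step of the AHP component mentioned in this abstract can be sketched as follows: deriving priority weights for risk factors from a pairwise-comparison matrix via its principal eigenvector. The matrix entries and the factor names in the comments are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical pairwise comparisons of three supply chain risk factors
# (e.g. geopolitical, exploration-and-production, logistics) on Saaty's
# 1-9 scale; entry [i][j] states how much more important factor i is
# judged to be than factor j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()       # normalised priority vector

# Consistency index (lambda_max - n) / (n - 1): values near 0 indicate
# the pairwise judgements are close to mutually consistent.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
```

A low consistency index (conventionally, a consistency ratio below 0.1 after dividing by Saaty's random index) is what licenses using the resulting weights in the downstream risk evaluation.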
APA, Harvard, Vancouver, ISO, and other styles