Dissertations / Theses on the topic 'Factorial experiment designs'


Consult the top 50 dissertations / theses for your research on the topic 'Factorial experiment designs.'


1. Qin, Hong. "Construction of uniform designs and usefulness of uniformity in fractional factorial designs." HKBU Institutional Repository, 2002. http://repository.hkbu.edu.hk/etd_ra/456.
2. Ke, Xiao. "On lower bounds of mixture L₂-discrepancy, construction of uniform design and gamma representative points with applications in estimation and simulation." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/152.
Abstract:
Two topics related to experimental design are considered in this thesis. On the one hand, the uniform experimental design (UD), a major kind of space-filling design, is widely used in applications. The majority of UD tables with good uniformity are generated under the centered L₂-discrepancy (CD) and the wrap-around L₂-discrepancy (WD). Recently, the mixture L₂-discrepancy (MD) was proposed and shown to be more reasonable than CD and WD in terms of uniformity. In the first part of the thesis we review lower bounds for the MD of two-level designs from a different point of view and provide a new lower bound. Following the same idea, we obtain a lower bound for the MD of three-level designs. Moreover, we construct UDs under the MD measure by the threshold accepting (TA) algorithm, and finally we attach two new UD tables with good properties derived from TA under the MD measure. On the other hand, the problem of selecting a specific number of representative points (RPs) that retain as much information about a given distribution as possible has attracted attention. Previously, a method was given to select type-II representative points (RP-II) from the normal distribution; these point sets have good properties and minimize the information loss. Following a similar idea, Fu (1985) discussed RP-II for the gamma distribution. In the second part of the thesis, we improve the selection of gamma RP-II and provide further RP-II tables for a range of parameter values. In statistical simulation, we also evaluate the estimation performance of point sets resampled from gamma RP-II by comparing them in different situations.
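The mixture L₂-discrepancy referred to in this abstract has a published closed form (Zhou, Fang and Ning, 2013). The sketch below is not taken from the thesis; it is a direct Python transcription of that closed form, and the coefficients should be checked against the original paper before serious use.

```python
def mixture_discrepancy_sq(design):
    """Squared mixture L2-discrepancy (MD^2) of an n x d point set in [0, 1]^d.

    Transcribes the closed form attributed to Zhou, Fang and Ning (2013);
    the numeric coefficients are reproduced from memory of that paper.
    """
    n, d = len(design), len(design[0])
    term1 = (19.0 / 12.0) ** d
    # Single-sum term.
    term2 = 0.0
    for x in design:
        prod = 1.0
        for xj in x:
            a = abs(xj - 0.5)
            prod *= 5.0 / 3.0 - a / 4.0 - a * a / 4.0
        term2 += prod
    term2 *= 2.0 / n
    # Double-sum term over all ordered pairs of points.
    term3 = 0.0
    for x in design:
        for y in design:
            prod = 1.0
            for xj, yj in zip(x, y):
                ax, ay, axy = abs(xj - 0.5), abs(yj - 0.5), abs(xj - yj)
                prod *= (15.0 / 8.0 - ax / 4.0 - ay / 4.0
                         - 3.0 * axy / 4.0 + axy * axy / 2.0)
            term3 += prod
    term3 /= n * n
    return term1 - term2 + term3

# Sanity check: a single point at the centre of [0, 1] gives
# MD^2 = 19/12 - 2*(5/3) + 15/8 = 1/8 under this formula.
print(mixture_discrepancy_sq([[0.5]]))  # approximately 0.125
```

A smaller MD value indicates a more uniform point set, which is what uniform design construction algorithms minimize.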
3. Brien, Christopher J. "Factorial linear model analysis." Title page, table of contents and summary only, 1992. http://thesis.library.adelaide.edu.au/public/adt-SUA20010530.175833.
Abstract:
"February 1992" Bibliography: leaf 323-344. Electronic publication; Full text available in PDF format; abstract in HTML format. Develops a general strategy for factorial linear model analysis for experimental and observational studies, an iterative, four-stage, model comparison procedure. The approach is applicable to studies characterized as being structure-balanced, multitiered and based on Tjur structures unless the structure involves variation factors when it must be a regular Tjur structure. It covers a wide range of experiments including multiple-error, change-over, two-phase, superimposed and unbalanced experiments. Electronic reproduction.[Australia] :Australian Digital Theses Program,2001.
4. Ke, Xiao. "On the construction of uniform designs and the uniformity property of fractional factorial designs." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/785.
Abstract:
Uniform design has found successful applications in manufacturing, systems engineering, pharmaceutics and the natural sciences since it appeared in the 1980s. Research related to uniform design continues to grow, focusing mainly on the construction and theoretical properties of uniform designs. On one hand, new construction methods can help researchers search for uniform designs more efficiently and effectively. On the other hand, since uniformity has been accepted as an essential criterion for comparing fractional factorial designs, it is interesting to explore its relationship with other criteria, such as aberration, orthogonality and confounding. The first goal of this thesis is to propose new uniform design construction methods and recommend designs with good uniformity. A novel stochastic heuristic technique, the adjusted threshold accepting algorithm, is proposed for searching for uniform designs. This algorithm has successfully generated a number of uniform designs that outperform the existing uniform design tables on the website https://uic.edu.hk/~isci/UniformDesign/UD%20Tables.html. In addition, designs with good uniformity are recommended for screening either qualitative or quantitative factors via a comprehensive study of symmetric orthogonal designs with 27 runs, 3 levels and 13 factors. These designs are also outstanding under other traditional criteria. The second goal of this thesis is an in-depth study of the uniformity property of fractional factorial designs. Close connections between different criteria and lower bounds on the average uniformity are revealed, which can be used as benchmarks for selecting the best designs. Moreover, we find that non-isomorphic designs can have different combinatorial and geometric properties in their projected and level-permuted designs. Two new non-isomorphism detection methods are proposed and used to classify fractional factorial designs. The new methods have advantages over the existing ones in computational efficiency and classification capability. Finally, the relationship between uniformity and isomorphism of fractional factorial designs is discussed in detail. We find that isomorphic designs may have different geometric structures, and we propose a new isomorphism identification method that significantly reduces the computational complexity of the procedure. A new uniformity criterion, the uniformity pattern, is proposed to evaluate the overall uniformity performance of an isomorphic design set.
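The thesis's adjusted threshold accepting algorithm is not reproduced in the abstract. The following is only a generic threshold accepting sketch for searching U-type designs, using the centered L₂-discrepancy (Hickernell, 1998) as the objective, with an assumed linearly shrinking threshold schedule and within-column swap moves.

```python
import math
import random

def cd2(design):
    """Squared centered L2-discrepancy (Hickernell, 1998) on [0, 1]^d."""
    n, d = len(design), len(design[0])
    t1 = (13.0 / 12.0) ** d
    t2 = sum(
        math.prod(1 + abs(xj - 0.5) / 2 - abs(xj - 0.5) ** 2 / 2 for xj in x)
        for x in design
    ) * 2.0 / n
    t3 = sum(
        math.prod(1 + abs(xj - 0.5) / 2 + abs(yj - 0.5) / 2 - abs(xj - yj) / 2
                  for xj, yj in zip(x, y))
        for x in design for y in design
    ) / n ** 2
    return t1 - t2 + t3

def threshold_accepting(n, d, iters=2000, seed=1):
    """Search an n-run, d-factor U-type design (each column a permutation
    of the centred levels (2i+1)/2n) with small CD^2, via threshold
    accepting over random within-column swaps."""
    rng = random.Random(seed)
    levels = [(2 * i + 1) / (2 * n) for i in range(n)]
    cols = [rng.sample(levels, n) for _ in range(d)]
    design = [[cols[j][i] for j in range(d)] for i in range(n)]
    best = cur = cd2(design)
    best_design = [row[:] for row in design]
    for t in range(iters):
        thresh = 0.01 * (1 - t / iters)      # shrinking acceptance threshold
        j = rng.randrange(d)
        a, b = rng.sample(range(n), 2)
        design[a][j], design[b][j] = design[b][j], design[a][j]
        new = cd2(design)
        if new < cur + thresh:               # accept small deteriorations too
            cur = new
            if new < best:
                best, best_design = new, [row[:] for row in design]
        else:                                # undo the rejected swap
            design[a][j], design[b][j] = design[b][j], design[a][j]
    return best_design, best
```

Swapping two entries within a column preserves the U-type (balanced) structure, so every visited design is a valid candidate; only the objective value changes.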
5. Hilow, Hisham. "Economic expansible-contractible sequential factorial designs for exploratory experiments." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54426.
Abstract:
Sequential experimentation, especially for factorial treatment structures, becomes important when one or more of the following conditions exist: observations become available quickly, observations are costly to obtain, experimental results need to be evaluated quickly, adjustments in the experimental set-up may be desirable, or a quick screening of the importance of various factors is needed. The designs discussed in this study are suitable for these situations. Two approaches to sequential factorial experimentation are considered: one-run-at-a-time (ORAT) plans and one-block-at-a-time (OBAT) plans. For 2ⁿ experiments, saturated non-orthogonal 2ᵥⁿ fractions to be carried out as ORAT plans are reported. In such ORAT plans, only one factor level is changed between any two successive runs. Such plans are useful and economical for situations in which it is costly to change more than one factor level at a time. The estimable effects and the alias structure after each run are provided. Formulas for the estimates of main effects and two-factor interactions have been derived; such formulas can be used for assessing the significance of the estimates. For 3ᵐ and 2ⁿ3ᵐ experiments, Webb's (1965) saturated non-orthogonal expansible-contractible <0, 1, 2> - 2ᵥⁿ designs have been generalized, and new saturated non-orthogonal expansible-contractible 3ᵥᵐ and 2ⁿ3ᵥᵐ designs are reported. Based on these 2ᵥⁿ, 3ᵥᵐ and 2ⁿ3ᵥᵐ designs, we report new OBAT 2ᵥⁿ, 3ᵥᵐ and 2ⁿ3ᵥᵐ plans which eventually lead to the estimation of all main effects and all two-factor interactions. The OBAT 2ⁿ, 3ᵐ and 2ⁿ3ᵐ plans have been constructed according to two strategies: Strategy I OBAT plans are carried out in blocks of very small sizes (2 and 3) and factor effects are estimated one at a time, whereas Strategy II OBAT plans involve larger block sizes, where factors are assumed to fall into disjoint sets and each block investigates the effects of the factors of a particular set. Strategy I OBAT plans are appropriate when severe time trends in the response may be present. Formulas for estimates of main effects and two-factor interactions at the various stages of Strategy I OBAT 2ⁿ, 3ᵐ and 2ⁿ3ᵐ plans are reported.
Ph. D.
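The ORAT idea of changing exactly one factor level between successive runs can be illustrated, for a full 2ⁿ factorial rather than the saturated fractions of the thesis, by a reflected Gray-code run order:

```python
def gray_code_run_order(n):
    """Run order for a full 2^n factorial in which consecutive runs
    differ in exactly one factor level (coded -1/+1), obtained from
    the binary reflected Gray code."""
    runs = []
    for k in range(2 ** n):
        g = k ^ (k >> 1)  # k-th reflected Gray code word
        runs.append([1 if (g >> j) & 1 else -1 for j in range(n)])
    return runs

for run in gray_code_run_order(3):
    print(run)
```

Each step of this sequence resets a single factor, which is exactly the economy that ORAT plans exploit when simultaneous level changes are expensive.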
6. Ho, Wai Man. "Case studies in computer experiments, applications of uniform design and modern modeling techniques." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/290.
7. Weng, Lin Chen. "On the classification and selection of orthogonal designs." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/846.
Abstract:
Factorial design has played a prominent role in the field of experimental design because of its richness in both theory and application. It explores factorial effects through efficient and economical experimentation; orthogonal designs, uniform designs and some other factorial designs have been widely used in various scientific investigations. The main contribution of this thesis is recent advances in the classification and selection of orthogonal designs. Design isomorphism is essential to the classification, selection and construction of designs. It also covers various popular design criteria as necessary conditions; this connection has led to a rapid growth of research on novel approaches for either detecting non-isomorphism or identifying isomorphism. However, further classification of non-isomorphic designs has received little attention and hence remains an open question. This motivates us to propose the degree of isomorphism, a more general view of isomorphism for classifying non-isomorphic subclasses of orthogonal designs, and to develop a column-wise identification framework accordingly. Selecting designs in sequential experiments is another concern. As a well-recognized strategy for improving an initial design, fold-over techniques have been widely applied to construct combined designs with better properties in a certain sense. While each fold-over method has been comprehensively studied, no comparison among them has been made; that is the motivation behind our survey of the existing fold-over methods in terms of statistical performance and computational complexity. The thesis consists of five chapters and is organized as follows. In the opening chapter, the underlying statistical models in factorial design are presented. In particular, we introduce orthogonal design and uniform design, together with the commonly used criteria of aberration and uniformity. In Chapter 2, the motivation for and previous work on design isomorphism are reviewed. It attempts to explain the evolution of strategies from identification methods to detection methods, especially as the superior efficiency of the latter has been gradually appreciated by the statistical community. In Chapter 3, the concepts of the degree of isomorphism and pairwise distance are proposed. They allow us to establish a hierarchical clustering of non-isomorphic orthogonal designs. By applying the average linkage method, we present a new classification of L27(3¹³) with six different clusters. In Chapter 4, an efficient algorithm for measuring the degree of isomorphism is developed, and we further extend it to a general framework to accommodate different issues in design isomorphism, including the detection of non-isomorphic designs, the identification of isomorphic designs and the determination of the non-isomorphic subclass of unclassified designs. In Chapter 5, a comprehensive survey of the existing fold-over techniques is presented. It starts with the background of these methods and then explores the connection between initial designs and their combined designs in a general framework. The dictionary cross-entropy loss is introduced to simplify a class of criteria that follow the dictionary ordering from a pattern to a scalar; this allows the statistical performance to be compared more straightforwardly, with visualization.
8. Tang, Yu. "Combinatorial properties of uniform designs and their applications in the constructions of low-discrepancy designs." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/595.
9. Hockman, Kimberly Kearns. "A graphical comparison of designs for response optimization based on slope estimation." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54384.
Abstract:
The response surface problem is two-fold: to predict values of the response, and to optimize the response. Slope estimation criteria are well suited to the optimization problem. Response prediction capability has been assessed by plotting the average, maximum, and minimum prediction variances on the surface of spheres with radii ranging across the region of interest. Average and maximum prediction bias plots have recently been added to the spherical criteria; combined with the prediction variance, a graphical MSE criterion results. This research extends these ideas to the slope estimation objective. A direct relationship between precise slope estimation and the ability to pinpoint the location of the optimum is developed, resulting in a general slope variance measure related to E-optimality in slope estimation. A more specific slope variance measure is defined and analyzed for use in evaluating standard response surface (RS) designs, where slopes parallel to the factor axes are estimated with equal precision. Standard second-order RS designs are then studied in light of the distinction between the prediction and optimization goals. Designs which perform well for prediction of the response do not necessarily estimate the slope precisely. A spherical measure of bias in slope estimation is developed and used to measure slope bias due to model misspecification and due to the presence of outliers. A study of augmenting saturated orthogonal arrays of strength two to detect lack of fit is included as an application of a combined squared-bias and variance measure of MSE in slope. The robustness of the designs recommended for precise slope estimation to outliers and to missing observations is studied using the slope bias and general slope variance measures, respectively.
Ph. D.
10. Jo, Jinnam. "Construction and properties of Box-Behnken designs." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/37247.
Abstract:
Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2ᵏ full or 2ᵏ⁻¹ fractional factorials. In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second-order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2ᵏ full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λₘ are all greater than zero. In order to reduce the number of experimental points, the use of 2ᵏ⁻¹ fractional factorials instead of 2ᵏ full factorials is considered. Of particular interest and importance are separate considerations of fractions of resolutions III, IV, and V. The construction of Box-Behnken designs using such fractions is described, and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V fractions have the same properties as those using full factorials; resolution III and IV designs may lead to non-estimability of certain coefficients and to correlated estimators. The final topic concerns Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which there may exist a linear trend effect. For this situation, one wants to find run orders that yield a linear trend-free Box-Behnken design, so that the trend can be removed with a simple technique (analysis of variance) rather than a more complicated one (analysis of covariance). Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3, it may not always be possible to find linear trend-free Box-Behnken designs; however, for k ≥ 4 such designs can always be constructed.
Ph. D.
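For small numbers of factors, the classical Box-Behnken construction crosses every pair of factors with a 2² factorial while the remaining factors sit at their centre level, then appends centre runs. The sketch below covers only that all-pairs case (for larger k the pairs are replaced by the blocks of a BIBD or PBIBD, as the abstract describes); it is an illustration, not the dissertation's generalized formulation.

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """All-pairs Box-Behnken design for k factors, coded -1/0/+1.

    For every pair of factors, cross a 2^2 factorial while all other
    factors are held at 0; then append n_center centre runs.
    """
    runs = []
    for pair in combinations(range(k), 2):
        for signs in product((-1, 1), repeat=2):
            run = [0] * k
            run[pair[0]], run[pair[1]] = signs
            runs.append(run)
    runs += [[0] * k for _ in range(n_center)]
    return runs

# k = 3 gives the familiar 12 edge-midpoint runs plus centre points.
for run in box_behnken(3, n_center=1):
    print(run)
```

Note that no run ever combines extreme levels of three factors at once, which is one practical attraction of these designs.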
11. Kim, Sang Ik. "Contributions to experimental design for quality control." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53551.
Abstract:
A parameter design introduced by Taguchi provides a quality control method which can cost-effectively reduce the product variation due to various uncontrollable noise factors such as product deterioration, manufacturing imperfections, and the environmental conditions under which a product is actually used. This experimental design technique identifies the optimal setting of the control factors which is least sensitive to the noise factors. Taguchi's method utilizes orthogonal arrays, which allow the investigation of main effects only, under the assumption that interaction effects are negligible. In this dissertation new techniques are developed to investigate two-factor interactions for 2ᵗ and 3ᵗ factorial parameter designs. The major objective is to identify influential two-factor interactions and take them into account in properly assessing the optimal setting of the control factors. For 2ᵗ factorial parameter designs, we develop some new designs for the control factors by using a partially balanced array. These designs are characterized by a small number of runs and a balancedness property of the variance-covariance matrix of the estimates of main effects and two-factor interactions. Methods of analyzing the new designs are also developed. For 3ᵗ factorial parameter designs, a two-stage detection procedure is developed using a sequential method, in order to reduce the number of runs needed to detect influential two-factor interactions. An extension of the parameter design to several quality characteristics is also developed by devising suitable statistics to be analyzed, depending on whether a proper loss function can be specified.
Ph. D.
12. St. Omer, Ingrid L. J. "The pressure response of synthetic polycrystalline diamond films." Free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9737861.
13. Ba, Shan. "Multi-layer designs and composite Gaussian process models with engineering applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44751.
Abstract:
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
14. Khattak, Azizullah. "Design of balanced incomplete factorial experiments." Thesis, University of Leeds, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305636.
15. Kovaříková, Ludmila. "Statistická analýza výroby [Statistical Analysis of Production]." Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228542.
Abstract:
The aim of this Master's thesis is to apply statistical methods to a production process. The assignment is to describe and evaluate a pressing process that is part of posistor production. The first, theoretical part contains an introduction to mathematical statistics, the verification of assumptions about the data, and descriptions of regression analysis, analysis of variance, and design of experiments. The second, practical part focuses on designing, performing, and interpreting experiments. The thesis was developed according to the requirements of the company EPCOS s.r.o. Šumperk. The statistical program MINITAB Release 14, which is supported by the company, was used for all computations.
16. Matthews, Emily Sarah. "Design of factorial experiments in blocks and stages." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/384190/.
17. Lewsey, James Daniel. "Hypothesis testing in unbalanced experimental designs." Thesis, Glasgow Caledonian University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322213.
18. Bingham, Derek R. "Design and analysis of fractional factorial split-plot experiments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0022/NQ51843.pdf.
19. Su, Heng. "Some new ideas on fractional factorial design and computer experiment." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53505.
Abstract:
This thesis consists of two parts. The first part is on fractional factorial design, and the second part is on computer experiments. The first part has two chapters. In the first chapter, we use the concept of the conditional main effect and propose CME analysis to solve the problem of effect aliasing in two-level fractional factorial designs. In the second chapter, we study the conversion rates of a system of webpages with the proposed funnel testing method, using a directed graph to represent the system, a fractional factorial design to conduct the experiment, and a method to optimize the total conversion rate with respect to all the webpages in the system. The second part also has two chapters. In the third chapter, we use regression models to quantify the model-form uncertainties in the Perez model in building energy simulations. In the last chapter, we propose a new Gaussian process that can jointly model both point and integral responses.
20. Kessel, Lisa. "Regularities in the Augmentation of Fractional Factorial Designs." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/2993.
Abstract:
Two-level factorial experiments are widely used in experimental design because they are simple to construct and interpret while also being efficient. However, full factorial designs for many factors can quickly become inefficient, time consuming, or expensive and therefore fractional factorial designs are sometimes preferable since they provide information on effects of interest and can be performed in fewer experimental runs. The disadvantage of using these designs is that when using fewer experimental runs, information about effects of interest is sometimes lost. Although there are methods for selecting fractional designs so that the number of runs is minimized while the amount of information provided is maximized, sometimes the design must be augmented with a follow-up experiment to resolve ambiguities. Using a fractional factorial design augmented with an optimal follow-up design allows for many factors to be studied using only a small number of additional experimental runs, compared to the full factorial design, without a loss in the amount of information that can be gained about the effects of interest. This thesis looks at discovering regularities in the number of follow-up runs that are needed to estimate all aliased effects in the model of interest for 4-, 5-, 6-, and 7-factor resolution III and IV fractional factorial experiments. From this research it was determined that for all of the resolution IV designs, four or fewer (typically three) augmented runs would estimate all of the aliased effects in the model of interest. In comparison, all of the resolution III designs required seven or eight follow-up runs to estimate all of the aliased effects of interest. It was determined that D-optimal follow-up experiments were significantly better with respect to run size economy versus fold-over and semi-foldover designs for (i) resolution IV designs and (ii) designs with larger run sizes.
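Among the follow-up strategies compared in this abstract, the full fold-over is the simplest: every sign in the initial design is reversed and the mirror runs are appended. The sketch below uses a resolution III half fraction of 2³ with generator C = AB (an assumed example, not a design from the thesis), whose full fold-over recovers the complete 2³ factorial and thereby de-aliases the main effects.

```python
from itertools import product

def fractional_factorial_2_3_1():
    """Resolution III half fraction of 2^3 with generator C = AB
    (defining relation I = ABC)."""
    return [[a, b, a * b] for a, b in product((-1, 1), repeat=2)]

def full_foldover(design):
    """Augment a two-level design with its mirror image (all signs
    reversed) -- a standard follow-up for de-aliasing main effects."""
    return design + [[-x for x in run] for run in design]

half = fractional_factorial_2_3_1()
combined = full_foldover(half)
for run in combined:
    print(run)
```

Semifold-over and D-optimal augmentation, also discussed in the abstract, add fewer runs than this doubling, which is why the thesis compares them on run-size economy.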
21. Santos, Maria Izabel dos. "Identifying active factors by a fractioned factorial experimental design and simulation in road traffic accidents." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18143/tde-28092017-091715/.
Abstract:
Researchers around the world are constantly seeking a quick, inexpensive and easy-to-use way to understand road traffic deaths. This study proposes the use of multibody (MBS) simulation with a virtual driver, combined with fractional factorial experiments, to identify active factors in road traffic accidents. The objectives of this work were to: (i) use DOE to give a more structured direction to studies of road safety and (ii) investigate possible vehicle state variables for monitoring vehicle dynamic stability. The first experiment was a quarter-fraction design based on an accident database of a Brazilian federal highway. Seven factors were considered (curve radius, path profile, path condition, virtual driver skill, speed, period of the day and car load) and 3 replicates were performed per treatment. Speed and friction coefficient were defined randomly for each treatment, within the range defined for each level. Accidents were observed in 42 of the 96 events. Speed showed the highest influence on occurrence, followed by curve radius, period of the day and some second-order interactions. The second experiment was based on the results of the first: a half-fraction factorial design with five factors (curve radius, car load, virtual driver skill, period of the day and speed), with 14 replicates per treatment, in which speed was again defined randomly. Accidents were observed in 96 of the 224 events. Speed had the highest influence on the occurrence of accidents, followed by the period of the day, curve radius, virtual driver skill and second-order interactions. Speed is also identified by the World Health Organization as one of the key factors in the occurrence of accidents. The study indicates that a well-designed experiment with a representative vehicle model can give direction to further research. Finally, roll angle, yaw rate and lateral displacement of the car on the road are suggested as variables to monitor in simulation experiments to identify vehicle instability.
APA, Harvard, Vancouver, ISO, and other styles
22

Sabová, Iveta. "Plánovaný experiment." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231981.

Full text
Abstract:
This thesis deals with the possibility of applying the method of Design of Experiments (DoE) to specific data. In the first chapter of the theoretical part, this method is described in detail, and the basic principles and guidelines for designing an experiment are laid out. The next two chapters describe factorial designs and response surface designs; the latter include the central composite design and the Box-Behnken design. The following chapter contains the practical part, which focuses on modelling the firing range of a ball launched from a catapult using the above three types of experimental design. The models are analysed together with their various characteristics, and they are compared using prediction and confidence intervals and by response optimisation. The last part of the thesis comprises the overall evaluation.
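Of the design families this thesis compares, the central composite design has the most distinctive geometry and can be sketched generically. The axial distance alpha = sqrt(k) and the four centre points below are assumed choices, not necessarily those used in the thesis:

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """CCD for k factors: 2^k factorial corners, 2k axial ("star")
    points at distance alpha, and replicated centre points."""
    alpha = k ** 0.5 if alpha is None else alpha
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

ccd = central_composite(3)
print(len(ccd))  # 8 corners + 6 axial + 4 centre = 18 runs
```

The axial points give each factor five distinct levels, which is what lets a CCD fit the quadratic terms a plain two-level factorial cannot.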
APA, Harvard, Vancouver, ISO, and other styles
23

Holec, Tomáš. "Plánovaný experiment." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-254455.

Full text
Abstract:
In this thesis, the design of experiments is studied. First, the theoretical background in mathematical statistics necessary for understanding the topic is built up (chapter 2). The design of experiments is then presented in chapters 3 and 4. Chapter 3 is divided into several subchapters providing a brief history of the field as well as a thorough theoretical description (basic principles, steps for planning, etc.). Chapter 4 deals with particular types of experimental design (factorial experiments and response surface designs). Simple examples are provided to illustrate the theory in chapters 3 and 4. The last part of the thesis is strictly practical and focuses on applying the theory to particular data sets and evaluating the results (chapter 5).
APA, Harvard, Vancouver, ISO, and other styles
24

Sarmad, Majid. "Robust data analysis for factorial experimental designs : improved methods and software." Thesis, Durham University, 2006. http://etheses.dur.ac.uk/2432/.

Full text
Abstract:
Factorial experimental designs are a large family of experimental designs. Robust statistics has been a subject of considerable research in recent decades, so robust analysis of factorial designs is applicable to many real problems. Seheult and Tukey (2001) suggested a method of robust analysis of variance for a full factorial design without replication. Their method is generalised here to many other factorial designs, without the restriction of one observation per cell. Furthermore, a new algorithm to decompose data from a factorial design is introduced and programmed in the statistical computing package R. The whole procedure of robust data analysis is also programmed in R, and it is intended to submit the library to CRAN, the repository of R software. In the procedure of robust data analysis, a cut-off value is needed to detect possible outliers. A set of optimum cut-off values for univariate data and for some dimensions of two-way designs (complete and incomplete) has also been provided, using an improved design of simulation study.
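The two-way decomposition underlying this kind of robust ANOVA can be illustrated with Tukey's classical median polish, used here as a simplified stand-in for the generalised algorithm the thesis develops. Sweeping medians instead of means keeps a single outlying cell from contaminating the row and column effects, so it stands out in the residuals for the cut-off rule to flag:

```python
import statistics

def median_polish(table, n_iter=10):
    """Sweep row and column medians from a two-way table, returning the
    overall effect, row effects, column effects, and residuals."""
    rows, cols = len(table), len(table[0])
    res = [row[:] for row in table]
    overall = 0.0
    row_eff = [0.0] * rows
    col_eff = [0.0] * cols
    for _ in range(n_iter):
        for i in range(rows):
            m = statistics.median(res[i])
            row_eff[i] += m
            res[i] = [v - m for v in res[i]]
        m = statistics.median(row_eff)
        overall += m
        row_eff = [v - m for v in row_eff]
        for j in range(cols):
            m = statistics.median(res[i][j] for i in range(rows))
            col_eff[j] += m
            for i in range(rows):
                res[i][j] -= m
        m = statistics.median(col_eff)
        overall += m
        col_eff = [v - m for v in col_eff]
    return overall, row_eff, col_eff, res

# For a purely additive table the residuals vanish entirely.
overall, rows_, cols_, res = median_polish([[10, 11, 12], [15, 16, 17]])
print(overall, rows_, cols_)  # overall 13.5, rows [-2.5, 2.5], cols [-1.0, 0.0, 1.0]
```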
APA, Harvard, Vancouver, ISO, and other styles
25

Gardiner, Eric. "The design of non-orthogonal experiments with a factorial treatment structure." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Karlberg, Victor. "Dynamic analysis of high-rise timber buildings : A factorial experiment." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65559.

Full text
Abstract:
Today high-rise timber buildings are more popular than ever, and designers all over the world have discovered the beneficial material properties of timber. In the mid-1990s cross-laminated timber (CLT) was developed in Austria. CLT consists of laminated timber panels that are glued together to form a strong and flexible timber element. In recent years CLT has been on the rise, and today it is regarded as a good alternative to concrete and steel in the design of particularly tall buildings. Compared to concrete and steel, timber has lower mass and stiffness. A high-rise building made of timber is therefore more sensitive to vibration. The vibration of the building can cause the occupants discomfort, and it is thus important to thoroughly analyse the building's dynamic response to external excitation. The standard ISO 10137 provides guidelines for the assessment of the habitability of buildings with respect to wind-induced vibration. The comfort criterion herein is based on the first natural frequency and the acceleration of the building, along with human perception of vibration. The aim of this thesis is to identify the important structural properties affecting a dynamic analysis of a high-rise timber building. An important consequence of this study is hopefully a better understanding of the interactions between the structural properties in question. To investigate these properties and any potential interactions, a so-called factorial experiment is performed. A factorial experiment is an experiment where all factors are varied together, instead of one at a time, which makes it possible to study the effects of the factors as well as any interactions between them. The factors are varied between two levels, that is, a low level and a high level, and the design includes all combinations of the levels of the factors. The experiment is performed using the software FEM-Design, a modelling tool for finite element analysis.
A fictitious building is modelled using CLT as the structural system. The modelling and the subsequent dynamic analysis are repeated according to the design of the factorial experiment. The experiment is further analysed using statistical methods and validated according to ISO 10137 in order to study performance and patterns across the different models. The statistical analysis of the experiment shows that the height of the building, the thickness of the walls and the addition of mass are important in a dynamic analysis. It also shows that interaction is present between the height of the building and the thickness of the walls, as well as between the height of the building and the addition of mass. Most of the models of the building do not satisfy the comfort criteria according to ISO 10137. However, the study still shows patterns that provide useful information about the dynamic properties of the building. Lastly, based on the natural frequency of the building, this study recognises the stiffness as more relevant than the mass for a building with CLT as the structural system and up to 16 floors in height.
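The effect and interaction estimates at the heart of such a two-level factorial analysis can be sketched as follows. The coded response below is invented purely for illustration, with the three factors loosely standing in for building height, wall thickness and added mass:

```python
from itertools import product

def estimate_effect(design, y, factors):
    """Main effect (one index) or interaction (several indices) in a
    coded two-level design: mean response at +1 minus mean at -1."""
    total = 0.0
    for run, resp in zip(design, y):
        sign = 1
        for f in factors:
            sign *= run[f]
        total += sign * resp
    return 2.0 * total / len(design)

design = [list(p) for p in product([-1, 1], repeat=3)]
# invented response: a strong "height" effect plus a height x thickness
# interaction, echoing the pattern of results reported above
y = [5 + 3 * h + 1 * t + 0.5 * m + 2 * h * t for h, t, m in design]
print(estimate_effect(design, y, [0]))     # 6.0 (twice the coefficient 3)
print(estimate_effect(design, y, [0, 1]))  # 4.0 (twice the coefficient 2)
```

Because the coded columns are mutually orthogonal over the full 2^3 design, each effect is estimated independently of the others.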
APA, Harvard, Vancouver, ISO, and other styles
27

Schiffl, Katharina [Verfasser]. "Optimal designs for two-color microarray experiments in multi-factorial models / Katharina Schiffl." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2012. http://d-nb.info/1020253266/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Katsaounis, Parthena I. "Equivalence of symmetric factorial designs and characterization and ranking of two-level Split-lot designs." The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1164176825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Adiga, Nagesh. "Contributions to variable selection for mean modeling and variance modeling in computer experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43592.

Full text
Abstract:
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments. In the first chapter of my thesis, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received considerable attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are conducted first at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of variables, one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly. In this report, we analyze each stage of the method mathematically. Each stage is formulated as a hypothesis test, and its performance expressed in terms of the model parameters. The performance of the VS technique is expressed as a function of the performances in each stage. On this basis, its performance can be compared with that of traditional techniques. The second and third chapters of my thesis deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models might be used to explore process behavior when physical experiments (e.g. fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique to improve process repeatability. The absence of replicates in computer experiments (e.g. space-filling designs (SFD)) is a challenge in locating the RPD solution. Recently, there have been studies (e.g. Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010 and 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can be applied in computer simulations. The approaches stated above rely heavily on the estimated mean model because they obtain expressions for variance directly from mean models or use them to generate replicates. Variance modeling based on some kind of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures to decide groups of pseudo-replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated response on performance is discussed.
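The transmitted variance idea referenced above (Shoemaker and Tsui, 1993) can be sketched with first-order noise propagation; the mean model in the example is invented for illustration only:

```python
def transmitted_variance(grad_noise, noise_sd):
    """First-order approximation: Var(y) ~ sum((df/dn_i * sd_i)^2),
    where df/dn_i are derivatives of the mean model w.r.t. noise factors."""
    return sum((g * s) ** 2 for g, s in zip(grad_noise, noise_sd))

# invented mean model y = 3 + 2x + (1 + 0.5x) n with unit-sd noise n:
# df/dn = 1 + 0.5x, so the transmitted variance depends on the control
# factor x, and RPD picks the x that minimises it.
for x in (-1.0, 0.0, 1.0):
    print(x, transmitted_variance([1 + 0.5 * x], [1.0]))  # 0.25, 1.0, 2.25
```

This is exactly why the approaches above "rely heavily on the estimated mean model": the gradients come from it, so an error in the mean fit propagates directly into the variance estimate.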
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Meily. "Construction of designs in the presence of polynomial trends for varietal or factorial experiments /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu14875987480194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Newton, Wesley E. "Data Analysis Using Experimental Design Model Factorial Analysis of Variance/Covariance (DMAOVC.BAS)." DigitalCommons@USU, 1985. https://digitalcommons.usu.edu/etd/6378.

Full text
Abstract:
DMAOVC.BAS is a computer program, written in the compiler version of Microsoft BASIC, which performs factorial analysis of variance/covariance with expected mean squares. The program accommodates factorial and other hierarchical experimental designs with balanced sets of data, and is written for use on most modest-sized microprocessors for which the compiler is available. The program is driven by a parameter file consisting of the response variable structure, the experimental design model expressed in a structure similar to that seen in most textbooks, information concerning the factors (i.e. fixed or random, and the number of levels), and the information necessary to perform covariance analysis. The results of the analysis are written to separate files in a format that can be used for reporting purposes and for further computations if needed.
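The balanced two-factor analysis of variance such a program computes can be sketched in a few lines (pure Python here, rather than the original compiled BASIC; a minimal illustration, not the program's actual algorithm):

```python
def two_way_ss(data):
    """Sums of squares for a balanced two-factor layout with replicates.
    data[a][b] is the list of observations in cell (a, b)."""
    A, B = len(data), len(data[0])
    n = len(data[0][0])
    values = [v for row in data for cell in row for v in cell]
    grand = sum(values) / len(values)
    a_means = [sum(v for cell in row for v in cell) / (B * n) for row in data]
    b_means = [sum(v for a in range(A) for v in data[a][b]) / (A * n)
               for b in range(B)]
    ss_a = B * n * sum((m - grand) ** 2 for m in a_means)
    ss_b = A * n * sum((m - grand) ** 2 for m in b_means)
    ss_total = sum((v - grand) ** 2 for v in values)
    ss_within = sum((v - sum(cell) / n) ** 2
                    for row in data for cell in row for v in cell)
    ss_ab = ss_total - ss_a - ss_b - ss_within  # interaction by subtraction
    return ss_a, ss_b, ss_ab, ss_within

# purely additive 2 x 2 data with 2 replicates per cell
print(two_way_ss([[[1, 1], [2, 2]], [[3, 3], [4, 4]]]))  # (8.0, 2.0, 0.0, 0.0)
```

Dividing each sum of squares by its degrees of freedom gives the mean squares that are then compared against the expected mean squares for fixed or random factors.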
APA, Harvard, Vancouver, ISO, and other styles
32

Mays, Darcy P. "Design and analysis for a two level factorial experiment in the presence of dispersion effects." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/39723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

ALMEIDA, ALEXANDRE DE CASTRO. "BLACK OIL RESERVOIRS SIMULATOR PROXY USING COMPUTATIONAL INTELLIGENCE AND FRACTIONAL FACTORIAL DESIGN OF EXPERIMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13210@1.

Full text
Abstract:
In many stages of the work chain of the Oil & Gas industry, petroleum engineering activities demand processes that involve optimization. More specifically, in reservoir management, methodologies for decision making on the use of intelligent wells involve optimization processes. In those processes the goal is usually to maximize the NPV (Net Present Value), which is calculated from the oil, gas and water production curves supplied by a reservoir simulator. Such simulations carry a high computational cost, which in many cases makes the optimization process unfeasible. In this study, computational intelligence techniques - artificial neural network and neuro-fuzzy models - are applied to build proxies for reservoir simulators, aiming to reduce the computational cost of a decision support system for the use, or not, of intelligent wells in oil reservoirs. To reduce the number of samples needed to build the models, a fractional factorial design of experiments was used. The proxies were tested on two oil reservoirs: a synthetic one, very sensitive to changes in the control of intelligent wells, and another with real characteristics. Replacing the simulator with the reservoir proxy in the optimization process gave good results in terms of performance, with low errors and a significantly reduced computational cost. Moreover, the tests demonstrated that total replacement of the simulator by the proxy is an interesting strategy for using the optimization system, providing the specialist with a fast decision support tool.
APA, Harvard, Vancouver, ISO, and other styles
34

Voulgaris, Dimitrios. "Evaluation of Small Molecules for Neuroectoderm differentiation & patterning using Factorial Experimental Design." Thesis, Chalmers Tekniska Högskola, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273264.

Full text
Abstract:
Screening for therapeutic compounds and treatments for diseases of the brain encompasses not only the successful generation of iPS-derived homogeneous neural stem cell populations but also the capacity of the differentiation protocol to derive region-specific cells on demand. Noggin, a human recombinant protein, has been used extensively in neural induction protocols, but its high production costs and batch-to-batch variation have shifted the focus to small molecules that can substitute for noggin. Consequently, the aim of this study was to optimise neuroepithelial stem cell generation in a cost-efficient fashion and to evaluate the impact that patterning factors (i.e. small molecules or proteins that enhance the emergence of type-specific neuronal populations) have on the regionality of the neural stem cell population. The findings of this study suggest that DMH1 is indeed a small molecule that can replace noggin in neural induction protocols, as previously documented in the literature; DMH1 also appears to have a ventralising effect on the generated neural population.


APA, Harvard, Vancouver, ISO, and other styles
35

Eloseily, Ayman. "A comparison of three experimental designs for tolerance allocation." Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176834241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Pais, Mônica Sakuray. "Estudo da influência dos parâmetros de algoritmos paralelos da computação evolutiva no seu desempenho em plataformas multicore." Universidade Federal de Uberlândia, 2014. https://repositorio.ufu.br/handle/123456789/14340.

Full text
Abstract:
Parallel computing is a powerful way to reduce computation time and to improve the quality of the solutions of evolutionary algorithms (EAs). At first, parallel evolutionary algorithms (PEAs) ran on very expensive and not easily available parallel machines. As multicore processors become ubiquitous, the improved performance available to parallel programs is a great motivation for computationally demanding EAs to be turned into parallel programs that exploit the power of multicores. Parallel implementation brings more factors that influence performance, and consequently adds more complexity to the evaluation of PEAs. Statistics can help in this task and guarantee significant and correct conclusions with a minimum of tests, provided that the correct design of experiments is applied. This work presents a methodology that guarantees the correct estimation of speedups and applies a factorial design to the analysis of PEA performance. As a case study, a genetic algorithm named AGP-I was parallelised according to the island model and executed on platforms with different multicore processors to solve two benchmark functions; the methodology was applied to determine the influence of migration-related parameters on its performance.
Doctor of Science
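The central quantity of the methodology, the speedup estimate, reduces to a ratio of mean execution times over repeated runs; the timing values below are invented for illustration:

```python
import statistics

def speedup(serial_times, parallel_times):
    """Speedup as the ratio of mean execution times over repeated runs."""
    return statistics.mean(serial_times) / statistics.mean(parallel_times)

# invented timings (seconds) for one treatment of the factorial design
print(speedup([10.2, 10.0, 9.8], [2.6, 2.4, 2.5]))  # ≈ 4.0
```

Repeating the timings is what makes the factorial analysis meaningful: the run-to-run variation provides the error term against which migration-related effects are judged.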
APA, Harvard, Vancouver, ISO, and other styles
37

Martins, Sarah Moherdaui. "Desenvolvimento e otimização de péletes de liberação bifásica mediante delineamento experimental." Universidade Estadual do Oeste do Parana, 2015. http://tede.unioeste.br:8080/tede/handle/tede/4.

Full text
Abstract:
In recent years, the pharmaceutical industry's interest in new drug delivery systems has been growing, especially aiming at optimising therapy and reducing the side effects caused by conventional treatments. Multiparticulate systems, besides their technological and biopharmaceutical advantages over monolithic systems, allow different drug release patterns to be obtained, such as the biphasic system, which delivers the drug to the bloodstream in two distinct fractions and is ideal for the treatment of circadian diseases. This study therefore aimed to obtain a multiparticulate biphasic-release system lasting 24 hours, using a combination of the polymeric materials hydroxypropyl methylcellulose and ethyl cellulose, together with a 2² full factorial design to optimise the development stage. Furthermore, an analytical method based on UV spectroscopy was developed and validated to quantify the model drug, propranolol hydrochloride (PROP), in the assay and dissolution tests of this dosage form. Using sugar-sphere coating technology, the proposed system was obtained with a reduced number of experiments. The pellets produced and used in the biphasic formulations showed mechanical characteristics within the expected quality parameters, demonstrating that the technique is robust and can be applied on an industrial scale. The analytical method proposed for quantifying the drug proved to be linear, precise, accurate, robust against variations in wavelength, solvent brand and sonication over a concentration range of 0.80 to 96 μg mL⁻¹, and stable under the experimental conditions analysed, yielding a method capable of generating highly reliable results and therefore fit for use in routine quality-control work.
APA, Harvard, Vancouver, ISO, and other styles
38

Sundaram, Gurunathan. "Comparison of Friction measured in Linear and Rotational motion." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/theses/2633.

Full text
Abstract:
In the past few decades, studies of friction at the brake pad-rotor interface have gained high importance in the automotive industry. The goal of these studies has been to improve designs so as to maximize the contact area and performance of brakes. In these studies, the friction coefficient has always been assumed to be the same for linear and rotational motion. In our study, we show that the linear and rotational friction processes have different friction coefficients. We use semi-metallic and ceramic brake pad materials, reduced into brake samples using physical scaling laws. The samples were mounted on a Universal Mechanical Tester and subjected to linear and rotational friction tests against a pearlitic gray cast iron rotor. The results show that the friction coefficient in linear motion is consistently higher than in rotational motion: the linear friction coefficient was found to be 43% higher on average than the rotational friction coefficient for both materials, tested at 1 MPa and 10 mm/s. These results will help industry gain a better fundamental understanding of the friction coefficients of rotor-brake contact interfaces.
APA, Harvard, Vancouver, ISO, and other styles
39

Skagersten, Jon. "A MASTER THESIS ON THE PARAMETRIC WELD-DESIGN EVALUATION IN CRANE LOADER BODY USING NOTCH STRESS ANALYSIS." Thesis, KTH, Lättkonstruktioner, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-39491.

Full text
Abstract:
This thesis was conducted at Cargotec Sweden AB as a case study on the loader body of the HIAB XS 144 crane. The loader body is the innermost part of the crane's arm system, and its fatigue life is critical to the operational life of the whole crane. Welding is the main joining process in Cargotec's cranes and is often a limiting factor when it comes to fatigue life. The weld joining the column to the loader body carries the whole crane moment. Previous testing has shown that this weld often limits the fatigue life of the loader body; it has thus been evaluated here. Weld fatigue life is affected by a large number of parameters. To pinpoint the parameters mainly affecting weld fatigue life and to understand their influence, the calculations have been organised using factorial design. The evaluation has been carried out using 3D finite element calculations with sub-modelling to compute local stresses in the weld notches. Different parameters have been evaluated based on their influence on the local notch stresses, and regression equations have been fitted to estimate stresses from the evaluated parameters. The effective notch method has been used to estimate weld fatigue life. The evaluation has shown that a butt-weld design with root support, welded only from the outside of the loader body, as used on some other crane models, could not provide a robust design for the XS 144 crane. The evaluation also pointed out several critical parameters that need to be considered when using such a design. Apart from the local weld geometry, the plate thickness, plate angle, and the material offset and thickness in the cast column mainly affected the weld notch stresses.
APA, Harvard, Vancouver, ISO, and other styles
40

Chantarat, Navara. "Modern design of experiments methods for screening and experimentations with mixture and qualitative variables." Columbus, OH : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1064198056.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xiv, 119 p.: ill. (some col.). Includes abstract and vita. Advisor: Theodore T. Allen, Dept. of Industrial and Systems Engineering. Includes bibliographical references (p. 111-119).
APA, Harvard, Vancouver, ISO, and other styles
41

Holmes, William Joseph. "Improved production of industrially relevant recombinant proteins : application of multi-factorial design of experiments and scalable process modelling." Thesis, Aston University, 2009. http://publications.aston.ac.uk/15324/.

Full text
Abstract:
The work described in this thesis focuses on the use of a design-of-experiments approach in a multi-well mini-bioreactor to enable the rapid establishment of high-yielding production-phase conditions in yeast, an increasingly popular host system in both academic and industrial laboratories. Using green fluorescent protein secreted from the yeast Pichia pastoris, a scalable predictive model of protein yield per cell was derived from 13 sets of conditions, each with three factors (temperature, pH and dissolved oxygen) at three levels, and was directly transferable to a 7 L bioreactor. This was in clear contrast to the situation in shake flasks, where the process parameters cannot be tightly controlled. By further optimising both the accumulation of cell density in batch and the fed-batch induction regime, additional yield improvement was found to be additive to the per-cell yield of the model. A separate study also demonstrated that improving biomass improved product yield in a second yeast species, Saccharomyces cerevisiae. Investigations of cell-wall hydrophobicity in high-cell-density P. pastoris cultures indicated that the cell-wall hydrophobin (protein) composition changes with growth phase, the wall becoming more hydrophobic in log growth than in the lag or stationary phases, possibly due to an increased occurrence of proteins associated with cell division. Finally, the modelling approach was validated in mammalian cells, showing its flexibility and robustness. In summary, the strategy presented in this thesis has the benefit of reducing process-development time in recombinant protein production, directly from bench to bioreactor.
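The 13 condition sets with three factors at three levels described above match the shape of a three-factor Box-Behnken design (12 edge midpoints plus a centre point), though whether the thesis used exactly this design is not stated. A generator sketch in coded units, purely as an illustration of that run count:

```python
from itertools import combinations, product

def box_behnken_3():
    """Three-factor Box-Behnken design in coded units: for each pair of
    factors, all (+1/-1, +1/-1) combinations with the third factor held
    at 0, plus one centre point: 12 + 1 = 13 runs."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a, b in product((-1, 1), repeat=2):
            pt = [0, 0, 0]
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs.append([0, 0, 0])
    return runs

design = box_behnken_3()   # e.g. coded (temperature, pH, dissolved oxygen)
```

Every non-centre run sets exactly two factors to their extremes, which is what keeps the design small while still supporting a quadratic model.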
APA, Harvard, Vancouver, ISO, and other styles
42

Mane, Poorna. "Experimental Design and Analysis of Piezoelectric Synthetic Jets in Quiescent Air." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/768.

Full text
Abstract:
Flow control can save millions of dollars in fuel costs each year by making an aircraft more efficient. Synthetic jets, a device for active flow control, operate by introducing small amounts of energy locally to achieve non-local changes in the flow field with large performance gains. These devices consist of a cavity divided by an oscillating diaphragm into active and passive sides. The active side has a small opening where a jet is formed, whereas the passive side does not directly participate in the fluidic jet. Research has shown that synthetic jet behavior depends on the diaphragm and the cavity design, hence the focus of this work. The performance of the synthetic jet is studied under various factors related to the diaphragm and the cavity geometry. Four diaphragms manufactured from piezoelectric composites were selected for this study: Bimorph, Thunder®, Lipca and RFD. The factors considered are the driving signal, voltage, frequency, cavity height, orifice size, and passive cavity pressure. Using the average maximum jet velocity as the response variable, these factors were studied individually for each actuator, and statistical analysis tools were used to select the factors relevant to the response variable. For all diaphragms, the driving signal was found to be the most important factor, with the sawtooth signal producing significantly higher velocities than the sine signal. Cavity dimensions also proved to be relevant factors when designing a synthetic jet actuator: cavities with a smaller orifice produced lower velocities than those with larger orifices, and cavities with smaller volumes followed the same trend.
Although there exists a relationship between cavity height and orifice size, the orifice size appears to be the dominant factor. The driving frequency of the diaphragm was the only factor common to all diaphragms studied that was not statistically significant, having a small effect on jet velocity; however, together with the waveform it had a combined effect on jet velocity for all actuators. With the sawtooth signal, the velocity remained constant above a particular low frequency, indicating that the synthetic jet cavity could be saturated and the flow choked. No such saturation point was reached with the sine signal for the frequencies tested. Passive cavity pressure had a positive effect on the jet velocity up to a particular pressure characteristic of the diaphragm, beyond which it had an adverse effect. For Thunder® and Lipca, the passive cavity pressure producing a peak was measured at approximately 20 and 18 kPa respectively, independent of the waveform utilized; for the Bimorph and RFD this effect was not observed. Linear models were developed for all actuators with the factors found to be statistically significant; these models should lead to further design improvements of synthetic jets.
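The screening logic in this abstract, comparing average jet velocity at each factor level and looking at combined effects, reduces to simple contrasts in a two-level factorial. A minimal sketch with made-up velocities, assuming coded -1/+1 levels for waveform (sine/sawtooth) and orifice size:

```python
import numpy as np

# Hypothetical 2^2 factorial on waveform (-1 sine, +1 sawtooth) and
# orifice size (-1 small, +1 large); response = avg. max jet velocity (m/s).
runs = np.array([
    # waveform, orifice, velocity
    [-1, -1, 18.0],
    [ 1, -1, 29.0],
    [-1,  1, 22.0],
    [ 1,  1, 38.0],
])
wave, orif, v = runs[:, 0], runs[:, 1], runs[:, 2]

def main_effect(factor, y):
    """Average response at the high level minus at the low level."""
    return y[factor == 1].mean() - y[factor == -1].mean()

wave_effect = main_effect(wave, v)          # effect of sawtooth vs sine
orif_effect = main_effect(orif, v)          # effect of large vs small orifice
interaction = main_effect(wave * orif, v)   # waveform x orifice interaction
```

The interaction contrast is just the main effect of the element-wise product column, which is what "combined effect" means in a two-level design.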
APA, Harvard, Vancouver, ISO, and other styles
43

Nguyen, Cuong Q. "A design of experiments study of procedure for assembling bascule bridge fulcrum." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lima, Luis Gustavo Guedes Bessa. "Planejamento de experimentos bayesianos: aplicações em experimentos na presença de tendências lineares." Universidade Federal de São Carlos, 2007. https://repositorio.ufscar.br/handle/ufscar/4502.

Full text
Abstract:
Made available in DSpace on 2016-06-02T20:05:58Z (GMT). No. of bitstreams: 1 DissLGGBL.pdf: 694668 bytes, checksum: fbe25c1e4093e12425b0a024c8a95456 (MD5) Previous issue date: 2007-01-11
Financiadora de Estudos e Projetos
We present a general introduction to the construction of experimental designs, especially the general factorial design and the 2k factorial design, and some Bayesian criteria for constructing experimental designs. In practice, the researcher can usually draw on specialists' prior knowledge of the quantities to be estimated from an experiment. The use of Bayesian methods can lead to better results at lower costs. Several Bayesian criteria introduced in the literature are presented, with some applications to illustrate the proposed methodology. One of the main applications in experimental design construction involves the existence of linear trends, with the objective of identifying the best possible run sequence, especially for factorial designs with eight runs. In this dissertation, we introduce some basic concepts in design of experiments and the use of the Bayesian approach to obtain more efficient and less costly experiments. The main goal of the work is to consider a special case of great importance in applied industrial work: the presence of a linear trend. For this case, we present a comparative study of experimental designs under the classical and Bayesian approaches.
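The trend question in this abstract, which run sequence of an eight-run factorial is least biased by a linear drift, can be checked numerically: the inner product of each factor column with a centred linear time trend measures how much that main effect is confounded with the drift. A small sketch; the two run orders are illustrative examples, not those studied in the thesis:

```python
import numpy as np
from itertools import product

# Standard-order 2^3 design: columns A, B, C at coded levels -1/+1.
design = np.array(list(product((-1, 1), repeat=3)))  # 8 runs x 3 factors

def trend_bias(run_order):
    """Inner products of each factor column with a centred linear time
    trend; zero means that main effect is unbiased by a linear drift."""
    t = np.arange(8) - 3.5          # centred trend over runs 0..7
    D = design[run_order]           # rows in the order actually run
    return D.T @ t

standard = np.arange(8)
# An alternative sequence; a trend-robust order drives these inner
# products to zero for the effects of interest.
alternative = np.array([0, 7, 3, 4, 5, 2, 6, 1])

bias_std = trend_bias(standard)
bias_alt = trend_bias(alternative)
```

Run in standard order, all three main effects pick up drift bias; the alternative sequence cancels the linear trend for every main effect.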
APA, Harvard, Vancouver, ISO, and other styles
45

Santos, Demetrio Jackson dos. "Estudo experimental da resistência mecânica de junções adesivas." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3151/tde-10012008-104134/.

Full text
Abstract:
The aim of this work is to study the mechanical strength of adhesively bonded joints. The influence of surface and cure conditions on the strength of joints bonded with an acrylic adhesive was quantified through shear tests of single-lap joints, with the results processed by a 2k factorial design. Single-lap joints were also used in tests that made it possible to analyze another factor, the influence of the overlap length on the joint's mechanical strength, clarifying a contradiction found in different publications. A modified Arcan device was used, with specific specimens, to analyze the behavior of the joints when subjected to combined loads at different displacement rates. This study provides important information that can be applied to the design of adhesively bonded joints, helping to determine the best working condition for these joints.
APA, Harvard, Vancouver, ISO, and other styles
46

Olivo, Andréia de Menezes. "Estudo da eficiência do tratamento biológico de efluentes através da otimização de parâmetros físico-químicos auxiliados pelo planejamento fatorial de experimentos." Universidade do Oeste Paulista, 2013. http://bdtd.unoeste.br:8080/tede/handle/tede/333.

Full text
Abstract:
Made available in DSpace on 2016-01-26T18:56:02Z (GMT). No. of bitstreams: 1 Andreia Olivo.pdf: 894633 bytes, checksum: 2ac23611f96154423617674d30caf247 (MD5) Previous issue date: 2013-11-05
Industrial effluent treatment aims to reduce the organic material to an appropriate level so that the effluent can be returned to the environment without damage, or reused in the process itself. The slaughterhouse industry is characterized by consuming large amounts of water in its production processes, producing wastewater with high concentrations of organic material, verified by high levels of Biochemical Oxygen Demand (BOD5) and Chemical Oxygen Demand (COD). Increasingly, these companies search for new techniques that are less harmful to the environment in order to eliminate or minimize the impacts. Thus, considering the importance of the slaughterhouse industry to the economy of the Presidente Prudente-SP region and the environmental impact of this activity, the present study was carried out to optimize the treatment of the wastewater produced by these industries. The nutrient concentration, pH and oxygen concentration were systematically varied to obtain greater efficiency in the biological wastewater treatment system, employing a factorial design study. The wastewater treatment was carried out at bench scale in an aerobic biological reactor. Analyses were performed to characterize the effluent, and the reactor was monitored to obtain kinetic profiles; the COD and BOD data after treatment were used as responses in the experimental design. An average COD removal of 79.3% and an average BOD5 removal of 76.7% were verified. All experiments followed first-order kinetics, and the maximum removal rate reached 99%. This study demonstrated that the correct variation of the nutrient concentration, pH and oxygen concentration values can optimize organic matter removal.
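The first-order kinetics reported in this abstract corresponds to C(t) = C0*exp(-k*t), so k can be estimated from a linear fit of ln(C0/C) against time, and removal efficiency follows directly. A minimal sketch with illustrative COD values, not the thesis data:

```python
import math

# Illustrative COD decay (mg/L) sampled over time (h); not the thesis data.
t   = [0, 2, 4, 6, 8]
cod = [1000.0, 670.0, 450.0, 300.0, 202.0]

# Least-squares slope of ln(C0/C) vs t through the origin gives k (1/h).
y = [math.log(cod[0] / c) for c in cod]
k = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

removal = 100.0 * (cod[0] - cod[-1]) / cod[0]   # % COD removal over the run
```

Fitting on the log scale is what makes the first-order assumption testable: if the points ln(C0/C) do not fall on a straight line through the origin, the kinetics are not first order.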
APA, Harvard, Vancouver, ISO, and other styles
47

Hrabec, Pavel. "Teoretické vlastnosti a aplikace pokročilých modelů plánovaného experimentu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-410313.

Full text
Abstract:
The methodology of the design of experiments has become an integral part of the optimisation of manufacturing processes in recent decades. Problems regarding designs of experiments remain topical, especially because of the variety of approaches to collecting and evaluating data. Scientists in different research and development areas often do not take into account possible shortcomings, or even essential assumptions, of the selected design and/or its evaluation methods. This dissertation thesis summarizes the theoretical bases of selected designs of experiments, describes several applications of the central composite design to responses of the wire electrical discharge machining process, and compares different designs of experiment for response surfaces of five parameters with regard to algorithmic selection of statistically significant parameters.
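A face-centred central composite design, the variant commonly used when factor levels cannot exceed their coded range, consists of the 2^k factorial corners, 2k axial points on the face centres, and replicated centre points. A small generator sketch; the number of centre points here is an arbitrary choice:

```python
from itertools import product

def face_centered_ccd(k, n_center=1):
    """Face-centred central composite design in coded units (-1, 0, +1):
    2**k factorial corners, 2*k axial face-centre points, n_center centres."""
    corners = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for i in range(k):
        for level in (-1, 1):
            pt = [0] * k
            pt[i] = level            # one factor at a face centre
            axial.append(pt)
    centers = [[0] * k for _ in range(n_center)]
    return corners + axial + centers

design = face_centered_ccd(3)   # 8 + 6 + 1 = 15 runs for three factors
```

The axial points are what let a central composite design estimate the pure quadratic terms that a plain two-level factorial cannot.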
APA, Harvard, Vancouver, ISO, and other styles
48

Santos, Marcela G. Mota dos. "Modelagem dinamica e analise do processo de extração supercritica de oleaginosas." [s.n.], 2000. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267597.

Full text
Abstract:
Advisor: Rubens Maciel Filho
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Quimica
Made available in DSpace on 2018-08-01T13:16:30Z (GMT). No. of bitstreams: 1 Santos_MarcelaG.Motados_M.pdf: 3608818 bytes, checksum: dd58239af4dfd4bf4e0e40b36a9134e5 (MD5) Previous issue date: 2000
Abstract: Since the third decade of the twentieth century, oilseed extraction has been carried out with hexane. As this substance is toxic and highly flammable, the search for solvents that produce good-quality oil using clean technologies has intensified, minimizing damage to the environment and to the production environment. It is in this context that the substitution of hexane by supercritical solvents is being studied; these are not only effective solvents for extracting seed oil but also present a series of peculiarities that make them more advantageous than the commonly used liquid solvents. Besides the characteristics inherent to supercritical media, carbon dioxide has proved a potential solvent to substitute for hexane, being non-toxic and non-flammable, presenting relatively mild supercritical conditions (critical temperature of 31 °C and critical pressure of 73.8 bar), and being available at low cost. The objective of this work is to study the behavior of a supercritical extraction plant. In addition, a parametric sensitivity analysis is carried out to evaluate the effect of certain variables on process performance, based on the concepts of factorial design. A model is presented that uses experimental parameters for the evaluation of control policies and optimization. Starting from this model, the parametric sensitivity analysis made it possible to study both the main effects and the interaction effects of perturbations in certain variables. From this it was possible to determine the variables with the largest influence on the process and, in this way, to determine strategies for process optimization. The stability of the numerical solution of the model for the solution method used was also evaluated.
Master's degree
Chemical Process Development
Master in Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
49

Rosa, Ana Priscila Centeno da. "Produção de biomassa e ácidos graxos por diferentes microalgas e condições de cultivo." reponame:Repositório Institucional da FURG, 2012. http://repositorio.furg.br/handle/1/6110.

Full text
Abstract:
Made available in DSpace on 2016-05-09T19:04:12Z (GMT). No. of bitstreams: 1 ana priscila centeno da rosa - produo de biomassa e cidos graxos por diferentes microalgas e condies de cultivo (1).pdf: 1584152 bytes, checksum: 9b24bade17037b2da9a900a3e058896c (MD5) Previous issue date: 2012
Due to their biochemical composition, microalgae are a potential group to be added directly to food and feed, or indirectly through the addition of microalgae-produced biocompounds. In addition, while producing the biomass, the culture may be utilized in carbon dioxide biofixation, and the final biomass can be directed to second-generation biofuel production. The aim of this work was to produce biomass and fatty acids using different microalgae and growing conditions. The work was divided into three stages: (i) the influence of light mitigation on Tetraselmis suecica F&M-M33 production and composition; (ii) evaluation of the growth profile and fatty acid production of the microalga Nannochloropsis oculata in autotrophic and mixotrophic cultures; and (iii) evaluation of the growth profile and fatty acid production of the microalgae Chlorella vulgaris and Chlorella kessleri in autotrophic and mixotrophic cultures. In the T. suecica F&M-M33 culture, productivity was influenced by the solar radiation and by the cell concentration maintained in the cultures. When the cultures were carried out in an inclined photobioreactor, productivity reached a maximum of 0.96 g.L-1.d-1; in vertical GWP reactors arranged in parallel, the maximum productivity was 0.45 g.L-1.d-1. Maximum protein (49.87 to 51.01%) and lipid (22.03 to 23.36%) concentrations were obtained in cultures with vertical photobioreactors arranged in parallel, avoiding shadow interference among themselves. For the microalga N. oculata, two 2³ factorial designs were carried out, in which the studied variables were temperature, nitrate concentration in the medium and carbon source. The highest growth and biomass productivity were achieved in the mixotrophic culture (0.64 g.L-1 and 141.95 mg.L-1.d-1, respectively), when N. oculata was cultivated in F/2 medium using 1 g.L-1 of glucose as the carbon source, 75 mg.L-1 of NO3 and 20 °C. In the autotrophic culture, the maximum cell concentration and productivity (0.62 g.L-1 and 69.78 mg.L-1.d-1, respectively) were obtained when the microalga was cultivated with 1 g.L-1 of NaHCO3, 10 mg.L-1 of NO3 and 20 °C. The main fatty acids present in the cultures were myristic acid (C14:0), palmitic acid (C16:0), palmitoleic acid (C16:1), oleic acid (C18:1) and eicosapentaenoic acid (EPA, C20:5n-3), with emphasis on C16:0, which appeared at the highest concentrations, representing 21.4 to 47% of the analyzed fatty acids. The cultures of C. vulgaris and C. kessleri were evaluated through a Plackett-Burman experimental design. The microalga C. vulgaris presented the maximum cell concentration (0.97 g.L-1) in the autotrophic culture, with a 24 h light photoperiod and 6% CO2. Maximum productivity (180.68 mg.L-1.d-1) was obtained in the mixotrophic culture of C. vulgaris with 1 g.L-1 of NaHCO3 and 5 g.L-1 of industrial oilseed residue (RIO). Palmitic acid (C16:0) was the fatty acid with the highest concentration in both the autotrophic (21.22 to 53.78%) and the mixotrophic (25.43 to 45.98%) cultures. The polyunsaturated fatty acid concentration varied between 12.19 and 41.17% in the autotrophic cultures and between 10.98 and 34.26% in the mixotrophic cultures; however, its production was not statistically affected (p<0.05) by the microalga utilized. Therefore, both microalgae, C. vulgaris and C. kessleri, may be utilized as a source of saturated and unsaturated fatty acids.
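The Plackett-Burman screening design mentioned in this abstract can be built by cyclically shifting a known generating row and appending a row of minus signs; for 12 runs the standard generator is + + - + + + - - - + -. A sketch of that construction:

```python
# 12-run Plackett-Burman design: 11 cyclic shifts of the standard
# generating row, plus a final row of all -1 (screens up to 11 factors).
GEN = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

def plackett_burman_12():
    rows = [GEN[-i:] + GEN[:-i] for i in range(11)]  # cyclic right shifts
    rows.append([-1] * 11)
    return rows

pb12 = plackett_burman_12()

# Orthogonality check: every pair of columns is uncorrelated, so main
# effects of up to 11 two-level factors can be screened in 12 runs.
for a in range(11):
    for b in range(a + 1, 11):
        assert sum(r[a] * r[b] for r in pb12) == 0
```

Each column is also balanced (six runs at +1, six at -1), which is why a main effect is simply the difference of the two level means.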
APA, Harvard, Vancouver, ISO, and other styles
50

Sibanda, Wilbert. "Comparative study of neural networks and design of experiments to the classification of HIV status / Wilbert Sibanda." Thesis, North West University, 2013. http://hdl.handle.net/10394/13179.

Full text
Abstract:
This research addresses the novel application of design of experiments, artificial neural networks and logistic regression to study the effect of demographic characteristics on the risk of acquiring HIV infection among antenatal clinic attendees in South Africa. The annual antenatal HIV survey is the only major national indicator of HIV prevalence in South Africa and is a vital technique for understanding changes in the HIV epidemic over time. The annual antenatal clinic data contain the following demographic characteristics for each pregnant woman: age (herein called mother's age), partner's age (herein father's age), population group (race), level of education, gravidity (number of pregnancies), parity (number of children born), HIV status and syphilis status. This project applied a screening design-of-experiments technique to rank the effects of individual demographic characteristics on the risk of acquiring an HIV infection. There are various screening design techniques, such as fractional or full factorial and Plackett-Burman designs; in this work, a two-level fractional factorial design was selected for screening. In addition to screening designs, this project employed response surface methodologies (RSM) to estimate interaction and quadratic effects of demographic characteristics using a central composite face-centred design and a Box-Behnken design. Furthermore, this research presents the novel application of multi-layer perceptron (MLP) neural networks to model the demographic characteristics of antenatal clinic attendees. A review report was produced on the application of neural networks to modelling HIV/AIDS around the world; this report is important for understanding the extent to which neural networks have been applied to study the HIV/AIDS pandemic. Finally, a binary logistic regression technique was employed to benchmark the results obtained by the design-of-experiments and neural network methodologies.
The two-level fractional factorial design demonstrated that HIV prevalence was highly sensitive to changes in the mother's age (15-55 years) and her level of education (Grades 0-13). The central composite face-centred and Box-Behnken designs, employed to study the individual and interaction effects of demographic characteristics on the spread of HIV in South Africa, demonstrated that the HIV status of an antenatal clinic attendee was highly sensitive to changes in the pregnant mother's age and her educational level. In addition, the interaction of the mother's age with other demographic characteristics was found to be an important determinant of the risk of acquiring an HIV infection. Furthermore, these designs illustrated that, individually, the pregnant mother's parity and her partner's age had no marked effect on her HIV status; however, both did show marked effects in two-way interactions with other demographic characteristics. The multilayer perceptron (MLP) sensitivity test also showed that the age of the pregnant woman had the greatest effect on the risk of acquiring an HIV infection, while her gravidity and syphilis status had the smallest effects. The outcome of the MLP modelling reproduced the results obtained by the screening and response surface methodologies. The binary logistic regression technique was compared with a Box-Behnken design to further elucidate the differential effects of demographic characteristics on the risk of acquiring HIV among pregnant women; the two methodologies indicated that the age of the pregnant woman and her level of education had the most profound effects on her risk of acquiring an HIV infection. To facilitate comparison of the performance of the classifiers used in this study, receiver operating characteristic (ROC) curves were applied.
Theoretically, ROC analysis provides tools to select optimal models and to discard suboptimal ones independently of the cost context or the class distribution. SAS Enterprise Miner was employed to develop the required ROC curves. To validate the results obtained by the above classification methodologies, a credit-scoring add-on in SAS Enterprise Miner was used to build binary-target scorecards comprising HIV-positive and HIV-negative datasets for probability determination. The process involved grouping variables using weights of evidence (WOE) prior to performing a logistic regression to produce predicted probabilities. Creating bins for the scorecard enables the study of the inherent relationship between demographic characteristics and an individual's HIV status; this technique increases understanding of the risk-ranking ability of the scorecard method, while offering the added advantage of being predictive.
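The weight-of-evidence binning described in this abstract has a simple closed form: for each bin, WOE = ln((share of negatives) / (share of positives)) (sign conventions vary), and the information value (IV) sums the WOE-weighted differences of those shares. A sketch on made-up counts, not the survey data:

```python
import math

# Hypothetical age bins with counts of HIV-negative ("good") and
# HIV-positive ("bad") attendees; illustrative numbers only.
bins = {            # bin: (negatives, positives)
    "15-24": (400, 100),
    "25-34": (300, 150),
    "35-55": (300, 50),
}
tot_neg = sum(n for n, _ in bins.values())
tot_pos = sum(p for _, p in bins.values())

woe = {}
iv = 0.0
for name, (neg, pos) in bins.items():
    dist_neg = neg / tot_neg
    dist_pos = pos / tot_pos
    woe[name] = math.log(dist_neg / dist_pos)       # weight of evidence
    iv += (dist_neg - dist_pos) * woe[name]         # information value
```

A negative WOE marks a bin over-represented among positives (higher risk); IV summarizes how strongly the whole binned variable separates the two classes.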
APA, Harvard, Vancouver, ISO, and other styles
