
Dissertations / Theses on the topic 'Response surface design'



Consult the top 50 dissertations / theses for your research on the topic 'Response surface design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Pickle, Stephanie M. "Semiparametric Techniques for Response Surface Methodology." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28517.

Abstract:
Many industrial statisticians employ the techniques of Response Surface Methodology (RSM) to study and optimize products and processes. A second-order Taylor series approximation is commonly utilized to model the data; however, parametric models are not always adequate. In these situations, any degree of model misspecification may result in serious bias of the estimated response. Nonparametric methods have been suggested as an alternative as they can capture structure in the data that a misspecified parametric model cannot. Yet nonparametric fits may be highly variable especially in small sample settings which are common in RSM. Therefore, semiparametric regression techniques are proposed for use in the RSM setting. These methods will be applied to an elementary RSM problem as well as the robust parameter design problem.
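The second-order Taylor approximation at the heart of parametric RSM amounts to an ordinary least squares fit of a full quadratic model. A minimal sketch in Python, using made-up coded-factor data rather than anything from the dissertation:

```python
import numpy as np

def quadratic_model_matrix(X):
    """Second-order model matrix: intercept, linear, two-factor interaction,
    and pure quadratic terms for each run in X (n runs x k factors)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                      # linear
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]   # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                                 # quadratics
    return np.column_stack(cols)

# Illustrative central-composite-style runs for two coded factors
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
# Noise-free toy response so the fit recovers the coefficients exactly
y = 50 + 3 * X[:, 0] - 2 * X[:, 1] - 4 * X[:, 0] ** 2 - 3 * X[:, 1] ** 2

M = quadratic_model_matrix(X)
beta, *_ = np.linalg.lstsq(M, y, rcond=None)  # OLS coefficient estimates
```

When the true surface departs from this parametric form, the fitted coefficients are biased everywhere in the design region, which is exactly the misspecification problem the semiparametric approach targets.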
Ph. D.
2

Parikh, Harshal. "Reservoir characterization using experimental design and response surface methodology." Texas A&M University, 2003. http://hdl.handle.net/1969/480.

3

Hadjiilias, Hippokrates A. "The aerodynamic design and optimization of a wing-fuselage junction fillet as part of a multi-disciplinary optimization process during the early aircraft design stages." Thesis, Cranfield University, 1996. http://hdl.handle.net/1826/3443.

Abstract:
An attempt to minimize interference drag in a wing-fuselage junction by means of inserting a fillet is presented in this thesis. The case of a low-wing commercial transport aircraft at cruise conditions is examined. Due to the highly three-dimensional behaviour of the flow field around the junction, a thin-layer Navier-Stokes code was implemented to estimate the drag forces at the junction. Carefully selected design variable combinations based on the theory of Design of Experiments constituted the initial group of feasible cases for which the flow solver had to be run. The drag values of these feasible cases were then used to create a second-order response surface which could predict with reasonable accuracy the interference drag given the values of the design variables within the feasible region. A further optimization isolated the minimum-interference-drag combination of design variable values within the design space; this combination was then evaluated numerically by the flow solver. The prediction of the response surface and the numerical value obtained by the flow solver for the interference drag of the optimal wing-fuselage combination differed by less than five percent. To demonstrate the ability of the method to be used in an interdisciplinary analysis and optimization program, a landing gear design module is included which provides volume constraints on the fillet geometry during the fillet surface definition phase. The Navier-Stokes flow analyses were performed on the Cranfield Cray supercomputer. Each analysis required between eight and twelve CPU hours, and the optimization of the six-variable model described in the thesis required thirty Navier-Stokes runs in total, using the Design of Experiments and Response Surface Methodology approach.
For comparison, a typical optimization implementing a classical conjugate-directions optimizer with no derivative information available would probably require more than forty iterations. Both the optimization and the flow solver results are discussed, and some recommendations for improving the efficiency of the code and for further applications of the method are given.
4

Gibson, David Riviere. "Model building and design augmentation for improved response surface estimation." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/32948.

5

Davison, Jennifer J. "Response surface designs and analysis for bi-randomization error structures." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-10042006-143852/.

6

Chomtee, Boonorm. "Comparison of design optimality criteria of reduced models for response surface designs in a spherical design region." Diss., Montana State University, 2003. http://etd.lib.montana.edu/etd/2003/chomtee/ChomteeB_03.pdf.

Abstract:
In this dissertation, the major objective is to compare 3- and 4-factor response surface designs in a spherical design region by studying design optimality criteria (D, A, G, and IV-criteria) over sets of reduced models. Hence, theoretical and computational details of evaluating optimality criteria for reduced models for response surface designs in a spherical design region are described. Specifically, robustness results for the spherical response surface designs, and the comparison of design optimality criteria of the response surface designs across the full second-order model and sets of reduced models for 3 and 4 design variables based on the four optimality criteria (D, A, G, and IV-criteria), are presented. Also, new types of D, A, G, and IV optimality criteria for response surface designs in a spherical design region are developed by assigning prior probabilities to model effects (for some specified values of p_l, p_q, p_1, and p_2). The four new D, A, G, and IV optimality criteria will be referred to as weighted design optimality criteria. The weighted design optimality criteria of the response surface designs across the weak-heredity and strong-heredity reduced models for 3 and 4 design variables are evaluated.
7

Giunta, Anthony A. "Aircraft Multidisciplinary Design Optimization using Design of Experiments Theory and Response Surface Modeling Methods." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30613.

Abstract:
Design engineers often employ numerical optimization techniques to assist in the evaluation and comparison of new aircraft configurations. While the use of numerical optimization methods is largely successful, the presence of numerical noise in realistic engineering optimization problems often inhibits the use of many gradient-based optimization techniques. Numerical noise causes inaccurate gradient calculations which in turn slows or prevents convergence during optimization. The problems created by numerical noise are particularly acute in aircraft design applications where a single aerodynamic or structural analysis of a realistic aircraft configuration may require tens of CPU hours on a supercomputer. The computational expense of the analyses coupled with the convergence difficulties created by numerical noise are significant obstacles to performing aircraft multidisciplinary design optimization. To address these issues, a procedure has been developed to create two types of noise-free mathematical models for use in aircraft optimization studies. These two methods use elements of statistical analysis and the overall procedure for using the methods is made computationally affordable by the application of parallel computing techniques. The first modeling method, which has been the primary focus of this work, employs classical statistical techniques in response surface modeling and least squares surface fitting to yield polynomial approximation models. The second method, in which only a preliminary investigation has been performed, uses Bayesian statistics and an adaptation of the Kriging process in Geostatistics to create exponential function-based interpolating models. The particular application of this research involves modeling the subsonic and supersonic aerodynamic performance of high-speed civil transport (HSCT) aircraft configurations. 
The aerodynamic models created using the two methods outlined above are employed in HSCT optimization studies so that the detrimental effects of numerical noise are reduced or eliminated during optimization. Results from sample HSCT optimization studies involving five and ten variables are presented here to demonstrate the utility of the two modeling methods.
Ph. D.
8

Kim, Yoon G. "A response surface approach to data analysis in robust parameter design." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/38627.

Abstract:
It has become obvious that combined arrays and a response surface approach can be effective tools in our quest to reduce (process) variability. An important aspect of the improvement of quality is to suppress the magnitude of the influence coming from subtle changes in noise factors. To model and control process variability induced by noise factors, we take a response surface approach. The derivatives of the standard response function with respect to the noise factors, i.e., the slopes of the response function in the direction of the noise factors, play an important role in the study of the minimum process variance. For a better understanding of the process variability, we study various properties of both biased and unbiased estimators of the process variance. Response surface modeling techniques, together with the ideas involved in modeling and estimating the variance through a function of the aforementioned derivatives, are valuable concepts in this study. In what follows, we describe the use of response surface methodology for situations in which noise factors are present. The approach is to combine Taguchi's notion of heterogeneous variability with standard design and modeling techniques available in response surface methodology.
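The "slopes in the noise-factor directions" view corresponds to a first-order (delta-method) variance approximation, Var[y] ≈ Σ (∂f/∂z_i)² Var(z_i) + σ². A small sketch with assumed, illustrative coefficients, not taken from the dissertation:

```python
import numpy as np

def process_variance(grad_z, var_z, var_eps):
    """First-order approximation: Var[y] ~ sum_i (df/dz_i)^2 * Var(z_i) + Var(eps)."""
    grad_z = np.asarray(grad_z, dtype=float)
    return float(np.sum(grad_z ** 2 * np.asarray(var_z)) + var_eps)

# Assumed fitted model y = b0 + b1*x + g1*z + d11*x*z, so df/dz = g1 + d11*x
b0, b1, g1, d11 = 10.0, 2.0, 1.5, -3.0
var_z, var_eps = 1.0, 0.5

x_robust = -g1 / d11  # control setting that zeroes the slope in z
var_at_robust = process_variance([g1 + d11 * x_robust], [var_z], var_eps)
var_at_zero = process_variance([g1 + d11 * 0.0], [var_z], var_eps)
```

Choosing the control setting x to flatten the slope drives the noise-induced component of the process variance to zero, leaving only the error variance.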
Ph. D.
9

Parker, Peter A. "Response Surface Design and Analysis in the Presence of Restricted Randomization." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/26555.

Abstract:
Practical restrictions on randomization are commonplace in industrial experiments due to the presence of hard-to-change or costly-to-change factors. Employing a split-plot design structure minimizes the number of required experimental settings for the hard-to-change factors. In this research, we propose classes of equivalent estimation second-order response surface split-plot designs for which the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. Designs that possess the equivalence property enjoy the advantages of best linear unbiased estimates and design selection that is robust to model misspecification and independent of the variance components. We present a generalized proof of the equivalence conditions that enables the development of several systematic design construction strategies and provides the ability to verify numerically that a design provides equivalent estimates, resulting in a broad catalog of designs. We explore the construction of balanced and unbalanced split-plot versions of the central composite and Box-Behnken designs. In addition, we illustrate the utility of numerical verification in generating D-optimal and minimal point designs, including split-plot versions of the Notz, Hoke, Box and Draper, and hybrid designs. Finally, we consider the practical implications of analyzing a near-equivalent design when a suitable equivalent design is not available. By simulation, we compare methods of estimation to provide a practitioner with guidance on analysis alternatives when a best linear unbiased estimator is not available. Our goal throughout this research is to develop practical experimentation strategies for restricted randomization that are consistent with the philosophy of traditional response surface methodology.
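The equivalence property can be illustrated numerically: under a split-plot covariance V = σ²I + σw²ZZ', a model whose columns are constant within whole plots yields identical OLS and GLS estimates. A toy sketch with three whole plots of two runs each; the design and numbers are illustrative, not one of the catalogued designs:

```python
import numpy as np

# Split-plot error structure: V = sigma2 * I + sigma2_w * Z Z',
# where Z assigns 6 runs to 3 whole plots of 2 runs each.
Z = np.kron(np.eye(3), np.ones((2, 1)))
V = 1.0 * np.eye(6) + 2.0 * (Z @ Z.T)

# Intercept plus one whole-plot factor (constant within each whole plot):
# these columns span an invariant subspace of V, so OLS = GLS.
w = np.kron(np.array([-1.0, 0.0, 1.0]), np.ones(2))
X = np.column_stack([np.ones(6), w])

rng = np.random.default_rng(1)
y = X @ np.array([2.0, 0.5]) + rng.normal(size=6)

ols = np.linalg.solve(X.T @ X, X.T @ y)                # ordinary least squares
Vinv = np.linalg.inv(V)
gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # generalized least squares
```

The two estimates agree for every response vector and for any value of the variance components, which is why equivalent-estimation designs can be selected without knowing the variance-component ratio.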
Ph. D.
10

Ling, Qi. "Design of Automotive Joints Using Response Surface Polynomials and Neural Networks." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/45205.

Abstract:
In the early design stages of a car body, a simplified model, which represents the constituent components of the car body by their performance characteristics, is used to optimize the overall car. The determined optimum performance characteristics of the components are used as performance targets to design these components. Since designers do not know the relation between the performance characteristics of the components and their dimensions and mass, this may lead to unreasonable performance targets for the components. Moreover, this process is inefficient because design engineers use empirical procedures to design the components that should meet these targets. To design the component more efficiently, design tools are needed to link the performance targets with the physical design variables of the components. General methodologies for developing two design tools for the design of car joints are presented. These tools can be viewed as translators since they translate the performance characteristics of the joint into its dimensions and vice-versa. The first tool, called translator A, quickly predicts the stiffness and the mass of a given joint. The second tool, called translator B, finds the dimensions and mass of the most efficient joint design that meets given stiffness requirements, packaging, manufacturing and styling constraints. Putting bulkheads in the joint structure is an efficient way to increase stiffness. This thesis investigates the effect of transverse bulkheads on the stiffness of an actual B-pillar to rocker joint. It also develops a translator A for the B-pillar to rocker joint with transverse bulkheads. The developed translator A can quickly predict the stiffness of the reinforced joint. Translator B uses optimization to find the most efficient, feasible joint design that meets given targets. Sequential Linear Programming (SLP) and the Modified Feasible Direction (MFD) method are used for optimization. 
Both Response Surface Polynomial (RSP) translator B and Neural Network (NN) translator B are developed and validated. Translator A is implemented in an MS-Excel program. Translator B is implemented in a MATHEMATICA program. The methodology for developing translator B is demonstrated on the B-pillar to rocker joint of an actual car. The convergence of the optimizer is checked by solving the optimization problem many times starting from different initial designs. The results from translator B are also checked against FEA results to ensure the feasibility of the optimum designs. By observing the optimum designs and by performing parametric studies for the effect of some important design variables on the joint mass we can establish guidelines for design of joints.
Master of Science
11

Miller, Michael Chad. "Global Resource Management of Response Surface Methodology." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1621.

Abstract:
Statistical research can be more difficult to plan than other kinds of projects, since the research must adapt as knowledge is gained. This dissertation establishes a formal language and methodology for designing experimental research strategies with limited resources. It is a mathematically rigorous extension of a sequential and adaptive form of statistical research called response surface methodology. It uses sponsor-given information, conditions, and resource constraints to decompose an overall project into individual stages. At each stage, a "parent" decision-maker determines what design of experimentation to do for its stage of research, and adapts to the feedback from that research's potential "children", each of whom deals with a different possible state of knowledge resulting from the experimentation of the "parent". The research of this dissertation extends the real-world rigor of the statistical field of design of experiments to develop a deterministic, adaptive algorithm that produces reproducible, testable, defensible, resource-constrained multi-stage experimental schedules without having to spend physical resources.
12

Truong, David Hien. "Single-Step Factor Screening and Response Surface Optimization Using Optimal Designs with Minimal Aliasing." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/64.

Abstract:
Cheng and Wu (2001) introduced a method for response surface exploration using only one design by using a 3-level design to first screen a large number of factors and then project onto the significant factors to perform response surface exploration. Previous work generally involved selecting designs based on projection properties first and aliasing structure second. However, having good projection properties is of little concern if the correct factors cannot be identified. We apply Jones and Nachtsheim’s (2009) method for finding optimal designs with minimal aliasing to find 18, 27, and 30-run designs to use for single-step screening and optimization. Our designs have better factor screening capabilities than the designs of Cheng and Wu (2001) and Xu et al. (2004), while maintaining similar D-efficiencies and allowing all projections to fit a full second order model.
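The D-efficiency comparisons referred to here reduce to scaled determinants of the information matrix. A generic sketch follows; the tiny main-effects example designs are illustrative stand-ins, not the 18-, 27-, or 30-run designs of the thesis:

```python
import numpy as np

def d_criterion(X):
    """Scaled determinant |X'X / n|^(1/p); larger is better under D-optimality."""
    n, p = X.shape
    return np.linalg.det(X.T @ X / n) ** (1.0 / p)

def relative_d_efficiency(X1, X2):
    """D-efficiency of design X1 relative to design X2 (1.0 means equally good)."""
    return d_criterion(X1) / d_criterion(X2)

# Model matrices for a main-effects model in two factors: [1, x1, x2]
full = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1]])        # 2^2 factorial
tilted = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 0.5, 0.5]])  # one run pulled inward
```

The orthogonal 2² factorial attains the best possible value for this model; moving one run toward the center drops the relative D-efficiency below 1.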
13

Kaufman, Matthew Douglas. "Variable-Complexity Response Surface Approximations For Wing Structural Weight in HSCT Design." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36566.

Abstract:
A procedure for generating and using a polynomial approximation to wing bending material weight of a High Speed Civil Transport (HSCT) is presented. Response surface methodology is used to fit a quadratic polynomial to data gathered from a series of structural optimizations. Several techniques are employed in order to minimize the number of required structural optimizations and to maintain accuracy. First, another weight function based on statistical data is used to identify a suitable model function for the response surface. In a similar manner, geometric and loading parameters that are likely to appear in the response surface model are also identified. Next, rudimentary analysis techniques are used to find regions of the design space where reasonable HSCT designs could occur. The use of intervening variables along with analysis of variance reduce the number of polynomial terms in the response surface model function. Structural optimization is then performed by the program GENESIS on a 28-node Intel Paragon. Finally, optimizations of the HSCT are completed both with and without the response surface.
Master of Science
14

Rich, Jonathan E. "Design Optimization Procedure for Monocoque Composite Cylinder Structures Using Response Surface Techniques." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/35512.

Abstract:
An optimization strategy for the design of composite shells is investigated. This study differs from previous work in that an advanced analysis package is utilized to provide buckling information on potential designs. The Structural Analysis of General Shells (STAGS) finite element code is used to provide linear buckling calculations for a minimum buckling load constraint. A response surface, spanning the design space, is generated from a set of design points and corresponding buckling load data. This response surface is incorporated into a genetic algorithm for optimization of composite cylinders. Laminate designs are limited to those that are balanced and symmetric. Three load cases and four different variable formulations are examined. In the first approach, designs are limited to those whose normalized in-plane and out-of-plane stiffness parameters would be feasible with laminates consisting of two independent fiber orientation angles. The second approach increases the design space to include those that are bordered by those in the first approach. The third and fourth approaches utilize stacking sequence designs for optimization, with continuous and discrete fiber orientation angle variation, respectively. For each load case and different variable formulation, additional runs are made to account for inaccuracies inherent in the response surface model. This study concluded that this strategy was effective at reducing the computational cost of optimizing the composite cylinders.
Master of Science
15

Chantarat, Navara. "Modern design of experiments methods for screening and experimentations with mixture and qualitative variables." Columbus, OH : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1064198056.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xiv, 119 p.: ill. (some col.). Includes abstract and vita. Advisor: Theodore T. Allen, Dept. of Industrial and Systems Engineering. Includes bibliographical references (p. 111-119).
16

Wang, Li. "Recommendations for Design Parameters for Central Composite Designs with Restricted Randomization." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28794.

Abstract:
In response surface methodology, the central composite design is the most popular choice for fitting a second-order model. The choice of the distance for the axial runs, alpha, in a central composite design is crucial to the performance of the design. In the literature, there are plenty of discussions and recommendations for the choice of alpha, among which a rotatable alpha and an orthogonal blocking alpha receive the greatest attention. Box and Hunter (1957) discuss and calculate the values of alpha that achieve rotatability, which is a way to stabilize the prediction variance of the design. They also give the values of alpha that make the design orthogonally blocked, where the estimates of the model coefficients remain the same even when block effects are added to the model. In the last ten years, people have begun to realize the importance of a split-plot structure in industrial experiments, and constructing response surface designs with a split-plot structure is now an active research area. In this dissertation, Box and Hunter's choice of alpha for rotatability and orthogonal blocking is extended to central composite designs with a split-plot structure. By assigning different values to the axial run distances of the whole-plot factors and the subplot factors, we propose two-strata rotatable split-plot central composite designs and orthogonally blocked split-plot central composite designs. Since the construction of the two-strata rotatable split-plot central composite design involves an unknown variance-component ratio d, we further study the robustness of the two-strata rotatability to d through simulation. Our goal is to provide practical recommendations for the value of the design parameter alpha based on the philosophy of traditional response surface methodology.
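Box and Hunter's two classical choices of the axial distance can be written down directly. The sketch below covers the completely randomized CCD only; the split-plot extensions developed in the dissertation are not reproduced here:

```python
def alpha_rotatable(k):
    """Rotatable axial distance: fourth root of the number of factorial points, (2**k)**0.25."""
    return (2 ** k) ** 0.25

def alpha_orthogonal_blocking(k, n_f0, n_a0):
    """Axial distance that orthogonally blocks a CCD into a factorial block
    (with n_f0 center runs) and an axial block (with n_a0 center runs)."""
    F = 2 ** k  # number of full factorial points
    return (F * (2 * k + n_a0) / (2 * (F + n_f0))) ** 0.5

# For k = 2 with three center runs in each block, both criteria give sqrt(2),
# so a single design can be rotatable and orthogonally blocked at once.
print(alpha_rotatable(2), alpha_orthogonal_blocking(2, 3, 3))
```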
Ph. D.
17

Papila, Nilay U. "Neural network and polynomial-based response surface techniques for supersonic turbine design optimization." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000340.

Abstract:
Thesis (Ph. D.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains xii, 191 p.; also contains graphics. Includes vita. Includes bibliographical references.
18

Balabanov, Vladimir Olegovich. "Development of Approximations for HSCT Wing Bending Material Weight using Response Surface Methodology." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30730.

Abstract:
A procedure for generating a customized weight function for wing bending material weight of a High Speed Civil Transport (HSCT) is described. The weight function is based on HSCT configuration parameters. A response surface methodology is used to fit a quadratic polynomial to data gathered from a large number of structural optimizations. To reduce the time of performing a large number of structural optimizations, coarse-grained parallelization with a master-slave processor assignment on an Intel Paragon computer is used. The results of the structural optimization are noisy. Noise reduction in the structural optimization results is discussed. It is shown that the response surface filters out this noise. A statistical design of experiments technique is used to minimize the number of required structural optimizations and to maintain accuracy. Simple analysis techniques are used to find regions of the design space where reasonable HSCT designs could occur, thus customizing the weight function to the design requirements of the HSCT, while the response surface itself is created employing detailed analysis methods. Analysis of variance is used to reduce the number of polynomial terms in the response surface model function. Linear and constant corrections based on a small number of high fidelity results are employed to improve the accuracy of the response surface model. Configuration optimization of the HSCT employing a customized weight function is compared to the configuration optimization of the HSCT with a general weight function.
Ph. D.
19

Kini, Satish D. "An approach to integrating numerical and response surface models for robust design of production systems." Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1080276457.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xviii, 220 p.; also includes graphics (some col.). Includes abstract and vita. Advisor: R. Shivpuri, Dept. of Industrial, Welding and Systems Engineering. Includes bibliographical references.
20

Foster, Kevin G. "Automated Tool Design for Complex Free-Form Components." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2383.

Abstract:
In today's competitive manufacturing industries, companies strive to reduce manufacturing development costs and lead times in hopes of reducing costs and capturing more market share from early release of their new or redesigned products. Tooling lead time constraints are some of the more significant challenges facing product development of advanced free-form components. This is especially true for complex designs in which large dies, molds or other large forming tools are required. The lead time for tooling, in general, consists of three main components; material acquisition, tool design and engineering, and tool manufacturing. Lead times for material acquisition and tool manufacture are normally a function of vendor/outsourcing constraints, manufacturing techniques and complexity of tooling being produced. The tool design and engineering component is a function of available manpower, engineering expertise, type of design problem (initial design or redesign of tooling), and complexity of the design problem. To reduce the tool design/engineering lead time, many engineering groups have implemented Computer-Aided Design, Engineering, and Manufacturing (CAD/CAE/CAM or CAx) tools as their standard practice for the design and analysis of their products. Although the predictive capabilities are efficient, using CAx tools to expedite advanced die design is time consuming due to the free-form nature and complexity of the desired part geometry. Design iterations can consume large quantities of time and money, thus driving profit margins down or even being infeasible from a cost and schedule standpoint. Any savings based on a reduction in time are desired so long as quality is not sacrificed. This thesis presents an automated tool design methodology that integrates state-of-the-art numerical surface fitting methods with commercially available CAD/CAE/CAM technologies and optimization software. 
The intent is to virtually create tooling wherein work-piece geometries have been optimized producing products that capture accurate design intent. Results show a significant reduction in design/engineering tool development time. This is due to the integration and automation of associative tooling surfaces automatically derived from the known final design intent geometry. Because this approach extends commercially available CAx tools, this thesis can be used as a blueprint for any automotive or aerospace tooling need to eliminate significant time and costs from the manufacture of complex free-form components.
21

Shaibu, Abdul-Baasit. "Development of the integrated censored robust-tolerance engineering design systems via improved response surface modeling." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1246558728/.

22

Crisafulli, Paul J. "Response surface approximations for pitching moment including pitch-up in the multidisciplinary design optimization of a high-speed civil transport." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/45070.

23

Huffman, Jennifer Wade. "Optimal Experimental Design for Poisson Impaired Reproduction Studies." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30751.

Abstract:
Impaired reproduction studies with Poisson responses are among a growing class of toxicity studies in the biological and medical realm. In recent years, little effort has been focused on the development of efficient experimental designs for impaired reproduction studies. This research concentrates on two areas: 1) the use of Bayesian techniques to make single regressor designs robust to parameter misspecification and 2) the extension of design optimality methods to the k-regressor model. The standard Poisson model with log link is used. Bayesian designs with priors on the parameters are explored using both the D and F-optimality criteria for the single regressor Poisson exponential model. Since these designs are found via numeric optimization techniques, Bayesian equivalence theory functions are derived to verify the optimality of these designs. Efficient Bayesian designs which provide for lack-of-fit testing are discussed. Characterizations of D, Ds, and interaction optimal designs which are factorial in nature are demonstrated for models involving interaction through k factors. The optimality of these designs is verified using equivalence theory. In addition, augmentations of these designs that result in desirable lack of fit properties are discussed. Also, a structure for fractional factorials is given in which specific points are added one at a time to the main effect design in order to gain estimability of the desired interactions. Robustness properties are addressed as well. Finally, this entire line of research is extended to industrial exponential models where different regressors work to increase and/or decrease a count data response produced by a process.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Neff, Angela R. "Bayesian Two Stage Design Under Model Uncertainty." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30303.

Full text
Abstract:
Traditional single stage design optimality procedures can be used to efficiently generate data for an assumed model y = f(x(m),b) + ε. The model assumptions include the form of f, the set of regressors, x(m) , and the distribution of ε. The nature of the response, y, often provides information about the model form (f) and the error distribution. It is more difficult to know, apriori, the specific set of regressors which will best explain the relationship between the response and a set of design (control) variables x. Misspecification of x(m) will result in a design which is efficient, but for the wrong model. A Bayesian two stage design approach makes it possible to efficiently design experiments when initial knowledge of x(m) is poor. This is accomplished by using a Bayesian optimality criterion in the first stage which is robust to model uncertainty. Bayesian analysis of first stage data reduces uncertainty associated with x(m), enabling the remaining design points (second stage design) to be chosen with greater efficiency. The second stage design is then generated from an optimality procedure which incorporates the improved model knowledge. Using this approach, numerous two stage design procedures have been developed for the normal linear model. Extending this concept, a Bayesian design augmentation procedure has been developed for the purpose of efficiently obtaining data for variance modeling, when initial knowledge of the variance model is poor.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Chiacchierini, Lisa M. "Experimental design issues in impaired reproduction applications." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151533/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Xi, and Qiang Zhou. "Sequential design strategies for mean response surface metamodeling via stochastic kriging with adaptive exploration and exploitation." ELSEVIER SCIENCE BV, 2017. http://hdl.handle.net/10150/626021.

Full text
Abstract:
Stochastic kriging (SK) methodology has been known as an effective metamodeling tool for approximating a mean response surface implied by a stochastic simulation. In this paper we provide some theoretical results on the predictive performance of SK, in light of which novel integrated mean squared error-based sequential design strategies are proposed to apply SK for mean response surface metamodeling with a fixed simulation budget. Through numerical examples of different features, we show that SK with the proposed strategies applied holds great promise for achieving high predictive accuracy by striking a good balance between exploration and exploitation. Published by Elsevier B.V.
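A minimal sketch of the SK prediction step, assuming a squared-exponential correlation, a known zero trend (simple kriging), and hypothetical sample means and intrinsic variances; none of these choices come from the paper:

```python
import math

def gauss_corr(a, b, theta=2.0):
    """Squared-exponential (Gaussian) correlation in one dimension."""
    return math.exp(-theta * (a - b) ** 2)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def sk_predict(xs, ybar, noise_var, x0, tau2=1.0):
    """Stochastic-kriging mean prediction at x0: the extrinsic
    covariance tau2*R is inflated on the diagonal by the intrinsic
    variance of each simulation sample mean."""
    n = len(xs)
    K = [[tau2 * gauss_corr(xs[i], xs[j]) + (noise_var[i] if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(K, ybar)
    return sum(tau2 * gauss_corr(x0, xs[i]) * w[i] for i in range(n))

# Hypothetical design points, replication averages, and intrinsic variances
xs, ybar = [0.0, 0.5, 1.0], [1.0, 2.0, 0.5]
print(sk_predict(xs, ybar, [0.05, 0.05, 0.05], 0.25))
```

With zero intrinsic variance the predictor interpolates the sample means exactly; as the intrinsic variance grows, predictions are smoothed toward the trend, which is the behavior the paper's sequential strategies exploit.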
APA, Harvard, Vancouver, ISO, and other styles
27

Dong, Shuping. "Effects of acid hydrolysis conditions on cellulose nanocrystal yield and properties: A response surface methodology study." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/78102.

Full text
Abstract:
Cellulose nanocrystals (CNCs) are frequently prepared by sulfuric acid hydrolysis of a purified cellulose starting material. CNC yields, however, are generally low, often below 20%. This study employs response surface methodology to optimize the hydrolysis conditions for maximum CNC yield. Two experimental designs were tested and compared: the central composite design (CCD) and the Box–Behnken design (BBD). The three factors for the experimental design were acid concentration, hydrolysis temperature, and hydrolysis time. The responses quantified were CNC yield, sulfate group density, ζ-potential, z-average diameter, and Peak 1 value. The CCD proved suboptimal for this purpose because of the extreme reaction conditions at some of its corners, specifically (1,1,1) and (–1,–1, –1). Both models predicted maximum CNC yields in excess of 65% at similar sulfuric acid concentrations (~59 wt %) and hydrolysis temperatures (~65 °C). With the BBD, the hydrolysis temperature for maximum yield lay slightly outside the design space. All three factors were statistically significant for CNC yield with the CCD, whereas with the BBD, the hydrolysis time in the range 60–150 min was statistically insignificant. With both designs, the sulfate group density was a linear function of the acid concentration and hydrolysis temperature and maximal at the highest acid concentration and hydrolysis temperature of the design space. Both designs showed the hydrolysis time to be statistically insignificant for the ζ-potential of CNCs and yielded potentially data-overfitting regression models. With the BBD, the acid concentration significantly affected both the z-average diameter and Peak 1 value of CNCs. However, whereas the z-average diameter was more strongly affected by the hydrolysis temperature than the hydrolysis time, the Peak 1 value was more strongly affected by the hydrolysis time. 
The CCD did not yield a valid regression model for the Peak 1 data and yielded a potentially data-overfitting model for the z-average diameter data. A future optimization study should use the BBD but slightly higher hydrolysis temperatures and shorter hydrolysis times than used with the BBD in this study (45–65 °C and 60–150 min, respectively).
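For illustration, the two three-factor designs compared in this study can be generated in coded units as follows; the rotatable axial distance alpha = 1.682 is a common choice for k = 3, not necessarily the value used in the thesis:

```python
from itertools import product

def central_composite(k=3, alpha=1.682):
    """CCD for k factors: 2^k factorial corners, 2k axial (star)
    points at +/-alpha, plus the centre point."""
    corners = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    return corners + axial + [[0.0] * k]

def box_behnken(k=3):
    """BBD for k factors: +/-1 combinations on every pair of factors
    with the remaining factors at 0, plus the centre point -- so it
    avoids extreme corners such as (1,1,1) and (-1,-1,-1)."""
    pts = []
    for i in range(k):
        for j in range(i + 1, k):
            for a, b in product((-1, 1), repeat=2):
                pt = [0] * k
                pt[i], pt[j] = a, b
                pts.append(pt)
    return pts + [[0] * k]

ccd, bbd = central_composite(), box_behnken()
print(len(ccd), len(bbd))  # 15 and 13 runs (one centre run each)
```

The absence of the (1, 1, 1)-type corners in the BBD point set is exactly why the abstract finds it preferable when those corners correspond to unworkably harsh hydrolysis conditions.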
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
28

Yah, Fritz Alum. "Design/Evaluation of A Methodology For Performance Optimization Of Indexable Carbide Inserts." Thesis, Högskolan Dalarna, Maskinteknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4574.

Full text
Abstract:
In this project, two broad facets in the design of a methodology for performance optimization of indexable carbide inserts were examined: physical destructive testing and software simulation. For the physical testing, statistical research techniques were used for the design of the methodology. A five-step method, beginning with problem definition and proceeding through system identification, statistical model formation, data collection, and statistical analysis of results, was elaborated upon in depth. Set-up and execution of an experiment with a compression machine, together with roadblocks to quality data collection and possible solutions to them, were examined. The 2^k factorial design was illustrated and recommended for process improvement. Instances of first-order and second-order response surface analyses were encountered. In the case of curvature, a test for curvature significance with center-point analysis was recommended. Process optimization with the method of steepest ascent and central composite designs, or process robustness studies via response surface analysis, were also recommended. For the simulation test, the AdvantEdge program was identified as the most used software for tool development. Challenges to the efficient application of this software were identified and possible solutions proposed. In conclusion, both software simulation and physical testing were recommended to meet the objective of the project.
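As a hedged illustration of the 2^k factorial analysis recommended above, the sketch below estimates main effects from a hypothetical 2^2 experiment; the responses are invented, not the thesis's insert-testing data:

```python
from itertools import product

def two_level_factorial(k):
    """Full 2^k design in coded -1/+1 units, standard order."""
    return [list(p) for p in product((-1, 1), repeat=k)]

def main_effects(design, y):
    """Main effect of each factor: average response at the +1 level
    minus the average response at the -1 level."""
    k = len(design[0])
    return [sum(r[i] * yi for r, yi in zip(design, y)) / (len(y) / 2)
            for i in range(k)]

# Hypothetical responses for a 2^2 experiment:
# runs in the order (-1,-1), (-1,+1), (+1,-1), (+1,+1)
design = two_level_factorial(2)
y = [45.0, 71.0, 48.0, 65.0]
print(main_effects(design, y))  # factor B dominates in this toy data
```

Center-point runs added to such a design allow the curvature test the abstract mentions: a significant gap between the center-point average and the factorial average signals a second-order surface.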
APA, Harvard, Vancouver, ISO, and other styles
29

Cheng, Aili. "CONFIDENCE REGIONS FOR OPTIMAL CONTROLLABLE VARIABLES FOR THE ROBUST PARAMETER DESIGN PROBLEM." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214761.

Full text
Abstract:
Statistics
Ph.D.
In robust parameter design it is often possible to set the levels of the controllable factors to produce a zero gradient for the transmission of variability from the noise variables. If the number of control variables is greater than the number of noise variables, a continuum of zero-gradient solutions exists. This situation is useful as it provides the experimenter with multiple conditions under which to configure a zero gradient for noise variable transmission. However, this situation requires a confidence region for the multiple-solution factor levels that provides proper simultaneous coverage. This requirement has not been previously recognized in the literature. In the case where the number of control variables is greater than the number of noise variables, we show how to construct critical values needed to maintain the simultaneous coverage rate. Two examples are provided as a demonstration of the practical need to adjust the critical values for simultaneous coverage. The zero-gradient confidence region only focuses on the variance, and there are in fact many such situations in which focus is or could be placed entirely on the process variance. In the situation where both mean and variance need to be considered, a general confidence region in control variables is developed by minimizing weighted mean square error. This general method is applicable to many situations including mixture experiments which have an inherit constraint on the control factors. It also gives the user the flexibility to put different weights on the mean and variance parts for simultaneous optimization. It turns out that the same computational algorithm can be used to compute the dual confidence region in both control factors and the response variable.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
30

Erdural, Serkan. "A Method For Robust Design Of Products Or Processes With Categorical Response." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12608015/index.pdf.

Full text
Abstract:
In industrial processes, decreasing variation while achieving targets is very important. For manufacturers, finding optimal settings of product and process parameters that are capable of producing desired results under varying conditions is crucial. In most cases, the quality response is measured on a continuous scale. However, in some cases, the desired quality response may be qualitative (categorical). There are many effective methods for designing robust products/processes through industrial experimentation when the response variable is continuous, but the methods proposed so far in the literature for robust design with categorical response variables have various limitations. This study offers a simple and effective method for the analysis of categorical response data for robust product or process design. The method handles both location and dispersion effects to explore robust settings in an effective way. It is illustrated on two cases: a foam molding process design and an iron-casting process design.
APA, Harvard, Vancouver, ISO, and other styles
31

Rodrigues, Marlon Casagrande. "Estudo da influência dos parâmetros de injeção de combustível no ruído emitido por motores diesel, fazendo uso do planejamento multivariado de experimentos." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265284.

Full text
Abstract:
Advisor: Roy Edward Bruns
Dissertation (professional master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: In recent years the noise level has been decisive for acceptance of vehicles on the market, not only because of legislation but also with regard to customer satisfaction. For this reason the manufacturers of vehicles and internal combustion engines have been forced to give special attention to the emission of noise to compete with their competitors. In this work, the influences of fuel injection parameters on the noise level of the MWM 6.12 TCE diesel engine emitted under low idle condition were determined using multivariate statistical design of experiments. Two techniques for the indirect measurement of noise, combustion noise and engine crankcase vibration, were chosen as the response variables for the experimental design. To check design effectiveness both qualitative and quantitative noise measurements were carried out. A 28-4 fractional factorial design was performed to screen eight factors according to their effects on engine crankcase vibration, and combustion noise. The factors with the most significant effects, rail pressure, pre-injection point, main injection point and the pre-injection delivery, were investigated using a central composite design and response surfaces were determined for each response. The results showed that only the engine crankcase vibration showed satisfactory results for this particular engine condition in both the quantitative and qualitative analyses (reduction of 2 dB and an improvement in sound quality, respectively). After verification of the influences of the injection parameters on the noise the influences of these changes on fuel consumption and exhaust emissions were also analyzed. Significant influences were observed on the exhaust gas emissions
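The 2^(8-4) screening design mentioned in the abstract can be sketched as follows; the generator choice E=BCD, F=ACD, G=ABC, H=ABD is one standard resolution-IV option and is not necessarily the one used in the dissertation:

```python
from itertools import product

def fractional_factorial_2_8_4():
    """16-run 2^(8-4) screening design: a full two-level factorial
    in factors A-D, with the remaining four columns defined by the
    generators E=BCD, F=ACD, G=ABC, H=ABD (resolution IV)."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e, f, g, h = b * c * d, a * c * d, a * b * c, a * b * d
        runs.append([a, b, c, d, e, f, g, h])
    return runs

runs = fractional_factorial_2_8_4()
print(len(runs))  # 16 runs instead of 2**8 = 256
```

All eight main-effect columns are balanced and mutually orthogonal, which is what makes such a fraction usable for screening eight injection parameters before a central composite design refines the few that matter.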
Master's degree
Projects
Master in Automotive Engineering
APA, Harvard, Vancouver, ISO, and other styles
32

Quintao, Karla K. "Design Optimization of Nozzle Shapes for Maximum Uniformity of Exit Flow." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/779.

Full text
Abstract:
The objective of this study is to identify the optimal designs of converging-diverging supersonic and hypersonic nozzles that perform at maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle. This work presents how variation of such parameters can influence the nozzle exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including the heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied for the purpose of performing a multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Functions (RBF) response surfaces. Final Pareto optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
APA, Harvard, Vancouver, ISO, and other styles
33

Aldemir, Basak. "Parameter Optimization Of Chemically Activated Mortars Containing High Volumes Of Pozzolan By Statistical Design And Analysis Of Experiments." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607106/index.pdf.

Full text
Abstract:
Aldemir, Başak. M.S., Department of Industrial Engineering. Supervisor: Prof. Dr. Ömer Saatçioglu; Co-Supervisor: Assoc. Prof. Dr. Lutfullah Turanli. January 2006, 167 pages. This thesis illustrates parameter optimization of early and late compressive strengths of chemically activated mortars containing high volumes of pozzolan by statistical design and analysis of experiments. Four dominant parameters in the chemical activation of natural pozzolans are chosen for the research: natural pozzolan replacement, amount of pozzolan passing a 45 µm sieve, activator dosage, and activator type. Response surface methodology has been employed in the statistical design and analysis of experiments. Based on various second-order response surface designs, experimental data has been collected and the best regression models have been chosen and optimized. In addition to the optimization of early and late strength responses separately, simultaneous optimization of compressive strength with several other responses, such as cost and standard deviation estimate, has also been performed. The research highlight is the uniqueness of the statistical optimization approach to chemical activation of natural pozzolans.
APA, Harvard, Vancouver, ISO, and other styles
34

Manawadu, Harshi Chathurangi. "Design of a nanoplatform for treating pancreatic cancer." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17736.

Full text
Abstract:
Doctor of Philosophy
Department of Chemistry
Stefan H. Bossmann
Pancreatic cancer is the fourth leading cause of cancer-related deaths in the USA. Asymptomatic early cancer stages and late diagnosis leads to very low survival rates of pancreatic cancers, compared to other cancers. Treatment options for advanced pancreatic cancer are limited to chemotherapy and/or radiation therapy, as surgical removal of the cancerous tissue becomes impossible at later stages. Therefore, there's a critical need for innovative and improved chemotherapeutic treatment of (late) pancreatic cancers. It is mandatory for successful treatment strategies to overcome the drug resistance associated with pancreatic cancers. Nanotechnology based drug formulations have been providing promising alternatives in cancer treatment due to their selective targeting and accumulation in tumor vasculature, which can be used for efficient delivery of chemotherapeutic agents to tumors and metastases. The research of my thesis is following the principle approach to high therapeutic efficacy that has been first described by Dr. Helmut Ringsdorf in 1975. However, I have extended the use of the Ringsdorf model from polymeric to nanoparticle-based drug carriers by exploring an iron / iron oxide nanoparticle based drug delivery system. A series of drug delivery systems have been synthesized by varying the total numbers and the ratio of the tumor homing peptide sequence CGKRK and the chemotherapeutic drug doxorubicin at the surfaces of Fe/Fe₃O₄-nanoparticles. The cytotoxicity of these nanoformulations was tested against murine pancreatic cancer cell lines (Pan02) to assess their therapeutic capabilities for effective treatments of pancreatic cancers. Healthy mouse fibroblast cells (STO) were also tested for comparison, because an effective chemotherapeutic drug has to be selective towards cancer cells. Optimal Experimental Design methodology was applied to identify the nanoformulation with the highest therapeutic activity. 
A statistical analysis method known as response surface methodology was carried out to evaluate the in-vitro cytotoxicity data and to determine whether the chosen experimental parameters truly express the optimized conditions of the nanoparticle-based drug delivery system. The overall goal was to optimize the therapeutic efficacy of nanoparticle-based pancreatic cancer treatment. Based on the statistical data, the most effective iron/iron oxide nanoparticle-based drug delivery system has been identified. Its Fe/Fe₃O₄ core has a diameter of 20 nm. The surface of this nanoparticle is loaded with the homing sequence CGKRK (139-142 peptide molecules per nanoparticle surface) and the chemotherapeutic agent doxorubicin (156-159 molecules per surface). This nanoplatform is a promising candidate for the nanoparticle-based chemotherapy of pancreatic cancer.
APA, Harvard, Vancouver, ISO, and other styles
35

Demko, Daniel Todd. "Tools for Multi-Objective and Multi-Disciplinary Optimization in Naval Ship Design." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31743.

Full text
Abstract:
This thesis focuses on practical and quantitative methods for measuring effectiveness in naval ship design. An Overall Measure of Effectiveness (OMOE) model or function is an essential prerequisite for optimization and design trade-off. This effectiveness can be limited to individual ship missions or extend to missions within a task group or larger context. A method is presented that uses the Analytic Hierarchy Process combined with Multi-Attribute Value Theory to build an Overall Measure of Effectiveness and an Overall Measure of Risk function to properly rank and approximately measure the relative mission effectiveness and risk of design alternatives, using trained expert opinion to replace complex analysis tools. A validation of this method is achieved through experimentation comparing ships ranked by the method with direct ranking of the ships through war-gaming scenarios. The second part of this thesis presents a mathematical ship synthesis model to be used in early concept development stages of the ship design process. Tools to simplify the process and introduce greater accuracy are described and developed. Response Surface Models and Design of Experiments simplify and speed up the process. Finite element codes such as MAESTRO improve the accuracy of the ship synthesis models, which in turn lowers costs later in the design process. A case study of an Advanced Logistics Delivery Ship (ALDV) is performed to assess the use of RSM and DOE methods to minimize computation time when using high-fidelity codes early in the naval ship design process.
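As an illustrative sketch of the Analytic Hierarchy Process step, priorities can be extracted from a pairwise-comparison matrix by power iteration on its principal eigenvector; the criteria and judgments below are hypothetical, not the thesis's expert data:

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector priorities of a pairwise-comparison
    matrix via power iteration, normalised to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical 3-criterion comparison, e.g. mission effectiveness
# vs. risk vs. cost; M[i][j] is how strongly criterion i is
# preferred over criterion j on the usual 1-9 scale
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
print(ahp_weights(M))  # roughly [0.65, 0.23, 0.12]
```

These weights would then combine single-criterion value functions into an OMOE; a consistency check on the comparison matrix is normally applied before trusting the weights.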
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
36

Buyukburc, Atil. "Robust Design Of Lithium Extraction From Boron Clays By Using Statistical Design And Analysis Of Experiments." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1027036/index.pdf.

Full text
Abstract:
In this thesis, it is aimed to design lithium extraction from boron clays using statistical design of experiments and robust design methodologies. There are several factors affecting the extraction of lithium from clays. The most important of these have been limited to six: gypsum to clay ratio, roasting temperature, roasting time, leaching solid to liquid ratio, leaching time, and limestone to clay ratio. For every factor, three levels have been chosen and an experiment has been designed. After performing three replications for each experimental run, signal to noise ratio transformation, ANOVA, regression analysis, and response surface methodology have been applied to the results of the experiments. Optimization and confirmation experiments have been made sequentially to find factor settings that maximize lithium extraction with minimal variation. The mean of the maximum extraction has been observed as 83.81% with a standard deviation of 4.89, and the 95% prediction interval for the mean extraction is (73.729, 94.730). This result is in agreement with studies in the literature. However, this study is unique in the sense that lithium is extracted from boron clays by using limestone directly from nature and gypsum as a waste product of boric acid production. Since these two materials add about 20% cost to the extraction process, the results of this study become important. Moreover, this study has shown that statistical design of experiments helps the mining industry reduce the need for standardization.
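A small sketch of the signal-to-noise ratio transformation applied to replicated extraction data: since extraction percentage is to be maximized, the Taguchi larger-the-better form is the natural choice. The replicate values below are invented for illustration:

```python
import math

def sn_larger_the_better(reps):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    -10*log10(mean of 1/y^2) over the replicates, so settings with
    a high and stable extraction score highest."""
    n = len(reps)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in reps) / n)

# Hypothetical extraction percentages from three replicates
print(sn_larger_the_better([83.0, 85.1, 81.9]))  # high and stable
print(sn_larger_the_better([60.0, 85.0, 40.0]))  # lower and noisy
```

Maximizing this ratio across factor settings is what drives the "maximum extraction with minimal variation" objective stated in the abstract.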
APA, Harvard, Vancouver, ISO, and other styles
37

Oblad, Richard Vernon. "Application of Mixture Design Response Surface Methodology for Combination Chemotherapy in PC-3 Human Prostate Cancer Cells." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7351.

Full text
Abstract:
Combining chemotherapeutics to treat malignant tumors has been shown to be effective in preventing drug resistance, tumor recurrence, and reducing tumor size. We modeled combination drug therapy in PC-3 human prostate cancer cells using mixture design response surface methodology (MDRSM), a statistical technique designed to optimize compositions that we applied in a novel manner to design combinations of chemotherapeutics. Conventional chemotherapeutics (mitoxantrone, cabazitaxel, and docetaxel) and natural bioactive compounds (resveratrol, piperlongumine, and flavopiridol) were used in twelve different combinations containing three drugs at varying concentrations. Cell viability and cell cycle data were collected and used to plot response surfaces in MDRSM that identified the most effective concentrations of each drug in combination. MDRSM allows for extrapolation of data from three or more compounds in variable-ratio combinations, unlike the Chou-Talalay method. MDRSM combinations were compared with combination index data from the Chou-Talalay method and were found to coincide. We propose MDRSM as an effective tool in devising combination treatments that can improve treatment effectiveness and increase treatment personalization, because MDRSM measures effectiveness rather than synergism, potentiation, or antagonism.
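For illustration, a {3, 2} simplex lattice is one classical way to place mixture-design points over three components whose proportions sum to one; this is a generic sketch, not a reproduction of the study's twelve drug combinations:

```python
from itertools import combinations_with_replacement
from fractions import Fraction

def simplex_lattice(q=3, m=2):
    """{q, m} simplex-lattice mixture design: all blends whose
    component proportions are multiples of 1/m and sum to 1."""
    pts = set()
    for combo in combinations_with_replacement(range(q), m):
        pt = [0] * q
        for i in combo:
            pt[i] += 1
        pts.add(tuple(Fraction(c, m) for c in pt))
    return sorted(pts)

for p in simplex_lattice(3, 2):
    print([str(x) for x in p])  # three pure blends plus three 50/50 binaries
```

Fitting a response surface (e.g. cell viability) over such constrained compositions is what distinguishes mixture-design RSM from the fixed-ratio comparisons of the Chou-Talalay method.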
APA, Harvard, Vancouver, ISO, and other styles
38

Hernandez, Enriquez Aurora. "Simulation-based process design and integration for retrofit." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/simulationbased-process-design-and-integration-for-retrofit(90c6bcf4-6421-4731-8f82-1e839478daab).html.

Full text
Abstract:
This research proposes a novel Retrofit Design Approach based on process simulation and the Response Surface Methodology (RSM). The Retrofit Design Approach comprises: 1) a diagnosis stage, in which the variables are screened and promising variables to improve system performance are identified through a sensitivity analysis; 2) an evaluation stage, in which RSM is applied to assess the impact of those promising variables and the most important factors are determined by building a reduced model from the process response behaviour; and 3) an optimisation stage, to identify optimal conditions and performance of the system, subject to the objective function and model constraints. All these stages are simulation-supported. The main advantages of the proposed Retrofit Design Approach using RSM are that the design method is able to handle a large industrial-scale design problem within a reasonable computational effort, to obtain valuable conceptual insights into the design interactions and economic trade-offs existing in the system, and to systematically identify cost-effective solutions by optimising the reduced model based on the most important factors. This simplifies the pathway to pseudo-optimal solutions and, simultaneously, to understanding the techno-economic and system-wide impacts of key design variables and parameters. In order to demonstrate the applicability and robustness of the proposed design method, the Retrofit Design Approach has been applied to two case studies based on existing gas processing plants. Steady-state process simulation using Aspen Plus™ has been carried out, and the simulation results agree well with the plant data. Reduced models for both case studies have been obtained to represent the techno-economic behaviour of the plants.
Both continuous and discrete design options are considered in the retrofitting of the plant, and the results showed that the Retrofit Design Approach is effective in providing reliable, cost-effective retrofit solutions which yield improvements in the studied processes, not only economically (i.e. cost and product recovery) but also environmentally (i.e. CO₂ emissions and energy efficiency). The main retrofitting solutions identified are, for the first case, column pressure change, pump-around arrangement, and additional turbo-expansion capacity, while for the second case, column pressure changes, tray efficiency, HEN retrofit arrangements (re-piping), and onsite utility generation schemes are considered. These promising sets of retrofit design options were further investigated to reflect the implications of capital investment for the retrofit scenarios, and this portfolio of opportunities can be very useful for supporting decision-making procedures in practice. It is important to note that in some cases a cost-effective retrofit does not require structural modifications. In conclusion, the proposed Retrofit Design Approach has been found to be a reliable approach to the retrofit problem in the context of industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
39

Cho, Gyoungil. "Design, fabrication, and testing of a variable focusing micromirror array lens." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/2186.

Full text
Abstract:
A reflective type Fresnel lens using an array of micromirrors is designed and fabricated using the MUMPs surface micromachining process. The focal length of the lens can be rapidly changed by controlling both the rotation and translation of electrostatically actuated micromirrors. The suspension spring, pedestal, and electrodes are located under the mirror to maximize the optical efficiency. The micromirror translation and rotation are plotted versus the applied voltage. Relations are provided for the fill-factor and the numerical aperture as functions of the lens diameter, the mirror size, and the tolerances specified by the MUMPs design rules. Linnik interferometry is used to measure the translation, rotation, and flatness of a fabricated micromirror. The reflective type Fresnel lens is controlled by independent DC voltages of 16 channels with a 0 to 50 V range, and translational and torsional stiffness are calibrated with measured data. The spot diameter of the point source produced by the fabricated and electrostatically controlled reflective type Fresnel lens is measured to test the focusing quality of the lens.
APA, Harvard, Vancouver, ISO, and other styles
40

Gupta, Abhishek. "Robust design using sequential computer experiments." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/492.

Full text
Abstract:
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over running physical experiments: it is cost-effective, it is easy to try out different design alternatives, and it can have greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to calculate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective in dealing with deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (i.e., the Kriging model) for analyzing deterministic computer simulation-based experiments. Subsequently, we devise a sequential strategy which allows us to explore the whole response surface in an efficient way. The overall number of computer experiments is remarkably reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
APA, Harvard, Vancouver, ISO, and other styles
41

Ozol-Godfrey, Ayca. "Understanding Scaled Prediction Variance Using Graphical Methods for Model Robustness, Measurement Error and Generalized Linear Models for Response Surface Designs." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30185.

Full text
Abstract:
Graphical summaries are becoming important tools for evaluating designs. The need to compare designs in terms of their prediction variance properties advanced this development. A recent graphical tool, the Fraction of Design Space (FDS) plot, is used to calculate the fraction of the design space where the scaled prediction variance (SPV) is less than or equal to a given value. In this dissertation we adapt FDS plots to study three specific design problems: robustness to model assumptions, robustness to measurement error, and design properties for generalized linear models (GLMs). This dissertation presents a graphical method for examining design robustness related to the SPV values using FDS plots by comparing designs across a number of potential models in a pre-specified model space. Scaling the FDS curves by the G-optimal bounds of each model helps compare designs on the same model scale. FDS plots are also adapted for comparing designs under the GLM framework. Since parameter estimates need to be specified, robustness to parameter misspecification is incorporated into the plots. Binomial and Poisson examples are used to study several scenarios. The third section involves a special type of response surface design, mixture experiments, and deals with adapting FDS plots for two types of measurement error which can appear due to inaccurate measurements of the individual mixture component amounts. The last part of the dissertation covers mixture experiments for the GLM case and examines the prediction properties of mixture designs using the adapted FDS plots.
Ph. D.
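The FDS computation this dissertation builds on can be illustrated with a toy case: sample the design space, evaluate the scaled prediction variance N·x'(X'X)⁻¹x at each point, and report the fraction of points at or below a given value. The 2² factorial and first-order model below are illustrative choices, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2^2 factorial design, first-order model with terms (1, x1, x2)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
X = np.column_stack([np.ones(4), design])        # model matrix
XtX_inv = np.linalg.inv(X.T @ X)
N = len(design)

def spv(points):
    """Scaled prediction variance N * x'(X'X)^-1 x at each point."""
    F = np.column_stack([np.ones(len(points)), points])
    return N * np.einsum('ij,jk,ik->i', F, XtX_inv, F)

# Sample the cuboidal design space and build the FDS curve
pts = rng.uniform(-1, 1, size=(20000, 2))
v = np.sort(spv(pts))
fds = np.arange(1, len(v) + 1) / len(v)  # fraction of space with SPV <= v

# e.g. the fraction of the design space with SPV <= 2
frac_le_2 = np.mean(v <= 2.0)
```

For this design the SPV works out to 1 + x1² + x2², so the curve runs from 1 at the centre to the G-value of 3 at the corners; plotting `v` against `fds` gives the FDS plot.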
APA, Harvard, Vancouver, ISO, and other styles
42

Koehring, Andrew. "The application of polynomial response surface and polynomial chaos expansion metamodels within an augmented reality conceptual design environment." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Sabová, Iveta. "Plánovaný experiment." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231981.

Full text
Abstract:
This thesis deals with the possibility of applying the method of Design of Experiments (DoE) to specific data. In the first chapter of the theoretical part, this method is described in detail, including the basic principles and guidelines for designing an experiment. In the next two chapters, factorial designs and response surface designs are described; the latter include the central composite design and the Box-Behnken design. The following chapter contains the practical part, which focuses on modelling the firing range of a ball launched from a catapult using the above three types of experimental design. The models are analysed together with their different characteristics, and they are compared using prediction and confidence intervals and by response optimization. The last part of the thesis comprises an overall evaluation.
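The central composite design named above is generated mechanically from its three building blocks: a 2^k factorial, 2k axial points at distance alpha, and centre runs. A minimal sketch; the factor count, axial distance, and number of centre runs are illustrative choices, not those of the thesis:

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Central composite design in coded units for k factors:
    2^k factorial points, 2k axial points at distance alpha from the
    centre, and n_center centre runs. The default alpha = (2^k)^(1/4)
    gives the rotatable design."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

ccd = central_composite(2, n_center=5)  # 4 + 4 + 5 = 13 runs
```

For two factors the rotatable axial distance is √2; a Box-Behnken design differs in placing its points at the midpoints of the cube edges instead.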
APA, Harvard, Vancouver, ISO, and other styles
44

Gomez, Francisco. "Assessment and Optimization of Ex-Situ Bioremediation of Petroleum Contaminated Soil under Cold Temperature Conditions." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30565.

Full text
Abstract:
Current prices of and demand for petroleum hydrocarbons have generated an increase in oil spills around the country and the world. The health and environmental impacts associated with these organic pollutants are a major concern for the general public, leading the public and private sectors to develop new technologies and methods to minimize or eliminate those risks. Ex-situ bioremediation through biopiles, a main remediation technique for treating a wide range of hydrocarbons, has been a topic of considerable research interest in recent years. It provides an economical and environmental solution to restore the environment to background levels. Nevertheless, successful bioremediation under cold climate conditions is of considerable concern in countries like Canada, as low temperatures can slow the rate of bioremediation of oil hydrocarbons, limiting the operation of soil treatment facilities to certain times of the year. Recent research has found that bioremediation can be conducted even at low temperatures over longer treatment periods, and moreover, that the addition of petroleum-degrading microorganisms (bioaugmentation) and nutrients or biosurfactants (biostimulation) can enhance the process in some cases. In the present study, a comprehensive assessment of bioaugmentation and biostimulation strategies for ex-situ bioremediation of petroleum-contaminated soil under cold climate conditions is proposed. Field-scale biopiles were constructed and subjected to different concentrations of commercial microbial consortia and mature compost, as bioaugmentation and biostimulation strategies, in a soil treatment facility at Moose Creek, Ontario over a period of 94 days (November 2012 to February 2013). Assessment and comparison of the biodegradation rates of total petroleum hydrocarbons (TPH) and their fractions were investigated.
Furthermore, a response surface methodology (RSM) based on a factorial design was applied to investigate and optimize the effects of the microbial consortia application rate and the amount of compost on TPH removal. Results showed that biopiles inoculated with microbial consortia and amended with a 10:1 soil-to-compost ratio under aerobic conditions performed best, degrading 82% of total petroleum hydrocarbons (TPHs) with a first-order kinetic degradation rate of 0.016 d⁻¹ under cold temperature conditions. The average removal efficiencies for TPHs after 94 days for the control biopiles (no amendments), the microbial-consortia-only treatment, and the compost-only treatment were 48%, 55%, and 52%, respectively. Statistical analyses indicated a significant difference (p < 0.05) within and between the final measurements for TPHs, and a significant difference between the combined-effect treatment and the control biopiles. On the other hand, the modeling and optimization analysis showed that the microbial consortia application rate, the compost amendment, and their interactions have a significant effect on TPH removal, with a coefficient of determination (R²) of 0.88, indicating a high correlation between the observed and predicted values for the model obtained. The optimum concentrations predicted via RSM were 4.1 ml m⁻³ for the microbial consortia application rate and 7% for the compost amendment, giving a maximum predicted TPH removal of 90.7%. This research provides valuable knowledge to practitioners about cost-effective existing strategies for ex-situ bioremediation under cold weather conditions.
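The reported first-order kinetics can be checked directly: with the stated rate of 0.016 d⁻¹ over the 94-day treatment, the model predicts roughly 78% removal, in the neighbourhood of the 82% observed for the best biopile. A minimal sketch; only the rate and duration are taken from the abstract:

```python
import math

def tph_remaining_fraction(k_per_day, days):
    """First-order decay: C(t)/C0 = exp(-k * t)."""
    return math.exp(-k_per_day * days)

k = 0.016  # reported first-order rate, d^-1
t = 94     # days of treatment
removal = 1.0 - tph_remaining_fraction(k, t)  # fraction of TPH degraded, ~0.78
```

The gap between the fitted-rate prediction and the measured 82% is the usual lack-of-fit of a single-rate model applied to a hydrocarbon mixture whose fractions degrade at different speeds.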
APA, Harvard, Vancouver, ISO, and other styles
45

Tang, Yue tang. "Non-Integer Root Transformations for Preprocessing Nano-Electrospray Ionization High Resolution Mass Spectra for the Classification of Cannabis." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1534221170873289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ding, Yuanhao. "Statistical Analysis and Optimization of Ammonia Removal from Aqueous Solution by Zeolite and Ion-exchange Resin." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32194.

Full text
Abstract:
The ability of natural zeolite and synthetic ion-exchange resin to remove ammonia from aqueous solution was studied through batch experiments. The results showed that both zeolite and ion-exchange resin were effective (up to 87% removal) in eliminating ammonia from aqueous solution. Factorial design and response surface methodology were applied to evaluate and optimize the effects of pH, dose, contact time, temperature, and initial ammonia concentration. Low-pH conditions were preferred, with the optimum pH found to be 6 for both zeolite and ion-exchange resin. A high dose generated a high removal rate but a low exchange capacity. Results of the factorial design and response surface methodology showed that temperature was not a significant parameter. The model prediction was in good agreement with the observed data (R² = 0.969 for zeolite and R² = 0.957 for resin, respectively). For zeolite, the optimum Qe was 22.90 mg/g, achieved at pH 7 and an initial ammonia concentration of 3000 mg/L. For ion-exchange resin, a Qe of 28.78 mg/g was achieved at pH 6 and an initial TAN concentration of 3000 mg/L. The reaction kinetics of both materials followed the pseudo-second-order kinetic model (R² = 0.998 and R² = 0.999, respectively). Equilibrium data were fitted to the Langmuir and Freundlich isotherm models, with the Freundlich model providing a slightly better prediction for zeolite (R² = 0.992) and the Langmuir model a more accurate prediction for ion-exchange resin (R² = 0.996). The ion-exchange resin can be completely regenerated with 2N H2SO4.
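A Langmuir fit of the kind reported above is conventionally obtained from the linearised form Ce/qe = Ce/qmax + 1/(qmax·KL). A sketch on synthetic equilibrium data; the parameter values are illustrative, not the thesis's estimates:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm qe = qmax*KL*Ce / (1 + KL*Ce) through
    its linearised form Ce/qe = Ce/qmax + 1/(qmax*KL)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope          # mg/g, monolayer capacity
    KL = slope / intercept      # L/mg, affinity constant
    return qmax, KL

# Synthetic equilibrium data generated from qmax = 30 mg/g, KL = 0.01 L/mg
Ce = np.array([100.0, 300.0, 600.0, 1000.0, 2000.0, 3000.0])  # mg/L
qe = 30.0 * 0.01 * Ce / (1.0 + 0.01 * Ce)                     # mg/g
qmax, KL = fit_langmuir(Ce, qe)  # recovers 30 and 0.01
```

On noiseless data the linearisation recovers the generating parameters exactly; a Freundlich fit would instead regress log(qe) on log(Ce).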
APA, Harvard, Vancouver, ISO, and other styles
47

Liang, Li. "Graphical Tools, Incorporating Cost and Optimizing Central Composite Designs for Split-Plot Response Surface Methodology Experiments." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/26768.

Full text
Abstract:
In many industrial experiments, completely randomized designs (CRDs) are impractical due to restrictions on randomization or the existence of one or more hard-to-change factors. In these situations, split-plot experiments are more realistic. The two separate randomizations in split-plot experiments lead to a different error structure from that in CRDs, and this affects not only response modeling but also the choice of design. In this dissertation, two graphical tools, three-dimensional variance dispersion graphs (3-D VDGs) and fraction of design space (FDS) plots, are adapted for split-plot designs (SPDs). They are used for examining and comparing different variations of central composite designs (CCDs) with standard, V- and G-optimal factorial levels. The graphical tools are shown to be informative for evaluating and developing strategies for improving the prediction performance of SPDs. The overall cost of a SPD involves two types of experimental units, and often an individual whole plot is more expensive than an individual subplot and measurement. Therefore, considering only the total number of observations is likely not the best way to reflect the cost of a split-plot experiment. In this dissertation, a cost formulation involving the weighted sum of the number of whole plots and the total number of observations is discussed, and three cost-adjusted optimality criteria are proposed. The effects of considering different cost scenarios on the choice of design are shown in two examples. Often in practice it is difficult for the experimenter to select only one criterion by which to find the optimal design. A realistic strategy is to select a design with good balance across multiple estimation and prediction criteria. Variations of the CCDs with the best cost-adjusted performance for estimation and prediction are studied for the combination of the D-, G- and V-optimality criteria and for each individual criterion.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Dinegdae, Yared Hailegiorgis. "Reliability-based Design Procedure for Flexible Pavements." Licentiate thesis, KTH, Byggvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-165280.

Full text
Abstract:
Load-induced top-down fatigue cracking has recently been recognized as a major distress phenomenon in asphalt pavements. This failure mode has been observed in many parts of the world, and in some regions it was found to be more prevalent and a primary cause of pavement failure. The main factors identified as potential causes of top-down fatigue cracking are primarily linked to age hardening, mixture fracture resistance, and unbound layer stiffness. Mechanistic-empirical analytical models based on hot-mix asphalt fracture mechanics (HMA-FM), which can predict crack initiation time and propagation rate, have been developed and have shown their capacity to deliver acceptable predictions. However, in these methods the effects of age hardening and healing are not properly accounted for, and the models do not consider the influence of mixture morphology on long-term pavement performance. Another drawback of these models is that, as analysis tools, they are not suitable for pavement design purposes. The main objective of this study is to develop a reliability-calibrated design framework in the load and resistance factor design (LRFD) format which can be implemented to design pavement sections against top-down fatigue cracking. For this purpose, asphalt mixture morphology-based sub-models were developed and incorporated into HMA-FM to characterize the effect of aging and degradation on fracture resistance and healing potential. These sub-models were developed empirically, exploiting the observed relation that exists between mixture morphology and fracture resistance. The developed crack initiation prediction model was calibrated and validated using pavement sections that have high-quality laboratory data and observed field performance histories. As traffic volume was identified as having a dominant influence on predicted performance, two separate model calibration and validation studies were undertaken based on expected traffic volume.
The prediction results for both model calibration and validation were found to be in excellent agreement with the observed performance in the field. An LRFD-based design framework is suggested that can be implemented to optimize pavement sections against top-down fatigue cracking. To achieve this objective, pavement sections with various design target reliabilities and functional requirements were analyzed and studied. A simplified but efficient limit state equation was generated using a central composite design (CCD) based response surface methodology, and FORM-based reliability analysis was implemented to compute reliabilities and formulate the associated partial safety factors. A design example using the new partial safety factors clearly illustrates the potential of the new method, which could be used to supplement existing design procedures.
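For a linear limit state with independent normal variables, the FORM analysis referenced above reduces to a closed-form reliability index, which is the usual sanity check before tackling a response-surface limit state. A minimal sketch; the limit state g = R − S and all moments are illustrative, not the pavement model's:

```python
import math

def form_beta_linear(mu_R, sd_R, mu_S, sd_S):
    """Reliability index for the linear limit state g = R - S with
    independent normal resistance R and load S; for a linear g the
    FORM result is exact: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2)."""
    return (mu_R - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)

def failure_probability(beta):
    """P_f = Phi(-beta), written with the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

beta = form_beta_linear(mu_R=10.0, sd_R=1.5, mu_S=5.0, sd_S=2.0)  # 2.0
pf = failure_probability(beta)                                     # ~0.023
```

For the nonlinear limit state produced by a CCD-based response surface, beta is instead found iteratively (e.g. the Hasofer-Lind / Rackwitz-Fiessler algorithm), and partial safety factors follow from the design point that iteration locates.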


APA, Harvard, Vancouver, ISO, and other styles
49

Holec, Tomáš. "Plánovaný experiment." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-254455.

Full text
Abstract:
In this thesis, the design of experiments is studied. First, the theoretical background in mathematical statistics necessary for understanding it is built up (chapter 2). The design of experiments is then presented in chapters 3 and 4. Chapter 3 is divided into several subchapters, providing a brief history as well as a comprehensive theoretical description (basic principles, steps for planning, etc.). Chapter 4 deals with particular types of designed experiments (factorial experiments and response surface designs). Simple examples are provided to illustrate the theory in chapters 3 and 4. The last part of the thesis is strictly practical and focuses on applying the theory to particular data sets and evaluating the results (chapter 5).
APA, Harvard, Vancouver, ISO, and other styles
50

Rotelli, Matthew D. "Neural networks as a tool for statistical modeling." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151625/.

Full text
APA, Harvard, Vancouver, ISO, and other styles