Dissertations / Theses on the topic 'Sequential design of experiments'

Consult the top 50 dissertations / theses for your research on the topic 'Sequential design of experiments.'

1

Gupta, Abhishek. "Robust design using sequential computer experiments." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/492.

Abstract:
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over physical experiments: it is cost effective, it makes it easy to try out different design alternatives, and it can have a greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to evaluate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective for deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (the kriging model) to analyze deterministic computer simulation-based experiments. We then devise a sequential strategy that allows us to explore the whole response surface efficiently. The overall number of computer experiments is markedly reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
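The core loop described here (fit a kriging model to a few deterministic runs, then sequentially add the point where the surrogate is most uncertain) can be sketched compactly. The following is a minimal illustration with a fixed Gaussian kernel and a toy one-dimensional function standing in for the FEA code; it is not the thesis's implementation, and every name in it is hypothetical.

```python
import numpy as np

# A minimal kriging (Gaussian-process) sketch: fit to a few deterministic
# runs, then sequentially add the candidate with the largest predictive
# variance. The kernel length-scale is fixed and f is a toy stand-in for
# an expensive FEA code; both are assumptions for illustration.

def kernel(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def krige(X, y, Xnew):
    K = kernel(X, X) + 1e-10 * np.eye(len(X))       # jitter for stability
    Ks = kernel(Xnew, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

f = lambda x: np.sin(6 * x) + 0.5 * x               # deterministic "simulator"
X = np.array([0.1, 0.5, 0.9])                       # small initial design
y = f(X)
cand = np.linspace(0.0, 1.0, 201)                   # candidate grid
for _ in range(5):                                  # sequential augmentation
    _, v = krige(X, y, cand)
    xn = cand[np.argmax(v)]                         # most uncertain candidate
    X, y = np.append(X, xn), np.append(y, f(xn))
print(np.sort(X))                                   # the design spreads out
```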
2

Lewi, Jeremy. "Sequential optimal design of neurophysiology experiments." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28201.

Abstract:
Thesis (M. S.)--Biomedical Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Butera, Robert; Committee Co-Chair: Paninski, Liam; Committee Member: Isbell, Charles; Committee Member: Rozell, Chris; Committee Member: Stanley, Garrett; Committee Member: Vidakovic, Brani.
3

Koita, Rizwan R. (Rizwan Rahim). "Strategies for sequential design of experiments." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35998.

4

Wang, Hungjen 1971. "Sequential optimization through adaptive design of experiments." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39332.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2007.
Includes bibliographical references (p. 111-118).
This thesis considers the problem of achieving better system performance through adaptive experiments. For the case of a discrete design space, I propose an adaptive One-Factor-at-A-Time (OFAT) experimental design, study its properties, and compare its performance to saturated fractional factorial designs. The rationale for adopting the adaptive OFAT scheme becomes clear when it is embedded in a Bayesian framework: OFAT is an efficient response to the step-by-step accrual of sample information. I also derive the Bayesian predictive distribution for the outcome of implementing OFAT, and its principal moments, when a natural conjugate prior is assigned to the parameters that are not known with certainty. For the case of a compact design space, I expand the treatment of OFAT by removing two restrictions imposed in the discrete case. The first is that the selection of the input level at each iteration depends only on the best observed response and not on other prior information. In most real cases, domain experts possess knowledge about the process being modeled that, ideally, should be treated as sample information in its own right, not simply ignored. Treating the design problem in a Bayesian fashion provides a logical scheme for incorporating expert information. The second removed restriction is that the model be linear with pair-wise interactions, which implies a relatively small design space. I extend the Bayesian analysis to the generalized normal linear regression model on the compact design space. Using the concepts of c-optimum experimental design and Bayesian estimation, I propose an algorithm for reaching the optimum through a sequence of experiments. I prove that the proposed algorithm generates a consistent Bayesian estimator in the limit. I also derive the expected step-wise improvement achieved by the algorithm, for the analysis of its intermediate behavior; this is a critical criterion for deciding whether to continue the experiments.
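A bare-bones version of the adaptive OFAT idea is easy to sketch: start from a random run, toggle one factor at a time, and keep each toggle only if the observed response improves. The snippet below is a hedged illustration over five hypothetical two-level factors with a made-up noisy response, not Wang's Bayesian treatment.

```python
import numpy as np

# A bare-bones adaptive OFAT sketch over five hypothetical two-level factors.
# Start from a random run, toggle one factor at a time, and keep a toggle
# only if the observed (noisy) response improves. The response function is
# made up for illustration; it is not the thesis's model.

rng = np.random.default_rng(1)

def response(x):                            # hypothetical noisy experiment
    signal = 2 * x[0] + x[1] - 1.5 * x[2] + x[0] * x[3]
    return signal + rng.normal(0.0, 0.5)

x = rng.integers(0, 2, size=5)              # random starting run
best = response(x)
for j in range(len(x)):                     # change one factor at a time
    trial = x.copy()
    trial[j] ^= 1                           # toggle factor j
    y = response(trial)
    if y > best:                            # keep the change only if it helps
        x, best = trial, y
print(x, best)
```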
5

Lehman, Jeffrey S. "Sequential Design of Computer Experiments for Robust Parameter Design." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1027963706.

6

Lehman, Jeffrey Scott. "Sequential design of computer experiments for robust parameter design." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486463321623652.

7

Yu, Xiaoli. "Sequential ED-design for binary dose-response experiments." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63447.

Abstract:
Dose-response experiments and subsequent data analyses are often carried out according to optimal designs for the purpose of accurately determining a specific effective dose (ED) level. If the interest is the dose-response relationship over a range of ED levels, many existing optimal designs are not accurate. In this dissertation, we propose a new design procedure, called the two-stage sequential ED-design, which directly and simultaneously targets several ED levels. We use a small number of trials to provide a tentative estimate of the model parameters. The doses of the subsequent trials are then selected sequentially, based on the latest model information, to maximize the efficiency of the ED estimation over several ED levels. Although the commonly used logistic and probit models are convenient summaries of the dose-response relationship, they can be too restrictive. We introduce and study a more flexible, albeit slightly more complex, three-parameter logistic dose-response model. We explore the effectiveness of the sequential ED-design and the D-optimal design under this model, and develop an effective model-fitting strategy. We develop a two-step iterative algorithm to compute the maximum likelihood estimate of the model parameters. We prove that each iteration of the algorithm increases the likelihood, and therefore leads to at least a local maximum of the likelihood function. We also study the numerical solution to the D-optimal design for the three-parameter logistic model. Interestingly, all our numerical solutions to the D-optimal design are distributions supported on three points. We also discuss the use of the ED-design when experimental subjects become available in groups. We introduce the group sequential ED-design, and demonstrate how to construct it. The ED-design extends naturally to more complex models and can satisfy a broad range of demands that arise in applications.
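To make the ED terminology concrete, the sketch below computes several ED levels by root-finding on a logistic curve with three parameters (slope, location, upper asymptote). This parameterization is an assumption for illustration only; the thesis's three-parameter logistic model may be parameterized differently.

```python
import numpy as np
from scipy.optimize import brentq

# Computing several effective dose (ED) levels by root-finding on a logistic
# curve with slope b, location m, and upper asymptote c. This three-parameter
# form is assumed for illustration; the thesis's model may differ.

def p(x, b=1.5, m=0.0, c=0.95):
    return c / (1.0 + np.exp(-b * (x - m)))    # response probability at dose x

def ed(q, lo=-10.0, hi=10.0):
    return brentq(lambda x: p(x) - q, lo, hi)  # dose x with p(x) = q

for q in (0.1, 0.3, 0.5):                      # several ED levels at once
    print(f"ED{int(100 * q)} = {ed(q):.3f}")
```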
8

Li, Ling. "Sequential Design of Experiments to Estimate a Probability of Failure." PhD thesis, Supélec, 2012. http://tel.archives-ouvertes.fr/tel-00765457.

Abstract:
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. Indeed, estimating a small probability of failure from very few simulations, as required in some complex industrial problems, is a particularly difficult task. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires few computing resources. With such a surrogate model, two operations can be carried out. The first consists in choosing a number of simulations, as small as possible, to learn the regions of the system's parameter space that lead to failure. The second is constructing good estimators of the probability of failure. The contributions of this thesis are twofold. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian decision-theoretic formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that takes the best from the Subset Simulation algorithm and from sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results on several benchmark examples in reliability analysis, and the proposed methods perform well compared to methods from the literature.
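The two-step pattern the abstract describes (learn a cheap surrogate from a small simulation budget, then Monte Carlo the surrogate to estimate the failure probability) can be illustrated in miniature. Below, a quadratic fit stands in for the Gaussian-process surrogate and a simple limit state g(x) < 0 defines failure; everything here is a hypothetical stand-in, not the thesis's SUR or Bayesian Subset Simulation machinery.

```python
import numpy as np

# Surrogate-based failure probability estimation in miniature: fit a cheap
# surrogate on a small simulation budget, then run Monte Carlo on the
# surrogate. Failure is g(x) < 0; a quadratic polynomial stands in for the
# Gaussian-process surrogate, and g is a hypothetical limit state.

rng = np.random.default_rng(0)
g = lambda x: 3.0 - x ** 2                    # "expensive" model (stand-in)

Xd = np.linspace(-3.0, 3.0, 7)                # small simulation budget
ghat = np.poly1d(np.polyfit(Xd, g(Xd), 2))    # surrogate fit

x_mc = rng.normal(0.0, 1.2, size=200_000)     # cheap Monte Carlo on surrogate
p_fail = np.mean(ghat(x_mc) < 0.0)
print(f"estimated P(failure) ~ {p_fail:.4f}")
```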
9

Williams, Brian J. "Sequential design of computer experiments to minimize integrated response functions /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488203158826046.

10

Roy, Soma. "Sequential-Adaptive Design of Computer Experiments for the Estimation of Percentiles." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218032995.

11

Seyedamin, Arvand. "FINDING IMPORTANT FACTORS IN AN EFFECTS-BASED PLAN USING SEQUENTIAL BIFURCATION." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101212.

Abstract:
After the pilot phase of a simulation study, if the model contains many factors, direct experimentation may need too much computer processing time. The purpose of screening simulation experiments is therefore to eliminate negligible or unimportant factors of a simulation model, in order to concentrate effort on a short list of important factors. The sequential bifurcation procedure developed by Bettonvil and Kleijnen [3] is an efficient and effective screening method for this purpose. In this study, the sequential bifurcation method is used to determine the important factors of a simulation-based decision support model, designed by the Swedish Defense Research Agency (FOI), for testing operational plans. Using this simulation model, a decision maker can test a number of feasible plans against possible courses of events. The sequential bifurcation procedure was applied and ranked the most important factors in this simulation model by their relative importance.
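Sequential bifurcation treats groups of factors as a unit: if a group's aggregate effect is negligible, all of its factors are discarded at once; otherwise the group is split and each half is tested. A stylized sketch, assuming non-negative effects and an additive response (the standard assumptions of the method, with a made-up truth in place of FOI's model):

```python
# A stylized sequential bifurcation sketch, assuming non-negative factor
# effects and a response that is the sum of the effects of the factors
# switched on. The "hidden" effects are made up for illustration.

effects = [0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 2.5, 0.0]   # unknown in practice

def y(on):                                  # one simulation run
    return sum(effects[i] for i in on)

def bifurcate(group, threshold=1.0):
    """Recursively split a group while its aggregate effect matters."""
    if y(group) - y([]) <= threshold:
        return []                           # whole group negligible
    if len(group) == 1:
        return group                        # important factor isolated
    mid = len(group) // 2
    return (bifurcate(group[:mid], threshold)
            + bifurcate(group[mid:], threshold))

print(bifurcate(list(range(len(effects)))))  # finds factors 2 and 6
```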
12

Vastola, Justin Timothy. "Sequential experimental design under competing prior knowledge." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47724.

Abstract:
This research focuses on developing a comprehensive framework for designing and modeling experiments in the presence of multiple sources of competing prior knowledge. In particular, methodology is proposed for process optimization in high-cost, low-resource experimental settings where the underlying response function can be highly non-linear. In the first part of this research, an initial experimental design criterion is proposed for optimization problems by combining multiple, potentially competing, sources of prior information: engineering models, expert opinion, and data from past experimentation on similar, non-identical systems. New methodology is provided for incorporating and combining conjectured models and data into both the initial modeling and design stages. The second part of this research focuses on the development of a batch sequential design procedure for optimizing high-cost, low-resource experiments with complicated response surfaces. The success of the proposed approach lies in melding a flexible, sequential design algorithm with a powerful local modeling approach. Batches of experiments are designed sequentially, adapting to balance space-filling properties against the search for the optimal operating condition. Local model calibration and averaging techniques are introduced to easily allow incorporation of statistical models and engineering knowledge, even when such knowledge pertains only to subregions of the complete design space. The overall process iterates between adapting designs, adapting models, and updating engineering knowledge over time. Applications to nanomanufacturing are provided throughout.
13

Kernstine, Kemp H. "Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44799.

Abstract:
The complexities of our surrounding environments are becoming increasingly diverse, more integrated, and continuously more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations, where computational times can surpass real time and are often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections in modeling environments and the number of external noise variables continue to multiply, these SoS simulations can no longer be explored by traditional means without significantly wasting computational resources. This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm is shown to improve the exploration of stochastic simulations over existing methods by increasing global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
14

Hilow, Hisham. "Economic expansible-contractible sequential factorial designs for exploratory experiments." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54426.

Abstract:
Sequential experimentation, especially for factorial treatment structures, becomes important when one or more of the following conditions exist: observations become available quickly, observations are costly to obtain, experimental results need to be evaluated quickly, adjustments in the experimental set-up may be desirable, or a quick screening of the importance of various factors is needed. The designs discussed in this study are suitable for these situations. Two approaches to sequential factorial experimentation are considered: one-run-at-a-time (ORAT) plans and one-block-at-a-time (OBAT) plans. For 2ⁿ experiments, saturated non-orthogonal 2ᵥⁿ fractions to be carried out as ORAT plans are reported. In such ORAT plans, only one factor level is changed between any two successive runs. Such plans are useful and economical in situations where it is costly to change more than one factor level at a time. The estimable effects and the alias structure after each run are provided, and formulas for the estimates of main effects and two-factor interactions are derived; these formulas can be used to assess the significance of the estimates. For 3ᵐ and 2ⁿ3ᵐ experiments, Webb's (1965) saturated non-orthogonal expansible-contractible <0, 1, 2> 2ᵥⁿ designs are generalized, and new saturated non-orthogonal expansible-contractible 3ᵥᵐ and 2ⁿ3ᵥᵐ designs are reported. Based on these 2ᵥⁿ, 3ᵥᵐ and 2ⁿ3ᵥᵐ designs, new OBAT 2ᵥⁿ, 3ᵥᵐ and 2ⁿ3ᵥᵐ plans are reported which eventually lead to the estimation of all main effects and all two-factor interactions. The OBAT 2ⁿ, 3ᵐ and 2ⁿ3ᵐ plans are constructed according to two strategies: Strategy I OBAT plans are carried out in blocks of very small size, i.e., 2 or 3, and factor effects are estimated one at a time, whereas Strategy II OBAT plans involve larger block sizes, where factors are assumed to fall into disjoint sets and each block investigates the effects of the factors of a particular set. Strategy I OBAT plans are appropriate when severe time trends in the response may be present. Formulas for estimates of main effects and two-factor interactions at the various stages of Strategy I OBAT 2ⁿ, 3ᵐ and 2ⁿ3ᵐ plans are reported.
15

Kumar, Arun. "Sequential Calibration Of Computer Models." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218568898.

16

Lin, Yao. "An Efficient Robust Concept Exploration Method and Sequential Exploratory Experimental Design." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4799.

Abstract:
Experimentation and approximation are essential for efficiency and effectiveness in concurrent engineering analyses of large-scale complex systems. The approximation-based design strategy is not fully utilized in industrial applications, in which designers have to deal with multi-disciplinary, multi-variable, multi-response, and multi-objective analyses using very complicated and expensive-to-run computer analysis codes or physical experiments. With current experimental design and metamodeling techniques, it is difficult for engineers to develop acceptable metamodels for irregular responses and to achieve good design solutions in large design spaces at low cost. To circumvent this problem, engineers tend either to adopt low-fidelity simulations or models, with which important response properties may be lost, or to restrict the study to very small design spaces. Information from expensive physical or computer experiments is often used as a validation in late design stages instead of as an analysis tool in early-stage design. This increases the likelihood of expensive re-design processes and longer time-to-market. In this dissertation, two methods, the Sequential Exploratory Experimental Design (SEED) and the Efficient Robust Concept Exploration Method (E-RCEM), are developed to address these problems. The SEED and E-RCEM methods help develop acceptable metamodels for irregular responses with expensive experiments and achieve satisficing design solutions in large design spaces with limited computational or monetary resources. It is verified that more accurate metamodels are developed and better design solutions are achieved with SEED and E-RCEM than with traditional approximation-based design methods. SEED and E-RCEM facilitate the full utility of the simulation-and-approximation-based design strategy in engineering and scientific applications. Several preliminary approaches for metamodel validation with additional validation points are proposed, after verifying that the widely used method of leave-one-out cross-validation is theoretically inappropriate for testing the accuracy of metamodels. A comparison of the performance of kriging and MARS metamodels is also carried out, and a sequential metamodeling approach is proposed to utilize different types of metamodels along the design timeline. Several single-variable or two-variable examples, and two engineering examples, the design of pressure vessels and the design of unit cells for linear cellular alloys, are used to illustrate these studies.
17

Lam, Chen Quin. "Sequential Adaptive Designs in Computer Experiments for Response Surface Model Fit." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1211911211.

18

Huan, Xun. "Numerical approaches for sequential Bayesian optimal experimental design." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101442.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 175-186).
Experimental data play a crucial role in developing and refining models of physical systems. Some experiments can be more valuable than others, however. Well-chosen experiments can save substantial resources, and hence optimal experimental design (OED) seeks to quantify and maximize the value of experimental data. Common current practice for designing a sequence of experiments uses suboptimal approaches: batch (open-loop) design that chooses all experiments simultaneously with no feedback of information, or greedy (myopic) design that optimally selects the next experiment without accounting for future observations and dynamics. In contrast, sequential optimal experimental design (sOED) is free of these limitations. With the goal of acquiring experimental data that are optimal for model parameter inference, we develop a rigorous Bayesian formulation for OED using an objective that incorporates a measure of information gain. This framework is first demonstrated in a batch design setting, and then extended to sOED using a dynamic programming (DP) formulation. We also develop new numerical tools for sOED to accommodate nonlinear models with continuous (and often unbounded) parameter, design, and observation spaces. Two major techniques are employed to make solution of the DP problem computationally feasible. First, the optimal policy is sought using a one-step lookahead representation combined with approximate value iteration. This approximate dynamic programming method couples backward induction and regression to construct value function approximations. It also iteratively generates trajectories via exploration and exploitation to further improve approximation accuracy in frequently visited regions of the state space. Second, transport maps are used to represent belief states, which reflect the intermediate posteriors within the sequential design process. Transport maps offer a finite-dimensional representation of these generally non-Gaussian random variables, and also enable fast approximate Bayesian inference, which must be performed millions of times under nested combinations of optimization and Monte Carlo sampling. The overall sOED algorithm is demonstrated and verified against analytic solutions on a simple linear-Gaussian model. Its advantages over batch and greedy designs are then shown via a nonlinear application of optimal sequential sensing: inferring contaminant source location from a sensor in a time-dependent convection-diffusion system. Finally, the capability of the algorithm is tested for multidimensional parameter and design spaces in a more complex setting of the source inversion problem.
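The information-gain objective at the heart of this formulation can be illustrated with the standard nested Monte Carlo estimator of expected information gain (EIG) for a toy linear-Gaussian model, used here only to rank single candidate designs. The thesis's sOED machinery (dynamic programming, transport maps) goes far beyond this greedy sketch, and all model choices below are assumptions for illustration.

```python
import numpy as np

# Nested Monte Carlo estimate of expected information gain for a toy model
# y = theta * d + eps, theta ~ N(0, 1), eps ~ N(0, sig^2). Larger EIG means
# a more informative design d. This ranks one experiment at a time only.

rng = np.random.default_rng(0)

def eig(d, n_out=500, n_in=500, sig=0.1):
    th = rng.normal(size=n_out)                       # outer prior draws
    y = th * d + rng.normal(scale=sig, size=n_out)    # simulated observations
    log_lik = -0.5 * ((y - th * d) / sig) ** 2        # log p(y|theta) + const
    th_in = rng.normal(size=(n_in, 1))                # inner draws for evidence
    inner = np.exp(-0.5 * ((y[None, :] - th_in * d) / sig) ** 2)
    log_marg = np.log(inner.mean(axis=0) + 1e-300)    # log p(y) + same const
    return np.mean(log_lik - log_marg)                # constants cancel

for d in (0.1, 0.5, 1.0):
    print(f"design d={d}: EIG ~ {eig(d):.2f}")        # analytic: 0.5*log(1+d^2/sig^2)
```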
19

Marin, Ofelia. "Designing computer experiments to estimate integrated response functions." Columbus, Ohio: Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1135206870.

20

Zhang, Boya. "Computer Experimental Design for Gaussian Process Surrogates." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99886.

Abstract:
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. A surrogate model, or emulator, is therefore often employed as a fast substitute for the simulator. Meanwhile, a common challenge in computer experiments and related fields is to efficiently explore the input space using a small number of samples, i.e., the experimental design problem. This dissertation focuses on the design problem under Gaussian process surrogates. The first work demonstrates empirically that space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. A purely random design is shown to be superior to higher-powered alternatives in many cases. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (one-shot design) and sequential settings. The second contribution is motivated by an agent-based model (ABM) of delta smelt conservation. The ABM was developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. A batch sequential design scheme is proposed, generalizing one-at-a-time variance-based active learning, as a means of keeping multi-core cluster nodes fully engaged with expensive runs. The acquisition strategy is carefully engineered to favor selection of replicates, which boost statistical and computational efficiencies. Design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream high-fidelity input sensitivity analysis.
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. Thus a statistical model built upon input-output observations, a so-called surrogate model or emulator, is needed as a fast substitute for the simulator. Design of experiments, i.e., how to select samples from the input space under budget constraints, is also worth studying. This dissertation focuses on the design problem under Gaussian process (GP) surrogates. The first work demonstrates empirically that commonly used space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static settings (design points allocated in one shot) and sequential settings (data sampled sequentially). The second contribution is motivated by a stochastic computer simulator of delta smelt conservation. This simulator was developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. An innovative batch sequential design method is proposed, generalizing one-at-a-time sequential design to a one-batch-at-a-time scheme with the goal of parallel computing. The criterion for subsequent data acquisition is carefully engineered to favor selection of replicates, which boost statistical and computational efficiencies. The design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream input sensitivity analysis.
21

Frazier, Marian L. "Adaptive Design for Global Fit of Non-stationary Surfaces." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1373284230.

22

So, Yiu-ching Abby. "Sequential uniform design and its application to quality improvement in the manufacture of smartcards." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://sunzi.lib.hku.hk/hkuto/record/B35772025.

23

So, Yiu-ching Abby, and 蘇耀正. "Sequential uniform design and its application to quality improvement in the manufacture of smartcards." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B35772025.

24

Nixon, Janel Nicole. "A Systematic Process for Adaptive Concept Exploration." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/13952.

Abstract:
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means for obtaining such information, quantitative analyses become much more practical for decision-making in the very early stages of design, where traditionally they are viewed as too expensive and cumbersome for concept evaluation. The method developed to address this need is a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion: as data are acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data are used to make inferences about the nature of the problem, so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources. In comparison to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources. For this reason, the SPACE method acts as an enabler for decision making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. The approach thus enables concept selection to be based on parametric, quantitative data, so that informed, unbiased decisions can be made.
25

García Martín, Rafael Adrián, and José Manuel Gaspar Sánchez. "Screening for important factors in large-scale simulation models: some industrial experiments." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11484.

Abstract:
This project discusses the application of screening techniques to large-scale simulation models, with the purpose of determining whether such procedures could substitute for, or complement, simulation-based optimization for bottleneck identification and improvement. Based on sensitivity analysis, screening techniques aim to find the most important factors in simulation models that contain many factors, of which presumably only a few are important. The screening technique studied in this project is sequential bifurcation. This method groups the potentially important factors and then repeatedly splits the groups, depending on the response generated from the model of the system under study. The results confirm that sequential bifurcation can considerably reduce the simulation time, because the number of simulations needed decreased compared with the optimization study. Furthermore, by introducing two-factor interactions into the metamodel, the results become more accurate and may even match the results from optimization. On the other hand, it was found that sequential bifurcation can have accuracy problems when there are many storage buffers in the list of decision variables. For these reasons, screening techniques cannot be a complete alternative to simulation-based optimization. However, as some initial results show, the combination of the two methods could yield a promising roadmap for future research.
26

Huang, Deng. "Experimental planning and sequential kriging optimization using variable fidelity data." Thesis (Ph. D.), Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1110297243.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 120 p.; also includes graphics (some col.). Includes bibliographical references (p. 114-120). Available online via OhioLINK's ETD Center
27

Le, Gratiet Loic. "Multi-fidelity Gaussian process regression for computer experiments." Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00866770.

Abstract:
This work is on Gaussian-process-based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging-based method is proposed. In particular, this formulation allows for fast implementation and for closed-form expressions of the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it allows for the practical application of such a method in real cases. Furthermore, fast cross-validation, sequential experimental design, and sensitivity analysis methods are extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e., the decay rate of the mean squared error) on the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process-based metamodels with stationary covariance functions) is obtained, whereas previous proofs hold only for degenerate kernels (i.e., when the process is in fact finite-dimensional). This result makes it possible to address rigorously practical questions such as the optimal allocation of the budget between different levels of code in the multi-fidelity framework.
28

Chen, Xi, and Qiang Zhou. "Sequential design strategies for mean response surface metamodeling via stochastic kriging with adaptive exploration and exploitation." Elsevier Science BV, 2017. http://hdl.handle.net/10150/626021.

Abstract:
Stochastic kriging (SK) methodology has been known as an effective metamodeling tool for approximating a mean response surface implied by a stochastic simulation. In this paper we provide some theoretical results on the predictive performance of SK, in light of which novel integrated mean squared error-based sequential design strategies are proposed for applying SK to mean response surface metamodeling with a fixed simulation budget. Through numerical examples with different features, we show that SK with the proposed strategies holds great promise for achieving high predictive accuracy by striking a good balance between exploration and exploitation.
29

Zhang, Dan. "Design of Statistically and Energy Efficient Accelerated Life Tests." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/320992.

Abstract:
Because of the need to produce highly reliable products and to reduce product development time, Accelerated Life Testing (ALT) has been widely used in new product development as an alternative to traditional testing methods. The basic idea of ALT is to expose a limited number of test units of a product to harsher-than-normal operating conditions to expedite failures. Based on the failure time data collected in a short time period, an ALT model incorporating the underlying failure time distribution and life-stress relationship can be developed to predict product reliability under normal operating conditions. However, ALT experiments often consume a significant amount of energy, due to the harsher-than-normal operating conditions created and controlled by the test equipment; this challenge may obstruct successful implementations of ALT in practice. In this dissertation, a new ALT design methodology is developed to improve reliability estimation precision and the efficiency of energy utilization in ALT. This methodology involves two types of ALT design procedures: a sequential optimization approach, and a simultaneous optimization alternative with a fully integrated double-loop design architecture. Using the sequential optimum ALT design procedure, the statistical estimation precision of the ALT experiment is improved first, followed by energy minimization through optimum design of the controller for the test equipment. Alternatively, the statistical estimation precision and energy consumption of an ALT plan can be optimized simultaneously by solving a multi-objective optimization problem using a controlled elitist genetic algorithm. With either method, the resulting statistically and energy-efficient ALT plan depends not only on the reliability of the product to be evaluated but also on the physical characteristics of the test equipment and its controller. In particular, the statistical efficiency of each candidate ALT plan needs to be evaluated, and the corresponding controller capable of providing the required stress loadings must be designed and simulated in order to evaluate the total energy consumption of the plan. Moreover, the realistic physical constraints and tracking performance of the test equipment are addressed in the proposed methods to improve the accuracy of the test environment. Mathematical formulations, computational algorithms, and simulation tools are provided to handle these complex experimental design problems. To the best of our knowledge, this is the first methodological investigation of experimental design for statistically precise and energy-efficient ALT. The new methodology differs from most previous work on planning ALT in that (1) the energy consumption of an ALT experiment, which depends on both the designed stress loadings and the controllers, cannot be expressed as a simple function of the related decision variables; and (2) the associated optimum experimental design procedure involves tuning the parameters of the controller and evaluating the objective function via computer experiment (simulation). Numerical examples demonstrate the effectiveness of the proposed methodology in improving reliability estimation precision while minimizing total energy consumption in ALT. The robustness of the sequential optimization method is also verified through sensitivity analysis.
30

Janka, Dennis. "Sequential quadratic programming with indefinite Hessian approximations for nonlinear optimum experimental design for parameter estimation in differential–algebraic equations." PhD thesis, supervised by Stefan Körkel. Heidelberg: Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180500733/34.

31

Stroh, Rémi. "Planification d’expériences numériques en multi-fidélité : Application à un simulateur d’incendies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC049/document.

Abstract:
This work studies multi-fidelity numerical models, deterministic or stochastic. More precisely, the models considered have a parameter which governs the quality of the simulation, such as a mesh size in a finite difference model or a number of samples in a Monte Carlo model. In that case, the numerical model can run low-fidelity simulations, fast but coarse, or high-fidelity simulations, accurate but expensive. A multi-fidelity approach aims to combine results coming from different levels of fidelity in order to save computational time. The method considered is based on a Bayesian approach. The simulator is described by a state-of-the-art multilevel Gaussian process model, which we adapt to stochastic cases in a fully Bayesian approach. This meta-model of the simulator allows estimating any quantity of interest with a measure of uncertainty. The goal is then to choose new experiments to run in order to improve the estimates. In particular, the design must select the level of fidelity offering the best trade-off between cost of observation and information gain. To this end, we propose a sequential strategy dedicated to the case of variable observation costs, called Maximum Rate of Uncertainty Reduction (MRUR), which consists of choosing the input point maximizing the ratio between the uncertainty reduction and the cost. The methodology is illustrated in fire safety science, where we estimate probabilities of failure of a fire protection system.
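The MRUR criterion itself is a one-liner: among candidate observations, pick the one with the largest estimated uncertainty reduction per unit cost. A stylized sketch with placeholder numbers (in the thesis, the reductions come from the multilevel Gaussian process model):

```python
# The MRUR rule in miniature: pick the candidate observation with the best
# ratio of estimated uncertainty reduction to cost. The numbers below are
# placeholders; in the thesis they come from a multilevel Gaussian-process
# model of the fire simulator.

candidates = [
    {"x": 0.2, "level": "low",  "reduction": 0.08, "cost": 1.0},
    {"x": 0.2, "level": "high", "reduction": 0.20, "cost": 10.0},
    {"x": 0.7, "level": "low",  "reduction": 0.15, "cost": 1.0},
    {"x": 0.7, "level": "high", "reduction": 0.35, "cost": 10.0},
]
best = max(candidates, key=lambda c: c["reduction"] / c["cost"])
print(best)  # the cheap run at x=0.7 gives the best rate of reduction
```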
32

Abtini, Mona. "Plans prédictifs à taille fixe et séquentiels pour le krigeage." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC019/document.

Abstract:
In recent years, computer simulation models have been increasingly used to study complex phenomena. Such problems usually rely on very large, sophisticated simulation codes that are very expensive in computing time. The exploitation of these codes becomes a problem, especially when the objective requires a significant number of evaluations of the code. In practice, the code is replaced by a global approximation model, often called a metamodel, most commonly a Gaussian process (kriging) adjusted to a design of experiments, i.e., to observations of the model output obtained from a small number of simulations. Space-filling designs, which spread the design points evenly over the entire feasible input region, are the most used designs. This thesis consists of two parts, both focused on constructing designs of experiments adapted to kriging, one of the most popular metamodels. Part I considers the construction of space-filling designs of fixed size which are adapted to kriging prediction. It starts by studying the effect of the Latin hypercube constraint (the design most used in practice with kriging) on maximin-optimal designs. This study shows that when the design has a small number of points, adding the Latin hypercube constraint is useful because it mitigates the drawbacks of maximin-optimal configurations (the position of the majority of points at the boundary of the input space). Following this study, a uniformity criterion called radial discrepancy is proposed in order to measure the uniformity of the design points according to their distance to the boundary of the input space. We then show that the minimax-optimal design is the design closest to the IMSE design (the design adapted to prediction by kriging) but is also very difficult to evaluate, and we introduce a proxy for the minimax-optimal design based on the maximin-optimal design. Finally, we present an optimised implementation of the simulated annealing algorithm for finding maximin-optimal designs; the aim here is to minimize the probability of the simulated annealing falling into a local optimum. The second part of the thesis concerns a slightly different problem. If X_N is a space-filling design of N points, there is no guarantee that any n points of X_N (1 ≤ n ≤ N) constitute a space-filling design; in practice, however, we may have to stop the simulations before the full realization of the design. The aim of this part is therefore to propose a new methodology for constructing sequences of nested space-filling designs X_n, for any n between 1 and N, that are all adapted to kriging prediction. We introduce a method to generate nested designs based on information criteria, particularly the mutual information criterion, which measures the reduction in prediction uncertainty over the whole domain between before and after observing the response at the design points. This method ensures a good quality for all the designs generated, 1 ≤ n ≤ N. A key difficulty of the method is that the time needed to generate an MI-sequential design in the high-dimensional case is very large. To address this issue, a particular implementation is proposed which calculates the determinant of a given matrix by partitioning it into blocks; this implementation significantly reduces the computational cost of MI-sequential designs.
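Maximin optimization by simulated annealing, as used in Part I, fits in a short sketch: perturb one design point at a time and accept moves with the Metropolis rule so that the minimal pairwise distance is pushed up. The snippet below is a minimal illustration on eight points in [0,1]², not the thesis's tuned procedure:

```python
import numpy as np

# A minimal simulated-annealing sketch for a maximin design: n points in
# [0,1]^2 whose minimal pairwise distance is pushed up by random, singly
# perturbed moves accepted with the Metropolis rule. Tuning is arbitrary.

rng = np.random.default_rng(2)

def min_dist(X):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices(len(X), k=1)].min()

X = rng.random((8, 2))
temp = 0.1
for _ in range(4000):
    Y = X.copy()
    i = rng.integers(len(X))
    Y[i] = np.clip(Y[i] + rng.normal(0, 0.05, 2), 0, 1)  # perturb one point
    gain = min_dist(Y) - min_dist(X)
    if gain > 0 or rng.random() < np.exp(gain / temp):   # Metropolis accept
        X = Y
    temp *= 0.999                                        # cool down
print(f"maximin distance: {min_dist(X):.3f}")
```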
33

Žakelj, Blaž. "Experimental Investigations on Market Behavior." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/80908.

Abstract:
This thesis is a collection of three essays on inflation expectations, forecasting uncertainty, and the role of uncertainty in sequential auctions, all using an experimental approach. Chapter 1 studies how individuals forecast inflation in a fictitious macroeconomic setup and analyzes the effect of monetary policy rules on their decisions. Results display heterogeneity in inflation forecasting rules and demonstrate the importance of adaptive-learning forecasting when model switching is assumed. Chapter 2 extends the analysis of Chapter 1 by analyzing individual uncertainty in inflation forecasting. Results show that confidence intervals depend on inflation variance and the business cycle phase, have a strong inertia, and are often asymmetric. Finally, Chapter 3 analyzes the role of uncertainty about the number of bidders in the behavior of subjects in a sequential auction experiment. Uncertainty does not aggravate the price decline, but it changes individual bidding strategies and auction efficiency.
34

Taylor, Kendra C. "Sequential Auction Design and Participant Behavior." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7250.

Abstract:
This thesis studies the impact of sequential auction design on participant behavior from both a theoretical and an empirical viewpoint. In the first of the two analyses, three sequential auction designs are characterized and compared based on expected profitability to the participants, and the optimal bid strategy is derived. One of the designs, the alternating design, is new and is a blend of the other two: it assumes that the ability to bid in or initiate an auction is given to each side of the market in an alternating fashion, to simulate seasonal markets. The conditions for an equilibrium auction design are derived and characteristics of the equilibrium are outlined. The primary result is that the alternating auction is a viable compromise design when buyers and suppliers disagree on whether to hold a sequence of forward or reverse auctions. We also derive the value of information about future private values for a strategic supplier in a two-period case of the alternating and reverse auction designs. The empirical work studies the cause of low aggregation of timber supply in reverse auctions on an online timber exchange. Unlike previous research on timber auctions, which focuses on offline public auctions held by the U.S. Forest Service, we study online private auctions between logging companies and mills. A limited survey of the online auction data revealed that the auctions were successful less than 50% of the time. Regression analysis is used to determine which factors, internal and external to the auction, affect the aggregation of timber, in an effort to determine why so few auctions succeeded. The analysis revealed that the number of bidders, the description of the good, and the volume demanded had a significant influence on the amount of timber supplied through the exchange. A plausible explanation for the low aggregation is that the exchange was better suited to checking the availability of custom cuts of timber and to transacting standard timber.
35

Sutherland, Sindee S. "Sequential design augmentation with model misspecification." Diss., Virginia Polytechnic Institute and State University, 1992. http://scholar.lib.vt.edu/theses/available/etd-10032007-171611/.

36

Zhu, Li. "Some Optimal and Sequential Experimental Designs with Potential Applications to Nanostructure Synthesis and Beyond." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10349.

Abstract:
Design of Experiments (DOE) is an important topic in statistics. Efficient experimentation can help an investigator to extract maximum information from a dataset. In recent times, DOE has found new and challenging applications in science, engineering and technology. In this thesis, two different experimental design problems, motivated by the need for modeling the growth of nanowires, are studied. In the first problem, we consider issues of determining an optimal experimental design for estimation of parameters of a complex curve characterizing nanowire growth that is partially exponential and partially linear. A locally D-optimal design for the non-linear change-point growth model is obtained by using a geometric approach. Further, a Bayesian sequential algorithm is proposed for obtaining the D-optimal design. The advantages of the proposed algorithm over traditional approaches adopted in recent nano-experiments are demonstrated using Monte-Carlo simulations. The second problem deals with generating space-filling designs in feasible regions of complex response surfaces with unknown constraints. Two different types of sequential design strategies are proposed with the objective of generating a sequence of design points that will quickly carve out the (unknown) infeasible regions and generate more and more points in the (unknown) feasible region. The generated design is space-filling (in a certain sense) within the feasible region. The first strategy is model independent, whereas the second one is model-based. Theoretical properties of the proposed strategies are derived and simulation studies are conducted to evaluate their performance. The strategies are developed assuming that the response function is deterministic, and extensions are proposed for random response functions.
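A crude variant of the model-independent strategy can be sketched as follows: repeatedly pick the candidate farthest from all evaluated points, query its feasibility, and keep only the feasible points as the design. The constraint function below is a hypothetical stand-in (in reality it is unknown and revealed only through evaluations), and the sketch omits the thesis's mechanism for steering new points away from regions flagged infeasible:

```python
import numpy as np

# A crude sketch of space-filling under an unknown constraint: pick the
# candidate farthest from the current design, query feasibility, and keep
# feasible points. The constraint is a hypothetical stand-in; the thesis's
# strategies additionally steer away from discovered infeasible regions.

rng = np.random.default_rng(3)
feasible = lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0   # hidden constraint

pts = [rng.random(2)]
flags = [feasible(pts[0])]
for _ in range(30):
    cand = rng.random((200, 2))                     # random candidate pool
    P = np.array(pts)
    d = np.linalg.norm(cand[:, None] - P[None], axis=-1).min(axis=1)
    x = cand[np.argmax(d)]                          # farthest from the design
    pts.append(x)
    flags.append(feasible(x))
kept = np.array(pts)[np.array(flags)]
print(f"{len(kept)} of {len(pts)} points are in the feasible region")
```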
Statistics
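To make the second problem concrete, here is a minimal sketch in the spirit of the model-independent strategy described above: points are placed sequentially by a maximin-distance rule, and a black-box feasibility oracle classifies each probe. The toy disk-shaped constraint and candidate pool are assumptions for illustration, not the thesis's test problems.

```python
# Sequential space-filling under an unknown constraint: a maximin sketch.
import numpy as np

def feasible(x):
    # Black-box constraint, unknown to the designer in practice;
    # here a toy disk-shaped feasible region inside the unit square.
    return (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2 <= 0.16

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(2000, 2))  # dense candidate pool in [0, 1]^2

design, infeasible = [], []        # feasible design points / infeasible probes
x = rng.uniform(0.0, 1.0, size=2)  # arbitrary starting probe
(design if feasible(x) else infeasible).append(x)

for _ in range(30):
    evaluated = np.array(design + infeasible)
    # Maximin rule: probe the candidate farthest from everything already
    # evaluated, so the design spreads out and infeasible pockets are
    # discovered without being re-probed.
    dists = np.linalg.norm(candidates[:, None, :] - evaluated[None, :, :], axis=2).min(axis=1)
    x = candidates[dists.argmax()]
    (design if feasible(x) else infeasible).append(x)

print(f"{len(design)} feasible design points, {len(infeasible)} infeasible probes")
```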
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Jiangeng. "Sequential learning, large-scale calibration, and uncertainty quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91935.

Full text
Abstract:
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising in designing, analyzing, modeling, calibrating, optimizing, and predicting with computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both data size and model fidelity arising from ever larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods, based on on-site experimental design and surrogate modeling, are developed for large-scale computer models. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry, and the on-site surrogate calibration method is further extended to multiple-output calibration problems.
Doctor of Philosophy
With remarkable advances in computing power, complex physical systems can today be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of scientific investigation in the biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic and large-scale computer experiments. For computer experiments with a changing signal-to-noise ratio, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from a complex noise structure. To effectively extract key information from massive amounts of simulation output and make better predictions for the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed, addressing challenges in both data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry, and the large-scale calibration method is further extended to multiple-output calibration problems.
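The replication-versus-exploration tension described above can be illustrated with a toy one-step lookahead criterion. The sketch below uses a hand-rolled Gaussian process with known, input-dependent noise and picks, at each step, the candidate (possibly an already-sampled site, i.e., a replicate) that most reduces average posterior variance. This is a simplified stand-in for, not a reproduction of, the dissertation's lookahead strategy; the signal and noise functions are illustrative assumptions.

```python
# Replication vs. exploration with a one-step lookahead variance criterion.
import numpy as np

def kern(a, b, ls=0.2):
    # Squared-exponential correlation between 1-d input vectors a and b.
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / ls**2)

def truth(x):
    return np.sin(6 * x)       # underlying signal (assumed)

def noise_sd(x):
    return 0.05 + 0.6 * x      # known, input-dependent noise level (assumed)

def mean_post_var(X, grid):
    # Average GP posterior variance over the grid, with heteroskedastic
    # noise variances on the covariance diagonal. For fixed hyperparameters
    # this depends only on the design, not on the observed responses.
    K = kern(X, X) + np.diag(noise_sd(X) ** 2)
    Ks = kern(grid, X)
    return np.mean(1.0 - np.sum(Ks @ np.linalg.inv(K) * Ks, axis=1))

rng = np.random.default_rng(2)
X = list(np.linspace(0.0, 1.0, 5))   # initial design
y = [truth(x) + noise_sd(x) * rng.normal() for x in X]
grid = np.linspace(0.0, 1.0, 201)
cands = np.linspace(0.0, 1.0, 51)    # acquisition candidates

for _ in range(20):
    # One-step lookahead: choose the candidate (possibly an existing site,
    # i.e., a replicate) that most reduces average posterior variance.
    scores = [mean_post_var(np.array(X + [c]), grid) for c in cands]
    c = float(cands[int(np.argmin(scores))])
    X.append(c)
    y.append(truth(c) + noise_sd(c) * rng.normal())

print(f"{len(X)} runs at {len(np.unique(np.round(X, 6)))} distinct sites")
```

When noise dominates in part of the input space, this kind of criterion tends to spend runs re-sampling there rather than spreading everywhere, which is the behavior the dissertation's strategy balances formally.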
APA, Harvard, Vancouver, ISO, and other styles
38

Brun, Soren Erik. "Sequential scouring, alternating patterns of erosion and deposition, laboratory experiments and mathematical modelling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0001/NQ35117.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ruder, Joshua Austin. "Experiments on system level design." Thesis, Montana State University, 2006. http://etd.lib.montana.edu/etd/2006/ruder/RuderJ0806.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Khan, A. Z. "Optimal design of pharmacokinetic experiments." Thesis, University of Manchester, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kittelson, John Martin. "The design of group sequential clinical trials." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290621.

Full text
Abstract:
Group sequential clinical trials have become the accepted method for monitoring the results of an ongoing trial. These methods allow early termination of a trial based on the results of "interim analyses" conducted after each group of subjects is entered on the study. Existing methods for designing such trials comprise several different constructions, each of which addresses a different clinical setting. The purpose of this dissertation is to unify these constructions into a single framework. This is accomplished by first proposing a general algebraic family of stopping rules for group sequential designs and then constructing a statistical interpretation of the family. Both Bayesian and frequentist approaches are included in this unification. The properties of the unified family of designs are examined, which lends insight into the similarities and differences between existing approaches to group sequential design. This work is motivated by several clinical examples, and the clinical application of these designs is given detailed consideration. A particular example is used to illustrate the application of these methods and to describe how they would be implemented in an ongoing monitoring program for a clinical trial.
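Two classic members of the kind of family this dissertation unifies are the Pocock and O'Brien-Fleming boundaries. The Monte Carlo sketch below checks that both hold the overall type I error near 0.05 for five equally spaced interim analyses, using the standard tabulated critical constants; it is a generic illustration, not the dissertation's algebraic family.

```python
# Monte Carlo check of two classic group sequential stopping boundaries.
import numpy as np

K, n_sims = 5, 200_000
rng = np.random.default_rng(3)

# Standardized statistics under H0: Z_k from cumulative data, built out of
# independent N(0,1) group increments.
inc = rng.standard_normal((n_sims, K))
Z = np.cumsum(inc, axis=1) / np.sqrt(np.arange(1, K + 1))

k = np.arange(1, K + 1)
bounds = {
    "Pocock":          np.full(K, 2.413),       # constant critical value
    "O'Brien-Fleming": 2.040 * np.sqrt(K / k),  # strict early, loose late
}
for name, b in bounds.items():
    # Trial "rejects" if |Z_k| ever crosses the boundary at any analysis.
    alpha_hat = np.mean(np.any(np.abs(Z) >= b, axis=1))
    print(f"{name}: estimated overall type I error = {alpha_hat:.4f}")
```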
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Aiying. "Multiple Testing Procedures under Group Sequential Design." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/384082.

Full text
Abstract:
Statistics
Ph.D.
This dissertation focuses on multiple hypothesis testing procedures under group sequential designs, in which the data accrue sequentially or periodically in time. We propose two stepwise procedures using the error spending function approach. The first procedure controls the Family-wise Error Rate (FWER), under the assumption that the test statistics follow a normal distribution with known correlations; it involves repeated application of a step-down procedure at each stage to the hypotheses not rejected in previous stages. The second is a group sequential BH procedure (GSBH) controlling the False Discovery Rate (FDR), a natural extension of the original BH method from a single stage to multiple stages under a group sequential design. Similar to the proposed step-down procedure controlling the FWER, a step-up procedure is applied to the active hypotheses at each stage of GSBH. The GSBH procedure is theoretically proved to control the FDR under a positive dependence condition. An adaptive version of the GSBH procedure (ad.GSBH) is also introduced, and it is proved to control the FDR under independence. Simulation studies investigating the performance of these procedures show that they are often powerful and provide greater reductions in expected sample size than their relevant competitors.
Temple University--Theses
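For reference, the single-stage Benjamini-Hochberg step-up procedure that GSBH extends to the group sequential setting can be written in a few lines; the sketch below is the textbook version, applied to a toy mixture of null and signal p-values.

```python
# The classical Benjamini-Hochberg step-up procedure (single stage).
import numpy as np

def bh_stepup(pvals, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Find the largest k with p_(k) <= k*q/m, then reject the k smallest.
    passed = np.nonzero(p[order] <= np.arange(1, m + 1) * q / m)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size:
        reject[order[: passed[-1] + 1]] = True
    return reject

# Toy usage: 90 null p-values, 10 strong signals.
rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-3, size=10)])
print(f"{bh_stepup(p).sum()} of 100 hypotheses rejected")
```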
APA, Harvard, Vancouver, ISO, and other styles
43

Marston, Nathan Stuart. "The design of line-sequential 3D displays." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Yan. "Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41082.

Full text
Abstract:
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED). The main results are summarized as follows. 1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M >= 2 hypotheses. 2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but the increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection. 3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems and show that, with suitably chosen thresholds at the different stages, they retain second-order asymptotic optimality when the hypothesis testing problem is "asymmetric." 4. We characterize the asymptotic behavior of the SMED algorithm, in particular the denseness and distributions of the design points, and propose a simplified version of SMED that is computationally more efficient.
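The decentralized theory above builds on Wald's classical centralized two-hypothesis test. The sketch below implements that building block, the sequential probability ratio test with Wald's approximate thresholds, for unit-variance Gaussian data; the means and error rates are illustrative choices, not quantities from the dissertation.

```python
# Wald's sequential probability ratio test for N(mu0, 1) vs N(mu1, 1).
import numpy as np

def sprt(stream, alpha=0.05, beta=0.05, mu0=0.0, mu1=1.0):
    # Wald's approximate thresholds: upper log((1-beta)/alpha),
    # lower log(beta/(1-alpha)).
    a, b = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        # Log-likelihood ratio increment for unit-variance Gaussians.
        llr += (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)
        if llr >= a:
            return "accept H1", n
        if llr <= b:
            return "accept H0", n
    return "undecided", n

rng = np.random.default_rng(5)
print(sprt(rng.normal(1.0, 1.0, size=1000)))  # data generated under H1
```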
APA, Harvard, Vancouver, ISO, and other styles
45

Emmett, Marta. "Design of experiments with multivariate response." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chan, Ling-yau (陳令由). "Optimal design for experiments with mixtures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B31230799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Khattak, Azizullah. "Design of balanced incomplete factorial experiments." Thesis, University of Leeds, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chan, Ling-yau. "Optimal design for experiments with mixtures /." [Hong Kong] : University of Hong Kong, 1986. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12326306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Khashab, Rana Hamza H. "Optimal design of experiments with mixtures." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/420031/.

Full text
Abstract:
The structural nature of the world often provides a clear guide to where sought objects are likely to appear, as well as the kind of objects they may repeatedly appear in the presence of. The relationship between the targets, distractors and the landscape provides context, which supports efficient search. This thesis explores how knowledge of the environment ahead informs search on future presentations of those scenes, and how several individual-difference factors (such as cognitive resources, or tendencies towards anxiety) may influence search and learning processes. The thesis reports three studies using a new eye movement experimental paradigm termed the repeated scenes search task (RSST). This task presented scenes taken on a route around a suburban neighbourhood as search arrays, while participants searched for targets superimposed in naturalistic locations. The scenes were presented on 8 occasions in each experiment, and performance improved with the number of repeats. In the experimental chapters the influence of scene order on search was examined, with targets appearing in several contingencies in relation to scene identity, and compared between scenes appearing in a consistent or randomised order. Subtle benefits to search were found when scenes were presented in a consistent order. The influence of boosting working memory and of inducing a state of anxiety upon participant responses (via more efficient eye movements) was also examined. The impact of these findings on the general literature and on individuals searching in dangerous environments is discussed, with the key finding that attentional networks, working memory and a state of anxiety are important factors to consider in search through familiar environments.
APA, Harvard, Vancouver, ISO, and other styles
50

Thattil, Raphel. "Design and analysis of intercropping experiments." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/49940.

Full text
Abstract:
The statistical problems of intercropping experiments (which involve growing two or more crops together) are investigated in this study. Measures of combined yield are discussed, and the Land Equivalent Ratio (LER) is shown to be the 'best' index for intercropping. Problems that arise in the standardization of LER are investigated, and the use of a single pair of divisors is recommended. The use of systematic designs is advocated for yield-density studies, to reduce the number of guard rows. A three-way systematic design is proposed and methods of analysis are suggested. A regression model is employed for the combined yield data (LER), from which estimates of the optimum densities can be calculated. The study also deals with varietal trials in intercropping: methods are given for reducing the large number of possible varietal combinations to be tested in the field and for reducing the block size. The field layout is discussed and illustrated by examples. Stability measures that can be used in intercropping are derived, and it is shown how they can be used to identify stable varietal combinations and how information about each crop's contribution to stability can be obtained. The best proportions of the component crops in the intercropping mixture are also investigated, and a design and analysis are given for an experiment on proportions in conjunction with varying densities.
Ph. D.
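For readers unfamiliar with the index, the Land Equivalent Ratio discussed above has a standard two-crop form, where Y_i is the intercrop yield of crop i and S_i its sole-crop (monoculture) yield:

```latex
% Standard two-crop definition of the Land Equivalent Ratio.
\[
  \mathrm{LER} \;=\; \frac{Y_1}{S_1} + \frac{Y_2}{S_2},
  \qquad \mathrm{LER} > 1 \ \text{indicates a yield advantage for intercropping.}
\]
```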
APA, Harvard, Vancouver, ISO, and other styles