
Dissertations / Theses on the topic 'Optimal Sample Size'


1

Medeiros, José António Amaro Correia. "Optimal sample size for assessing bacterioneuston structural diversity." Master's thesis, Universidade de Aveiro, 2011. http://hdl.handle.net/10773/10901.

Full text
Abstract:
Master's in Applied Biology - Clinical and Environmental Microbiology
The surface microlayer (SML) is located at the atmosphere-hydrosphere interface and is theoretically defined as the top millimeter of the water column. In practice, however, the SML is defined operationally by the sampling method used, and its thickness varies with weather conditions and organic matter content, among other factors. The SML is a very dynamic compartment of the water column, involved in the transport of materials between the hydrosphere and the atmosphere. Bacterial communities inhabiting the SML (bacterioneuston) are expected to be adapted to this particular environment, which is characterized by physical and chemical stress associated with surface tension, high exposure to solar radiation, and the accumulation of hydrophobic compounds, some of which are pollutants. However, the small volumes of SML water obtained with the sampling methods reported in the literature make the sampling procedure laborious and time-consuming. Sample size becomes even more critical when microcosm experiments are designed. The objective of this work was to determine the smallest sample size that could be used to assess bacterioneuston diversity by culture-independent methods without compromising representativeness and, therefore, ecological significance. To that end, two DNA extraction methods were tested on samples of 0.5 mL, 5 mL, and 10 mL of natural SML obtained at the estuarine system Ria de Aveiro. After DNA extraction, community structure was assessed by DGGE profiling of 16S rRNA gene sequences. The CTAB extraction procedure was selected as the most efficient method and was later applied to larger samples (1 mL, 20 mL, and 50 mL). The DNA obtained was again analyzed by DGGE, and the results showed that the estimated diversity of the communities does not increase proportionally with sample size, so a good estimate of the structural diversity of bacterioneuston communities can be obtained with very small samples.
APA, Harvard, Vancouver, ISO, and other styles
2

Thach, Chau Thuy. "Self-designing optimal group sequential clinical trials /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9585.

Full text
3

Takazawa, Akira. "Optimal decision criteria for the study design and sample size of a biomarker-driven phase III trial." Kyoto University, 2020. http://hdl.handle.net/2433/253492.

Full text
4

Brungard, Colby W. "Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah." DigitalCommons@USU, 2009. http://digitalcommons.usu.edu/etd/472.

Full text
Abstract:
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEM) and spectral satellite data were used to represent factors controlling soil development and distribution. We investigated optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximated the full range of variability of environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. Random forests was used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] error) were within acceptable levels. Quantitative covariate importance was useful in determining what factors were important for soil distribution. Random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
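The sampling idea above can be illustrated with plain (unconditioned) Latin hypercube sampling, of which cLHS is a constrained variant that restricts draws to covariate combinations actually present in the landscape. This is a generic sketch, not the cLHS algorithm from the thesis; the uniform [0, 1) covariate space and the function names are assumptions:

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """Draw n points in [0, 1)^d with exactly one point in each of
    the n equal-width strata of every dimension (plain LHS)."""
    columns = []
    for _ in range(d):
        # one uniform draw inside each stratum, then shuffle the order
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        columns.append(col)
    # transpose the column-per-dimension layout into n points
    return list(zip(*columns))

pts = latin_hypercube(10, 3)
```

Even with small n, this stratification spans the full range of every covariate, which is the property the abstract exploits to keep field sample sizes low; cLHS adds the condition that each selected point must correspond to a real raster cell.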
5

Kothawade, Manish. "A Bayesian Method for Planning Reliability Demonstration Tests for Multi-Component Systems." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1416154538.

Full text
6

Hathaway, John Ellis. "Determining the Optimum Number of Increments in Composite Sampling." BYU ScholarsArchive, 2005. https://scholarsarchive.byu.edu/etd/425.

Full text
Abstract:
Composite sampling can be more cost effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined and a model is developed which accounts for different sources of variation in compositing and data analysis. This model is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
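The optimization described above can be sketched with a generic additive variance model for a single analysis of a k-increment composite; the three variance components and the numbers below are illustrative assumptions, not the model or estimates from the thesis:

```python
def composite_variance(k, s2_inc, s2_blend, s2_meas):
    """Variance of one analytical result on a composite of k
    increments: increment-to-increment variance is averaged down
    by k, while blending and measurement variance are not."""
    return s2_inc / k + s2_blend + s2_meas

def optimal_increments(target_var, s2_inc, s2_blend, s2_meas, k_max=1000):
    """Smallest k whose composite variance meets the target, or
    None when the floor s2_blend + s2_meas already exceeds it."""
    for k in range(1, k_max + 1):
        if composite_variance(k, s2_inc, s2_blend, s2_meas) <= target_var:
            return k
    return None

# hypothetical variance components for illustration
k = optimal_increments(target_var=1.5, s2_inc=4.0, s2_blend=0.5, s2_meas=0.5)  # k == 8
```

Because blending and measurement variance are not reduced by adding increments, they put a floor under the achievable variance, which is why the plausible range of the blending variance matters so much when estimating the number of increments.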
7

Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Full text
Abstract:
High attrition rates in the drug development pipeline have been recognized as a driver for shifting toward new methodologies that allow earlier and correct decisions and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the core strategies in this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit for ushering new agents to marketing approval more rapidly and successfully. The general aim of this thesis was to investigate how novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to compute the corresponding sample size in about 1% of the time usually needed for traditional model-based power assessment. By allowing statistical inference across all available data and integrating the mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that put information content and safety aspects first. These methodologies showed better estimation properties and robustness in the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by dose-limiting hematological toxicity.
In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis, several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can raise the probability of a successful trial and strengthen its ethical conduct.
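For context, the quantity that model-based power assessment ultimately delivers, the sample size needed for a target power, reduces in the simplest case to a normal-approximation calculation. The sketch below is that textbook two-arm computation, not the Monte-Carlo Mapped Power method itself; the effect size and power target are illustrative assumptions:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_arm(delta, sigma, n, z_alpha=1.959964):
    """Approximate power of a two-sided two-arm z-test at the 5%
    level with n subjects per arm and true mean difference delta."""
    se = sigma * math.sqrt(2.0 / n)
    return phi(delta / se - z_alpha)

def n_for_power(delta, sigma, target=0.8):
    """Smallest per-arm n whose approximate power reaches target."""
    n = 2
    while power_two_arm(delta, sigma, n) < target:
        n += 1
    return n

n = n_for_power(0.5, 1.0)  # 63 per arm for a half-SD effect at 80% power
```

For nonlinear mixed-effects models no such closed form exists, which is why brute-force simulated power is so expensive and why a fast mapping method matters.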
8

Fitton, N. V. "Why and How to Report Distributions of Optima in Experiments on Heuristic Algorithms." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1006054556.

Full text
9

Feijó, Sandra. "Técnicas para execução de experimentos sob ambiente protegido para a cultura da abobrinha italiana." Universidade Federal de Santa Maria, 2005. http://repositorio.ufsm.br/handle/1/3178.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
To establish techniques for conducting experiments on Italian zucchini (abobrinha italiana) under protected cultivation, an experiment was carried out in a plastic greenhouse from 18/08/2003 to 07/12/2003 in an area of the Department of Fitotecnia, UFSM, Santa Maria, RS. Seedlings were transplanted at a spacing of 0.80 m between plants and 1.0 m between rows, giving four rows of twenty-four plants each. A total of twenty-seven harvests were made, evaluating the weight of fruits with length ≥ 15 cm. Soil was sampled before the experiment was set up; each sample point comprised four sub-samples. The Smith heterogeneity index was estimated by the method of Smith (1938) and the optimum plot size by the modified maximum curvature method (Meier & Lessman, 1971). The first paper asked for how long the productive period of Italian zucchini in a plastic greenhouse must be evaluated in order to estimate the experimental error and the differences among four harvest intervals. Harvesting and evaluating the initial half of the productive period, using six plants per plot, was sufficient to estimate the experimental error for comparing harvest intervals; because of the high experimental error, evaluating production over the whole productive period is not sufficient to differentiate the four harvest-interval treatments. The second paper evaluated the Smith heterogeneity index of the main soil chemical characteristics under protected cultivation and determined the sample size. The Smith index values were small, and the optimum plot size was one basic unit, that is, one sample point; the estimated sample size was ten sample points for a confidence-interval half-width of 20% of the mean at 5% error probability. The third paper evaluated the Smith heterogeneity index for different fruit-harvest intervals at different levels of accumulated harvests, estimated the optimum plot size, and determined the least significant difference among treatments as plot size and number of replications varied. The Smith index was low, and smaller plots with more replications improved experimental precision. The optimum plot size for total yield varied between one and seven plants, depending on harvest frequency; plots of three plants with six replications were the most suitable, with a least significant difference among treatments of 75.94% of the mean.
10

Her, Chi-Way, and 何淇瑋. "The Optimal Sample Size For Interval Estimation Of Correlation Coefficient." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/20731985747790441131.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (academic year 99).
As the degree of correlation between two variables is of concern in many social-science problems, using the sample correlation coefficient to draw inferences about the population correlation coefficient is a common method, and settling the optimal sample size before the study saves considerable time and cost. Besides the traditional hypothesis-testing approach to sample-size determination, this research introduces the expected interval length method and the expected interval coverage probability method. The coverage-probability method is based on interval estimation, but it can tighten or relax the sample size according to the coverage probability that is specified. SAS software is used to construct the model; after finding the optimal sample size, samples of that size are drawn randomly from two designed populations, and the resulting interval width and coverage probability are checked against the original specification. The results show that the expected interval length method gives good simulation results only when the samples are large enough, and the expected interval coverage probability method is unstable when the population parameter is very close to 0.
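The expected interval length idea can be sketched with Fisher's z-transform, under which an approximate confidence interval for a correlation has a width depending only on r and n. This is a generic illustration of the approach, not the thesis's SAS procedure; the 95% level is an assumption:

```python
import math

def corr_ci_width(r, n, z=1.959964):
    """Expected width of an approximate 95% CI for a correlation,
    built on Fisher's z-transform with SE = 1/sqrt(n - 3)."""
    zr = math.atanh(r)
    h = z / math.sqrt(n - 3)
    return math.tanh(zr + h) - math.tanh(zr - h)

def sample_size_for_width(r, target_width):
    """Smallest n whose expected CI width meets the target."""
    n = 5
    while corr_ci_width(r, n) > target_width:
        n += 1
    return n
```

In this sketch the required n is largest when r is near 0, because the back-transformed interval is widest there, which echoes the thesis's observation that behavior changes as the population parameter approaches 0.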
11

Susarla, Hari Krishna. "Optimal sample size determination in seed health testing : a simulation study." Thesis, 2005. http://spectrum.library.concordia.ca/8582/1/MR10215.pdf.

Full text
Abstract:
Selection of an appropriate sample size is an important prerequisite to any statistical investigation. In this thesis the problem of identifying the sample size for testing seed health by noting the presence or absence of pathogen(s) is considered. Cross-classified data of variety by seed by pathogen are collected for this purpose, consisting of N observations for each seed variety. Here N is regarded as the population size and the outcome is a Bernoulli random variable. A simulation method for identifying the sample size is developed and compared with five existing methods. The simulation method is based on a chi-square (χ²) measure of goodness of fit of the empirical distribution to a theoretical distribution. Here k repeated samples for each of the sample sizes n = 10(10)50(25)100(100)500, using simple random sampling without replacement (SRSWOR), are considered. For each of the k samples of size n, the chi-square (χ²) measure of goodness of fit is computed.
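The simulation idea, repeated SRSWOR samples scored by a χ² goodness-of-fit measure, can be sketched as follows; the population, seed, and number of repetitions are illustrative assumptions, not the thesis's data:

```python
import random

def chisq_fit(sample, p_pop):
    """Pearson chi-square of a binary sample's category counts
    against the counts expected under the population proportion."""
    n = len(sample)
    obs = [sample.count(0), sample.count(1)]
    exp = [n * (1 - p_pop), n * p_pop]
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

def mean_chisq(population, n, k=200, rng=random.Random(1)):
    """Average chi-square over k SRSWOR samples of size n."""
    p_pop = sum(population) / len(population)
    stats = [chisq_fit(rng.sample(population, n), p_pop) for _ in range(k)]
    return sum(stats) / k

pop = [1] * 100 + [0] * 400      # hypothetical lot: N = 500 seeds, 20% infected
avg_small = mean_chisq(pop, 20)  # far from N: larger average fit statistic
avg_large = mean_chisq(pop, 300) # near N: SRSWOR variability shrinks
```

As n approaches N the finite-population correction shrinks the sampling variability, so the average χ² drops; in this sketch a sample size would be judged adequate once the measure stabilizes at an acceptably small value.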
12

Tai, Chih-Ying, and 戴志穎. "Optimal Sample Size Allocation for a Series System under Accelerated Life Tests." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/527r7d.

Full text
Abstract:
Master's thesis, National Central University, Institute of Statistics (academic year 105).
In accelerated life tests of system reliability, the sample-size allocation across stress levels affects the accuracy of the reliability inference. Given three stress levels of an accelerated variable, this thesis tackles the optimal allocation for an accelerated life test of series systems. It turns out that the objective functions are frequently of the form of a product of second elementary symmetric functions. We first derive a sufficient condition under which, for systems of two components, the optimal plan reduces to a two-level test with equal sample sizes allocated at the lowest and highest levels. Under independent exponential lifetime distributions of the components, more specific results are developed, such as the relative efficiency of the three-level uniform design with respect to the optimal allocation. The results are also demonstrated and justified on a real example. Generalization to a multi-component series system is conjectured and verified by numerical results.
13

Lee, I.-Chen, and 李宜真. "Optimal Sample Size Allocation for Accelerated Degradation Test (Based on Exponential Dispersion Model)." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/15800875410986355797.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Institute of Statistics (academic year 99).
Accelerated degradation tests (ADTs) are widely used to assess the lifetime information of highly reliable products whose quality characteristics both degrade over time and can be related to reliability. Hence, designing an efficient ADT plan for assessing a product's lifetime at normal-use stress, and especially the optimal sample-size allocation across test-stress levels, is a challenging issue for reliability analysts. Several papers in the literature have addressed this decision problem; however, the results are based on specific degradation models (such as the Wiener, Gamma, and inverse Gaussian models), and a unified approach for a general degradation model has been lacking. To overcome this difficulty, we first propose an exponential dispersion (ED) degradation model that covers all of the above-mentioned models. Next, using the V-, D-, and A-optimality criteria, we derive analytical expressions for the optimal sample-size allocation of a 2-stress ADT when the underlying degradation follows an ED model. The results demonstrate that the V-optimal and A-optimal allocations are functions of the unknown parameters and the life-stress function, while the D-optimal allocation turns out to be an equal sample-size allocation. Furthermore, we discuss the relative efficiency of the D-optimal and A-optimal allocations with respect to the V-optimal allocation, which is around 85% and 83%, respectively. Key words and phrases: accelerated degradation tests; exponential dispersion model; V-optimality; D-optimality; A-optimality.
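The headline contrast, an equal split under D-optimality versus a parameter-dependent split under V-optimality, can be reproduced in a much simpler surrogate: a straight-line model observed at two standardized stresses x = 0 (low) and x = 1 (high), with the use condition extrapolated to x0 = -1. The surrogate model and the grid search below are illustrative assumptions, not the ED-model derivation:

```python
def criteria(p, x0=-1.0, x_lo=0.0, x_hi=1.0):
    """Per-unit information quantities for a straight-line model
    y = b0 + b1 * x with a fraction p of units tested at x_lo and
    1 - p at x_hi.  Returns (determinant of the information
    matrix, variance factor of the prediction extrapolated to x0)."""
    m11 = 1.0
    m12 = p * x_lo + (1.0 - p) * x_hi
    m22 = p * x_lo ** 2 + (1.0 - p) * x_hi ** 2
    det = m11 * m22 - m12 ** 2
    # c' M^{-1} c with c = (1, x0): asymptotic prediction variance
    var_x0 = (m22 - 2.0 * x0 * m12 + x0 ** 2 * m11) / det
    return det, var_x0

grid = [i / 1000.0 for i in range(1, 1000)]
p_D = max(grid, key=lambda p: criteria(p)[0])  # D-optimal fraction at low stress
p_V = min(grid, key=lambda p: criteria(p)[1])  # V-optimal fraction at low stress
```

Here the grid search gives p_D = 0.5 and p_V ≈ 2/3: the V-optimal plan oversamples the stress level nearest the use condition and depends on where x0 sits, mirroring the abstract's conclusion that only the D-optimal allocation is an equal split.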
14

Wen, Yu-Ju, and 溫鈺如. "A Study on Optimal Sample Size for Destructive Inspection under Bayesian Sequential Sampling Plan." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/09774408609859024071.

Full text
Abstract:
Master's thesis, National Pingtung University of Science and Technology, Department of Industrial Management (academic year 94).
This thesis focuses on the sample size for destructive inspection. Taking inspection cost and the cost of sampling error into account, a Bayesian estimation model is applied to derive the posterior pdf of the fraction defective P, and a mathematical model for the expected total loss is formulated. Using computerized numerical analysis, the optimal sample size that minimizes the total loss is found. Furthermore, using the concept of sequential sampling, the decision-maker draws and inspects one item at each sequential observation and then decides whether to stop sampling and make a decision; on this basis a sequential-sampling decision chart is constructed. A numerical example illustrates the approach, which is then analyzed and compared with the ABC-STD 105 sampling plan to verify that the proposed plan is practical for decision-making. Finally, thirteen conclusions are drawn for future studies and applications.
15

陳玟穎. "Optimal Sample Size Allocation for Accelerated Degradation Test (based on Exponential Dispersion Model andV-optimality Criterion)." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/18501934393076313934.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Institute of Statistics (academic year 100).
Accelerated degradation tests (ADTs) are widely used to assess the lifetime information (e.g., p-th quantile or mean time to failure (MTTF)) of highly reliable products, so planning an efficient ADT is a challenging issue for reliability engineers. Recently, Lee (2011) proposed an exponential-dispersion accelerated degradation (EDAD) model and derived the analytical solution of the optimal sample-size allocation. The advantage of this result is that the EDAD model covers well-known models such as the Wiener, Gamma, and inverse Gaussian accelerated degradation models. However, those results are restricted to the case of exactly two stress levels. In this thesis, we address the problem for three or more stress levels. We first demonstrate that a necessary condition of the sample-size allocation for a 3-stress test based on the EDAD model is that testing units need to be assigned to only two of the stress levels. Furthermore, we obtain the optimal sample-size allocation formulas for the above-mentioned accelerated degradation models. More specifically, under the Gamma accelerated degradation model the units must be assigned at stresses S1 and S3, while for the Wiener and inverse Gaussian accelerated degradation models the pair of stress levels to use depends on the conditions.
16

"Optimal sample size allocation for multi-level stress testing with extreme value regression under type-I censoring." 2012. http://library.cuhk.edu.hk/record=b5549162.

Full text
Abstract:
In a multi-group life-testing experiment, it is essential to optimize the allocation of the items under test to different stress levels in order to estimate the model parameters accurately. Recently, Ng, Chan and Balakrishnan (2006) developed the optimal allocation for the complete-sample case with the extreme value regression model, and Ka, Chan, Ng and Balakrishnan (2011) discussed the optimal allocation for Type-II censoring under the same model. The optimal allocation under a Type-I censoring scheme has not been established, so in this thesis we investigate the optimal allocation when Type-I censoring is adopted in the life-testing experiment.
Maximum likelihood estimation is adopted for estimating the model parameters. The inverted Fisher information matrix (asymptotic variance-covariance matrix), I⁻¹, is derived and used to measure the accuracy of the estimated parameters. The optimal allocation is determined under three optimality criteria:
1. maximizing the determinant of the expected Fisher information matrix;
2. minimizing the variance of the estimator of ν₁, var(ν̂₁) (V-optimality);
3. minimizing the trace of the variance-covariance matrix, tr(I⁻¹) (A-optimality).
Optimal allocation under the exponential regression model, a special case of the extreme value regression model, is also discussed.
So, Hon Yiu. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 46-48). Abstracts also in Chinese.
17

Lin, Tin-Han, and 林廷翰. "Optimal Sample Size Allocation for Accelerated Life Test with Multiple Levels of Stress under Location-Scale Distributions." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/22qm9x.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Mathematics (Master's Program in Mathematics and Data Science, academic year 106).
Accelerated life tests are widely used to assess the lifetime information (e.g., p-th quantile or mean time to failure (MTTF)) of highly reliable products. Hence, designing an efficient accelerated life test plan for assessing a product's lifetime at normal-use stress, such as the optimal sample-size allocation, is a challenging issue for reliability analysts. In this paper, motivated by a mylar-polyurethane data set, we first propose an accelerated life model in which the random error follows a location-scale distribution. Next, using the optimality criterion of minimizing the asymptotic variance of the estimator of the product's p-th percentile lifetime, we derive the analytical expression of the optimal sample-size allocation for a k-stress accelerated life test. We demonstrate that a necessary condition for the optimal allocation under this criterion is to assign tested units only to the lowest and highest stress levels. A Monte Carlo simulation study shows that the simulated values are quite close to the asymptotic values when sample sizes are large.
APA, Harvard, Vancouver, ISO, and other styles
18

Teng, Zhaoyang. "Optimal and adaptive designs for multi-regional clinical trials with regional consistency requirement." Thesis, 2015. https://hdl.handle.net/2144/15706.

Full text
Abstract:
To shorten the time for drug development and regulatory approval, a growing number of clinical trials are being conducted in multiple regions simultaneously. One of the challenges in multi-regional clinical trials (MRCT) is how to utilize the data obtained from the other regions of the entire trial to help make local approval decisions. In addition to global efficacy, evidence of consistency in treatment effects between the local region and the entire trial is usually required for regional approval. In recent years, a number of statistical models and consistency criteria have been proposed, and the sample size requirement for the region of interest has also been studied. However, no specific regional requirement is broadly accepted; sample size planning that considers the regional requirements of all regions of interest is not well developed; and how to apply adaptive designs to MRCT has not been studied. In this dissertation, we make a number of contributions. First, we propose a unified regional requirement for the consistency assessment of MRCT, which generalizes the requirements proposed by Ko et al. (2010), Chen et al. (2012) and Tsong et al. (2012); we make recommendations for choosing the values of the parameters defining the proposed requirement and determine the sample size increase needed to preserve power. Second, we propose two optimal designs for MRCT: a minimal total sample size design and a maximal utility design, which provide more effective sample size allocation to ensure a given overall power and the assurance probabilities of all regions of interest. We also introduce the factors that should be considered in designing MRCT and analyze how each factor affects sample size planning. Third, we propose an unblinded region-level adaptive design to perform sample size re-estimation and re-allocation at interim based on the observed values of each region.
We can determine not only whether to stop the whole MRCT based on the conditional power, but also whether to stop any individual region based on the conditional success rate at interim. The simulation results support that the proposed adaptive design has better performance than the classical design in terms of overall power and success rate of each region.
APA, Harvard, Vancouver, ISO, and other styles
19

Hewes, Bailey. "An Investigation of the Optimal Sample Size, Relationship between Existing Tests and Performance, and New Recommended Specifications for Flexible Base Courses in Texas." Thesis, 2013. http://hdl.handle.net/1969.1/149371.

Full text
Abstract:
The purpose of this study was to improve flexible base course performance within the state of Texas while reducing TxDOT’s testing burden. The focus of this study was to revise the current specification with the intent of providing a “performance related” specification while optimizing sample sizes and testing frequencies based on material variability. A literature review yielded information on base course variability within and outside the state of Texas, and on what tests other states, and Canada, are currently using to characterize flexible base performance. A sampling and testing program was conducted at Texas A&M University to define current variability information, and to conduct performance related tests including resilient modulus and permanent deformation. In addition to these data being more current, they are more representative of short-term variability than data obtained from the literature. This “short-term” variability is considered more realistic for what typically occurs during construction operations. A statistical sensitivity analysis (based on the 80th percentile standard deviation) of these data was conducted to determine minimum sample sizes for contractors to qualify for the proposed quality monitoring program (QMP). The required sample sizes for contractors to qualify for the QMP are 20 for gradation, compressive strength, and moisture-density tests, 15 for Atterberg Limits, and 10 for Web Ball Mill. These sample sizes are based on a minimum 25,000 ton stockpile, or “lot”. After qualifying for the program, if contractors can prove their variability is better than the 80th percentile, they can reduce their testing frequencies. The sample size for TxDOT’s verification testing is 5 samples per lot and will remain at that number regardless of reduced variability. Once qualified for the QMP, a contractor may continue to send material to TxDOT projects until a failing sample disqualifies the contractor from the program. 
TxDOT does not currently require washed gradations for flexible base. Dry and washed sieve analyses were performed during this study to investigate the need for washed gradations. Statistical comparisons of these data yielded strong evidence that TxDOT should always use the washed method: significant differences between the washed and dry methods were found for the percentages of material passing the No. 40 and No. 200 sieves. Since TxDOT already specifies limits on the fraction of material passing the No. 40 sieve, and since this study yielded evidence that this size fraction is related to resilient modulus (performance), a washed sieve analysis would provide a more accurate reading for that specification. Furthermore, it is suggested that TxDOT require contractors to set “target” test values, and to place 90 percent within limits (90PWL) bands around those target values to control material variability.
APA, Harvard, Vancouver, ISO, and other styles
20

Lin, Wan-Chin, and 林琬津. "Optimal Sample Sizes for Behrens-Fisher Problem—with Allocation Constraints." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/99881721817606352291.

Full text
Abstract:
Master's thesis<br>Chung Yuan Christian University<br>Graduate Institute of Applied Mathematics<br>98<br>The Behrens-Fisher problem concerns testing the difference between the means of two normally distributed populations, based on two independent samples, when the variances of the two populations are unequal or unknown. In this thesis, we consider using Welch's t test and evaluating its power to find optimal sample sizes. First, the test statistic is adjusted to its exact distribution and the critical value is adjusted as a function of the Beta distribution. Second, we discuss the two optimal sample sizes required to attain a specified power level. The first problem is to find the optimal sample sizes when the ratio of the sample sizes is fixed in advance; the second is to determine the optimal size of one sample when the size of the other is fixed in advance.
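As a rough illustration of the power computation involved in such sample-size searches (not the exact-distribution adjustment used in the thesis), the sketch below approximates the power of Welch's test with a noncentral t distribution using Welch-Satterthwaite degrees of freedom, then searches for the smallest sample sizes attaining a target power under a fixed ratio n2/n1; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

# Power of Welch's two-sample t test, approximated by a noncentral t
# distribution with Welch-Satterthwaite degrees of freedom (a common
# approximation; the thesis instead adjusts the statistic to its exact
# distribution). delta is the true mean difference, s1 and s2 the
# population standard deviations.
def welch_power(n1, n2, delta, s1, s2, alpha=0.05):
    se2 = s1**2 / n1 + s2**2 / n2
    df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    ncp = delta / np.sqrt(se2)
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

def smallest_n1(delta, s1, s2, ratio=1.0, target=0.80):
    """Smallest n1 (with n2 fixed at ratio * n1) attaining the target power."""
    n1 = 2
    while welch_power(n1, max(2, int(np.ceil(ratio * n1))), delta, s1, s2) < target:
        n1 += 1
    return n1

print(smallest_n1(delta=1.0, s1=1.0, s2=2.0))  # hypothetical inputs
```

The same `welch_power` routine also covers the thesis's second problem: fix one sample size and step the other upward until the target power is reached.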
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, Chia-De, and 黃嘉德. "Optimal Sample Sizes for Behrens-Fisher Problem Based on Power and Cost Considerations." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/38011428035543720963.

Full text
Abstract:
Master's thesis<br>Chung Yuan Christian University<br>Graduate Institute of Applied Mathematics<br>98<br>Two populations are normally distributed with unknown means and variances; how to test the equality of the two means is known as the Behrens-Fisher problem. This thesis mainly applies Lee's (1992) method, replacing the original power function with one that has an exact t distribution. Using this power function, we find the optimal sample sizes for two problems concerning cost and sample size: one finds the sample sizes with minimum cost under a power constraint, the other finds the sample sizes with maximum power under a cost constraint. We use the statistical software R to find the sample sizes and to observe how they vary with the other variables. Keywords: Behrens-Fisher problem; power function; power; cost; optimal sample sizes
APA, Harvard, Vancouver, ISO, and other styles
22

Cheng, Yu-Chieh, and 鄭宇傑. "Optimal Sample Sizes for Behrens-Fisher Problem Based on Precision and Cost Considerations." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/22253431304170858803.

Full text
Abstract:
Master's thesis<br>Chung Yuan Christian University<br>Graduate Institute of Applied Mathematics<br>98<br>This thesis studies the Behrens-Fisher problem of constructing a confidence interval for the difference of two normal population means and, in view of the precision of the confidence interval and the unit costs, discusses the selection of optimal sample sizes. The main method is to derive a formula for the precision from the approximate t statistic of the Welch approximate t test, obtained through a change of variable. Two topics are investigated: choosing the sample sizes that minimize the total cost under a precision constraint, and choosing the sample sizes that maximize the precision under a budget constraint.
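The first of these two problems (lowest total cost subject to a precision constraint) can be sketched with a plug-in approximation to the half-width of the Welch interval; the unit costs, variances, and precision bound below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy import stats

# Plug-in approximation to the half-width of the Welch confidence
# interval for the difference of two normal means.
def halfwidth(n1, n2, s1, s2, conf=0.95):
    se2 = s1**2 / n1 + s2**2 / n2
    df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    return stats.t.ppf(0.5 + conf / 2, df) * np.sqrt(se2)

# Lowest-cost pair (n1, n2) whose approximate half-width stays below
# omega; c1 and c2 are per-unit sampling costs. For fixed n1 the
# half-width decreases in n2, so only the smallest feasible n2 matters.
def cheapest_pair(s1, s2, c1, c2, omega, n_max=400):
    best, best_cost = None, np.inf
    for n1 in range(2, n_max):
        for n2 in range(2, n_max):
            if halfwidth(n1, n2, s1, s2) <= omega:
                cost = c1 * n1 + c2 * n2
                if cost < best_cost:
                    best, best_cost = (n1, n2), cost
                break
    return best

print(cheapest_pair(s1=1.0, s2=2.0, c1=1.0, c2=3.0, omega=0.5))
```

The budget-constrained variant is the mirror image: enumerate the pairs whose cost stays under the budget and keep the one with the smallest half-width.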
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Hsien-Cheng, and 王獻正. "On Some Determinations of the Critical Values and Optimal Sample Sizes for the Process Capability Indices Cpm and Cpp." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/43876756311350292145.

Full text
Abstract:
Master's thesis<br>Tamkang University<br>Department of Statistics<br>90<br>In recent years, process capability indices (PCIs) have been applied in quality control by many practitioners to assess whether a production process is capable. However, practitioners usually simply look at the value of the estimate calculated from the sample data and then draw a conclusion on whether the given process meets the capability requirement. This approach is not appropriate, since sampling errors are ignored and the estimates of the process indices are point estimates; a more appropriate assessment is provided by a confidence interval. In this paper, we use the non-central chi-square distribution together with the approximations proposed by Patnaik (1949), Zar (1978) and Wilson and Hilferty (1931). Under given alpha-risk and power, we apply interval estimation to derive the minimum critical values of the index Cpm and the maximum critical values of the index Cpp, and we also determine the corresponding optimal sample sizes. In most cases, both the mean and the standard deviation are unknown; therefore, we also use the range method and apply interval estimation to derive the minimum critical values of Cpm and the maximum critical values of Cpp, and to determine the p-value. In this way, assessments of process capability made through the hypothesis-testing approach become more reliable.
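As a sketch of the kind of calculation involved, the code below derives a minimum critical value for the estimated Cpm directly from the non-central chi-square distribution (rather than through the Patnaik, Zar, or Wilson-Hilferty approximations used in the thesis), assuming the target value T and the standardized offset delta = (mu - T)/sigma are specified; the numerical inputs are illustrative.

```python
from scipy import stats
from scipy.optimize import brentq

# Minimum critical value c for testing H0: Cpm <= C0 at a given
# alpha-risk, using the fact that n*tau_hat^2/sigma^2 follows a
# non-central chi-square distribution with n df and noncentrality
# n*delta^2, where tau_hat^2 is the mean squared deviation from the
# target T. The process is judged capable when Cpm_hat >= c.
def cpm_critical_value(n, C0, delta=0.0, alpha=0.05):
    lam = n * delta**2
    dist = stats.ncx2(n, lam) if lam > 0 else stats.chi2(n)

    def reject_prob(c):
        # P(Cpm_hat >= c | Cpm = C0), via Cpm_hat >= c  <=>
        # n*tau_hat^2/sigma^2 <= n*C0^2*(1 + delta^2)/c^2
        return dist.cdf(n * C0**2 * (1.0 + delta**2) / c**2)

    # the smallest c whose rejection probability equals the alpha-risk
    return brentq(lambda c: reject_prob(c) - alpha, 1e-6, 100.0)

print(round(cpm_critical_value(n=30, C0=1.33), 3))
```

The critical value exceeds C0, reflecting the sampling error that a bare comparison of the point estimate against C0 would ignore.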
APA, Harvard, Vancouver, ISO, and other styles
