Academic literature on the topic 'Bayesian Sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian Sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Bayesian Sample size"

1. Cámara, Hagen Luis Tomás. "A consensus based Bayesian sample size criterion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ64329.pdf.

2. Cheng, Dunlei Stamey James D. "Topics in Bayesian sample size determination and Bayesian model selection." Waco, Tex.: Baylor University, 2007. http://hdl.handle.net/2104/5039.

3. Islam, A. F. M. Saiful. "Loss functions, utility functions and Bayesian sample size determination." Thesis, Queen Mary, University of London, 2011. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1259.

Abstract:
This thesis consists of two parts. The purpose of the first part is to obtain the Bayesian sample size determination (SSD) using a loss or utility function together with a linear cost function. A number of researchers have studied the Bayesian SSD problem; one group has incorporated utility (loss) functions and cost functions into the SSD problem, while others have not. Among the former, most SSD problems are based on a symmetric squared error (SE) loss function. However, in situations where underestimation is more serious than overestimation, or vice versa, an asymmetric loss function should be used. For such a loss function, how many observations do we need to take to estimate the parameter under study? We consider different types of asymmetric loss functions and a linear cost function for sample size determination. For purposes of comparison, we first discuss the SSD for a symmetric squared error loss function, and then consider the SSD under different types of asymmetric loss functions found in the literature. We also introduce a new bounded asymmetric loss function and obtain the SSD under it. In addition, to estimate a parameter following a particular model, we present some theoretical results for the optimum SSD problem under a particular choice of loss function, and we develop computer programs to obtain the optimum SSD where analytic results are not possible.

In the two parameter exponential family it is difficult to estimate the parameters when both are unknown. The aim of the second part is to obtain an optimum decision for the two parameter exponential family under the two parameter conjugate utility function. We discuss Lindley's (1976) optimum decision for the one parameter exponential family under its conjugate utility function, and then extend the results to the two parameter exponential family. We propose a two parameter conjugate utility function and lay out the approximation procedure for making decisions on the two parameters. We also offer a few examples (the normal, trinomial and inverse Gaussian distributions) and provide the optimum decisions on both parameters of these distributions under the two parameter conjugate utility function.
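For orientation, the trade-off this abstract describes (posterior expected loss falling with n while sampling cost rises linearly) can be sketched numerically. The following is a minimal illustration, not code from the thesis: it assumes a normal mean with known variance, a conjugate normal prior, symmetric squared error loss, and a linear cost, and every hyperparameter value is invented for the example.

```python
import numpy as np

# Sketch of Bayesian sample size determination under squared error
# loss with a linear sampling cost, for a normal mean with known data
# variance sigma2 and a conjugate N(mu0, tau2) prior.  All values
# below are illustrative assumptions.
sigma2 = 4.0          # known sampling variance (assumed)
tau2 = 1.0            # prior variance of the mean (assumed)
c0, c1 = 10.0, 0.1    # fixed and per-observation costs (assumed)

def expected_total_cost(n):
    # Under squared error loss, the preposterior expected posterior
    # loss equals the posterior variance, which here does not depend
    # on the observed data:
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    return post_var + c0 + c1 * n

ns = np.arange(1, 501)
costs = np.array([expected_total_cost(n) for n in ns])
n_opt = ns[costs.argmin()]
print(f"optimal sample size: {n_opt}, expected total cost: {costs.min():.3f}")
```

Under an asymmetric loss such as LINEX, the posterior expected loss no longer reduces to the posterior variance, and the same grid search would instead be run over a Monte Carlo estimate of the preposterior risk, which is the kind of numerical work the abstract alludes to.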
4. M'lan, Cyr Emile. "Bayesian sample size calculations for cohort and case-control studies." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82923.

Abstract:
Sample size determination is one of the most important statistical issues in the early stages of any investigation that anticipates statistical analyses.

In this thesis, we examine Bayesian sample size determination methodology for interval estimation. Four major epidemiological study designs, cohort, case-control, cross-sectional and matched pair, are the focus. We study three Bayesian sample size criteria: the average length criterion (ALC), the average coverage criterion (ACC) and the worst outcome criterion (WOC), as well as various extensions of these criteria. In addition, a simple cost function is included as part of our sample size calculations for cohort and case-control studies. We also examine the important design issue of the choice of the optimal ratio of controls per case in case-control settings, or of non-exposed to exposed in cohort settings.

The main difficulties with Bayesian sample size calculation problems are often at the computational level. Thus, this thesis is concerned, to a considerable extent, with presenting sample size methods that are computationally efficient.
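The three criteria named in this abstract are usually stated along the following lines; this is a hedged sketch in our own notation, following the Bayesian SSD literature generally rather than the thesis itself. Here x denotes the future data, f(x) its preposterior (marginal) density, l a target interval length, 1 - α a target coverage, and S a high-probability subset of the sample space.

```latex
% ALC: smallest n such that the average length of the
% 100(1-alpha)% posterior interval is at most l:
\mathrm{ALC}:\quad \int l_{\alpha}(x,n)\, f(x)\, dx \le l .

% ACC: smallest n such that intervals of fixed length l achieve
% average posterior coverage of at least 1-alpha:
\mathrm{ACC}:\quad \int P\{\theta \in I_{l}(x,n) \mid x\}\, f(x)\, dx \ge 1-\alpha .

% WOC: smallest n such that the coverage (and length) requirement
% holds for essentially all data sets, not merely on average:
\mathrm{WOC}:\quad \inf_{x \in S} P\{\theta \in I_{l}(x,n) \mid x\} \ge 1-\alpha .
```

The WOC is the most conservative of the three, since it guards against unfavourable data sets rather than averaging over them, which is why it typically returns the largest sample size.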
5. Banton, Dwaine Stephen. "A Bayesian Decision Theoretic Approach to Fixed Sample Size Determination and Blinded Sample Size Re-estimation for Hypothesis Testing." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/369007.

Abstract:
This thesis considers two related problems with application to the experimental design of clinical trials: (i) fixed sample size determination for parallel arm, double-blind survival data analysis to test the hypothesis of no difference in survival functions, and (ii) blinded sample size re-estimation for the same setting. For the first problem, a method is developed for hypothesis testing in general and then applied to survival analysis in particular; for the second, a method is developed specifically for survival analysis. In both problems, the exponential survival model is assumed.

The approach we propose for sample size determination is Bayesian decision theoretic, using an explicit loss function and prior distribution. The loss function used is the intrinsic discrepancy loss function introduced by Bernardo and Rueda (2002) and further expounded in Bernardo (2011). We use a conjugate prior and investigate the sensitivity of the calculated sample sizes to the specification of the hyperparameters. For the second problem, we use prior predictive distributions to facilitate calculation of the interim test statistic in a blinded manner while controlling the Type I error; determining the test statistic in a blinded manner remains a nettling problem for researchers. The first problem is typical of traditional experimental designs, while the second extends into the realm of adaptive designs. To the best of our knowledge, the approaches we suggest for both problems are new and extend the current research on both topics. The advantages of our approach, as we see it, are the unity and coherence of the statistical procedures, the systematic and methodical incorporation of prior knowledge, and ease of calculation and interpretation. (Ph.D. dissertation, Statistics, Temple University.)
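For reference, the intrinsic discrepancy of Bernardo and Rueda (2002) mentioned in this abstract is commonly stated as the smaller of the two directed Kullback-Leibler divergences between the densities being compared; the following is a sketch in our notation, not an excerpt from the thesis.

```latex
\delta\{p_1, p_2\}
  = \min\Biggl\{ \int p_1(x)\,\log\frac{p_1(x)}{p_2(x)}\,dx,\;
                 \int p_2(x)\,\log\frac{p_2(x)}{p_1(x)}\,dx \Biggr\}
```

The intrinsic discrepancy loss of acting as if a parameter value θ₀ were true when θ holds is then δ{p(· | θ₀), p(· | θ)}, which is symmetric in its arguments and invariant to reparameterisation.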
6. Tan, Say Beng. "Bayesian decision theoretic methods for clinical trials." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312988.

7. Safaie, Nasser. "A fully Bayesian approach to sample size determination for verifying process improvement." Diss., Wichita State University, 2010. http://hdl.handle.net/10057/3656.

Abstract:
There has been significant growth in the development and application of Bayesian methods in industry. Bayes' theorem describes the process of learning from experience and shows how knowledge about the state of nature is continually modified as new data become available. This research is an effort to introduce the Bayesian approach as an effective tool for evaluating process adjustments aimed at causing a change in a process parameter. This situation is usually encountered when a process is found to be stable but operating away from the desired level. In such scenarios, a number of changes are proposed and tested as part of the improvement efforts, and it is typically desired to evaluate the effect of these changes as soon as possible and take appropriate actions. Despite considerable research efforts to utilize the Bayesian approach, there are few guidelines for loss computation and sample size determination. This research proposes a fully Bayesian approach for determining the maximum economic number of measurements required to evaluate and verify such efforts. Mathematical models are derived and used to establish implementation boundaries from economic and technical viewpoints, and numerical examples illustrate the steps involved and highlight the economic advantages of the proposed procedures. (Ph.D. thesis, Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.)
8. Kaouache, Mohammed. "Bayesian modeling of continuous diagnostic test data: sample size and Polya trees." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107833.

Abstract:
Parametric models such as the bi-normal have been widely used to analyse data from imperfect continuous diagnostic tests. Such models rely on assumptions that may often be unrealistic and/or unverifiable, and in such cases nonparametric models present an attractive alternative. Further, even when normality holds, researchers tend to underestimate the sample size required to accurately estimate disease prevalence from bi-normal models when the densities from diseased and non-diseased subjects overlap. In this thesis we investigate both of these problems. First, we study the use of nonparametric Polya tree models to analyze continuous diagnostic test data. Since we do not assume a gold standard test is available, our model includes a latent class component, the latent data being the unknown true disease status of each subject. Second, we develop methods for sample size determination when designing studies with continuous diagnostic tests. Finally, we show how Bayes factors can be used to compare the fit of Polya tree models to parametric bi-normal models. Both simulations and a real data illustration are included.
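To illustrate the data-generating model this abstract works with, here is a minimal simulation sketch of the latent-class bi-normal setup under assumed parameter values (none taken from the thesis); the Polya tree machinery itself is beyond a short example.

```python
import numpy as np

# Latent-class bi-normal model for a continuous diagnostic test with
# no gold standard: each subject's true disease status is latent, and
# the observed score comes from one of two normal densities.  All
# parameter values are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 500
prev = 0.2                 # true disease prevalence (assumed)
mu_d, sd_d = 2.0, 1.0      # diseased score distribution (assumed)
mu_h, sd_h = 0.0, 1.0      # non-diseased score distribution (assumed)

status = rng.binomial(1, prev, size=n)   # latent, never observed
scores = np.where(status == 1,
                  rng.normal(mu_d, sd_d, size=n),
                  rng.normal(mu_h, sd_h, size=n))

# The analyst sees only `scores`; the marginal likelihood is the
# two-component mixture
#   f(y) = prev * N(y; mu_d, sd_d) + (1 - prev) * N(y; mu_h, sd_h),
# and the overlap between the two components is what drives up the
# sample size needed to estimate `prev` accurately.
```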
9. Ma, Junheng. "Contributions to Numerical Formal Concept Analysis, Bayesian Predictive Inference and Sample Size Determination." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1285341426.

10. Kikuchi, Takashi. "A Bayesian cost-benefit approach to sample size determination and evaluation in clinical trials." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:f5cb4e27-8d4c-4a80-b792-469e50efeea2.

Abstract:
Current practice for sample size computations in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of the type I and II errors. They also do not directly address the question of achieving the best balance between the costs of the trial and the possible benefits of a new medical treatment, and fail to consider the important fact that the number of users depends on the evidence for improvement over the current treatment.

A novel Bayesian approach, Behavioral Bayes (BeBay for short) (Gittins and Pezeshk, 2000a,b, 2002a,b; Pezeshk, 2003), assumes that the number of patients switching to the new treatment depends on the strength of the evidence provided by clinical trials, and takes a value between zero and the number of potential patients in the country. The better the new treatment, the more patients switch to it and the greater the resulting benefit. The model defines the optimal sample size as the one that maximises the expected net benefit resulting from a clinical trial. Gittins and Pezeshk use a simple form of benefit function for paired comparisons between two medical treatments and assume that the variance of the efficacy is known.

The research in this thesis generalises these original conditions by introducing a logistic benefit function to take account of differences in efficacy and safety between two drugs. The model is also extended to the more general cases of unpaired comparisons and unknown variance. The expected net benefit defined by Gittins and Pezeshk is based on the efficacy of the new drug only; it does not consider the incidence of adverse reactions and their effect on patients' preferences. Here we include the costs of treating adverse reactions and calculate the total benefit in terms of how much the new drug can reduce societal expenditure.

We describe how our model may be used for the design of phase III clinical trials, cluster randomised clinical trials and bridging studies, in some detail and with illustrative examples based on published studies. For phase III trials we allow the possibility of unequal treatment group sizes, which often occur in practice. Bridging studies are those carried out to extend the range of applicability of an established drug, for example to new ethnic groups. Throughout, the objective of our procedures is to optimise the cost-benefit in terms of national health care. BeBay is the leading methodology for determining sample sizes on this basis; it explicitly takes account of the roles of three decision makers, namely patients and doctors, pharmaceutical companies, and the health authority.
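In symbols, the Behavioral Bayes design principle this abstract describes can be sketched as follows, where B(n) is the benefit accruing from patients who switch to the new treatment after a trial of size n and C(n) is the cost of running that trial; the exact forms of B and C are specified in the thesis and in Gittins and Pezeshk, not here.

```latex
n^{*} \;=\; \arg\max_{n \,\ge\, 0}\;
  \Bigl\{ \mathbb{E}\bigl[B(n)\bigr] \;-\; C(n) \Bigr\}
```

Because B(n) depends on the strength of the trial's evidence, the expectation is taken over the prior predictive distribution of the trial outcome, which is what distinguishes this criterion from a fixed type I/II error calculation.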