Academic literature on the topic 'Sample size computations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sample size computations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sample size computations"

1

Kang, Dongwoo, Janice B. Schwartz, and Davide Verotta. "Sample Size Computations for PK/PD Population Models." Journal of Pharmacokinetics and Pharmacodynamics 32, no. 5-6 (2005): 685–701. http://dx.doi.org/10.1007/s10928-005-0078-3.

2

Mindrila, Diana. "Bayesian Latent Class Analysis: Sample Size, Model Size, and Classification Precision." Mathematics 11, no. 12 (2023): 2753. http://dx.doi.org/10.3390/math11122753.

Abstract:
The current literature includes limited information on the classification precision of Bayes estimation for latent class analysis (BLCA). (1) Objectives: The present study compared BLCA with the robust maximum likelihood (MLR) procedure, which is the default procedure with the Mplus 8.0 software. (2) Method: Markov chain Monte Carlo simulations were used to estimate two-, three-, and four-class models measured by four binary observed indicators with samples of 1000, 750, 500, 250, 100, and 75 observations, respectively. With each sample, the number of replications was 500, and entropy and average latent class probabilities for most likely latent class membership were recorded. (3) Results: Bayes entropy values were more stable and ranged between 0.644 and 1. Bayes’ average latent class probabilities ranged between 0.528 and 1. MLR entropy values ranged between 0.552 and 0.958, and MLR average latent class probabilities ranged between 0.539 and 0.993. With the two-class model, BLCA outperformed MLR with all sample sizes. With the three-class model, BLCA had higher classification precision with the 75-sample size, whereas MLR performed slightly better with the 750- and 1000-sample sizes. With the four-class model, BLCA underperformed MLR and had an increased number of unsuccessful computations, particularly with smaller samples.
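For readers less familiar with the diagnostics reported in this abstract, entropy and average latent class probabilities for most likely class membership are both computed from the matrix of posterior class probabilities. A minimal sketch, assuming the normalized-entropy statistic commonly reported by mixture-modeling software; the data and names are illustrative, not the study's simulation code:

```python
import numpy as np

def classification_diagnostics(post):
    """post: (n, K) matrix of posterior class membership probabilities."""
    n, K = post.shape
    p = np.clip(post, 1e-12, 1.0)                     # guard against log(0)
    # Normalized entropy: 1 - sum_{i,k} (-p_ik * ln p_ik) / (n * ln K)
    entropy = 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(K))
    assigned = post.argmax(axis=1)                    # most likely class per observation
    avg_prob = [post[assigned == k, k].mean() for k in range(K)]
    return entropy, np.round(avg_prob, 3)

# Illustrative posteriors for three moderately separated classes
rng = np.random.default_rng(0)
post = np.vstack([rng.dirichlet(a, 100) for a in ([8, 1, 1], [1, 8, 1], [1, 1, 8])])
print(classification_diagnostics(post))
```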
3

Mehta, Cyrus R., Nitin R. Patel, and Pralay Senchaudhuri. "Exact Power and Sample-Size Computations for the Cochran-Armitage Trend Test." Biometrics 54, no. 4 (1998): 1615. http://dx.doi.org/10.2307/2533685.

4

Chaibub Neto, Elias. "Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications." PLOS ONE 10, no. 6 (2015): e0131333. http://dx.doi.org/10.1371/journal.pone.0131333.

5

Dilba, Gemechis, Frank Bretz, Ludwig A. Hothorn, and Volker Guiard. "Power and sample size computations in simultaneous tests for non-inferiority based on relative margins." Statistics in Medicine 25, no. 7 (2006): 1131–47. http://dx.doi.org/10.1002/sim.2359.

6

Meng, Fanyu, Wei Shao, and Yuxia Su. "Computing Simplicial Depth by Using Importance Sampling Algorithm and Its Application." Mathematical Problems in Engineering 2021 (December 31, 2021): 1–11. http://dx.doi.org/10.1155/2021/6663641.

Abstract:
Simplicial depth (SD) plays an important role in discriminant analysis, hypothesis testing, machine learning, and engineering computations. However, the computation of simplicial depth is hugely challenging because the exact algorithm is an NP problem with dimension d and sample size n as input arguments. The approximate algorithm for simplicial depth computation has extremely low efficiency, especially in high-dimensional cases. In this study, we design an importance sampling algorithm for the computation of simplicial depth. As an advanced Monte Carlo method, the proposed algorithm outperforms other approximate and exact algorithms in accuracy and efficiency, as shown by simulated and real data experiments. Furthermore, we illustrate the robustness of simplicial depth in regression analysis through a concrete physical data experiment.
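For context, the simplicial depth of a point is the fraction of (d+1)-point simplices formed from the sample that contain it. The sketch below estimates it by plain Monte Carlo only, to make the quantity concrete; the paper's contribution, an importance sampling scheme over simplices, is not reproduced here, and all names and tolerances are illustrative:

```python
import numpy as np

def contains(simplex, x):
    """True if x lies in the simplex whose vertex rows form `simplex` of shape (d+1, d)."""
    d1 = simplex.shape[0]
    A = np.vstack([simplex.T, np.ones(d1)])          # barycentric coordinate system
    b = np.append(x, 1.0)
    try:
        lam = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:                    # degenerate simplex
        return False
    return bool(np.all(lam >= -1e-12))

def simplicial_depth_mc(x, data, n_draws=20_000, seed=0):
    """Plain Monte Carlo estimate: sample random (d+1)-subsets and count hits."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    hits = sum(contains(data[rng.choice(n, d + 1, replace=False)], x)
               for _ in range(n_draws))
    return hits / n_draws

data = np.random.default_rng(1).normal(size=(200, 2))
print(simplicial_depth_mc(np.zeros(2), data))        # a deep, central point
```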
7

Royston, Patrick, and Abdel Babiker. "A Menu-driven Facility for Complex Sample Size Calculation in Randomized Controlled Trials with a Survival or a Binary Outcome." Stata Journal: Promoting communications on statistics and Stata 2, no. 2 (2002): 151–63. http://dx.doi.org/10.1177/1536867x0200200204.

Abstract:
We present a menu-driven Stata program for the calculation of sample size or power for complex clinical trials with a survival time or a binary outcome. The features supported include up to six treatment arms, an arbitrary time-to-event distribution, fixed or time-varying hazard ratios, unequal patient allocation, loss to follow-up, staggered patient entry, and crossover of patients from their allocated treatment to an alternative treatment. The computations of sample size and power are based on the logrank test and are done according to the asymptotic distribution of the logrank test statistic, adjusted appropriately for the design features.
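The program itself works from the asymptotic distribution of the logrank statistic with all the design features listed above. As a much simpler point of reference, Schoenfeld's approximation for a two-arm trial with proportional hazards is often used for quick checks; the sketch below is that approximation only, not the Stata command's algorithm, and scipy is assumed for the normal quantiles:

```python
from math import ceil, log
from scipy.stats import norm

def logrank_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld approximation to the number of events needed for a two-arm
    logrank test (two-sided alpha, proportional hazards, fixed allocation)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil((za + zb) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

def logrank_n(hr, p_event, **kwargs):
    """Total sample size given the overall probability of observing an event."""
    return ceil(logrank_events(hr, **kwargs) / p_event)

print(logrank_events(hr=0.75))            # about 380 events
print(logrank_n(hr=0.75, p_event=0.6))    # about 634 patients in total
```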
8

Avramchuk, Valeriy V., E. E. Luneva, and Alexander G. Cheremnov. "Increasing the Efficiency of Using Hardware Resources for Time-Frequency Correlation Function Computation." Advanced Materials Research 1040 (September 2014): 969–74. http://dx.doi.org/10.4028/www.scientific.net/amr.1040.969.

Abstract:
In this article, techniques for increasing the efficiency of using multi-core processors for computing the fast Fourier transform are considered. The fast Fourier transform is used as the basis for calculating a time-frequency correlation function. The time-frequency correlation function increases the information content of the analysis compared with the classic correlation function, but it requires significant computational capability because the fast Fourier transform must be computed many times. The fast Fourier transform is computed with the fixed radix-two Cooley-Tukey algorithm, which lends itself to efficient parallelization and is simple to implement. Immediately before the fast Fourier transform computation, a bit-reversal procedure is applied to the input data sequence. A parallel computing technique was used for the algorithm that calculates the time-frequency correlation function; experiments yielded data defining the optimal number of iterations for each CPU core, depending on the sample size. The experimental results allowed the development of special software that automatically selects the effective number of subtasks for parallel processing. The software also provides a choice between sequential and parallel computation modes, depending on the sample size and the number of frequency intervals in the calculation of the time-frequency correlation function.
9

Li, Lin, Kelvin C. P. Wang, Qiang Li, Wenting Luo, and Jiangang Guo. "Impacts of Sample Size on Calculation of Pavement Texture Indicators with 1mm 3D Surface Data." Periodica Polytechnica Transportation Engineering 46, no. 1 (2017): 42. http://dx.doi.org/10.3311/pptr.9587.

Abstract:
The emerging 1mm resolution 3D data collection technology is capable of covering the entire pavement surface, and provides more data sets than traditional line-of-sight data collection systems. As a result, quantifying the impact of sample size, including sample width and sample length, on the calculation of pavement texture indicators is becoming possible. In this study, 1mm 3D texture data are collected and processed at seven test sites using the PaveVision3D Ultra system. Analysis of Variance (ANOVA) tests and linear regression models are developed to investigate the effect of various sample lengths and widths on the calculation of three widely used texture indicators: Mean Profile Depth (MPD), Mean Texture Depth (MTD) and Power Spectral Density (PSD). Since the current ASTM standards and other procedures cannot be directly applied to 3D surfaces for production due to a lack of definitions, the results from this research are beneficial in the process of standardizing texture indicator computations with 1mm 3D surface data of pavements.
10

Harris, Richard J., and Dana Quade. "The Minimally Important Difference Significant Criterion for Sample Size." Journal of Educational Statistics 17, no. 1 (1992): 27–49. http://dx.doi.org/10.3102/10769986017001027.

Abstract:
For a wide range of tests of single-df hypotheses, the sample size needed to achieve 50% power is readily approximated by setting N such that a significance test conducted on data that fit one’s assumptions perfectly just barely achieves statistical significance at one’s chosen alpha level. If the effect size assumed in establishing one’s N is the minimally important effect size (i.e., that effect size such that population differences or correlations smaller than that are not of any practical or theoretical significance, whether statistically significant or not), then 50% power is optimal, because the probability of rejecting the null hypothesis should be greater than .5 when the population difference is of practical or theoretical significance but lower than .5 when it is not. Moreover, the actual power of the test in this case will be considerably higher than .5, exceeding .95 for a population difference two or more times as large as the minimally important difference (MID). This minimally important difference significant (MIDS) criterion extends naturally to specific comparisons following (or substituting for) overall tests such as the ANOVA F and chi-square for contingency tables, although the power of the overall test (i.e., the probability of finding some statistically significant specific comparison) is considerably greater than .5 when the MIDS criterion is applied to the overall test. However, the proper focus for power computations is one or more specific comparisons (rather than the omnibus test), and the MIDS criterion is well suited to setting sample size on this basis. Whereas N_MIDS (the sample size specified by the MIDS criterion) is much too small for the case in which we wish to prove the modified H0 that there is no important population effect, it nonetheless provides a useful metric for specifying the necessary sample size. In particular, the sample size needed to have a 1 − α probability that the (1 − α)-level confidence interval around one’s population parameter includes no important departure from H0 is four times N_MIDS when H0 is true and approximately [4/(1 − b)²]·N_MIDS when b (the ratio between the actual population difference and the minimally important difference) is between zero and unity. The MIDS criterion for sample size provides a useful alternative to the methods currently most commonly employed and taught.
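To make the criterion concrete for the simplest case, the sketch below applies it to a two-sample z-test with known standard deviation: N_MIDS is the per-group size at which an observed difference exactly equal to the MID just reaches significance, which gives roughly 50% power at the MID and much higher power at larger differences. This is a hedged illustration only; the article treats a far broader class of single-df tests.

```python
from math import ceil
from scipy.stats import norm

def n_mids(mid, sigma, alpha=0.05):
    """Per-group N at which a two-sample z-test on a difference exactly equal
    to the minimally important difference (MID) just reaches significance."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(2 * (z * sigma / mid) ** 2)

def power(n, diff, sigma, alpha=0.05):
    """Approximate power of the two-sample z-test at a true difference `diff`."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(diff / (sigma * (2 / n) ** 0.5) - z)

n = n_mids(mid=0.5, sigma=1.0)
print(n, round(power(n, 0.5, 1.0), 3))   # 31 per group, power close to 0.5
print(round(power(n, 1.0, 1.0), 3))      # about 0.98 at twice the MID
```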

Dissertations / Theses on the topic "Sample size computations"

1

Schintler, Laurie A., and Manfred M. Fischer. "The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6637/1/2018%2D10%2D28_Big_Data_on_cities_and_regions_untrack_changes.pdf.

Abstract:
Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one side, they hold great promise to combine increasingly detailed data for each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other side, the massive sample size and high dimensionality of Big Data and their geo-temporal character introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features bear on the paradigm change in data management and analysis, and also on the computing environment. (Series: Working Papers in Regional Science)
2

Riou, Jérémie. "Multiplicité des tests, et calculs de taille d'échantillon en recherche clinique." Thesis, Bordeaux 2, 2013. http://www.theses.fr/2013BOR22066/document.

Abstract:
This work aimed to address multiple testing problems in the context of clinical trials. Nowadays, in clinical research it is increasingly common to define multiple co-primary endpoints in order to capture a multi-factorial effect of the product. The significance of the study is concluded if and only if at least r null hypotheses are rejected among the m null hypotheses tested. In this context, statisticians need to take the induced multiplicity into account. We initially devoted our work to an exact correction of the multiple testing procedure for data analysis and sample size computation when r = 1. We then worked on sample size computation for any value of r, when single-step or stepwise procedures are used. Finally, we addressed the correction of the significance level generated by the search for an optimal coding of a continuous explanatory variable in a generalized linear model.
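The "at least r of m co-primary endpoints" design described here lends itself to a quick simulation check of power and sample size. The sketch below uses deliberately strong simplifications (independent endpoints, one-sided z-tests, a Bonferroni-adjusted level) and is not the exact correction derived in the thesis; the effect sizes are invented:

```python
import numpy as np
from scipy.stats import norm

def power_at_least_r(n, effects, r, alpha=0.05, n_sim=20_000, seed=0):
    """Simulated probability of rejecting at least r of m one-sided z-tests,
    assuming independent endpoints with standardized effects `effects`."""
    rng = np.random.default_rng(seed)
    m = len(effects)
    crit = norm.ppf(1 - alpha / m)                    # Bonferroni adjustment
    z = rng.normal(np.sqrt(n) * np.asarray(effects), 1.0, size=(n_sim, m))
    return np.mean((z > crit).sum(axis=1) >= r)

def sample_size(effects, r, target=0.80, **kwargs):
    """Smallest n (under the assumed one-sample z-test model) reaching the target power."""
    n = 2
    while power_at_least_r(n, effects, r, **kwargs) < target:
        n += 1
    return n

print(sample_size(effects=[0.30, 0.25], r=2))   # roughly 140 with these toy inputs
```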
3

Maremba, Thanyani Alpheus. "Computation of estimates in a complex survey sample design." Thesis, 2019. http://hdl.handle.net/10386/2920.

Abstract:
Thesis (M.Sc. (Statistics)) -- University of Limpopo, 2019. This research study has demonstrated the complexity involved in complex survey sample design (CSSD). Furthermore, the study has proposed methods to account for each step taken in sampling and at the estimation stage using the theory of survey sampling, CSSD-based case studies and practical implementation based on census attributes. CSSD methods are designed to improve statistical efficiency, reduce costs and improve precision for sub-group analyses relative to simple random sample (SRS). They are commonly used by statistical agencies as well as development and aid organisations. CSSDs provide one of the most challenging fields for applying a statistical methodology. Researchers encounter a vast diversity of unique practical problems in the course of studying populations. These include, inter alia: non-sampling errors, specific population structures, contaminated distributions of study variables, non-satisfactory sample sizes, incorporation of the auxiliary information available on many levels, simultaneous estimation of characteristics in various sub-populations, integration of data from many waves or phases of the survey and incompletely specified sampling procedures accompanying published data. While the study has not exhausted all the available real-life scenarios, it has outlined potential problems illustrated using examples and suggested appropriate approaches at each stage. Dealing with the attributes of CSSDs mentioned above brings about the need for formulating sophisticated statistical procedures dedicated to specific conditions of a sample survey. CSSD methodologies give birth to a wide variety of approaches, methodologies and procedures of borrowing the strength from virtually all branches of statistics. The application of various statistical methods from sample design to weighting and estimation ensures that the optimal estimates of a population and various domains are obtained from the sample data. CSSDs are probability sampling methodologies from which inferences are drawn about the population. The methods used in the process of producing estimates include adjustment for unequal probability of selection (resulting from stratification, clustering and probability proportional to size (PPS)), non-response adjustments and benchmarking to auxiliary totals. When estimates of survey totals, means and proportions are computed using various methods, results do not differ. The latter applies when estimates are calculated for planned domains that are taken into account in sample design and benchmarking. In contrast, when the measures of precision such as standard errors and coefficient of variation are produced, they yield different results depending on the extent to which the design information is incorporated during estimation. The literature has revealed that most statistical computer packages assume SRS design in estimating variances. The replication method was used to calculate measures of precision which take into account all the sampling parameters and weighting adjustments computed in the CSSD process. The creation of replicate weights and estimation of variances were done using WesVar, a statistical computer package capable of producing statistical inference from data collected through CSSD methods. Keywords: Complex sampling, Survey design, Probability sampling, Probability proportional to size, Stratification, Area sampling, Cluster sampling.
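As a small, generic illustration of the replication approach to variance estimation mentioned at the end of this abstract, the sketch below implements a delete-one-group (JK1) jackknife for a weighted mean with made-up data and random replicate groups. It mimics the replicate-weight idea used by packages such as WesVar but is not the study's actual setup:

```python
import numpy as np

def jk1_variance(y, w, groups):
    """Delete-one-group (JK1) jackknife variance of the weighted mean of y."""
    G = int(groups.max()) + 1
    theta = np.average(y, weights=w)
    reps = []
    for g in range(G):
        wg = np.where(groups == g, 0.0, w * G / (G - 1))   # replicate weights
        reps.append(np.average(y, weights=wg))
    reps = np.asarray(reps)
    return (G - 1) / G * ((reps - theta) ** 2).sum()

rng = np.random.default_rng(0)
y = rng.normal(50, 10, size=1000)             # study variable
w = rng.uniform(0.5, 2.0, size=1000)          # illustrative survey weights
groups = rng.integers(0, 30, size=1000)       # 30 random replicate groups
print(round(np.average(y, weights=w), 2), round(jk1_variance(y, w, groups) ** 0.5, 2))
```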
4

Tabasso, Myriam. "Spatial regression in large datasets: problem set solution." Doctoral thesis, 2014. http://hdl.handle.net/11573/918515.

Abstract:
In this dissertation we investigate a possible way to combine Data Mining methods and traditional Spatial Autoregressive models in the context of large spatial datasets. We start by considering the numerical difficulties of handling massive datasets with the usual approaches based on Maximum Likelihood estimation for spatial models and Spatial Two-Stage Least Squares. We then conduct a Monte Carlo simulation experiment to compare the accuracy and computational complexity of decomposition and approximation techniques for computing the Jacobian in spatial models, for various regular lattice structures. In particular, we consider one of the most common spatial econometric models: the spatial lag (or SAR, spatial autoregressive) model. We also provide new evidence in the literature by examining the double effect of these methods on computational complexity: the influence of the "size effect" and the "sparsity effect". To overcome this computational problem, we propose a data mining methodology, CART (Classification and Regression Trees), that explicitly considers the phenomenon of spatial autocorrelation on pseudo-residuals in order to remove this effect and to improve accuracy, with significant savings in computational complexity over a wide range of spatial datasets: real and simulated data.
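The "Jacobian" referred to here is the log-determinant term ln|I − ρW| in the spatial lag (SAR) log-likelihood. A hedged sketch contrasting two standard ways of computing it (Ord's eigenvalue decomposition versus a sparse LU factorisation) on a small rook-contiguity lattice, illustrating the size/sparsity trade-off rather than reproducing the dissertation's experiments:

```python
import numpy as np
from scipy.sparse import lil_matrix, identity, diags
from scipy.sparse.linalg import splu

def rook_weights(k):
    """Row-standardised rook-contiguity weight matrix for a k x k regular lattice."""
    n = k * k
    W = lil_matrix((n, n))
    for i in range(k):
        for j in range(k):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < k and 0 <= j + dj < k:
                    W[i * k + j, (i + di) * k + (j + dj)] = 1.0
    row_sums = np.asarray(W.sum(axis=1)).ravel()
    return (diags(1.0 / row_sums) @ W).tocsc()

def logdet_eig(W, rho):
    """Ord's route: ln|I - rho W| = sum_i ln|1 - rho lambda_i|; one O(n^3)
    eigendecomposition, after which every candidate rho is nearly free."""
    lam = np.linalg.eigvals(W.toarray())
    return float(np.sum(np.log(np.abs(1.0 - rho * lam))))

def logdet_lu(W, rho):
    """Sparse LU of (I - rho W); scales to much larger sparse lattices, but the
    factorisation has to be repeated for every candidate rho."""
    lu = splu((identity(W.shape[0]) - rho * W).tocsc())
    return float(np.sum(np.log(np.abs(lu.U.diagonal()))))

W = rook_weights(20)                                  # 400 spatial units
print(logdet_eig(W, 0.6), logdet_lu(W, 0.6))          # the two should agree
```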

Books on the topic "Sample size computations"

1

Wilson, Kenneth R. A computer program for sample size computations for banding studies. U.S. Dept. of the Interior, Fish and Wildlife Service, 1989.

2

Anderson, James A. Programming. Oxford University Press, 2018. http://dx.doi.org/10.1093/acprof:oso/9780199357789.003.0014.

Abstract:
The author makes several suggestions for how to control the direction taken by an active cognitive process. He proposes a neural/cognitive programming mechanism: traveling waves on cortex. Evidence for traveling waves exists, and interactions of such waves have useful properties. One example is due to Pitts and McCulloch: Why are squares of different sizes seen as examples of squares? If excitation propagates from the corners of a square, waves meet at the diagonals. Squares of different sizes then have a common diagonal representation. Later models include “grassfire” models and “medial axis” models. Experiments suggest that a response exists at a “medial axis” halfway between bounding contours, and in this approach “Identity” and “Symmetry” become the same computation. Traveling waves in audition can be used to give the pattern-dependent, frequency-independent responses seen in some kinds of speech perception.

Book chapters on the topic "Sample size computations"

1

Dalgaard, Peter. "Power and the computation of sample size." In Statistics and Computing. Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-79054-1_9.

2

Srinivasan, Anand, Rituparna Maiti, and Archana Mishra. "Computation of Sample Size for Clinical Studies." In R for Basic Biostatistics in Medical Research. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-6980-3_9.

3

Hothorn, L. "Sample Size Estimation for Several Trend Tests in the k-Sample Problem." In Computational Statistics. Physica-Verlag HD, 1992. http://dx.doi.org/10.1007/978-3-642-48678-4_50.

4

Mussabayev, Rustam, and Ravil Mussabayev. "Superior Parallel Big Data Clustering Through Competitive Stochastic Sample Size Optimization in Big-Means." In Intelligent Information and Database Systems. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4985-0_18.

Abstract:
This paper introduces a novel K-means clustering algorithm, an advancement on the conventional Big-means methodology. The proposed method efficiently integrates parallel processing, stochastic sampling, and competitive optimization to create a scalable variant designed for big data applications. It addresses scalability and computation time challenges typically faced with traditional techniques. The algorithm adjusts sample sizes dynamically for each worker during execution, optimizing performance. Data from these sample sizes are continually analyzed, facilitating the identification of the most efficient configuration. By incorporating a competitive element among workers using different sample sizes, efficiency within the Big-means algorithm is further stimulated. In essence, the algorithm balances computational time and clustering quality by employing a stochastic, competitive sampling strategy in a parallel computing setting.
5

Katsis, Athanassios, and Hector E. Nistazakis. "Bayesian Sample Size Calculations with Imperfect Diagnostic Tests." In Advances in Computational Methods in Sciences and Engineering 2005 (2 vols). CRC Press, 2022. http://dx.doi.org/10.1201/9780429077166-66.

6

Li, Hui-Qiong, and Liu-Cang Wu. "Sample Size Determination via Non-unity Relative Risk for Stratified Matched-Pair Studies." In Computational Risk Management. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18387-4_54.

7

Pommellet, Adrien, Daniel Stan, and Simon Scatton. "SAT-Based Learning of Computation Tree Logic." In Automated Reasoning. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63498-7_22.

Abstract:
The learning problem consists in finding, for a given sample of positive and negative Kripke structures, a distinguishing formula that is verified by the former but not by the latter. Further constraints may bound the size and shape of the desired formula or even ask for its minimality in terms of syntactic size. This synthesis problem is motivated by explanation generation for dissimilar models, e.g. comparing a faulty implementation with the original protocol. We devise a SAT-based encoding for a fixed-size formula, then provide an incremental approach that guarantees minimality. We further report on a prototype implementation whose contribution is twofold: first, it allows us to assess the efficiency of various output fragments and optimizations. Secondly, we can experimentally evaluate this tool by randomly mutating Kripke structures or syntactically introducing errors in higher-level models, then learning distinguishing formulas.
8

Vaclavik, Marek, Zuzana Sikorova, and Iva Cervenkova. "Analysis of Independences of Normality on Sample Size with Regards to Reliability." In Computational Statistics and Mathematical Modeling Methods in Intelligent Systems. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31362-3_29.

9

Rajeshwari, I., and K. Shyamala. "Study on Performance of Classification Algorithms Based on the Sample Size for Crop Prediction." In Computational Vision and Bio-Inspired Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37218-7_110.

10

Fonger, Frederik, Niclas Nebelung, Arvid Lepsien, Milda Aleknonytė-Resch, and Agnes Koschmider. "Representative Sampling in Process Mining: Two Novel Sampling Algorithms for Event Logs." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_4.

Abstract:
Process mining allows the discovery of business processes from an event log. However, event logs are rapidly increasing in size, and process mining algorithms struggle with the computational load when efficient processing is required. This calls for methods that decrease the event log size while still preserving the representativeness of the event log. This paper presents two new algorithms for sampling event logs. The first algorithm chooses traces from an event log above a threshold and subsequently selects traces with underrepresented Directly Follows Relations. The second sampling algorithm selects samples that have a high intersection of Directly Follows Relations with the original event log. Usually, the two algorithms are combined for a more accurate sample representation. They perform well for conformance checking and excel in certain scenarios for process discovery. Thus, both algorithms outperform existing sampling algorithms.
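To make the role of Directly Follows Relations (DFRs) concrete, the sketch below computes the DFR set of a log, measures how much of it a sample covers, and greedily picks traces that add unseen DFRs. This is a generic illustration with invented activity names, not the paper's two algorithms:

```python
def dfr(traces):
    """Set of directly-follows relations (a, b) occurring in a list of traces."""
    return {(a, b) for t in traces for a, b in zip(t, t[1:])}

def dfr_coverage(sample, log):
    """Share of the full log's directly-follows relations present in the sample."""
    full = dfr(log)
    return len(dfr(sample) & full) / len(full)

def greedy_dfr_sample(log, k):
    """Pick k traces so that each newly chosen trace adds unseen DFRs."""
    chosen, covered, remaining = [], set(), list(log)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda t: len(dfr([t]) - covered))
        chosen.append(best)
        covered |= dfr([best])
        remaining.remove(best)
    return chosen

log = [("a", "b", "c", "d"), ("a", "c", "b", "d"), ("a", "b", "d"), ("a", "b", "c", "d")]
sample = greedy_dfr_sample(log, 2)
print(sample, dfr_coverage(sample, log))      # full DFR coverage with 2 of 4 traces
```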

Conference papers on the topic "Sample size computations"

1

Aminpour, Sara, Yaser Banad, and Sarah Sharif. "Quantum Machine Learning Performance Analysis: Accuracy and Efficiency Trade-offs in Linear Classification." In Frontiers in Optics. Optica Publishing Group, 2024. https://doi.org/10.1364/fio.2024.jw5a.72.

Abstract:
This study introduces the Nelder-Mead minimization method for data reuploading and examines the performance of quantum machine learning algorithms for linear classification using 1-qubit, 2-qubit, and 2-qubit entangled systems. We analyze accuracy and computation time across varying training sample sizes, revealing trade-offs between classification performance and computational efficiency in quantum systems.
2

Boniface, Jean-Christophe. "A Computational Framework for Helicopter Fuselage Drag Reduction Using Vortex Generators." In Vertical Flight Society 70th Annual Forum & Technology Display. The Vertical Flight Society, 2014. http://dx.doi.org/10.4050/f-0070-2014-9444.

Abstract:
A computational framework has been developed for the CFD computation of a blunt helicopter fuselage equipped with vortex generators (VG). The VG are explicitly discretized in the CFD mesh with the use of the overset grid method. A dedicated mesh generator developed at ONERA has been improved together with an all-in-one computational set-up, allowing parametric investigations for VG arrangement, shape, position, and size effects. The methodology has been applied to a down-scaled GOAHEAD-like wind-tunnel model, for drag reduction by passive flow control on the backdoor surface at cruise-flight conditions. A test-matrix has been completed and a reference VG configuration was identified as promising for an array of 2x8 counter-rotating VG with a zero-thickness, vane-type-like planform. The VG were sized according to the approximated boundary-layer thickness following a classical integral formulation for a longitudinal flow. At cruise conditions, the VG configuration achieves about 5% total drag reduction by strongly reducing the extension of the separated flow at the backdoor/tail boom junction. The proposed reference VG configuration will be tested during a wind-tunnel test campaign for the same GOAHEAD-like downscaled model.
3

Ge, Zhengwei, and Chun Yang. "Concentration of Samples in Microfluidic Structure Using Joule Heating Effects." In ASME 2009 Second International Conference on Micro/Nanoscale Heat and Mass Transfer. ASMEDC, 2009. http://dx.doi.org/10.1115/mnhmt2009-18308.

Abstract:
Microfluidic concentration is achieved using temperature gradient focusing (TGF) in a microchannel with a step change in cross-section. A mathematical model is developed to describe the complex TGF processes. The proposed mathematical model includes a set of governing equations for the applied electric potential, electroosmotic flow field, Joule heating induced temperature field, and sample analyte concentration distributions as well. Scaling analysis was conducted to estimate time scales so as to simplify the mathematical model. Numerical computations were performed to obtain the temperature, velocity and sample concentration distributions. Experiments were carried out to study the effects of applied voltage, buffer concentration, and channel size on sample concentration in the TGF processes. These effects were analyzed and summarized using a dimensionless Joule number that was introduced in this study. In addition, Joule number effect in the PDMS/PDMS microdevice was compared with the PDMS/Glass microdevice. A more than 450-fold concentration enhancement was obtained within 75 seconds in the PDMS/PDMS microdevice. Overall, the numerical simulations were found in a reasonable agreement with the experimental results.
4

Bos, Jeannine A., and Michael M. Chen. "On the Prediction of Weld Pool Size and Heat Affected Zone in Shallow-Pool Welding." In ASME 1996 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/imece1996-1035.

Abstract:
Numerical modeling of the weld pool has been pursued by many investigators over the last decade. Most of these efforts have focused on the details of fluid flow and heat transfer in the weld pool. These efforts have not addressed the important practical issues of the welding engineer, which include reliable prediction of the weld pool dimensions and the size of the heat affected zone based on welding parameters such as power and speed. A consequence of this is that available prediction methods are still based on heat conduction considerations without any input from weld pool convection analysis. In the present paper it is shown that the dependence of weld dimensions on power and speed is relatively insensitive to the details of the flow and is primarily influenced by the aspect ratio of the weld pool. It is possible to divide the prediction of weld dimensions into two separate tasks. One task is the prediction of the pool aspect ratio. This is the role of weld pool convection studies. The other is the prediction of weld dimensions based on given aspect ratios and welding parameters. This is accomplished through heat conduction analysis outside the weld pool. Numerical computations for the latter have been carried out and the results are presented in correlations which are convenient for prediction purposes. Sample calculations to illustrate the use of the results are also given.
5

Salazar, Addisson, Luis Vergara, and Alberto Gonzalez. "A Training Sample Size Estimation for the Bayes Classifier." In 2023 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2023. http://dx.doi.org/10.1109/csci62032.2023.00049.

6

Chang, Ernie, Muhammad Hassan Rashid, Pin-Jie Lin, et al. "Revisiting Sample Size Determination in Natural Language Understanding." In Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-acl.419.

7

Phan, John H., Richard A. Moffitt, Andrea B. Barrett, and May D. Wang. "Improving Microarray Sample Size Using Bootstrap Data Combination." In 2008 International Multi-symposiums on Computer and Computational Sciences (IMSCCS). IEEE, 2008. http://dx.doi.org/10.1109/imsccs.2008.36.

8

Pourgol-Mohammad, Mohammad. "Uncertainty Propagation in Complex Codes Calculations." In 2013 21st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icone21-16570.

Abstract:
The uncertainty propagation is an important segment of quantitative uncertainty analysis for complex computational codes (e.g., RELAP5 thermal-hydraulics computations). Different sampling techniques, dependencies between uncertainty sources, and accurate inference on results are among the issues to be considered. The dynamic behavior of the system codes executed in each time step results in the transformation of accumulated errors and uncertainties to the next time step. Depending on facility type, availability of data, scenario specification, computing machine and the software used, propagation of uncertainty yields considerably different results. This paper discusses the practical considerations of uncertainty propagation for code computations. The study evaluates the implications of the complexity on propagation of the uncertainties through inputs, sub-models and models. The study weighs different propagation techniques and their statistics, considering their advantages and limitations in dealing with the problem. The considered methods are response surface, Monte Carlo (including simple, Latin hypercube, and importance sampling) and bootstrap techniques. As a case study, the paper discusses uncertainty propagation in the Integrated Methodology on Thermal-Hydraulics Uncertainty Analysis (IMTHUA). The methodology comprehensively covers various aspects of complex code uncertainty assessment for important accident transients. It explicitly examines the TH code structural uncertainties by treating internal sub-model uncertainties and by propagating such model uncertainties along with parameters in the code calculations. The two-step specification of IMTHUA (an input phase followed by output updating) makes it a special case, ensuring that the figure-of-merit statistical coverage is achieved at the end with the target confidence level. Tolerance limit statistics provide a confidence level on the level of coverage depending on the sample size, the number of output measures, and the one-sided or two-sided type of statistics. This information should be transferred to the second phase in the form of a probability distribution for each of the output measures. The research question is how to use data to develop such distributions from the corresponding tolerance limit statistics. Two approaches, using extreme value methods and Bayesian updating, are selected to estimate the parametric distribution parameters and compare the coverage with respect to the selected coverage criteria. The analysis is demonstrated on the large-break loss-of-coolant accident for the LOFT test facility.
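The tolerance-limit statistics mentioned in this abstract are typically based on Wilks' formula, which links the number of code runs, the desired coverage, and the confidence level for order-statistic-based limits. A minimal sketch for a single output measure and the classic 95%/95% criteria; the paper's multi-output and two-phase treatment goes beyond this:

```python
def wilks_one_sided(coverage=0.95, confidence=0.95):
    """Smallest number N of code runs such that the sample maximum is a
    one-sided upper tolerance limit: require 1 - coverage**N >= confidence."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage=0.95, confidence=0.95):
    """First-order two-sided version based on the sample minimum and maximum."""
    n = 2
    while 1 - coverage ** n - n * (1 - coverage) * coverage ** (n - 1) < confidence:
        n += 1
    return n

print(wilks_one_sided())   # 59 runs for the classic 95%/95% one-sided criterion
print(wilks_two_sided())   # 93 runs for the two-sided criterion
```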
9

Pagalthivarthi, Krishnan V., John M. Furlan, and Robert J. Visintainer. "Effective Particle Size Representation for Erosion Wear in Centrifugal Pump Casings." In ASME 2017 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/fedsm2017-69240.

Abstract:
For the purpose of Computational Fluid Dynamic (CFD) simulations, the broad particle size distribution (PSD) encountered in industrial slurries is classified into a discrete number of size classes. Since mono-size simulations consume much less computational time, especially in 3D simulations, it would be advantageous to determine an equivalent single particle size representation which yields the same wear distribution predictions as the multi-size simulations. This work extends the previous two-dimensional study [1], which was for a specific PSD slurry flow through three selected pumps, to determine an effective equivalent mono-size representation. The current study covers two-dimensional simulations over a wide range of pumps of varying sizes (40 pumps), 2 inlet concentrations and 4 different particle size distributions. Comparison is made between the multi-size wear prediction and different possible representative mono-size particle wear predictions. In addition, a comparison of multi-size and different mono-size results using three dimensional simulations is also shown for a typical slurry pump as a sample case to highlight that the conclusions drawn for two dimensional simulation could hold good for three dimensional simulations as well. It is observed that by using a mono-size equivalent, the computation time is 20–25% of the computation time for multi-size (6-particle) simulation.
10

Dong, Guangling, Chi He, and Zhengguo Dai. "Optimal sample size allocation for integrated test scheme." In 2015 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). IEEE, 2015. http://dx.doi.org/10.1109/civemsa.2015.7158624.


Reports on the topic "Sample size computations"

1

Gantzer, Clark J., Shmuel Assouline, and Stephen H. Anderson. Synchrotron CMT-measured soil physical properties influenced by soil compaction. United States Department of Agriculture, 2006. http://dx.doi.org/10.32747/2006.7587242.bard.

Abstract:
Methods were developed to quantify soil pore connectivity, tortuosity, and pore size as altered by compaction. Air-dry soil cores were scanned for x-ray computed microtomography at the GeoSoilEnviroCARS sector of the Advanced Photon Source at the Argonne facility. Data were collected on the APS bending magnet Sector 13. Soil sample cores of 5 by 5 mm were studied. Skeletonization algorithms in the 3DMA-Rock software of Lindquist et al. were used to extract pore structure. We numerically investigated the spatial distributions of six geometrical characteristics of the pore structure of repacked Hamra soil from three-dimensional synchrotron computed microtomography (CMT) images. We analyzed images representing core volumes of 58.3 mm³ having average porosities of 0.44, 0.35, and 0.33. Cores were packed with <2 mm and <0.5 mm sieved soil. The core samples were imaged at 9.61-μm resolution. Spatial distributions for pore path length and coordination number, pore throat size and nodal pore volume were obtained. The spatial distributions were computed using a three-dimensional medial axis analysis of the void space in the image. We used a newly developed aggressive throat computation to find the throat and pore partitioning needed for higher-porosity media such as soil. Results show that the coordination number distribution measured from the medial axis was reasonably fit by an exponential relation P(C) = 10^(−C/C0). Data for the characteristic area were also reasonably well fit by the relation P(A) = 10^(−A/A0). Results indicate that compression preferentially affects the largest pores, reducing them in size. When compaction reduced porosity from 44% to 33%, the average pore volume was reduced by 30%, and the average pore-throat area was reduced by 26%. Compaction increased the shortest-path interface tortuosity by about 2%. Quantitative morphology of the soil structure alterations induced by compaction shows that the resolution is sufficient to discriminate between soil cores. This study shows that analysis of CMT can provide information to assist in the assessment of soil management to ameliorate soil compaction.
2

Lozev. L52022 Validation of Current Approaches for Girth Weld Defect Sizing Accuracy. Pipeline Research Council International, Inc. (PRCI), 2002. http://dx.doi.org/10.55274/r0011325.

Abstract:
Computational tools based on probabilistic fracture mechanics have been developed to enable reliability-based fitness-for-service assessments of flawed girth welds. The same tools are readily adapted for establishing maximum allowable defect sizes to achieve targeted weld reliability. Sensitivity studies have shown that of the various input parameter uncertainties, measured defect height often has the greatest impact on the probabilities of both fracture and plastic collapse. A reduction in sizing uncertainty should thus dramatically improve predicted reliabilities. The increasing use of mechanized ultrasonic testing (UT) in pipeline construction, driven by the demands of engineering critical assessment (ECA) -based acceptance criteria, highlights the need to quantify this uncertainty, particularly for systems incorporating pulse-echo (P/E) and time-of-flight diffraction (TOFD) methods and phased-array (PA) technology. EWI collected third-party independent data and statistically characterized the systematic and random errors in girth weld defect sizing, as measured by mechanized UT using P/E and TOFD methods, as well as PA ultrasonic technology, in support of pipeline reliability assessments.
3

Wu, Yingjie, Selim Gunay, and Khalid Mosalam. Hybrid Simulations for the Seismic Evaluation of Resilient Highway Bridge Systems. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2020. http://dx.doi.org/10.55461/ytgv8834.

Abstract:
Bridges often serve as key links in local and national transportation networks. Bridge closures can result in severe costs, not only in the form of repair or replacement, but also in the form of economic losses related to medium- and long-term interruption of businesses and disruption to surrounding communities. In addition, continuous functionality of bridges is very important after any seismic event for emergency response and recovery purposes. Considering the importance of these structures, the associated structural design philosophy is shifting from collapse prevention to maintaining functionality in the aftermath of moderate to strong earthquakes, referred to as “resiliency” in earthquake engineering research. Moreover, the associated construction philosophy is being modernized with the utilization of accelerated bridge construction (ABC) techniques, which strive to reduce the impact of construction on traffic, society, economy and on-site safety. This report presents two bridge systems that target the aforementioned issues. A study that combined numerical and experimental research was undertaken to characterize the seismic performance of these bridge systems. The first part of the study focuses on the structural system-level response of highway bridges that incorporate a class of innovative connecting devices called the “V-connector,” which can be used to connect two components in a structural system, e.g., the column and the bridge deck, or the column and its foundation. This device, designed by ACII, Inc., results in an isolation surface at the connection plane via a connector rod placed in a V-shaped tube that is embedded into the concrete. Energy dissipation is provided by friction between a special washer located around the V-shaped tube and a top plate. Because of the period elongation due to the isolation layer and the limited amount of force transferred by the relatively flexible connector rod, bridge columns are protected from experiencing damage, thus leading to improved seismic behavior. The V-connector system also facilitates the ABC by allowing on-site assembly of prefabricated structural parts including those of the V-connector. A single-column, two-span highway bridge located in Northern California was used for the proof-of-concept of the proposed V-connector protective system. The V-connector was designed to result in an elastic bridge response based on nonlinear dynamic analyses of the bridge model with the V-connector. Accordingly, a one-third scale V-connector was fabricated based on a set of selected design parameters. A quasi-static cyclic test was first conducted to characterize the force-displacement relationship of the V-connector, followed by a hybrid simulation (HS) test in the longitudinal direction of the bridge to verify the intended linear elastic response of the bridge system. In the HS test, all bridge components were analytically modeled except for the V-connector, which was simulated as the experimental substructure in a specially designed and constructed test setup. Linear elastic bridge response was confirmed according to the HS results. The response of the bridge with the V-connector was compared against that of the as-built bridge without the V-connector, which experienced significant column damage. These results justified the effectiveness of this innovative device.
The second part of the study presents the HS test conducted on a one-third scale two-column bridge bent with self-centering columns (broadly defined as “resilient columns” in this study) to reduce (or ultimately eliminate) any residual drifts. The comparison of the HS test with a previously conducted shaking table test on an identical bridge bent is one of the highlights of this study. The concept of resiliency was incorporated in the design of the bridge bent columns characterized by a well-balanced combination of self-centering, rocking, and energy-dissipating mechanisms. This combination is expected to lead to minimum damage and low levels of residual drifts. The ABC is achieved by utilizing precast columns and end members (cap beam and foundation) through an innovative socket connection. In order to conduct the HS test, a new hybrid simulation system (HSS) was developed, utilizing commonly available software and hardware components in most structural laboratories including: a computational platform using Matlab/Simulink [MathWorks 2015], an interface hardware/software platform dSPACE [2017], and MTS controllers and data acquisition (DAQ) system for the utilized actuators and sensors. Proper operation of the HSS was verified using a trial run without the test specimen before the actual HS test. In the conducted HS test, the two-column bridge bent was simulated as the experimental substructure while modeling the horizontal and vertical inertia masses and corresponding mass proportional damping in the computer. The same ground motions from the shaking table test, consisting of one horizontal component and the vertical component, were applied as input excitations to the equations of motion in the HS. Good matching was obtained between the shaking table and the HS test results, demonstrating the appropriateness of the defined governing equations of motion and the employed damping model, in addition to the reliability of the developed HSS with minimum simulation errors. The small residual drifts and the minimum level of structural damage at large peak drift levels demonstrated the superior seismic response of the innovative design of the bridge bent with self-centering columns. The reliability of the developed HS approach motivated performing a follow-up HS study focusing on the transverse direction of the bridge, where the entire two-span bridge deck and its abutments represented the computational substructure, while the two-column bridge bent was the physical substructure. This investigation was effective in shedding light on the system-level performance of the entire bridge system that incorporated innovative bridge bent design beyond what can be achieved via shaking table tests, which are usually limited by large-scale bridge system testing capacities.