
Journal articles on the topic 'Minimum sample size'


Consult the top 50 journal articles for your research on the topic 'Minimum sample size.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Louangrath, P. I. "Sample Size Calculation for Continuous and Discrete Data." International Journal of Research and Methodology in Social Science 5, no. 4 (2019): 44–56. https://doi.org/10.5281/zenodo.3877623.

Full text
Abstract:
The purpose of this paper is to provide practical guidance to researchers in social science on sample size determination. Sample size calculation is a basic and indispensable requisite for applied research in social science. Most research in social science involves population studies, in which researchers can only study a sample of the population because detailed examination of the whole population is not feasible. For the sample to represent the population, a minimum sample size must be obtained; minimum sample size determination therefore becomes a critical requisite in survey collection, interviews, and other forms of data collection. In this paper, we present minimum sample calculation methods for continuous and discrete data in non-time-series scenarios. The data came from values randomly generated with the Excel command rand()*100 for test sample sizes of n = 5, 10, 20, 30, 50, 100, 200, 300, 400, 500, and 1,000. We propose a new minimum sample size method that consistently produces n = 30.
APA, Harvard, Vancouver, ISO, and other styles
2

Louangrath, P.I. "Minimum Sample Size Method Based on Survey Scales." Inter. J. Res. Methodol. Soc. Sci. 3, no. 3 (2017): 44–52. https://doi.org/10.5281/zenodo.1322593.

Full text
Abstract:
The objective of this paper is to introduce a new sample size calculation method based on the type of response scale used in surveys. The current literature on sample size calculation focuses on data attributes and distribution; there is no prior research using the response scale as the basis for minimum sample size calculation. This paper fills that gap in the literature. We introduce a new minimum sample size calculation method called n* (n-Star), which uses Monte Carlo iteration to find asymptotic normality in the survey response scale. This new method allows us to achieve up to 95% accuracy in sample-population inference. The data used in this study came from the numerical elements of the survey scales. Three Likert and one non-Likert scale were used to determine the minimum sample size. Through Monte Carlo simulation and NK landscape optimization, we found that the minimum sample size according to survey scales is in all cases n* = 31.61 ± 2.33 (p < 0.05). We combined the four scales to test the validity and reliability of the new sample size. Validity was tested by the NK landscape optimization method, which resulted in an error of F(z*) = 0.001 compared with the theoretical value for the center of the distribution curve of F(z) = 0.00. Reliability was tested using a Weibull system analysis method; the system drift tendency was L = 0.00 and the system reliability was R = 1.00.
APA, Harvard, Vancouver, ISO, and other styles
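
The abstract above finds a minimum sample size by running Monte Carlo simulations on the numerical values of survey scales until the responses behave asymptotically normally. The sketch below is only a rough illustration of that general idea, not the authors' n* procedure: it draws repeated samples of increasing size from an assumed 5-point Likert scale and checks when the sampling distribution of the mean stops rejecting normality. The scale probabilities, replication count, and normality test are all assumptions.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
scale = np.array([1, 2, 3, 4, 5])            # 5-point Likert values (assumed)
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # assumed response probabilities

def means_look_normal(n, reps=500, alpha=0.05):
    """Draw `reps` samples of size n from the scale and test whether the
    resulting sample means are approximately normal (Shapiro-Wilk)."""
    means = np.array([rng.choice(scale, size=n, p=probs).mean()
                      for _ in range(reps)])
    return shapiro(means).pvalue > alpha

for n in (5, 10, 20, 30, 50):
    print(n, means_look_normal(n))
```
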
3

Scaketti, Matheus, Patricia Sanae Sujii, Alessandro Alves-Pereira, et al. "Sample Size Impact (SaSii): An R script for estimating optimal sample sizes in population genetics and population genomics studies." PLOS ONE 20, no. 2 (2025): e0316634. https://doi.org/10.1371/journal.pone.0316634.

Full text
Abstract:
Obtaining large sample sizes for genetic studies can be challenging, time-consuming, and expensive, and small sample sizes may generate biased or imprecise results. Many studies have suggested the minimum sample size necessary to obtain robust and reliable results, but it is not possible to define one ideal minimum sample size that fits all studies. Here, we present SaSii (Sample Size Impact), an R script to help researchers define the minimum sample size. Based on empirical and simulated data analysis using SaSii, we present patterns and suggest minimum sample sizes for experiment design. The patterns were obtained by analyzing previously published genotype datasets with SaSii and can be used as a starting point for the sample design of population genetics and genomic studies. Our results showed that it is possible to estimate an adequate sample size that accurately represents the real population without requiring the scientist to write any program code, extract and sequence samples, or use population genetics programs, thus simplifying the process. We also confirmed that the minimum sample sizes for SNP (single-nucleotide polymorphism) analysis are usually smaller than for SSR (simple sequence repeat) analysis and discussed other patterns observed from empirical plant and animal datasets.
APA, Harvard, Vancouver, ISO, and other styles
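
SaSii itself is an R script and the abstract does not spell out its algorithm. Purely to illustrate the general resampling idea it describes (estimate a quantity from progressively larger random subsamples and look for the size at which the estimate stabilises), here is a hedged Python sketch on toy allele-indicator data; the data, statistic, and subsample grid are assumptions, not SaSii's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "population": 0/1 allele indicators for 500 individuals (assumed data).
population = rng.binomial(1, 0.3, size=500)

def subsample_sd(data, n, reps=200):
    """Standard deviation of the estimated frequency across `reps`
    random subsamples of size n (drawn without replacement)."""
    ests = [rng.choice(data, size=n, replace=False).mean() for _ in range(reps)]
    return float(np.std(ests))

# The sample size at which between-subsample variability levels off is a
# crude stand-in for an "adequate" sample size.
for n in (5, 10, 20, 40, 80, 160):
    print(f"n={n:3d}  between-subsample SD={subsample_sd(population, n):.3f}")
```
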
4

Louangrath, Paul, and Chanoknath Sutanapong. "Minimum Sample Size Calculation Using Cumulative Distribution Function." Inter. J. Res. Methodol. Soc. Sci. 5, no. 1 (2019): 100–113. https://doi.org/10.5281/zenodo.2667494.

Full text
Abstract:
A minimum sample size is a requirement in most experimental designs. Research in social science requires minimum sample size calculation in order to support the claim that the sample represents the population; if the sample does not adequately represent the population, generalizability cannot be achieved. In this study, we present a minimum sample size calculation method that uses the cumulative distribution function (CDF) of the normal distribution. Since most quantitative data in social science research come from surveys with responses in the form of Likert or non-Likert scales, the CDF of the normal distribution curve is an appropriate tool for sample size determination. We use binary data of the form (0,1) and continuous data in the form of a quantitative non-Likert scale (0,1,2,3) and Likert scales (1,2,3,4,5), (1,2,3,4,5,6,7), and (1,2,3,4,5,6,7,8,9,10) as the bases for our modeling. We used Monte Carlo simulation to determine the number of repetitions needed for each scale to achieve normality. The minimum sample size was determined by taking the natural log of the Monte Carlo repetitions multiplied by pi. We found that in all cases the minimum sample size is about 30 when the confidence interval is maintained at 95%. For the non-parametric case, the new sample size calculation method may be used for discrete and continuous data. For parametric modeling, we employed the entropy function for common distributions as the basis for sample size determination. This proposed method is a contribution to the field because it serves as a unified method for all data types and is a practical tool in research methodology.
APA, Harvard, Vancouver, ISO, and other styles
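
The abstract says the minimum sample size was obtained by "taking the natural log of the Monte Carlo repetition multiplied by pi". On one literal reading that is n ≈ π·ln(R), where R is the number of Monte Carlo repetitions; the short calculation below only shows what R that reading implies for n ≈ 30. This is an interpretation of the abstract's wording, not a formula confirmed against the paper.

```python
import math

# If n = pi * ln(R), then R = exp(n / pi).
for n in (28, 30, 32):
    R = math.exp(n / math.pi)
    print(f"n = {n}  ->  implied Monte Carlo repetitions R ~ {R:,.0f}")
# Under this reading, n = 30 corresponds to roughly 14,000 repetitions.
```
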
5

Straat, J. Hendrik, L. Andries van der Ark, and Klaas Sijtsma. "Minimum Sample Size Requirements for Mokken Scale Analysis." Educational and Psychological Measurement 74, no. 5 (2014): 809–22. http://dx.doi.org/10.1177/0013164414529793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jenkins, David G., and Pedro F. Quintana-Ascencio. "A solution to minimum sample size for regressions." PLOS ONE 15, no. 2 (2020): e0229345. http://dx.doi.org/10.1371/journal.pone.0229345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mundfrom, Daniel J., Dale G. Shaw, and Tian Lu Ke. "Minimum Sample Size Recommendations for Conducting Factor Analyses." International Journal of Testing 5, no. 2 (2005): 159–68. http://dx.doi.org/10.1207/s15327574ijt0502_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jacka, T. H. "Investigations of discrepancies between laboratory studies of the flow of ice: density, sample shape and size, and grain-size." Annals of Glaciology 19 (1994): 146–54. http://dx.doi.org/10.3189/1994aog19-1-146-154.

Full text
Abstract:
Laboratory results are presented concerning ice creep at minimum creep rate (at ~1% strain) for fine-grained, initially isotropic, polycrystalline samples. The effect on the creep rate of ice density, sample shape (aspect ratio) and size, grain-size and ratio of grain-size to sample size is examined. Provided sample density is above ~0.83 Mg m−3 (i.e. the close-off density), there is no effect of density on ice-creep rate. Results provide no evidence of a creep rate dependence on test sample length for cylindrical samples. Sample diameter, however, does affect creep rate. Over the range of sample diameters studied (16.2 to 90 mm) creep rate decreases monotonically by a factor of ~4. This effect is independent of sample aspect ratio. Experiments examining size effects in simple shear indicate no dependence of minimum flow rate on shape or size in this stress configuration. Two grain-sizes were represented within the samples tested for the effect of sample size. As expected from earlier work, no grain-size effect on minimum creep rate is evident. In addition, there was no evidence of an effect on creep rate of the ratio of grain-size to sample size.
APA, Harvard, Vancouver, ISO, and other styles
9

Jacka, T. H. "Investigations of discrepancies between laboratory studies of the flow of ice: density, sample shape and size, and grain-size." Annals of Glaciology 19 (1994): 146–54. http://dx.doi.org/10.1017/s0260305500011137.

Full text
Abstract:
Laboratory results are presented concerning ice creep at minimum creep rate (at ~1% strain) for fine-grained, initially isotropic, polycrystalline samples. The effect on the creep rate of ice density, sample shape (aspect ratio) and size, grain-size and ratio of grain-size to sample size is examined. Provided sample density is above ~0.83 Mg m−3 (i.e. the close-off density), there is no effect of density on ice-creep rate. Results provide no evidence of a creep rate dependence on test sample length for cylindrical samples. Sample diameter, however, does affect creep rate. Over the range of sample diameters studied (16.2 to 90 mm) creep rate decreases monotonically by a factor of ~4. This effect is independent of sample aspect ratio. Experiments examining size effects in simple shear indicate no dependence of minimum flow rate on shape or size in this stress configuration. Two grain-sizes were represented within the samples tested for the effect of sample size. As expected from earlier work, no grain-size effect on minimum creep rate is evident. In addition, there was no evidence of an effect on creep rate of the ratio of grain-size to sample size.
APA, Harvard, Vancouver, ISO, and other styles
10

Ma, Chenchen, and Shihong Yue. "Minimum Sample Size Estimate for Classifying Invasive Lung Adenocarcinoma." Applied Sciences 12, no. 17 (2022): 8469. http://dx.doi.org/10.3390/app12178469.

Full text
Abstract:
Statistical Learning Theory (SLT) plays an important role in prediction estimation and machine learning when only limited samples are available. At present, determining how many samples are necessary under given circumstances to reach a desired prediction accuracy remains an open question. In this paper, the medical diagnosis of lung cancer is taken as an example of the problem. Invasive adenocarcinoma (IA) is a main type of lung cancer, often presenting as ground glass nodules (GGNs) in patients' CT images. Accurately discriminating IA from non-IA based on GGNs has important implications for choosing the right approach to treatment and cure. The Support Vector Machine (SVM), an application of SLT, is used to classify GGNs, and through it the relationship between generalization performance and the lower bound on the necessary number of samples can be recovered. To validate this relationship, 436 GGNs were collected and labeled using surgical pathology. A feature vector was then constructed for each GGN sample through the fully connected layer of AlexNet, and a 10-dimensional feature subset was selected using p-values calculated with Analysis of Variance (ANOVA). Finally, four sets with different sample sizes were used to construct SVM classifiers. Experiments show that the theoretical estimate of the minimum sample size is consistent with actual values, and that the lower bound on sample size can be solved under various generalization requirements.
APA, Harvard, Vancouver, ISO, and other styles
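
The pipeline described above (deep features, ANOVA-based selection of a 10-dimensional subset, then an SVM trained on sets of different sizes) can be mimicked on synthetic data with scikit-learn. The sketch below is such a mock-up; the generated features merely stand in for the AlexNet features and the subset sizes are arbitrary, so it illustrates the workflow rather than reproducing the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 436 GGN feature vectors (assumed, not the real data).
X, y = make_classification(n_samples=436, n_features=256, n_informative=20,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# ANOVA F-test selects a 10-dimensional subset, as in the abstract.
clf = make_pipeline(SelectKBest(f_classif, k=10), StandardScaler(), SVC())

# Train on nested subsets of increasing size and watch test accuracy.
for n in (30, 60, 120, 240, len(X_train)):
    clf.fit(X_train[:n], y_train[:n])
    print(f"training size {n:3d}: test accuracy = {clf.score(X_test, y_test):.3f}")
```
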
11

Tekindal, Mustafa Agah, and Ayşe Canan Yazıcı. "Williams Test Required Sample Size for Determining the Minimum Effective Dose." Turkiye Klinikleri Journal of Biostatistics 8, no. 1 (2016): 53–81. http://dx.doi.org/10.5336/biostatic.2015-46401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Chen, Zhenmin, and Fang Zhao. "Determining Minimum Survey Sample Size for Multi-Cell Case." International Journal of Reliability, Quality and Safety Engineering 17, no. 06 (2010): 579–86. http://dx.doi.org/10.1142/s0218539310003962.

Full text
Abstract:
Survey analysis methods are widely used in many areas such as social studies, marketing research, economics, public health, clinical trials, and transportation data analysis. Minimum sample size determination is always needed before a survey is conducted in order to avoid excessive cost. Some statistical methods for finding the minimum required sample size can be found in the literature. This paper proposes a method for finding the minimum total sample size needed for a survey when the population is divided into cells. The proposed method can be used for both the infinite population case and the finite population case. A computer program is needed to carry out the sample size calculation; the authors used SAS/IML, the integrated matrix language (IML) procedure of the Statistical Analysis System (SAS) software.
APA, Harvard, Vancouver, ISO, and other styles
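
The paper's multi-cell method is implemented in SAS/IML and is not detailed in the abstract. As background, the sketch below computes the textbook minimum sample size for estimating a proportion in a single cell, with and without the finite population correction, and naively sums it over cells; the margin of error, confidence level, and cell populations are illustrative assumptions, not the paper's procedure.

```python
import math
from scipy.stats import norm

def min_n(p=0.5, margin=0.05, conf=0.95, N=None):
    """Classic minimum sample size for a proportion; applies the finite
    population correction when a cell population N is given."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)
    return math.ceil(n0)

print(min_n())        # infinite-population case: 385
print(min_n(N=1000))  # finite cell of 1,000 units: 278
print(sum(min_n(N=N) for N in (500, 1000, 2000)))  # naive multi-cell total
```
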
13

Honglei, Wang, and Hu Sigui. "Minimum truncated sample size sequential test for a proportion." SCIENTIA SINICA Mathematica 49, no. 6 (2019): 931. http://dx.doi.org/10.1360/n012016-00084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Trenhaile, Alan S., and V. Chris Lakhan. "Transverse micro-erosion meter measurements; determining minimum sample size." Geomorphology 134, no. 3-4 (2011): 431–39. http://dx.doi.org/10.1016/j.geomorph.2011.07.018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Cai, Yuzhi, and Dominic Hames. "Minimum Sample Size Determination for Generalized Extreme Value Distribution." Communications in Statistics - Simulation and Computation 40, no. 1 (2010): 87–98. http://dx.doi.org/10.1080/03610918.2010.530368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Niedzielski, Tomasz, Piotr Migoń, and Agnieszka Placek. "A minimum sample size required from Schmidt hammer measurements." Earth Surface Processes and Landforms 34, no. 13 (2009): 1713–25. http://dx.doi.org/10.1002/esp.1851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Westfall, James A., Paul L. Patterson, and John W. Coulston. "Post-stratified estimation: within-strata and total sample size recommendations." Canadian Journal of Forest Research 41, no. 5 (2011): 1130–39. http://dx.doi.org/10.1139/x11-031.

Full text
Abstract:
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many different population structures. The impacts of minimum within-strata and total sample sizes on estimates of means and standard errors were examined for two forest inventory variables: proportion forestland and cubic net volume. Estimates of the means seem unbiased across a range of minimum within-strata sample sizes. A ratio that described the decrease in variability with increasing sample size allowed for assessment of minimum within-strata sample requirements to obtain stable estimates of means. This metric indicated that the minimum within-strata sample size should be at least 10. Estimates of standard errors were found to be biased at small total sample sizes. To obtain a bias of less than 3%, the required minimum total sample size was 25 for proportion forestland and 75 for cubic net volume. The results presented allow analysts to determine within-stratum and total sample size requirements corresponding to their criteria for acceptable levels of bias and variability.
APA, Harvard, Vancouver, ISO, and other styles
18

Madanes, Nora, and José Roberto Dadon. "Assessment of the minimum sample size required to characterize site‐scale airborne pollen." Grana 37, no. 4 (1998): 239–45. http://dx.doi.org/10.1080/00173139809362673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Rajabi, Mahdi, Patrick Gerard, and Jennifer Ogle. "Highway Safety Manual Calibration: Estimating the Minimum Required Sample Size." Transportation Research Record: Journal of the Transportation Research Board 2676, no. 4 (2021): 510–23. http://dx.doi.org/10.1177/03611981211062219.

Full text
Abstract:
Crash frequency has been identified by many experts as one of the most important safety measures, and the Highway Safety Manual (HSM) encompasses the most commonly accepted predictive models for predicting crash frequency on specific road segments and intersections. The HSM recommends that the models be calibrated using data from the jurisdiction where they will be applied. One of the most common start-up issues with the calibration process is how to estimate the required sample size to achieve a specific level of precision, which can be a function of the variance of the calibration factor. Published research has indicated great variance in sample size requirements, and some of the requirements are so large that they may deter state departments of transportation (DOTs) from conducting calibration studies. In this study, an equation is derived to estimate the sample size based on the coefficient of variation of the calibration factor and the coefficient of variation of the observed crashes. Using this equation, a framework is proposed for state and local agencies to estimate the required sample size for calibration based on their desired level of precision. Using two recent calibration studies, from South Carolina and North Carolina, it is shown that the proposed framework leads to more accurate estimates of sample size than current HSM recommendations. Whereas the minimum sample size requirement published in the HSM is based on the summation of the observed crashes, this paper demonstrates that such a criterion may result in calibration factors that are not equally precise, and that the coefficient of variation of the observed crashes can be considered instead.
APA, Harvard, Vancouver, ISO, and other styles
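
The abstract states that the derived equation links the required sample size to the coefficient of variation (CV) of the calibration factor and the CV of the observed crashes. A common relation of that general form, used in the sketch below, follows from assuming the calibration factor's CV shrinks like CV(y)/√n; this is a simplifying assumption for illustration and is not necessarily the paper's exact equation.

```python
import math

def required_sites(cv_observed, cv_target):
    """Sites needed so the calibration factor's CV falls to cv_target,
    assuming CV(C) ~ cv_observed / sqrt(n) (a simplifying assumption)."""
    return math.ceil((cv_observed / cv_target) ** 2)

# Example: observed crash counts with CV of 1.2, target CV of 0.10 for C.
print(required_sites(1.2, 0.10))  # -> 144 sites
```
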
20

Jones, P. W., and S. A. Madhi. "Bayesian minimum sample size designs for the bernoulli selection problem." Sequential Analysis 7, no. 1 (1988): 1–10. http://dx.doi.org/10.1080/07474948808836139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ustyuzhaninov, V. G. "Minimum sample size to identify nonzero coefficients in normal regression." Cybernetics and Systems Analysis 30, no. 2 (1994): 170–80. http://dx.doi.org/10.1007/bf02366422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ramos, Antônio, and Elbert Macau. "Minimum Sample Size for Reliable Causal Inference Using Transfer Entropy." Entropy 19, no. 4 (2017): 150. http://dx.doi.org/10.3390/e19040150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Han, J. S., and P. B. Butler. "Reliability of Minimum Sample Size for Irregularly Shaped Fine Particles." Particulate Science and Technology 6, no. 3 (1988): 305–15. http://dx.doi.org/10.1080/02726358808906504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lee, Cheon-Sig, Stephen A. Sedory, and Sarjinder Singh. "Simulated Minimum Sample Size Requirements in Various Randomized Response Models." Communications in Statistics - Simulation and Computation 42, no. 4 (2013): 771–89. http://dx.doi.org/10.1080/03610918.2012.655830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cox, Nicholas J. "Speaking Stata: Finding the denominator: Minimum sample size from percentages." Stata Journal: Promoting communications on statistics and Stata 23, no. 4 (2023): 1086–95. http://dx.doi.org/10.1177/1536867x231212453.

Full text
Abstract:
Percentage breakdowns for a series of classes or categories are sometimes reported without a specification of class frequencies or even the total sample size. This column surveys the problem of estimating the minimum sample size and class frequencies consistent with a reported breakdown and a particular resolution. I introduce and explain a new command, find_denom. Rounding quirks whereby a total is reported as above or below 100% are discussed as a complication.
APA, Harvard, Vancouver, ISO, and other styles
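
find_denom is a Stata command; the following is a hedged Python brute-force analogue of the underlying search problem: the smallest denominator n for which integer class counts summing to n can round to the reported percentages at the stated resolution. It is not Cox's implementation and ignores the rounding quirks (totals above or below 100%) discussed in the column.

```python
from itertools import product

def min_denominator(percents, resolution=1.0):
    """Smallest n such that integer counts k_i summing to n can round to
    the reported percentages at the given resolution (brute force)."""
    half = resolution / 2.0
    n = 1
    while True:
        candidates = []
        for p in percents:
            ks = [k for k in range(n + 1) if abs(100.0 * k / n - p) <= half]
            if not ks:
                break
            candidates.append(ks)
        else:
            if any(sum(combo) == n for combo in product(*candidates)):
                return n
        n += 1

print(min_denominator([33, 33, 33]))  # 3: counts (1, 1, 1) each round to 33%
print(min_denominator([62, 23, 15]))  # smallest sample consistent with these %
```
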
26

Hassouna, Fady M. A., and Khaled Al-Sahili. "Practical Minimum Sample Size for Road Crash Time-Series Prediction Models." Advances in Civil Engineering 2020 (December 29, 2020): 1–12. http://dx.doi.org/10.1155/2020/6672612.

Full text
Abstract:
Road crashes are a serious problem facing the transportation sector. Crash data in many countries are available only for the past 10 to 20 years, which makes it difficult to determine whether the data are sufficient to establish reasonable and accurate prediction rates. In this study, the effect of sample size (the number of years used to develop a prediction model) on crash prediction accuracy with the autoregressive integrated moving average (ARIMA) method was investigated using crash data for 1971–2015. Based on the availability of annual crash records, road crash data for four selected countries (Denmark, Turkey, Germany, and Israel) were used to develop crash prediction models with different sample sizes (45, 35, 25, and 15 years). Crash data for 2016 and 2017 were then used to verify the accuracy of the developed models, and crash data for Palestine were used to test the validity of the results. The data included fatality, injury, and property damage crashes. The results showed similar trends in the models' prediction accuracy for all four countries when predicting road crashes for 2016: decreasing the sample size led to lower prediction accuracy down to a sample size of 25 years, after which accuracy increased again for the 15-year sample size. For 2017 there was no specific trend in prediction accuracy, and a wider range of prediction error was obtained. It is concluded that prediction accuracy varies with the socioeconomic conditions, traffic safety programs, and development of the country over the study years. For countries with steady and stable conditions, modeling with larger sample sizes yields models with higher accuracy and better prediction capabilities; for countries with less stable conditions, modeling with smaller sample sizes (15 years, for example) can produce models with good prediction capabilities. It is therefore recommended that the socioeconomic and traffic safety program status of the country be considered before selecting the practical minimum sample size that gives acceptable prediction accuracy, thereby saving the effort and time spent collecting data (more is not always better). Moreover, based on the data analysis results, long-term ARIMA prediction models should be used with caution.
APA, Harvard, Vancouver, ISO, and other styles
27

McCluskey, Connor J., Manton J. Guers, and Stephen C. Conlon. "Minimum sample size for extreme value statistics of flow-induced response." Marine Structures 79 (September 2021): 103048. http://dx.doi.org/10.1016/j.marstruc.2021.103048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Zhengjia, and Xinjia Chen. "Exact calculation of minimum sample size for estimating a Poisson parameter." Communications in Statistics - Theory and Methods 45, no. 16 (2015): 4692–715. http://dx.doi.org/10.1080/03610926.2014.927497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Xinjia. "Exact computation of minimum sample size for estimation of binomial parameters." Journal of Statistical Planning and Inference 141, no. 8 (2011): 2622–32. http://dx.doi.org/10.1016/j.jspi.2011.02.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Vijayaraghavan, R., A. Loganathan, and D. Rajalakshmi. "Selection of Minimum Sample Size Tightened-Normal-Tightened Sampling Inspection Schemes." Journal of Testing and Evaluation 42, no. 1 (2013): 20120330. http://dx.doi.org/10.1520/jte20120330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Villela-Suárez, Juan M., Oscar A. Aguirre-Calderón, Eduardo J. Treviño-Garza, Marco A. González-Tagle, Israel Yerena-Yamallel, and Benedicto Vargas-Larreta. "Minimum sample size for fitting compatible taper-volume functions for three pine species in Chihuahua." Revista Chapingo Serie Ciencias Forestales y del Ambiente 27, no. 1 (2020): 143–63. http://dx.doi.org/10.5154/r.rchscfa.2020.04.031.

Full text
Abstract:
Introduction: The choice of sample size is an important decision in the development of volume models and taper functions. Objective: To calculate the minimum sample size required for fitting compatible taper-volume functions for Pinus arizonica Engelm., P. durangensis Martínez and P. engelmannii Carr. in Chihuahua. Materials and methods: The methodology was divided into three phases: (i) fitting of a linear regression model to the diameter-height data of 50 trees of each species in the three forest regions; (ii) calculation of the minimum sample size required, and (iii) comparison of the goodness of fit of the taper-volume function using both sample sizes. Results and discussion: The minimum number of trees calculated ranged from 53 (Pinus durangensis) to 88 (P. engelmannii) and lies within the range reported in studies carried out to estimate the optimal sample size for the development of taper functions. No significant differences were observed in the goodness of fit (α = 0.05) in terms of the R² and the root mean square error, using the full sample size and the calculated minimum sample size; no significant effect was observed in the stem volume estimates. Conclusion: The use of small samples in the fit of taper-volume models generates accurate estimates if adequate representation of the study population is ensured.
APA, Harvard, Vancouver, ISO, and other styles
32

Rogers, Gordon, Martin Szomszor, and Jonathan Adams. "Sample size in bibliometric analysis." Scientometrics 125, no. 1 (2020): 777–94. http://dx.doi.org/10.1007/s11192-020-03647-7.

Full text
Abstract:
While bibliometric analysis is normally able to rely on complete publication sets, this is not universally the case. For example, Australia (in ERA) and the UK (in the RAE/REF) use institutional research assessment that may rely on small or fractional parts of researcher output. Using the Category Normalised Citation Impact (CNCI) for the publications of ten universities with similar output (21,000–28,000 articles and reviews) indexed in the Web of Science for 2014–2018, we explore the extent to which a ‘sample’ of institutional data can accurately represent the averages and/or the correct relative status of the population CNCIs. Starting with full institutional data, we find a high variance in average CNCI across 10,000 institutional samples of fewer than 200 papers, which we suggest may be an analytical minimum, although smaller samples may be acceptable for qualitative review. When considering the ‘top’ CNCI paper in researcher sets represented by DAIS-ID clusters, we find that samples of 1000 papers provide a good guide to relative (but not absolute) institutional citation performance, which is driven by the abundance of high performing individuals. However, such samples may be perturbed by scarce ‘highly cited’ papers in smaller or less research-intensive units. We draw attention to the significance of this for assessment processes and the further evidence that university rankings are innately unstable and generally unreliable.
APA, Harvard, Vancouver, ISO, and other styles
33

Suprayitno, H., V. Ratnasari, and N. Saraswati. "Experiment Design for Determining the Minimum Sample Size for Developing Sample Based Trip Length Distribution." IOP Conference Series: Materials Science and Engineering 267 (November 2017): 012029. http://dx.doi.org/10.1088/1757-899x/267/1/012029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Guilbert, Eric. "Studying canopy arthropods in New Caledonia: how to obtain a representative sample." Journal of Tropical Ecology 14, no. 5 (1998): 665–72. http://dx.doi.org/10.1017/s0266467498000467.

Full text
Abstract:
Canopy arthropods were sampled by insecticide fogging to study their community structure in two New Caledonian primary rain forests. The representativeness of these samples was analysed by two different methods: the diversity-area relationship and the relationship between the distribution of the taxa and the sample size, using Pielou's method. The results showed that the higher the degree of aggregation, the higher the minimum sample size must be to ensure a stable distribution. In the same way, the higher the diversity index, the higher the sample size must be to ensure a representative sample of the community. In this study, 40 sample units of 1 m² were used, although samples of 9 to 25 m² seem to be sufficient according to the distribution of the taxa sampled. Five to 30 m² should be sufficient to ensure representative samples of the whole community for estimating diversity.
APA, Harvard, Vancouver, ISO, and other styles
35

Morse, David T. "MINSIZE: A Computer Program for Obtaining Minimum Sample Size as an Indicator of Effect Size." Educational and Psychological Measurement 58, no. 1 (1998): 142–53. http://dx.doi.org/10.1177/0013164498058001012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bujang, Mohamad Adam, and Nurakmal Baharum. "Sample Size Guideline for Correlation Analysis." World Journal of Social Science Research 3, no. 1 (2016): 37. http://dx.doi.org/10.22158/wjssr.v3n1p37.

Full text
Abstract:
Correlation analysis is a common statistical analysis in various fields. The aim is usually to determine to what extent two numerical variables are correlated with each other. One important issue to consider before conducting any correlation analysis is planning for a sufficient sample size. This ensures that the results derived from the analysis can reach a desired minimum correlation coefficient value with sufficient power and the desired type I error (p-value). Sample size estimation for correlation analysis should be in line with the study objective. Researchers who are not statisticians need simpler guidelines to determine the sufficient sample size for correlation analysis. Therefore, this study aims to tabulate the sample sizes required for desired correlation coefficient, power, and type I error (p-value) values. On that basis, simpler guidelines are proposed to estimate sufficient sample size requirements in different scenarios.
APA, Harvard, Vancouver, ISO, and other styles
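
A standard way to build such guideline tables is the Fisher z approximation for the sample size needed to detect a correlation of at least r at a given power and two-sided alpha. The sketch below reproduces that textbook calculation as context; it is not claimed to match the authors' tables exactly.

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect correlation r with a two-sided
    test, via the Fisher z transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of r
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

for r in (0.1, 0.3, 0.5):
    print(r, n_for_correlation(r))  # approximately 783, 85, 30
```
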
37

Heinzl, H., A. Benner, C. Ittrich, and M. Mittlböck. "Proposals for Sample Size Calculation Programs." Methods of Information in Medicine 46, no. 06 (2007): 655–61. http://dx.doi.org/10.3414/me0295.

Full text
Abstract:
Objectives: Numerous sample size calculation programs are available nowadays, including commercial products as well as public domain and open source applications. We propose modifications for these programs in order to better support statistical consultation during the planning stage of a two-armed clinical trial. Methods: Directional two-sided tests are commonly used for two-armed clinical trials. This may lead to a non-negligible Type III error risk in a severely underpowered study; in the case of a reasonably sized study, the question of the so-called auxiliary alternative may arise. Results: We propose that sample size calculation programs should be able to compute (i) Type III errors and the so-called q-values, (ii) minimum sample sizes required to keep the q-values below pre-specified levels, and (iii) detectable effect sizes of the so-called auxiliary alternatives. Conclusions: Proposals i and ii are intended to help prevent irresponsibly underpowered clinical trials, whereas proposal iii is meant as additional assistance for the planning of reasonably sized clinical trials.
APA, Harvard, Vancouver, ISO, and other styles
38

Zanella, Pablo Giliard, Carlos Augusto Brandão de Carvalho, Everton Teixeira Ribeiro, Afrânio Silva Madeiro, and Raphael Dos Santos Gomes. "Optimal quadrat area and sample size to estimate the forage mass of stargrass." Semina: Ciências Agrárias 38, no. 5 (2017): 3165. http://dx.doi.org/10.5433/1679-0359.2017v38n5p3165.

Full text
Abstract:
The objective of this study was to evaluate the sample size and quadrat area necessary to accurately estimate the forage mass (FM) of a fenced pasture of stargrass (Cynodon nlemfuensis cv. Florico) during the winter. Five metal quadrats were used: a 0.09 m² square (0.30 m side), a 0.25 m² square (0.50 m side), a 0.25 m² circle (0.28 m diameter), a 0.5 m² rectangle (0.5 x 1.0 m), and a 1 m² square (1.0 m side), each with eight replicates. The size and shape of the quadrats were determined based on cumulative variances to identify combinations that minimized the coefficient of variation (CV). The minimum sample size required to estimate the FM, morphological components, and height was established by the CV maximum curvature method. The 0.25 m² square quadrat (0.5 m side) presented the lowest cumulative CV in estimating the FM and the dry mass of dead material. However, for the estimation of leaf and stem dry mass, the 1.00 m² square quadrat (1.00 m side) presented the lowest CV. Using the 0.25 m² square quadrat, a minimum of six samples was required for the FM estimation, and eight samples were required for estimating the mean height of the stargrass pasture. Therefore, at least eight samples are recommended to obtain accurate results for the estimation of both variables.
APA, Harvard, Vancouver, ISO, and other styles
39

Hwang, D., W. A. Schmitt, G. Stephanopoulos, and G. Stephanopoulos. "Determination of minimum sample size and discriminatory expression patterns in microarray data." Bioinformatics 18, no. 9 (2002): 1184–93. http://dx.doi.org/10.1093/bioinformatics/18.9.1184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bujang, Mohamad Adam. "A Step-by-Step Process on Sample Size Determination for Medical Research." Malaysian Journal of Medical Sciences 28, no. 2 (2021): 15–27. http://dx.doi.org/10.21315/mjms2021.28.2.2.

Full text
Abstract:
Determination of a minimum sample size required for a study is a major consideration which all researchers are confronted with at the early stage of developing a research protocol. This is because the researcher will need to have a sound prerequisite knowledge of inferential statistics in order to enable him/her to acquire a thorough understanding of the overall concept of a minimum sample size requirement and its estimation. Besides type I error and power of the study, some estimates for effect sizes will also need to be determined in the process to calculate or estimate the sample size. The appropriateness in calculating or estimating the sample size will enable the researchers to better plan their study especially pertaining to recruitment of subjects. To facilitate a researcher in estimating the appropriate sample size for their study, this article provides some recommendations for researchers on how to determine the appropriate sample size for their studies. In addition, several issues related to sample size determination were also discussed.
APA, Harvard, Vancouver, ISO, and other styles
41

Al-Athari, Faris Muslim. "Experimental Comparison of the Sample Sizes of the Two-Sample Tests of Wilcoxon and Student under the Arcsine Distribution." Journal of Hunan University Natural Sciences 50, no. 1 (2023): 115–22. http://dx.doi.org/10.55463/issn.1674-2974.50.1.12.

Full text
Abstract:
When statistical experiments are performed, the sample size should be chosen in some optimal way so that we use a sample no larger than necessary. This paper compares the minimum sample sizes of the two-sample t-test and the Wilcoxon rank-sum test under the arcsine distribution, based on their power [1, 2]. To accomplish this, some essential probabilities that are useful in the power and sample size determination of the Wilcoxon rank-sum test were derived by the author for computing the approximate formula given by Lehmann [2]. A composite numerical integration algorithm is used to compute these probabilities, which are related to the arcsine distribution. In this study, a computer program was built by the author to find the exact (simulated) minimum sample sizes n for any significance level and power by iterating on n, with starting points for n provided by the approximate formulas of [2, 3]. The novelty of this paper lies in determining the minimum sample sizes for a new set of arcsine-distribution shift alternatives, in which the left-hand endpoint of the displaced distribution is specified as the quantile of order p, 0 < p < 1, of the second distribution, rather than only as the quantile of order p with p < 0.5 as considered by Guenther [1]; this choice prevents losing some important alternative hypotheses and extends the set of alternatives considered in [1]. The exact (simulated) minimum sample sizes were computed and compared with each other and with the corresponding approximate formulas given by Lehmann [2] and Guenther [3]. Numerical results showed that the approximate formulas are very accurate and that the Wilcoxon rank-sum test is more efficient when the sample size is more than 45; otherwise, the Student two-sample t-test is better.
APA, Harvard, Vancouver, ISO, and other styles
42

Emerson, W. W. "Size distributions and minimum Stokes diameters of soil particles." Soil Research 41, no. 6 (2003): 1089. http://dx.doi.org/10.1071/sr02139.

Full text
Abstract:
Published data on the settlement of particles in soil suspensions were used to plot the percentage by weight of particles (P) with a Stokes diameter ≤ D v. log D. For subsoils, P/log D plots became linear as D approached the limit of measurement of 60 nm. This allowed linear extrapolation to P = 0. The D value of a clay particle was converted to particle radius by assuming particles to be circular plates with a constant ratio of radius to thickness of 2.5. When the linear plot plus its extension covered the whole clay range, calculated surface areas of the clay were then consistent with reported values. The smallest value of D found by extrapolation was 5 nm. It was deduced to correspond to a single sheet of beidellite with 2 water layers on each surface. Plots for samples from a different horizon of the same profile were used to obtain a second estimate of the minimum value of D. Over the silt/fine sand range, plots for subsoils could also be linear, but often were partly sigmoidal due, for example, to the presence of loess. Separate plots for the clay minerals and loess were deduced from the P/log D plot for one sample using the reported mineralogical composition of different size fractions.
APA, Harvard, Vancouver, ISO, and other styles
43

Kolba, Tiffany N., and Alexander Bruno. "Estimation of population parameters using sample extremes from nonconstant sample sizes." PLOS ONE 18, no. 1 (2023): e0280561. http://dx.doi.org/10.1371/journal.pone.0280561.

Full text
Abstract:
We examine the accuracy and precision of parameter estimates for both the exponential and normal distributions when using only a collection of sample extremes. That is, we consider a collection of random variables, where each of the random variables is either the minimum or maximum of a sample of n_j independent, identically distributed random variables drawn from a normal or exponential distribution with unknown parameters. Previous work derived estimators for the population parameters assuming the n_j sample sizes are constant. Since sample sizes are often not constant in applications, we derive new unbiased estimators that take into account the varying sample sizes. We also perform simulations to assess how the previously derived estimators perform when the constant sample size is simply replaced with the average sample size. We explore how varying the mean, standard deviation, and probability distribution of the sample sizes affects the estimation error. Overall, our results demonstrate that using the average sample size in place of the constant sample size still results in reliable estimates for the population parameters, especially when the average sample size is large. Our estimation framework is applied to a biological example involving plant pollination.
APA, Harvard, Vancouver, ISO, and other styles
44

Kishore, U., and R. Ramadevi. "Analysis and Comparison of Kidney Stone Detection using Minimum Distance to Mean Classifier and Bayesian Classifier with Improved Classification Accuracy." CARDIOMETRY, no. 25 (February 14, 2023): 806–11. http://dx.doi.org/10.18137/cardiometry.2022.25.806811.

Full text
Abstract:
Aim: The goal of this research is to use a minimum distance to mean classifier and a Bayesian classifier to predict and detect kidney stones. Materials and Methods: This investigation used a dataset from the Kaggle website. Samples were collected (N = 10) for normal kidney images and (N = 10) for kidney-with-stone images, giving a total sample size of 20 for analysis; the total sample size was calculated through clinical.com with a pretest G power of 85. Using Matlab software and the standard dataset collected from the Kaggle website, the classification accuracy was obtained. Results: The accuracy (%) of the two classification techniques was compared in SPSS software using independent-sample t-tests. The comparison shows that the minimum distance to mean classifier gives better classification accuracy (78.85%) than the Bayesian classifier (71.1314%). The difference between the two classifiers was not statistically significant (p = 0.708, p > 0.05), but the minimum distance to mean classifier showed better results than the Bayesian classifier. Conclusion: The minimum distance to mean classifier appears to give better accuracy than the Bayesian classifier.
APA, Harvard, Vancouver, ISO, and other styles
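
A minimum distance to mean classifier assigns each sample to the class whose mean feature vector is nearest. The sketch below shows that rule on synthetic feature vectors; it only illustrates the classifier named in the abstract and has nothing to do with the study's actual image data or preprocessing.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic two-class feature vectors (stand-ins for image features; assumed).
X0 = rng.normal(0.0, 1.0, size=(10, 5))  # "normal kidney" class
X1 = rng.normal(1.5, 1.0, size=(10, 5))  # "kidney with stone" class
X = np.vstack([X0, X1])
y = np.array([0] * 10 + [1] * 10)

# Training: one mean vector per class.
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    """Assign each sample to the class with the nearest mean (Euclidean)."""
    d = np.linalg.norm(samples[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

print("training accuracy:", (predict(X) == y).mean())
```
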
45

Virtanen, Arja, Veli Kairisto, and Esa Uusipaikka. "Regression-based reference limits: determination of sufficient sample size." Clinical Chemistry 44, no. 11 (1998): 2353–58. http://dx.doi.org/10.1093/clinchem/44.11.2353.

Full text
Abstract:
Regression analysis is the method of choice for the production of covariate-dependent reference limits. There are currently no recommendations on what sample size should be used when regression-based reference limits and confidence intervals are calculated. In this study we used Monte Carlo simulation to study a reference sample group of 374 age-dependent hemoglobin values. From this sample, 5000 random subsamples, with replacement, were constructed with 10–220 observations per sample. Regression analysis was used to estimate age-dependent 95% reference intervals for hemoglobin concentrations and erythrocyte counts. The maximum difference between mean values of the root mean square error and original values for hemoglobin was 0.05 g/L when the sample size was ≥60. The parameter estimators and width of reference intervals changed negligibly from the values calculated from the original sample regardless of what sample size was used. SDs and CVs for these factors changed rapidly up to a sample size of 30; after that changes were smaller. The largest and smallest absolute differences in root mean square error and width of reference interval between sample values and values calculated from the original sample were also evaluated. As expected, differences were largest in small sample sizes, and as sample size increased differences decreased. To obtain appropriate reference limits and confidence intervals, we propose the following scheme: (a) check whether the assumptions of regression analysis can be fulfilled with/without transformation of data; (b) check that the value of v, which describes how the covariate value is situated in relation to both the mean value and the spread of the covariate values, does not exceed 0.1 at minimum and maximum covariate positions; and (c) if steps 1 and 2 can be accepted, the reference limits with confidence intervals can be produced by regression analysis, and the minimum acceptable sample size will be ∼70.
APA, Harvard, Vancouver, ISO, and other styles
46

Barbiero, Danielle C., Isabela M. Macedo, Bruno Pereira Masi, and Ilana R. Zalmon. "Comparative study of the estimated sample size for benthic intertidal species and communities." Latin American Journal of Aquatic Research 39, no. 1 (2011): 93–102. http://dx.doi.org/10.3856/vol39-issue1-fulltext-9.

Full text
Abstract:
The objective of this study was to determine the minimum sample size for studies of community structure and/or dominant species at different heights of a rocky intertidal zone at Rio de Janeiro. Community structure indicators suggested a variation in the minimum surface of 100 to 800 cm2, with a minimum of 2 to 8 profiles and at least 20 to 80 quadrant sampling points, depending on the height. Indicators of species abundance suggest 100 cm2 for Hypnea musciformis and 400 cm2 for Ulva fasciata, Phragmatopoma lapidosa Kinberg, (1867) and Gymnogongrus griffthsiae at lower heights; 200 cm2 for Chthamalus spp. at intermediate heights; and 800 cm2 for Littorina ziczac at the greatest height. In general, seven to eight profiles and 10 to 20 sampling points were used. Different sample sizes were related to the abundance and spatial distributions of individual species, which varied at each intertidal height according to the degree of environmental stress.
APA, Harvard, Vancouver, ISO, and other styles
47

Yun, Meiping, and Wenwen Qin. "Minimum Sampling Size of Floating Cars for Urban Link Travel Time Distribution Estimation." Transportation Research Record: Journal of the Transportation Research Board 2673, no. 3 (2019): 24–43. http://dx.doi.org/10.1177/0361198119834297.

Full text
Abstract:
Despite the wide application of floating car data (FCD) in urban link travel time estimation, limited efforts have been made to determine the minimum sample size of floating cars appropriate to the requirements for travel time distribution (TTD) estimation. This study develops a framework for seeking the required minimum number of travel time observations generated from FCD for urban link TTD estimation. The basic idea is to test how, as the number of observations decreases, the similarity between the distribution of travel times estimated from the observations and the ground-truth distribution varies. This is measured by employing the Hellinger Distance (HD) and Kolmogorov-Smirnov (KS) tests. Finally, the minimum sample size is determined by the HD value, ensuring that the corresponding distribution passes the KS test. The proposed method is validated with the sources of FCD and Radio Frequency Identification Data (RFID) collected from an urban arterial in Nanjing, China. The results indicate that: (1) the average travel times derived from FCD give good estimation accuracy for real-time application; (2) the minimum required sample size range changes with the extent of time-varying fluctuations in traffic flows; (3) the minimum sample size determination is sensitive to whether observations are aggregated near each peak in the multistate distribution; (4) sparse and incomplete observations from FCD in most time periods cannot be used to achieve the minimum sample size, and this produces a significant deviation from the ground-truth distributions. Finally, FCD is strongly recommended for better TTD estimation incorporating both historical trends and real-time observations.
APA, Harvard, Vancouver, ISO, and other styles
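
The framework above shrinks the number of travel-time observations and measures how far the estimated distribution drifts from the full-data ("ground-truth") distribution using the Hellinger distance and a KS test. The sketch below applies the same idea to synthetic travel times; the lognormal data, binning, and sample-size grid are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
truth = rng.lognormal(mean=4.0, sigma=0.3, size=5000)  # synthetic travel times (s)
bins = np.linspace(truth.min(), truth.max(), 30)

def hellinger(a, b):
    """Hellinger distance between two samples over shared histogram bins."""
    p, _ = np.histogram(a, bins=bins)
    q, _ = np.histogram(b, bins=bins)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Decrease the sample size and watch both similarity measures degrade.
for n in (1000, 300, 100, 30, 10):
    sample = rng.choice(truth, size=n, replace=False)
    print(f"n={n:4d}  Hellinger={hellinger(sample, truth):.3f}  "
          f"KS p-value={ks_2samp(sample, truth).pvalue:.3f}")
```
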
48

Deng, Shan, Meiyan Zhang, Aiai Li, et al. "Investigation of Sample Size Estimation for Measuring Quantitative Characteristics in DUS Testing of Shiitake Mushrooms." Agronomy 14, no. 6 (2024): 1130. http://dx.doi.org/10.3390/agronomy14061130.

Full text
Abstract:
Sampling techniques are commonly used in research investigations to estimate quantities with adequate precision at lower cost and in less time. In plant DUS (distinctness, uniformity, and stability) testing, many quantitative characteristic data usually need to be obtained through individual measurements. However, there is currently no scientific method for determining the appropriate sampling size. The minimum number of testing samples for DUS testing was calculated based on the theory of sample size in descriptive studies and was validated through simple random sampling. The results show that the quantitative characteristics of the edible mushroom Shiitake (Lentinula edodes) in DUS testing were uniform. The calculations show that measuring 10 fruiting bodies was sufficient for a single measurement. Furthermore, the outcomes of the random sampling revealed that the mean of 10 samples did not differ significantly from the mean of all data. When the sample size exceeded 10, Cohen's kappa statistic suggested that the conclusion of distinctness was very close to near-perfect agreement. Reducing the number of samples did not affect the uniformity assessment. This study suggests that the theory of sample size in descriptive studies can be applied to calculate the minimum sample size in DUS testing, and that for Shiitake DUS testing, measuring 10 fruiting bodies is sufficient.
APA, Harvard, Vancouver, ISO, and other styles
49

Priyanath, Hunuwala Malawarage Suranjan, Ranatunga RVSPK, and Megama RGN. "Methods and Rule-of-Thumbs in the Determination of Minimum Sample Size When Applying Structural Equation Modelling: A Review." Journal of Social Science Research 15 (March 19, 2020): 102–7. http://dx.doi.org/10.24297/jssr.v15i.8670.

Full text
Abstract:
The basic methods and techniques involved in determining the minimum sample size for Structural Equation Modeling (SEM) in a research project present a crucial problem for researchers, since there is some controversy among scholars regarding the methods and rules of thumb involved in determining the minimum sample size when applying SEM. Therefore, this paper reviews the methods and rules of thumb involved in determining sample size for SEM in order to identify the more suitable methods. The paper collected research articles related to sample size determination for SEM and reviewed the methods and rules of thumb employed by different scholars. The study found that a large number of methods and rules of thumb have been employed, and it evaluated the mechanisms and rules of thumb of more than twelve previous methods, each with its own advantages and limitations. Finally, the study identified two methods that are more suitable methodologically and technically, proposed by scholars who deeply addressed all aspects of the techniques for determining the minimum sample size for SEM analysis, and the paper therefore recommends these two methods to address the issue of determining the minimum sample size when using SEM in a research project.
APA, Harvard, Vancouver, ISO, and other styles
50

Serra, Jordi, and Montse Nájar. "Asymptotically Optimal Linear Shrinkage of Sample LMMSE and MVDR Filters." IEEE Transactions on Signal Processing 62, no. 14 (2014): 3552–64. https://doi.org/10.1109/TSP.2014.2329420.

Full text
Abstract:
Conventional implementations of the linear minimum mean-square (LMMSE) and minimum variance distortionless response (MVDR) estimators rely on the sample matrix inversion (SMI) technique, i.e., on the sample covariance matrix (SCM). This approach is optimal in the large sample size regime. Nonetheless, in small sample size situations, those sample estimators suffer a large performance degradation. Thus, the aim of this paper is to propose corrections of these sample methods that counteract their performance degradation in the small sample size regime and keep their optimality in large sample size situations. To this aim, a twofold approach is proposed. First, shrinkage estimators are considered, as they are known to be robust to the small sample size regime. Namely, the proposed methods are based on shrinking the sample LMMSE or sample MVDR filters towards a variously called matched filter or conventional (Bartlett) beamformer in array processing. Second, random matrix theory is used to obtain the optimal shrinkage factors for large filters. The simulation results highlight that the proposed methods outperform the sample LMMSE and MVDR. Also, provided that the sample size is higher than the observation dimension, they improve classical diagonal loading (DL) and Ledoit–Wolf (LW) techniques, which counteract the small sample size degradation by regularizing the SCM. Finally, compared to state-of-the-art DL, the proposed methods reduce the computational cost and the proposed shrinkage of the LMMSE obtains performance gains.
APA, Harvard, Vancouver, ISO, and other styles
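
The abstract contrasts the proposed shrinkage with the sample (SMI) MVDR filter and with diagonal loading. For readers wanting a concrete baseline, here is a hedged numpy sketch of those two classical estimators for a small real-valued array; the scenario and loading level are arbitrary, and the paper's RMT-optimal shrinkage itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 12                  # array size, (small) number of snapshots
a = np.ones(m) / np.sqrt(m)   # steering vector of the desired signal (assumed)

# Snapshots: desired signal plus white noise (a toy, real-valued scenario).
s = rng.normal(size=n)
X = np.outer(a, s) + 0.5 * rng.normal(size=(m, n))
R_hat = X @ X.T / n           # sample covariance matrix (SCM)

def mvdr(R):
    """MVDR weights w = R^{-1} a / (a^T R^{-1} a)."""
    Ria = np.linalg.solve(R, a)
    return Ria / (a @ Ria)

w_smi = mvdr(R_hat)                                          # sample MVDR (SMI)
w_dl = mvdr(R_hat + 0.1 * np.trace(R_hat) / m * np.eye(m))   # diagonal loading

print("output power, SMI:", float(w_smi @ R_hat @ w_smi))
print("output power, DL :", float(w_dl @ R_hat @ w_dl))
```
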