Academic literature on the topic 'Minimal sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Minimal sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Minimal sample size"

1

Wołynski, Waldemar. "Minimal Sample Size in the Group Classification Problem." Journal of Classification 22, no. 1 (2005): 49–58. http://dx.doi.org/10.1007/s00357-005-0005-8.

2

Grjibovski, A. M., M. A. Gorbatova, A. N. Narkevich, and K. A. Vinogradov. "Required sample size for comparing means in two paired samples." Marine Medicine 6, no. 4 (2021): 82–88. http://dx.doi.org/10.22328/2413-5747-2020-6-4-82-88.

Abstract:
This paper continues our series of articles for beginners on required sample size for the most common basic statistical tests used in biomedical research. The most common statistical test for comparing means in paired samples is Student’s paired t-test. In this paper we present a simple algorithm for calculating required sample size for comparing two means in paired samples. As in our earlier papers we demonstrate how to perform calculations using WinPepi and Stata software. Moreover, we have created a table with calculated minimal sample sizes required for using Student’s t-tests for different scenarios with the confidence level of 95% and statistical power of 80%.
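For readers who want to reproduce this kind of calculation outside the dedicated software mentioned above, the sketch below applies the standard normal-approximation formula for the paired t-test sample size. It is a generic illustration rather than the authors' exact algorithm; the function name and the example values are ours.

```python
import math
from scipy.stats import norm

def paired_t_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Approximate number of pairs needed to detect a mean paired
    difference `delta` when the within-pair differences have standard
    deviation `sd_diff` (two-sided test, normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Example: detecting a mean difference of 2 units with SD of differences 5
# at 95% confidence and 80% power gives roughly 50 pairs; exact t-based
# routines return a slightly larger n.
print(paired_t_sample_size(delta=2.0, sd_diff=5.0))
```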
3

Grjibovski, A. M., M. A. Gorbatova, A. N. Narkevich, and K. A. Vinogradov. "Required sample size for comparing two independent means." Marine Medicine 6, no. 2 (2020): 106–13. http://dx.doi.org/10.22328/2413-5747-2020-6-2-106-113.

Abstract:
Sample size calculation in a planning phase is still uncommon in Russian research practice. This situation threatens the validity of conclusions and may introduce Type II error, in which a false null hypothesis is accepted because the study lacks the statistical power to detect an existing difference between the means. Comparing two means using unpaired Student's t-tests is the most common statistical procedure in the Russian biomedical literature. However, calculations of the minimal required sample size or retrospective calculation of the statistical power were observed only in very few publications. In this paper we demonstrate how to calculate the required sample size for comparing means in unpaired samples using WinPepi and Stata software. In addition, we produced tables of the minimal required sample size for studies in which two means have to be compared and body mass index and blood pressure are the variables of interest. The tables were constructed for unpaired samples for different levels of statistical power and standard deviations obtained from the literature.
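As a hedged illustration of the kind of table described above, the snippet below uses the statsmodels package (a stand-in for the WinPepi and Stata tools named in the abstract) to compute the per-group sample size for an unpaired two-sample t-test; the effect size of 0.5 is an arbitrary example, not a value from the paper.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Cohen's d = (mean1 - mean2) / pooled SD; 0.5 is an illustrative value only.
d = 0.5
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05,
                                           power=0.80, ratio=1.0,
                                           alternative='two-sided')
# Roughly 64 participants per group for a "medium" effect at 80% power.
print(math.ceil(n_per_group))
```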
4

Pareniuk, Dmytro. "Method of evaluation of the minimal sample size for acoustical signal therapy monitored via electroencephalographic activity of human brain." ScienceRise, no. 2 (April 30, 2021): 75–82. https://doi.org/10.21303/2313-8416.2021.001736.

Abstract:
The aim of the study. To improve preparation for acoustical signal therapy tests or experiments on the electroencephalographic activity of the human brain, and to validate the results of such tests. The problem to be solved. Estimating the minimal possible sample size that maintains the required accuracy in research on the electroencephalographic activity of the human brain, based on monitoring brainwave patterns during exposure to a musical signal. Main scientific results. A new method for selecting the minimal acceptable sample size for brainwave pattern studies is presented. An example of applying the method to one brainwave rhythm (the delta rhythm) is shown. A promising way of obtaining clinically valuable differences between test group results was identified. Differences between mean values for groups exposed to different types of music and stress factors are presented. The area of practical use of the research results. Research facilities dedicated to the study of the electroencephalographic activity of the human brain, and medical facilities and institutions dedicated to treating pathologies of the central nervous system, brain damage, stress, and to restoring the psychological state after post-stress effects. An innovative technological product. A dedicated method for quickly estimating the minimal acceptable sample size for brainwave pattern studies, recommended for use in studies implementing music therapy. The area of application of an innovative technological product. Study of the electroencephalographic activity of the human brain via brainwave pattern research; clinical application of music therapy.
5

Kostic, Aleksandar, Svetlana Ilic, and Petar Milin. "Probability estimate and the optimal text size." Psihologija 41, no. 1 (2008): 35–51. http://dx.doi.org/10.2298/psi0801035k.

Abstract:
A reliable language corpus implies a text sample of size n that provides stable probability distributions of linguistic phenomena. The question is what is the minimal (i.e., optimal) text size at which the probabilities of linguistic phenomena become stable. Specifically, we were interested in the probabilities of grammatical forms. We started with an a priori assumption that a text size of 1,000,000 words is sufficient to provide stable probability distributions. We treated text of this size as a "quasi-population". The probability distribution derived from the "quasi-population" was then correlated with the probability distribution obtained from a minimal sample (32 items) for a given linguistic category (e.g. nouns). The correlation coefficient was treated as a measure of similarity between the two probability distributions. The minimal sample was increased by geometric progression, up to the size at which the correlation between the distribution derived from the quasi-population and the one derived from the increased sample reached its maximum (r = 1). The optimal sample size was established for grammatical forms of nouns, adjectives and verbs. A general formalism is proposed that allows estimation of the optimal sample size from the minimal sample (i.e., 32 items).
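The procedure described above lends itself to a short simulation. The sketch below is a minimal reconstruction of the idea, not the authors' code: starting from 32 items it doubles the sample and stops once the Pearson correlation between the sample's frequency distribution and the whole-corpus distribution approaches 1. The threshold `r_target` and the category handling are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def freq_dist(tokens, categories):
    """Relative frequency of each category among `tokens`."""
    counts = np.array([np.sum(tokens == c) for c in categories], dtype=float)
    return counts / counts.sum()  # assumes at least one category occurs

def optimal_sample_size(corpus, categories, start=32, r_target=0.999):
    """Double the sample from `start` items until its frequency distribution
    correlates with the whole-corpus ("quasi-population") distribution at
    r >= r_target (a stand-in for the paper's r = 1 stopping rule)."""
    corpus = np.asarray(corpus)
    reference = freq_dist(corpus, categories)
    n = start
    while n < len(corpus):
        sample = rng.choice(corpus, size=n, replace=False)
        r = np.corrcoef(reference, freq_dist(sample, categories))[0, 1]
        if r >= r_target:
            return n, r
        n *= 2  # geometric progression, as in the study
    return len(corpus), 1.0
```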
6

Pérez-Llorca, Marina, Erola Fenollosa, Roberto Salguero-Gómez, and Sergi Munné-Bosch. "What Is the Minimal Optimal Sample Size for Plant Ecophysiological Studies?" Plant Physiology 178, no. 3 (2018): 953–55. http://dx.doi.org/10.1104/pp.18.01001.

7

Tanner, D. Q., J. D. Stednick, and W. C. Leininger. "Minimal herd sample size for determination of blood copper status of cattle." Journal of the American Veterinary Medical Association 192, no. 8 (1988): 1074–76. https://doi.org/10.2460/javma.1988.192.08.1074.

Abstract:
Copper is required by cattle for synthesis of numerous proteins and enzymes. Copper deficiency in cattle results in a variety of signs ranging from weight loss to diarrhea. In the fall of 1984 and 1985, blood samples were collected from 22 cattle herds near Gunnison, Colo. Approximately one third of the herds were classified as copper deficient (ie, mean serum copper concentration &lt;0.6 mg/L). The inherent variability of serum copper concentrations within a herd mandates the determination of the minimal number of cattle to be tested to properly assess the blood copper status of a herd. Coefficients of variation for serum copper concentration were used to calculate a minimal sample size, with a 95% confidence interval for each herd. Minimal sample size ranged from 3 to 55 cattle/herd (ie, 1 to 22% of the herd); this finding suggested that the usual procedure of testing 10% of the herd may be inappropriate.
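The abstract does not state the exact formula used, but a common way to turn a coefficient of variation into a minimal sample size for estimating a herd mean is sketched below; the 30% CV and the ±15% tolerance are illustrative assumptions, and any finite-population correction (relevant for small herds) is omitted.

```python
import math
from scipy.stats import norm

def min_sample_size_from_cv(cv, rel_error, confidence=0.95):
    """Minimal n so that the sample mean falls within `rel_error`
    (e.g. 0.15 for +/-15%) of the true mean at the given confidence,
    assuming approximate normality; cv is SD / mean as a fraction."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

# Example: a 30% CV with a +/-15% tolerance on the herd mean gives
# roughly 16 animals under these assumptions.
print(min_sample_size_from_cv(cv=0.30, rel_error=0.15))
```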
8

Neely, J. Gail, Ron J. Karni, Samuel H. Engel, Patrick L. Fraley, Brian Nussenbaum, and Randal C. Paniello. "Practical guides to understanding sample size and minimal clinically important difference (MCID)." Otolaryngology–Head and Neck Surgery 136, no. 1 (2007): 14–18. http://dx.doi.org/10.1016/j.otohns.2006.11.001.

9

Grjibovski, A. M., M. A. Gorbatova, A. N. Narkevich, and K. A. Vinogradov. "Required sample size for correlation analysis." Marine Medicine 6, no. 1 (2020): 101–6. http://dx.doi.org/10.22328/2413-5747-2020-6-1-101-106.

Abstract:
Sample size calculation prior to data collection is still relatively rare in Russian research practice. This situation threatens the validity of the conclusions of many projects due to insufficient statistical power to estimate the parameters of interest with the desired precision or to detect the differences of interest. Moreover, in a substantial proportion of cases where sample size calculations are performed, simplified formulas assuming a normal distribution of the studied variables are used, even though this assumption does not hold for many research questions in biomedical research. Correlation analysis is still one of the most commonly used methods of statistical analysis in Russia. Pearson's correlation coefficient, despite its well-known limitations, appears in a greater proportion of publications than non-parametric coefficients. We calculated minimal sample sizes for the parametric Pearson's coefficient as well as its non-parametric alternatives, Spearman's rho and Kendall's tau-b correlation coefficients, to give junior researchers a tool for planning data collection and analysis for several types of data, various expected strengths of association and research questions. The results are presented in ready-to-use tables with the required sample size for the three abovementioned coefficients within the range from 0.10 through 0.90 in steps of 0.05, for statistical power of 0.8 and 0.9 and an alpha error of 5%, as well as for estimation of the same correlation coefficients with 95% confidence interval widths equal to 0.1 and 0.2.
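For orientation, the sketch below shows the standard Fisher z-transformation approximation for the sample size needed to detect a Pearson correlation. It reproduces the flavour of the published tables but is not necessarily the authors' exact method, and the Spearman and Kendall cases mentioned in the abstract require additional adjustment factors not shown here.

```python
import math
from scipy.stats import norm

def n_for_pearson_r(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect a Pearson correlation of size r
    (two-sided test of rho = 0) via Fisher's z-transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    c = 0.5 * math.log((1 + r) / (1 - r))
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# Example: detecting r = 0.30 with 80% power needs roughly 85 observations.
print(n_for_pearson_r(0.30))
```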
10

Buckley, W. T., J. Huang, and M. A. Monreal. "Ethanol emission seed vigour test for canola: minimal effects from variations in incubation conditions, sample size and seed moisture content." Seed Science and Technology 41, no. 2 (2013): 270–80. http://dx.doi.org/10.15258/sst.2013.41.2.09.


Dissertations / Theses on the topic "Minimal sample size"

1

Potgieter, Ryno. "Minimum sample size for estimating the Bayes error at a predetermined level." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33479.

Abstract:
Determining the correct sample size is of utmost importance in study design. Large samples yield classifiers or parameters with more precision; conversely, samples that are too small yield unreliable results. Fixed sample size methods, as determined by the specified level of error between the obtained parameter and population value, or a confidence level associated with the estimate, have been developed and are available. These methods are extremely useful when there is little or no cost (financial, time, or consequences of action) involved in gathering the data. Alternatively, sequential sampling procedures have been developed specifically to obtain a classifier or parameter estimate that is as accurate as deemed necessary by the researcher, while sampling the least number of observations required to obtain the specified level of accuracy. This dissertation discusses a sequential procedure, derived using Martingale Limit Theory, which had been developed to train a classifier with the minimum number of observations to ensure, with a high enough probability, that the next observation sampled has a low enough probability of being misclassified. Various classification methods are discussed and tested, with multiple combinations of parameters tested. Additionally, the sequential procedure is tested on microarray data. Various advantages and shortcomings of the sequential procedure are pointed out and discussed. This dissertation also proposes a new sequential procedure that trains the classifier to such an extent as to accurately estimate the Bayes error with a high probability. The sequential procedure retains all of the advantages of the previous method, while addressing the most serious shortcoming. Ultimately, the sequential procedure developed enables the researcher to dictate how accurate the classifier should be and provides more control over the trained classifier.
2

Marcondes, Patricia Dione Guerra. "Minimum sample size needed to construct cushion curves based on the stress-energy method." Clemson University, 2007. http://etd.lib.clemson.edu/documents/1181668751/.

3

Forgo, Vincent Z. "A Distribution of the First Order Statistic When the Sample Size is Random." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3181.

Abstract:
Statistical distributions, also known as probability distributions, are used to model a random experiment. Probability distributions consist of probability density functions (pdf) and cumulative distribution functions (cdf). Probability distributions are widely used in the areas of engineering, actuarial science, computer science, biological science, physics, and other applicable areas of study. Statistics are used to draw conclusions about the population through probability models. Sample statistics such as the minimum, first quartile, median, third quartile, and maximum, referred to as the five-number summary, are examples of order statistics. The minimum and maximum observations are important in extreme value theory. This paper will focus on the probability distribution of the minimum observation, also known as the first order statistic, when the sample size is random.
4

Houghton, Damon. "Minimum tree height sample sizes necessary for accurately estimating merchantable plot volume in Loblolly pine plantations." Thesis, Virginia Tech, 1991. http://scholar.lib.vt.edu/theses/available/etd-05022009-040541/.

5

Williams, James Dickson. "Contributions to Profile Monitoring and Multivariate Statistical Process Control." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30032.

Abstract:
The content of this dissertation is divided into two main topics: 1) nonlinear profile monitoring and 2) an improved approximate distribution for the T^2 statistic based on the successive differences covariance matrix estimator. (Part 1) In an increasing number of cases the quality of a product or process cannot adequately be represented by the distribution of a univariate quality variable or the multivariate distribution of a vector of quality variables. Rather, a series of measurements are taken across some continuum, such as time or space, to create a profile. The profile determines the product quality at that sampling period. We propose Phase I methods to analyze profiles in a baseline dataset where the profiles can be modeled through either a parametric nonlinear regression function or a nonparametric regression function. We illustrate our methods using data from Walker and Wright (2002) and from dose-response data from DuPont Crop Protection. (Part 2) Although the T^2 statistic based on the successive differences estimator has been shown to be effective in detecting a shift in the mean vector (Sullivan and Woodall (1996) and Vargas (2003)), the exact distribution of this statistic is unknown. An accurate upper control limit (UCL) for the T^2 chart based on this statistic depends on knowing its distribution. Two approximate distributions have been proposed in the literature. We demonstrate the inadequacy of these two approximations and derive useful properties of this statistic. We give an improved approximate distribution and recommendations for its use.
6

Ruengvirayudh, Pornchanok. "A Monte Carlo Study of Parallel Analysis, Minimum Average Partial, Indicator Function, and Modified Average Roots for Determining the Number of Dimensions with Binary Variables in Test Data: Impact of Sample Size and Factor Structure." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou151516919677091.

7

"Sample Size and Test Length Minima for DIMTEST with Conditional Covariance -Based Subtest Selection." Master's thesis, 2012. http://hdl.handle.net/2286/R.I.14957.

Abstract:
The existing minima for sample size and test length recommendations for DIMTEST (750 examinees and 25 items) are tied to features of the procedure that are no longer in use. The current version of DIMTEST uses a bootstrapping procedure to remove bias from the test statistic and is packaged with a conditional covariance-based procedure called ATFIND for partitioning test items. Key factors such as sample size, test length, test structure, the correlation between dimensions, and strength of dependence were manipulated in a Monte Carlo study to assess the effectiveness of the current version of DIMTEST with fewer examinees and items. In addition, the DETECT program was also used to partition test items; a second feature of this study also compared the structure of test partitions obtained with ATFIND and DETECT in a number of ways. With some exceptions, the performance of DIMTEST was quite conservative in unidimensional conditions. The performance of DIMTEST in multidimensional conditions depended on each of the manipulated factors, and did suggest that the minima of sample size and test length can be made lower for some conditions. In terms of partitioning test items in unidimensional conditions, DETECT tended to produce longer assessment subtests than ATFIND, in turn yielding different test partitions. In multidimensional conditions, test partitions became more similar and were more accurate with increased sample size, for factorially simple data, greater strength of dependence, and a decreased correlation between dimensions. Recommendations for sample size and test length minima are provided along with suggestions for future research.
8

Qazi, Abdus Shakur. "Statistical analysis of TxCAP and its subsystems." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-08-4230.

Abstract:
The Texas Department of Transportation (TxDOT) uses the Texas Condition Assessment Program (TxCAP) to measure and compare the overall road maintenance conditions among its 25 districts. TxCAP combines data from three existing subsystems: the Pavement Management Information System (PMIS), which scores the condition of pavement; the Texas Maintenance Assessment Program (TxMAP), which evaluates roadside conditions; and the Texas Traffic Assessment Program (TxTAP), which evaluates the condition of signs, work zones, railroad crossings, and other traffic elements to get an overall picture of the condition of state roads. As a result, TxCAP provides a more comprehensive assessment of the interstate and non-interstate highways. However, the scores for each of the subsystems are based on data of different sample sizes, accuracy, and levels of variation, making it difficult to decide if the difference between two TxCAP scores is a true difference or measurement error. Therefore, whether the use of TxCAP is an effective and consistent means to measure the TxDOT roadway maintenance conditions raises concerns and needs to be evaluated. In order to achieve this objective, statistical analyses of the system were conducted in two ways: 1) to determine whether sufficient samples are collected for each of the subsystems, and 2) to determine if the scores are statistically different from each other. A case study was conducted with a dataset covering the whole state from 2008 to 2010. The case study results show that the difference in scores between two districts is statistically significant for some of the districts and insignificant for some other districts. It is therefore recommended that TxDOT either compare the 25 districts by groups/tiers or increase the sample size of the data being collected to compare the districts as individual ones.
9

"Robust Experimental Design for Speech Analysis Applications." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57412.

Abstract:
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study is dependent on the effectiveness of the training data set to represent the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier is mainly dependent on the number and quality of the training data set. For small sample sizes and unbalanced data, classifiers developed in this context may be focusing on the differences in the training data set rather than emotion (e.g., focusing on gender, age, and dialect). This thesis evaluates several sampling methods and a non-parametric approach to sample sizes required to minimize the effect of these nuisance variables on classification performance. This work specifically focused on speech analysis applications, and hence the work was done with speech features like Mel-Frequency Cepstral Coefficients (MFCC) and Filter Bank Cepstral Coefficients (FBCC). The non-parametric divergence (D_p divergence) measure was used to study the difference between different sampling schemes (Stratified and Multistage sampling) and the changes due to the sentence types in the sampling set for the process.

Books on the topic "Minimal sample size"

1

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Abstract:
The quality of data collected in official spatial databases is crucial in making strategic decisions as well as in the implementation of planning and design works. Awareness of the level of the quality of these data is also important for individual users of official spatial data. The author presents methods and models of description and evaluation of the quality of spatial data collected in public registers. Data describing the space in the highest degree of detail, which are collected in three databases: land and buildings registry (EGiB), geodetic registry of the land infrastructure network (GESUT) and in database of topographic objects (BDOT500) were analyzed. The results of the research concerned selected aspects of activities in terms of the spatial data quality. These activities include: the assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels, analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data, construction of the quality model of data collected in official databases and visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of deviations of coordinates with three variables dX, dY and Dl – deviations from the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as a faultless. The compatibility of empirical data accuracy distributions with models (theoretical distributions of random variables) was investigated and also the accuracy of the spatial data has been assessed by means of the methods resistant to the outliers. In the process of determination of the accuracy of spatial data collected in public registers, the author’s solution was used – resistant method of the relative frequency. Weight functions, which modify (to varying degree) the sizes of the vectors Dl – the lengths of the points offset vector of the test sample in relation to their position recognized as a faultless were proposed. From the scope of the uncertainty of estimation of the area of registry parcels the impact of the errors of the geodetic network points was determined (points of reference and of the higher class networks) and the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction was determined (in EGiB database) of the plots area, calculated on the basis of re-measurements, performed using equivalent techniques (in terms of accuracy). The analysis of the risk of damage to the underground infrastructure network due to the low quality of spatial data is another research topic presented in the paper. Three main factors have been identified that influence the value of this risk: incompleteness of spatial data sets and insufficient accuracy of determination of the horizontal and vertical position of underground infrastructure. A method for estimation of the project risk has been developed (quantitative and qualitative) and the author’s risk estimation technique, based on the idea of fuzzy logic was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form. 
The data quality model is a set of rules used to describe the quality of these data sets. The model that has been proposed defines a standardized approach for assessing and reporting the quality of EGiB, GESUT and BDOT500 spatial data bases. Quantitative and qualitative rules (automatic, office and field) of data sets control were defined. The minimum sample size and the number of eligible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to the users needs. The values of impact weights were determined by the hierarchical analytical process method (AHP). The harmonization of conceptual models of EGiB, GESUT and BDOT500 databases with BDOT10k database was analysed too. It was found that the downloading and supplying of the information in BDOT10k creation and update processes from the analyzed registers are limited. An effective approach to providing spatial data sets users with information concerning data uncertainty are cartographic visualization techniques. Based on the author’s own experience and research works on the quality of official spatial database data examination, the set of methods for visualization of the uncertainty of data bases EGiB, GESUT and BDOT500 was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of the position was defined (for surface, line, and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing (for example) completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and application importance. They indicate on the possibility of effective evaluation of the quality of spatial data collected in public registers and may be an important element of the expert system.
2

Cumming, Douglas, ed. The Oxford Handbook of IPOs. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190614577.001.0001.

Abstract:
Firms generally begin as privately owned entities. When they grow large enough, the decision to go public and its consequences are among the most crucial times in a firm’s life cycle. The first time a firm is a reporting issuer gives rise to tremendous responsibilities about disclosing public information and accountability to a wide array of retail shareholders and institutional investors. Initial public offerings (IPOs) offer tremendous opportunities to raise capital. The economic and legal landscape for IPOs has been rapidly evolving across countries. There have been fewer IPOs in the United States in the aftermath of the 2007–2009 financial crisis and associated regulatory reforms that began in 2002. In 1980–2000, an average of 310 firms went public every year, while in 2001–2014 an average of 110 firms went public every year. At the same time, there are so many firms that seek an IPO in China that there has been a massive waiting list of hundreds of firms in recent years. Some countries are promoting small junior stock exchanges to go public early, and even crowdfunding to avoid any prospectus disclosure. Financial regulation of analysts and investment banks has been evolving in ways that drastically impact the economics of going public—in some countries, such as the United States, drastically increasing the minimum size of a company before it can expect to go public. This Handbook not only systematically and comprehensively consolidates a large body of literature on IPOs, but provides a foundation for future debates and inquiry.

Book chapters on the topic "Minimal sample size"

1

Fred, Ana L. N., and José M. N. Leitão. "Minimal sample size in grammatical inference: a bootstrapping approach." In Advances in Pattern Recognition. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0033320.

2

Wilczyński, Maciej. "Minimax Estimation Under Random Sample Size." In Operations Research Proceedings 1999. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-58300-1_27.

3

Skubalska-Rafajłowicz, Ewa. "Small Sample Size in High Dimensional Space - Minimum Distance Based Classification." In Artificial Intelligence and Soft Computing. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07173-2_52.

4

Mischko, Jens, Stefan Einbock, and Rainer Wagener. "How to Predict the Product Reliability Confidently and Fast with a Minimum Number of Samples in the Wöhler Test." In Lecture Notes in Mechanical Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_11.

Abstract:
To accurately estimate and predict the (product) lifetime, a large sample size is mandatory, especially for new and unknown materials. The realization of such a sample size is rarely feasible for reasons of cost and capacity. Prior knowledge must be used systematically and consistently to be able to predict the lifetime accurately. Using the example of the Wöhler test, it is shown that lifetime prediction with a minimum number of specimens and minimal test time can be successful when prior knowledge is taken into account.
5

Polkanov, Art. "Determination of the minimum bat sample group size to provide reliable parasite indices." In The Biology and Conservation of Australasian Bats. Royal Zoological Society of New South Wales, 2011. http://dx.doi.org/10.7882/fs.2011.023.

6

Woodroofe, Michael. "An asymptotic minimax determination of the initial sample size in a two-stage sequential procedure." In Institute of Mathematical Statistics Lecture Notes - Monograph Series. Institute of Mathematical Statistics, 2004. http://dx.doi.org/10.1214/lnms/1196285393.

7

Oppe, Mark, Richard Norman, Zhihao Yang, and Ben van Hout. "Experimental Design for the Valuation of the EQ-5D-5L." In Value Sets for EQ-5D-5L. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89289-0_3.

Abstract:
The EQ-VT protocol for valuing the EQ-5D-5L offered the opportunity to develop a standardised experimental design to elicit EQ-5D-5L values. This chapter sets out the various aspects of the EQ-VT design and the basis on which methodological choices were made in regard to the stated preference methods used, i.e., composite time trade-off (cTTO) and discrete choice experiments (DCE). These choices include the sub-set of EQ-5D-5L health states to value using these methods; the number of cTTO and DCE valuation tasks per respondent; the minimum sample size needed; and the randomisation schema. This chapter also summarises the research studies developing and testing alternative experimental designs aimed at generating a “Lite” version of the EQ-VT design. This “Lite” version aimed to reduce the number of health states in the design, and thus the sample size, to increase the feasibility of undertaking valuation studies in countries with limited resources or recruitment possibilities. Finally, this chapter outlines remaining methodological issues to be addressed in future research, focusing on refinement of current design strategies, and identification of new designs for novel valuation approaches.
8

Hofer, Marvin, Sebastian Hellmann, Milan Dojchinovski, and Johannes Frey. "The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows." In Semantic Systems. In the Era of Knowledge Graphs. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_1.

Abstract:
Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software has received a plethora of extensions from the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from a 12- to a 17-month cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered; its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.
9

Holden, Stein T. "The gender dimensions of land tenure reforms in Ethiopia 1995-2020." In Land governance and gender: the tenure-gender nexus in land management and land policy. CABI, 2021. http://dx.doi.org/10.1079/9781789247664.0012.

Abstract:
Continued rapid population growth in rural areas is a major challenge to future land access for all in Ethiopia. Landlessness is growing and farm sizes shrinking. This tends to erode the constitutional right of all rural residents without another livelihood option to access land for subsistence. With the recent land laws also stipulating minimum farm sizes, this also restricts inheritance rights of children living on small farms. It also restricts the opportunity to share land equally among spouses upon divorce. Co-management of land among divorced parents and children on small farms is also challenging. The result may be disguised fragmentation. Given the growing landlessness and inheritance rules and the need for alternative livelihoods for youth, we may wonder whether women are at a disadvantage in non-farm employment. Recent studies of a large sample of resource-poor rural youth that have been eligible to join youth business groups and have been allocated rehabilitated communal lands have female members that on average have fewer assets, lower incomes and less education than male members. They are also much less likely to own a mobile phone and to become group leaders or group board members. This shows that young women in Ethiopia continue to be disadvantaged and are among the most resource-poor and vulnerable. There is a need for more targeted policies to give them equal opportunities in the ongoing rural as well as rural-urban transformation processes.
10

Honsdorf, Nora, Jelle Van Loon, Bram Govaerts, and Nele Verhulst. "Crop Management for Breeding Trials." In Wheat Improvement. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90673-3_15.

Abstract:
Appropriate agronomic management of breeding trials plays an important role in creating selection conditions that lead to clear expression of trait differences between genotypes. Good trial management reduces experimental error to a minimum and in this way facilitates the detection of the best genotypes. The field site should be representative for the target environment of the breeding program, including soil and climatic conditions, photoperiod, and pest and disease prevalence. Uniformity of a field site is important to provide similar growing conditions to all plants. Field variability is affected by natural and management factors and leads to variability in crop performance. Additionally, pest and disease incidence tend to concentrate in patches, introducing variability not necessarily related to the susceptibility of affected genotypes. Precise agronomic management of breeding trials can reduce natural field variability and can contribute to reduce variability of crop performance. Through specialized agronomic management, contrasting selection conditions can be created in the same experimental station. The use of adequate machinery like plot seeders and harvesters contributes to precise trial management and facilitates operation. Machine seeding assures even seeding depth and density. Plot combines can be equipped with grain cleaners, on-board weighing systems and sensors to measure grain humidity and weight, which can greatly facilitate data collection.

Conference papers on the topic "Minimal sample size"

1

Harless, Nikki, John Shingledecker, Kyle Stoodt, Kevin Cwiok, and Anand Kulkarni. "Impact of Three Additive Manufacturing Techniques on Microstructure and Creep Damage Development in Alloy 718." In AM-EPRI 2024. ASM International, 2024. http://dx.doi.org/10.31399/asm.cp.am-epri-2024p0338.

Abstract:
Inconel 718 is a nickel-based superalloy known for its excellent combination of high-temperature strength, corrosion resistance, and weldability. Additive Manufacturing (AM) has revolutionized traditional manufacturing processes by enabling the creation of complex and customized components. In this work, three prominent AM techniques were explored: Laser-Based Powder Bed Fusion (PBF), Wire Direct Energy Deposition (DED), and Binder Jet (BJ). A thorough metallographic analysis and comparison of samples was conducted after short-term creep testing originating from each of the three aforementioned techniques in addition to wrought material. Detailed electron microscopy unveiled equiaxed grains in both BJ and wrought samples, while PBF samples displayed elongated finer grain structures in the build direction, characteristic of PBF. The DED samples revealed a more bimodal grain distribution with a combination of smaller equiaxed grains accompanied by larger, more elongated grains. When assessing the three processes, the average grain size was found to be larger in the BJ samples, while the PBF samples exhibited the most significant variation in grain and sub-grain size. Number density, size, and shape of porosity varied between all three techniques. Post-creep test observations in PBF samples revealed the occurrence of wedge cracking at the failure point, accompanied by a preference for grain boundary creep void formation, while BJ samples exhibited grain boundary creep void coalescence and cracking at the failure location. In the DED samples, void formation was minimal; however, it seemed to be more prevalent in areas with precipitates. In contrast, the wrought sample showed void formation at the failure site with a preference for areas with primary carbide formation. Despite BJ samples demonstrating similar or even superior rupture life compared to other AM techniques, a noteworthy reduction in rupture ductility was observed. While a coarse, uniform grain size is generally linked to enhanced creep resistance and rupture life, the combination of pre-existing voids along grain boundaries and the formation of new voids is hypothesized to accelerate rapid fracture, resulting in diminished ductility. This research shows careful consideration is needed when selecting an AM technology for high-temperature applications as creep behavior is sensitive to the large microstructural variations AM can introduce.
2

Kubushiro, Keiji, Kyohei Nomura, Satoshi Takahashi, Madoka Takahashi, and Hirokatsu Nakagawa. "Effect of Pre-Strain on Creep Properties of Alloy 740." In AM-EPRI 2010, edited by D. Gandy, J. Shingledecker, and R. Viswanathan. ASM International, 2010. http://dx.doi.org/10.31399/asm.cp.am-epri-2010p0164.

Abstract:
The effects of pre-strain on the creep properties of Alloy 740 have been investigated. A tensile strain of 7.5% was introduced by a room-temperature tensile test. Creep tests were conducted at 750 °C and 275–350 MPa. The creep rupture life of the pre-strained sample decreased by half compared with the as-heat-treated sample. The creep behaviors of both samples were almost similar in the primary creep stage, but the onset of creep rate acceleration in the pre-strained sample was faster than in the as-heat-treated sample. As a result, the minimum creep rate of the pre-strained sample was two times larger than that of the as-heat-treated sample. From the observation of ruptured specimens, the pre-strained sample had many more sub-cracks than the as-heat-treated sample. On the other hand, the microstructure of the two samples was also different. There were MC precipitates on grain boundaries in both ruptured specimens, but both the size and number of MC precipitates were larger in the pre-strained sample, although the creep life of the pre-strained sample was shorter than that of the as-heat-treated sample. In this paper, the difference in creep behavior is discussed in terms of both microstructural change and mechanical damage.
3

Rao, Shan (Sherry), Da Kuang, and Connor McManus. "Study of Minimal Surface Preparation for Various Patch Repair Coating Products Applied under the Minimum Allowable Temperatures." In CONFERENCE 2024. AMPP, 2024. https://doi.org/10.5006/c2024-21005.

Abstract:
Patch repair coating products provide cost-effective solutions for restoring small to medium-sized damage in mainline coatings to extend the lifespan of pipelines. Some coating products are specially designed to tolerate reduced surface conditions while providing adequate corrosion protection performance. Because power tool cleaning is easy to mobilize and relatively inexpensive, it is commonly used in the field to prepare surfaces for small area repairs. During field coating work, adverse weather conditions are often encountered. However, little data are publicly available to confirm whether the minimal surface preparation recommended by the product data sheet (PDS) is also suitable when the coating is applied at the minimum allowable temperature. In addition, the effect of a similar surface level produced by different power tools on the corrosion resistance of the same coating is rarely reported. This paper aims to address these gaps. Four commercially available coating products were selected to examine their potential as patch repairs, including one liquid-applied epoxy coating, one butyl-based sealant product, and two viscoelastic coating products. Test samples were prepared using a rust grade C steel pipe section. Different power tools such as wire brush, rotary sander, flap disc, and bristle blaster were used to create SSPC-SP3, SSPC-SP11, and SSPC-SP15 surfaces. All four products were applied and cured at the minimum allowable temperatures specified in their respective PDS or recommended by the manufacturers. A 14-day cathodic disbondment (CD) test was employed to rule out the improper surfaces or power tools, and the next higher surface level was then considered potentially suitable. The long-term performance characteristics of these coating products on the established minimal surfaces under the minimum allowable application temperatures were evaluated using the selected critical tests from CSA Z245.30 standards.
4

Fajt, J. "Sometime More Is Just More: THPS Biocide Laboratory Kill Study on Wildtype Sulfate Reducing Bacteria." In CONFERENCE 2023. AMPP, 2023. https://doi.org/10.5006/c2023-18858.

Abstract:
Biocide dose response studies are commonly conducted on water solutions containing bacteria to determine the effect of chemical treatments before application. Biocide product labels provide broad guidelines for dosing. However, site water chemistry and bacteria biology make the minimum effective dose, which differs for each location, difficult to determine. A large-volume culture of sulfate-reducing bacteria (SRB) was prepared and allowed to grow until 4 log bacteria were present. The sample was then split into four identical 500 ml samples. The four samples were dosed at 0, 5, 50 and 400 ppm of tetrakis (hydroxymethyl)-phosphonium sulfate (THPS) based biocide. The effect on bacteria levels was tested using an enzyme-based bacteria metabolism test after 0.2, 1, 8, 24 and 96 hr. This study showed that a single application of 50 ppm of biocide could be as effective as 400 ppm on high numbers of planktonic SRB.
5

Thompson, J. J., and V. S. Agarwala. "A Heat Treatment for Reducing Corrosion and Stress Corrosion Cracking Susceptibilities in 7XXX Al Alloys." In CORROSION 1986. NACE International, 1986. https://doi.org/10.5006/c1986-86204.

Abstract:
The recently developed retrogression and reaging (RRA) heat treatments have been applied to redress the trade-off between strength and corrosion resistance in 7000 series aluminum alloys. So far they have been applied to thin and small sample sizes with some success. In this study, a modified RRA-treated material was found to show significant improvement in both the exfoliation and stress corrosion cracking (SCC) resistances, with only a minimal loss in yield strength compared to its T6 temper. Comparative SEM and TEM analyses of their fracture modes were made. A model has been proposed to explain the differences in their microstructures and the marked differences in SCC susceptibilities.
6

Arenberg, Jonathan W. "Determination of minimal test sample size for high-accuracy laser damage testing." In Laser-Induced Damage in Optical Materials: 1994, edited by Harold E. Bennett, Arthur H. Guenther, Mark R. Kozlowski, Brian E. Newnam, and M. J. Soileau. SPIE, 1995. http://dx.doi.org/10.1117/12.213731.

7

Lee, Jaesung, Shiyu Zhou, and Junhong Chen. "Sequential Robust Parameter Design With Sample Size Selection." In ASME 2022 17th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/msec2022-85690.

Abstract:
In designing engineering systems, it is crucial to find a robust design whose responses have minimal variation and satisfy a constraint on the mean, also known as robust parameter design (RPD). Such optimization is challenging because collecting data is often expensive, and modern engineering systems commonly produce responses whose mean and variance are complex unknown functions of control variables. To address these challenges, we propose a stochastic constrained Bayesian optimization method for RPD. We construct Gaussian process dual response surrogate models for the mean and variance of the response from the sample mean and sample variance. The predicted mean and the uncertainties quantified through the surrogate models are used to exploit the predictions and explore regions of high uncertainty in the proposed Bayesian optimization method. Because the sample size underlying the sample mean and sample variance affects the performance significantly, we propose an effective sample size selection scheme, which effectively balances exploitation and exploration during optimization. The performance of our method is demonstrated in numerical and case studies. In the case study, we used real-world data for the robust design of graphene field-effect transistor nanosensors.
8

Han, Liang, Yanliang Zhang, and Theodorian Borca-Tasciuc. "Heater Size Effect on Temperature Sensing With Wollaston Scanning Thermal Microprobes." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-65621.

Abstract:
Scanning thermal microscopy (SThM) is an attractive tool for high spatial resolution thermal characterization with minimal sample preparation. However, complex thermal contact mechanisms often hinder precise quantification of the sample temperature or thermal properties.
9

Henriques, Ana Beatriz B., Paola L. de Aguiar, Raphael G. dos Santos, and Joice Miagava. "Evaluation of Process Parameters Modifications on Directed Energy Deposition Manufactured Parts Obtained in a Hybrid Additive Manufacturing Machine." In ASME 2022 17th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/msec2022-85315.

Abstract:
To combine the advantages of both additive and subtractive manufacturing, hybrid machine tools have been developed. In the hybrid process, directed energy deposition (DED) is the most commonly used additive manufacturing technology due to its adaptability to CNC milling centers. However, to assure the integrity of a printed part, several process parameters must be set appropriately. Not only are there several parameters, but some of them also influence several variables (e.g., scan speed influences both the energy input per unit area and the powder volume that is deposited). In addition, another factor that complicates achieving good quality in a workpiece is that some relevant additive manufacturing parameters cannot be controlled due to CNC milling center constraints (e.g., atmosphere). In this work, laser power (280 to 340 W) and scan speed (5 to 7 mm/s) were systematically varied to print 316L test samples with the aim of building a quality matrix. In the future, this matrix will be used to create strategies to optimize the quality of printed parts. Optical stereoscopy shows that the higher the laser power, the taller the sample, indicating that more powder is melted and deposited with increasing laser power. When the laser power was fixed and the scan speed increased, printed samples were lower, indicating that less powder was deposited. Other parameters, such as sample size and shield gas flow, were preliminarily tested. Decreasing the sample size from 9 to 6 mm was sufficient to double the sample height, showing that the heat transfer rate was dramatically changed. The findings of this study show that all process parameters act together and are determining factors for a good-quality printed part. Moreover, it was noted that sample integrity is very sensitive to minimal changes in some process parameters.
10

Zhu, Zhifu, and Xiaoping Du. "A System Reliability Method With Dependent Kriging Predictions." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59030.

Abstract:
When limit-state functions are highly nonlinear, traditional reliability methods, such as the first order and second order reliability methods, are not accurate. Monte Carlo simulation (MCS), on the other hand, is accurate if a sufficient sample size is used, but is computationally intensive. This research proposes a new system reliability method that combines MCS and the Kriging method with improved accuracy and efficiency. Cheaper surrogate models are created for limit-state functions with the minimal variance in the estimate of the system reliability, thereby producing high accuracy for the system reliability prediction. Instead of employing global optimization, this method uses MCS samples from which training points for the surrogate models are selected. By considering the dependence between responses from a surrogate model, this method captures the true contribution of each MCS sample to the uncertainty in the estimate of the system reliability and therefore chooses training points efficiently. Good accuracy and efficiency are demonstrated by three examples.

Reports on the topic "Minimal sample size"

1

Gervacio Jakabosky, Olivia. Effects of Road and Trail Characteristics on Mountain Grouse Observations in Western Montana. Montana State University, 2022. https://doi.org/10.15788/s2022.curio2.

Abstract:
Anthropogenic features such as roads and trails, and human activity may affect space use, demography, abundance, and other wildlife population parameters. Human infrastructure and activity may result in biased population estimates by influencing habitat use of a species and thus abundance estimates within a localized area or the ability of biologists to detect individuals during standard population surveys. To evaluate the effects of anthropogenic features on mountain grouse detections, we developed and conducted replicated surveys throughout western Montana during 2020. Biologists and volunteers collected count data for dusky, ruffed, and spruce grouse during point count surveys conducted at 582 sites and transect surveys conducted for 291 transects located throughout Montana Fish, Wildlife, and Parks Regions 1-5. Survey transects occurred along two types of human infrastructure: U.S. Forest Service roads with minimal traffic during the survey period and U.S. Forest Service trails. As a first step, we compared count data for road and trail transect surveys for each species of grouse. Overall, raw counts of dusky grouse were higher for transects located along trails (0.59 ± 1.07 SD grouse per transect) than roads (0.33 ± 0.91 SD). Raw counts of ruffed grouse were similar for transects located along trails (0.75 ± 1.42 SD) and roads (0.69 ± 1.55 SD). We did not have a sufficient sample size to evaluate spruce grouse counts.
2

de Caritat, Patrice, Brent McInnes, and Stephen Rowins. Towards a heavy mineral map of the Australian continent: a feasibility study. Geoscience Australia, 2020. http://dx.doi.org/10.11636/record.2020.031.

Full text
Abstract:
Heavy minerals (HMs) are minerals with a specific gravity greater than 2.9 g/cm³. They are commonly highly resistant to physical and chemical weathering, and therefore persist in sediments as lasting indicators of the (former) presence of the rocks they formed in. The presence/absence of certain HMs, their associations with other HMs, their concentration levels, and the geochemical patterns they form in maps or 3D models can be indicative of geological processes that contributed to their formation. Furthermore, trace element and isotopic analyses of HMs have been used to vector to mineralisation or constrain the timing of geological processes. The positive role of HMs in mineral exploration is well established in other countries, but comparatively little understood in Australia. Here we present the results of a pilot project that was designed to establish, test and assess a workflow to produce a HM map (or atlas of maps) and dataset for Australia. This would represent a critical step in the ability to detect anomalous HM patterns, as it would establish the background HM characteristics (i.e., unrelated to mineralisation). Further, the extremely rich dataset produced would be a valuable input into any future machine learning/big data-based prospectivity analysis. The pilot project consisted of selecting ten sites from the National Geochemical Survey of Australia (NGSA) and separating and analysing the HM contents from the 75-430 µm grain-size fraction of the top (0-10 cm depth) sediment samples. A workflow was established and tested based on the density separation of the HM-rich phase, combining a shake table with dense liquids. The automated mineralogy quantification was performed on a TESCAN® Integrated Mineral Analyser (TIMA) that identified and mapped thousands of grains in a matter of minutes for each sample. The results indicated that: (1) the NGSA samples are appropriate for HM analysis; (2) over 40 HMs were effectively identified and quantified using TIMA automated quantitative mineralogy; (3) the resultant HMs' mineralogy is consistent with the samples' bulk geochemistry and regional geological setting; and (4) the HM makeup of the NGSA samples varied across the country, as shown by the mineral mounts and preliminary maps. Based on these observations, HM mapping of the continent using NGSA samples will likely result in coherent and interpretable geological patterns relating to bedrock lithology, metamorphic grade, degree of alteration and mineralisation. It could assist in geological investigations, especially where outcrop is minimal, challenging to correctly attribute due to extensive weathering, or simply difficult to access. It is believed that a continental-scale HM atlas for Australia could assist in derisking mineral exploration and lead to investment, e.g., via tenement uptake, exploration, discovery and ultimately exploitation. As some HMs are hosts for technology-critical elements such as rare earth elements, their systematic and internally consistent quantification and mapping could lead to resource discovery essential for a more sustainable, lower-carbon economy.
APA, Harvard, Vancouver, ISO, and other styles
3

Parker, Jennifer, Katherine Irimata, Guangyu Zhang, et al. National Center for Health Statistics Data Presentation Standards for Rates and Counts. National Center for Health Statistics (U.S.), 2023. http://dx.doi.org/10.15620/cdc:124368.

Full text
Abstract:
This report describes the multistep NCHS data presentation standards for rates and counts, which are based on a minimum sample size and the relative width of a confidence interval, and which differ between vital statistics and health surveys.
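As a rough illustration of the relative confidence-interval-width idea mentioned in this abstract, the Python sketch below computes a normal-approximation 95% confidence interval for a crude rate and compares its relative width with a cut-off. The CI method and the 130% threshold are assumptions made here for illustration; the report itself defines the actual multistep presentation standards.

# Relative CI-width check for a crude rate (illustration only; the threshold
# and CI method are placeholders, not the NCHS rules).
import math

def relative_ci_width(events, population, per=100_000, z=1.96):
    """95% CI width as a percentage of the rate estimate (Poisson approximation)."""
    rate = events / population * per
    se = math.sqrt(events) / population * per      # SE of a Poisson count, rescaled
    lower, upper = rate - z * se, rate + z * se
    return (upper - lower) / rate * 100

rcw = relative_ci_width(events=25, population=180_000)
print(f"relative CI width: {rcw:.0f}%")
print("flag or suppress estimate" if rcw > 130 else "estimate may be presented")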
APA, Harvard, Vancouver, ISO, and other styles
4

Dailami, N., M. Bhaskara Rao, and K. Subramanyam. On the Selection of the Best Gamma Population. Determination of Minimax Sample Sizes. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada166138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

W Djimeu, Eric, and Deo-Gracias Houndolo. Power calculation for causal inference in social science: sample size and minimum detectable effect determination. International Initiative for Impact Evaluation (3ie), 2015. http://dx.doi.org/10.23846/wp0026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Espino, Emilio, and Martín González Rozada. Automatic Stabilization and Fiscal Policy: Some Quantitative Implications for Latin America and the Caribbean. Inter-American Development Bank, 2012. http://dx.doi.org/10.18235/0011425.

Full text
Abstract:
This paper provides an estimation of the size of income and demand automatic stabilizers in a representative sample of Latin American and Caribbean (LAC) countries. The authors find that when a negative unemployment shock hits the economy, the income and demand automatic stabilizer coefficients are much smaller than in Europe and the United States. This evidence suggests that there is room for policies that enlarge the shock absorption captured by these coefficients and thereby contribute to macroeconomic stability in LAC countries. The paper analyzes four policies affecting the income stabilization coefficient and two others directly affecting the demand stabilization coefficient. The main results suggest that changing the minimum tax exemption and its progressiveness, using the tax structure of middle-income countries outside the LAC region, is the best option for enlarging the income and demand stabilization coefficients and, in this way, reducing the need for discretionary fiscal policies in the region.
APA, Harvard, Vancouver, ISO, and other styles
7

Rauh, Nicholas. 2023 Addendum to the Rough Cilicia Kiln Site Ceramics (Syedra, Delice, Biçkici, and Antiochia ad Cragum): An Update to the Kiln Sites. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317638.

Full text
Abstract:
This addendum summarizes the ceramic remains recovered by the Rough Cilicia Archaeological Survey Project at four posited amphora kiln sites in the survey area: the Syedra Kiln Site, the Biçkici Kiln Site, the Antiochia ad Cragum Kiln Site, and the Delice Kiln Site. All four sites were identified early on during the survey (1995-1997). The survey team conducted grab collections and triaged dozens of sherds recovered by 1997, before returning the bulk of these fragments to the field. A representative sample of the amphora fragments, together with context ceramics for each site, was conserved at the Alanya Archaeological Museum. In 2003, the survey team conducted a magnetometric survey of the Biçkici and Syedra kiln sites with minimal results. Due to the longevity of the survey (1996-2011), members of the survey team, particularly Rauh, Dillon, Autret, and Kızılarslanoğlu, conducted repeated visits to the sites and recovered additional diagnostic fragments as they surfaced. These visits occurred almost annually, with intensive inspections occurring in 2008 and 2011. In September 2021, Rauh inspected the remaining collections from the kiln sites stored in the Alanya Archaeological Museum.
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, Winston Paul, Daniel J. Twedt, David A. Wiedenfeld, Paul B. Hamel, Robert P. Ford, and Robert J. Cooper. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits. U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station, 1993. http://dx.doi.org/10.2737/so-rp-274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Olsen, Laurie, Elvira Armas, and Magaly Lavadenz. A review of year 2 LCAPs: A weak response to English Learners. Center for Equity for English Learners, 2016. http://dx.doi.org/10.15365/ceel.lcap2016.1.

Full text
Abstract:
A panel of 32 reviewers analyzed the Local Control and Accountability Plans (LCAPs) of the same sample of 29 districts for the second year of implementation of the 2013 California Local Control Funding Formula (LCFF). Using the same four questions as the Year 1 report, the Year 2 analysis also addresses the key differences between first- and second-year LCAPs. Key findings from the Year 2 LCAPs review include: (1) similarly weak responses to the needs of ELs by LEAs in Year 2; (2) some improvement in clarity about services provided to ELs in some areas, though most evidence was weak; (3) minimal attention to the new English Language Development Standards; (4) minimal investment in teacher capacity building to address EL needs; (5) lack of attention to coherent programs, services and supports for ELs and failure to address issues of program and curriculum access; (6) weak engagement of ELs' parents in the LCAP process and in the content of LCAP plans; (7) poor employment of EL data to inform LCAP goals and weak use of EL indicators as an LCAP accountability component; (8) lack of specificity in describing district services and site allocations for supplemental and concentration funding; and (9) difficulty identifying the coherence of responses to EL needs in Year 2 LCAPs. Overall, the analysis of the 29 LCAPs continues to signal a weak response to EL needs. The authors reassert the urgency of the recommendations in the Year 1 report, offer additional specific recommendations for the state, county offices of education, and districts, and call upon the state to reaffirm the equity commitment in the LCFF design.
APA, Harvard, Vancouver, ISO, and other styles
10

Mintz, K. J. The explosibility of three canadian coal dusts. Natural Resources Canada/CMSS/Information Management, 1989. http://dx.doi.org/10.4095/331786.

Full text
Abstract:
Explosibility measurements on coal dusts from the Cape Breton Development Corporation's Lingan Mine, TransAlta's Highvale Mine and the Quintette Mine in B.C. have been carried out, along with some tests on Pittsburgh Standard coal dust. The Quintette coal dust would not explode in the classical Hartmann apparatus, but did explode in the new 20-L vessel using a more powerful ignition source. The minimum explosible concentrations of the Lingan, Highvale and Pittsburgh coal dusts were all about the same (40-45 mg/L), while that of the Quintette was higher (140 mg/L). The difference may be attributed to the much greater mean particle size of the Quintette dust. The explosion pressures (in kPa) were: Highvale 600, Pittsburgh 520, Lingan 510, and Quintette 440. The minimum oxygen concentrations required for explosions were (in % oxygen): Highvale 10.4, Lingan 10.5, and Quintette 14. The minimum ignition temperatures of dust clouds were (in °C): Highvale 510, Lingan 600, Quintette 620 and Pittsburgh 620. Further work is required to reconcile limit values.
APA, Harvard, Vancouver, ISO, and other styles