Academic literature on the topic 'Minimum sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Minimum sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Minimum sample size"

1

Louangrath, P. I. "Sample Size Calculation for Continuous and Discrete Data." International Journal of Research and Methodology in Social Science 5, no. 4 (2019): 44–56. https://doi.org/10.5281/zenodo.3877623.

Full text
Abstract:
The purpose of this paper is to provide practical guidance to researchers in social science on sample size determination. Sample size calculation is a basic and indispensable requisite for applied research in social science. Most research in social science involves population studies. In population studies, researchers can examine only a sample of the population because detailed examination of the entire population is not feasible. For the sample to represent the population, a minimum sample must be obtained; minimum sample size determination is therefore a critical requisite in survey collection, interviews, and other data collection. In this paper, we present minimum sample size calculation methods for continuous and discrete data in non-time-series scenarios. The data came from randomly generated values produced with the Excel command rand()*100 for test sample sizes of n = 5, 10, 20, 30, 50, 100, 200, 300, 400, 500, and 1,000. We propose a new minimum sample size method that consistently produces n = 30.
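For orientation, here is a minimal Python sketch of the kind of experiment the abstract describes: uniform test samples like those produced by rand()*100, evaluated at the listed sizes. The paper's own n = 30 rule is not given in the abstract, so a standard normal-approximation formula stands in for illustration; the margin of error E = 5 is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def min_sample_size(data, margin=5.0, z=1.96):
    """Textbook normal-approximation minimum sample size for estimating a mean:
    n = (z * s / E)^2, with s the sample standard deviation and E the tolerated
    margin of error (same units as the data)."""
    s = np.std(data, ddof=1)
    return int(np.ceil((z * s / margin) ** 2))

# Mimic the abstract's test data: uniform values on [0, 100), as rand()*100 gives in Excel.
for n in (5, 10, 20, 30, 50, 100, 200, 300, 400, 500, 1000):
    sample = rng.uniform(0, 100, size=n)
    print(f"test sample n={n:5d} -> estimated minimum n = {min_sample_size(sample)}")
```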
APA, Harvard, Vancouver, ISO, and other styles
2

Louangrath, P.I. "Minimum Sample Size Method Based on Survey Scales." Inter. J. Res. Methodol. Soc. Sci. 3, no. 3 (2017): 44–52. https://doi.org/10.5281/zenodo.1322593.

Full text
Abstract:
The objective of this paper is to introduce a new sample size calculation method based on the type of response scale used in surveys. The current literature on sample size calculation focuses on data attributes and distribution; there is no prior research using the response scale as the basis for minimum sample size calculation. This paper fills that gap in the literature. We introduce a new minimum sample size calculation method called n* (n-star), which uses Monte Carlo iteration to find asymptotic normality in the survey response scale. This new method allows us to achieve up to 95% accuracy in sample-population inference. The data used in this study came from the numerical elements of the survey scales. Three Likert and one non-Likert scale were used to determine the minimum sample size. Through Monte Carlo simulation and NK landscape optimization, we found that the minimum sample size according to survey scales is in all cases n* = 31.61 ± 2.33 (p < 0.05). We combined the four scales to test the validity and reliability of the new sample size. Validity was tested by the NK landscape optimization method, which resulted in an error of F(z*) = 0.001 compared with the theoretical value for the center of the distribution curve, F(z) = 0.00. Reliability was tested using the Weibull system analysis method; the system drift tendency was L = 0.00 and system reliability R = 1.00.
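The abstract does not spell out the n* algorithm, so the sketch below only illustrates the underlying idea: draw Monte Carlo samples from a survey scale and search for the smallest n at which the distribution of sample means looks normal. The Shapiro-Wilk criterion, the replication count, and the scales used here are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def smallest_n_for_normal_means(scale_points, reps=500, alpha=0.05, n_max=100):
    """Illustrative search for the smallest n at which the Monte Carlo
    distribution of sample means from a discrete survey scale is not
    rejected as normal (Shapiro-Wilk). A stand-in for the paper's n*
    procedure, not a reproduction of it."""
    for n in range(5, n_max + 1):
        means = rng.choice(scale_points, size=(reps, n)).mean(axis=1)
        if stats.shapiro(means).pvalue > alpha:
            return n
    return n_max

for scale in ([1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3]):
    print(scale, "->", smallest_n_for_normal_means(scale))
```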
APA, Harvard, Vancouver, ISO, and other styles
3

Scaketti, Matheus, Patricia Sanae Sujii, Alessandro Alves-Pereira, et al. "Sample Size Impact (SaSii): An R script for estimating optimal sample sizes in population genetics and population genomics studies." PLOS ONE 20, no. 2 (2025): e0316634. https://doi.org/10.1371/journal.pone.0316634.

Full text
Abstract:
Obtaining large sample sizes for genetic studies can be challenging, time-consuming, and expensive, and small sample sizes may generate biased or imprecise results. Many studies have suggested the minimum sample size necessary to obtain robust and reliable results, but it is not possible to define one ideal minimum sample size that fits all studies. Here, we present SaSii (Sample Size Impact), an R script to help researchers define the minimum sample size. Based on empirical and simulated data analysis using SaSii, we present patterns and suggest minimum sample sizes for experiment design. The patterns were obtained by analyzing previously published genotype datasets with SaSii and can be used as a starting point for the sample design of population genetics and genomic studies. Our results showed that it is possible to estimate an adequate sample size that accurately represents the real population without requiring the scientist to write any program code, extract and sequence samples, or use population genetics programs, thus simplifying the process. We also confirmed that the minimum sample sizes for SNP (single-nucleotide polymorphism) analysis are usually smaller than for SSR (simple sequence repeat) analysis and discussed other patterns observed from empirical plant and animal datasets.
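SaSii itself is an R script and its decision criteria are not detailed in the abstract; the Python sketch below only illustrates the general resampling idea behind such tools: subsample a genotype matrix at increasing sizes and watch a summary statistic (here, expected heterozygosity) converge to the full-data value. The toy data and the error criterion are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genotype matrix: 200 individuals x 50 biallelic SNP loci, allele counts 0/1/2.
true_freqs = rng.uniform(0.05, 0.95, size=50)
genotypes = rng.binomial(2, true_freqs, size=(200, 50))

def expected_heterozygosity(g):
    p = g.mean(axis=0) / 2.0          # allele frequency per locus
    return np.mean(2 * p * (1 - p))   # mean expected heterozygosity (He)

full_he = expected_heterozygosity(genotypes)
for n in (5, 10, 20, 30, 50, 100):
    reps = [expected_heterozygosity(genotypes[rng.choice(200, n, replace=False)])
            for _ in range(200)]
    err = np.mean(np.abs(np.array(reps) - full_he)) / full_he
    print(f"n={n:3d}  mean relative error in He = {err:.3%}")
```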
APA, Harvard, Vancouver, ISO, and other styles
4

Louangrath, Paul, and Chanoknath Sutanapong. "Minimum Sample Size Calculation Using Cumulative Distribution Function." Inter. J. Res. Methodol. Soc. Sci. 5, no. 1 (2019): 100–113. https://doi.org/10.5281/zenodo.2667494.

Full text
Abstract:
Minimum sample size is a requirement in most experimental designs. Research in social science requires minimum sample size calculation in order to support the claim that the sample represents the population. If the sample does not adequately represent the population, generalizability cannot be achieved. In this study, we present a minimum sample size calculation method based on the cumulative distribution function (CDF) of the normal distribution. Since most quantitative data in social science research come from surveys with responses in the form of Likert or non-Likert scales, the CDF of the normal distribution curve is an appropriate tool for sample size determination. We use binary data in the form (0, 1), and continuous data in the form of a quantitative non-Likert scale (0, 1, 2, 3) and Likert scales (1, 2, 3, 4, 5), (1, 2, 3, 4, 5, 6, 7), and (1, 2, 3, 4, 5, 6, 7, 8, 9, 10), as the bases for our modeling. We used Monte Carlo simulation to determine the number of repetitions needed for each scale to achieve normality. The minimum sample size was determined by taking the natural log of the number of Monte Carlo repetitions multiplied by pi. We found that in all cases the minimum sample size is about 30 when the confidence level is maintained at 95%. For the non-parametric case, the new sample size calculation method may be used for discrete and continuous data. For parametric modeling, we employed the entropy function for common distributions as the basis for sample size determination. This proposed sample size determination method is a contribution to the field because it serves as a unified method for all data types and is a practical tool in research methodology.
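Read literally, the abstract's rule sets the minimum sample size to the natural log of the number of Monte Carlo repetitions multiplied by pi. A tiny sketch of that arithmetic follows; how the repetition count is obtained for each scale is not described in the abstract, so the example counts are hypothetical.

```python
import math

def min_sample_size_from_reps(monte_carlo_reps: int) -> float:
    """Abstract's rule as literally stated: minimum n is the natural log of the
    number of Monte Carlo repetitions, multiplied by pi. How the repetition
    count is determined for each scale is not spelled out in the abstract."""
    return math.pi * math.log(monte_carlo_reps)

# Example: about 14,000 repetitions gives roughly n = 30.
for reps in (1_000, 5_000, 14_000, 50_000):
    print(reps, "->", round(min_sample_size_from_reps(reps), 1))
```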
APA, Harvard, Vancouver, ISO, and other styles
5

Straat, J. Hendrik, L. Andries van der Ark, and Klaas Sijtsma. "Minimum Sample Size Requirements for Mokken Scale Analysis." Educational and Psychological Measurement 74, no. 5 (2014): 809–22. http://dx.doi.org/10.1177/0013164414529793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jenkins, David G., and Pedro F. Quintana-Ascencio. "A solution to minimum sample size for regressions." PLOS ONE 15, no. 2 (2020): e0229345. http://dx.doi.org/10.1371/journal.pone.0229345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mundfrom, Daniel J., Dale G. Shaw, and Tian Lu Ke. "Minimum Sample Size Recommendations for Conducting Factor Analyses." International Journal of Testing 5, no. 2 (2005): 159–68. http://dx.doi.org/10.1207/s15327574ijt0502_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jacka, T. H. "Investigations of discrepancies between laboratory studies of the flow of ice: density, sample shape and size, and grain-size." Annals of Glaciology 19 (1994): 146–54. http://dx.doi.org/10.3189/1994aog19-1-146-154.

Full text
Abstract:
Laboratory results are presented concerning ice creep at minimum creep rate (at ~1% strain) for fine-grained, initially isotropic, polycrystalline samples. The effect on the creep rate of ice density, sample shape (aspect ratio) and size, grain-size and ratio of grain-size to sample size is examined. Provided sample density is above ~0.83 Mg m−3 (i.e. the close-off density), there is no effect of density on ice-creep rate. Results provide no evidence of a creep rate dependence on test sample length for cylindrical samples. Sample diameter, however, does affect creep rate. Over the range of sample diameters studied (16.2 to 90 mm) creep rate decreases monotonically by a factor of ~4. This effect is independent of sample aspect ratio. Experiments examining size effects in simple shear indicate no dependence of minimum flow rate on shape or size in this stress configuration. Two grain-sizes were represented within the samples tested for the effect of sample size. As expected from earlier work, no grain-size effect on minimum creep rate is evident. In addition, there was no evidence of an effect on creep rate of the ratio of grain-size to sample size.
APA, Harvard, Vancouver, ISO, and other styles
9

Jacka, T. H. "Investigations of discrepancies between laboratory studies of the flow of ice: density, sample shape and size, and grain-size." Annals of Glaciology 19 (1994): 146–54. http://dx.doi.org/10.1017/s0260305500011137.

Full text
Abstract:
Laboratory results are presented concerning ice creep at minimum creep rate (at ~1% strain) for fine-grained, initially isotropic, polycrystalline samples. The effect on the creep rate of ice density, sample shape (aspect ratio) and size, grain-size and ratio of grain-size to sample size is examined. Provided sample density is above ~0.83 Mg m−3 (i.e. the close-off density), there is no effect of density on ice-creep rate. Results provide no evidence of a creep rate dependence on test sample length for cylindrical samples. Sample diameter, however, does affect creep rate. Over the range of sample diameters studied (16.2 to 90 mm) creep rate decreases monotonically by a factor of ~4. This effect is independent of sample aspect ratio. Experiments examining size effects in simple shear indicate no dependence of minimum flow rate on shape or size in this stress configuration. Two grain-sizes were represented within the samples tested for the effect of sample size. As expected from earlier work, no grain-size effect on minimum creep rate is evident. In addition, there was no evidence of an effect on creep rate of the ratio of grain-size to sample size.
APA, Harvard, Vancouver, ISO, and other styles
10

Ma, Chenchen, and Shihong Yue. "Minimum Sample Size Estimate for Classifying Invasive Lung Adenocarcinoma." Applied Sciences 12, no. 17 (2022): 8469. http://dx.doi.org/10.3390/app12178469.

Full text
Abstract:
Statistical Learning Theory (SLT) plays an important role in prediction estimation and machine learning when only limited samples are available. At present, determining how many samples are necessary under given circumstances for a required prediction accuracy remains an open question. In this paper, the medical diagnosis of lung cancer is taken as an example to address the problem. Invasive adenocarcinoma (IA) is a main type of lung cancer, often presenting as ground-glass nodules (GGNs) in patients' CT images. Accurately discriminating IA from non-IA based on GGNs has important implications for choosing the right approach to treatment and cure. Support Vector Machine (SVM), an SLT application, is used to classify GGNs, whereby the interrelation between generalization and the lower bound on the necessary number of samples can be effectively recovered. In this research, to validate the interrelation, 436 GGNs were collected and labeled using surgical pathology. A feature vector was then constructed for each GGN sample from the fully connected layer of AlexNet, and a 10-dimensional feature subset was selected using p-values calculated with Analysis of Variance (ANOVA). Finally, four sets with different sample sizes were used to construct an SVM classifier. Experiments show that the theoretical estimate of the minimum sample size is consistent with actual values, and that the lower bound on sample size can be solved for under various generalization requirements.
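A minimal sketch of the empirical pipeline the abstract outlines: ANOVA-based selection of a 10-dimensional feature subset followed by SVM classifiers trained on increasingly large subsets. Synthetic features from make_classification stand in for the AlexNet features of the 436 GGNs, and the split sizes are arbitrary; the SLT lower-bound calculation itself is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for AlexNet feature vectors of GGN images: 436 samples, 256 features.
X, y = make_classification(n_samples=436, n_features=256, n_informative=20,
                           random_state=0)

# Keep the 10 features with the largest ANOVA F-statistic, as in the abstract.
X10 = SelectKBest(f_classif, k=10).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X10, y, test_size=0.3, random_state=0)

# Train SVMs on nested subsets of the training data to see how accuracy
# responds to sample size (the empirical side of the SLT bound).
for n in (30, 60, 120, len(X_tr)):
    clf = SVC(kernel="rbf").fit(X_tr[:n], y_tr[:n])
    print(f"training size {n:3d}: test accuracy = {clf.score(X_te, y_te):.2f}")
```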
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Minimum sample size"

1

Potgieter, Ryno. "Minimum sample size for estimating the Bayes error at a predetermined level." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33479.

Full text
Abstract:
Determining the correct sample size is of utmost importance in study design. Large samples yield classifiers or parameters with more precision and, conversely, samples that are too small yield unreliable results. Fixed sample size methods, determined by the specified level of error between the obtained parameter and the population value, or by a confidence level associated with the estimate, have been developed and are available. These methods are extremely useful when there is little or no cost (in consequences of action, finances, or time) involved in gathering the data. Alternatively, sequential sampling procedures have been developed specifically to obtain a classifier or parameter estimate that is as accurate as deemed necessary by the researcher, while sampling the least number of observations required to obtain the specified level of accuracy. This dissertation discusses a sequential procedure, derived using Martingale Limit Theory, developed to train a classifier with the minimum number of observations needed to ensure, with a high enough probability, that the next observation sampled has a low enough probability of being misclassified. Various classification methods are discussed and tested, with multiple combinations of parameters. Additionally, the sequential procedure is tested on microarray data, and its advantages and shortcomings are pointed out and discussed. This dissertation also proposes a new sequential procedure that trains the classifier to such an extent as to accurately estimate the Bayes error with a high probability. The new sequential procedure retains all of the advantages of the previous method while addressing its most serious shortcoming. Ultimately, the sequential procedure developed enables the researcher to dictate how accurate the classifier should be and provides more control over the trained classifier.
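The dissertation's Martingale-based stopping rule is not given in the abstract; the sketch below shows only the generic shape of a sequential procedure: grow the training set in small steps, refit, and stop once the monitored error estimate stabilizes. The stabilization tolerance, step size, and classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
X_val, y_val = X[4000:], y[4000:]        # held-out pool used to monitor error

n, step, tol, window = 20, 10, 0.01, 3
history = []
while n <= 4000:
    clf = LogisticRegression(max_iter=1000).fit(X[:n], y[:n])
    history.append(1.0 - clf.score(X_val, y_val))
    # Stop once the error estimate has changed by less than `tol`
    # over the last `window` refits.
    if len(history) > window and max(history[-window:]) - min(history[-window:]) < tol:
        break
    n += step

print(f"stopped at n = {n}, estimated error = {history[-1]:.3f}")
```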
APA, Harvard, Vancouver, ISO, and other styles
2

Marcondes, Patricia Dione Guerra. "Minimum sample size needed to construct cushion curves based on the stress-energy method." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1181668751/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Forgo, Vincent Z. "A Distribution of the First Order Statistic When the Sample Size is Random." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3181.

Full text
Abstract:
Statistical distributions, also known as probability distributions, are used to model a random experiment. Probability distributions are described by probability density functions (pdf) and cumulative distribution functions (cdf). They are widely used in engineering, actuarial science, computer science, the biological sciences, physics, and other areas of study. Statistics are used to draw conclusions about the population through probability models. Sample statistics such as the minimum, first quartile, median, third quartile, and maximum, referred to as the five-number summary, are examples of order statistics. The minimum and maximum observations are important in extreme value theory. This paper focuses on the probability distribution of the minimum observation, also known as the first order statistic, when the sample size is random.
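As background, the standard way to write the distribution of the first order statistic (the minimum) when the sample size N is random and independent of the i.i.d. observations is via the probability generating function of N. The zero-truncated geometric case below is just one example, not necessarily the distribution treated in the thesis.

```latex
% CDF of the minimum X_(1) of X_1, ..., X_N (i.i.d. with CDF F), with N random,
% independent of the X_i, and supported on {1, 2, ...}:
\[
  F_{X_{(1)}}(x) \;=\; 1 - \mathbb{E}\!\left[(1 - F(x))^{N}\right]
               \;=\; 1 - G_{N}\!\bigl(1 - F(x)\bigr),
  \qquad G_{N}(s) = \mathbb{E}\!\left[s^{N}\right].
\]
% Example: for a zero-truncated geometric N with P(N = k) = p(1-p)^{k-1},
% G_N(s) = ps / (1 - (1-p)s), giving
\[
  F_{X_{(1)}}(x) \;=\; 1 - \frac{p\,\bigl(1 - F(x)\bigr)}{1 - (1-p)\,\bigl(1 - F(x)\bigr)}.
\]
```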
APA, Harvard, Vancouver, ISO, and other styles
4

Ruengvirayudh, Pornchanok. "A Monte Carlo Study of Parallel Analysis, Minimum Average Partial, Indicator Function, and Modified Average Roots for Determining the Number of Dimensions with Binary Variables in Test Data: Impact of Sample Size and Factor Structure." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou151516919677091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Williams, James Dickson. "Contributions to Profile Monitoring and Multivariate Statistical Process Control." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30032.

Full text
Abstract:
The content of this dissertation is divided into two main topics: 1) nonlinear profile monitoring and 2) an improved approximate distribution for the T^2 statistic based on the successive-differences covariance matrix estimator. (Part 1) In an increasing number of cases the quality of a product or process cannot adequately be represented by the distribution of a univariate quality variable or the multivariate distribution of a vector of quality variables. Rather, a series of measurements is taken across some continuum, such as time or space, to create a profile. The profile determines the product quality at that sampling period. We propose Phase I methods to analyze profiles in a baseline dataset where the profiles can be modeled through either a parametric nonlinear regression function or a nonparametric regression function. We illustrate our methods using data from Walker and Wright (2002) and dose-response data from DuPont Crop Protection. (Part 2) Although the T^2 statistic based on the successive-differences estimator has been shown to be effective in detecting a shift in the mean vector (Sullivan and Woodall (1996) and Vargas (2003)), the exact distribution of this statistic is unknown. An accurate upper control limit (UCL) for the T^2 chart based on this statistic depends on knowing its distribution. Two approximate distributions have been proposed in the literature. We demonstrate the inadequacy of these two approximations and derive useful properties of this statistic. We give an improved approximate distribution and recommendations for its use.
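For reference, a short Python sketch of the Phase I T² statistic built on the successive-differences covariance estimator discussed in Part 2. The estimator follows Sullivan and Woodall's construction; the data, shift, and dimensions are made up for illustration, and the dissertation's improved control limit is not computed here.

```python
import numpy as np

def t2_successive_differences(X):
    """Phase I T^2 statistics using the successive-differences covariance
    estimator S = (1 / (2*(m-1))) * sum_i d_i d_i^T with d_i = x_{i+1} - x_i
    (Sullivan and Woodall, 1996)."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    d = np.diff(X, axis=0)
    S = d.T @ d / (2.0 * (m - 1))
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(S)
    return np.einsum("ij,jk,ik->i", centered, S_inv, centered)

rng = np.random.default_rng(3)
data = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=30)
data[20:] += [2.0, 0.0, 0.0]          # sustained mean shift after observation 20
print(np.round(t2_successive_differences(data), 2))
```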
APA, Harvard, Vancouver, ISO, and other styles
6

Houghton, Damon. "Minimum tree height sample sizes necessary for accurately estimating merchantable plot volume in Loblolly pine plantations." Thesis, Virginia Tech, 1991. http://scholar.lib.vt.edu/theses/available/etd-05022009-040541/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Qazi, Abdus Shakur. "Statistical analysis of TxCAP and its subsystems." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-08-4230.

Full text
Abstract:
The Texas Department of Transportation (TxDOT) uses the Texas Condition Assessment Program (TxCAP) to measure and compare overall road maintenance conditions among its 25 districts. TxCAP combines data from three existing subsystems: the Pavement Management Information System (PMIS), which scores the condition of pavement; the Texas Maintenance Assessment Program (TxMAP), which evaluates roadside conditions; and the Texas Traffic Assessment Program (TxTAP), which evaluates the condition of signs, work zones, railroad crossings, and other traffic elements, to give an overall picture of the condition of state roads. As a result, TxCAP provides a more comprehensive assessment of the interstate and non-interstate highways. However, the scores for each of the subsystems are based on data of different sample sizes, accuracy, and levels of variation, making it difficult to decide whether the difference between two TxCAP scores is a true difference or measurement error. Therefore, whether TxCAP is an effective and consistent means of measuring TxDOT roadway maintenance conditions raises concerns and needs to be evaluated. To achieve this objective, statistical analyses of the system were conducted in two ways: 1) to determine whether sufficient samples are collected for each of the subsystems, and 2) to determine whether the scores are statistically different from each other. A case study was conducted with a dataset covering the whole state from 2008 to 2010. The case study results show that the difference in scores between two districts is statistically significant for some pairs of districts and insignificant for others. It is therefore recommended that TxDOT either compare the 25 districts by groups/tiers or increase the sample size of the data being collected in order to compare the districts individually.
APA, Harvard, Vancouver, ISO, and other styles
8

"Robust Experimental Design for Speech Analysis Applications." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57412.

Full text
Abstract:
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how effectively the training data set represents the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier is mainly dependent on the number and quality of the training data set. For small sample sizes and unbalanced data, classifiers developed in this context may be focusing on the differences in the training data set rather than emotion (e.g., focusing on gender, age, and dialect). This thesis evaluates several sampling methods and a non-parametric approach to the sample sizes required to minimize the effect of these nuisance variables on classification performance. This work specifically focused on speech analysis applications, and hence the work was done with speech features such as Mel-Frequency Cepstral Coefficients (MFCC) and Filter Bank Cepstral Coefficients (FBCC). The non-parametric divergence (D_p divergence) measure was used to study the difference between different sampling schemes (stratified and multistage sampling) and the changes due to the sentence types in the sampling set.
APA, Harvard, Vancouver, ISO, and other styles
9

"Sample Size and Test Length Minima for DIMTEST with Conditional Covariance -Based Subtest Selection." Master's thesis, 2012. http://hdl.handle.net/2286/R.I.14957.

Full text
Abstract:
The existing minima for sample size and test length recommendations for DIMTEST (750 examinees and 25 items) are tied to features of the procedure that are no longer in use. The current version of DIMTEST uses a bootstrapping procedure to remove bias from the test statistic and is packaged with a conditional covariance-based procedure called ATFIND for partitioning test items. Key factors such as sample size, test length, test structure, the correlation between dimensions, and strength of dependence were manipulated in a Monte Carlo study to assess the effectiveness of the current version of DIMTEST with fewer examinees and items. In addition, the DETECT program was also used to partition test items; a second part of this study compared the structure of test partitions obtained with ATFIND and DETECT in a number of ways. With some exceptions, the performance of DIMTEST was quite conservative in unidimensional conditions. The performance of DIMTEST in multidimensional conditions depended on each of the manipulated factors and suggested that the minima for sample size and test length can be lowered for some conditions. In terms of partitioning test items in unidimensional conditions, DETECT tended to produce longer assessment subtests than ATFIND, in turn yielding different test partitions. In multidimensional conditions, test partitions became more similar and were more accurate with increased sample size, for factorially simple data, greater strength of dependence, and a decreased correlation between dimensions. Recommendations for sample size and test length minima are provided along with suggestions for future research.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Minimum sample size"

1

Sawyer, Richard. Determining minimum sample sizes for estimating prediction equations for college freshman grade average. American College Testing Program, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sylvester, Lancelot G. A computerized approach to finding the minimum sample size for single sample attribute sampling plans. 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Point counts of birds in bottomland hardwood forests of the Mississippi Alluvial valley: Duration, minimum sample size, and points versus visits. U.S. Dept. of Agriculture, Forest Service, Southern Forest Experiment Station, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Full text
Abstract:
The quality of data collected in official spatial databases is crucial in making strategic decisions as well as in the implementation of planning and design works. Awareness of the level of the quality of these data is also important for individual users of official spatial data. The author presents methods and models of description and evaluation of the quality of spatial data collected in public registers. Data describing the space in the highest degree of detail, which are collected in three databases: land and buildings registry (EGiB), geodetic registry of the land infrastructure network (GESUT) and in database of topographic objects (BDOT500) were analyzed. The results of the research concerned selected aspects of activities in terms of the spatial data quality. These activities include: the assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels, analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data, construction of the quality model of data collected in official databases and visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of deviations of coordinates with three variables dX, dY and Dl – deviations from the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as a faultless. The compatibility of empirical data accuracy distributions with models (theoretical distributions of random variables) was investigated and also the accuracy of the spatial data has been assessed by means of the methods resistant to the outliers. In the process of determination of the accuracy of spatial data collected in public registers, the author’s solution was used – resistant method of the relative frequency. Weight functions, which modify (to varying degree) the sizes of the vectors Dl – the lengths of the points offset vector of the test sample in relation to their position recognized as a faultless were proposed. From the scope of the uncertainty of estimation of the area of registry parcels the impact of the errors of the geodetic network points was determined (points of reference and of the higher class networks) and the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction was determined (in EGiB database) of the plots area, calculated on the basis of re-measurements, performed using equivalent techniques (in terms of accuracy). The analysis of the risk of damage to the underground infrastructure network due to the low quality of spatial data is another research topic presented in the paper. Three main factors have been identified that influence the value of this risk: incompleteness of spatial data sets and insufficient accuracy of determination of the horizontal and vertical position of underground infrastructure. A method for estimation of the project risk has been developed (quantitative and qualitative) and the author’s risk estimation technique, based on the idea of fuzzy logic was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form. 
The data quality model is a set of rules used to describe the quality of these data sets. The model that has been proposed defines a standardized approach for assessing and reporting the quality of EGiB, GESUT and BDOT500 spatial data bases. Quantitative and qualitative rules (automatic, office and field) of data sets control were defined. The minimum sample size and the number of eligible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to the users needs. The values of impact weights were determined by the hierarchical analytical process method (AHP). The harmonization of conceptual models of EGiB, GESUT and BDOT500 databases with BDOT10k database was analysed too. It was found that the downloading and supplying of the information in BDOT10k creation and update processes from the analyzed registers are limited. An effective approach to providing spatial data sets users with information concerning data uncertainty are cartographic visualization techniques. Based on the author’s own experience and research works on the quality of official spatial database data examination, the set of methods for visualization of the uncertainty of data bases EGiB, GESUT and BDOT500 was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of the position was defined (for surface, line, and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing (for example) completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and application importance. They indicate on the possibility of effective evaluation of the quality of spatial data collected in public registers and may be an important element of the expert system.
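One step mentioned above, fixing the minimum sample size and the number of eligible nonconformities for controlling data sets, is in general form an acceptance-sampling calculation. A generic single-sampling sketch is given below; the AQL, LTPD, and risk levels are placeholders, not the values used in the monograph.

```python
from math import comb

def binom_cdf(c, n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def single_sampling_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest single-sampling plan (n, c): accept the lot if at most c
    nonconforming items are found in a random sample of n. The plan must pass
    lots at the AQL with probability >= 1-alpha and reject lots at the LTPD
    with probability >= 1-beta."""
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            if binom_cdf(c, n, aql) >= 1 - alpha and binom_cdf(c, n, ltpd) <= beta:
                return n, c
    return None

print(single_sampling_plan(aql=0.01, ltpd=0.05))
```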
APA, Harvard, Vancouver, ISO, and other styles
5

Cumming, Douglas, ed. The Oxford Handbook of IPOs. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190614577.001.0001.

Full text
Abstract:
Firms generally begin as privately owned entities. When they grow large enough, the decision to go public and its consequences are among the most crucial times in a firm’s life cycle. The first time a firm is a reporting issuer gives rise to tremendous responsibilities about disclosing public information and accountability to a wide array of retail shareholders and institutional investors. Initial public offerings (IPOs) offer tremendous opportunities to raise capital. The economic and legal landscape for IPOs has been rapidly evolving across countries. There have been fewer IPOs in the United States in the aftermath of the 2007–2009 financial crisis and associated regulatory reforms that began in 2002. In 1980–2000, an average of 310 firms went public every year, while in 2001–2014 an average of 110 firms went public every year. At the same time, there are so many firms that seek an IPO in China that there has been a massive waiting list of hundreds of firms in recent years. Some countries are promoting small junior stock exchanges to go public early, and even crowdfunding to avoid any prospectus disclosure. Financial regulation of analysts and investment banks has been evolving in ways that drastically impact the economics of going public—in some countries, such as the United States, drastically increasing the minimum size of a company before it can expect to go public. This Handbook not only systematically and comprehensively consolidates a large body of literature on IPOs, but provides a foundation for future debates and inquiry.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Minimum sample size"

1

Skubalska-Rafajłowicz, Ewa. "Small Sample Size in High Dimensional Space - Minimum Distance Based Classification." In Artificial Intelligence and Soft Computing. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07173-2_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mischko, Jens, Stefan Einbock, and Rainer Wagener. "How to Predict the Product Reliability Confidently and Fast with a Minimum Number of Samples in the Wöhler Test." In Lecture Notes in Mechanical Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_11.

Full text
Abstract:
To accurately estimate and predict the (product) lifetime, a large sample size is mandatory, especially for new and unknown materials. Realizing such a sample size is rarely feasible for reasons of cost and capacity. Prior knowledge must therefore be used systematically and consistently to predict the lifetime accurately. Using the example of the Wöhler test, it is shown that lifetime prediction with a minimum number of specimens and minimal test time can be successful when prior knowledge is taken into account.
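A minimal illustration of the idea of folding prior knowledge into a Wöhler (S-N) evaluation: if the Basquin slope is assumed known from earlier tests, only the intercept needs to be estimated from a handful of new specimens. The data, the slope value, and the fixed-slope approach itself are illustrative assumptions, not the chapter's specific procedure.

```python
import numpy as np

# Fatigue test data: stress amplitude (MPa) and cycles to failure (hypothetical).
stress = np.array([400.0, 350.0, 300.0])
cycles = np.array([8.0e4, 2.5e5, 9.0e5])

# Prior knowledge: a known Basquin slope k (log10 N = C - k * log10 S), e.g. taken
# from earlier tests on a similar material. With k fixed, only the intercept C
# has to be estimated, so very few new specimens are needed.
k_prior = 5.0                                   # assumed slope, illustrative only
C_hat = np.mean(np.log10(cycles) + k_prior * np.log10(stress))

def predicted_life(s):
    return 10 ** (C_hat - k_prior * np.log10(s))

print(f"estimated intercept C = {C_hat:.2f}")
print(f"predicted life at 250 MPa = {predicted_life(250):.2e} cycles")
```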
APA, Harvard, Vancouver, ISO, and other styles
3

Polkanov, Art. "Determination of the minimum bat sample group size to provide reliable parasite indices." In The Biology and Conservation of Australasian Bats. Royal Zoological Society of New South Wales, 2011. http://dx.doi.org/10.7882/fs.2011.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Oppe, Mark, Richard Norman, Zhihao Yang, and Ben van Hout. "Experimental Design for the Valuation of the EQ-5D-5L." In Value Sets for EQ-5D-5L. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89289-0_3.

Full text
Abstract:
The EQ-VT protocol for valuing the EQ-5D-5L offered the opportunity to develop a standardised experimental design to elicit EQ-5D-5L values. This chapter sets out the various aspects of the EQ-VT design and the basis on which methodological choices were made in regard to the stated preference methods used, i.e., composite time trade-off (cTTO) and discrete choice experiments (DCE). These choices include the sub-set of EQ-5D-5L health states to value using these methods; the number of cTTO and DCE valuation tasks per respondent; the minimum sample size needed; and the randomisation schema. This chapter also summarises the research studies developing and testing alternative experimental designs aimed at generating a "Lite" version of the EQ-VT design. This "Lite" version aimed to reduce the number of health states in the design, and thus the sample size, to increase the feasibility of undertaking valuation studies in countries with limited resources or recruitment possibilities. Finally, this chapter outlines remaining methodological issues to be addressed in future research, focusing on refinement of current design strategies and identification of new designs for novel valuation approaches.
APA, Harvard, Vancouver, ISO, and other styles
5

Wilczyński, Maciej. "Minimax Estimation Under Random Sample Size." In Operations Research Proceedings 1999. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-58300-1_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fred, Ana L. N., and José M. N. Leitão. "Minimal sample size in grammatical inference: a bootstrapping approach." In Advances in Pattern Recognition. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0033320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mbwambo, Naza A., and Emma T. Liwenga. "Cassava as an adaptation crop to climate variability and change in coastal areas of Tanzania: a case of the Mkuranga district." In Climate change impacts and sustainability: ecosystems of Tanzania. CABI, 2020. http://dx.doi.org/10.1079/9781789242966.0023.

Full text
Abstract:
This study was carried out in two villages, Kizapala and Kazole, in the Mkuranga District of the Coast Region of Tanzania. The objective of the study was to establish the role of cassava as an adaptation crop to the changing climate and household food security. Primary data were obtained using household questionnaires and different participatory rural appraisal (PRA) techniques, which included focus group discussions (FGDs), key informants and expert meetings. Secondary data were collected through a literature review, whereas temperature and rainfall data from 1984 to 2014 were obtained from the Tanzania Meteorological Agency (TMA). In each village, a sample of 10% of all households was interviewed. Findings showed that 96% of respondents from Kazole village and 90% from Kizapala linked climate change with major climatic extreme events such as prolonged droughts and occasional abnormal floods. Analysis of temperature data for the last 30 years (1984-2014) revealed that temperature had risen significantly, with R² = 0.4936 for maximum and R² = 0.777 for minimum temperature. The field survey results closely correlated with findings from the analysis of TMA rainfall and temperature data. Findings revealed a decline in crop production which resulted in food shortages and livelihood insecurity in the study villages. The respondents in both villages consider cassava a crop that is least affected by climate and environmental extremes, and it thus serves to ensure food availability and security in their households. As a result, growing cassava should be considered as an adaptation strategy to climate change and variability now and in the future. Improving cassava production, processing, marketing and value-chain infrastructure is, therefore, crucial for enhancing sustainable adaptation in the district.
APA, Harvard, Vancouver, ISO, and other styles
8

Woodroofe, Michael. "An asymptotic minimax determination of the initial sample size in a two-stage sequential procedure." In Institute of Mathematical Statistics Lecture Notes - Monograph Series. Institute of Mathematical Statistics, 2004. http://dx.doi.org/10.1214/lnms/1196285393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hofer, Marvin, Sebastian Hellmann, Milan Dojchinovski, and Johannes Frey. "The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows." In Semantic Systems. In the Era of Knowledge Graphs. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_1.

Full text
Abstract:
Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 month cycles), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered; its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality control and debugging, and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e., monthly) releases with over 21 billion triples with minimal publishing effort.
APA, Harvard, Vancouver, ISO, and other styles
10

Holden, Stein T. "The gender dimensions of land tenure reforms in Ethiopia 1995-2020." In Land governance and gender: the tenure-gender nexus in land management and land policy. CABI, 2021. http://dx.doi.org/10.1079/9781789247664.0012.

Full text
Abstract:
Continued rapid population growth in rural areas is a major challenge to future land access for all in Ethiopia. Landlessness is growing and farm sizes are shrinking. This tends to erode the constitutional right of all rural residents without another livelihood option to access land for subsistence. Since the recent land laws also stipulate minimum farm sizes, they restrict the inheritance rights of children living on small farms as well as the opportunity to share land equally among spouses upon divorce. Co-management of land among divorced parents and children on small farms is also challenging, and the result may be disguised fragmentation. Given the growing landlessness, the inheritance rules and the need for alternative livelihoods for youth, we may wonder whether women are at a disadvantage in non-farm employment. Recent studies of a large sample of resource-poor rural youth who were eligible to join youth business groups and were allocated rehabilitated communal lands find that female members on average have fewer assets, lower incomes and less education than male members. They are also much less likely to own a mobile phone or to become group leaders or group board members. This shows that young women in Ethiopia continue to be disadvantaged and are among the most resource-poor and vulnerable. There is a need for more targeted policies to give them equal opportunities in the ongoing rural as well as rural-urban transformation processes.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Minimum sample size"

1

Camerota, Chiara, Lorenzo Pappone, Tommaso Pecorella, and Flavio Esposito. "Addressing Data Security in IoT: Minimum Sample Size and Denoising Diffusion Models for Improved Malware Detection." In 2024 20th International Conference on Network and Service Management (CNSM). IEEE, 2024. https://doi.org/10.23919/cnsm62983.2024.10814607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kubushiro, Keiji, Kyohei Nomura, Satoshi Takahashi, Madoka Takahashi, and Hirokatsu Nakagawa. "Effect of Pre-Strain on Creep Properties of Alloy 740." In AM-EPRI 2010, edited by D. Gandy, J. Shingledecker, and R. Viswanathan. ASM International, 2010. http://dx.doi.org/10.31399/asm.cp.am-epri-2010p0164.

Full text
Abstract:
The effects of pre-strain on the creep properties of Alloy 740 have been investigated. A tensile strain of 7.5% was introduced by a room-temperature tensile test. Creep tests were conducted at 750 °C and 275-350 MPa. The creep rupture life of the pre-strained sample decreased by half compared with the as-heat-treated sample. The creep behavior of the two samples was similar in the primary creep stage, but the onset of creep rate acceleration was earlier in the pre-strained sample than in the as-heat-treated sample. As a result, the minimum creep rate of the pre-strained sample was two times higher than that of the as-heat-treated sample. Observation of the ruptured specimens showed that the pre-strained sample had many more sub-cracks than the as-heat-treated sample. The microstructure of the two samples also differed. There were MC precipitates on the grain boundaries in both ruptured specimens, but both the size and the number of MC precipitates were larger in the pre-strained sample, even though its creep life was shorter than that of the as-heat-treated sample. In this paper, the difference in creep behavior is discussed in terms of both the microstructural change and the mechanical damage.
APA, Harvard, Vancouver, ISO, and other styles
3

Sanni, Olujide, and Frederick Pessu. "Impact of Multiphase Flow on the Inhibition of Carbonate Scale Deposition." In CONFERENCE 2023. AMPP, 2023. https://doi.org/10.5006/c2023-19425.

Full text
Abstract:
Several studies aimed at understanding and combating scale formation in the oil and gas industry have been carried out mainly in single-phase brine solutions. However, for chemical inhibitors to be deployed effectively to mitigate scaling, it is essential to develop an experimental matrix and laboratory tests to assess their effectiveness in the presence of multiphase fluids. The objective of this study is to evaluate the impact of various organic phases in produced fluids on the inhibition efficiency of polyphosphinocarboxylic acid (PPCA) on calcium carbonate scale precipitation. Single and multiphase tests were conducted for CaCO3 precipitation at 30 °C. The mixture of CaCO3 brine (SR 211) and oil fractions was continuously stirred with an overhead impeller blade at 520 rpm to create a homogeneous dispersion. The oil fractions comprised 50 ml cyclohexane, 30 ml kerosene, 20 ml toluene and 0.01% asphaltene fully dispersed in toluene. The substrate (RCE sample) on which surface deposition was assessed is a cylindrical piece of stainless steel (SS 316L) mounted on a shaft rotated at 400 rpm by the overhead stirrer. The impact of the multiphase fluids on chemical scale inhibition was investigated with PPCA inhibitors at the minimum inhibitor concentration (MIC) determined from the single phase. Bulk samples were taken at different time intervals and analysed by SEM, AAS and XRD to evaluate the precipitation process, size, morphology and kinetics of transformation. The results show an increase in deposition for the oil-water systems containing the various organic phases. Under the same test conditions and duration, the MIC required to prevent scale formation in the bulk varied between the oil-free system and the different oil-water systems. These results help to improve the current understanding of calcium carbonate polymorph crystallization in the presence of inhibitors.
APA, Harvard, Vancouver, ISO, and other styles
4

Surracco, Marco, and Maria Caterina Tilocca. "Flotation and Magnetic Separation Processes for a Mineral of Wollastonite." In 24th SGEM International Multidisciplinary Scientific GeoConference 24. STEF92 Technology, 2024. https://doi.org/10.5593/sgem2024/1.1/s04.56.

Full text
Abstract:
This paper presents the results of laboratory processing tests conducted to explore the potential for beneficiation of a wollastonite mineral (calcium metasilicate, CaSiO3). The ore studied comes from a mineralized body located in the southern part of Sardinia (Italy), which has been studied in the past through geological surveys and sampling. In the investigated area the mineralization was very irregular. The encasing rocks consist of schist, and the mineralized body is composed of wollastonite associated with limestone, garnets and silica. The sample used for the treatment tests was taken from this area and underwent various tests, including grinding, froth flotation, and magnetic separation. Finally, we conducted an overall treatment test using the beneficiation techniques that yielded the best results in the previous experiments. The treatment scheme followed in the overall test used simple techniques such as grinding, calcite flotation and magnetic separation. As a result, the concentrate contained 80% wollastonite and met the specifications for pollutant elements. A final grinding stage was necessary to meet the particle size specifications required by the market. The flotation tests were conducted in a Denver laboratory cell, while a Jones wet high-intensity magnetic laboratory-scale separator was used for the magnetic separation tests. The effectiveness of the various testing steps was verified by comparing the results obtained with the specifications required by the wollastonite market, which impose limits on contaminants such as iron, titanium and magnesium, while being less severe with regard to SiO2 and CaO contents.
APA, Harvard, Vancouver, ISO, and other styles
5

Rao, Shan (Sherry), Da Kuang, and Connor McManus. "Study of Minimal Surface Preparation for Various Patch Repair Coating Products Applied under the Minimum Allowable Temperatures." In CONFERENCE 2024. AMPP, 2024. https://doi.org/10.5006/c2024-21005.

Full text
Abstract:
Patch repair coating products provide cost-effective solutions for restoring small to medium-sized damage in mainline coatings to extend the lifespan of pipelines. Some coating products are specially designed to tolerate reduced surface conditions while providing adequate corrosion protection performance. Because it is easy to mobilize and relatively inexpensive, power tool cleaning is commonly used in the field to prepare surfaces for small-area repairs. During field coating work, adverse weather conditions are often encountered. However, little data are publicly available to confirm whether the minimal surface preparation recommended by the product data sheet (PDS) is also suitable when the coating is applied at the minimum allowable temperature. In addition, the effect of a similar surface preparation level produced by different power tools on the corrosion resistance of the same coating is rarely reported. This paper aims to address these gaps. Four commercially available coating products were selected to examine their potential as patch repairs: one liquid-applied epoxy coating, one butyl-based sealant product, and two viscoelastic coating products. Test samples were prepared using a rust grade C steel pipe section. Different power tools such as a wire brush, rotary sander, flap disc, and bristle blaster were used to create SSPC-SP3, SSPC-SP11, and SSPC-SP15 surfaces. All four products were applied and cured at the minimum allowable temperatures specified in their respective PDS or recommended by the manufacturers. A 14-day cathodic disbondment (CD) test was employed to rule out improper surfaces or power tools, and the next higher surface level was then considered potentially suitable. The long-term performance characteristics of these coating products on the established minimal surfaces, applied at the minimum allowable temperatures, were evaluated using selected critical tests from the CSA Z245.30 standard.
APA, Harvard, Vancouver, ISO, and other styles
6

Harless, Nikki, John Shingledecker, Kyle Stoodt, Kevin Cwiok, and Anand Kulkarni. "Impact of Three Additive Manufacturing Techniques on Microstructure and Creep Damage Development in Alloy 718." In AM-EPRI 2024. ASM International, 2024. http://dx.doi.org/10.31399/asm.cp.am-epri-2024p0338.

Full text
Abstract:
Inconel 718 is a nickel-based superalloy known for its excellent combination of high-temperature strength, corrosion resistance, and weldability. Additive Manufacturing (AM) has revolutionized traditional manufacturing processes by enabling the creation of complex and customized components. In this work, three prominent AM techniques were explored: Laser-Based Powder Bed Fusion (PBF), Wire Direct Energy Deposition (DED), and Binder Jet (BJ) processes. A thorough metallographic analysis and comparison of samples originating from each of the three techniques, in addition to wrought material, was conducted after short-term creep testing. Detailed electron microscopy revealed equiaxed grains in both BJ and wrought samples, while PBF samples displayed elongated, finer grain structures in the build direction, characteristic of PBF. The DED samples revealed a more bimodal grain distribution, with a combination of smaller equiaxed grains accompanied by larger, more elongated grains. When comparing the three processes, the average grain size was larger in the BJ samples, while the PBF samples exhibited the most significant variation in grain and sub-grain size. The number density, size, and shape of porosity varied among all three techniques. Post-creep-test observations in PBF samples revealed wedge cracking at the failure point, accompanied by a preference for grain boundary creep void formation, while BJ samples exhibited grain boundary creep void coalescence and cracking at the failure location. In the DED samples void formation was minimal; however, it was more prevalent in areas with precipitates. In contrast, the wrought sample showed void formation at the failure site with a preference for areas with primary carbide formation. Despite BJ samples demonstrating similar or even superior rupture life compared to the other AM techniques, a noteworthy reduction in rupture ductility was observed. While a coarse, uniform grain size is generally linked to enhanced creep resistance and rupture life, the combination of pre-existing voids along grain boundaries and the formation of new voids is hypothesized to accelerate rapid fracture, resulting in diminished ductility. This research shows that careful consideration is needed when selecting an AM technology for high-temperature applications, as creep behavior is sensitive to the large microstructural variations AM can introduce.
APA, Harvard, Vancouver, ISO, and other styles
7

Fajt, J. "Sometime More Is Just More: THPS Biocide Laboratory Kill Study on Wildtype Sulfate Reducing Bacteria." In CONFERENCE 2023. AMPP, 2023. https://doi.org/10.5006/c2023-18858.

Full text
Abstract:
Biocide dose response studies are commonly conducted on water solutions containing bacteria to determine the effect of chemical treatments before application. Biocide product labels provide broad guidelines for dosing. However, site water chemistry and bacterial biology make the minimum effective dose differ for each location and difficult to determine. A large-volume culture of sulfate reducing bacteria (SRB) was prepared and allowed to grow until 4 log bacteria were present. The sample was then split into four identical 500 ml samples, which were dosed at 0, 5, 50 and 400 ppm of a tetrakis(hydroxymethyl)phosphonium sulfate (THPS) based biocide. The effect on bacteria levels was tested using an enzyme-based bacteria metabolism test after 0.2, 1, 8, 24 and 96 hr. This study showed that a single application of 50 ppm of biocide could be as effective as 400 ppm on high numbers of planktonic SRB.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Zhaosheng, Xinyu Xu, and Li Gao. "Minimum Sample Size Determination of Floating Cars in an Urban Hybrid Network." In 15th COTA International Conference of Transportation Professionals. American Society of Civil Engineers, 2015. http://dx.doi.org/10.1061/9780784479292.040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Fang-Xiang, W. J. Zhang, and Anthony J. Kusalik. "On Determination of Minimum Sample Size for Discovery of Temporal Gene Expression Patterns." In 2006 International Multi-Symposiums on Computer and Computational Sciences (IMSCCS). IEEE, 2006. http://dx.doi.org/10.1109/imsccs.2006.95.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Zezheng, Jinliang Cai, Zhen Wang, and Feng Qin. "Minimum Sample Size Estimation Method of Electromagnetic Effect Test Based on Confidence Interval." In 2022 IEEE 5th International Conference on Electronics Technology (ICET). IEEE, 2022. http://dx.doi.org/10.1109/icet55676.2022.9824327.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Minimum sample size"

1

Djimeu, Eric W., and Deo-Gracias Houndolo. Power calculation for causal inference in social science: sample size and minimum detectable effect determination. International Initiative for Impact Evaluation (3ie), 2015. http://dx.doi.org/10.23846/wp0026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
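The working paper above covers power calculations for sample size and minimum detectable effect (MDE) determination. As a generic illustration only, not necessarily the procedure used in the 3ie paper, the sketch below implements the textbook per-arm sample size formula for detecting a difference in means between two equally sized groups.

```python
import math
from statistics import NormalDist

def n_per_arm(mde, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for detecting a mean difference of size `mde`
    between two equal-sized groups under individual randomization.
    Generic textbook formula; clustering and attrition are ignored."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)            # quantile for the desired power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / mde ** 2
    return math.ceil(n)

# Example: minimum detectable effect of 0.2 standard deviations, 80% power, 5% level.
print(n_per_arm(mde=0.2, sd=1.0))  # -> 393 per arm
```

Smaller MDEs drive the required sample size up quadratically, which is why fixing the MDE in advance is central to this kind of power guidance.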
2

Parker, Jennifer, Katherine Irimata, Guangyu Zhang, et al. National Center for Health Statistics Data Presentation Standards for Rates and Counts. National Center for Health Statistics (U.S.), 2023. http://dx.doi.org/10.15620/cdc:124368.

Full text
Abstract:
This report describes the multistep NCHS data presentation standards for rates and counts, which are based on a minimum sample size and the relative width of a confidence interval and differ between vital statistics and health surveys.
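The full NCHS standard involves several criteria; as a minimal sketch of the two ingredients named in this abstract, the code below flags an estimate for review when the denominator sample size falls below a minimum or when the relative width of its confidence interval is large. The function, thresholds, and numbers are illustrative assumptions, not the published NCHS rules.

```python
def presentable(n_sample, ci_lower, ci_upper, estimate,
                min_n=30, max_relative_width=1.30):
    """Illustrative screen in the spirit of minimum-sample-size /
    relative-CI-width presentation standards. Thresholds are assumptions,
    not the actual NCHS criteria."""
    if n_sample < min_n:
        return False, "sample size below minimum"
    relative_width = (ci_upper - ci_lower) / estimate
    if relative_width > max_relative_width:
        return False, f"relative CI width {relative_width:.2f} exceeds limit"
    return True, "meets illustrative presentation criteria"

# Hypothetical rate estimate with its confidence interval and sample size.
print(presentable(n_sample=45, ci_lower=8.0, ci_upper=14.0, estimate=10.5))
```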
APA, Harvard, Vancouver, ISO, and other styles
3

Smith, Winston Paul, Daniel J. Twedt, David A. Wiedenfeld, Paul B. Hamel, Robert P. Ford, and Robert J. Cooper. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits. U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station, 1993. http://dx.doi.org/10.2737/so-rp-274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Espino, Emilio, and Martín González Rozada. Automatic Stabilization and Fiscal Policy: Some Quantitative Implications for Latin America and the Caribbean. Inter-American Development Bank, 2012. http://dx.doi.org/10.18235/0011425.

Full text
Abstract:
This paper provides an estimation of the size of income and demand automatic stabilizers in a representative sample of Latin American and Caribbean (LAC) countries. The authors find that when a negative unemployment shock hits the economy, the size of the income and demand automatic stabilizer coefficients is much smaller than the size of these coefficients in Europe and the United States. This evidence suggests that there is room for policies that can enlarge the absorption provided by these stabilizers as a way to contribute to macroeconomic stability in LAC countries. The paper analyzes four policies affecting the income stabilization coefficient and two others directly affecting the demand stabilization coefficient. The main results suggest that changing the minimum tax exemption and its progressiveness using the tax structure of middle-income countries outside the LAC region is the best option to enlarge the size of the income and demand stabilization coefficients, and in this way to reduce the need for discretionary fiscal policies in the region.
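For reference, a common definition in this literature (stated here as background, not as a result from the paper) is that the income stabilization coefficient measures the share of a market-income shock that is absorbed by taxes and transfers rather than passed through to disposable income. A minimal sketch with invented numbers:

```python
# Hypothetical aggregate changes (in currency units) after a simulated
# unemployment shock; invented figures for illustration only.
delta_market_income = -100.0      # fall in pre-tax/transfer income
delta_disposable_income = -72.0   # fall after taxes and transfers respond

# Income stabilization coefficient: share of the market-income shock
# absorbed by the tax-transfer system (common definition in this literature).
tau_income = 1 - delta_disposable_income / delta_market_income
print(f"Income stabilization coefficient: {tau_income:.2f}")  # -> 0.28
```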
APA, Harvard, Vancouver, ISO, and other styles
5

Tiku, Sanjay, Arnav Rana, Binoy John, and Aaron Dinovitzer. PR-214-203805-R01 Performance Evaluation of ILI Systems for Dents and Coincident Features. Pipeline Research Council International, Inc. (PRCI), 2024. http://dx.doi.org/10.55274/r0000056.

Full text
Abstract:
Pipeline integrity management involves the analysis of pipeline condition information (e.g., pipe size, presence and size of features), operational/environmental conditions and line pipe material properties in engineering assessment (fitness-for-purpose) tools to evaluate operational risk. While nominal or minimum specified material properties and SCADA-reported, design or estimated operational loading conditions can be considered, pipeline operators depend heavily on pipeline condition data from in-line inspection (ILI) systems. The current project presents the details of performance trials evaluating the ability of ILI systems to provide pipeline condition information for dents with coincident or closely aligned features. A set of sample dent features was prepared along with a trial protocol and performance metrics beyond those presented in API 1163 that were used to characterize performance. ILI system pull and pump-through trials of magnetic, ultrasonic and caliper-based ILI technologies from four ILI Service Providers were performed. Data from these trials were used to quantify detection, identification, and sizing performance of the ILI systems for isolated corrosion features and dents with a variety of shapes, including those without coincident features and those with corrosion, gouges and/or cracks. The effect of dents on the ILI system detection, identification and sizing of the coincident features was evaluated.
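The report's own performance metrics go beyond API 1163 and are not reproduced here. As a generic illustration of how detection and sizing performance are often quantified from matched ILI/field pairs, the sketch below computes a probability of detection and the share of depth calls within a fixed tolerance. The data, tolerance, and variable names are hypothetical.

```python
# Matched (ILI-reported depth, field-measured depth) pairs in % of pipe wall or OD;
# None means the feature was missed by the tool. Hypothetical data.
calls = [(2.1, 2.0), (1.4, 1.8), (None, 1.2), (3.0, 2.4), (0.9, 1.0)]

detected = [(ili, field) for ili, field in calls if ili is not None]
pod = len(detected) / len(calls)          # probability of detection

tolerance = 0.5                           # +/- 0.5 depth units, assumed for illustration
within_tol = sum(abs(ili - field) <= tolerance for ili, field in detected)
sizing_rate = within_tol / len(detected)  # share of detected calls sized within tolerance

print(f"POD: {pod:.0%}, depth calls within tolerance: {sizing_rate:.0%}")
```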
APA, Harvard, Vancouver, ISO, and other styles
6

Tiku, Sanjay. PR-214-203820-R01 Performance Evaluation of ILI for Dents with Cracks and Gouges. Pipeline Research Council International, Inc. (PRCI), 2023. http://dx.doi.org/10.55274/r0000031.

Full text
Abstract:
Pipeline integrity management involves the analysis of pipeline condition information (e.g., pipe size, presence and size of features), operational/environmental conditions and line pipe material properties in engineering assessment (fitness-for-purpose) tools to evaluate operational risk. While nominal or minimum specified material properties and SCADA-reported, design or estimated operational loading conditions can be considered, pipeline operators depend heavily on pipeline condition data from in-line inspection (ILI) systems. The current project presents the details of performance trials evaluating the ability of ILI systems to provide pipeline condition information for dents with coincident or closely aligned features. A set of sample dent features was prepared along with a trial protocol and performance metrics beyond those presented in API 1163 that were used to characterize performance. ILI system pull and pump-through trials of magnetic, ultrasonic and caliper-based ILI technologies from seven ILI Service Providers were performed. Data from these trials were used to quantify detection, identification, and sizing performance of the ILI systems for isolated corrosion features and dents with a variety of shapes, including those without coincident features and those with corrosion, gouges and/or cracks. The effect of dents on the ILI system detection, identification and sizing of the coincident features was evaluated.
APA, Harvard, Vancouver, ISO, and other styles
7

Dailami, N., M. Bhaskara Rao, and K. Subramanyam. On the Selection of the Best Gamma Population. Determination of Minimax Sample Sizes. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada166138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Naddafi, Rahmat, Göran Sundblad, Alfred Sandström, et al. Developing management goals and associated assessment methods for Sweden's nationally managed fish stocks: a project synthesis. Department of Aquatic Resources, Swedish University of Agricultural Sciences, 2023. http://dx.doi.org/10.54612/a.31cfjep2i0.

Full text
Abstract:
This report summarizes and synthesizes results from the Swedish Agency for Marine and Water Management (SwAM, or HaV) funded project "Förvaltningsmål för nationella arter (Management goals for nationally managed species)". The objectives of the project have been to promote the development of management goals and associated status assessment methods and indicators, as well as reference points, for some nationally managed fish stocks in both coastal and freshwater areas. The report focusses largely on species and stocks that can be defined as data-poor. Such stocks are characterised by marked limitations in data availability and/or resources allocated to detailed analytical stock projections. Data-poor stocks also often lack carefully formulated management goals and associated methods and indicators for assessing stock status. In this report, we provide an overview of potential assessment methods and indicators and try to synthesise how they work and what their strengths and weaknesses are by applying them to selected data-poor stocks such as pikeperch, pike, whitefish, and vendace. We also discuss how they relate to different potential management goals and provide recommendations for their application. We grouped the indicators and assessment methods by the three categories that are now used in the yearly status assessment framework provided by SLU Aqua (Resursöversikten/Fiskbarometern): i) mortality, ii) abundance/biomass, and iii) size/age structure. The results are also described for these three main categories of assessment indicators. Also included is a status report from a size- and age-based population dynamics model (Stock Synthesis 3) that is being developed for pikeperch in Lake Hjälmaren. An important experience from the project is that, to improve the assessment methods for Swedish national fish stocks, it is important that managers develop both general and more detailed quantitative goals for the individual stocks. This should ideally be conducted in various forms of collaboration between the main stakeholders and the scientists involved with assessment, as participatory processes foster legitimacy. Carefully articulated management goals, which are possible to translate into quantitative targets, will facilitate the development of various approaches and methods to monitor stock status. Given the strong and complex interactions of fish and their environments, it is also important to consider pressures other than fisheries when developing indicators and assessment methods.
Our synthesis highlights a number of areas where the assessment of data-poor stocks can be improved:
1. Apply precautionary principles for data-limited stocks, particularly ones that are known to be vulnerable to exploitation.
2. Tailor approaches to how fisheries are managed in Sweden. Swedish nationally managed fish stocks are not managed by quotas (with one exception, vendace in the Bothnian Bay) and do not aim for maximum sustainable yield. Instead, the coastal and inland fisheries are managed by regulating the effort in the small-scale commercial fisheries (number of fishers/licenses and amount of gear). Regulation of recreational and subsistence fisheries effort (in terms of licenses or number of fishers) is not applied, nor possible, since these fisheries lack obligatory notification and reporting systems. All national fisheries, however, are regulated by various technical measures (closed areas, size limits, bag limits, gear restrictions, etc.). Thus, goals and assessment methods that result in harvest limits or quota recommendations expressed in, e.g., biomass or numbers are difficult to use as a basis for management. Instead, there is a need for alternative management goals and associated assessment methods.
3. Use best-practice methods and indicators and adapt as scientific knowledge develops. Data-limited methods are developing rapidly, and new methods/approaches are proposed in the scientific literature every year. It is thus important to stay updated on the most recent developments.
4. Clearly describe the limitations/assumptions of the methods used. It is important to be aware of and critically evaluate the assumptions underlying the analyses, and to carefully communicate uncertainty together with the stock status assessment.
5. Be particularly careful with low sample numbers. Many indicators and methods can also be applied to small sample sizes; however, the accuracy and precision of the estimates risk being low in such cases.
6. Accept that there is no "gold standard" for fisheries assessment. Each case study is unique and needs to be balanced against data availability, local needs and other important factors. This also means that analysts need to be careful when using generic reference levels or "borrowing" data from other stocks.
7. If possible, use several different methods/indicators. Although several indicators may aim to measure similar aspects of the stock, small methodological differences can support the overall interpretation of individual indicator values. It is particularly important to incorporate many aspects and indicators (size/age/abundance/mortality) in order to produce a balanced assessment.
8. Develop means of communication. Indicators and goals should be easy to understand. However, interpretation of results from multi-indicator frameworks can be challenging. There is thus a need to find ways of communication that can convey complicated results in a simple-to-understand manner.
9. For details on additional improvements, we refer the reader to the sub-header "recommendations for the future" found under each chapter.
The implementation of Stock Synthesis for pikeperch in Lake Hjälmaren showed that it is possible to develop a more ambitious and detailed stock assessment model for a relatively data-poor stock. The model results partly support earlier interpretations of the development of the stock and the importance of the changes in regulations in 2001 (increased minimum size, increased mesh size and reduced mortality of undersized pikeperch). Before the model can be implemented and used for practical management, a number of actions for improvement are needed, which are highlighted in the relevant chapter. The most important next step is establishing management goals and reference levels for this stock. We recommend that such a dialogue be initiated by managers. The fisheries management goals should consider biomass, fisheries mortality and size-based targets. To conclude, we stress the importance of improving all ongoing aspects related to the assessment of data-poor Swedish stocks. Strong local stocks and sustainable fisheries are vital for a variety of fisheries-related businesses and practices, particularly in rural areas, providing economic and societal value. Fishes also have important roles in aquatic food webs, and it is important that ecological values are managed wisely in order to reach targets for water quality, ecosystem structure and diversity.
APA, Harvard, Vancouver, ISO, and other styles
9

Mintz, K. J. The explosibility of three Canadian coal dusts. Natural Resources Canada/CMSS/Information Management, 1989. http://dx.doi.org/10.4095/331786.

Full text
Abstract:
Explosibility measurements on coal dusts from the Cape Breton Development Corporation's Lingan Mine, TransAlta's Highvale Mine and the Quintette Mine in B.C. have been carried out, along with some tests on Pittsburgh Standard coal dust. The Quintette coal dust would not explode in the classical Hartmann apparatus, but did explode in the new 20-L vessel using a more powerful ignition source. The minimum explosible concentrations of the Lingan, Highvale and Pittsburgh coal dusts were all about the same (40–45 mg/L), while that of the Quintette was higher (140 mg/L). The difference may be attributed to the much greater mean particle size of the Quintette dust. The explosion pressures (in kPa) were: Highvale, 600; Pittsburgh, 520; Lingan, 510; and Quintette, 440. The minimum oxygen concentrations required for explosions were (in % oxygen): Highvale 10.4, Lingan 10.5, and Quintette 14. The minimum ignition temperatures of dust clouds were (in °C): Highvale 510, Lingan 600, Quintette 620 and Pittsburgh 620. Further work is required to reconcile limit values.
APA, Harvard, Vancouver, ISO, and other styles
10

Rauh, Nicholas. 2023 Addendum to the Rough Cilicia Kiln Site Ceramics (Syedra, Delice, Biçkici, and Antiochia ad Cragum): An Update to the Kiln Sites. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317638.

Full text
Abstract:
This addendum summarizes the ceramic remains recovered by the Rough Cilicia Archaeological Survey Project at four posited amphora kiln sites in the survey area: the Syedra Kiln Site, the Biçkici Kiln Site, the Antiochia ad Cragum Kiln Site, and the Delice Kiln Site. All four sites were identified early on during the survey (1995-1997). The survey team conducted grab collections and triaged dozens of sherds recovered by 1997, before returning the bulk of these fragments to the field. A representative sample of the amphora fragments, together with context ceramics for each site, was conserved at the Alanya Archaeological Museum. In 2003 the survey team conducted a magnetometric survey of the Biçkici and Syedra kiln sites with minimal results. Due to the longevity of the survey (1996-2011), members of the survey team, particularly Rauh, Dillon, Autret, and Kızılarslanoğlu, conducted repeated visits to the sites and recovered additional diagnostic fragments as they surfaced. These visits occurred almost annually, with intensive inspections occurring in 2008 and 2011. In September 2021, Rauh inspected the remaining collections from the kiln sites stored in the Alanya Archaeological Museum.
APA, Harvard, Vancouver, ISO, and other styles
