Academic literature on the topic 'Optimum sample size'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Optimum sample size.'


Journal articles on the topic "Optimum sample size"

1

Cho, Dooyong, Fazil T. Najafi, and Peter A. Kopac. "Determining Optimum Acceptance Sample Size." Transportation Research Record: Journal of the Transportation Research Board 2228, no. 1 (2011): 61–69. http://dx.doi.org/10.3141/2228-08.

2

Pelkowitz, L., and S. C. Schwartz. "Asymptotically Optimum Sample Size for Quickest Detection." IEEE Transactions on Aerospace and Electronic Systems AES-23, no. 2 (1987): 263–72. http://dx.doi.org/10.1109/taes.1987.313381.

3

Song, Xiaolan, Pradipta Sarkar, and William Veronesi. "Virtual Inspection: Optimum Sample Size for POD Experiment." Quality Engineering 14, no. 4 (2002): 623–44. http://dx.doi.org/10.1081/qen-120003563.

4

Gharaibeh, Nasir G., Sabrina I. Garber, and Litao Liu. "Determining Optimum Sample Size for Percent-Within-Limits Specifications." Transportation Research Record: Journal of the Transportation Research Board 2151, no. 1 (2010): 77–83. http://dx.doi.org/10.3141/2151-10.

5

Otis, David L. "Optimum Sample Size Allocation for Wood Duck Banding Studies." Journal of Wildlife Management 58, no. 1 (1994): 114. http://dx.doi.org/10.2307/3809557.

6

Ukrainskiy, Pavel. "Empirical method for estimation of the optimum size of random point samples for assessment areas of land cover from space images." InterCarto. InterGIS 27, no. 2 (2021): 368–78. http://dx.doi.org/10.35595/2414-9179-2021-2-27-368-378.

Abstract:
A promising fast method for estimating land cover areas from satellite imagery is the use of random point sampling. This method allows area values to be obtained without spatially continuous mapping of land cover. The accuracy of the area estimate depends on the sample size. The presented work describes a method for empirically finding the optimal sample size. To use this method, a key site with a reference land cover map must be selected. For the key site, samples of different sizes are generated repeatedly and used to estimate the land cover areas. Comparison of the obtained areas with the reference areas gives the measurement error. Analysis of the mean and the range of errors for different sample sizes identifies the point at which the error ceases to decrease significantly as the sample size grows; this sample size is optimal. We tested the proposed method on the example of the Kalach Upland. The size range from 100 to 3000 sampling points per key site was analyzed (sample size increasing in steps of 100 points), and for each size 1000 samples were created. We then analyzed the effect of sample size on the overall relative error in area estimates. The analysis showed that for the investigated key site the optimal sample size is 1000 points (1.1 points/km²). With this sample size, the overall relative error in determining areas was 4.0% on average, and the maximum error was 9.9%. Similar accuracy should be expected at the same sample size for other uplands in the forest-steppe and steppe zones of the East European Plain.
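The procedure described in this abstract — repeatedly drawing random point samples of increasing size and noting where the area-estimation error stops improving — is straightforward to prototype. The sketch below assumes a binary reference land-cover raster held as a NumPy array; the step size, number of repetitions and stopping tolerance are illustrative choices, not values taken from the paper.

```python
import numpy as np

def optimal_point_sample_size(reference, max_n=3000, step=100, reps=1000,
                              tol=0.005, seed=None):
    """Find the sample size beyond which the mean relative error in the
    estimated class area stops decreasing by more than `tol`."""
    rng = np.random.default_rng(seed)
    true_share = reference.mean()              # true area share of the class
    flat = reference.ravel()
    sizes = np.arange(step, max_n + 1, step)
    mean_errors = []
    for n in sizes:
        idx = rng.integers(0, flat.size, size=(reps, n))   # random point draws
        estimates = flat[idx].mean(axis=1)                  # estimated shares
        rel_err = np.abs(estimates - true_share) / true_share
        mean_errors.append(rel_err.mean())
    # first size whose improvement over the previous step falls below tol
    for i in range(1, len(sizes)):
        if mean_errors[i - 1] - mean_errors[i] < tol:
            return int(sizes[i]), mean_errors[i]
    return int(sizes[-1]), mean_errors[-1]

# toy reference map: 30% of pixels belong to the class of interest
ref = (np.random.default_rng(0).random((1000, 1000)) < 0.3).astype(np.uint8)
print(optimal_point_sample_size(ref, seed=1))
```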
7

Nanni, Marcos Rafael, Fabrício Pinheiro Povh, José Alexandre Melo Demattê, Roney Berti de Oliveira, Marcelo Luiz Chicati, and Everson Cezar. "Optimum size in grid soil sampling for variable rate application in site-specific management." Scientia Agricola 68, no. 3 (2011): 386–92. http://dx.doi.org/10.1590/s0103-90162011000300017.

Abstract:
The importance of understanding the spatial variability of soils is connected to crop management planning. This understanding makes it possible to treat soil not as a uniform but as a variable entity, and it enables site-specific management to increase production efficiency, which is the target of precision agriculture. Questions remain as to the optimum soil sampling interval needed to make site-specific fertilizer recommendations in Brazil. The objectives of this study were: i) to evaluate the spatial variability of the main attributes that influence fertilization recommendations, using georeferenced soil samples arranged in grid patterns of different resolutions; ii) to compare the spatial maps generated with those obtained with the standard sampling of one sample per hectare, in order to verify the appropriateness of the spatial resolution. The attributes evaluated were phosphorus (P), potassium (K), organic matter (OM), base saturation (V%) and clay. Soil samples were collected in a 100 × 100 m georeferenced grid. Thinning was performed in order to create grids with one sample every 2.07, 2.88, 3.75 and 7.20 ha. Geostatistical techniques, such as the semivariogram and interpolation using kriging, were used to analyze the attributes at the different grid resolutions. This analysis was performed with the Vesper software package. The maps created by this method were compared using the kappa statistic. Additionally, correlation graphs were drawn by plotting the observed values against the estimated values using cross-validation. For P, K and V%, a finer sampling resolution than one sample per hectare is required, while for OM and clay, coarser resolutions of one sample every two and three hectares, respectively, may be acceptable.
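Grid-thinning comparisons of this kind rest on fitting semivariograms to each attribute at each resolution. As a small illustration of the first step, here is the classical Matheron estimator of the empirical semivariogram in plain NumPy; the lag width and the input grid are hypothetical, and the study itself used the Vesper package followed by kriging and kappa comparison of the resulting maps.

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_width=100.0, n_lags=15):
    """Matheron estimator: gamma(h) = mean of 0.5*(z_i - z_j)^2 over all pairs
    whose separation distance falls in the lag bin around h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(values), k=1)           # every unique pair once
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    half_sqdiff = 0.5 * (values[i] - values[j]) ** 2
    edges = np.arange(0, (n_lags + 1) * lag_width, lag_width)
    lags, gammas, counts = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            lags.append(dist[mask].mean())
            gammas.append(half_sqdiff[mask].mean())
            counts.append(int(mask.sum()))
    return np.array(lags), np.array(gammas), np.array(counts)

# hypothetical 100 x 100 m sampling grid with simulated phosphorus values
rng = np.random.default_rng(1)
xy = np.array([(x, y) for x in range(0, 1000, 100) for y in range(0, 1000, 100)])
p = rng.normal(15.0, 4.0, size=len(xy))
print(empirical_semivariogram(xy, p))
```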
8

Indira, V., R. Vasanthakumari, N. R. Sakthivel, and V. Sugumaran. "Determination of sample size using power analysis and optimum bin size of histogram features." International Journal of Data Analysis Techniques and Strategies 3, no. 1 (2011): 21. http://dx.doi.org/10.1504/ijdats.2011.038804.

9

Islam, Md Saiful. "Estimation of optimum sample size and number of replications in split-split plot design." Bangladesh Journal of Agricultural Research 32, no. 3 (2008): 403–11. http://dx.doi.org/10.3329/bjar.v32i3.542.

Abstract:
In field experiments, it is necessary to determine the optimum sample size as well as the optimum number of replications if researchers have to use sampling techniques for collecting data from such experiments. Estimates of the optimum sample size and number of replications have been determined for a split-split plot design by minimizing the variance for a given cost of the experiment per treatment.
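Cost-constrained optima of this kind usually follow from a variance model for a treatment mean, Var = (σe² + σs²/m)/r, combined with a linear cost per treatment, C = r(c_plot + m·c_sample). Under those textbook assumptions (not necessarily the exact expressions derived in this paper), the optimum number of samples per plot is m* = √(c_plot·σs² / (c_sample·σe²)), and the number of replications follows from the precision target; a hedged sketch:

```python
import math

def optimum_sampling_plan(sigma2_error, sigma2_sampling,
                          cost_plot, cost_sample, target_var):
    """Optimum samples per plot (m) and replications (r), assuming
    Var(treatment mean) = (sigma2_error + sigma2_sampling / m) / r and
    cost per treatment = r * (cost_plot + m * cost_sample)."""
    # m minimizing cost for a given variance (and vice versa)
    m = math.sqrt((cost_plot * sigma2_sampling) / (cost_sample * sigma2_error))
    m = max(1, round(m))
    # replications needed to reach the target variance with that m
    r = math.ceil((sigma2_error + sigma2_sampling / m) / target_var)
    cost = r * (cost_plot + m * cost_sample)
    return m, r, cost

# illustrative numbers only
print(optimum_sampling_plan(sigma2_error=4.0, sigma2_sampling=9.0,
                            cost_plot=20.0, cost_sample=2.0, target_var=1.5))
```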
10

Singh, A., and G. Singh. "Estimation of Optimum Sample Size for Different Characters of Guava Varieties." Acta Horticulturae, no. 735 (March 2007): 325–27. http://dx.doi.org/10.17660/actahortic.2007.735.45.


Dissertations / Theses on the topic "Optimum sample size"

1

Hathaway, John Ellis. "Determining the Optimum Number of Increments in Composite Sampling." BYU ScholarsArchive, 2005. https://scholarsarchive.byu.edu/etd/425.

Abstract:
Composite sampling can be more cost effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined and a model is developed which accounts for different sources of variation in compositing and data analysis. This model is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
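The trade-off studied here can be illustrated with a simple variance model. As a rough sketch only — the thesis develops its own model with a blending-variance component, and the functional form and numbers below are assumptions — suppose the variance of a composited measurement is σ_inc²/k + σ_blend² + σ_an², with a cost of c_inc per increment plus c_an per analysis; the code scans k for the cheapest plan that meets a variance target.

```python
def cheapest_increment_count(var_increment, var_blend, var_analytical,
                             cost_increment, cost_analysis,
                             target_var, max_k=200):
    """Smallest number of increments k whose composite-measurement variance
    var_increment/k + var_blend + var_analytical meets the target, with cost."""
    for k in range(1, max_k + 1):
        var = var_increment / k + var_blend + var_analytical
        if var <= target_var:
            return k, var, k * cost_increment + cost_analysis
    return None  # target unreachable: blending/analytical variance too large

# illustrative values only
print(cheapest_increment_count(var_increment=25.0, var_blend=0.5,
                               var_analytical=1.0, cost_increment=3.0,
                               cost_analysis=40.0, target_var=3.0))
```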
2

Feijó, Sandra. "Técnicas para execução de experimentos sob ambiente protegido para a cultura da abobrinha italiana." Universidade Federal de Santa Maria, 2005. http://repositorio.ufsm.br/handle/1/3178.

Abstract:
To define techniques for conducting experiments on Italian pumpkin (zucchini) under protected cultivation, an experiment was carried out from 18/08/2003 to 07/12/2003 in a plastic greenhouse at the Department of Fitotecnia, UFSM, Santa Maria, RS. Seedlings were transplanted at a spacing of 0.80 m between plants and 1.0 m between rows, giving four rows of twenty-four plants each. A total of twenty-seven harvests were made, evaluating the weight of fruits with length ≥ 15 cm. Soil was sampled before the experiment was installed; each sample point comprised four subsamples. The Smith heterogeneity index was estimated using Smith's (1938) method, and the optimum plot size using the modified maximum curvature method (Meier & Lessman, 1971). The first paper aimed to determine for how long the productive period of Italian pumpkin in a plastic greenhouse must be evaluated in order to estimate the experimental error and the differences among four harvest intervals. Harvesting and evaluating the initial half of the productive period, using six plants per plot, was sufficient to estimate the experimental error when comparing harvest intervals; because of the high experimental error, evaluating production over the whole productive period was not sufficient to differentiate the four harvest-interval treatments. The second paper aimed to evaluate the Smith heterogeneity index of the main soil chemical characteristics under protected cultivation and to determine the sample size. The Smith heterogeneity index values were considered small, and the optimum plot size was equal to one basic unit, that is, one sample point. The estimated sample size was ten sample points for a confidence interval half-width of 20% of the mean at a 5% error probability. The third paper aimed to evaluate the Smith heterogeneity index of Italian pumpkin yield for different fruit-harvest intervals at different levels of accumulated harvests, to estimate the optimum plot size, and to determine the least significant difference between treatments when varying plot size and number of replications. The Smith heterogeneity index was low, and the use of smaller plots with a larger number of replications improved experimental precision. The optimum plot size for Italian pumpkin yield varied between one and seven plants, depending on harvest frequency. Plots with three plants and six replications are the most suitable for conducting such experiments, with a least significant difference between treatments of 75.94% of the mean.
3

Medeiros, José António Amaro Correia. "Optimal sample size for assessing bacterioneuston structural diversity." Master's thesis, Universidade de Aveiro, 2011. http://hdl.handle.net/10773/10901.

Abstract:
The surface microlayer (SML) is located at the atmosphere–hydrosphere interface and is theoretically defined as the top millimeter of the water column. Operationally, however, the SML is defined by the sampling method used, and its thickness varies with weather conditions and organic matter content, among other factors. The SML is a very dynamic compartment of the water column, involved in the transport of materials between the hydrosphere and the atmosphere. Bacterial communities inhabiting the SML (bacterioneuston) are expected to be adapted to the particular SML environment, which is characterized by physical and chemical stress associated with surface tension, high exposure to solar radiation, and the accumulation of hydrophobic compounds, some of which are pollutants. However, the small volumes of SML water obtained with the different sampling methods reported in the literature make the sampling procedure laborious and time-consuming. Sample size becomes even more critical when microcosm experiments are designed. The objective of this work was to determine the smallest sample size that could be used to assess bacterioneuston diversity by culture-independent methods without compromising representativeness and therefore ecological significance. For that, two extraction methods were tested on samples of 0.5 mL, 5 mL and 10 mL of natural SML obtained at the estuarine system Ria de Aveiro. After DNA extraction, community structure was assessed by DGGE profiling of 16S rRNA gene sequences. The CTAB extraction procedure was selected as the most efficient extraction method and was later used with larger samples (1 mL, 20 mL and 50 mL). The DNA obtained was once more analyzed by DGGE, and the results showed that the estimated diversity of the communities does not increase proportionally with increasing sample size and that a good estimate of the structural diversity of bacterioneuston communities can be obtained with very small samples.
4

Thach, Chau Thuy. "Self-designing optimal group sequential clinical trials." Thesis, University of Washington, 2000. http://hdl.handle.net/1773/9585.

5

Takazawa, Akira. "Optimal decision criteria for the study design and sample size of a biomarker-driven phase III trial." Kyoto University, 2020. http://hdl.handle.net/2433/253492.

6

Fitton, N. V. "Why and How to Report Distributions of Optima in Experiments on Heuristic Algorithms." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1006054556.

7

Brungard, Colby W. "Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah." DigitalCommons@USU, 2009. http://digitalcommons.usu.edu/etd/472.

Abstract:
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEM) and spectral satellite data were used to represent factors controlling soil development and distribution. We investigated optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximated the full range of variability of environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. Random forests was used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] error) were within acceptable levels. Quantitative covariate importance was useful in determining what factors were important for soil distribution. Random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
8

Kothawade, Manish. "A Bayesian Method for Planning Reliability Demonstration Tests for Multi-Component Systems." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1416154538.

9

Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Abstract:
High attrition rates in the drug development pipeline have created a need to shift towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped power method made it possible to rapidly generate multiple hypotheses and to adequately compute the corresponding sample size within 1% of the time usually necessary in more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by a main dose-limiting hematological toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and strengthen ethical conduct.
10

Her, Chi-Way (何淇瑋). "The Optimal Sample Size for Interval Estimation of Correlation Coefficient." Master's thesis, National Chiao Tung University, 2011. http://ndltd.ncl.edu.tw/handle/20731985747790441131.

Abstract:
As the degree of correlation between two variables is of concern in many social science issues, using the sample correlation coefficient to infer the population correlation coefficient is a common method, and determining the optimum sample size in advance saves considerable time and cost. In addition to the traditional hypothesis-testing approach to sample size determination, this research introduces the expected interval length method and the expected interval coverage probability method. The expected interval coverage probability method is based on interval estimation, but it can tighten or relax the sample size according to the specified coverage probability. In this dissertation, SAS software is used to construct the model; after finding the optimal sample size, samples of that size are drawn randomly from two designed populations, and the resulting interval width and interval coverage probability are checked against the original specifications. The results show that the expected interval length method gives good simulation results only when samples are large enough, and that the expected interval coverage probability method is unstable when the population parameter is very close to 0.
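The expected-interval-length criterion described above can be approximated with the usual Fisher z-transformation interval for a correlation: the half-width on the z scale is z_{1−α/2}/√(n−3), and back-transforming with tanh gives the interval for the correlation itself. The sketch below (an approximation based on a planning value of r, not the thesis's SAS implementation) finds the smallest n whose interval width meets a target.

```python
import math
from statistics import NormalDist

def sample_size_for_corr_ci(r_planning, target_width, alpha=0.05, max_n=100000):
    """Smallest n whose Fisher-z confidence interval for the correlation,
    centered at the planning value r_planning, is no wider than target_width."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    zr = math.atanh(r_planning)
    for n in range(4, max_n + 1):
        half = z_crit / math.sqrt(n - 3)
        width = math.tanh(zr + half) - math.tanh(zr - half)
        if width <= target_width:
            return n, width
    raise ValueError("target width not reachable within max_n")

# e.g. planning value r = 0.30, 95% CI no wider than 0.20
print(sample_size_for_corr_ci(0.30, 0.20))
```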

Books on the topic "Optimum sample size"

1

Baxter-Jones, Adam DG. Growth and maturation. Edited by Neil Armstrong and Willem van Mechelen. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780198757672.003.0002.

Abstract:
As children grow they increase in size and maturity. While growth refers to changes in size and complexity of tissue composition, maturation is the progressive achievement of adult status. A child’s growth status is an important determinant of current and lifelong health. Regular physical activity is required to obtain optimal growth. Normal healthy children show the same patterns of growth in terms of attainment of size and changes in proportionality. However, growth is not a linear process; the speed of statural growth decreases during infancy, is relatively constant during childhood, and accelerates during adolescence before slowing down in emerging adulthood. Although the patterns of growth are similar in all individuals, the timing and tempo of growth shows vast variability both within and between sexes. Thus it is important to remember that the effects of growth and maturation may mask or be greater than the effects of exercise.
2

Najavits, Lisa M., and Melissa L. Anderson. Psychosocial Treatments for Posttraumatic Stress Disorder. Oxford University Press, 2015. http://dx.doi.org/10.1093/med:psych/9780199342211.003.0018.

Abstract:
Treatments for posttraumatic stress disorder (PTSD) work better than treatment as usual; average effect sizes are in the moderate to high range. A variety of treatments have been established as effective, with no one treatment having superiority. Both present-focused and past-focused treatment models work (neither consistently outperforms the other). Areas of future development include training, dissemination, client access to care, optimal delivery modes, and mechanisms of action. Methodological issues include improving research reporting, broadening study samples, and greater use of active comparison conditions.
3

Ramdass, Ranjit. Neurophysiology in the assessment of inflammatory myopathies. Edited by Hector Chinoy and Robert Cooper. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198754121.003.0015.

Abstract:
Clinical neurophysiology (electrodiagnosis) includes the assessment of peripheral nerves by electrical stimulation (nerve conduction studies, NCS) and needle examination of muscles (electromyography, EMG). Electrodiagnostic assessment is a functional extension of the clinical examination into the laboratory. It plays an important role in the investigation of a patient suspected of having myositis, providing valuable information regarding peripheral nerve, neuromuscular junction and muscle function, to better characterize clinical syndromes. NCS can establish the presence and quantify the severity of a primary or co-existing peripheral neuropathy, while EMG examination can help discriminate between primary myogenic and primary neurogenic disorders. EMG is potentially more sensitive than clinical examination, as abnormalities can be detected in muscles apparently unaffected on clinical examination. Additionally, a number of muscles can be sampled to help target an optimal muscle biopsy site. Neurophysiology can also assist in monitoring treatment responses and detecting emerging problems, such as steroid myopathy or drug-induced neuropathy.
4

Bartkowicz, Leszek. Tekstura drzewostanów naturalnych w polskich parkach narodowych na tle teorii dynamiki lasu. Publishing House of the University of Agriculture in Krakow, 2021. http://dx.doi.org/10.15576/978-83-66602-20-5.

Abstract:
The aim of the study was to compare the patch-mosaic pattern in old-growth forest stands developed under various climate and soil conditions occurring in different regions of Poland. Based on the assumption that the patch-mosaic pattern in the forest reflects the dynamic processes taking place in it, and that each type of forest ecosystem is characterized by a specific regime of natural disturbances, the following hypotheses were formulated: (i) patches with a complex structure are more common in stands composed of late-successional, shade-tolerant tree species than in those composed of early-successional, light-demanding ones, (ii) the patch-mosaic pattern is more heterogeneous in optimal forest site conditions than in extreme ones, (iii) in similar site conditions, differentiation of the stand structure in the distinguished patches is determined by the successional status of the tree species forming a given patch, (iv) successional trends leading to changes of species composition foster diversification of the patch structure, (v) differentiation of the stand structure is negatively related to the local basal area, especially in patches with a high level of its accumulation. Among the best-preserved old-growth forests remaining under strict protection in the Polish national parks, nineteen research plots of around 10 ha each were selected. In each plot, a grid (50 × 50 m) of circular sample subplots (with radius 12.62 m) was established. In the sample subplots, species and diameter at breast height of living trees (dbh ≥ 7 cm) were determined. Subsequently, for each sample subplot, several numerical indices were calculated: local basal area (G), dbh structure differentiation index (STR), climax index (CL) and successional index (MS). Kruskal-Wallis and Levene tests and Generalized Additive Models (GAM) were used to verify the hypotheses. All examined forests were characterized by a large diversity of stand structure. A particularly high frequency of highly differentiated patches (STR > 0.6) was recorded in the alder swamp forest. The patch mosaic differed among the examined plots: apart from stands with a strongly pronounced mosaic character (especially subalpine spruce forests), there were also stands with high spatial homogeneity (mainly fir forests). The stand structure in the distinguished patches was generally poorly related to the other studied features. Consequently, all hypotheses were rejected. These results indicate a very complex, mixed pattern of forest natural dynamics regardless of site conditions. In beech forests and lowland multi-species deciduous forests, small-scale disturbances of the gap-dynamics type dominate, overlapped by less frequent medium-scale disturbances. In more difficult site conditions, large-scale catastrophic disturbances, which occasionally appear in communities formed under the influence of gap dynamics (mainly spruce forests) or cohort dynamics (mainly pine forests), gain importance.
5

Boyce, Lee, and Melody Schoenfeld. Strength Training for All Body Types. Human Kinetics, 2023. https://doi.org/10.5040/9781718241060.

Abstract:
Every person's body is different. Short, tall, or big all over, training should be designed to accommodate an athlete's different joint angles, bone lengths, and overall body structure. In Strength Training for All Body Types: The Science of Lifting and Levers, Lee Boyce and Melody Schoenfeld have teamed up to create a unique resource that explains how different bodies manage various exercises and how to best take advantage of physical attributes to optimize those movements. Strength Training for All Body Types covers 13 body types: • Tall • Short • Big all over • Short arms and long legs • Short legs and long arms • Long torso • Long torso, short legs, and long arms • Long torso, long legs, and short arms • Short torso, short legs, and long arms • Short torso, long legs, and short arms • Long femurs and short shins • Long shins and short femurs • Small hands Professionals working with people of various shapes and sizes will learn how to modify common lifts like the deadlift, squat, and bench press to maximize training outcomes and reduce the risk of injury. Detailed analysis and descriptions for each exercise variation provide the rationale for the modification and the science that explains why it is beneficial. The authors also dig into the physics of the body and describe how the length and proportions of body levers (e.g., arms, legs, torso) have an impact on the body's response to load. You will be better equipped to help clients use their body's proportions to their advantage rather than being a hindrance to optimal performance. Packed full of strength training exercises, sample workouts, and conditioning work designed for different body sizes, Strength Training for All Body Types gives you the tools you need to help your clients make changes to their technique, become stronger, lift more, and avoid injury. Earn continuing education credits/units! A continuing education exam that uses this book is also available. It may be purchased separately or as part of a package that includes both the book and exam. Audience Personal trainers and strength and conditioning specialists and coaches.
6

Zydroń, Tymoteusz. Wpływ systemów korzeniowych wybranych gatunków drzew na przyrost wytrzymałości gruntu na ścinanie. Publishing House of the University of Agriculture in Krakow, 2019. http://dx.doi.org/10.15576/978-83-66602-46-5.

Abstract:
The aim of the paper was to determine the influence of the root systems of selected tree species found in the Polish Flysch Carpathians on the increase of soil shear strength (root cohesion) in terms of slope stability. The paper's goal was achieved through comprehensive tests on the root systems of eight tree species relatively common in the Polish Flysch Carpathians. The tests included field work, laboratory work and analytical calculations. As part of the field work, the root area ratio (RAR) was determined by profiling the walls of a trench at a distance of about 1.0 m from the tree trunk. The width of the trenches was about 1.0 m, and their depth depended on the ground conditions and ranged from 0.6 to 1.0 m below the ground level. After preparing the walls of the trench, the profile was divided into vertical layers with a height of 0.1 m, within which root diameters were measured. Roots with diameters from 1 to 10 mm were taken into consideration in root area ratio calculations, in accordance with the generally accepted methodology for this type of test. These measurements were made in Bieśnik (silver fir), Ropica Polska (silver birch, black locust) and Szymbark (silver birch, European beech, European hornbeam, silver fir, sycamore maple, Scots pine, European spruce), located near Gorlice (the Low Beskids) in areas with unplanned forest management. For each tested tree species, root samples were taken, transported to the laboratory and then saturated with water for at least one day. Before testing, the samples were taken out of the water and stretched in a tensile testing machine in order to determine their tensile strength and flexibility. In total, over 2200 root samples were tested. The results of the tests on the root area ratio of the root systems and their tensile strength were used to determine the increase in shear strength of the soils, called root cohesion. For this purpose a classic Wu-Waldron calculation model was used, as well as two types of bundle models: the so-called static models (Fiber Bundle Model — FBM1, FBM2, FBM3) and the deformation models (Root Bundle Model — RBM1, RBM2, mRBM1), which differ in the assumptions concerning how the tensile force is distributed among the roots and in the range of parameters taken into account during calculations. A stability analysis of 8 landslides in forest areas of the Ciężkowickie and Wiśnickie Foothills served as a verification of the relevance of the obtained calculation results. The results of the tests on the root area ratio in the profile showed that, as expected, the number of roots in the soil profile and their RAR values are very variable. The root area ratio of the tested tree species for roots of 1-10 mm diameter was at most 0.8% close to the ground surface and decreased with depth, reaching values at least one order of magnitude lower at a depth of 0.5-1.0 m below the ground level. Average values of the root area ratio within the soil profile ranged from 0.05 to 0.13% for Scots pine and European beech, respectively. The measured values of the root area ratio are relatively low in relation to the values of this parameter given in the literature, which is probably connected with the high cohesiveness of the soils and the large number of rock fragments in the soil where the tests were carried out.
Calculation results of the Gale-Grigal function indicate that the distribution of roots in the soil profile is similar for the tested species, apart from the silver fir from Bieśnik and the European hornbeam. Considering the number of roots, their distribution in the soil profile and the root area ratio, it appears that — in terms of slope stability — the root systems of European beech and black locust are the most favourable, which coincides with test results given in the literature. The tensile strength tests showed that the roots of the tested tree species differ in tensile strength: the roots of European beech and European hornbeam had high tensile strength, whereas the roots of the conifers and, among the deciduous trees, silver birch had low tensile strength. The analysis also showed that the roots of the studied tree species are characterized by high variability of mechanical properties. The values of shear strength increase are mainly related to the number and size (diameter) of the roots in the soil profile as well as their tensile strength and pullout resistance, although they can also depend on the calculation method (calculation model) used. The tests showed that the distribution of roots in the soil and their tensile strength are characterized by large variability, which leads to the conclusion that typical geotechnical calculations taking the role of root systems into consideration run a high risk of overestimating their influence on soil reinforcement. Hence, when determining or assuming the increase in shear strength of soil reinforced with roots (root cohesion) for design calculations, a conservative (careful) approach that includes the most unfavourable values of this parameter should be used. The tests showed that the values of shear strength increase calculated using the Wu-Waldron model are in extreme cases three times higher than the values calculated using bundle models. In general, the most conservative estimates of the shear strength increase were obtained using the deformation bundle models RBM2 (RBMw) or mRBM1. The RBM2 model considers the variability of root strength characteristics described by a Weibull survival function and in most cases gives the lowest values of the shear strength increase, usually about 50% of the values determined using the classic Wu-Waldron model. The second model (mRBM1) considers averaged values of root strength parameters as well as the possibility that the two main mechanisms of failure of a root bundle — rupture and pulling out — can occur at the same time. The values of shear strength increase calculated using this model were the lowest in the case of beech and hornbeam roots, which had high tensile strength. This indicates that in the surface part of the profile (down to 0.2 m below the ground level), primarily in the case of deciduous trees, the main mechanism of failure of the root bundle will be pulling out. However, this model requires knowledge of a much greater number of geometrical parameters of the roots and geotechnical parameters of the soil, and it is very sensitive to input data. Therefore, it seems practical to use the RBM2 model to assess the influence of roots on the soil shear strength increase, and, in order to obtain safe results in the surface part of the profile, a Weibull shape coefficient equal to 1.0 can be assumed.
On the other hand, the Wu-Waldron model can be used for an initial assessment of the shear strength increase of soil reinforced with roots when the deformation properties of the root system and its interaction with the soil are not considered, although the values calculated with this model should be corrected and reduced by half. The test results indicate that in terms of slope stability the root systems of beech and hornbeam have the most favourable properties: their maximum soil-reinforcement effect in the profile does not usually exceed 30 kPa down to a depth of 0.5 m, and 20 kPa down to a depth of 1 m. The root systems of conifers have the least impact on slope reinforcement, usually increasing the soil shear strength by less than 5 kPa. These values coincide to a large extent with the range of shear strength increase obtained from direct shear tests as well as with the results of stability analyses given in the literature and carried out as part of this work. The analysis of the literature indicates that the methods of measuring trees' root systems, as well as their interpretation, differ widely, which often limits the possibility of comparing test results. This indicates the need to systematize this type of test, and for this purpose a root distribution model (RDM) can be used, which can be integrated with any deformation bundle model (RBM). A combination of these two calculation models allows the extent of soil reinforcement around trees to be determined, and this information might be used in practice when planning bioengineering measures in areas exposed to surface mass movements. The functionality of this solution can be increased by considering the dynamics of plant development in the calculations. This, however, requires further research of this type in order to obtain more data.
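The classic Wu-Waldron model referred to in this abstract estimates root cohesion as c_r = k″·Σ T_i·RAR_i, the sum of root tensile strengths weighted by the corresponding root area ratios and scaled by a factor commonly taken as about 1.2 (and, as the abstract notes, often further reduced for design). A minimal sketch with purely illustrative numbers:

```python
def wu_waldron_root_cohesion(tensile_strengths_kpa, root_area_ratios, k=1.2):
    """Root cohesion c_r = k * sum(T_i * RAR_i); T_i in kPa, RAR_i dimensionless."""
    if len(tensile_strengths_kpa) != len(root_area_ratios):
        raise ValueError("one tensile strength per root-area-ratio class required")
    return k * sum(t * a for t, a in zip(tensile_strengths_kpa, root_area_ratios))

# two hypothetical diameter classes (values not taken from the thesis)
print(wu_waldron_root_cohesion([20000.0, 15000.0], [0.0005, 0.0002]))  # ≈ 15.6 kPa
```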

Book chapters on the topic "Optimum sample size"

1

Baronov, David. "Optimal Sample Size—or Benign Manipulation, the Good Kind." In Biostatistics. Routledge, 2022. http://dx.doi.org/10.4324/9781003316985-8.

2

Kuß, Uwe. "C-Optimal Decisions with Optimality Properties for Finite Sample Size." In Contributions to Econometrics and Statistics Today. Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/978-3-642-70189-4_15.

3

Zhang, Lanju, Lu Cui, and Yaoyao Xu. "Optimal Adaptive Phase III Design with Interim Sample Size and Dose Determination." In Contemporary Biostatistics with Biopharmaceutical Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15310-6_10.

4

Roy, Sudipto, and Jigyasu Dubey. "Optimal Sample Size Calculation for Machine Learning Based Analysis for Suicide Ideation." In Communications in Computer and Information Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-22485-0_18.

5

De Santis, Fulvio, and Stefania Gubbiotti. "Optimal Sample Size for Evidence and Consensus in Phase III Clinical Trials." In AIRO Springer Series. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95380-5_20.

6

Zaman, Md Faisal, and Hideo Hirose. "Estimation of Optimal Sample Size of Decision Forest with SVM Using Embedded Cross-Validation Method." In Intelligent Information and Database Systems. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20042-7_28.

7

Gnecco, Giorgio, and Federico Nutarelli. "Optimal Trade-Off Between Sample Size and Precision of Supervision for the Fixed Effects Panel Data Model." In Machine Learning, Optimization, and Data Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_44.

8

Brungard, C. W., and J. L. Boettinger. "Conditioned Latin Hypercube Sampling: Optimal Sample Size for Digital Soil Mapping of Arid Rangelands in Utah, USA." In Digital Soil Mapping. Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-8863-5_6.

9

Viaene, Nicole, Johannes Hallmann, and Leendert P. G. Molendijk. "Methods for nematode extraction." In Techniques for work with plant and soil nematodes. CABI, 2021. http://dx.doi.org/10.1079/9781786391759.0012.

Abstract:
Nematodes can be present in different matrices. This chapter describes several methods to extract nematodes from soil and plant parts. It is crucial that an appropriate method is chosen for the purpose of the research as different types of nematodes, and even different nematode stages, are extracted depending on the method. Factors to consider for choosing the optimal extraction method are the extraction efficiency of the method, the maximum sample size that can be analysed and costs of the extraction equipment. In addition, water consumption, labour and the time needed before nematodes can be examined can be important factors.
10

Viaene, Nicole, Johannes Hallmann, and Leendert P. G. Molendijk. "Methods for nematode extraction." In Techniques for work with plant and soil nematodes. CABI, 2021. http://dx.doi.org/10.1079/9781786391759.0002.

Abstract:
Nematodes can be present in different matrices. This chapter describes several methods to extract nematodes from soil and plant parts. It is crucial that an appropriate method is chosen for the purpose of the research as different types of nematodes, and even different nematode stages, are extracted depending on the method. Factors to consider for choosing the optimal extraction method are the extraction efficiency of the method, the maximum sample size that can be analysed and costs of the extraction equipment. In addition, water consumption, labour and the time needed before nematodes can be examined can be important factors.

Conference papers on the topic "Optimum sample size"

1

Osnowski, Andy, Harry Grover, Scott Rankin, Jenni Howe, and Fiona Carson. "On-Site Corrosion Inhibitor Detection for Improved Corrosion Management." In CORROSION 2021. AMPP, 2021. https://doi.org/10.5006/c2021-16792.

Abstract:
Corrosion inhibitors are widely used across the oil and gas industry to protect valuable infrastructure and manage corrosion issues. It is critical these chemicals are applied at a suitable dose: too much chemical can cause emulsion problems and increase OPEX, while too little chemical can leave a system at risk. The detection of corrosion inhibitor micelles can be used to identify the optimum dose. Micelles are nanoscale aggregates and identifying them in complex oilfield fluids, which often consist of multiple phases, is extremely challenging. The first-generation micelle detection technology we developed used complex instrumentation which required highly trained personnel and a lab environment to conduct testing. The technology has since undergone significant development and now consists of a small, rugged instrument, an easy-to-use method and the ability to be deployed on-site (i.e. beside a sample point on a pipe). Advances have also been made in data processing, and software has been built which gives immediate results. This technology package has allowed us to deploy a new corrosion management strategy in areas of the US onshore pipeline network, whereby an initial audit of a system is conducted to identify any sites of overdosing or underdosing. The corrosion inhibitor injection rates are adjusted accordingly, until dosing is detected to be optimal, and then regular (usually monthly or quarterly) monitoring is implemented. The concentration of chemical required to protect a system can change for a wide variety of reasons, such as increased solids production or a change in fluid composition, therefore regular monitoring is essential. Data will be presented on the use of this method in the field.
2

Yuandri, Ryan Jonathan, Udjianna S. Pasaribu, Utriweni Mukhaiyar, and Mohamad N. Heriwan. "Minimum sample size and optimum depth difference study based on time-series analysis of gamma-ray log data." In PROCEEDING OF INTERNATIONAL CONFERENCE ON FRONTIERS OF SCIENCE AND TECHNOLOGY 2021. AIP Publishing, 2022. http://dx.doi.org/10.1063/5.0102509.

3

Farnood, Ramin, Darryl Ngai, and Ning Yan. "Revisiting the Random Disk Model: Decomposition of Paper Structure Using Randomly Deposited Disks with Arbitrary Size Distributions." In Advances in Pulp and Paper Research, Cambridge 2013, edited by S. J. I'Anson. Fundamental Research Committee (FRC), Manchester, 2013. http://dx.doi.org/10.15376/frc.2013.1.41.

Abstract:
In this study, a numerical algorithm was developed to decompose the planar mass structure of paper into a random array of grey disks with a discrete size distribution. The optimum size and frequency of these disks were determined such that the second-order statistics of the corresponding random disk structure resembled those of the paper sample. Using this method, eighty-two (82) commercial and laboratory-made samples were analyzed. It was found that, independent of the forming conditions, the average disk size was proportional to the standard deviation of the disk size distribution. The utility of this new tool in analyzing the effect of papermaking conditions on paper formation is illustrated.
4

Kempe, M., J. Wong, and A. Z. Genack. "Confocal spatial filtering for imaging with ballistic light in transillumination." In Advances in Optical Imaging and Photon Migration. Optica Publishing Group, 1996. http://dx.doi.org/10.1364/aoipm.1996.pmst25.

Abstract:
The detection of ballistic and diffuse light in confocal imaging in transillumination is investigated. We find an optimum pinhole size for detection of ballistic and suppression of diffuse light. The limits of confocal imaging with ballistic light are primarily determined by sample parameters. Spherical aberration introduced by the sample is a major limiting factor for high resolution imaging. In the aberration free case and for sample and illumination characteristics which are typical for biomedical imaging, the limits of ballistic light detection in confocal imaging are close to the noise limits of standard detectors.
5

Hess, Timothy, Beshoy Morkos, Mark Bowman, and Joshua D. Summers. "Cross Analysis of Metal Foam Design Parameters for Achieving Desired Fluid Flow." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-64916.

Abstract:
This paper presents an experimental study of air flow through open cell metal foams for use as a thermal energy dissipating system. The goal of this paper is to identify the optimum configuration of metal foam design parameters for maximum flow. Four foam blocks were used in the study, representing a range of design parameters: material (copper or aluminum), pore size (5–10 pores per inch), and relative density (ε = 0.875–0.952). The effects of pore size were isolated by comparing air flow through three aluminum foam blocks with constant density and varied pore size. A series of wind tunnel tests were performed to measure the velocity of air flowing through the foam as a function of the free stream air velocity, ranging from 0 to 17.4 mph (7.5 m/s). Results indicated smaller pore sizes and larger densities decreased the amount of airflow through the foam. However, one foam sample produced results that did not fit this trend. Further investigation found this was likely due to the differences in the cross-sectional geometry of the foam ligaments. The ligament geometry, affected by density and manufacturing method, was not constant and not initially considered as a variable of interest. The cross-section shape of the ligaments was found to vary from a rounded triangular shape to a triangle shape with concave surfaces, increasing the amount of drag in the airflow through the sample.
6

Sharma, Arjun, Tariq S. Khan, Ebrahim Al Hajri, and Md Islam. "Morphological Characterization of Fouling on Air Cooled Fin Fan Heat Exchangers." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-66972.

Abstract:
In today's fast-growing world, where availability of energy has become a major concern, cost considerations demand optimum heat exchange performance over extended periods of operation. Fouling is one major factor that drastically affects heat exchanger performance. Most of the oil & gas processing plants in the Middle East are located in deserts. Due to the scarcity of water, most of the installed heat exchangers are air-cooled. These heat exchangers are at high risk of low performance due to dusty/sticky particulate fouling. In order to identify possible active/passive methods to control or ideally eliminate particulate fouling, as a first step, it is desirable to know the exact morphology of such particulate fouling. This study presents morphological characterization of selected fouling samples from eight different installed fin fan heat exchangers. Scanning electron microscope (SEM) tests were carried out to determine standard characteristics and the size of the sample foulant powder. Variability in sizes and shapes was found between samples, perhaps due to the different working temperature ranges of the selected heat exchangers. The semi-quantitative sample composition measured by energy-dispersive X-ray microanalysis was as follows: 26.50% Si, 26.12% Ca, 10.07% C and 9% Al, with traces of Fe, Na, Mg, Cl, and some other salts. X-ray diffraction analysis revealed the presence of quartz, calcite and alumina with traces of halite and hematite. The diversity of these fouling samples reflects complexity with respect to their potential removal and their effects on heat transfer.
APA, Harvard, Vancouver, ISO, and other styles
7

Vanderlinde, William E., and Don Chernoff. "X-Ray Nanoanalysis in the SEM." In ISTFA 2005. ASM International, 2005. http://dx.doi.org/10.31399/asm.cp.istfa2005p0370.

Full text
Abstract:
Scanning electron microscopy (SEM)/energy dispersive x-ray spectroscopy (EDS) is generally thought of as a bulk analysis technique that is not suited for nano-scale analysis. This paper discusses several options for reducing or eliminating the interaction volume size and obtaining x-ray data with much higher spatial resolution and surface sensitivity than is typically achieved in the SEM. These include collecting data at very low accelerating voltages to minimize beam spread in the sample, tilting the sample to keep the interaction volume near the surface, and analyzing thin sections to reduce or eliminate the problem of beam spread in the sample. Computer software simulations, in conjunction with experimental data, are used to illustrate these methods. The paper also discusses issues affecting EDS analysis in the environmental SEM. It has been shown that computer modeling is a useful tool for determining the optimum beam conditions to improve energy dispersive analysis in the SEM.
APA, Harvard, Vancouver, ISO, and other styles
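The central claim in the Vanderlinde and Chernoff entry above, that lowering the accelerating voltage shrinks the interaction volume, can be illustrated with the well-known Kanaya–Okayama electron range formula. A minimal sketch follows; the function name is ours, and silicon's handbook values are used purely for illustration.

```python
def kanaya_okayama_range_um(e_kev, a, z, rho):
    """Kanaya-Okayama electron range in micrometres: beam energy in keV,
    atomic weight a (g/mol), atomic number z, density rho (g/cm^3)."""
    return 0.0276 * a * e_kev**1.67 / (z**0.89 * rho)

# Silicon handbook values: A = 28.09 g/mol, Z = 14, rho = 2.33 g/cm^3
for kv in (30, 15, 5, 1):
    r = kanaya_okayama_range_um(kv, 28.09, 14, 2.33)
    print(f"{kv:>2} kV -> ~{r:.3f} um electron range in Si")
```

At 30 kV the range is on the order of 9 um, while at 1 kV it drops to a few tens of nanometres, which is the quantitative reason low-voltage operation improves spatial resolution.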
8

Arnone, Joshua C., Carol V. Ward, Gregory J. Della Rocca, Brett D. Crist, and A. Sherif El-Gizawy. "Simulation-Based Design of Orthopedic Trauma Implants." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-40936.

Full text
Abstract:
A computer-aided simulation model is developed to aid in the design and optimization of orthopaedic trauma implants. The developed model uses digital imaging, computer-aided solid modeling, and finite element methods in order to study the effects of various geometric parameters of fixation devices in orthopedic surgery practice. The results of the present simulation model would lead to the determination of the optimum implant design that provides the best match with the geometry of the human femur — reducing the risk of over-stressing bone tissue during implant insertion. The effectiveness of the presented simulation model is demonstrated through the design of intramedullary (IM) nails used in treating femoral shaft fractures. CT scans were taken of forty intact human femora. A technique was developed in order to digitally reconstruct the scans into 3D solid models using image segmentation, surface simplification, and smoothing methods while maintaining accurate representation of the original scans. Each resulting surface model is characterized by a network of nearly equilateral triangles of approximately the same size allowing for quality finite element meshing. Femoral lengths, curvature, shaft diameters, and location of maximum curvature were then quantified. An average geometric model was then generated for the investigated sample by averaging corresponding nodal coordinates in each femur model. Using the average model, a length-standardized function representing the curvature of the medullary canal was derived to create a geometrically optimized IM nail for the entire sample. “Virtual surgery” simulating the insertion process was then performed using finite element methods in order to validate the proposed optimal IM nail design. The results of both the optimum nail and a current nail were compared using the femur having the highest curvature in the sample. The present study shows that the developed simulation model leads to a nail design that reduces the insertion-induced stress within the femur to an acceptable level compared to current nails.
APA, Harvard, Vancouver, ISO, and other styles
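The averaging step described in the Arnone et al. entry above (an average geometric model obtained by averaging corresponding nodal coordinates) reduces to a simple array operation once node correspondence is established. A minimal sketch under that assumption; the array shapes and function name are ours, not the authors':

```python
import numpy as np

def average_shape(models):
    """Average geometric model: the mean of corresponding nodal coordinates.
    `models` is a list of (n_nodes, 3) arrays in which node i marks the same
    anatomical location in every femur model (the correspondence is assumed
    to come from the remeshing step described in the abstract)."""
    return np.stack(models, axis=0).mean(axis=0)

# Toy example: three 4-node "models"
rng = np.random.default_rng(0)
models = [rng.normal(size=(4, 3)) for _ in range(3)]
print(average_shape(models))
```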
9

Qi, Min, Yueying Wang, and Jia Liu. "Probabilistic Distribution of Parameters Applied to PFM Analysis for Piping in China Experimental Fast Reactor." In 2013 21st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/icone21-16520.

Full text
Abstract:
The safety assessment method based on probabilistic fracture mechanics (PFM) is widely applied to pressure vessels and piping. PFM analysis is more reasonable and reliable than the deterministic fracture mechanics (DFM) method. In PFM analysis, the main assessment parameters, such as loads, material property parameters, structural dimensions and defect sizes, are treated as random, and their probability distributions are determined using the theory of probability and statistics. In connection with the practical engineering of the China Experimental Fast Reactor (CEFR), this paper investigates the probability distributions of these parameters and gives a method for determining the optimum-fitting probability distribution function of parameters for PFM analysis of piping when the sample size is small. This work lays the foundation for further probabilistic safety assessment of CEFR piping.
APA, Harvard, Vancouver, ISO, and other styles
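One common way to pick an optimum-fitting probability distribution for a small sample, in the spirit of the Qi et al. entry above, is to fit several candidate families and keep the one with the best goodness-of-fit statistic. The sketch below uses the Kolmogorov–Smirnov statistic as that criterion purely for illustration; the paper does not state which criterion or candidate families it actually uses, and the sample values are invented.

```python
import numpy as np
from scipy import stats

def best_fit_distribution(sample, candidates=("norm", "lognorm", "weibull_min")):
    """Fit each candidate family by maximum likelihood and return the one
    with the smallest Kolmogorov-Smirnov statistic."""
    fits = {}
    for name in candidates:
        params = getattr(stats, name).fit(sample)
        ks = stats.kstest(sample, name, args=params).statistic
        fits[name] = (ks, params)
    best = min(fits, key=lambda k: fits[k][0])
    return best, fits[best]

# Small synthetic "defect depth" sample (mm), illustrative only
sample = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 1.2, 0.7, 1.3, 1.0, 0.9])
name, (ks, params) = best_fit_distribution(sample)
print(f"best-fitting family: {name}, KS statistic = {ks:.3f}")
```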
10

Jimmy, Dorcas, Emenike Wami, and Michael Ifeanyi Ogba. "Cuttings Lifting Coefficient Model: A Criteria for Cuttings Lifting and Hole Cleaning Quality of Mud in Drilling Optimization." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/212004-ms.

Full text
Abstract:
In this study, the hole cleaning qualities of mud samples formulated with tigernut derivatives (starch and fibre) as additives were determined by adding drill cuttings as impurities and evaluating the Carrying Capacity Index (CCI) and Transport Index (TI) of the muds. Analysis of the mud properties showed that all properties of mud samples B, C1, C2, and C3 except pH were slightly higher than those of the control (standard) mud sample A, although still within the recommended values. Using the mud property results and drilling operations data to evaluate hole cleaning quality, new expressions were developed for the optimum cuttings lifting ability (β) and the cuttings lifting coefficient (β1), which give criteria for cuttings lifting in a wellbore: β1 = 0.11519 (1 − Cf)^(−1) (dp)^(−2.014). The further β1 exceeds one, the better the hole cleaning ability of the mud and the lower the mud flow rate needed to achieve adequate hole cleaning for a given cuttings particle size.
APA, Harvard, Vancouver, ISO, and other styles
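The correlation quoted in the Jimmy et al. entry above can be evaluated directly once Cf and dp are specified. In the sketch below, Cf is read as the cuttings volume fraction and dp as the cuttings particle size; the abstract does not state units, so the input values are illustrative only.

```python
def cuttings_lifting_coefficient(cf, dp):
    """Evaluate beta_1 = 0.11519 * (1 - Cf)^-1 * dp^-2.014 as quoted in the
    abstract. Reading Cf as the cuttings volume fraction and dp as the
    cuttings particle size is our interpretation; units are not stated."""
    return 0.11519 * (1.0 - cf) ** -1 * dp ** -2.014

# Illustrative values only: 5 % cuttings concentration, particle size 0.25
beta1 = cuttings_lifting_coefficient(0.05, 0.25)
print(round(beta1, 2), "-> values above 1 indicate better hole cleaning")
```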

Reports on the topic "Optimum sample size"

1

Cressie, Noel, and Jonathan Biele. A Sample-Size Optimal Bayesian Procedure for Sequential Pharmaceutical Trials. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada248512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Darling, Arthur H., and William J. Vaughan. The Optimal Sample Size for Contingent Valuation Surveys: Applications to Project Analysis. Inter-American Development Bank, 2000. http://dx.doi.org/10.18235/0008824.

Full text
Abstract:
One of the first questions that has to be answered in the survey design process is "How many subjects should be interviewed?" The answer can have significant implications for the cost of project preparation, since in Latin America and the Caribbean costs per interview can range from US$20 to US$100. Traditionally, the sample size question has been answered in an unsatisfactory way, by either dividing an exogenously fixed survey budget by the cost per interview or by employing some variant of a standard statistical tolerance interval formula. The answer is not to be found in the environmental economics literature, but it can be developed by adapting a Bayesian decision analysis approach from business statistics. The paper explains and illustrates, with a worked example, the rationale for and mechanics of a sequential Bayesian optimization technique, which is only applicable when there is some monetary payoff to alternative courses of action that can be linked to the sample data.
APA, Harvard, Vancouver, ISO, and other styles
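The Darling and Vaughan report above frames sample size as a decision problem: keep interviewing only while the expected value of the extra information exceeds the cost per interview. Below is a non-sequential preposterior sketch of that idea under a normal-normal model; every number (prior, payoff scale, threshold, cost) is invented for illustration and is not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
sims = 50_000

# Illustrative numbers only -- not the report's actual figures.
m0, s0 = 25.0, 10.0        # prior mean / sd of mean WTP (US$ per household)
sigma = 30.0               # sd of individual WTP responses
threshold = 20.0           # build the project only if mean WTP exceeds this
payoff_scale = 1e6         # net benefit per $ of (true mean - threshold) if built
cost_per_interview = 60.0

mu = rng.normal(m0, s0, sims)     # true mean WTP, drawn from the prior
z = rng.standard_normal(sims)     # shared noise -> common random numbers across n

def expected_net_gain(n):
    """Preposterior expected payoff of a survey with n interviews."""
    xbar = mu + (sigma / np.sqrt(n)) * z              # hypothetical sample mean
    w = (n / sigma**2) / (n / sigma**2 + 1 / s0**2)   # normal-normal shrinkage
    post_mean = w * xbar + (1 - w) * m0
    payoff = np.where(post_mean > threshold, payoff_scale * (mu - threshold), 0.0)
    return payoff.mean() - cost_per_interview * n

for n in (50, 100, 200, 400, 800, 1600):
    print(n, round(expected_net_gain(n)))
```

The printed curve typically rises and then falls as interview costs overtake the shrinking value of extra information; the sample size at the peak is the (approximately) optimal one under these assumed numbers.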
3

Kott, Phillip S. Calibration-Weighting a Stratified Simple Random Sample with SUDAAN. RTI Press, 2022. http://dx.doi.org/10.3768/rtipress.2022.mr.0048.2204.

Full text
Abstract:
This report shows how to apply the calibration-weighting procedures in SAS-callable SUDAAN (Version 11) to a stratified simple random sample drawn from a complete list frame for an establishment survey. The results are calibrated weights produced via raking, raking to a size variable, and pseudo-optimal calibration that potentially reduce and appropriately measure the standard errors of estimated totals. The report then shows how to use these procedures to remove selection bias caused by unit nonresponse under a plausible response model. Although unit nonresponse is usually assumed to be a function of variables with known population or full-sample estimated totals, calibration weighting can often be used when nonresponse is assumed to be a function of a variable known only for unit respondents (i.e., not missing at random). When producing calibrated weights for an establishment survey, one advantage the SUDAAN procedures have over most of their competitors is that their linearization-based variance estimators can capture the impact of finite-population correction.
APA, Harvard, Vancouver, ISO, and other styles
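Raking, one of the calibration options the Kott report above discusses, is iterative proportional fitting of the design weights to known control totals. The sketch below is a generic Python implementation, not SUDAAN code, and the toy control totals are invented.

```python
import numpy as np

def rake(weights, groups, targets, iters=50, tol=1e-8):
    """Rake base weights so that weighted totals match known control totals.

    weights : (n,) base (design) weights
    groups  : list of (n,) integer arrays, one per calibration dimension,
              coding each unit's category in that dimension
    targets : list of 1-D arrays of known population totals per category
    """
    w = weights.astype(float).copy()
    for _ in range(iters):
        max_adj = 0.0
        for g, t in zip(groups, targets):
            cur = np.bincount(g, weights=w, minlength=len(t))
            adj = np.where(cur > 0, t / cur, 1.0)
            max_adj = max(max_adj, np.abs(adj - 1).max())
            w *= adj[g]
        if max_adj < tol:
            break
    return w

# Toy example: 6 establishments, 2 size classes x 2 regions
base = np.ones(6)
size_class = np.array([0, 0, 0, 1, 1, 1])
region = np.array([0, 1, 0, 1, 0, 1])
w = rake(base, [size_class, region],
         [np.array([40.0, 60.0]), np.array([55.0, 45.0])])
print(np.round(w, 2), w.sum())
```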
4

Shivakumar, Pranavkumar, Kanika Gupta, Antonio Bobet, Boonam Shin, and Peter J. Becker. Estimating Strength from Stiffness for Chemically Treated Soils. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317383.

Full text
Abstract:
The central theme of this study is to identify strength-stiffness correlations for chemically treated subgrade soils in Indiana. This was done by conducting Unconfined Compression (UC) Tests and Resilient Modulus Tests for soils collected at three different sites—US-31, SR-37, and I-65. At each site, soil samples were obtained from 11 locations at 30 ft spacing. The soils were treated in the laboratory with cement, using the same proportions used for construction, and cured for 7 and 28 days before testing. Results from the UC tests were compared with the resilient modulus results that were available. No direct correlation was found between resilient modulus and UCS parameters for the soils investigated in this study. A brief statistical analysis of the results was conducted, and a simple linear regression model involving the soil characteristics (plasticity index, optimum moisture content and maximum dry density) along with UCS and resilient modulus parameters was proposed.
APA, Harvard, Vancouver, ISO, and other styles
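The simple linear regression model mentioned in the Shivakumar et al. entry above, relating UCS and resilient modulus parameters to the soil index properties, can be set up as ordinary least squares. The sketch below assumes UCS is the response and the other quantities are predictors, which is our reading of the abstract; all data values are made up for illustration.

```python
import numpy as np

# Hypothetical columns: resilient modulus (MPa), plasticity index (%),
# optimum moisture content (%), maximum dry density (kN/m^3); the response
# is 7-day UCS (kPa).  All numbers are invented for illustration.
X_raw = np.array([
    [120., 12., 14.5, 17.8],
    [150., 10., 13.9, 18.2],
    [ 95., 15., 15.8, 17.1],
    [170.,  9., 13.2, 18.6],
    [110., 14., 15.0, 17.5],
    [140., 11., 14.1, 18.0],
    [105., 13., 15.3, 17.3],
    [160., 10., 13.6, 18.4],
])
y = np.array([620., 780., 510., 880., 590., 720., 560., 830.])

X = np.column_stack([np.ones(len(y)), X_raw])     # add an intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # ordinary least squares
resid = y - X @ coef
print("coefficients:", np.round(coef, 2))
print("R^2:", round(1 - resid @ resid / ((y - y.mean()) @ (y - y.mean())), 3))
```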
5

Blundell, S., and Nicole Wayant. Local spatial dispersion for multiscale modeling of geospatial data : exploring dispersion measures to determine optimal raster data sample sizes. Engineer Research and Development Center (U.S.), 2020. http://dx.doi.org/10.21079/11681/35678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Reyes-Tagle, Gerardo, and Jorge E. Muñoz-Ayala. Debt and Economic Growth: Does Size Matter? Evidence from Dynamic Parametric and Static Non-parametric Approaches. Inter-American Development Bank, 2023. http://dx.doi.org/10.18235/0004818.

Full text
Abstract:
This paper provides new evidence on the effect of debt on economic growth through two alternative methodological approaches. On the one hand, by using a panel error correction model with a sample of 130 countries between 1980 and 2020, we found evidence of the existence of a range of debt-to-GDP ratios for which economic growth remains positive after debt surges. This threshold may lie between 32 percent and 136 percent, with optimal economic growth achieved at an 84 percent debt-to-GDP ratio for the whole sample of countries. The error correction form for the economic growth was dynamically consistent and non-linear with respect to the debt-to-GDP ratio. On the other hand, recent evidence has shown that commodity price volatility increases external debt accumulation for commodity-exporting countries. Still, there is no evidence of the effects of debt surges on these countries' economic growth. This paper provides original insights into the relationship between economic growth and the debt-to-GDP ratio for commodity and non-commodity-driven economies by employing a regression discontinuity design (RDD) approach. This method allows us to estimate differences in economic growth around an estimated threshold without assuming any specific function for the underlying relationship between the two variables. Our findings suggest that non-commodity-driven economies benefit from a higher threshold (85 percent) than commodity-exporting economies (50 percent).
APA, Harvard, Vancouver, ISO, and other styles
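The regression discontinuity design mentioned in the Reyes-Tagle and Muñoz-Ayala entry above compares fitted growth just above and just below a debt-ratio cutoff. A minimal sharp-RDD sketch on synthetic data follows; the cutoff, bandwidth, and data-generating numbers are ours, not the paper's estimates.

```python
import numpy as np

def rdd_effect(debt, growth, cutoff, bandwidth):
    """Sharp-RDD sketch: difference in fitted growth at the cutoff, using
    separate local linear fits on each side within +/- bandwidth."""
    lo = (debt >= cutoff - bandwidth) & (debt < cutoff)
    hi = (debt >= cutoff) & (debt <= cutoff + bandwidth)

    def fit_at_cutoff(mask):
        x = debt[mask] - cutoff
        X = np.column_stack([np.ones(x.size), x])
        beta, *_ = np.linalg.lstsq(X, growth[mask], rcond=None)
        return beta[0]                  # intercept = fitted value at the cutoff

    return fit_at_cutoff(hi) - fit_at_cutoff(lo)

# Synthetic data: growth slows above an 85 % debt ratio (illustrative only)
rng = np.random.default_rng(2)
debt = rng.uniform(20, 150, 500)
growth = 4 - 0.01 * debt - 1.0 * (debt > 85) + rng.normal(0, 0.5, 500)
print("estimated jump at 85 %:", round(rdd_effect(debt, growth, 85, 25), 2))
```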
7

Fent, Thomas, Stefan Wrzaczek, Gustav Feichtinger, and Andreas Novak. Fertility decline and age-structure in China and India. Verlag der Österreichischen Akademie der Wissenschaften, 2024. http://dx.doi.org/10.1553/0x003f0d14.

Full text
Abstract:
China and India, two Asian countries that experienced a rapid decline in fertility since the middle of the twentieth century, are the focus of this paper. Although there is no doubt that lower fertility levels have many positive effects on the economy, development and sustainability, little is known about the optimal transition from high to medium or even low levels of fertility. Firstly, implementing policies that have the potential to reduce fertility is costly. Secondly, additional costs arise from adapting the infrastructure to a population that fluctuates quickly not only in terms of size but also with respect to the age structure. We apply an intertemporal optimisation model that takes the costs and benefits of fertility decline into account. The optimal time path depends on the cost structure, the planning horizon and the initial conditions. In the case of a long planning horizon and high initial fertility, it may even be optimal to reduce fertility temporarily below replacement level in order to slow down population growth at an early stage. A key finding of our formal investigation is that, under the same plausible parameter settings, the optimal paths for China and India differ substantially. Moreover, our analysis shows that India, where the fertility decline emerged as a consequence of societal and economic developments, followed a path closer to the optimal fertility transition than China, where the fertility decline was state-imposed. The mathematical approach deployed for this analysis provides insights into the optimal long-term development of fertility and allows for policy conclusions to be drawn for other countries that are still in the fertility transition process.
APA, Harvard, Vancouver, ISO, and other styles
8

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Full text
Abstract:
This project includes two main parts: development of a “Selective Wavelength Imaging Sensor” and an “Adaptive Classifier System” for adaptive imaging and sorting of agricultural products, respectively. Three different technologies were investigated for building a selectable wavelength imaging sensor: diffraction gratings, tunable filters and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor. Acousto-optic tunable filters were found to be most suitable for the selective wavelength imaging sensor. Consequently, a selectable wavelength imaging sensor was constructed and tested using the selected technology. The sensor was tested and algorithms for multispectral image acquisition were developed. A high speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing by a DSP and a PC-based host CPU, in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface technique to promote the accessibility of the difficult theoretical subjects. Typical grade classifiers based on k-Nearest Neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier. Simulations were run on artificial data with two dimensions, four populations and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size, and stack length determined optimal values for the different parameters involved. The technique was also applied to a real produce sorting problem, e.g., an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4-month period. In all cases, the results showed a clear reduction in classification error by using the adaptive technique versus non-adaptive sorting.
APA, Harvard, Vancouver, ISO, and other styles
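The population classifier described in the Searcy and Peleg entry above assigns an incoming batch to the prototype population whose empirical CDF it matches best in the Kolmogorov–Smirnov sense. A minimal sketch with an invented one-dimensional feature and two prototype grades:

```python
import numpy as np
from scipy import stats

def classify_batch(batch, prototypes):
    """Assign a batch of feature measurements to the prototype population
    whose empirical CDF is closest in the Kolmogorov-Smirnov sense."""
    ks = {label: stats.ks_2samp(batch, proto).statistic
          for label, proto in prototypes.items()}
    return min(ks, key=ks.get), ks

# Toy prototypes: one hypothetical feature (e.g., surface brightness) per grade
rng = np.random.default_rng(3)
prototypes = {
    "grade A": rng.normal(0.80, 0.05, 300),
    "grade B": rng.normal(0.65, 0.07, 300),
}
batch = rng.normal(0.66, 0.07, 40)           # incoming fruit from one lot
label, ks = classify_batch(batch, prototypes)
print(label, {k: round(v, 3) for k, v in ks.items()})
```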
9

Koven, William, Gordon Grau, Benny Ron, and Tetsuya Hirano. Improving fry quality, survival and growth in commercially farmed fish by dietary stimulation of thyroid hormone production in premetamorphosing larvae. United States Department of Agriculture, 2004. http://dx.doi.org/10.32747/2004.7695856.bard.

Full text
Abstract:
There is a direct correlation between successful metamorphosis from larvae to post-larvae and the quality of the resultant juveniles or fry. Juvenile quality, in turn, is a major factor influencing fish production level and market price. However, following the profound morphological and physiological changes occurring during metamorphosis, the emerging juveniles in some species characteristically demonstrate heterotrophic growth, poor pigmentation, cannibalism and generally poor survival. The white grouper (Epinephelus aeneus) in Israel and the Pacific threadfin (Polydactylussexfilis) in Hawaii are two promising candidates for mariculture that have high market value but a natural fishery that has sharply declined in recent years. Unfortunately, their potential for culture is severely hampered by variable metamorphic success limiting their production. The main objective was to compare the efficacy and economic viability of dietary or environmental iodine on metamorphic success and juvenile quality in the white grouper and the pink snapper which would lead to improved commercial rearing protocols and increased production of these species both in Israel and the US. The Hawaii Institute of Marine Biology encountered problems with the availability of pink snapper brood stock and larvae and changed to Pacific threadfin or moi which is rapidly becoming a premier aquaculture species in Hawaii and throughout the Indo-Pacific. The white grouper brood stock at the National Center for Mariculture was lost as a result of a viral outbreak following the sudden breakdown of the ozone purification system. In addition, the NCM suffered a devastating fire in the fall of 2007 that completely destroyed the hatchery and laboratory facilities although the BARD project samples were saved. Nevertheless, by studying alternate species a number of valuable findings and conclusions that can contribute to improved metamorphosis in commercially valuable marine species resulted from this collaborative effort. The Israeli group found that exposing white grouper larvae to external TH levels synchronized and increased the rate of metamorphosis. This suggested that sub-optimal synthesis of TH may be a major factor causing size heterogeneity in the larval population and high mortality through cannibalism by their larger more metamorphosed cohorts. Two protocols were developed to enrich the larvae with higher levels of the TH precursor, iodine; feeding iodine enriched Artemia or increasing the level of seawater iodine the larvae are exposed to. Results of accumulated iodine in gilthead seabream larvae indicated that the absorption of iodine from the water is markedly more efficient than feeding iodine enriched Artemia nauplii. Samples for TH, which will be analyzed shortly, will be able to determine if another dietary factor is lacking to effectively utilize surplus tissue iodine for TH synthesis. Moreover, these samples will also clarify which approach to enriching larvae with iodine, through the live food or exposure to iodine enriched seawater is the most efficient and cost effective. The American group found that moi larvae reared in ocean water, which possessed substantially higher iodine levels than those found in seawater well water, grew significantly larger, and showed increased survival compared with well water reared larvae. Larvae reared in ocean water also progressed more rapidly through developmental stages than those in low-iodine well seawater. 
In collaboration with Israeli counterparts, a highly specific and precise radioimmunoassay procedure for thyroid hormones and cortisol was developed. Taken altogether, the combined Hawaiian and Israeli collaborative research suggests that for teleost species of commercial value, adequate levels of environmental iodine are more determinate in metamorphosis than iodine levels in the live zooplankton food provided to the larvae. Insuring sufficiently high enough iodine in the ambient seawater offers a much more economical solution to improved metamorphosis than enriching the live food with costly liposomes incorporating iodine rich oils.
APA, Harvard, Vancouver, ISO, and other styles
10

Castellano, Mike J., Abraham G. Shaviv, Raphael Linker, and Matt Liebman. Improving nitrogen availability indicators by emphasizing correlations between gross nitrogen mineralization and the quality and quantity of labile soil organic matter fractions. United States Department of Agriculture, 2012. http://dx.doi.org/10.32747/2012.7597926.bard.

Full text
Abstract:
A major goal in Israeli and U.S. agroecosystems is to maximize nitrogen availability to crops while minimizing nitrogen losses to air and water resources. This goal has presented a significant challenge to global agronomists and scientists because crops require large inputs of nitrogen (N) fertilizer to maximize yield, but N fertilizers are easily lost to surrounding ecosystems where they contribute to water pollution and greenhouse gas concentrations. Determination of the optimum N fertilizer input is complex because the amount of N produced from soil organic matter varies with time, space and management. Indicators of soil N availability may help to guide requirements for N fertilizer inputs and are increasingly viewed as indicators of soil health. To address these challenges and improve N availability indicators, project 4550 “Improving nitrogen availability indicators by emphasizing correlations between gross nitrogen mineralization and the quality and quantity of labile organic matter fractions” addressed the following objectives: Link the quantity and quality of labile soil organic matter fractions to indicators of soil fertility and environmental quality including: i) laboratory potential net N mineralization, ii) in situ gross N mineralization, iii) in situ N accumulation on ion exchange resins, iv) crop uptake of N from mineralized soil organic matter sources (non-fertilizer N), and v) soil nitrate pool size. Evaluate and compare the potential for hot water extractable organic matter (HWEOM) and particulate organic matter quantity and quality to characterize soil N dynamics in biophysically variable Israeli and U.S. agroecosystems that are managed with different N fertility sources. Ultimately, we sought to determine if nitrogen availability indicators are the same for i) gross vs. potential net N mineralization processes, ii) diverse agroecosystems (Israel vs. US), and iii) management strategies (organic vs. inorganic N fertility sources). Nitrogen availability indicators significantly differed for gross vs. potential N mineralization processes. These results highlight that different mechanisms control each process. Although most research on N availability indicators focuses on potential net N mineralization, new research highlights that gross N mineralization may better reflect plant N availability. Results from this project identify the use of ion exchange resin (IER) beads as a potential technical advance to improve N mineralization assays and predictors of N availability. The IERs mimic the rhizosphere by protecting mineralized N from loss and immobilization. As a result, the IERs may save time and money by providing a measurement of N mineralization that is more similar to the costly and time-consuming measurement of gross N mineralization. In further search of more accurate and cost-effective predictors of N dynamics, Excitation-Emission Matrix (EEM) spectroscopy analysis of HWEOM solution has the potential to provide reliable indicators for changes in HWEOM over time. These results demonstrated that conventional methods of labile soil organic matter quantity (HWEOM) coupled with new analyses (EEM) may be used to obtain more detailed information about N dynamics. Across Israeli and US soils with organic and inorganic based N fertility sources, multiple linear regression models were developed to predict gross and potential N mineralization.
The use of N availability indicators is increasing as they are incorporated into soil health assessments and agroecosystem models that guide N inputs. Results from this project suggest that some soil variables can universally predict these important ecosystem processes across diverse soils, climates and agronomic management.
APA, Harvard, Vancouver, ISO, and other styles