
Journal articles on the topic 'Sample size computations'


Consult the top 50 journal articles for your research on the topic 'Sample size computations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kang, Dongwoo, Janice B. Schwartz, and Davide Verotta. "Sample Size Computations for PK/PD Population Models." Journal of Pharmacokinetics and Pharmacodynamics 32, no. 5-6 (2005): 685–701. http://dx.doi.org/10.1007/s10928-005-0078-3.

2

Mindrila, Diana. "Bayesian Latent Class Analysis: Sample Size, Model Size, and Classification Precision." Mathematics 11, no. 12 (2023): 2753. http://dx.doi.org/10.3390/math11122753.

Abstract:
The current literature includes limited information on the classification precision of Bayes estimation for latent class analysis (BLCA). (1) Objectives: The present study compared BLCA with the robust maximum likelihood (MLR) procedure, which is the default procedure in the Mplus 8.0 software. (2) Method: Markov chain Monte Carlo simulations were used to estimate two-, three-, and four-class models measured by four binary observed indicators with samples of 1000, 750, 500, 250, 100, and 75 observations. With each sample, the number of replications was 500, and entropy and average latent class probabilities for most likely latent class membership were recorded. (3) Results: Bayes entropy values were more stable and ranged between 0.644 and 1. Bayes average latent class probabilities ranged between 0.528 and 1. MLR entropy values ranged between 0.552 and 0.958, and MLR average latent class probabilities ranged between 0.539 and 0.993. With the two-class model, BLCA outperformed MLR at all sample sizes. With the three-class model, BLCA had higher classification precision with the 75-observation sample, whereas MLR performed slightly better with the 750- and 1000-observation samples. With the four-class model, BLCA underperformed MLR and had an increased number of unsuccessful computations, particularly with smaller samples.
3

Mehta, Cyrus R., Nitin R. Patel, and Pralay Senchaudhuri. "Exact Power and Sample-Size Computations for the Cochran-Armitage Trend Test." Biometrics 54, no. 4 (1998): 1615. http://dx.doi.org/10.2307/2533685.

4

Chaibub Neto, Elias. "Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications." PLOS ONE 10, no. 6 (2015): e0131333. http://dx.doi.org/10.1371/journal.pone.0131333.

5

Dilba, Gemechis, Frank Bretz, Ludwig A. Hothorn, and Volker Guiard. "Power and sample size computations in simultaneous tests for non-inferiority based on relative margins." Statistics in Medicine 25, no. 7 (2006): 1131–47. http://dx.doi.org/10.1002/sim.2359.

6

Meng, Fanyu, Wei Shao, and Yuxia Su. "Computing Simplicial Depth by Using Importance Sampling Algorithm and Its Application." Mathematical Problems in Engineering 2021 (December 31, 2021): 1–11. http://dx.doi.org/10.1155/2021/6663641.

Abstract:
Simplicial depth (SD) plays an important role in discriminant analysis, hypothesis testing, machine learning, and engineering computations. However, the computation of simplicial depth is hugely challenging because the exact algorithm is an NP problem with dimension d and sample size n as input arguments. The approximate algorithm for simplicial depth computation has extremely low efficiency, especially in high-dimensional cases. In this study, we design an importance sampling algorithm for the computation of simplicial depth. As an advanced Monte Carlo method, the proposed algorithm outperforms other approximate and exact algorithms in accuracy and efficiency, as shown by simulated and real data experiments. Furthermore, we illustrate the robustness of simplicial depth in regression analysis through a concrete physical data experiment.
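As a point of reference for the abstract above, the sketch below estimates simplicial depth by plain Monte Carlo (randomly drawn simplices tested for containment via barycentric coordinates); it is not the paper's importance-sampling algorithm, and the function name and parameters are illustrative.

```python
# Plain Monte Carlo estimate of simplicial depth (a baseline only, not the
# paper's importance-sampling algorithm): the depth of a point p is the
# probability that a simplex formed by d+1 randomly chosen sample points
# contains p; containment is checked via barycentric coordinates.
import numpy as np

def simplicial_depth_mc(p, X, n_simplices=20000, seed=None):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    hits = 0
    for _ in range(n_simplices):
        v = X[rng.choice(n, size=d + 1, replace=False)]
        A = (v[1:] - v[0]).T                    # columns are the simplex edge vectors
        try:
            lam = np.linalg.solve(A, p - v[0])  # barycentric coordinates (all but the first)
        except np.linalg.LinAlgError:
            continue                            # degenerate simplex, skip it
        if np.all(lam >= 0) and lam.sum() <= 1:
            hits += 1
    return hits / n_simplices

X = np.random.default_rng(0).normal(size=(200, 2))
print(simplicial_depth_mc(np.zeros(2), X))      # depth of the origin in a 2-D Gaussian sample
```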
7

Royston, Patrick, and Abdel Babiker. "A Menu-driven Facility for Complex Sample Size Calculation in Randomized Controlled Trials with a Survival or a Binary Outcome." Stata Journal: Promoting communications on statistics and Stata 2, no. 2 (2002): 151–63. http://dx.doi.org/10.1177/1536867x0200200204.

Abstract:
We present a menu-driven Stata program for the calculation of sample size or power for complex clinical trials with a survival time or a binary outcome. The features supported include up to six treatment arms, an arbitrary time-to-event distribution, fixed or time-varying hazard ratios, unequal patient allocation, loss to follow-up, staggered patient entry, and crossover of patients from their allocated treatment to an alternative treatment. The computations of sample size and power are based on the logrank test and are done according to the asymptotic distribution of the logrank test statistic, adjusted appropriately for the design features.
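For orientation, the following sketch shows the kind of logrank-based calculation such menu-driven tools build on, using the Schoenfeld approximation under an assumed hazard ratio and event probabilities; it is not the authors' Stata program, and the function names are mine.

```python
# Schoenfeld-style sample-size sketch for a two-arm logrank comparison (not the
# authors' menu-driven Stata facility); the hazard ratio, alpha, power, allocation,
# and event probabilities below are illustrative assumptions.
import math
from scipy.stats import norm

def events_schoenfeld(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Total number of events required by the logrank test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (alloc * (1 - alloc) * math.log(hr)**2)

def total_sample_size(hr, p_event_control, p_event_treat, **kw):
    """Convert required events into subjects via the average event probability."""
    return events_schoenfeld(hr, **kw) / (0.5 * (p_event_control + p_event_treat))

print(round(events_schoenfeld(hr=0.75)))           # ~380 events for HR 0.75, 5% two-sided, 80% power
print(round(total_sample_size(0.75, 0.40, 0.32)))  # subjects when roughly 36% are expected to have an event
```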
8

Avramchuk, Valeriy V., E. E. Luneva, and Alexander G. Cheremnov. "Increasing the Efficiency of Using Hardware Resources for Time-Frequency Correlation Function Computation." Advanced Materials Research 1040 (September 2014): 969–74. http://dx.doi.org/10.4028/www.scientific.net/amr.1040.969.

Abstract:
In this article, techniques for using multi-core processors more efficiently in the calculation of the fast Fourier transform are considered. The fast Fourier transform is computed as part of calculating a time-frequency correlation function. The time-frequency correlation function increases the information content of the analysis compared with the classic correlation function, but it requires significant computational capability because the fast Fourier transform must be computed many times. For computing the fast Fourier transform, the Cooley-Tukey algorithm with fixed base two is used, which lends itself to efficient parallelization and is simple to implement. Immediately before the fast Fourier transform computation, the input data sequence is bit-reversed. For the algorithm calculating the time-frequency correlation function, a parallel computing technique was used that experimentally yielded the data defining the optimal number of iterations for each core of the CPU, depending on the sample size. The results of the experiments allowed developing special software that automatically selects the effective number of subtasks for parallel processing. The software also provides a choice of sequential or parallel computation mode, depending on the sample size and the number of frequency intervals in the calculation of the time-frequency correlation function.
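The core operation described above is repeated FFT-based correlation; a serial NumPy sketch of cross-correlation via the FFT (not the authors' parallel implementation) is shown below.

```python
# Cross-correlation via the FFT -- the building block of the time-frequency
# correlation function described above -- as a serial NumPy sketch (the radix-2
# Cooley-Tukey FFT and bit reversal are handled internally by numpy.fft).
import numpy as np

def xcorr_fft(x, y):
    """Circular cross-correlation c[k] = sum_m x[m+k] * y[m], computed via the FFT."""
    n = len(x) + len(y) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two (radix-2 friendly)
    X = np.fft.rfft(x, nfft)
    Y = np.fft.rfft(y, nfft)
    return np.fft.irfft(X * np.conj(Y), nfft)

rng = np.random.default_rng(1)
x = rng.normal(size=1024)
y = np.roll(x, 37)                              # y is x delayed by 37 samples
c = xcorr_fft(x, y)
k = int(np.argmax(c))
print(k if k < len(c) // 2 else k - len(c))     # -> -37 (x leads y by 37 samples)
```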
9

Li, Lin, Kelvin C. P. Wang, Qiang Li, Wenting Luo, and Jiangang Guo. "Impacts of Sample Size on Calculation of Pavement Texture Indicators with 1mm 3D Surface Data." Periodica Polytechnica Transportation Engineering 46, no. 1 (2017): 42. http://dx.doi.org/10.3311/pptr.9587.

Abstract:
The emerging 1mm resolution 3D data collection technology is capable of covering the entire pavement surface and provides larger data sets than traditional line-of-sight data collection systems. As a result, quantifying the impact of sample size, including sample width and sample length, on the calculation of pavement texture indicators is becoming possible. In this study, 1mm 3D texture data are collected and processed at seven test sites using the PaveVision3D Ultra system. Analysis of Variance (ANOVA) tests and linear regression models are developed to investigate the effect of various sample lengths and widths on the calculation of three widely used texture indicators: Mean Profile Depth (MPD), Mean Texture Depth (MTD) and Power Spectral Density (PSD). Since the current ASTM standards and other procedures cannot be directly applied to 3D surfaces for production due to a lack of definitions, the results from this research are beneficial in the process of standardizing texture indicators' computations with 1mm 3D surface data of pavements.
10

Harris, Richard J., and Dana Quade. "The Minimally Important Difference Significant Criterion for Sample Size." Journal of Educational Statistics 17, no. 1 (1992): 27–49. http://dx.doi.org/10.3102/10769986017001027.

Abstract:
For a wide range of tests of single-df hypotheses, the sample size needed to achieve 50% power is readily approximated by setting N such that a significance test conducted on data that fit one’s assumptions perfectly just barely achieves statistical significance at one’s chosen alpha level. If the effect size assumed in establishing one’s N is the minimally important effect size (i.e., that effect size such that population differences or correlations smaller than that are not of any practical or theoretical significance, whether statistically significant or not), then 50% power is optimal, because the probability of rejecting the null hypothesis should be greater than .5 when the population difference is of practical or theoretical significance but lower than .5 when it is not. Moreover, the actual power of the test in this case will be considerably higher than .5, exceeding .95 for a population difference two or more times as large as the minimally important difference (MID). This minimally important difference significant (MIDS) criterion extends naturally to specific comparisons following (or substituting for) overall tests such as the ANOVA F and chi-square for contingency tables, although the power of the overall test (i.e., the probability of finding some statistically significant specific comparison) is considerably greater than .5 when the MIDS criterion is applied to the overall test. However, the proper focus for power computations is one or more specific comparisons (rather than the omnibus test), and the MIDS criterion is well suited to setting sample size on this basis. Whereas N_MIDS (the sample size specified by the MIDS criterion) is much too small for the case in which we wish to prove the modified H0 that there is no important population effect, it nonetheless provides a useful metric for specifying the necessary sample size. In particular, the sample size needed to have a 1 − α probability that the (1 − α)-level confidence interval around one’s population parameter includes no important departure from H0 is four times N_MIDS when H0 is true and approximately [4/(1 − b)²]·N_MIDS when b (the ratio between the actual population difference and the minimally important difference) is between zero and unity. The MIDS criterion for sample size provides a useful alternative to the methods currently most commonly employed and taught.
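A back-of-envelope illustration of the MIDS idea for a two-sample comparison, using a normal (z) approximation: choose the per-group N at which an effect exactly equal to the minimally important difference just reaches significance. The function name and the z-approximation are my own simplifications, not the article's exact procedure.

```python
# Back-of-envelope N_MIDS for a two-sample comparison with a z-approximation:
# the per-group n at which an observed effect exactly equal to the minimally
# important difference (in SD units) is just barely significant; power at that
# effect size is then about 50%. Function name and defaults are illustrative.
import math
from scipy.stats import norm

def n_mids_two_sample(mid_sd_units, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(2 * (z / mid_sd_units) ** 2)

n = n_mids_two_sample(0.5)   # minimally important difference of 0.5 SD
print(n)                     # ~31 per group
print(4 * n)                 # the article's ~4 x N_MIDS rule for the CI-based goal when H0 is true
```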
11

Michalewicz, Marek, and Mark Priebatsch. "Perfect Scaling of the Electronic Structure Problem on a SIMD Architecture." Parallel Computing 21, no. 5 (1995): 853–70. https://doi.org/10.1016/0167-8191(94)00097-T.

Abstract:
We report on benchmark tests of computations of the total electronic density of states of a micro-crystallite of rutile TiO2 on MasPar MP-1 and MasPar MP-2 autonomous SIMD computers. The 3D spatial arrangement of atoms corresponds to the two-dimensional computational grid of processing elements (PE) plus memory (2D + 1D), while the interactions between the constituent atoms correspond to the communication between the PEs. The largest sample we study consists of 491,520 atoms and its size is 41.5 × 41.5 × 1.5 nm. Mathematically, the problem is equivalent to solving an n × n eigenvalue problem, where n ~ 2,500,000. The program is scalable in the number of atoms, so that the time required to run it is nearly independent of the size of the system in the x and y directions (2D PE mesh) and is step-wise linear in the z direction (memory axis). The total CPU time for the largest sample on a MasPar MP-2 computer with 16,384 processing elements is ~2.1 hours.
12

Tyas, Maulida Fajrining, Anang Kurnia, and Agus Mohamad Soleh. "TEXT CLUSTERING ONLINE LEARNING OPINION DURING COVID-19 PANDEMIC IN INDONESIA USING TWEETS." BAREKENG: Jurnal Ilmu Matematika dan Terapan 16, no. 3 (2022): 939–48. http://dx.doi.org/10.30598/barekengvol16iss3pp939-948.

Abstract:
To prevent the spread of the coronavirus, restrictions on social activities were implemented, including school activities, which drew both pros and cons in the community. Opinions about online learning are widely conveyed, mainly on Twitter. The tweets obtained can be used to extract information using text clustering to group topics about online learning during the pandemic in Indonesia. K-Means is often used and has good performance in the text clustering area. However, the problem of high dimensionality in textual data can result in difficult computations, so a sampling method is proposed. This paper examines whether a sampling method for clustering tweets can result in more efficient clustering than using the whole dataset. After pre-processing, five sample sizes (250, 500, 2500, 10000, and 20000) are selected from 28,300 tweets for K-Means clustering. Results showed that over 10 iterations, the three main cluster topics appeared 90%-100% of the time for sample sizes of 2500, 10000, and 20000, whereas sample sizes of 250 and 500 tended to produce the three main cluster topics only 20%-60% of the time. This means that around 8% to 35% of the tweets can yield representative clusters and efficient computation, about four times faster than using the entire dataset.
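A minimal scikit-learn sketch of the sample-then-cluster idea: draw a random subset of documents, vectorize with TF-IDF, and run K-Means. The toy documents, preprocessing, and choice of k are illustrative and do not reproduce the paper's Twitter data.

```python
# Sample-then-cluster sketch with scikit-learn: random subset of documents,
# TF-IDF features, K-Means with k = 3. Toy documents stand in for the tweets.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = (["online class is tiring zoom all day"] * 100
        + ["teachers give too many assignments online"] * 100
        + ["bad internet connection makes online learning hard"] * 100)

rng = np.random.default_rng(0)
sample = [docs[i] for i in rng.choice(len(docs), size=60, replace=False)]  # cluster only a sample

X = TfidfVectorizer().fit_transform(sample)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for c in range(3):
    print("cluster", c, ":", int(np.sum(km.labels_ == c)), "documents")
```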
13

Owen, Amanda J., and Laurence B. Leonard. "Lexical Diversity in the Spontaneous Speech of Children With Specific Language Impairment." Journal of Speech, Language, and Hearing Research 45, no. 5 (2002): 927–37. http://dx.doi.org/10.1044/1092-4388(2002/075).

Abstract:
The lexical diversity of children with specific language impairment (SLI) (ages 3 years 7 months to 7 years 3 months) was compared to that of normally developing same-age peers and younger normally developing children matched according to mean length of utterance in words (MLUw). Lexical diversity was calculated from spontaneous speech samples using D, a measure that uses repeated calculations of type-token ratio (TTR) to estimate how TTR changes as the speech samples increase in size. When D computations were based on 250-word samples, developmental differences were apparent. For both children with SLI and typically developing children, older subgroups showed higher D scores than younger subgroups, and subgroups with higher MLUws showed higher D scores than subgroups with lower MLUws. Children with SLI did not differ from same-age peers. At lower MLUw levels, children with SLI showed higher D scores than younger typically developing children matched for MLUw. The developmental sensitivity of D notwithstanding, comparisons using 100-utterance samples, in which the number of lexical tokens varied as a function of the children's MLUws, and comparisons between 250- and 500-word samples revealed the possible influence of sample size on this measure. However, analysis of the effect sizes using smaller and larger samples revealed that D is not affected by sample size to the degree seen for more traditional measures of lexical diversity.
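To illustrate why raw type-token ratio (TTR) depends on sample size, and the repeated-subsampling idea behind D-style measures, here is a small sketch that averages TTR over many fixed-size subsamples; it is not the actual D estimator used in the study, and the token stream is synthetic.

```python
# Why raw TTR shrinks as the sample grows, and the repeated-subsampling idea:
# average TTR over many random subsamples of a fixed token count. This is an
# illustration only, not the D estimator itself; the token stream is synthetic.
import numpy as np

def mean_ttr(tokens, size, n_draws=200, seed=None):
    rng = np.random.default_rng(seed)
    ttrs = [len(set(rng.choice(tokens, size=size, replace=False))) / size
            for _ in range(n_draws)]
    return float(np.mean(ttrs))

rng = np.random.default_rng(2)
tokens = list(rng.zipf(1.5, size=2000).astype(str))   # Zipf-like word frequencies as stand-in speech

print(len(set(tokens[:250])) / 250)   # raw TTR of a 250-token sample
print(len(set(tokens[:500])) / 500)   # raw TTR drops as the sample size doubles
print(mean_ttr(tokens, 100, seed=3))  # fixed-size subsampled TTR, comparable across speakers
```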
14

Nguyen, Cuong H., and Linh H. Tran. "The Application of Neural Networks to Predict the Water Evaporation Percentage and the Plastic Shrinkage Size of Self-Compacting Concrete Structure." Civil Engineering Journal 10, no. 1 (2024): 117–30. http://dx.doi.org/10.28991/cej-2024-010-01-07.

Abstract:
This article presents a solution using an artificial neural network and a neuro-fuzzy network to predict the rate of water evaporation and the size of the shrinkage of a self-compacting concrete mixture based on the concrete mixture parameters and the environment parameters. The concrete samples were mixed and measured at four different environmental conditions (i.e., humid, dry, hot with high humidity, and hot with low humidity), and two curing styles for the self-compacting concrete were measured. Data were collected for each sample at the time of mixing and pouring and every 60 minutes for the next ten hours to help create prediction models for the required parameters. A total of 528 samples were collected to create the training and testing data sets. The study proposed to use the classic Multi-Layer Perceptron and the modified Takagi-Sugeno-Kang neuro-fuzzy network to estimate the water evaporation rate and the shrinkage size of the concrete sample when using four inputs: the concrete water-to-binder ratio, environment temperature, relative humidity, and the time after pouring the concrete into the mold. Real-field experiments and numerical computations have shown that both models are good parameter predictors, achieving low errors. Both proposed networks achieved testing R2 values above 0.98, the mean of squared errors for water evaporation percentage was less than 1.43%, and the mean of squared errors for shrinkage sizes was less than 0.105 mm/m. The computation requirements of the two models in testing mode are also low, which allows their easy use in practical applications.
15

Yi, Jieyi, and Niansheng Tang. "Variational Bayesian Inference in High-Dimensional Linear Mixed Models." Mathematics 10, no. 3 (2022): 463. http://dx.doi.org/10.3390/math10030463.

Abstract:
In high-dimensional regression models, the Bayesian lasso with the Gaussian spike and slab priors is widely adopted to select variables and estimate unknown parameters. However, it involves large matrix computations in a standard Gibbs sampler. To solve this issue, the Skinny Gibbs sampler is employed to draw observations required for Bayesian variable selection. However, when the sample size is much smaller than the number of variables, the computation is rather time-consuming. As an alternative to the Skinny Gibbs sampler, we develop a variational Bayesian approach to simultaneously select variables and estimate parameters in high-dimensional linear mixed models under the Gaussian spike and slab priors of population-specific fixed-effects regression coefficients, which are reformulated as a mixture of a normal distribution and an exponential distribution. The coordinate ascent algorithm, which can be implemented efficiently, is proposed to optimize the evidence lower bound. The Bayes factor, which can be computed with the path sampling technique, is presented to compare two competing models in the variational Bayesian framework. Simulation studies are conducted to assess the performance of the proposed variational Bayesian method. An empirical example is analyzed by the proposed methodologies.
16

Zaremba, Adam. "A Performance Evaluation Model for Global Macro Funds." International Journal of Finance & Banking Studies (2147-4486) 3, no. 1 (2014): 161–72. http://dx.doi.org/10.20525/ijfbs.v3i1.177.

Abstract:
The paper concentrates on value and size effects in country portfolios. It contributes to academic literature threefold. First, I provide fresh evidence that the value and size effects may be useful in explaining the cross-sectional variation in country returns. The computations are based on a broad sample of 66 countries in years 2000-2013. Second, I document that the country-level value and size effects are indifferent to currency conversions. Finally, I introduce an alternative macro-level Fama-French model, which, contrary to its prototype, employs country-based factors. I show that applying this modification makes the model more successful in evaluation of funds with global investment mandate than the standard CAPM and FF models.
17

Yuan, Hongyan, Jonah H. Lee, and James E. Guilkey. "Stochastic reconstruction of the microstructure of equilibrium form snow and computation of effective elastic properties." Journal of Glaciology 56, no. 197 (2010): 405–14. http://dx.doi.org/10.3189/002214310792447770.

Abstract:
Three-dimensional geometric descriptions of microstructure are indispensable to obtain the structure–property relationships of snow. Because snow is a random heterogeneous material, it is often helpful to construct stochastic geometric models that can be used to model physical and mechanical properties of snow. In the present study, the Gaussian random field-based stochastic reconstruction of the sieved and sintered dry-snow sample with grain size less than 1 mm is investigated. The one- and two-point correlation functions of the snow samples are used as input for the stochastic snow model. Several statistical descriptors not used as input to the stochastic reconstruction are computed for the real and reconstructed snow to assess the quality of the reconstructed images. For the snow samples and the reconstructed snow microstructure, we also estimate the mechanical properties and the size of the associated representative volume element using numerical simulations as additional assessment of the quality of the reconstructed images. The results indicate that the stochastic reconstruction technique used in this paper is reasonably accurate, robust and highly efficient in numerical computations for the high-density snow samples we consider.
18

Zeraati, Roxana, Tatiana A. Engel, and Anna Levina. "A flexible Bayesian framework for unbiased estimation of timescales." Nature Computational Science 2, no. 3 (2022): 193–204. http://dx.doi.org/10.1038/s43588-022-00214-3.

Abstract:
Timescales characterize the pace of change for many dynamic processes in nature. They are usually estimated by fitting the exponential decay of data autocorrelation in the time or frequency domain. Here we show that this standard procedure often fails to recover the correct timescales due to a statistical bias arising from the finite sample size. We develop an alternative approach to estimate timescales by fitting the sample autocorrelation or power spectrum with a generative model based on a mixture of Ornstein–Uhlenbeck processes using adaptive approximate Bayesian computations. Our method accounts for finite sample size and noise in data and returns a posterior distribution of timescales that quantifies the estimation uncertainty and can be used for model selection. We demonstrate the accuracy of our method on synthetic data and illustrate its application to recordings from the primate cortex. We provide a customizable Python package that implements our framework via different generative models suitable for diverse applications.
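For contrast with the Bayesian method described above, the sketch below shows the conventional estimate the authors argue is biased: fitting an exponential decay to the sample autocorrelation of a short AR(1)/Ornstein-Uhlenbeck-like series. Variable names and the simulation setup are illustrative.

```python
# The conventional (biased) timescale estimate the paper critiques: fit an
# exponential decay to the sample autocorrelation of a short AR(1) series, a
# discretized Ornstein-Uhlenbeck process. The authors' method replaces this
# with adaptive approximate Bayesian computation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
tau_true, n = 20.0, 500                        # true timescale (in steps) and a short trial
x = np.zeros(n)
for t in range(1, n):
    x[t] = np.exp(-1.0 / tau_true) * x[t - 1] + rng.normal()

def sample_acf(x, max_lag):
    x = x - x.mean()
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)]) / np.dot(x, x)

lags = np.arange(51)
acf = sample_acf(x, 50)
(tau_hat,), _ = curve_fit(lambda k, tau: np.exp(-k / tau), lags, acf, p0=[10.0])
print(tau_true, round(float(tau_hat), 1))      # the fit is typically biased toward shorter timescales
```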
19

Chang, S. N., D. T. Chung, Y. F. Li, and S. Nemat-Nasser. "Target Configurations for Plate-Impact Recovery Experiments." Journal of Applied Mechanics 59, no. 2 (1992): 305–11. http://dx.doi.org/10.1115/1.2899521.

Abstract:
Normal plate impact recovery experiments have been performed on thin plates of ceramics, with and without a back momentum trap, in a one-stage gas gun. The free-surface velocity of the momentum trap was measured, using a normal velocity (or displacement) interferometer. In all recovered samples, cross-shaped cracks were seen to have been formed during the impact, at impact velocities as low as 27 m/s, even though star-shaped flyer plates were used. These cracks appear to be due to in-plane tensile stresses which develop in the sample as a result of the size mismatch between the flyer plate and the specimen (the impacting area of the flyer being smaller than the impacted area of the target) and because of the free-edge effects. Finite element computations, using PRONTO-2D and DYNA-3D, based on linear elasticity, confirm this observation. Based on numerical computations, a simple configuration for plate impact experiments is proposed, which minimizes the in-plane tensile stresses, allowing recovery experiments at much higher velocities than possible by the star-shaped flyer plate configuration. This is confirmed by normal plate impact recovery experiments which produced no tensile cracks at velocities in a range where the star-shaped flyer invariably introduces cross-shaped cracks in the sample. The new configuration includes lateral as well as longitudinal momentum traps.
20

Zaremba, Adam. "A Performance Evaluation Model for Global Macro Funds." International Journal of Finance & Banking Studies (2147-4486) 3, no. 1 (2016): 161. http://dx.doi.org/10.20525/.v3i1.177.

Abstract:
The paper concentrates on value and size effects in country portfolios. It contributes to academic literature threefold. First, I provide fresh evidence that the value and size effects may be useful in explaining the cross-sectional variation in country returns. The computations are based on a broad sample of 66 countries in years 2000-2013. Second, I document that the country-level value and size effects are indifferent to currency conversions. Finally, I introduce an alternative macro-level Fama-French model, which, contrary to its prototype, employs country-based factors. I show that applying this modification makes the model more successful in evaluation of funds with global investment mandate than the standard CAPM and FF models.
21

Torres-Huitzil, Cesar. "Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters." Scientific World Journal 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/108103.

Abstract:
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024×1024 images with up to 255×255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding.
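A software illustration of the van Herk/Gil-Werman idea the architecture builds on: a 1-D running max with roughly three comparisons per sample, independent of the window size k. This NumPy sketch is not the FPGA design itself.

```python
# 1-D van Herk/Gil-Werman (HGW) running max in NumPy: roughly three comparisons
# per sample regardless of the window size k, which is the property the hardware
# architecture exploits. A software sketch only, not the FPGA design.
import numpy as np

def running_max_hgw(f, k):
    """Max over each window f[x : x + k], for x = 0 .. len(f) - k."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    fp = np.concatenate([f, np.full((-n) % k, -np.inf)])                 # pad to a multiple of k
    blocks = fp.reshape(-1, k)
    g = np.maximum.accumulate(blocks, axis=1).ravel()                    # left-to-right in each block
    h = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # right-to-left in each block
    return np.maximum(h[:n - k + 1], g[k - 1:n])                         # window max = max(h[x], g[x+k-1])

f = np.array([3, 1, 4, 1, 5, 9, 2, 6])
print(running_max_hgw(f, 3))                                             # [4. 4. 5. 9. 9. 9.]
```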
22

Francis, Octave J., George M. Ware, Allen S. Carman, Gary P. Kirschenheuter, and Shia S. Kuan. "Use of Ten Gram Samples of Corn for Determination of Mycotoxins." Journal of AOAC INTERNATIONAL 71, no. 1 (1988): 41–43. http://dx.doi.org/10.1093/jaoac/71.1.41.

Abstract:
Data were gathered, during a study on the development of an automated system for the extraction, cleanup, and quantitation of mycotoxins in corn, to determine if it was scientifically sound to reduce the analytical sample size. Five, 10, and 25 g test portions were analyzed and statistically compared with 50 g test portions of the same composites for aflatoxin concentration variance. Statistical tests used to determine whether the 10 and 50 g sample sizes differed significantly showed a satisfactory observed variance ratio (Fobs) of 2.03 for computations of pooled standard deviations; paired t-test values of 0.952, 1.43, and 0.224 were computed for each of the 3 study samples. The results meet acceptable limits, since each sample’s t-test result is less than the published value of |t|, which is 1.6909 for the test conditions. The null hypothesis is retained since the sample sizes do not give significantly different values for the mean analyte concentration. The percent coefficients of variation (CVs) for all samples tested were within the expected range. In addition, the variance due to sample mixing was evaluated using radioisotope-labeled materials, yielding an acceptable CV of 22.2%. The variance due to the assay procedure was also evaluated and showed an aflatoxin B1 recovery of 78.9% and a CV of 11.4%. Results support the original premise that a sufficiently ground and blended sample would produce an analyte variance for a 10 g sample that was statistically comparable with that for a 50 g sample.
23

Abakumov, A. I., I. I. Safronov, A. S. Smirnov, et al. "NUMERICAL SIMULATION OF A DROP WEIGHT TEST OF DUCTILE PIPE STEEL." Problems of Strength and Plasticity 82, no. 4 (2020): 493–506. http://dx.doi.org/10.32326/1814-9146-2020-82-4-493-506.

Abstract:
The processes in the metal sample of a supply pipeline realized under drop-weight tests (DWT, or DWTT according to ASTM) are studied. DWT is a proof test of the pipeline metal that should ensure high resistance of the pipeline against extensive destruction. Numerical simulation of DWT with a steel sample of full thickness was performed; the steel had K65 strength grade. The parallel finite-element computer code DANCO developed at RFNC-VNIIEF was used for the simulations. A detailed description of the rupture formation process required a fine-enough mesh and a supercomputer. To carry out the numerical simulation of the process, the constants of the deformation diagram were used, obtained on the basis of static and dynamic tensile tests of samples at room temperature. A modified Gurson–Tvergaard–Needleman (GTNm) model for macro-viscous steel destruction (ductile failure) was used to describe the strain and the destruction of the metal. The modification makes it possible to describe direct and oblique cuts and their combinations in case of ductile failure of small-size objects (rods, plates, shells). The calculated dependences of the movement of the crack tip on the movement of the load, and of the resistance force of the sample on the movement of the crack tip, are presented. We obtained good agreement between the computations and experimental data with regard to the “force–displacement” strength parameter, the deformed profiles and the macro-geometry of the ruptured sample after the tests. The computation results reveal the mechanics of the crack origination, start and propagation in the sample, and describe the plastic-flow energy distribution in the process of dynamic destruction. The results of the work can be used in the development of the requirements and implementation conditions of the “tooled” DWT, and for numerical simulation of extensive destruction at the main pipeline.
24

Nur, Abdusomad, and Yonas Muanenda. "Design and Evaluation of Real-Time Data Storage and Signal Processing in a Long-Range Distributed Acoustic Sensing (DAS) Using Cloud-Based Services." Sensors 24, no. 18 (2024): 5948. http://dx.doi.org/10.3390/s24185948.

Abstract:
In cloud-based Distributed Acoustic Sensing (DAS) sensor data management, we are confronted with two primary challenges. First, the development of efficient storage mechanisms capable of handling the enormous volume of data generated by these sensors poses a challenge. To solve this issue, we propose a method to address the issue of handling the large amount of data involved in DAS by designing and implementing a pipeline system to efficiently send the big data to DynamoDB in order to fully use the low latency of the DynamoDB data storage system for a benchmark DAS scheme for performing continuous monitoring over a 100 km range at a meter-scale spatial resolution. We employ the DynamoDB functionality of Amazon Web Services (AWS), which allows highly expandable storage capacity with latency of access of a few tens of milliseconds. The different stages of DAS data handling are performed in a pipeline, and the scheme is optimized for high overall throughput with reduced latency suitable for concurrent, real-time event extraction as well as the minimal storage of raw and intermediate data. In addition, the scalability of the DynamoDB-based data storage scheme is evaluated for linear and nonlinear variations of number of batches of access and a wide range of data sample sizes corresponding to sensing ranges of 1–110 km. The results show latencies of 40 ms per batch of access with low standard deviations of a few milliseconds, and latency per sample decreases for increasing the sample size, paving the way toward the development of scalable, cloud-based data storage services integrating additional post-processing for more precise feature extraction. The technique greatly simplifies DAS data handling in key application areas requiring continuous, large-scale measurement schemes. In addition, the processing of raw traces in a long-distance DAS for real-time monitoring requires the careful design of computational resources to guarantee requisite dynamic performance. Now, we will focus on the design of a system for the performance evaluation of cloud computing systems for diverse computations on DAS data. This system is aimed at unveiling valuable insights into performance metrics and operational efficiencies of computations on the data in the cloud, which will provide a deeper understanding of the system’s performance, identify potential bottlenecks, and suggest areas for improvement. To achieve this, we employ the CloudSim framework. The analysis reveals that the virtual machine (VM) performance decreases significantly the processing time with more capable VMs, influenced by Processing Elements (PEs) and Million Instructions Per Second (MIPS). The results also reflect that, although a larger number of computations is required as the fiber length increases, with the subsequent increase in processing time, the overall speed of computation is still suitable for continuous real-time monitoring. We also see that VMs with lower performance in terms of processing speed and number of CPUs have more inconsistent processing times compared to those with higher performance, while not incurring significantly higher prices. Additionally, the impact of VM parameters on computation time is explored, highlighting the importance of resource optimization in the DAS system design for efficient performance. The study also observes a notable trend in processing time, showing a significant decrease for every additional 50,000 columns processed as the length of the fiber increases. 
This finding underscores the efficiency gains achieved with larger computational loads, indicating improved system performance and capacity utilization as the DAS system processes more extensive datasets.
25

Vandermeer, Ben, Ingeborg van der Tweel, Marijke C. Jansen-van der Weide, et al. "Comparison of nuisance parameters in pediatric versus adult randomized trials: a meta-epidemiologic empirical evaluation." BMC Medical Research Methodology 18, no. 1 (2018): 7. https://doi.org/10.1186/s12874-017-0456-8.

Abstract:
Background: We wished to compare the nuisance parameters of pediatric vs. adult randomized trials (RCTs) and determine if the latter can be used in sample size computations of the former. Methods: In this meta-epidemiologic empirical evaluation we examined meta-analyses from the Cochrane Database of Systematic Reviews, with at least one pediatric RCT and at least one adult RCT. Within each meta-analysis of binary efficacy outcomes, we calculated the pooled control-group event rate (CER) separately across all pediatric and adult trials, using random-effect models, and subsequently calculated the control-group event-rate risk ratio (CER-RR) of the pooled pediatric CERs vs. adult CERs. Within each meta-analysis with continuous outcomes we calculated the pooled control-group effect standard deviation (CE-SD) separately across all pediatric and adult trials and subsequently calculated the CE-SD ratio of the pooled pediatric CE-SDs vs. adult CE-SDs. We then calculated across all meta-analyses the pooled CER-RRs and pooled CE-SD ratios (primary endpoints) and the pooled magnitude of effect sizes of CER-RRs and CE-SD ratios using REMs. A ratio < 1 indicates that pediatric trials have smaller nuisance parameters than adult trials. Results: We analyzed 208 meta-analyses (135 for binary outcomes, 73 for continuous outcomes). For binary outcomes, pediatric RCTs had on average 10% smaller CERs than adult RCTs (summary CER-RR: 0.90; 95% CI: 0.83, 0.98). For mortality outcomes the summary CER-RR was 0.48 (95% CI: 0.31, 0.74). For continuous outcomes, pediatric RCTs had on average 26% smaller CE-SDs than adult RCTs (summary CE-SD ratio: 0.74). Conclusions: Clinically relevant differences in nuisance parameters between pediatric and adult trials were detected. These differences have implications for the design of future studies. Extrapolation of nuisance parameters from adult trials to pediatric trials for sample-size calculations should be done cautiously.
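To make the role of the control-group event rate (CER) concrete, here is a standard two-proportion sample-size formula with the abstract's 10% lower pediatric CER plugged in as a what-if; the numbers and the assumed risk ratio are illustrative, not from the paper.

```python
# Standard two-proportion sample-size formula, to illustrate how the control-group
# event rate (CER) -- the nuisance parameter compared in this study -- drives the
# numbers. The adult CER and risk ratio below are made-up planning assumptions.
import math
from scipy.stats import norm

def n_per_arm(p_control, risk_ratio, alpha=0.05, power=0.80):
    p1, p2 = p_control, p_control * risk_ratio
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2)

adult_cer = 0.30
print(n_per_arm(adult_cer, risk_ratio=0.75))         # planned with the adult CER
print(n_per_arm(0.90 * adult_cer, risk_ratio=0.75))  # what-if: pediatric CER ~10% lower
```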
26

Lessard, Sabin. "Recurrence Equations for the Probability Distribution of Sample Configurations in Exact Population Genetics Models." Journal of Applied Probability 47, no. 3 (2010): 732–51. http://dx.doi.org/10.1239/jap/1285335406.

Abstract:
Recurrence equations for the number of types and the frequency of each type in a random sample drawn from a finite population undergoing discrete, nonoverlapping generations and reproducing according to the Cannings exchangeable model are deduced under the assumption of a mutation scheme with infinitely many types. The case of overlapping generations in discrete time is also considered. The equations are developed for the Wright-Fisher model and the Moran model, and extended to the case of the limit coalescent with nonrecurrent mutation as the population size goes to ∞ and the mutation rate to 0. Computations of the total variation distance for the distribution of the number of types in the sample suggest that the exact Moran model provides a better approximation for the sampling formula under the exact Wright-Fisher model than the Ewens sampling formula in the limit of the Kingman coalescent with nonrecurrent mutation. On the other hand, this model seems to provide a good approximation for a Λ-coalescent with nonrecurrent mutation as long as the probability of multiple mergers and the mutation rate are small enough.
27

Lessard, Sabin. "Recurrence Equations for the Probability Distribution of Sample Configurations in Exact Population Genetics Models." Journal of Applied Probability 47, no. 03 (2010): 732–51. http://dx.doi.org/10.1017/s0021900200007038.

Abstract:
Recurrence equations for the number of types and the frequency of each type in a random sample drawn from a finite population undergoing discrete, nonoverlapping generations and reproducing according to the Cannings exchangeable model are deduced under the assumption of a mutation scheme with infinitely many types. The case of overlapping generations in discrete time is also considered. The equations are developed for the Wright-Fisher model and the Moran model, and extended to the case of the limit coalescent with nonrecurrent mutation as the population size goes to ∞ and the mutation rate to 0. Computations of the total variation distance for the distribution of the number of types in the sample suggest that the exact Moran model provides a better approximation for the sampling formula under the exact Wright-Fisher model than the Ewens sampling formula in the limit of the Kingman coalescent with nonrecurrent mutation. On the other hand, this model seems to provide a good approximation for a Λ-coalescent with nonrecurrent mutation as long as the probability of multiple mergers and the mutation rate are small enough.
28

Wibowo, Nur Aji, Susatyo Pranoto, Cucun Alep Riyanto, and Andreas Setiawan. "A micromagnetic study: lateral size dependence of the macroscopic properties of rectangular parallelepiped Cobalt-ferrite nanoferromagnetic." Journal of Physics: Theories and Applications 4, no. 1 (2020): 16. http://dx.doi.org/10.20961/jphystheor-appl.v4i1.46692.

Abstract:
The purpose of this study is to provide systematic information, through micromagnetic simulations, on the impact of particle size on the magnetic characteristics of Cobalt-ferrite MNPs. The micromagnetic computations performed were based on the LLG equation. The MNP sample was simulated in the form of a rectangular parallelepiped with a thickness of 20 nm and a square surface with lateral length varying from 10 to 80 nm at intervals of 10 nm. The results of this study indicate that the size changes in Cobalt-ferrite MNPs have a significant impact on various magnetic properties, such as the magnitude of the barrier energy, the coercive and nucleation fields, the magnetization rate, the magnetization curve profile, and the magnetization mode. Cobalt-ferrite MNPs with a size of 10 nm show a single domain with a relatively short magnetization reversal time and a high coercive field.
29

Li, Wei, Nianbo Dong, and Rebecca A. Maynard. "Power Analysis for Two-Level Multisite Randomized Cost-Effectiveness Trials." Journal of Educational and Behavioral Statistics 45, no. 6 (2020): 690–718. http://dx.doi.org/10.3102/1076998620911916.

Abstract:
Cost-effectiveness analysis is a widely used educational evaluation tool. The randomized controlled trials that aim to evaluate the cost-effectiveness of the treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take account of sample sizes, the effect size, covariates effects, nesting effects for both cost and effectiveness measures, the ratio of the total variance of the cost measure to the total variance of effectiveness measure, and correlations between cost and effectiveness measures at each level. Illustrative examples that show how power is influenced by the sample sizes, nesting effects, covariate effects, and correlations between cost and effectiveness measures are presented. We also demonstrate how the calculations can be applied in the design phase of two-level multisite RCETs using the software PowerUp!-CEA (Version 1.0).
30

Sakalauskas, Leonidas, and Kestutis Žilinskas. "APPLICATION OF STATISTICAL CRITERIA TO OPTIMALITY TESTING IN STOCHASTIC PROGRAMMING." Technological and Economic Development of Economy 12, no. 4 (2006): 314–20. http://dx.doi.org/10.3846/13928619.2006.9637760.

Abstract:
In this paper a stochastic adaptive method has been developed to solve stochastic linear problems by a finite sequence of Monte-Carlo sampling estimators. The method is grounded on adaptive regulation of the size of Monte-Carlo samples and a statistical termination procedure, taking into consideration the statistical modeling error. Our approach distinguishes itself by treating the accuracy of the solution in a statistical manner, testing the hypothesis of optimality according to statistical criteria, and estimating confidence intervals of the objective and constraint functions. The adjustment of sample size, when it is taken inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient, guarantees convergence a.s. at a linear rate. We examine four estimators for the stochastic gradient: differentiation of the integral with respect to x, the finite difference approach, the Simulated Perturbation Stochastic Approximation approach, and the Likelihood Ratio approach. The numerical study and practical examples corroborate the theoretical conclusions and show that the procedures developed make it possible to solve stochastic problems with sufficiently agreeable accuracy by means of an acceptable amount of computations.
31

Saxena, Nishank, Ronny Hofmann, Amie Hows, et al. "Rock compressibility from microcomputed tomography images: Controls on digital rock simulations." GEOPHYSICS 84, no. 4 (2019): WA127—WA139. http://dx.doi.org/10.1190/geo2018-0499.1.

Abstract:
Rock compressibility is a major control of reservoir compaction, yet only limited core measurements are available to constrain estimates. Improved analytical and computational estimates of rock compressibility of reservoir rock can improve forecasts of reservoir production performance and the geomechanical integrity of compacting reservoirs. The fast-evolving digital rock technology can potentially overcome the need for simplification of pores (e.g., ellipsoids) to estimate rock compressibility as the computations are performed on an actual pore-scale image acquired using 3D microcomputed tomography (micro-CT). However, the computed compressibility using a digital image is impacted by numerous factors, including imaging conditions, image segmentation, constituent properties, choice of numerical simulator, rock field of view, how well the grain contacts are resolved in an image, and the treatment of grain-to-grain contacts. We have analyzed these factors and quantify their relative contribution to the rock moduli computed using micro-CT images of six rocks: a Fontainebleau sandstone sample, two Berea sandstone samples, a Castelgate sandstone sample, a grain pack, and a reservoir rock. We find that image-computed rock moduli are considerably stiffer than those inferred using laboratory-measured ultrasonic velocities. This disagreement cannot be solely explained by any one of the many controls when considered in isolation, but it can be ranked by their relative contribution to the overall rock compressibility. Among these factors, the image resolution generally has the largest impact on the quality of image-derived compressibility. For elasticity simulations, the quality of an image resolution is controlled by the ratio of the contact length and image voxel size. Images of poor resolution overestimate contact lengths, resulting in stiffer simulation results.
32

Farsia, Lina, Sarair, Fadhlullah Romi, Siti Al-Mukarrama, and Septhia Irnanda. "ENHANCING STUDENTS’ ENGLISH PRONUNCIATION THROUGH PHONETIC METHODOLOGY AT MTSN 3 BANDA ACEH." Proceedings of International Conference on Education 2, no. 1 (2024): 466–72. http://dx.doi.org/10.32672/pice.v2i1.1396.

Abstract:
This study investigates how well first-grade students at MTsN 3 Banda Aceh can pronounce English words correctly when taught with a phonetic approach. The study uses tests, questionnaires, and experimental instruction sessions to investigate the efficacy of the phonetic technique using a pre-experimental research design. All first-grade students at MTsN 3 Banda Aceh make up the target population, and 30 students were chosen as the sample for the research. The data are analyzed using statistical techniques such as t-tests, percentage computations, and SPSS 21. Based on the difference between pretest and posttest scores, the results show that students' pronunciation skills significantly improved after using the phonetic approach. Furthermore, both the quantitative and qualitative results show that students had positive attitudes toward the phonetic technique.
33

Gulo, Senang Hati, and Andre Hasudungan Lubis. "Penerapan Multi-Layer Perceptron untuk Mengklasifikasi Penduduk Kurang Mampu." Explorer 4, no. 2 (2024): 51–59. https://doi.org/10.47065/explorer.v4i2.1146.

Abstract:
The classification of the less capable population in Afulu Sub-district is currently reliant on a manual system, resulting in prolonged processing times. To address this issue, this research endeavors to develop a practical application for the classification of population data, with the primary objective of expediting the processing of population data in Afulu Sub-district. The study will focus on nine villages within the sub-district, encompassing a total population of 11,722 individuals, with a sample size of 386. The present study utilizes the Multilayer Perceptron, a classical algorithm that continues to be the most widely employed method in numerous researches. The findings of the present study indicate that out of the total sample size, 152 individuals were classified as capable, 86 individuals were classified as moderately capable, and a substantial number of 148 individuals were classified as less capable. The classification results were evaluated using a confusion matrix. The 3-5-1 architecture, comprising of 3 input layers, 5 hidden layers, and 1 output layer, was found to be the most superior. This architecture demonstrated an accuracy value of 96.9%, a recall value of 92%, a precision value of 98.5%, and an F-score value of 94.9%. A detailed elucidation of the parameters employed, the formulas utilized, and several computations performed are explained further.
34

Mousa, Wail A., Said Boussakta, Desmond C. McLernon, and Mirko Van der Baan. "Implementation of 2D explicit depth extrapolation FIR digital filters for 3D seismic volumes using singular value decomposition." GEOPHYSICS 75, no. 1 (2010): V1—V12. http://dx.doi.org/10.1190/1.3294424.

Abstract:
We propose a new scheme for implementing predesigned 2D complex-valued wavefield extrapolation finite impulse response (FIR) digital filters, which are used for extrapolating 3D seismic wavefields. The implementation is based on singular value decomposition (SVD) of quadrantally symmetric 2D FIR filters (extrapolators). To simplify the SVD computations for such a filter impulse response structure, we apply a special matrix transformation on the extrapolation FIR filter impulse responses where we guarantee the retention of their wavenumber phase response. Unlike the existing 2D FIR filter implementation methods that are used for this geophysical application such as the McClellan transformation or its improved version, this implementation via SVD results in perfect circularly symmetrical magnitude and phase wavenumber responses. In this paper, we also demonstrate that the SVD method can save (depending on the filter size) more than 23% of the number of multiplications per output sample and approximately 62% of the number of additions per output sample when compared to direct implementation with quadrantal symmetry via true 2D convolution. Finally, an application to extrapolation of a seismic impulse is shown to prove our theoretical conclusions.
35

Grohmann, Peter, and Michael S. J. Walter. "Speeding up Statistical Tolerance Analysis to Real Time." Applied Sciences 11, no. 9 (2021): 4207. http://dx.doi.org/10.3390/app11094207.

Abstract:
Statistical tolerance analysis based on Monte Carlo simulation can be applied to obtain a cost-optimized tolerance specification that satisfies both the cost and quality requirements associated with manufacturing. However, this process requires time-consuming computations. We found that an implementation that uses the graphics processing unit (GPU) for vector-chain-based statistical tolerance analysis scales better with increasing sample size than a similar implementation on the central processing unit (CPU). Furthermore, we identified a significant potential for reducing runtime by using array vectorization with NumPy, the proper selection of row- and column- major order, and the use of single precision floating-point numbers for the GPU implementation. In conclusion, we present open source statistical tolerance analysis and statistical tolerance synthesis approaches with Python that can be used to improve existing workflows to real time on regular desktop computers.
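A CPU-side NumPy sketch of the vectorized Monte Carlo tolerance analysis discussed above, for a simple linear 1-D stack, using single-precision floats as one of the speed levers the paper mentions; it is not the authors' GPU implementation, and the dimensions and specification limits are made up.

```python
# Vectorized Monte Carlo tolerance analysis of a simple linear 1-D stack with NumPy
# (array vectorization and single-precision floats are two of the speed levers noted
# above); a CPU sketch, not the authors' GPU implementation.
import numpy as np

rng = np.random.default_rng(4)
nominal = np.array([50.0, 30.0, 19.8], dtype=np.float32)   # housing and two parts (mm)
tol = np.array([0.10, 0.08, 0.05], dtype=np.float32)       # symmetric +/- tolerances
sigma = tol / 3.0                                          # assume +/- tol spans 3 sigma

n = 1_000_000
parts = rng.normal(nominal, sigma, size=(n, 3)).astype(np.float32)
gap = parts[:, 0] - parts[:, 1] - parts[:, 2]              # closing clearance of the chain

lo, hi = 0.05, 0.35                                        # clearance specification (mm)
print(float(gap.mean()), float(gap.std()))
print(float(np.mean((gap < lo) | (gap > hi))) * 1e6, "ppm out of spec")
```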
36

Jin, Lin, Curtis W. Jarand, Mark L. Brader, and Wayne F. Reed. "Angle-dependent effects in DLS measurements of polydisperse particles." Measurement Science and Technology 33, no. 4 (2022): 045202. http://dx.doi.org/10.1088/1361-6501/ac42b2.

Abstract:
Dynamic light scattering (DLS) is widely used for analyzing biological polymers and colloids. Its application to nanoparticles in medicine is becoming increasingly important with the recent emergence of prominent lipid nanoparticle (LNP)-based products, such as the SARS-CoV-2 vaccines from Pfizer, Inc.-BioNTech (BNT162b2) and Moderna, Inc. (mRNA-1273). DLS plays an important role in the characterization and quality control of nanoparticle-based therapeutics and vaccines. However, most DLS instruments have a single detection angle θ, and the amplitude of the scattering vector, q, varies among them according to the relationship q = (4πn/λ0)sin(θ/2), where λ0 is the laser wavelength. Results for identical, polydisperse samples among instruments of varying q yield different hydrodynamic diameters, because, as particles become larger they scatter less light at higher q, so that higher-q instruments will under-sample large particles in polydisperse populations, and report higher z-average diffusion coefficients, and hence smaller effective hydrodynamic diameters than lower-q instruments. As particle size reaches the Mie regime the scattering envelope manifests angular maxima and minima, and the monotonic decrease of average size versus q is lost. The discrepancy among instruments of different q is hence fundamental, and not merely technical. This work examines results for different q-value instruments, using mixtures of monodisperse latex sphere standards, for which experimental measurements agree well with computations, and also polydisperse solutions of physically-degraded LNPs, for which results follow expected trends. Mie effects on broad unimodal populations are also considered. There is no way to predict results between two instruments with different q for samples of unknown particle size distributions. Initial analysis of the polydispersity index among different instruments shows a technical difference due to method of autocorrelation analysis, in addition to the fundamental q-effect.
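The scattering-vector relation quoted in the abstract can be evaluated directly; the short sketch below computes q for a few common DLS detection angles, with an assumed refractive index and laser wavelength.

```python
# The scattering-vector amplitude from the relation quoted above,
# q = (4*pi*n/lambda0) * sin(theta/2), evaluated for common DLS detection angles.
# The refractive index (water) and 633 nm wavelength are illustrative assumptions.
import numpy as np

n_medium = 1.33          # refractive index of the dispersant
lambda0 = 633e-9         # laser wavelength in metres

for theta_deg in (90, 135, 173):
    theta = np.deg2rad(theta_deg)
    q = 4 * np.pi * n_medium / lambda0 * np.sin(theta / 2)
    print(f"theta = {theta_deg:3d} deg  ->  q = {q:.3e} 1/m")
```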
37

Zhou, Chuanyuan, Zhenyu Liu, Chan Qiu, and Jianrong Tan. "A quasi-Monte Carlo statistical three-dimensional tolerance analysis method of products based on edge sampling." Assembly Automation 41, no. 4 (2021): 501–13. http://dx.doi.org/10.1108/aa-09-2020-0144.

Abstract:
Purpose: The conventional statistical method of three-dimensional tolerance analysis requires numerous pseudo-random numbers and consumes enormous computations to increase the calculation accuracy, such as the Monte Carlo simulation. The purpose of this paper is to propose a novel method to overcome the problems. Design/methodology/approach: With the combination of the quasi-Monte Carlo method and the unified Jacobian-torsor model, this paper proposes a three-dimensional tolerance analysis method based on edge sampling. By setting reasonable evaluation criteria, the sequence numbers representing relatively smaller deviations are excluded and the remaining numbers are selected and kept which represent deviations approximate to and still comply with the tolerance requirements. Findings: The case study illustrates the effectiveness and superiority of the proposed method in that it can reduce the sample size, diminish the computations, predict wider tolerance ranges and improve the accuracy of three-dimensional tolerance of precision assembly simultaneously. Research limitations/implications: The proposed method may be applied only when the dimensional and geometric tolerances are interpreted in the three-dimensional tolerance representation model. Practical implications: The proposed tolerance analysis method can evaluate the impact of manufacturing errors on the product structure quantitatively and provide a theoretical basis for structural design, process planning and manufacture inspection. Originality/value: The paper is original in proposing edge sampling as a sampling strategy for generating deviation numbers in tolerance analysis.
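To contrast quasi-random with pseudo-random sampling (the general idea behind quasi-Monte Carlo tolerance analysis), the sketch below compares a scrambled Sobol sequence from SciPy with plain Monte Carlo on a toy linear stack; it does not reproduce the paper's edge-sampling strategy or the Jacobian-torsor model.

```python
# Scrambled Sobol (quasi-Monte Carlo) vs. plain Monte Carlo sampling for a toy
# linear tolerance stack; the paper's edge-sampling strategy and Jacobian-torsor
# model are not reproduced here. The deviation standard deviations are made up.
import numpy as np
from scipy.stats import norm, qmc

sigma = np.array([0.02, 0.03, 0.01])            # standard deviations of three deviation sources
true_std = float(np.linalg.norm(sigma))         # analytic std of their sum

def stack_std(u):
    """Map uniform samples in (0,1)^3 to normal deviations and return the stack's std."""
    z = norm.ppf(u) * sigma
    return float(z.sum(axis=1).std())

m = 11                                          # 2**11 = 2048 samples for both schemes
sobol = qmc.Sobol(d=3, scramble=True, seed=0).random_base2(m)
plain = np.random.default_rng(0).random((2**m, 3))

print("analytic :", true_std)
print("Sobol QMC:", stack_std(sobol))
print("plain MC :", stack_std(plain))
```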
APA, Harvard, Vancouver, ISO, and other styles
38

Maghoul, Amir, Ingve Simonsen, Ali Rostami, and Peyman Mirtaheri. "An Optical Modeling Framework for Coronavirus Detection Using Graphene-Based Nanosensor." Nanomaterials 12, no. 16 (2022): 2868. http://dx.doi.org/10.3390/nano12162868.

Full text
Abstract:
The outbreak of the COVID-19 virus has confronted the world with a new and dangerous challenge due to its contagious nature. Hence, developing sensing technologies that detect the coronavirus rapidly can create favorable conditions for controlling pandemics of dangerous diseases. Because of the nanoscale size of this virus, a good understanding of its optical behavior is needed, which can provide extraordinary insight for the more efficient design of sensing devices. For the first time, this paper presents an optical modeling framework for a COVID-19 particle in the blood and extracts its optical characteristics from numerical computations. To this end, a theoretical model of a COVID-19 particle is proposed based on the most recent experimental results available in the literature to simulate the optical behavior of the coronavirus under varying physical conditions. To obtain the optical properties of the COVID-19 model, the light reflectance of the structure is then simulated for different geometrical sizes, including the diameter of the COVID-19 particle and the size of the spikes surrounding it. It is found that the reflectance spectra are very sensitive to geometric changes of the coronavirus. Furthermore, the density of COVID-19 particles is investigated when light is incident on different sides of the sample. Following this, we propose a nanosensor based on graphene, silicon, and gold nanodisks and demonstrate the functionality of the designed devices for detecting COVID-19 particles inside blood samples. The presented nanosensor design can thus serve as a practical basis for creating nanoelectronic kits and wearable devices with considerable potential for fast virus detection.
APA, Harvard, Vancouver, ISO, and other styles
39

Whitehead, John. "Group sequential trials revisited: Simple implementation using SAS." Statistical Methods in Medical Research 20, no. 6 (2010): 635–56. http://dx.doi.org/10.1177/0962280210379036.

Full text
Abstract:
The methodology of group sequential trials is now well established and widely implemented. The benefits of the group sequential approach are generally acknowledged, and its use, when applied properly, is accepted by researchers and regulators. This article describes how a wide range of group sequential designs can easily be implemented using two accessible SAS functions. One of these, PROBBNRM, is a standard function, while the other, SEQ, is part of the interactive matrix language of SAS, PROC IML. The account focuses on the essentials of the approach and reveals how straightforward it can be. The design of studies is described, including their evaluation in terms of the distribution of final sample size. The conduct of the interim analyses is discussed, with emphasis on the consequences of inevitable departures from the planned schedule of information accrual. The computations required for the final analysis, allowing for the sequential design, are closely related to those conducted at the design stage. Illustrative examples are given and listings of suitable SAS code are provided.
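The multivariate normal calculations that underpin a simple two-look group sequential design can be reproduced outside SAS. The Python sketch below is an illustration only, not the article's SEQ/PROBBNRM code: it evaluates the overall one-sided type I error of a two-look design with a constant boundary, taking the commonly quoted Pocock value c ≈ 2.178 for K = 2 as an assumed input:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Two equally spaced looks: information fractions 1/2 and 1.
# Under H0 the interim and final Z-statistics are bivariate normal with
# correlation sqrt(t1/t2); the boundary below is the standard Pocock
# constant for K = 2 (assumed here, c ~= 2.178 for one-sided alpha = 0.025).
t1, t2 = 0.5, 1.0
rho = np.sqrt(t1 / t2)
cov = np.array([[1.0, rho], [rho, 1.0]])
c = 2.178

# One-sided overall type I error = 1 - P(Z1 < c, Z2 < c) under H0.
alpha = 1.0 - multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([c, c])
print(f"overall one-sided alpha = {alpha:.4f}")          # ~0.025

# Probability of stopping for efficacy at the first look under H0.
print(f"P(stop at look 1 | H0) = {1 - norm.cdf(c):.4f}")
```

Designs with more looks extend the same idea to higher-dimensional multivariate normal probabilities, which is exactly the computation the SEQ function packages up.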
APA, Harvard, Vancouver, ISO, and other styles
40

Sengupta, Mita, and Shannon L. Eichmann. "Computing elastic properties of organic-rich source rocks using digital images." Leading Edge 40, no. 9 (2021): 662–66. http://dx.doi.org/10.1190/tle40090662.1.

Full text
Abstract:
Digital rocks are 3D image-based representations of pore-scale geometries that reside in virtual laboratories. High-resolution 3D images that capture microstructural details of the real rock are used to build a digital rock. The digital rock, which is a data-driven model, is used to simulate physical processes such as fluid flow, heat flow, electricity, and elastic deformation through basic laws of physics and numerical simulations. Unconventional reservoirs are chemically heterogeneous: the rock matrix is composed of inorganic minerals, hydrocarbons are held in the pores of thermally matured organic matter, and all of these vary spatially at the nanoscale. This nanoscale heterogeneity poses challenges in measuring the petrophysical properties of source rocks and interpreting the data with reference to the changing rock structure. Focused ion beam scanning electron microscopy is a powerful 3D imaging technique used to study source rock structure where significant micro- and nanoscale heterogeneity exists. Compared to conventional rocks, the imaging resolution required to image source rocks is much higher due to the nanoscale pores, while the field of view becomes smaller. Moreover, pore connectivity and the resulting permeability are extremely low, making flow property computations much more challenging than in conventional rocks. Elastic properties of source rocks are significantly more anisotropic than those of conventional reservoirs. However, one advantage of unconventional rocks is that the soft organic matter can be captured at the same imaging resolution as the stiff inorganic matrix, making digital elasticity computations feasible. Physical measurement of kerogen elastic properties is difficult because of the tiny sample size. Digital rock physics provides a unique and powerful tool for the elastic characterization of kerogen.
APA, Harvard, Vancouver, ISO, and other styles
41

Jung, Sin-ho, Sun J. Kang, Linda M. McCall, and Brent Blumenstein. "Sample Size Computation for Two-Sample Noninferiority Log-Rank Test." Journal of Biopharmaceutical Statistics 15, no. 6 (2005): 969–79. http://dx.doi.org/10.1080/10543400500265736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ginski, Christian, Ryo Tazaki, Carsten Dominik, and Tomas Stolker. "Observed Polarized Scattered Light Phase Functions of Planet-forming Disks." Astrophysical Journal 953, no. 1 (2023): 92. http://dx.doi.org/10.3847/1538-4357/acdc97.

Full text
Abstract:
Dust particles are the building blocks from which planetary bodies are made. A major goal of studies of planet-forming disks is to constrain the properties of dust particles and aggregates in order to trace their origin, structure, and the associated growth and mixing processes in the disk. Observations of the scattering and/or emission of dust in a location of the disk often lead to degenerate information about the kinds of particles involved, such as the size, porosity, or fractal dimensions of aggregates. Progress can be made by deriving the full (polarizing) scattering phase function of such particles at multiple wavelengths. This has now become possible by careful extraction from scattered light images. Such an extraction requires knowledge about the shape of the scattering surface in the disk, and we discuss how to obtain such knowledge as well as the associated uncertainties. We use a sample of disk images from observations with the Very Large Telescope/SPHERE to, for the first time, extract the phase functions of a whole sample of disks with broad phase-angle coverage. We find that polarized phase functions come in two categories. Comparing the extracted functions with theoretical predictions from rigorous T-Matrix computations of aggregates, we show that one category can be linked back to fractal, porous aggregates, while the other is consistent with more compact, less porous aggregates. We speculate that the more compact particles become visible in disks where embedded planets trigger enhanced vertical mixing.
APA, Harvard, Vancouver, ISO, and other styles
43

De Martini, Daniele. "Phase III Failures for a Lack of Efficacy can be, in Significant Part, Recovered (Introducing Success Probability Estimation Quantitatively)." Epidemiology, Biostatistics, and Public Health 18, no. 1 (2023): 13–17. http://dx.doi.org/10.54103/2282-0930/20638.

Full text
Abstract:
The rate of phase III trial failures is approximately 42-45%, and most of these failures are due to a lack of efficacy. Some failures for a lack of efficacy are expected, owing to type I errors in phase II and type II errors in phase III. However, the rate of these expected failures falls far short of the global failure rate due to a lack of efficacy. In this work, the probability of unexpected failure for a lack of efficacy in phase III trials is estimated to be about 14%, with credibility interval (9%, 18%). These failures can be recovered through adequate planning/empowering of phase II and by adopting conservative estimation for the sample size of phase III. The software SP4CT (a free web application available at www.sp4ct.com) performs these computations. This 14% rate of unexpected failures implies that every year approximately 270,000 patients needlessly undergo a phase III trial, with large damage to individual ethics; moreover, the unavailability of many effective treatments is considerable damage to collective ethics. The 14% of unexpected failures also produces more than $11bn of pure waste and generates a much larger loss of revenue from drug marketing.
APA, Harvard, Vancouver, ISO, and other styles
44

Khan, Zahoor Ahmad, Javaidullah Khan, Salman Ahmad, and Hameed Ullah. "Frequency of Left Ventricular Mechanical Dyssynchrony in Chronic Heart Failure." Pakistan Journal of Medical and Health Sciences 17, no. 1 (2023): 628–30. http://dx.doi.org/10.53350/pjmhs2023171628.

Full text
Abstract:
Objective: To determine the frequency of left ventricular mechanical dyssynchrony in chronic heart failure patients. Material and Methods: After approval from the hospital ethical committee, this descriptive cross-sectional study was conducted in the Department of Cardiology, Hayatabad Medical Complex, Peshawar, from 1st September 2021 to 28th February 2022. A total of 155 patients were examined; the sample size was derived from WHO sample size computations assuming that 27% of heart failure patients have mechanical dyssynchrony, with a 95% confidence interval and a 7% margin of error. Results: Participants had an average age of 60 years, with a standard deviation of ±1.26 years. 62% of the patients were male and 38% were female. 35% of patients had a broad QRS complex, whereas 65% had a narrow QRS complex. Mechanical dyssynchrony was identified in 32% of chronic heart failure patients. Conclusion: Patients in chronic heart failure with QRS > 150 ms and mechanical dyssynchrony benefit from cardiac resynchronisation therapy (CRT) implantation, which reduces morbidity and mortality, while those with a narrow QRS (< 150 ms) are generally non-responders. Keywords: Mechanical dyssynchrony, chronic heart failure, left ventricular ejection fraction
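The sample size of 155 quoted above is consistent with the standard single-proportion formula n = Z²p(1−p)/d² used in WHO-style sample size computations. A minimal Python sketch with the abstract's figures (anticipated proportion 27%, 95% confidence, 7% margin of error):

```python
import math

def sample_size_proportion(p, margin, confidence=0.95):
    """Single-proportion sample size: n = Z^2 * p * (1 - p) / d^2, rounded up."""
    # Two-sided standard normal quantiles for common confidence levels.
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Figures quoted in the abstract: 27% anticipated prevalence, 95% confidence, 7% margin.
print(sample_size_proportion(p=0.27, margin=0.07))  # -> 155
```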
APA, Harvard, Vancouver, ISO, and other styles
45

Safonova, M. F., and K. G. Korovina. "Assessing the materiality threshold and scope of auditing procedures: The sector-specific approach." International Accounting 23, no. 6 (2020): 683–700. http://dx.doi.org/10.24891/ia.23.6.683.

Full text
Abstract:
Subject. As part of any audit engagement, the auditor needs to obtain reasonable assurance that financial statements are free from material misstatements. For this, they assess the tolerable threshold of errors for financial statements which, if exceeded, will influence the correctness of users' conclusions and the adequacy of their subsequent decisions. International Standards on Auditing guide the assessment of the materiality threshold. However, the approach they stipulate fails to accommodate the sectoral specifics and distinctions of some audited entities. Objectives. We assess the stable structural correlation of key financial reporting indicators through a sample of 130 agricultural entities operating in the Krasnodar Krai. The study presents a technique for assessing materiality by variance from the sector's average values, whatever the size of the company. Methods. We apply general and special methods, such as induction, deduction, analysis, synthesis, computational graphics, monographs, and algorithms. Results. In some cases, the resultant data appeared to diverge significantly at the estimated materiality level. Having analyzed the causes, we proved the reasonableness of the computations produced by our technique. In conclusion, we sum up the strengths and weaknesses of the materiality assessment technique. Conclusions and Relevance. The enhanced materiality assessment technique accommodates the sectoral specifics of agricultural entities. Assessing the materiality level, we revealed the structural correlation of various assets and liabilities. The technique is based on actual data and is adaptable, if needed, to the given algorithm. It is more precise for non-standard entities. The findings can be used in auditing theory and practice and can serve in setting up the accounting process and financial reporting in various entities. It will also be useful for Master's Degree Programs in Economics.
APA, Harvard, Vancouver, ISO, and other styles
46

Egbewale, Bolaji E. "Design and Statistical Methods for Handling Covariates Imbalance in Randomized Controlled Clinical Trials: Dilemmas Resolved." Clinical Trials and Practice – Open Journal 4, no. 1 (2021): 22–29. http://dx.doi.org/10.17140/ctpoj-4-121.

Full text
Abstract:
Introduction: In practice, between-groups baseline imbalance following randomization not only opens the effect estimate to bias in controlled trials, it also has certain ethical consequences. Both design and statistical approaches to ensuring treatment groups balanced in prognostic factors have their drawbacks. This article identified potential limitations associated with design and statistical approaches for handling covariate imbalance in randomized controlled clinical trials (RCTs) and proffered solutions to them. Methods: A careful review of the literature, coupled with an appraisal of the statistical models underlying the methods involved that compared their strengths and weaknesses in trial environments, was adopted. Results: Stratification breaks down in small-sample trials and may not accommodate more than two stratification factors in practice. On the other hand, minimization, which balances multiple prognostic factors even in small trials, is not a purely random procedure and, in addition, can introduce computational complexity. Overall, either minimization or stratification factors should be included in the model for statistical adjustment. Statistically, the effect estimate from change score analysis (CSA) is susceptible to the direction and magnitude of imbalance. Only analysis of covariance (ANCOVA) yields an unbiased effect estimate in all trial scenarios, including situations with baseline imbalance in known and unknown prognostic covariates. Conclusion: Design methods for balancing covariates between groups are not without their limitations. Both the direction and size of baseline imbalance have a profound consequence on the effect estimate from CSA. Only ANCOVA yields an unbiased treatment effect estimate and is recommended in all trial scenarios, whether or not between-groups covariate imbalance matters.
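The contrast between change score analysis and ANCOVA described above is easy to demonstrate on simulated data. The following Python sketch is a toy simulation, not the article's analysis; it assumes pandas and statsmodels are available, builds a two-arm trial with a deliberately imbalanced baseline covariate, and shows that the CSA estimate drifts with the imbalance while ANCOVA recovers the true effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-arm trial with a deliberately imbalanced baseline covariate.
# The true treatment effect is set to 2.0 (all values are illustrative only).
rng = np.random.default_rng(42)
n = 60
group = np.repeat([0, 1], n // 2)
baseline = rng.normal(50, 10, n) + 4 * group          # imbalance: treated arm starts higher
post = baseline * 0.8 + 2.0 * group + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "baseline": baseline,
                   "post": post, "change": post - baseline})

# Change score analysis: treatment effect on post - baseline.
csa = smf.ols("change ~ group", data=df).fit()
# ANCOVA: post-score adjusted for baseline.
ancova = smf.ols("post ~ group + baseline", data=df).fit()

print(f"CSA estimate:    {csa.params['group']:.2f}")
print(f"ANCOVA estimate: {ancova.params['group']:.2f}  (true effect = 2.0)")
```

With the treated arm starting about four units higher at baseline, the CSA estimate is biased downward toward roughly 1.2, while the ANCOVA estimate stays near the true effect of 2.0, mirroring the article's conclusion.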
APA, Harvard, Vancouver, ISO, and other styles
47

Rajan Shah, Ushmi, Stephen Maore, and Susan Nzioki. "The Influence of Market Penetration Strategies on Market Share in the Hospitality Industry in Arusha, Tanzania." International Journal of Scientific Research and Management 10, no. 07 (2022): 3689–96. http://dx.doi.org/10.18535/ijsrm/v10i7.em04.

Full text
Abstract:
Organizations must implement competitive product-market strategies to secure their survival and sustainability in the marketplace. They must surpass competitors that follow other generic strategy types or those who are stuck in the middle. Although several investigations have been carried out in the tourism sector, none have been conducted on the marketing methods employed by Tanzanian tour firms. The purpose of the research was to see whether companies are still using Igor Ansoff's product-market strategies to grow their market share. Specifically, the main purpose of this study was to determine the impact of market penetration strategies on Arusha-based travel companies' expansion of market share. A survey research design was employed in the study, targeting marketing managers from tour companies in Arusha. To select the sample, the researchers utilized simple random sampling, and data were gathered from a sample of 44 people. The marketing managers were handed a questionnaire with 17 questions to complete. Once acquired, the data were cleaned, coded, categorised, and sorted, and then analysed using quantitative techniques of both a descriptive and an inferential nature. Results indicate that market penetration strategy (β = .732, p = .000 < .05) significantly influences market share at the 95% confidence level. The study therefore concludes that market penetration strategy significantly influences market share among tour companies in Arusha, Tanzania. It is recommended that, in addition to sustaining and improving current practices under the strategy, tour companies in the country continue to embrace the market penetration strategy.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhao, Zhongyuan, Feiyu Lian, and Yuying Jiang. "Recognition of Rice Species Based on Gas Chromatography-Ion Mobility Spectrometry and Deep Learning." Agriculture 14, no. 9 (2024): 1552. http://dx.doi.org/10.3390/agriculture14091552.

Full text
Abstract:
To address the challenge of relying on complex biochemical methods for identifying rice species, a prediction model that combines gas chromatography-ion mobility spectrometry (GC-IMS) with a convolutional neural network (CNN) was developed. The model utilizes the GC-IMS fingerprint data of each rice variety sample, and an improved CNN structure is employed to increase the recognition accuracy. First, an improved generative adversarial network based on the diffusion model (DGAN) is used for data augmentation to expand the dataset. Then, building on the ResNet50 residual network, a transfer learning method is introduced to improve the training of the model under small-sample conditions. In addition, a new attention mechanism called Triplet is introduced to further highlight useful features and improve the feature extraction performance of the model. Finally, to reduce the number of model parameters and improve the efficiency of the model, knowledge distillation is used to compress the model. The results of our experiments revealed that the recognition accuracy for identifying the 10 rice varieties was close to 96%; hence, the proposed model significantly outperformed traditional models such as principal component analysis and support vector machines. Furthermore, compared to a traditional CNN, our model reduced the number of parameters and the number of computations by 53% and 55%, respectively, without compromising classification accuracy. The study also suggests that the combination of GC-IMS and the proposed deep learning method discriminates rice varieties better than traditional chromatography and other spectral analysis methods, and that it effectively identified the rice varieties.
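For readers unfamiliar with the transfer-learning step mentioned above, the PyTorch sketch below shows the generic pattern of reusing an ImageNet-pretrained ResNet50 backbone with a new 10-class head (one class per rice variety). It is a bare illustration only; the DGAN augmentation, Triplet attention, and knowledge distillation components of the paper are not reproduced here, and the dummy tensors stand in for real GC-IMS fingerprint images:

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning pattern: freeze an ImageNet-pretrained ResNet50
# backbone and train only a new 10-class classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)   # new trainable head for 10 rice varieties

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of "fingerprint images".
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```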
APA, Harvard, Vancouver, ISO, and other styles
49

Thippeswamy DR, Venkataiah C, Sharath Kumar P, and Basavaraj Hatti. "Overview of flotation studies on low grade limestone for sustainable industrial potentiality: A case study." International Journal of Science and Research Archive 13, no. 1 (2024): 3248–66. http://dx.doi.org/10.30574/ijsra.2024.13.1.2059.

Full text
Abstract:
In the present investigation, we used low-grade limestone from the Joldhal region of the Shimoga schist belt in Karnataka to upgrade the CaO and minimize the gangue components for its effective industrial utilization. The samples assayed 10.40% SiO2, 42.50% CaO, and 5.20% Al2O3. We used D-12 Denver laboratory flotation equipment for this study. In the optimization studies, we varied the collector [sodium oleate] dosage over 0.8, 1.0, and 1.2 kg/t and the depressant [sodium silicate] dosage over 0.5, 0.6, and 0.7 kg/t. In the detailed experimental studies, we used the optimum dosages of 1.2 kg/t collector and 0.7 kg/t depressant. Froth concentrate was collected in cumulative flotation time frames of 15, 30, and 60 seconds, with the remainder considered as tailings. The collected product and tailings were filtered, dried, and weighed, and their assays were analyzed. It was observed that the optimum result was achieved in the -200# size class, with the CaO percentage increasing from 42.50% to 52.83% and silica reduced from 10.50% to 2.01% in the 15-second product. For the obtained test results, product performance computations such as weight percentage recovery, CaO grade recovery, ratio of concentration, and weight distribution ratio were performed. When processing this sample, direct flotation proved to be the more effective treatment. The obtained concentrate can be directly utilized in the cement, iron, and steel industries. The results of this study will probably extend the life of raw material resources for future requirements, which has vital implications for rapidly expanding industries.
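The product performance computations named above (weight percentage recovery, CaO recovery, and ratio of concentration) follow the standard two-product flotation formulas. The Python sketch below applies them to the feed and concentrate assays quoted in the abstract; the tailings assay is not reported above, so the 35% CaO value used here is purely a hypothetical placeholder:

```python
def two_product_performance(f, c, t):
    """Standard two-product flotation formulas.

    f, c, t : feed, concentrate and tailings assays (% CaO).
    Returns mass yield (%), CaO recovery (%) and ratio of concentration.
    """
    yield_pct = 100.0 * (f - t) / (c - t)           # weight % reporting to concentrate
    recovery = 100.0 * c * (f - t) / (f * (c - t))  # % of feed CaO recovered in concentrate
    ratio_of_concentration = (c - t) / (f - t)      # feed tonnes per tonne of concentrate
    return yield_pct, recovery, ratio_of_concentration

# Feed and concentrate assays from the abstract; the tailings assay (35% CaO)
# is a hypothetical placeholder, since it is not reported above.
y, r, k = two_product_performance(f=42.50, c=52.83, t=35.0)
print(f"mass yield = {y:.1f} %, CaO recovery = {r:.1f} %, ratio of concentration = {k:.2f}")
```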
APA, Harvard, Vancouver, ISO, and other styles
50

Hsu, Jason C. "Sample size computation for designing multiple comparison experiments." Computational Statistics & Data Analysis 7, no. 1 (1988): 79–91. http://dx.doi.org/10.1016/0167-9473(88)90017-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles