
Journal articles on the topic 'Sample-size Computation'



Consult the top 50 journal articles for your research on the topic 'Sample-size Computation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Jung, Sin-ho, Sun J. Kang, Linda M. McCall, and Brent Blumenstein. "Sample Size Computation for Two-Sample Noninferiority Log-Rank Test." Journal of Biopharmaceutical Statistics 15, no. 6 (2005): 969–79. http://dx.doi.org/10.1080/10543400500265736.

2

Hsu, Jason C. "Sample size computation for designing multiple comparison experiments." Computational Statistics & Data Analysis 7, no. 1 (1988): 79–91. http://dx.doi.org/10.1016/0167-9473(88)90017-5.

3

Lachenbruch, Peter A. "A note on sample size computation for testing interactions." Statistics in Medicine 7, no. 4 (1988): 467–69. http://dx.doi.org/10.1002/sim.4780070403.

4

Cunningham, Tina D., and Robert E. Johnson. "Design effects for sample size computation in three-level designs." Statistical Methods in Medical Research 25, no. 2 (2012): 505–19. http://dx.doi.org/10.1177/0962280212460443.

5

Kharrat, Najla, Imen Ayadi, and Ahmed Rebaï. "Sample size computation for association studies using case-parents design." Journal of Genetics 85, no. 3 (2006): 187–91. http://dx.doi.org/10.1007/bf02935329.

6

Chen, Jie, Jianfeng Luo, Kenneth Liu, and Devan V. Mehrotra. "On power and sample size computation for multiple testing procedures." Computational Statistics & Data Analysis 55, no. 1 (2011): 110–22. http://dx.doi.org/10.1016/j.csda.2010.05.024.

7

Chen, Xinjia. "Exact computation of minimum sample size for estimation of binomial parameters." Journal of Statistical Planning and Inference 141, no. 8 (2011): 2622–32. http://dx.doi.org/10.1016/j.jspi.2011.02.015.

8

Gudicha, Dereje W., Fetene B. Tekle, and Jeroen K. Vermunt. "Power and Sample Size Computation for Wald Tests in Latent Class Models." Journal of Classification 33, no. 1 (2016): 30–51. http://dx.doi.org/10.1007/s00357-016-9199-1.

9

Dharan, Bala G. "A priori sample size evaluation and information matrix computation for time series models." Journal of Statistical Computation and Simulation 21, no. 2 (1985): 171–77. http://dx.doi.org/10.1080/00949658508810811.

10

Chen, Huifen, and Tsu-Kuang Yang. "Computation of the sample size and coverage for guaranteed-coverage nonnormal tolerance intervals." Journal of Statistical Computation and Simulation 63, no. 4 (1999): 299–320. http://dx.doi.org/10.1080/00949659908811959.

11

Adli, Adam, and Pascal Tyrrell. "Impact of Training Sample Size on the Effects of Regularization in a Convolutional Neural Network-based Dental X-ray Artifact Prediction Model." Journal of Undergraduate Life Sciences 14, no. 1 (2020): 5. http://dx.doi.org/10.33137/juls.v14i1.35883.

Abstract:
Introduction: Advances in computers have allowed for the practical application of increasingly advanced machine learning models to aid healthcare providers with diagnosis and inspection of medical images. Often, a lack of training data and computation time can be a limiting factor in the development of an accurate machine learning model in the domain of medical imaging. As a possible solution, this study investigated whether L2 regularization moderates the overfitting that occurs as a result of small training sample sizes. Methods: This study employed transfer learning experiments on a dental x-ray binary classification model to explore L2 regularization with respect to training sample size in five common convolutional neural network architectures. Model testing performance was investigated, and technical implementation details, including computation times and hardware considerations as well as performance factors and practical feasibility, were described. Results: The experimental results showed a trend that smaller training sample sizes benefitted more from regularization than larger training sample sizes. Further, the results showed that applying L2 regularization did not add significant computational overhead and that the extra rounds of training required for L2 regularization were feasible when training sample sizes are relatively small. Conclusion: Overall, this study found that there is a window of opportunity in which the benefits of employing regularization can be most cost-effective relative to training sample size. It is recommended that training sample size be carefully considered when forming expectations of the generalizability improvements achievable by investing computational resources in model regularization.
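For readers who want to reproduce the general idea, the sketch below (not the authors' code or data) shows how L2 regularization is typically added when fine-tuning a pretrained convolutional network on a small training sample, here via the weight_decay parameter of a PyTorch optimizer. The ResNet-18 backbone, learning rate, and penalty strength are illustrative assumptions, and a recent torchvision is assumed for the pretrained-weights API.

```python
# Illustrative sketch only: L2 regularization (weight decay) when fine-tuning a
# pretrained CNN on a small sample; not the cited study's pipeline or data.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(l2_strength: float = 1e-3):
    # Transfer learning: freeze the pretrained backbone, train a new binary head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. artifact vs. no artifact

    # weight_decay adds an L2 penalty on the trainable parameters; sweeping this
    # value against the training-sample size mirrors the comparison in the abstract.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3,
                                 weight_decay=l2_strength)
    return model, optimizer

model, optimizer = build_classifier()
criterion = nn.CrossEntropyLoss()  # standard loss for the binary classification head
```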
12

Gamrot, Wojciech. "On exact computation of minimum sample size for restricted estimation of a binomial parameter." Journal of Statistical Planning and Inference 143, no. 4 (2013): 852–66. http://dx.doi.org/10.1016/j.jspi.2012.09.013.

13

Lorah, Julie, and Andrew Womack. "Value of sample size for computation of the Bayesian information criterion (BIC) in multilevel modeling." Behavior Research Methods 51, no. 1 (2019): 440–50. http://dx.doi.org/10.3758/s13428-018-1188-3.

14

Avramchuk, Valeriy V., E. E. Luneva, and Alexander G. Cheremnov. "Increasing the Efficiency of Using Hardware Resources for Time-Frequency Correlation Function Computation." Advanced Materials Research 1040 (September 2014): 969–74. http://dx.doi.org/10.4028/www.scientific.net/amr.1040.969.

Abstract:
The article considers techniques for using multi-core processors more efficiently in the computation of the fast Fourier transform (FFT), which is applied here to the calculation of a time-frequency correlation function. The time-frequency correlation function increases the information content of the analysis compared with the classical correlation function, but it requires significant computational resources because the FFT must be computed many times. The FFT is computed with the radix-2 Cooley-Tukey algorithm, which parallelizes efficiently and is simple to implement; a bit-reversal permutation of the input sequence is applied immediately before the transform. A parallel implementation of the time-frequency correlation function was used to determine experimentally the optimal number of iterations for each CPU core as a function of the sample size. Based on these experiments, special software was developed that automatically selects an effective number of subtasks for parallel processing and chooses between sequential and parallel computation modes depending on the sample size and the number of frequency intervals used in the calculation.
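As a point of reference for the FFT-based correlation idea the article builds on, the sketch below computes a circular cross-correlation through the FFT; it is an illustration only (signal length, noise level, and lag are assumptions), not the authors' parallel time-frequency implementation.

```python
# Sketch of FFT-based correlation (illustration only, not the authors' parallel
# time-frequency code): circular cross-correlation of two signals via the FFT.
import numpy as np

def fft_xcorr(x, y):
    """Circular cross-correlation via the FFT (O(n log n) instead of O(n^2))."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    return np.fft.ifft(np.conj(X) * Y).real

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
y = np.roll(x, 37) + 0.1 * rng.normal(size=1024)   # delayed, noisy copy of x
print("estimated lag:", int(np.argmax(fft_xcorr(x, y))))  # should report 37
```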
15

Miller, Jeff. "Short Report: Reaction Time Analysis with Outlier Exclusion: Bias Varies with Sample Size." Quarterly Journal of Experimental Psychology Section A 43, no. 4 (1991): 907–12. http://dx.doi.org/10.1080/14640749108400962.

Abstract:
To remove the influence of spuriously long response times, many investigators compute “restricted means”, obtained by throwing out any response time more than 2.0, 2.5, or 3.0 standard deviations from the overall sample average. Because reaction time distributions are skewed, however, the computation of restricted means introduces a bias: the restricted mean underestimates the true average of the population of response times. This problem may be very serious when investigators compare restricted means across conditions with different numbers of observations, because the bias increases with sample size. Simulations show that there is substantial differential bias when comparing conditions with fewer than 10 observations against conditions with more than 20. With strongly skewed distributions and a cutoff of 3.0 standard deviations, differential bias can influence comparisons of conditions with even more observations.
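Miller's argument is easy to check numerically. The sketch below is a toy simulation (not Miller's original code): it draws skewed ex-Gaussian reaction times with assumed parameters, applies a 2.5-SD exclusion rule, and reports how far the restricted mean falls below the true mean at several sample sizes.

```python
# Toy simulation (not Miller's code): bias of the "restricted mean" (the sample
# mean after discarding RTs more than 2.5 SD from the sample average) as a
# function of sample size, for skewed ex-Gaussian reaction times.
import numpy as np

rng = np.random.default_rng(0)

def restricted_mean(x, k=2.5):
    m, s = x.mean(), x.std(ddof=1)
    return x[np.abs(x - m) <= k * s].mean()

def bias(n, reps=10_000, mu=400.0, sigma=40.0, tau=100.0):
    true_mean = mu + tau                        # ex-Gaussian mean = mu + tau
    est = np.empty(reps)
    for r in range(reps):
        rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)
        est[r] = restricted_mean(rt)
    return est.mean() - true_mean               # negative values = underestimation

for n in (5, 10, 20, 50, 100):
    print(f"n = {n:3d}: mean bias of restricted mean ≈ {bias(n):6.1f} ms")
```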
16

Kang, Dongwoo, Janice B. Schwartz, and Davide Verotta. "A sample size computation method for non-linear mixed effects models with applications to pharmacokinetics models." Statistics in Medicine 23, no. 16 (2004): 2551–66. http://dx.doi.org/10.1002/sim.1695.

17

Jeanprêtre, N., and R. Kraftsik. "A program for the computation of power and determination of sample size in hierarchical experimental designs." Computer Methods and Programs in Biomedicine 29, no. 3 (1989): 179–90. http://dx.doi.org/10.1016/0169-2607(89)90128-4.

18

Neven, Anouk, Murielle Mauer, Baktiar Hasan, Richard Sylvester, and Laurence Collette. "Sample size computation in phase II designs combining the A’Hern design and the Sargent and Goldberg design." Journal of Biopharmaceutical Statistics 30, no. 2 (2019): 305–21. http://dx.doi.org/10.1080/10543406.2019.1641817.

19

Evans, M. W., G. C. Lie, and E. Clementi. "On the absence of sample size effects in the computation of cross-correlation functions in liquid water." Journal of Molecular Liquids 40, no. 2 (1989): 89–99. http://dx.doi.org/10.1016/0167-7322(89)80038-8.

20

Coskun, Abdurrahman, Elvan Ceyhan, Tamer C. Inal, Mustafa Serteser, and Ibrahim Unsal. "The comparison of parametric and nonparametric bootstrap methods for reference interval computation in small sample size groups." Accreditation and Quality Assurance 18, no. 1 (2012): 51–60. http://dx.doi.org/10.1007/s00769-012-0948-5.

21

Amadou, Z., and B. H. Mohammed. "Marketing Expansion Strategies for Local Crops-Based Couscous in Tahoua State, Niger Republic." Nigerian Journal of Basic and Applied Sciences 28, no. 2 (2021): 70–80. http://dx.doi.org/10.4314/njbas.v28i2.9.

Abstract:
This study investigates the own-price and cross-price elasticities for couscous based on locally grown crops and compares confidence-interval computation for willingness-to-pay and market shares under the Krinsky-Robb and Delta methods. A synthesis of previous literature and a focus group with consumers helped identify the nine brands of couscous included in this research. A fractional factorial design was used to collect data from three hundred consumers, and a multinomial logit model was used to analyze the data. Results indicate that rice-, cowpea- and millet-based couscous were the most preferred by consumers, and their market share accounts for more than fifty percent. Simulation results showed that confidence intervals under the Krinsky-Robb method stabilize as the sample size increases, thereby adjusting for skewness; confidence intervals under the Delta method, by contrast, are constant regardless of sample size and fail to adjust for skewness. Finally, results also indicate that skewness was accommodated in the confidence intervals for market share, because their values progressively adjust as the sample size increases. These findings may be useful for boosting crop-based couscous demand in the study area and beyond, thereby improving farmers' revenue and offering opportunities for diet diversification.
 Keywords: Marketing expansion, Strategies, Local crops, Couscous, Willingness-to-pay
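As a concrete illustration of the Krinsky-Robb procedure compared in the abstract above, the sketch below simulates a willingness-to-pay confidence interval. The coefficient vector and covariance matrix are hypothetical stand-ins, not estimates from the cited study.

```python
# Sketch of the Krinsky-Robb procedure for a willingness-to-pay (WTP) confidence
# interval. The point estimates and covariance below are hypothetical, not the
# cited study's multinomial-logit results.
import numpy as np

rng = np.random.default_rng(42)

beta_hat = np.array([1.20, -0.45])          # [attribute coefficient, price coefficient]
vcov = np.array([[0.040, -0.002],
                 [-0.002, 0.010]])          # assumed asymptotic covariance matrix

# Draw parameter vectors from the estimated sampling distribution and compute
# WTP = -beta_attribute / beta_price for each draw.
draws = rng.multivariate_normal(beta_hat, vcov, size=10_000)
wtp = -draws[:, 0] / draws[:, 1]

lower, upper = np.percentile(wtp, [2.5, 97.5])
print(f"Krinsky-Robb 95% CI for WTP: [{lower:.2f}, {upper:.2f}]")
# Unlike the Delta method, these percentile bounds need not be symmetric about
# the point estimate, so skewness in the WTP distribution shows up in the interval.
```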
22

Chen, Wen-Sheng, Pong C. Yuen, and Jian Huang. "A New Regularized Linear Discriminant Analysis Method to Solve Small Sample Size Problems." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 07 (2005): 917–35. http://dx.doi.org/10.1142/s0218001405004344.

Abstract:
This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization on the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem because the solution is found in full space instead of a subspace. The main limitation in regularization is that a very high computation is required to determine the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization on the within-class scatter matrix [Formula: see text], which is suitable for parameter reduction. Based on the new definition of [Formula: see text], we derive a single parameter (t) explicit expression formula for determining the three parameters and develop a one-parameter regularization on the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix [Formula: see text] approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularization linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated with two public available databases, namely ORL and FERET databases. The average recognition accuracies of 50 runs for ORL and FERET databases are 96.65% and 94.00%, respectively. Comparing with existing LDA-based methods in solving the S3 problem, the proposed 1PRLDA method gives the best performance.
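To make the general idea concrete, the sketch below regularizes the within-class scatter matrix so that LDA remains usable when there are fewer samples than features. It uses the simple ridge form Sw + t·I with an assumed value of t and synthetic data; it is not the paper's three-parameter (or derived one-parameter) scheme.

```python
# Generic sketch of regularizing the within-class scatter matrix in LDA for the
# small-sample-size problem. Ridge form Sw + t*I, not the cited 1PRLDA scheme.
import numpy as np

def regularized_lda_direction(X, y, t=1e-2):
    """Two-class Fisher discriminant direction with a regularized within-class scatter."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    Sw_reg = Sw + t * np.eye(X.shape[1])        # regularization keeps Sw invertible
    w = np.linalg.solve(Sw_reg, m1 - m0)        # Fisher discriminant direction
    return w / np.linalg.norm(w)

# Small-sample setting: 10 samples in 50 dimensions, so Sw alone is singular.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(5, 50)),
               rng.normal(0.8, 1.0, size=(5, 50))])
y = np.array([0] * 5 + [1] * 5)
w = regularized_lda_direction(X, y)
print(f"projected class means: {(X[y == 0] @ w).mean():.2f} vs {(X[y == 1] @ w).mean():.2f}")
```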
23

Persson, Rasmus A. X., and Johan Bergenholtz. "Efficient computation of the scattering intensity from systems of nonspherical particles." Journal of Applied Crystallography 49, no. 5 (2016): 1524–31. http://dx.doi.org/10.1107/s1600576716011481.

Abstract:
The analysis of the angle dependence of the elastic scattering of radiation from a sample is an efficient and non-invasive technique that is used in fundamental science, in medicine and in technical quality control in industry. Precise information on the shape, size, polydispersity and interactions of a colloidal sample is readily obtained provided an underlying scattering model, i.e. form and structure factors, can be computed for the sample. Here, a numerical method that can efficiently compute the form factor amplitude (and thus the scattering intensity) of nonspherical scatterers through an importance sampling algorithm of the Fourier integral of the scattering density is presented. Using the precomputed form factor amplitudes, the calculation of the scattering intensity at any particle concentration then scales linearly with the particle number and linearly with the number of q points for its evaluation. This is illustrated by an example calculation of the scattering by concentrated suspensions of ellipsoidal Janus particles and the numerical accuracy for the computed form factor amplitudes is compared with analytical benchmarks.
24

Hirai, Yasuhisa, and Tadashi Nakamura. "A New Arithmetic and an Application to the Computation of Binomial Probability for Very Wide Range of Sample Size." Japanese journal of applied statistics 35, no. 2 (2006): 93–111. http://dx.doi.org/10.5023/jappstat.35.93.

25

Torres-Huitzil, Cesar. "Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters." Scientific World Journal 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/108103.

Abstract:
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024×1024 images with up to 255×255 kernels in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff, suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding.
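The constant-time one-dimensional decomposition this architecture builds on is the van Herk/Gil-Werman algorithm. A plain software sketch of it is given below as a reference point (NumPy, window length k, roughly three comparisons per output regardless of k); it is not a description of the FPGA design itself.

```python
# Software reference sketch of the van Herk/Gil-Werman running-max idea used by
# the hardware architecture above (this is not the FPGA implementation).
import numpy as np

def running_max_hgw(x, k):
    """Sliding-window maximum over windows x[i:i+k], computed from block-wise
    prefix and suffix maxima so the cost per output is independent of k."""
    x = np.asarray(x, dtype=float)
    n = x.size
    pad = (-n) % k                                   # pad to a multiple of the block size k
    xp = np.concatenate([x, np.full(pad, -np.inf)])
    blocks = xp.reshape(-1, k)
    prefix = np.maximum.accumulate(blocks, axis=1).ravel()                    # left-to-right in block
    suffix = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # right-to-left in block
    i = np.arange(n - k + 1)
    # Each window spans at most two adjacent blocks: suffix[i] covers the part of
    # the window inside i's block, prefix[i+k-1] covers the remainder in the next block.
    return np.maximum(suffix[i], prefix[i + k - 1])

print(running_max_hgw([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], 3))  # [4. 4. 5. 9. 9. 9. 6. 6.]
```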
26

Ortega Ramírez, Miriam Patricia, Laurent Oxarango, and Alfonso Gastelum Strozzi. "Effect of X-ray CT resolution on the quality of permeability computation for granular soils: definition of a criterion based on morphological properties." Soil Research 57, no. 6 (2019): 589. http://dx.doi.org/10.1071/sr18189.

Abstract:
In this study, the quality of soil permeability estimation based on computational fluid dynamics is discussed. Two types of three-dimensional geometries were considered: an image of Fontainebleau sand obtained from X-ray computed micro-tomography and a virtual pack of spheres. Numerical methods such as finite difference or lattice Boltzmann can conveniently use the image voxels as computational mesh elements. In this framework, the image resolution is directly associated with quality of the numerical computation. A higher resolution should promote both a better morphological description and discretisation. However, increasing the resolution may prevent the studied volume from being representative. Here, each sample was scaled and analysed at five resolutions. The dependence of soil properties with respect to the image resolution is discussed. As resolution decreased, the permeability and specific surface values tended to diverge from the reference value. This deterioration could be attributed to the shift of the pore size distribution towards badly resolved pores in the voxelised geometry. As long as granular soils are investigated, the volume fraction of pores smaller than six voxels in diameter should not exceed 50% to ensure the validity of permeability computation. In addition, based on an analysis of flow distribution, the volume fraction of pores smaller than four voxels should not exceed 25% in order to limit the flow rate occurring in badly discretised pores under 10%. For the Fontainebleau sand and virtual pack of spheres, the maximum voxel size meeting this criterion corresponded to 1/14 and 1/20 of the mean grain size respectively.
27

Van Hoesen, Daniel C., James C. Bendert, and Kenneth F. Kelton. "Absorption and secondary scattering of X-rays with an off-axis small beam for a cylindrical sample geometry." Acta Crystallographica Section A Foundations and Advances 75, no. 2 (2019): 362–69. http://dx.doi.org/10.1107/s2053273318017710.

Abstract:
Expressions for X-ray absorption and secondary scattering are developed for cylindrical sample geometries. The incident-beam size is assumed to be smaller than the sample and in general directed off-axis onto the cylindrical sample. It is shown that an offset beam has a non-negligible effect on both the absorption and multiple scattering terms, resulting in an asymmetric correction that must be applied to the measured scattering intensities. The integral forms of the corrections are first presented. A small-beam limit is then developed for easier computation.
28

Lieberman, Offer, Judith Rousseau, and David M. Zucker. "SMALL-SAMPLE LIKELIHOOD-BASED INFERENCE IN THE ARFIMA MODEL." Econometric Theory 16, no. 2 (2000): 231–48. http://dx.doi.org/10.1017/s0266466600162048.

Abstract:
The autoregressive fractionally integrated moving average (ARFIMA) model has become a popular approach for analyzing time series that exhibit long-range dependence. For the Gaussian case, there have been substantial advances in the area of likelihood-based inference, including development of the asymptotic properties of the maximum likelihood estimates and formulation of procedures for their computation. Small-sample inference, however, has not to date been studied. Here we investigate the small-sample behavior of the conventional and Bartlett-corrected likelihood ratio tests (LRT) for the fractional difference parameter. We derive an expression for the Bartlett correction factor. We investigate the asymptotic order of approximation of the Bartlett-corrected test. In addition, we present a small simulation study of the conventional and Bartlett-corrected LRT's. We find that for simple ARFIMA models both tests perform fairly well with a sample size of 40 but the Bartlett-corrected test generally provides an improvement over the conventional test with a sample size of 20.
29

Heath, Anna, Natalia Kunst, Christopher Jackson, et al. "Calculating the Expected Value of Sample Information in Practice: Considerations from 3 Case Studies." Medical Decision Making 40, no. 3 (2020): 314–26. http://dx.doi.org/10.1177/0272989x20912402.

Abstract:
Background. Investing efficiently in future research to improve policy decisions is an important goal. Expected value of sample information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when using a complex decision model or when optimizing over study sample sizes and designs. Recently, several more efficient EVSI approximation methods have been developed. However, these approximation methods have not been compared, and therefore their comparative performance across different examples has not been explored. Methods. We compared 4 EVSI methods using 3 previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared. Results. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems where we wish to compare multiple proposed sample sizes, when the proposed sample size is large, or when the health economic model is computationally expensive. Conclusions. As all the evaluated methods gave estimates similar to those given by traditional Monte Carlo, we suggest that EVSI can now be efficiently computed with confidence in realistic examples. No systematically superior EVSI computation method exists as the properties of the different methods depend on the underlying health economic model, data generation process, and user expertise.
30

Zhang, Zihe, Jun Liu, Xiaobing Li, and Asad J. Khattak. "Do Larger Sample Sizes Increase the Reliability of Traffic Incident Duration Models? A Case Study of East Tennessee Incidents." Transportation Research Record: Journal of the Transportation Research Board 2675, no. 6 (2021): 265–80. http://dx.doi.org/10.1177/0361198121992063.

Abstract:
Incident duration models are often developed to assist incident management and traveler information dissemination. With recent advances in data collection and management, enormous archived incident data are now available for incident model development. However, a large volume of data may present challenges to practitioners, such as data processing and computation. In addition, data that span multiple years may have inconsistency issues because of changing data collection environments and procedures. A practical question may arise in the incident modeling community: is that much data really necessary (“all-in”) to build models? If not, how much data is necessary? To answer these questions, this study investigates the relationship between data sample sizes and the reliability of incident duration analysis models. The study proposed and demonstrated a sample size determination framework through a case study using data on over 47,000 incidents. It estimated a series of hazard-based duration models with varying sample sizes. The relationships between sample size and model performance, along with estimation outcomes (i.e., coefficients and significance levels), were examined and visualized. The results showed that the variation of the estimated coefficients decreases as the sample size increases and stabilizes when the sample size reaches a critical threshold value. This critical threshold value may serve as the recommended sample size. The case study suggested that a sample size of 6,500 is enough for a reliable incident duration model. The critical value may vary significantly with different data and model specifications. Further implications are discussed in the paper.
31

Reyhani, Nima. "Multiple Spectral Kernel Learning and a Gaussian Complexity Computation." Neural Computation 25, no. 7 (2013): 1926–51. http://dx.doi.org/10.1162/neco_a_00457.

Abstract:
Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combination of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is of low-efficient rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning that efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.
32

Zhan, Xin, Lawrence M. Schwartz, M. Nafi Toksöz, Wave C. Smith, and F. Dale Morgan. "Pore-scale modeling of electrical and fluid transport in Berea sandstone." GEOPHYSICS 75, no. 5 (2010): F135–F142. http://dx.doi.org/10.1190/1.3463704.

Abstract:
The purpose of this paper is to test how well numerical calculations can predict transport properties of porous permeable rock, given its 3D digital microtomography [Formula: see text] image. For this study, a Berea 500 sandstone sample is used, whose [Formula: see text] images have been obtained with resolution of [Formula: see text]. Porosity, electrical conductivity, permeability, and surface area are calculated from the [Formula: see text] image and compared with laboratory-measured values. For transport properties (electrical conductivity, permeability), a finite-difference scheme is adopted. The calculated and measured properties compare quite well. Electrical transport in Berea 500 sandstone is complicated by the presence of surface conduction in the electric double layer at the grain-electrolyte boundary. A three-phase conductivity model is proposed to compute surface conduction on the rock [Formula: see text] image. Effects of image resolution and computation sample size on the accuracy of numerical predictions are also investigated. Reducing resolution (i.e., increasing the voxel dimensions) decreases the calculated values of electrical conductivity and hydraulic permeability. Increasing computation sample volume gives a better match between laboratory measurements and numerical results. Large sample provides a better representation of the rock.
33

Gordon, Derek, Douglas Londono, Payal Patel, Wonkuk Kim, Stephen J. Finch, and Gary A. Heiman. "An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance." Human Heredity 81, no. 4 (2016): 194–209. http://dx.doi.org/10.1159/000457135.

34

Nagaraja, Kashyap, and Ulisses Braga-Neto. "Bayesian Classification of Proteomics Biomarkers from Selected Reaction Monitoring Data using an Approximate Bayesian Computation-Markov Chain Monte Carlo Approach." Cancer Informatics 17 (January 2018): 117693511878692. http://dx.doi.org/10.1177/1176935118786927.

Abstract:
Selected reaction monitoring (SRM) has become one of the main methods for low-mass-range–targeted proteomics by mass spectrometry (MS). However, in most SRM-MS biomarker validation studies, the sample size is very small, and in particular smaller than the number of proteins measured in the experiment. Moreover, the data can be noisy due to a low number of ions detected per peptide by the instrument. In this article, those issues are addressed by a model-based Bayesian method for classification of SRM-MS data. The methodology is likelihood-free, using approximate Bayesian computation implemented via a Markov chain Monte Carlo procedure and a kernel-based Optimal Bayesian Classifier. Extensive experimental results demonstrate that the proposed method outperforms classical methods such as linear discriminant analysis and 3NN, when sample size is small, dimensionality is large, the data are noisy, or a combination of these.
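For orientation, the sketch below shows the likelihood-free ABC-MCMC idea on a deliberately simple toy problem (estimating a Gaussian mean with a uniform prior and a mean summary statistic). It is not the SRM-MS model or the kernel-based Optimal Bayesian Classifier of the cited paper; the tolerance, proposal step, and data are assumptions.

```python
# Toy ABC-MCMC sketch (generic illustration, not the cited SRM-MS classifier):
# accept a proposed parameter only when data simulated under it produce a summary
# statistic within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(1)

observed = rng.normal(2.0, 1.0, size=30)   # stand-in "observed" data
obs_stat = observed.mean()                 # summary statistic

def simulated_stat(theta, n=30):
    return rng.normal(theta, 1.0, size=n).mean()

def abc_mcmc(n_iter=20_000, eps=0.1, step=0.5):
    theta, chain = 0.0, []
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, step)     # symmetric random-walk proposal
        # Uniform(-10, 10) prior and symmetric proposal, so the Metropolis ratio
        # reduces to the likelihood-free distance check below.
        if abs(proposal) <= 10 and abs(simulated_stat(proposal) - obs_stat) < eps:
            theta = proposal
        chain.append(theta)
    return np.array(chain)

chain = abc_mcmc()
print(f"ABC-MCMC posterior mean ≈ {chain[5_000:].mean():.2f} (true value 2.0)")
```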
35

Banerjee, Upamanyu, and Ulisses M. Braga-Neto. "Bayesian ABC-MCMC Classification of Liquid Chromatography–Mass Spectrometry Data." Cancer Informatics 14s5 (January 2015): CIN.S30798. http://dx.doi.org/10.4137/cin.s30798.

Abstract:
Proteomics promises to revolutionize cancer treatment and prevention by facilitating the discovery of molecular biomarkers. Progress has been impeded, however, by the small-sample, high-dimensional nature of proteomic data. We propose the application of a Bayesian approach to address this issue in classification of proteomic profiles generated by liquid chromatography-mass spectrometry (LC-MS). Our approach relies on a previously proposed model of the LC-MS experiment, as well as on the theory of the optimal Bayesian classifier (OBC). Computation of the OBC requires the combination of a likelihood-free methodology called approximate Bayesian computation (ABC) as well as Markov chain Monte Carlo (MCMC) sampling. Numerical experiments using synthetic LC-MS data based on an actual human proteome indicate that the proposed ABC-MCMC classification rule outperforms classical methods such as support vector machines, linear discriminant analysis, and 3-nearest neighbor classification rules in the case when sample size is small or the number of selected proteins used to classify is large.
36

Abramson, Corey M., Jacqueline Joslyn, Katharine A. Rendle, Sarah B. Garrett, and Daniel Dohan. "The promises of computational ethnography: Improving transparency, replicability, and validity for realist approaches to ethnographic analysis." Ethnography 19, no. 2 (2017): 254–84. http://dx.doi.org/10.1177/1466138117725340.

Abstract:
This article argues the advance of computational methods for analyzing, visualizing and disseminating social scientific data can provide substantial tools for ethnographers operating within the broadly realist ‘normal-scientific tradition’ (NST). While computation does not remove the fundamental challenges of method and measurement that are central to social research, new technologies provide resources for leveraging what NST researchers see as ethnography’s strengths (e.g. the production of in situ observations of people over time) while addressing what NST researchers see as ethnography’s weaknesses (e.g. questions of sample size, generalizability and analytical transparency). Specifically, we argue computational tools can help: (1) scale ethnography, (2) improve transparency, (3) allow basic replications, and (4) ultimately address fundamental concerns about internal and external validity. We explore these issues by illustrating the utility of three forms of ethnographic visualization enabled by computational advances – ethnographic heatmaps (ethnoarrays), a combination of participant observation data with techniques from social network analysis (SNA), and text mining. In doing so, we speak to the potential uses and challenges of nascent ‘computational ethnography.’
37

Song, Hunsoo, Yonghyun Kim, and Yongil Kim. "A Patch-Based Light Convolutional Neural Network for Land-Cover Mapping Using Landsat-8 Images." Remote Sensing 11, no. 2 (2019): 114. http://dx.doi.org/10.3390/rs11020114.

Abstract:
This study proposes a light convolutional neural network (LCNN) well-fitted for medium-resolution (30-m) land-cover classification. The LCNN attains high accuracy without overfitting, even with a small number of training samples, and has lower computational costs due to its much lighter design compared to typical convolutional neural networks for high-resolution or hyperspectral image classification tasks. The performance of the LCNN was compared to that of a deep convolutional neural network, support vector machine (SVM), k-nearest neighbors (KNN), and random forest (RF). SVM, KNN, and RF were tested with both patch-based and pixel-based systems. Three 30 km × 30 km test sites of the Level II National Land Cover Database were used for reference maps to embrace a wide range of land-cover types, and a single-date Landsat-8 image was used for each test site. To evaluate the performance of the LCNN according to the sample sizes, we varied the sample size to include 20, 40, 80, 160, and 320 samples per class. The proposed LCNN achieved the highest accuracy in 13 out of 15 cases (i.e., at three test sites with five different sample sizes), and the LCNN with a patch size of three produced the highest overall accuracy of 61.94% from 10 repetitions, followed by SVM (61.51%) and RF (61.15%) with a patch size of three. Also, the statistical significance of the differences between LCNN and the other classifiers was reported. Moreover, by introducing the heterogeneity value (from 0 to 8) representing the complexity of the map, we demonstrated the advantage of patch-based LCNN over pixel-based classifiers, particularly at moderately heterogeneous pixels (from 1 to 4), with respect to accuracy (LCNN is 5.5% and 6.3% more accurate for a training sample size of 20 and 320 samples per class, respectively). Finally, the computation times of the classifiers were calculated, and the LCNN was confirmed to have an advantage in large-area mapping.
38

Rubinacci, Simone, Olivier Delaneau, and Jonathan Marchini. "Genotype imputation using the Positional Burrows Wheeler Transform." PLOS Genetics 16, no. 11 (2020): e1009049. http://dx.doi.org/10.1371/journal.pgen.1009049.

Abstract:
Genotype imputation is the process of predicting unobserved genotypes in a sample of individuals using a reference panel of haplotypes. In the last 10 years reference panels have increased in size by more than 100 fold. Increasing reference panel size improves accuracy of markers with low minor allele frequencies but poses ever increasing computational challenges for imputation methods. Here we present IMPUTE5, a genotype imputation method that can scale to reference panels with millions of samples. This method continues to refine the observation made in the IMPUTE2 method, that accuracy is optimized via use of a custom subset of haplotypes when imputing each individual. It achieves fast, accurate, and memory-efficient imputation by selecting haplotypes using the Positional Burrows Wheeler Transform (PBWT). By using the PBWT data structure at genotyped markers, IMPUTE5 identifies locally best matching haplotypes and long identical by state segments. The method then uses the selected haplotypes as conditioning states within the IMPUTE model. Using the HRC reference panel, which has ∼65,000 haplotypes, we show that IMPUTE5 is up to 30x faster than MINIMAC4 and up to 3x faster than BEAGLE5.1, and uses less memory than both these methods. Using simulated reference panels we show that IMPUTE5 scales sub-linearly with reference panel size. For example, keeping the number of imputed markers constant, increasing the reference panel size from 10,000 to 1 million haplotypes requires less than twice the computation time. As the reference panel increases in size IMPUTE5 is able to utilize a smaller number of reference haplotypes, thus reducing computational cost.
39

Yadav, V., and A. M. Michalak. "Improving computational efficiency in large linear inverse problems: an example from carbon dioxide flux estimation." Geoscientific Model Development 6, no. 3 (2013): 583–90. http://dx.doi.org/10.5194/gmd-6-583-2013.

Abstract:
Addressing a variety of questions within Earth science disciplines entails the inference of the spatiotemporal distribution of parameters of interest based on observations of related quantities. Such estimation problems often represent inverse problems that are formulated as linear optimization problems. Computational limitations arise when the number of observations and/or the size of the discretized state space becomes large, especially if the inverse problem is formulated in a probabilistic framework and therefore aims to assess the uncertainty associated with the estimates. This work proposes two approaches to lower the computational costs and memory requirements for large linear space–time inverse problems, taking the Bayesian approach for estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example. The first algorithm can be used to efficiently multiply two matrices, as long as one can be expressed as a Kronecker product of two smaller matrices, a condition that is typical when multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems. The second algorithm can be used to compute a posteriori uncertainties directly at aggregated spatiotemporal scales, which are the scales of most interest in many inverse problems. Both algorithms have significantly lower memory requirements and computational complexity relative to direct computation of the same quantities (O(n^2.5) vs. O(n^3)). For an examined benchmark problem, the two algorithms yielded massive savings in floating point operations relative to direct computation of the same quantities. Sample computer codes are provided for assessing the computational and memory efficiency of the proposed algorithms for matrices of different dimensions.
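The first algorithm rests on a standard Kronecker identity, (A ⊗ B) vec(X) = vec(B X A^T). The sketch below illustrates that generic trick in NumPy (it is not the authors' released code), multiplying (A ⊗ B) by a matrix without ever forming the Kronecker product.

```python
# Generic sketch of the Kronecker "vec trick" behind the first algorithm above
# (illustration only, not the authors' code): compute (A ⊗ B) @ C without
# forming A ⊗ B, using (A ⊗ B) vec(X) = vec(B X A^T).
import numpy as np

def kron_matmul(A, B, C):
    """(A ⊗ B) @ C for A (m, n), B (p, q), C (n*q, k), returned as (m*p, k)."""
    m, n = A.shape
    p, q = B.shape
    out = np.empty((m * p, C.shape[1]))
    for j in range(C.shape[1]):
        X = C[:, j].reshape(q, n, order="F")          # unstack vec(X), column-major
        out[:, j] = (B @ X @ A.T).ravel(order="F")    # restack vec(B X A^T)
    return out

# Check against the direct, memory-hungry computation on a small example.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 5))
B = rng.normal(size=(4, 3))
C = rng.normal(size=(15, 2))
assert np.allclose(kron_matmul(A, B, C), np.kron(A, B) @ C)
print("vec-trick result matches np.kron(A, B) @ C")
```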
40

Fan, Jianqing, Fang Han, and Han Liu. "Challenges of Big Data analysis." National Science Review 1, no. 2 (2014): 293–314. http://dx.doi.org/10.1093/nsr/nwt032.

Abstract:
Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This paper gives an overview of the salient features of Big Data and how these features drive a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
41

Zhang, Jian, and Ling Shen. "Applied Technology in an Adaptive Particle Filter Based on Interval Estimation and KLD-Resampling." Advanced Materials Research 1014 (July 2014): 452–58. http://dx.doi.org/10.4028/www.scientific.net/amr.1014.452.

Abstract:
The particle filter, as a sequential Monte Carlo method, is widely applied in stochastic sampling for state estimation within a recursive Bayesian filtering framework. The efficiency and accuracy of the particle filter depend on the number of particles and on the relocating method. Automatic selection of the sample size for a given task is therefore essential for reducing unnecessary computation and achieving optimal performance, especially when the posterior distribution varies greatly over time. This paper presents an adaptive resampling method (IE_KLD_PF) based on interval estimation: after estimating an interval for the expectation of the system states, the new algorithm adopts the Kullback-Leibler distance (KLD) to determine the number of particles to resample from the interval and updates the filter results with the current observation information. Simulations show that the proposed filter can reduce the average number of samples significantly compared to the fixed-sample-size particle filter.
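The KLD criterion used here traces back to Fox's KLD-sampling bound, which picks the number of particles so that, with probability 1 − δ, the KL divergence between the particle approximation and the true posterior stays below ε given k occupied histogram bins. The helper below sketches that standard bound; the interval-estimation step of the cited IE_KLD_PF method is not reproduced, and the ε, δ defaults are assumptions.

```python
# Sketch of the standard KLD-sampling bound (Fox, 2001) used by adaptive particle
# filters to choose the number of particles; the interval-estimation part of the
# cited IE_KLD_PF method is not reproduced here.
import math

def kld_sample_size(k, eps=0.05, delta=0.01):
    """Particles needed so that, with prob. 1 - delta, the KL divergence between
    the sampled and true posterior stays below eps, given k occupied bins."""
    if k <= 1:
        return 1
    z = 2.3263                      # standard-normal quantile for 1 - delta = 0.99
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * eps) * (1.0 - a + math.sqrt(a) * z) ** 3)

print([kld_sample_size(k) for k in (2, 10, 50, 200)])  # grows roughly linearly in k
```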
42

Zhao, Ji, and Deyu Meng. "FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test." Neural Computation 27, no. 6 (2015): 1345–72. http://dx.doi.org/10.1162/neco_a_00732.

Abstract:
The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its availability to large-scale applications. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components based on Bochner’s theorem and Fourier transform (Rahimi & Recht, 2007 ). Taking advantage of sampling the Fourier transform, FastMMD decreases the time complexity for MMD calculation from [Formula: see text] to [Formula: see text], where N and d are the size and dimension of the sample set, respectively. Here, L is the number of basis functions for approximating kernels that determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to [Formula: see text] by using the Fastfood technique (Le, Sarlós, & Smola, 2013 ). The uniform convergence of our method has also been theoretically proved in both unbiased and biased estimates. We also provide a geometric explanation for our method, ensemble of circular discrepancy, which helps us understand the insight of MMD and we hope will lead to more extensive metrics for assessing the two-sample test task. Experimental results substantiate that the accuracy of FastMMD is similar to that of MMD and with faster computation and lower variance than existing MMD approximation methods.
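The acceleration rests on replacing the kernel with an explicit random Fourier feature map. The sketch below shows that general idea for a Gaussian kernel, giving a biased MMD estimate in O(LNd) time; the bandwidth, feature count, and data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a random-Fourier-feature approximation to the (biased) Gaussian-kernel
# MMD (the O(L N d) idea FastMMD builds on); not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def rff_mmd(X, Y, sigma=1.0, L=512):
    """Approximate MMD between samples X (n, d) and Y (m, d) via L random features."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, L))        # spectral draws for the Gaussian kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=L)
    phi = lambda Z: np.sqrt(2.0 / L) * np.cos(Z @ W + b)  # explicit feature map
    return np.linalg.norm(phi(X).mean(axis=0) - phi(Y).mean(axis=0))

X = rng.normal(0.0, 1.0, size=(2000, 5))
Y = rng.normal(0.5, 1.0, size=(2000, 5))                  # mean-shifted alternative
print(rff_mmd(X[:1000], X[1000:]), rff_mmd(X, Y))         # near zero vs. clearly positive
```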
43

Yadav, V., and A. M. Michalak. "Technical Note: Improving computational efficiency in large linear inverse problems: an example from carbon dioxide flux estimation." Geoscientific Model Development Discussions 5, no. 4 (2012): 3325–42. http://dx.doi.org/10.5194/gmdd-5-3325-2012.

Abstract:
Addressing a variety of questions within Earth science disciplines entails the inference of the spatio-temporal distribution of parameters of interest based on observations of related quantities. Such estimation problems often represent inverse problems that are formulated as linear optimization problems. Computational limitations arise when the number of observations and/or the size of the discretized state space become large, especially if the inverse problem is formulated in a probabilistic framework and therefore aims to assess the uncertainty associated with the estimates. This work proposes two approaches to lower the computational costs and memory requirements for large linear space-time inverse problems, taking the Bayesian approach for estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example. The first algorithm can be used to efficiently multiply two matrices, as long as one can be expressed as a Kronecker product of two smaller matrices, a condition that is typical when multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems. The second algorithm can be used to compute a posteriori uncertainties directly at aggregated spatio-temporal scales, which are the scales of most interest in many inverse problems. Both algorithms have significantly lower memory requirements and computational complexity relative to direct computation of the same quantities (O(n^2.5) vs. O(n^3)). For an examined benchmark problem, the two algorithms yielded a three and six order of magnitude increase in computational efficiency, respectively, relative to direct computation of the same quantities. Sample computer code is provided for assessing the computational and memory efficiency of the proposed algorithms for matrices of different dimensions.
44

Fan, Duan, Mei Lan Qi, and Hong Liang He. "Damage Distribution of High Purity Aluminum under the Impact Loading." Advanced Materials Research 160-162 (November 2010): 1001–5. http://dx.doi.org/10.4028/www.scientific.net/amr.160-162.1001.

Abstract:
Knowledge of the damage distribution is essential for understanding the dynamic failure behavior of solid materials under high-velocity impact. For high-purity aluminum (99.999%), a disk sample was shock impacted by a light gun and its damage distribution was carefully characterized. The recovered sample was cut symmetrically along the impact direction and the damage on the cross section was statistically studied. Unlike the previous work reported by Lynn Seaman et al., a new computational treatment has been established based on the Schwartz-Saltykov method, which gives a simple transformation from the two-dimensional size distribution to the three-dimensional one. We demonstrate the variation of the damage distribution of high-purity aluminum under different dynamic tensile loadings and discuss the damage evolution characteristics associated with micro-void nucleation, growth and coalescence. The results provide a physical basis for the theoretical modeling and numerical simulation of spall fracture.
45

Ruf, B., B. Erdnuess, and M. Weinmann. "DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 24, 2017): 325–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-325-2017.

Abstract:
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
46

Yuan, Hongyan, Jonah H. Lee, and James E. Guilkey. "Stochastic reconstruction of the microstructure of equilibrium form snow and computation of effective elastic properties." Journal of Glaciology 56, no. 197 (2010): 405–14. http://dx.doi.org/10.3189/002214310792447770.

Abstract:
Three-dimensional geometric descriptions of microstructure are indispensable to obtain the structure–property relationships of snow. Because snow is a random heterogeneous material, it is often helpful to construct stochastic geometric models that can be used to model physical and mechanical properties of snow. In the present study, the Gaussian random field-based stochastic reconstruction of the sieved and sintered dry-snow sample with grain size less than 1 mm is investigated. The one- and two-point correlation functions of the snow samples are used as input for the stochastic snow model. Several statistical descriptors not used as input to the stochastic reconstruction are computed for the real and reconstructed snow to assess the quality of the reconstructed images. For the snow samples and the reconstructed snow microstructure, we also estimate the mechanical properties and the size of the associated representative volume element using numerical simulations as additional assessment of the quality of the reconstructed images. The results indicate that the stochastic reconstruction technique used in this paper is reasonably accurate, robust and highly efficient in numerical computations for the high-density snow samples we consider.
47

Spooner, S., and X. L. Wang. "Diffraction Peak Displacement in Residual Stress Samples Due to Partial Burial of the Sampling Volume." Journal of Applied Crystallography 30, no. 4 (1997): 449–55. http://dx.doi.org/10.1107/s0021889897000174.

Abstract:
Near-surface measurement of residual strain and stress with neutron scattering complements and extends the surface residual stress measurements by X-ray diffraction. However, neutron diffraction measurements near surfaces are sensitive to scattering volume alignment, neutron beam wavelength spread and beam collimation and, unless properly understood, can give large fictitious strains. An analytic calculation and a numerical computation of neutron diffraction peak shifts due to partial burial of the sampling volume have been made and are compared with experimental measurement. Peak shifts in a strain-free nickel sample were determined for conditions where the sample surface is displaced so that the scattering gage volume is partially buried in the sample. The analytic and numerically computed peak shifts take into account the beam collimation, neutron source size, monochromator crystal mosaic spread and the collection of diffracted intensity with a linear position-sensitive counter.
48

Qin, Peng-Ju, Zhang-Rong Liu, Xiao-Ling Lai, Yong-Bao Wang, Zhi-Wei Song, and Chen-Xi Miao. "A New Method to Determine the Spatial Sensitivity of Time Domain Reflectometry Probes Based on Three-Dimensional Weighting Theory." Water 12, no. 2 (2020): 545. http://dx.doi.org/10.3390/w12020545.

Abstract:
The time domain reflectometry (TDR) method has been widely used to measure soil water content for agriculture and engineering applications. Quick design and optimization of the probe is crucial to achieving practical utilization. Generally, the two-dimensional weighting theory, calculation of the spatial sensitivity of TDR probes in the plane transverse to the direction of electromagnetic wave propagation, and relevant numerical simulation techniques can be used to solve such issues. However, it is difficult to tackle specific problems such as complex probe shape, end effect, and so forth. In order to solve these issues, a method including a three-dimensional weighting theory and the relevant numerical simulation technique was proposed and verified to confirm the feasibility of this method by means of comparing the existing experimental results and the computational values. First, a shaft probe was used to determine the impact of the shaft on the effective dielectric constant of the probe. Then, three-rod probes were calibrated by a sample with a special shape and water-level variations around the probe using the proposed method to determine the values of the apparent dielectric constant. In addition, model boundary size and end effect were also considered in the computation of dielectric constants. Results showed that, compared with the experimental and computational data, the newly proposed method calculated the measurement sensitivity of the shaft probes well. It was also observed that experimental dielectric constant values were slightly different from computational ones, not only with a vertical probe but also with a horizontal probe. Moreover, it was found that there was a slight influence of sample shape and end effect on the apparent dielectric constant, but model boundary size has a certain impact on the values. Overall, the new method can provide benefits in the design and optimization of the probe.
49

Klusemann, Benjamin, and Swantje Bargmann. "Modeling and simulation of size effects in metallic glasses with a nonlocal continuum mechanics theory." Journal of the Mechanical Behavior of Materials 22, no. 1-2 (2013): 51–66. http://dx.doi.org/10.1515/jmbm-2013-0009.

Abstract:
The present contribution is concerned with the modeling and computation of size effects in metallic glasses. For the underlying model description, we resort to a thermodynamically consistent, gradient-extended continuum mechanics approach. The numerical implementation is carried out with the help of the finite element method. Numerical examples are presented and compared with existing experimental findings to illustrate the performance of the constitutive model. In this regard, the influence of the material length scale is investigated. It is shown that with decreasing sample size or decreasing material length scale, a delay of the shear localization is obtained. In addition, the tension-compression asymmetry observed in experiments is captured by the proposed model. Further, the rate-dependent behavior and the sensitivity of the results to initial local defects are investigated.
50

Kumar, Nitin, R. K. Agrawal, and Ajay Jaiswal. "Incremental and Decremental Exponential Discriminant Analysis for Face Recognition." International Journal of Computer Vision and Image Processing 4, no. 1 (2014): 40–55. http://dx.doi.org/10.4018/ijcvip.2014010104.

Abstract:
Linear Discriminant Analysis (LDA) is widely used for feature extraction in face recognition but suffers from small sample size (SSS) problem in its original formulation. Exponential discriminant analysis (EDA) is one of the variants of LDA suggested recently to overcome this problem. For many real time systems, it may not be feasible to have all the data samples in advance before the actual model is developed. The new data samples may appear in chunks at different points of time. In this paper, the authors propose incremental formulation of EDA to avoid learning from scratch. The proposed incremental algorithm takes less computation time and memory. Experiments are performed on three publicly available face datasets. Experimental results demonstrate the effectiveness of the proposed incremental formulation in comparison to its batch formulation in terms of computation time and memory requirement. Also, the proposed incremental algorithms (IEDA, DEDA) outperform incremental formulation of LDA in terms of classification accuracy.
