Academic literature on the topic 'Small sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Small sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Small sample size"

1

Yang, Shengping, and Gilbert Berdine. "'Small' sample size." Southwest Respiratory and Critical Care Chronicles 11, no. 49 (2023): 52–55. http://dx.doi.org/10.12746/swrccc.v11i49.1251.

2

Student. "Small Sample Size Scientist." Pediatrics 83, no. 3 (1989): A72. http://dx.doi.org/10.1542/peds.83.3.a72a.

Abstract:
The believer in the law of small numbers practices science as follows: 1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power. 2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance. 3. In evaluating replications, his or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals. 4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.
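
The overconfidence the passage describes is easy to reproduce numerically. As an illustration in the spirit of the text (not code from the article), the following Python sketch estimates the power of a two-sample t-test for a moderate true effect (half a standard deviation) at n = 10 versus n = 100 per group; the small-sample "gamble" fails far more often than intuition suggests:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
effect = 0.5  # assumed true mean difference, in standard-deviation units

def power(n, reps=5000, alpha=0.05):
    """Fraction of simulated experiments reaching p < alpha."""
    hits = 0
    for _ in range(reps):
        a = rng.standard_normal(n)           # control group
        b = rng.standard_normal(n) + effect  # treatment group
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print(power(10))   # about 0.18: most small-sample attempts miss a real effect
print(power(100))  # about 0.94
```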
3

Van Oort, Michiel J. M., and Mary Anne White. "Automated, small sample‐size adiabatic calorimeter." Review of Scientific Instruments 58, no. 7 (1987): 1239–41. http://dx.doi.org/10.1063/1.1139445.

4

Kumar, A. "The Sample Size." Journal of Universal College of Medical Sciences 2, no. 1 (2014): 45–47. http://dx.doi.org/10.3126/jucms.v2i1.10493.

Abstract:
Finding an "appropriate sample size" has been the most basic and foremost problem; a research worker is always faced with, in all sampling based analytical researches. This is so, since a very large sized sample results to unnecessary wastage of resources, while a very small sized sample may affect adversely the accuracy of sample estimates and thus in turn losing the very efficacy of selected sampling plan. The present paper attempts to highlight the main determinant factors and the analytical approach towards estimation ofrequired sample size, along with a few illustrations. DOI: http://dx.doi.org/10.3126/jucms.v2i1.10493 Journal of Universal College of Medical Sciences (2014) Vol.2(1): 45-47
5

Korn, Edward L. "Sample Size Tables for Bounding Small Proportions." Biometrics 42, no. 1 (1986): 213. http://dx.doi.org/10.2307/2531259.

6

VanSant, Ann F. "The Dilemma of the Small Sample Size." Pediatric Physical Therapy 15, no. 3 (2003): 145. http://dx.doi.org/10.1097/01.pep.0000087995.07869.49.

7

Khobragade, Vandana, M. S. Pradeep Kumar Patnaik, and Srinivasa Rao Sura. "Revaluating Pretraining in Small Size Training Sample Regime." International Journal of Electrical and Electronics Research 10, no. 3 (2022): 694–704. http://dx.doi.org/10.37391/ijeer.100346.

Abstract:
Deep neural network (DNN) based models are highly acclaimed in medical image classification, and existing DNN architectures are claimed to be at the forefront of image classification. These models require very large datasets to classify images with a high level of accuracy, but they fail to perform when trained on datasets of small size. Low accuracy and overfitting are the problems observed when small medical datasets are used to train a classifier using deep learning models such as Convolutional Neural Networks (CNNs). These existing methods and models either overfit when trained on such small datasets or yield classification accuracy that tends towards randomness. This issue stands even when using Transfer Learning (TL), the current standard for such a scenario. In this paper, we have tested several models, including ResNet and VGGs along with more modern models like MobileNets, on different medical datasets with and without transfer learning. We have proposed solid theories as to why there exists a need for a more novel approach to this issue, and how the current methodologies fail when applied to the aforementioned datasets. Larger, more complex models are not able to converge on smaller datasets; smaller models with less complexity perform better on the same dataset than their larger counterparts.
8

Kuo, Bor-Chen, and Kuang-Yu Chang. "Feature Extractions for Small Sample Size Classification Problem." IEEE Transactions on Geoscience and Remote Sensing 45, no. 3 (2007): 756–64. http://dx.doi.org/10.1109/tgrs.2006.885074.

9

Rugini, Luca, Paolo Banelli, and Geert Leus. "Small Sample Size Performance of the Energy Detector." IEEE Communications Letters 17, no. 9 (2013): 1814–17. http://dx.doi.org/10.1109/lcomm.2013.080813.131399.

10

Bacchetti, Peter. "Small sample size is not the real problem." Nature Reviews Neuroscience 14, no. 8 (2013): 585. http://dx.doi.org/10.1038/nrn3475-c3.


Dissertations / Theses on the topic "Small sample size"

1

Serra, Puertas Jorge. "Shrinkage corrections of sample linear estimators in the small sample size regime." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/404386.

Abstract:
We are living in a data deluge era where the dimensionality of the data gathered by inexpensive sensors is growing at a fast pace, whereas the availability of independent samples of the observed data is limited. Thus, classical statistical inference methods, which rely on the assumption that the sample size is large compared to the observation dimension, suffer a severe performance degradation. Within this context, this thesis focuses on a popular problem in signal processing: the estimation of a parameter observed through a linear model. This inference is commonly based on a linear filtering of the data; for instance, beamforming in array signal processing, where a spatial filter steers the beampattern of the antenna array towards a direction to obtain the signal of interest (SOI). In signal processing the design of the optimal filters relies on the optimization of performance measures such as the Mean Square Error (MSE) and the Signal to Interference plus Noise Ratio (SINR). When the first two moments of the SOI are known, the optimization of the MSE leads to the Linear Minimum Mean Square Error (LMMSE) estimator. When such statistical information is not available, one may force a no-distortion constraint towards the SOI in the optimization of the MSE, which is equivalent to maximizing the SINR. This leads to the Minimum Variance Distortionless Response (MVDR) method. The LMMSE and MVDR are optimal, though unrealizable in general, since they depend on the inverse of the data correlation matrix, which is not known. The common approach to circumvent this problem is to substitute it with the inverse of the sample correlation matrix (SCM), leading to the sample LMMSE and sample MVDR. This approach is optimal when the number of available statistical samples tends to infinity for a fixed observation dimension. This large sample size scenario hardly holds in practice, and the sample methods undergo large performance degradations in the small sample size regime, which may be due to short stationarity constraints or to a system with a high observation dimension. The aim of this thesis is to propose corrections of sample estimators, such as the sample LMMSE and MVDR, to circumvent their performance degradation in the small sample size regime. To this end, two powerful tools are used: shrinkage estimation and random matrix theory (RMT). Shrinkage estimation introduces a structure on the filters that forces some corrections in small sample size situations; it improves sample-based estimators by optimizing a bias-variance tradeoff. As direct optimization of these shrinkage methods leads to unrealizable estimators, a consistent estimate of the optimal shrinkage estimators is obtained within the general asymptotics where both the observation dimension and the sample size tend to infinity at a fixed rate. That is, RMT is used to obtain consistent estimates within an asymptotic regime that deals naturally with the small sample size. This RMT approach does not require any assumptions about the distribution of the observations. The proposed filters deal directly with the estimation of the SOI, which leads to performance gains compared to related methods based on optimizing a metric related to the data covariance estimate or on rather ad-hoc regularizations of the SCM. Compared to related methods which also treat directly the estimation of the SOI and which are based on a shrinkage of the SCM, the proposed filter structure is more general: it contemplates corrections of the inverse of the SCM and contains the related methods as particular cases. This leads to performance gains which are notable when there is a mismatch in the signature vector of the SOI. This mismatch and the small sample size are the main sources of degradation of the sample LMMSE and MVDR. Thus, in the last part of this thesis, unlike the previously proposed filters and the related work, we propose a filter which treats both sources of degradation directly.
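
To make the flavor of these corrections concrete (a generic illustration of the simplest member of this family, not the estimator derived in the thesis), the sketch below contrasts a sample MVDR filter built from the plain SCM inverse with one built from a diagonally loaded, i.e. shrinkage-regularized, SCM:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 40, 50                   # observation dimension close to the sample size
s = np.ones(dim) / np.sqrt(dim)   # assumed signature (steering) vector of the SOI

X = rng.standard_normal((n, dim))  # placeholder snapshots; real ones come from the array
scm = X.T @ X / n                  # sample correlation matrix (SCM)

def mvdr(R, s):
    """Sample MVDR filter: w = R^{-1} s / (s^T R^{-1} s)."""
    Ri_s = np.linalg.solve(R, s)
    return Ri_s / (s @ Ri_s)

w_plain = mvdr(scm, s)  # degrades badly when n is close to dim

alpha = 0.1  # fixed shrinkage intensity; the thesis instead estimates it via RMT
R_shrunk = (1 - alpha) * scm + alpha * (np.trace(scm) / dim) * np.eye(dim)
w_shrunk = mvdr(R_shrunk, s)  # better-conditioned inverse, lower-variance filter
```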
2

Tongur, Can. "Small sample performances of two tests for overidentifying restrictions." Thesis, Uppsala University, Department of Economics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6367.

Abstract:
Two new specification tests for overidentifying restrictions proposed by Hahn and Hausman (2002b) are here tested and compared to the classical Sargan test. Power properties are found to be very similar in overall performance, while Sargan generally has better size than the new tests. Also, size is distorted for one of the new tests, and thus a tendency to reject prevails. In addition, sometimes severe bias is found which affects the tests' performances, something that differs from earlier studies.
3

Wood, Scott William. "Differential item functioning procedures for polytomous items when examinee sample sizes are small." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1110.

Abstract:
As part of test score validity, differential item functioning (DIF) is a quantitative characteristic used to evaluate potential item bias. In applications where a small number of examinees take a test, statistical power of DIF detection methods may be affected. Researchers have proposed modifications to DIF detection methods to account for small focal group examinee sizes for the case when items are dichotomously scored. These methods, however, have not been applied to polytomously scored items. Simulated polytomous item response strings were used to study the Type I error rates and statistical power of three popular DIF detection methods (Mantel test/Cox's β, Liu-Agresti statistic, HW3) and three modifications proposed for contingency tables (empirical Bayesian, randomization, log-linear smoothing). The simulation considered two small sample size conditions, the case with 40 reference group and 40 focal group examinees and the case with 400 reference group and 40 focal group examinees. In order to compare statistical power rates, it was necessary to calculate the Type I error rates for the DIF detection methods and their modifications. Under most simulation conditions, the unmodified, randomization-based, and log-linear smoothing-based Mantel and Liu-Agresti tests yielded Type I error rates around 5%. The HW3 statistic was found to yield higher Type I error rates than expected for the 40 reference group examinees case, rendering power calculations for these cases meaningless. Results from the simulation suggested that the unmodified Mantel and Liu-Agresti tests yielded the highest statistical power rates for the pervasive-constant and pervasive-convergent patterns of DIF, as compared to other DIF method alternatives. Power rates improved by several percentage points if log-linear smoothing methods were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. Power rates did not improve if Bayesian methods or randomization tests were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. ANOVA tests showed that statistical power was higher when 400 reference examinees were used versus 40 reference examinees, when impact was present among examinees versus when impact was not present, and when the studied item was excluded from the anchor test versus when the studied item was included in the anchor test. Statistical power rates were generally too low to merit practical use of these methods in isolation, at least under the conditions of this study.
4

Tanner, Whitney Ford. "Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies." UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/20.

Abstract:
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated, and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; however, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates will consistently result. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods to attain nominal type I error rates more consistently than other methods in a variety of settings: first, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model. Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept applies assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, that alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show that the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show that the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
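
The degrees-of-freedom idea in the second finding is simple to state in code. A hedged sketch (hypothetical numbers, not output from the dissertation): given a GEE coefficient and a bias-corrected standard error, replace the usual normal reference for the Wald statistic with a t distribution on (subjects - parameters) degrees of freedom:

```python
from scipy.stats import norm, t

# beta_hat and se_corrected would come from a GEE fit with a small-sample
# bias correction (e.g., the AVG MD KC correction studied above);
# the numbers here are purely illustrative.
beta_hat, se_corrected = 0.42, 0.19
n_subjects, n_params = 20, 3

wald = beta_hat / se_corrected
p_normal = 2 * norm.sf(abs(wald))                    # liberal with few subjects
p_t = 2 * t.sf(abs(wald), df=n_subjects - n_params)  # adjustment from the abstract

print(round(p_normal, 3), round(p_t, 3))  # ~0.027 vs ~0.041: t is more conservative
```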
5

Heard, Astrid. "APPLICATION OF STATISTICAL METHODS IN RISK AND RELIABILITY." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2602.

Abstract:
The dissertation considers construction of confidence intervals for a cumulative distribution function F(z) and its inverse at some fixed points z and u, on the basis of an i.i.d. sample where the sample size is relatively small. The sample is modeled as having the flexible Generalized Gamma distribution with all three parameters unknown. This approach can be viewed as an alternative to nonparametric techniques, which do not specify the distribution of X and lead to less efficient procedures. The confidence intervals are constructed by objective Bayesian methods and use the Jeffreys noninformative prior. Performance of the resulting confidence intervals is studied via Monte Carlo simulations and compared to the performance of nonparametric confidence intervals based on binomial proportions. In addition, techniques for change point detection are analyzed and further evaluated via Monte Carlo simulations. The effect of a change point on the interval estimators is studied both analytically and via Monte Carlo simulations.
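
For intuition only: the dissertation builds objective Bayesian intervals under a Jeffreys prior, but a rough frequentist stand-in with the same ingredients (small i.i.d. sample, Generalized Gamma model, interval for a quantile of F) is a parametric bootstrap, sketched here with assumed parameter values:

```python
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(1)
# small i.i.d. sample from an assumed Generalized Gamma (parameters unknown to the analyst)
data = gengamma.rvs(2.0, 1.5, scale=10.0, size=15, random_state=rng)

a_hat, c_hat, _, scale_hat = gengamma.fit(data, floc=0)  # MLE with location fixed at 0

# parametric bootstrap for the 0.9 quantile F^{-1}(0.9)
boot = []
for _ in range(500):
    res = gengamma.rvs(a_hat, c_hat, scale=scale_hat, size=len(data), random_state=rng)
    a_b, c_b, _, s_b = gengamma.fit(res, floc=0)
    boot.append(gengamma.ppf(0.9, a_b, c_b, scale=s_b))

print(np.percentile(boot, [2.5, 97.5]))  # rough 95% interval for the quantile
```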
6

Hartley, Michael A. "A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data." Thesis, Monterey California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1260.

Abstract:
Approved for public release; distribution is unlimited. Budget limitations have reduced the number of military components available for testing, and time constraints have reduced the amount of time available for actual testing, resulting in many items still operating at the end of test cycles. These two factors produce small test populations (small sample sizes) with "heavily" censored data. The assumption of "normal approximation" for estimates based on these small sample sizes reduces the accuracy of confidence bounds of the probability plots and the associated quantities. This creates a problem in acquisition analysis because the confidence in the probability estimates influences the number of spare parts required to support a mission or deployment, or determines the length of warranty ensuring proper operation of systems. This thesis develops a method that simulates small samples with censored data and examines the error of the Fisher-Matrix (FM) and Likelihood Ratio Bounds (LRB) confidence methods for two test populations (size 10 and 20) with three, five, seven, and nine observed failures for the Weibull distribution. The thesis includes a Monte Carlo simulation code written in S-Plus that can be modified by the user to meet their particular needs for any sampling and censoring scheme. To illustrate the approach, the thesis includes a catalog of corrected confidence bounds for the Weibull distribution, which can be used by acquisition analysts to adjust their confidence bounds and obtain a more accurate representation for warranty and reliability work.
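
The thesis's simulation was written in S-Plus; as a minimal modern sketch of the same mechanics (assumed parameters, and a deliberately naive fit that ignores the censoring a real analysis must model), one can draw small Weibull samples, censor them at the end of the test, and count usable replicates:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
shape, scale = 1.5, 100.0   # assumed true Weibull parameters
n, censor_time = 10, 60.0   # small test population; the test ends at t = 60
n_reps = 1000

kept = 0
for _ in range(n_reps):
    lifetimes = weibull_min.rvs(shape, scale=scale, size=n, random_state=rng)
    failed = lifetimes <= censor_time   # items that failed before the test ended
    if failed.sum() < 3:                # mirror the "at least 3 observed failures" cases
        continue
    # naive MLE on the failures only; the thesis's analysis uses proper
    # censored-data methods (Fisher-Matrix and Likelihood Ratio Bounds)
    c_hat, _, scale_hat = weibull_min.fit(lifetimes[failed], floc=0)
    kept += 1

print(kept, "of", n_reps, "replicates had at least 3 observed failures")
```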
7

Hartley, Michael A. "A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data." Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FHartley.pdf.

8

Prohorenko, Didrik. "A forecasting approach to estimating cartel damages : The importance of considering estimation uncertainty." Thesis, Södertörns högskola, Nationalekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-41021.

Abstract:
In this study, I consider the performance of simple forecast models frequently applied in counterfactual analysis when the information at hand is limited. Furthermore, I discuss the robustness of the standard t-test commonly used to statistically detect cartels. I empirically verify that the standard t-statistic encompasses parameter estimation uncertainty when one of the time series in a two-sided t-test has been estimated. Thereafter, I compare the results with those from a recently proposed corrected t-test, where the uncertainty has been accounted for. The results from the study show that a simple OLS model can be used to detect a cartel and to compute a counterfactual price when data is limited, at least as long as the price overcharge inflicted by the cartel members is relatively large. Yet, the level of accuracy may vary, and at the point where the data used for estimating the model become relatively limited, the model predictions tend to be inaccurate.
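
The mechanics of the comparison are easy to sketch. A hypothetical illustration (synthetic prices and an assumed 8-unit overcharge, not data from the study): fit OLS on the clean pre-cartel period, extrapolate it as the counterfactual, and apply the standard two-sided t-test, which ignores the estimation uncertainty the thesis warns about:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# hypothetical monthly prices: 36 clean months, then 12 suspected-cartel months
t_pre = np.arange(36)
price_pre = 100 + 0.2 * t_pre + rng.normal(0, 2, 36)
t_cartel = np.arange(36, 48)
price_cartel = 100 + 0.2 * t_cartel + 8 + rng.normal(0, 2, 12)  # 8-unit overcharge

# OLS trend fitted on the clean period only, extrapolated as the counterfactual
fit = stats.linregress(t_pre, price_pre)
counterfactual = fit.intercept + fit.slope * t_cartel

# standard two-sided t-test; the counterfactual is itself an estimate, so this
# understates the true uncertainty (the point of the corrected test discussed above)
print(stats.ttest_ind(price_cartel, counterfactual))
```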
9

Cook, James Allen. "A decompositional investigation of 3D face recognition." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16653/1/James_Allen_Cook_Thesis.pdf.

Abstract:
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover also the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, and extensive testing against established methods is reported. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
10

Cook, James Allen. "A decompositional investigation of 3D face recognition." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16653/.


Books on the topic "Small sample size"

1

Tang, K. Linda. The effect of small calibration sample sizes on TOEFL IRT-based equating. Educational Testing Service, 1993.

2

Moreno Luzon, M. Dolores. Training & the implementation of quality programmes by a sample of small & medium sized firms in Spain. Aston Business School Research Institute, 1992.

3

De Champlain, Andre F. Assessing the dimensionality of simulated LSAT item response matrices with small sample sizes and short test lengths. Law School Admission Council, 1999.

4

van de Schoot, Rens, and Milica Miočević, eds. Small Sample Size Solutions. Routledge, 2020. http://dx.doi.org/10.4324/9780429273872.

5

Schoot, Rens van de, and Milica Miočević. Small Sample Size Solutions. Taylor & Francis Group, 2020.

6

Wang, Jie. Appearance-based face recognition under small sample size scenario. 2007.

7

Small Sample Size Solutions: A Guide for Applied Researchers and Practitioners. Taylor & Francis Group, 2020.

8

Small Sample Size Solutions: A How to Guide for Applied Researchers and Practitioners. Taylor & Francis Group, 2020.

9

Schoot, Rens van de, and Milica Miočević. Small Sample Size Solutions: A How to Guide for Applied Researchers and Practitioners. Taylor & Francis Group, 2020.


Book chapters on the topic "Small sample size"

1

Nigam, A. K. "Small area estimation (for inadequate sample size)." In Statistical Aspects of Community Health and Nutrition. CRC Press, 2024. http://dx.doi.org/10.1201/9781003476948-13.

2

Read, Timothy R. C., and Noel A. C. Cressie. "Improving the Accuracy of Tests with Small Sample Size." In Springer Series in Statistics. Springer New York, 1988. http://dx.doi.org/10.1007/978-1-4612-4578-0_5.

3

Golland, Polina, W. Eric L. Grimson, Martha E. Shenton, and Ron Kikinis. "Small Sample Size Learning for Shape Analysis of Anatomical Structures." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2000. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-540-40899-4_8.

4

Zheng, WeiShi, JianHuang Lai, and P. C. Yuen. "Fast Calculation for Fisher Criteria in Small Sample Size Problem." In Advances in Biometric Person Authentication. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30548-4_38.

5

Wang, Zhidong, and Wuyi Yang. "Subspace Regularized Linear Discriminant Analysis for Small Sample Size Problems." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32695-0_58.

6

Yang, Wuyi. "Regularized Complete Linear Discriminant Analysis for Small Sample Size Problems." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31837-5_10.

7

Dornaika, Fadi, and Alireza Bosagzadeh. "On Solving the Small Sample Size Problem for Marginal Fisher Analysis." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39094-4_14.

8

Skubalska-Rafajłowicz, Ewa. "Small Sample Size in High Dimensional Space - Minimum Distance Based Classification." In Artificial Intelligence and Soft Computing. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07173-2_52.

9

Stejić, Zoran, Yasufumi Takama, and Kaoru Hirota. "Small Sample Size Performance of Evolutionary Algorithms for Adaptive Image Retrieval." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27814-6_10.

10

Rambo, Robert P. "Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources." In Biological Small Angle Scattering: Techniques, Strategies and Tips. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6038-0_3.


Conference papers on the topic "Small sample size"

1

Song, Guoqing, Ke Ma, Yang Yang, Yanchen Dong, Danyang Han, and Mingliang Suo. "Fault Diagnosis Model for UAV with Small Sample Size and Imbalanced Data." In 2024 Global Reliability and Prognostics and Health Management Conference (PHM-Beijing). IEEE, 2024. https://doi.org/10.1109/phm-beijing63284.2024.10874672.

2

Wu, Liyin, Bin Liu, Wengui Ma, Shun Wang, Biao Xu, and Binshuang Wen. "Study of Reliability Assessment on Nuclear Safety DCS Module with Small Sample Size." In 2024 Global Reliability and Prognostics and Health Management Conference (PHM-Beijing). IEEE, 2024. https://doi.org/10.1109/phm-beijing63284.2024.10874739.

3

Li, Shanshan, and Lujing Chao. "Tackling the Challenge of Small Sample Size: Effective Training by Integrating SNN and GNN." In 2024 China Automation Congress (CAC). IEEE, 2024. https://doi.org/10.1109/cac63892.2024.10864983.

4

Chen, Zeyang, Shuzhe Cao, Pan Liu, Tianhao Huang, Jianming Wu, and Guodong Zhou. "Overcoming Small Sample Size Challenges In Lithography Process Window Analysis Through DCGAN-Based Augmentation." In 2025 Conference of Science and Technology of Integrated Circuits (CSTIC). IEEE, 2025. https://doi.org/10.1109/cstic64481.2025.11017839.

5

Zhang, Haotian, Yuan Jiang, and Lei Zhao. "Chaotic Weighting Random Integration-Based Data Augmentation for Radio Frequency Fingerprint Identification With Extremely Small Sample Size." In 2024 IEEE 7th International Conference on Electronic Information and Communication Technology (ICEICT). IEEE, 2024. http://dx.doi.org/10.1109/iceict61637.2024.10671144.

6

Yang, Yuqiao, Ze Jin, Fumihiko Nakatani, Mototaka Miyake, and Kenji Suzuki. "“Small-data” Patch-wise Multi-dimensional Output Deep-learning for Rare Cancer Diagnosis in MRI under Limited Sample-size Situation." In 2024 IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, 2024. http://dx.doi.org/10.1109/isbi56570.2024.10635254.

7

Gao, Feng, A. H. Strahler, W. Lucht, Zong-Guo Xia, and Xiaowen Li. "Retrieving albedo in small sample size." In IGARSS '98. Sensing and Managing the Environment. 1998 IEEE International Geoscience and Remote Sensing Symposium Proceedings (Cat. No.98CH36174). IEEE, 1998. http://dx.doi.org/10.1109/igarss.1998.702230.

8

Rosario, Dalton, and Henry Kling. "Hyperspectral object tracking using small sample size." In SPIE Defense, Security, and Sensing, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2010. http://dx.doi.org/10.1117/12.850359.

9

Chen, Wenan, and Hongbin Zhang. "Weighted Projection Approach for Small Sample Size Problem." In 11th Joint International Computer Conference - JICC 2005. World Scientific, 2005. http://dx.doi.org/10.1142/9789812701534_0197.

10

Hou, Yanhua, and Bo Yang. "Probability-possibility transformation for small sample size data." In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2010. http://dx.doi.org/10.1109/fskd.2010.5569396.


Reports on the topic "Small sample size"

1

Kamath, C., J. Franzman, and B. Daub. Spatio-Temporal Surrogates for Interaction of a Jet with High Explosives: Part I - Analysis with a Small Sample Size. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/1984763.

2

Cuevas, Conrado, José Miguel Benavente, Rolando Campusano, and Roberto Alvarez. Employment Generation, Firm Size, and Innovation in Chile. Inter-American Development Bank, 2011. http://dx.doi.org/10.18235/0008964.

Abstract:
This paper compiles and analyzes several sources of information to shed light on the relationship between innovation and employment growth in the manufacturing industry in Chile in the last 15 years. Our overall conclusions are that process innovation is generally not found to be a relevant determinant of employment growth, and that product innovation is usually positively associated with an expansion in employment. These results seem to be similar regardless of firm size and hold for both low- and high-tech industries. Our findings reveal that in-house R&D (a make-only strategy) is positively related to employment growth. However, this is not the case for the sample of small firms. With respect to employment growth by types of workers, skilled vs. unskilled, the evidence is less robust. However, in-house R&D appears to favor employment growth for both types of workers in low-tech industries.
3

Aboal, Diego, Paula Garda, Bibiana Lanzilotta, and Marcelo Perera. Firm Size, Knowledge Intensity and Employment Generation: The Microeconometric Evidence for the Service Sector in Uruguay. Inter-American Development Bank, 2011. http://dx.doi.org/10.18235/0008971.

Abstract:
The employment impact of innovation in the heterogeneous universe of services was studied using data from the 2004-2009 Uruguayan service innovation surveys. The empirical evidence shows that the impact of product innovation on employment is positive, while process innovation appears to have no effect. The effect varies according to the skill level of the labor force, across sectors, and with the type of innovation strategy pursued by firms. Process innovation activities tend to substitute low-skilled jobs with higher-skilled jobs, while product innovation allows for more gains in efficiency in the production of new products with unskilled labor and no gains with the skilled labor force. Producing technology in-house has in most cases no impact on employment, while the combined strategy of acquiring technology outside the firm and producing it in-house has strong positive effects. The results found for knowledge-intensive business services and small firms are, with some exceptions, similar to the ones found for the whole sample.
4

Tennant, David. Business Surveys on the Impact of COVID-19 on Jamaican Firms. Inter-American Development Bank, 2021. http://dx.doi.org/10.18235/0003251.

Abstract:
The datasets come from two surveys of Jamaican businesses conducted between May and June 2020. Two sets of self-administered surveys were conducted using Survey Monkey. A very small sample of financial institutions was surveyed to gain perspective on the challenges facing financiers as a result of the pandemic, and their efforts to respond to such challenges. Nine financial institutions completed this survey, and the results were used to complement the information derived from the second and major survey. The second survey targeted non-financial businesses operating in Jamaica. The sample of firms was selected from a list of all registered Jamaican firms, obtained from the Companies Office of Jamaica. A stratified random sample was used based on firm type, region, and sector. Some firms may have also participated in the study through contact made by their respective affiliations, which were approached to endorse the study and encourage their members to engage. A total of 390 firms completed the second survey. A significant degree of representation was achieved across size, type and age of business, sector and location of operation. Good gender representation was also achieved.
5

Gu, Wan-Jie, Hao-Tian Wang, Jiao Huang, Zhe-Ming Zhao, and Chun-Dong Zhang. High Flow Nasal Oxygen versus Conventional Oxygen Therapy in Gastrointestinal Endoscopy with Conscious Sedation: A Systematic Review and Meta-Analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2022. http://dx.doi.org/10.37766/inplasy2022.1.0022.

Abstract:
Review question / Objective: To compare the efficacy of high flow nasal oxygen with conventional oxygen therapy to prevent hypoxemia in gastrointestinal endoscopy with conscious sedation. Condition being studied: High flow nasal oxygen, a novel technique, may be an alternative to conventional oxygen therapy. High flow nasal oxygen can deliver heated and humidified oxygen via a special nasal cannula at high flow (up to 70 L/min). It has been applied to improve oxygenation in clinical settings, favored by increasing evidence supporting its efficacy. Recently, the use of high flow nasal oxygen has spread to gastrointestinal endoscopy. However, the efficacy of high flow nasal oxygen in gastrointestinal endoscopy has not yet been well evaluated due to the small sample sizes of individual studies and conflicting results.
6

Morón, Eduardo, Edgar Salgado, and Cristhian Seminario. Financial Dependence, Formal Credit and Firm Informality: Evidence from Peruvian Household Data. Inter-American Development Bank, 2012. http://dx.doi.org/10.18235/0011384.

Abstract:
This paper examines the link between financial deepening and formalization in Peru. Using data from the National Household Survey, Bloomberg, and the Central Bank of Peru, the Catão, Pagés, and Rosales (2009) model is implemented at the activity level (2-digit ISIC), and the Rajan and Zingales (1998) approach of sectors' dependence on external funds is followed. The sample is divided into three firm size categories, and two formality measures are assessed. Using the accounting books specification, robust results are obtained, supporting a significant and positive effect of credit growth on formalization only for the self-employment firms category. Alternatively, using the pension enrollment specification, the channel is found positively significant only for firms with more than 10 workers; there is a smaller effect for firms with 2-10 workers. There is also a significant between effect, explaining the transition from small firms to larger firms due to greater credit availability.
7

Saint-Hubert, Cathrina, Maylinn Yon, Jenny Guo, and Jennifer Frederick. Screen Time Activities of Adolescents with Depression Symptoms. Florida International University, 2025. https://doi.org/10.25148/fiuurj.3.1.7.

Abstract:
Over 90% of US adolescents own a smartphone. Adolescents commonly use smartphones to communicate through apps such as Snapchat and Instagram. Considering the rise in technology use and depressive symptoms amongst adolescents, communication patterns can provide insight into the communication habits of depressed adolescents. Prior research suggests that the effects of digital technology use on adolescents’ well-being differ based on the type of use and frequency of communication. We examined how adolescents with elevated depressive symptoms use smartphones to communicate, including apps, modalities, frequency of communication, and with whom they communicated. The sample (N=46) comprised largely female (70%) and Hispanic (80.4%) adolescents ages 12-18 years who met a cutoff of ≥16 on the Center for Epidemiological Studies-Depression Scale (CES-D). Data were collected from a larger study using smartphone-based ecological momentary assessment (EMA) and mobile sensing over one month to model idiographic networks of depressive symptoms and processes. At the follow-up visit, participants reported using native phone apps most frequently (100%), followed by Instagram (86.96%) and Snapchat (43.49%). Most participants reported texting as the most frequent method to contact friends (76.09%) and parents (56.52%). Findings demonstrate that adolescents use different communication methods to communicate with different people. Our sample used text most frequently, which may be due to convenience and autonomy. Findings are limited by self-report assessment of smartphone communication use and small sample size. Assessing content rather than the frequency of specific apps can provide insight into the nature and function of smartphone-based communication among depressed adolescents.
8

Peng, Ciyan, Jing Chen, Sini Li, and Jianhe Li. Comparative Efficacy of Chinese Herbal Injections Combined Western medicine for Non-small cell lung cancer: A Bayesian Network Meta-Analysis of randomized controlled trials. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2021. http://dx.doi.org/10.37766/inplasy2021.11.0068.

Abstract:
Review question / Objective: Advanced lung cancer has become the top malignant tumor in terms of morbidity and mortality, and Chinese herbal injections combined with western drugs have been widely used to treat advanced non-small cell lung cancer (NSCLC). We conducted a Bayesian network meta-analysis to systematically evaluate the efficacy of different herbal injections combined with western drugs in the treatment of NSCLC. Subjects: Patients diagnosed with NSCLC by pathological or cytological examination, locally advanced or refusing surgical treatment, were included, regardless of gender, age, stage, race, nationality, and sample size. Interventions: Chinese herbal injections combined with three types of commonly used western drugs (platinum, targeted, and immune agents) were used in the experimental group, while the control group was treated with western drugs alone. Study type: randomized controlled trials (RCTs) reporting the efficacy of Chinese herbal injections combined with western drugs in the treatment of NSCLC were eligible. No restrictions were imposed on language, year of publication, or publication status. Outcome indicators: (1) Primary outcome indicators: disease control rate (DCR), where DCR = (complete remission + partial remission + stable) / total number of cases, and efficacy rate = (number of improved cases + number of stable cases) / total number of cases. (2) Secondary outcome indicators: quality of life, determined according to the KPS behavioral status scale; improvement was defined as an increase of ≥10 points in KPS score after treatment, stability as a change of <10 points, and decline as a decrease of ≥10 points. (3) The incidence of adverse reactions, including gastrointestinal reactions and reduction of white blood cells (WBC), hemoglobin (HGB), and platelets (PLT).
9

Raikow, David, Mark Wasser, Amanda McCutcheon, and Anne Farahi. Trends in water quality of Waikolu Stream, Kalaupapa National Historical Park, Molokaʻi, Hawaii, 2007–2017. National Park Service, 2024. http://dx.doi.org/10.36967/2302153.

Abstract:
Water quality was monitored in Waikolu Stream, Kalaupapa National Historical Park, Molokaʻi, Hawaiʻi from 2007 to 2017. Water was collected for nutrient analyses, and a sonde equipped with multiple probes was used to measure water quality parameters. The sampling regime was based on a split-panel design of permanently established or 'fixed' sites sampled during each sampling effort, and 'temporary' sites at randomly assigned locations for each sampling effort. Results included:
- NO3- + NO2- regularly exceeded state water quality criteria.
- TP exceeded state water quality criteria once out of three quarters evaluated.
- Other nutrient concentrations were consistent with Hawaiian streams.
- Waters were clear and well oxygenated.
- Conditions were consistent with similar Pacific island streams.
- Dissolved nutrient fractions dominated base flow conditions.
- Seasonality was detected for TDN, NO3- + NO2-, chlorophyll, pH, oxygen, and temperature.
- Several trends were detected; however, the magnitude of change was small and did not warrant concern.
- Temporal patterns did not obviously correspond to climate conditions.
- Parameters varied spatially to a limited extent, with nitrogen decreasing and temperature, specific conductance, and oxygen increasing from headwaters to the mouth.
- Temporary site conditions were highly correlated with fixed sites and did not provide additional insights.
Water quality in Waikolu Stream was generally good, with the exception of NO3- + NO2-, which exceeded state standards. However, concentrations of NO3- + NO2- were generally consistent with many streams in Hawaiʻi, and natural causes of observed concentrations may have included older soils, steep slopes, and high precipitation. Exotic species, including ungulates known to occur in Waikolu Valley, could have contributed to elevated N through droppings and soil erosion in the riparian zone. Waikolu Stream TDN, TDP, chlorophyll, pH, turbidity, SpC, and oxygen were consistent with streams in other well-protected NPS watersheds on Pacific islands on Maui, Guam, and American Samoa. Due to the watershed's small size and the stream's short length, and because variation or unique conditions were not captured by using the split panel, we recommend reducing the sampling effort. For example, our observations indicate that one easily accessible site in the lower reaches of the stream (FKALA03) displays water quality that reflects the entire stream. We also recommend adding nutrient analytes to better characterize biogeochemical cycling in this stream, including TP, TN, NH4+, and PO4. A power analysis is underway to evaluate reduction of sample sites. Such a reduction will result in large cost savings for the PACN water quality monitoring program.
10

Coles, Henry, Yong Qin, and Phillip Price. Comparing Server Energy Use and Efficiency Using Small Sample Sizes. Office of Scientific and Technical Information (OSTI), 2014. http://dx.doi.org/10.2172/1163229.
