Academic literature on the topic 'Training Sample Size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Training Sample Size.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Training Sample Size"

1. Zheng, Chengyong, Ningning Wang, and Jing Cui. "Hyperspectral Image Classification With Small Training Sample Size Using Superpixel-Guided Training Sample Enlargement." IEEE Transactions on Geoscience and Remote Sensing 57, no. 10 (2019): 7307–16. http://dx.doi.org/10.1109/tgrs.2019.2912330.
2. Wahba, Yasmen, Ehab ElSalamouny, and Ghada ElTaweel. "Estimating the Sample Size for Training Intrusion Detection Systems." International Journal of Computer Network and Information Security 9, no. 12 (2017): 1–10. http://dx.doi.org/10.5815/ijcnis.2017.12.01.
3. Dobbin, K. K., and X. Song. "Sample size requirements for training high-dimensional risk predictors." Biostatistics 14, no. 4 (2013): 639–52. http://dx.doi.org/10.1093/biostatistics/kxt022.
4. Muto, Yoshihiko, and Yoshihiko Hamamoto. "Improvement of the Parzen classifier in small training sample size situations." Intelligent Data Analysis 5, no. 6 (2001): 477–90. http://dx.doi.org/10.3233/ida-2001-5604.
5. Blamire, P. A. "The influence of relative sample size in training artificial neural networks." International Journal of Remote Sensing 17, no. 1 (1996): 223–30. http://dx.doi.org/10.1080/01431169608949000.
6. Gao, Dongrui, Rui Zhang, Tiejun Liu, et al. "Enhanced Z-LDA for Small Sample Size Training in Brain-Computer Interface Systems." Computational and Mathematical Methods in Medicine 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/680769.
Abstract:
Background. The training set of an online brain-computer interface (BCI) experiment is usually small. A small training set lacks enough information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, building on Z-LDA, we further calculate the classification probability of Z-LDA and then use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulation and real BCI datasets with different training sample sizes, and the classification results showed that EZ-LDA achieved the best performance. Conclusions. EZ-LDA is promising for dealing with the small-sample-size training problem that usually exists in online BCI systems.
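The self-training idea in this abstract, enlarging a small training set with reliably classified test samples and then refitting, can be sketched as follows. This is a minimal illustration on synthetic data: a toy nearest-centroid classifier stands in for Z-LDA, and the distance margin between the two nearest centroids stands in for the classification probability. None of the names or numbers come from the paper.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_with_confidence(X, classes, centroids):
    """Predict labels; confidence = margin between the two closest centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    labels = classes[order[:, 0]]
    margin = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
    return labels, margin

rng = np.random.default_rng(0)
# Small labelled training set, larger unlabelled "testing" pool.
X_train = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
y_train = np.array([0] * 5 + [1] * 5)
X_pool = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

classes, centroids = nearest_centroid_fit(X_train, y_train)
labels, margin = predict_with_confidence(X_pool, classes, centroids)

# Keep only confidently classified pool samples and enlarge the training set.
reliable = margin > np.median(margin)
X_big = np.vstack([X_train, X_pool[reliable]])
y_big = np.concatenate([y_train, labels[reliable]])
classes, centroids = nearest_centroid_fit(X_big, y_big)  # refit on enlarged set
print(len(X_big))  # 60 = 10 original + 50 reliable pool samples
```

The refit step adjusts the decision boundary with information mined from the pool, which is the effect the abstract attributes to EZ-LDA.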
7. Qiu, Minna, Jian Zhang, Jiayan Yang, and Liying Ye. "Fusing Two Kinds of Virtual Samples for Small Sample Face Recognition." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/280318.
Abstract:
Face recognition has become a very active field of biometrics. Different pictures of the same face might include various changes in expression, pose, and illumination. However, a face recognition system usually suffers from the problem that insufficient training samples cannot convey these possible changes effectively. The main reason is that a system has only limited storage space and limited time to capture training samples. Much of the previous literature ignored the problem of insufficient training samples. In this paper, we overcome the insufficient-training-sample-size problem by fusing two kinds of virtual samples with the original samples to perform small-sample face recognition. The two kinds of virtual samples used are mirror faces and symmetrical faces. Firstly, we transform the original face image to obtain mirror faces and symmetrical faces. Secondly, we fuse these two kinds of virtual samples to obtain the matching scores between the test sample and each class. Finally, we integrate the matching scores to get the final classification results. We compare the proposed method with single-virtual-sample augmentation methods and the original representation-based classification. Experiments on various face databases show that the proposed scheme achieves the best accuracy among the representation-based classification methods.
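The two kinds of virtual samples the abstract names, mirror faces and symmetrical faces, are simple image transforms. A minimal sketch with a toy array standing in for a real face image; the half-plus-reflection construction of the symmetrical face is one common recipe and may differ from the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(4, 4))  # stand-in for a grayscale face image

# Virtual sample 1: the mirror face (horizontal flip).
mirror = face[:, ::-1]

# Virtual sample 2: a symmetrical face built from one half and its reflection
# (an illustrative construction; the paper's exact recipe may differ).
half = face[:, : face.shape[1] // 2]
symmetrical = np.hstack([half, half[:, ::-1]])

# Fuse originals and virtual samples into one enlarged training set.
training_set = np.stack([face, mirror, symmetrical])
print(training_set.shape)  # (3, 4, 4)
```

Each original image thus contributes three training samples, tripling the effective training set size at no capture cost.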
8. Zhi, Wei Mei, Hua Ping Guo, and Ming Fan. "Sample Size on the Impact of Imbalance Learning." Advanced Materials Research 756-759 (September 2013): 2547–51. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2547.
Abstract:
Classification of imbalanced data sets is widely used in many real-life applications. Most state-of-the-art classification methods, which assume that data sets are relatively balanced, lose their effectiveness. The paper discusses the factors that influence building a classifier capable of identifying rare events, especially the factor of sample size. Carefully designed experiments using Rotation Forest as the base classifier, carried out on 3 datasets from the UCI Machine Learning Repository, show that at a given imbalance ratio, increasing the size of the training set by unsupervised resampling decreases the large error rate caused by the imbalanced class distribution, so that a common classification algorithm can achieve good results.
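The unsupervised-resampling step the abstract describes, enlarging the training set so the rare class is no longer drowned out, might look like this on made-up data. The paper's experiments use Rotation Forest and UCI datasets; this sketch shows only the resampling.

```python
import numpy as np

rng = np.random.default_rng(2)
# Imbalanced training set: 90 majority samples, 10 rare-event samples.
X = np.vstack([rng.normal(0, 1, (90, 3)), rng.normal(3, 1, (10, 3))])
y = np.array([0] * 90 + [1] * 10)

def resample_upsize(X, y, rng):
    """Enlarge the training set by resampling each class with replacement
    until every class reaches the majority-class count."""
    target = np.bincount(y).max()
    parts_X, parts_y = [], []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        take = rng.choice(idx, size=target, replace=True)
        parts_X.append(X[take])
        parts_y.append(np.full(target, c))
    return np.vstack(parts_X), np.concatenate(parts_y)

X_big, y_big = resample_upsize(X, y, rng)
print(np.bincount(y_big))  # [90 90] -- balanced after resampling
```

Any common classifier can then be trained on `X_big`, `y_big` without the error-rate penalty that the skewed class distribution would otherwise cause.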
9. Hwa, Rebecca. "Sample Selection for Statistical Parsing." Computational Linguistics 30, no. 3 (2004): 253–76. http://dx.doi.org/10.1162/0891201041850894.
Abstract:
Corpus-based statistical parsing relies on using large quantities of annotated text as training examples. Building this kind of resource is expensive and labor-intensive. This work proposes to use sample selection to find helpful training examples and reduce human effort spent on annotating less informative ones. We consider several criteria for predicting whether unlabeled data might be a helpful training example. Experiments are performed across two syntactic learning tasks and within the single task of parsing across two learning models to compare the effect of different predictive criteria. We find that sample selection can significantly reduce the size of annotated training corpora and that uncertainty is a robust predictive criterion that can be easily applied to different learning models.
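Uncertainty-based sample selection of the kind this abstract describes is commonly sketched as: train a model on the labelled seed data, score the unlabelled pool by predictive uncertainty, and send only the most uncertain examples for annotation. A toy version with synthetic points and a softmax-over-distances model standing in for a statistical parser; every name and number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Tiny labelled seed set and a large unlabelled pool (stand-ins for annotated
# and unannotated sentences).
X_seed = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
y_seed = np.array([0] * 5 + [1] * 5)
X_pool = rng.normal(1.5, 2.0, (200, 2))

# Simple probabilistic model: softmax over negative distances to class centroids.
centroids = np.array([X_seed[y_seed == c].mean(axis=0) for c in (0, 1)])
d = np.linalg.norm(X_pool[:, None, :] - centroids[None, :, :], axis=2)
p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)

# Uncertainty = entropy of the predictive distribution; annotate only the top k.
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
k = 10
to_annotate = np.argsort(entropy)[-k:]
print(len(to_annotate))  # 10
```

Annotating only these k informative examples, rather than the whole pool, is how sample selection reduces the size of the annotated training corpus.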
10. Ramezan, Christopher A., Timothy A. Warner, Aaron E. Maxwell, and Bradley S. Price. "Effects of Training Set Size on Supervised Machine-Learning Land-Cover Classification of Large-Area High-Resolution Remotely Sensed Data." Remote Sensing 13, no. 3 (2021): 368. http://dx.doi.org/10.3390/rs13030368.
Abstract:
The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when training sample size decreased from 10,000 to 315 samples. GBM provided similar overall accuracy to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes. NEU, however, required a longer processing time.
The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, the overall accuracies of k-NN were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods, but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sample sets, minimal variation in overall accuracy between very large and small sample sets, and relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as the performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
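The overall shape of this experiment, measuring classification accuracy as the training sample size shrinks, can be sketched with a toy two-class problem and a nearest-centroid classifier standing in for the paper's six algorithms. Sizes, data, and the classifier are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_data(n_per_class, rng):
    """Two well-separated Gaussian classes in 2-D."""
    X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, 2)),
                   rng.normal(2.5, 1.0, (n_per_class, 2))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Train a nearest-centroid classifier and score it on a held-out test set."""
    cents = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_te[:, None, :] - cents[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y_te).mean())

X_te, y_te = make_data(500, rng)
results = {}
for n in (5, 20, 80, 320):  # training samples per class
    accs = [centroid_accuracy(*make_data(n, rng), X_te, y_te) for _ in range(20)]
    results[n] = float(np.mean(accs))
print(results)  # accuracy typically rises then plateaus as the training set grows
```

Repeating each size 20 times and averaging, as here, is what lets a study separate a classifier's sensitivity to sample size from run-to-run noise.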

Dissertations / Theses on the topic "Training Sample Size"

1. Lin, Ying-pu (林應璞). "Investigation of the Effect of Training Sample Size on Performance of 2D and 2.5D Face Recognition." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/15741076427990963964.
Abstract:
Master's thesis, National Cheng Kung University, Department of Systems and Naval Mechatronic Engineering. The purpose of this thesis is to investigate the effect of training sample size on the performance of 2D and 2.5D face recognition. The face recognition methods are formed by combining feature extraction (Haar wavelet transform, principal component analysis, and improved principal component analysis) and classification (Euclidean distance, the nearest feature line method, and linear discriminant analysis) techniques. This thesis aims to find a suitable recognition method and establishes the relationship between training sample size and face recognition rate. A facial image in 2D face recognition is first captured by a CCD camera, and an image pre-processing technique is applied to obtain the facial region. A facial image in 2.5D face recognition, however, is established by using the Photometric Stereo Method (PSM) to obtain the depth and pixel values in the 2.5D face model. Since the construction of the 2.5D face model is performed in a dark room, the pixel values are not affected by the intensity of light. Thus, the combination of depth and pixel values is used as the feature vector for 2.5D face recognition. Simulations of 2D face recognition based on the ORL (Olivetti Research Lab), GIT (Georgia Institute of Technology), CIT (California Institute of Technology), ESSEX (University of Essex), and UMIST (University of Manchester Institute of Science and Technology) databases, as well as the author's own database, are performed to derive the relationship between training sample size and recognition rate. As a result, the combination of improved principal component analysis and Euclidean distance has the best recognition rate. When the training sample size is between 13 and 17, the recognition rate is over 85%. Increasing the training sample size above 18 slightly improves the recognition rate but increases the recognition time.
For a large-scale database (i.e., ESSEX), the recognition rate is over 92% when the training sample size is between 13 and 19; the recognition rate is thus stable for a large-scale database. As the training sample size increases to 25, the recognition rate does not significantly increase, so enlarging the training sample further does not provide better recognition rates. On the other hand, the simulation of 2.5D face recognition is based on the author's own database, and the recognition method is the same as for 2D face recognition (i.e., the combination of improved principal component analysis and Euclidean distance). When the training sample size is between 14 and 17, the recognition rate is above 84% and stable. When the training sample size increases above 17, the recognition rate reaches 93.93%. However, continuing to increase the training sample size does not improve the recognition rate significantly but does increase the recognition time. In conclusion, the 2D and 2.5D face recognition algorithms are integrated into a real-time face recognition system using the Instrument Control Toolbox and the Graphical User Interface in the Matlab environment. Index Terms: Training Sample Size, Photometric Stereo Method, Haar Wavelet Transform, Principal Component Analysis, Improved Principal Component Analysis, Euclidean Distance, Linear Discriminant Analysis, Nearest Feature Line.
2. Daniyal (單尼爾). "A guideline to determine the training sample size when applying data mining methods in clinical decision making." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4g499k.
Abstract:
Master's thesis, National Central University, Department of Biomedical Sciences and Engineering. Background: Biomedicine is a field rich in heterogeneous, evolving, complex, and unstructured data coming from autonomous sources (i.e., the heterogeneous, autonomous, complex, and evolving (HACE) theorem). Acquisition of biomedical data takes time and human effort and is usually very expensive, so it is difficult to work with populations, and researchers therefore work with samples. In recent years, two growing concerns have dominated the healthcare area: the use of small sample sizes for experiments, and the extraction of useful information from massive medical data (big data). Researchers have claimed that, in small-sample-size studies in biomedicine, overfitting causes false positives (type I errors) or false negatives (type II errors), producing exaggerated results that do not represent a true effect. On the other hand, in the last few years the volume of data has become bigger and more complicated due to the continuous generation of data from many sources such as functional magnetic resonance imaging (fMRI), computed tomography (CT) scans, positron-emission tomography (PET)/single-photon emission computed tomography (SPECT), and electroencephalography (EEG). Big data mining has become a fascinating and fast-growing area that enables selecting, exploring, and modelling vast amounts of medical data to help clinical decision making, prevent medication errors, and enhance patient outcomes. However, there are challenges in big data, such as missing values, the heterogeneous nature of the data, and the complexity of managing it, that may affect the outcome. It is therefore essential to find an appropriate process and algorithm for big data mining to extract useful information from massive data. To date, however, there is no guideline for this, especially regarding a fair sample size that contains enough information for reliable results.
Purpose: The goal of this study is to explore the relationship among sample size, statistical parameters, and the performance of machine learning (ML) methods in order to ascertain an optimal sample size. The study also examines the impact of standard deviations on sample sizes by analyzing the performance of machine learning methods. Method: In this study, I used two kinds of data: experimental data and simulated data. The experimental data comprise two datasets: the first contains brain signals from 63 stroke patients (continuous data), and the other consists of 120 sleep diaries (discrete categorical data), each diary recording one person's data. To find an optimal sample size, I first divided each experimental dataset into multiple sample sizes by taking 10% proportions of each dataset. I then used these sample sizes in four widely used machine learning methods: support vector machine (SVM), decision tree, naive Bayes, and logistic regression. Ten-fold cross-validation was used to evaluate classification accuracy. I also measured the grand variance, eigenvalues, and proportion of variance among the samples of each sample size. In addition, I generated an artificial dataset by taking averages of the real data; the generated data mimicked the real data. I used this dataset to examine the effect of standard deviation on classifier accuracy as sample sizes were systematically increased from small to large. Lastly, I applied the classifiers' results for both experimental datasets to receiver operating characteristic (ROC) graphs to find an appropriate sample size and to assess the influence of sample size, from small to large, on classifier performance. Results: The results showed a significant effect of sample size on classifier accuracy, data variance, eigenvalues, and proportion of variance in all datasets.
The stroke and sleep datasets showed an intrinsic property in the performance of the ML classifiers, the data variances (parameter-wise and subject-wise), the eigenvalues, and the proportion of variance. I used this intrinsic property to design two criteria for deciding an appropriate sample size. According to criterion I, a sample size is considered optimal when the performance of the classifiers reaches intrinsic behaviour simultaneously with the data variation. In criterion II, I used performance, eigenvalue, and proportion of variance to decide a suitable sample size: when these factors show a simultaneous intrinsic property at a specific sample size, that sample size is considered effective. In this study, both criteria suggested a similar optimal sample size of 250 for the sleep dataset, although the eigenvalues showed a little variation compared with the variance between sample sizes of 250 and 500. The variation in eigenvalues decreased after 500 samples; because of this small variation, criterion II suggested 500 as an effective sample size. If criteria I and II recommend two different sample sizes, one should choose the sample size at which the simultaneous intrinsic property, between performance and variance or among performance, eigenvalue, and proportion of variance, appears first. Lastly, I also designed a third criterion based on the receiver operating characteristic curve. The ROC graph illustrates that classifiers perform well when the sample sizes are large: large sample sizes are positioned above the diagonal line, whereas small sample sizes show worse performance and fall below it, and classifier performance improves as sample size increases. Conclusion: All the results show that sample size has a dramatic impact on the performance of ML methods and on data variance.
Increasing the sample size gives a steady outcome for machine learning methods once the data variation shows negligible fluctuation. In addition, the intrinsic property of sample size helps to find an optimal sample size at which accuracy, eigenvalue, proportion, and variance become independent of further increases in the number of samples.
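The thesis's core procedure, splitting a dataset into growing proportions and scoring each with ten-fold cross-validation, might be sketched like this. Synthetic data and a nearest-centroid classifier stand in for the real stroke/sleep datasets and the SVM, decision tree, naive Bayes, and logistic regression models; all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def ten_fold_accuracy(X, y, rng):
    """10-fold cross-validated accuracy of a toy nearest-centroid classifier."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 10)
    accs = []
    for f in folds:
        mask = np.ones(len(X), bool)
        mask[f] = False  # hold out this fold, train on the rest
        cents = np.array([X[mask][y[mask] == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X[f][:, None, :] - cents[None, :, :], axis=2)
        accs.append((d.argmin(axis=1) == y[f]).mean())
    return float(np.mean(accs))

# Full synthetic "dataset"; evaluate at growing proportions of it.
N = 400
X_all = np.vstack([rng.normal(0, 1, (N // 2, 4)), rng.normal(2, 1, (N // 2, 4))])
y_all = np.array([0] * (N // 2) + [1] * (N // 2))
order = rng.permutation(N)
curve = {}
for frac in (0.1, 0.3, 0.5, 1.0):
    take = order[: int(N * frac)]
    curve[frac] = ten_fold_accuracy(X_all[take], y_all[take], rng)
print(curve)
```

Plotting `curve` (accuracy against proportion used) is the kind of evidence from which a "sample size beyond which accuracy stabilizes" criterion can be read off.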
3. "Robust Experimental Design for Speech Analysis Applications." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57412.
Abstract:
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how effectively the training data set represents the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier depends mainly on the number and quality of the training samples. For small sample sizes and unbalanced data, classifiers developed in this context may focus on differences in the training data set rather than on emotion (e.g., focusing on gender, age, and dialect). This thesis evaluates several sampling methods and a non-parametric approach to the sample sizes required to minimize the effect of these nuisance variables on classification performance. The work focused specifically on speech analysis applications and hence used speech features such as Mel-frequency cepstral coefficients (MFCC) and filter bank cepstral coefficients (FBCC). The non-parametric divergence measure (D_p divergence) was used to study the differences between sampling schemes (stratified and multistage sampling) and the changes due to sentence types in the sampling set. (Master's thesis, Electrical Engineering, 2020.)
4. Moraes, Daniel. "A Contribution to land cover and land use mapping in Portugal with multi-temporal Sentinel-2 data and supervised classification." Master's thesis, 2021. http://hdl.handle.net/10362/114043.
Abstract:
Dissertation presented as a partial requirement for obtaining a Master's degree in Geographic Information Systems and Science. Remote sensing techniques have been widely employed to map and monitor land cover and land use, important elements for the description of the environment. The current land cover and land use mapping paradigm takes advantage of a variety of data options with proper spatial, spectral and temporal resolutions, along with advances in technology. This has enabled the creation of automated data processing workflows integrated with classification algorithms to accurately map large areas with multi-temporal data. In Portugal, the General Directorate for Territory (DGT) is developing an operational Land Cover Monitoring System (SMOS), which includes an annual land cover cartography product (COSsim) based on an automatic process using supervised classification of multi-temporal Sentinel-2 data. In this context, a range of experiments is being conducted to improve map accuracy and classification efficiency. This study provides a contribution to DGT's work. A classification of the biogeographic region of Trás-os-Montes in the north of Portugal was performed for the agricultural year of 2018 using Random Forest and an intra-annual multi-temporal Sentinel-2 dataset, with stratification of the study area and a combination of manually and automatically extracted training samples, the latter based on existing reference datasets. This classification was compared to a benchmark classification conducted without stratification and with training data collected automatically only. In addition, an assessment of the influence of training sample size on classification accuracy was conducted. The main focus of this study was to investigate whether the use of classification uncertainty to create an improved training dataset could increase classification accuracy.
A process of extracting additional training samples from areas of high classification uncertainty was conducted; a new classification was then performed and the results were compared. Classification accuracy assessment for all proposed experiments was conducted using overall accuracy, precision, recall and F1-score. The use of stratification and the combination of training strategies resulted in a classification accuracy of 66.7%, in contrast to 60.2% for the benchmark classification. Despite the difference being considered not statistically significant, visual inspection of both maps indicated that stratification and the introduction of manual training contributed to mapping land cover more accurately in some areas. Regarding the influence of sample size on classification accuracy, the results indicated a small difference in accuracy, considered not statistically significant, even after a reduction of over 90% in the sample size. This supports the findings of other studies which suggested that Random Forest has low sensitivity to variations in training sample size. However, the results might have been influenced by the training strategy employed, which uses spectral subclasses, thus creating spectral diversity in the samples independently of their size. With respect to the use of classification uncertainty to improve the training sample, a slight increase of approximately 1% was observed, which was considered not statistically significant. This result could have been affected by limitations in the process of collecting additional sampling units for some classes, which resulted in a lack of additional training for some classes (e.g., agriculture) and an overall imbalanced training dataset. Additionally, some classes had their additional training sampling units collected from a limited number of polygons, which could limit the spectral diversity of the new samples.
Nevertheless, visual inspection of the map suggested that the new training contributed to reduce confusion between some classes, improving map agreement with ground truth. Further investigation can be conducted to explore more deeply the potential of classification uncertainty, especially focusing on addressing problems related to the collection of the additional samples.

Books on the topic "Training Sample Size"

1. Luzon, M. Dolores Moreno. Training & the implementation of quality programmes by a sample of small & medium sized firms in Spain. Aston Business School Research Institute, 1992.
2. Larina, Elena A. Diagnostics of the intonation side of speech in preschool children. TOGU Publishing House, 2020. http://dx.doi.org/10.12731/larinaea.2020.120.
Abstract:
This educational and methodical manual is intended for students studying in the direction of special (defectological) education 44.03.03, training profile Speech therapy, studying the discipline "Technology of formation of the pronouncing side of speech". The manual examines the ontogenesis of intonational expressiveness of speech, the peculiarities of the prosodic organization of speech in children with disabilities. Modern methods of assessing the intonation expressiveness of speech in preschool children are proposed, and the author's screening diagnostics of the intonation side of speech in preschool children is presented. The appendix contains a sample map of the diagnostic study of the intonation side of the speech of a preschool child. The manual is addressed to students of defectological departments of universities, practicing teachers-speech therapists, specialists in the field of speech pathology.
3. Najavits, Lisa M., and Melissa L. Anderson. Psychosocial Treatments for Posttraumatic Stress Disorder. Oxford University Press, 2015. http://dx.doi.org/10.1093/med:psych/9780199342211.003.0018.
Abstract:
Treatments for posttraumatic stress disorder (PTSD) work better than treatment as usual; average effect sizes are in the moderate to high range. A variety of treatments have been established as effective, with no one treatment having superiority. Both present-focused and past-focused treatment models work (neither consistently outperforms the other). Areas of future development include training, dissemination, client access to care, optimal delivery modes, and mechanisms of action. Methodological issues include improving research reporting, broadening study samples, and greater use of active comparison conditions.
4. Tzelgov, Joseph, Dana Ganor-Stern, Arava Kallai, and Michal Pinhas. Primitives and Non-primitives of Numerical Representations. Edited by Roi Cohen Kadosh and Ann Dowker. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199642342.013.019.
Abstract:
Primitives of numerical representation are numbers holistically represented on the mental number line (MNL). Non-primitives are numbers generated from primitives in order to perform specific tasks. Primitives can be automatically retrieved from long-term memory (LTM). Using the size congruency effect in physical comparisons as a marker of automatic retrieval, and its modulation by intrapair numerical distance as an indication of alignment along the MNL, we identify single-digits, but not two-digit numbers, as primitives. By the same criteria, zero is a primitive, but negative numbers are not primitives, which makes zero the smallest numerical primitive. Due to their unique notational structure, fractions are automatically perceived as smaller than 1. While some specific, familiar unit fractions may be primitives, this can be shown only when component bias is eliminated by training participants to denote fractions by unfamiliar figures.
5. Hanson, Robin. The Age of Em. Oxford University Press, 2016. http://dx.doi.org/10.1093/oso/9780198754626.001.0001.
Abstract:
Robots may one day rule the world, but what is a robot-ruled Earth like? Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer, and you have a robot brain, but recognizably human. Train an em to do some job and copy it a million times: an army of workers is at your disposal. When they can be made cheaply, within perhaps a century, ems will displace humans in most jobs. In this new economic era, the world economy may double in size every few weeks. Some say we can't know the future, especially following such a disruptive new technology, but Professor Robin Hanson sets out to prove them wrong. Applying decades of expertise in physics, computer science, and economics, he uses standard theories to paint a detailed picture of a world dominated by ems. While human lives don't change greatly in the em era, em lives are as different from ours as our lives are from those of our farmer and forager ancestors. Ems make us question common assumptions of moral progress, because they reject many of the values we hold dear. Read about em mind speeds, body sizes, job training and career paths, energy use and cooling infrastructure, virtual reality, aging and retirement, death and immortality, security, wealth inequality, religion, teleportation, identity, cities, politics, law, war, status, friendship and love. This book shows you just how strange your descendants may be, though ems are no stranger than we would appear to our ancestors. To most ems, it seems good to be an em.

Book chapters on the topic "Training Sample Size"

1

Muto, Yoshihiko, Hirokazu Nagase, and Yoshihiko Hamamoto. "Evaluation of the Modified Parzen Classifier in Small Training Sample Size Situations." In Soft Computing in Industrial Applications. Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0509-1_54.

2

Boonyanunta, Natthaphan, and Panlop Zeephongsekul. "Predicting the Relationship Between the Size of Training Sample and the Predictive Power of Classifiers." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30134-9_71.

3

Fang, B., and Y. Y. Tang. "Reduction of Feature Statistics Estimation Error for Small Training Sample Size in Off-Line Signature Verification." In Biometric Authentication. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25948-0_72.

4

Suárez-Warden, Fernando, Yocelin Cervantes-Gloria, and Eduardo González-Mendívil. "Sample Size Estimation for Statistical Comparative Test of Training by Using Augmented Reality via Theoretical Formula and OCC Graphs: Aeronautical Case of a Component Assemblage." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22024-1_10.

5

Liu, Jing, Tingting Wang, and Yulong Qiao. "The Unified Framework of Deep Multiple Kernel Learning for Small Sample Sizes of Training Samples." In Advances in Intelligent Information Hiding and Multimedia Signal Processing. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6420-2_59.

6

Rahimi, Hamid. "Considering Factors Affecting the Prediction of Time Series by Improving Sine-Cosine Algorithm for Selecting the Best Samples in Neural Network Multiple Training Model." In Lecture Notes in Electrical Engineering. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8672-4_23.

7

Mohammed, S. G., M. Halliru, J. M. Jibrin, I. Kapran, and H. A. Ajeigbe. "Impact Assessment of Developing Sustainable and Impact-Oriented Groundnut Seed System Under the Tropical Legumes (III) Project in Northern Nigeria." In Enhancing Smallholder Farmers' Access to Seed of Improved Legume Varieties Through Multi-stakeholder Platforms. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8014-7_6.

Abstract:
The Tropical Legumes III (TL III) project was a development intervention focused on enhancing smallholder farmers’ access to seeds of improved groundnut varieties using multi-stakeholder platforms. Open Data Kit was used to collect information from the platform members using structured questionnaires and focus group discussions (FGDs). Descriptive statistics and adoption scores were used to analyze the data. Selection of appropriate project locations, reliable beneficiaries, timely supply of seeds, training on good agronomic practices (GAPs), and effective supervision of production were the major thrusts of the TL III project. The results indicated that the IP members accrued additional income ranging from $214 to $453 per hectare for the wet season; beneficiaries’ income per hectare for the dry season likewise increased, ranging from $193 to $823, all due to the TL III intervention. The results further indicated increasing access by farmers to services (e.g., improved seeds, extension, credit facilities, market) and enhanced productivity (farm size, pod and haulm yields). Findings further revealed an average market price increase of 21.5% and 18% for dry- and wet-season groundnut production, respectively. There was a high adoption score (78%) of improved seeds and other GAPs. The study recommends replicating similar interventions in other areas. Continued capacity building on GAPs and improved business-management skills for extension agents and farmer groups will sustain the successes achieved by the TL III project.
8

"Sample-Size Training I." In Improving Statistical Reasoning. Psychology Press, 1999. http://dx.doi.org/10.4324/9781410601247-13.

9

"Sample-Size Training II." In Improving Statistical Reasoning. Psychology Press, 1999. http://dx.doi.org/10.4324/9781410601247-15.

10

Saha, Sangita, Saibal Kumar Saha, Jaya Rani Pandey, and Ajeya Jha. "Employee Motivation for Training and Development." In Handbook of Research on Developing Circular, Digital, and Green Economies in Asia. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8678-5.ch018.

Abstract:
Training and development is an important function of human resource management. Employees need to undergo regular training and development programmes to keep up with the latest technologies and skills, which helps to increase the efficiency of the organization. Motivating employees to undergo a training programme, however, is often a challenge. This study aims to find the factors that motivate employees to undertake a training and development programme. With a sample size of 172 employees from a leading pharmaceutical company in Sikkim, India, responses were collected and analysed. It was found that interest in updating oneself with the latest technology, a better chance of career exploration, commitment to training from the company's end, and encouragement from subordinates play significant roles in motivating employees to undertake training and development programmes.

Conference papers on the topic "Training Sample Size"

1

Lee, M. A., S. Prasad, L. M. Bruce, et al. "Sensitivity of hyperspectral classification algorithms to training sample size." In 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2009. http://dx.doi.org/10.1109/whispers.2009.5288983.

2

Miriyala, Srinivas Soumitri, and Kishalay Mitra. "Novel sample size determination methods for parsimonious training of black box models." In 2017 Indian Control Conference (ICC). IEEE, 2017. http://dx.doi.org/10.1109/indiancc.2017.7846449.

3

Keshari, Rohit, Mayank Vatsa, Richa Singh, and Afzel Noore. "Learning Structure and Strength of CNN Filters for Small Sample Size Training." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00974.

4

Zliobaite, Indre, and Ludmila I. Kuncheva. "Determining the Training Window for Small Sample Size Classification with Concept Drift." In 2009 IEEE International Conference on Data Mining Workshops (ICDMW 2009). IEEE, 2009. http://dx.doi.org/10.1109/icdmw.2009.20.

5

Liu, Shuying, and Weihong Deng. "Very deep convolutional neural network based image classification using small training sample size." In 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2015. http://dx.doi.org/10.1109/acpr.2015.7486599.

6

Liu, Bo, Ying Wei, Yu Zhang, and Qiang Yang. "Deep Neural Networks for High Dimension, Low Sample Size Data." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/318.

Abstract:
Deep neural networks (DNN) have achieved breakthroughs in applications with large sample size. However, when facing high dimension, low sample size (HDLSS) data, such as the phenotype prediction problem using genetic data in bioinformatics, DNNs suffer from overfitting and high-variance gradients. In this paper, we propose a DNN model tailored for the HDLSS data, named Deep Neural Pursuit (DNP). DNP selects a subset of high dimensional features for the alleviation of overfitting and takes the average over multiple dropouts to calculate gradients with low variance. As the first DNN method applied to HDLSS data, DNP enjoys the advantages of high nonlinearity, robustness to high dimensionality, the capability of learning from a small number of samples, stability in feature selection, and end-to-end training. We demonstrate these advantages of DNP via empirical results on both synthetic and real-world biological datasets.
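The variance-reduction idea this abstract describes, averaging a stochastic gradient over multiple dropout masks, can be sketched numerically. The NumPy toy below uses an HDLSS-shaped linear model; it is an illustration of the averaging trick under assumed settings, not the paper's DNP implementation, and every name in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_gradient(w, X, y, keep=0.5):
    """One stochastic gradient of the squared loss with input dropout."""
    mask = (rng.random(X.shape) < keep) / keep  # inverted-dropout scaling
    Xd = X * mask
    return 2.0 * Xd.T @ (Xd @ w - y) / len(y)

def averaged_dropout_gradient(w, X, y, n_masks=32, keep=0.5):
    """Average the gradient over several dropout masks to obtain a
    lower-variance gradient estimate (the trick the abstract mentions)."""
    return np.mean([dropout_gradient(w, X, y, keep) for _ in range(n_masks)], axis=0)

# High-dimension, low-sample-size toy data (d >> n, as in HDLSS settings).
n, d = 20, 500
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0                      # only 5 informative features
y = X @ w_true
w0 = np.zeros(d)                      # current parameter estimate

# Compare per-coordinate variance of single-mask vs. averaged gradients.
single = np.stack([dropout_gradient(w0, X, y) for _ in range(200)])
averaged = np.stack([averaged_dropout_gradient(w0, X, y) for _ in range(200)])
var_single = single.var(axis=0).mean()
var_averaged = averaged.var(axis=0).mean()  # roughly var_single / n_masks
```

Averaging over 32 independent masks cuts the gradient variance by roughly a factor of 32 in expectation, which is the stabilizing effect the paper exploits when training on small samples.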
7

Ryumina, Elena, Oxana Verkholyak, and Alexey Karpov. "Annotation Confidence vs. Training Sample Size: Trade-Off Solution for Partially-Continuous Categorical Emotion Recognition." In Interspeech 2021. ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-1636.

8

Ninomiya, Hiroshi. "Dynamic sample size selection based quasi-Newton training for highly nonlinear function approximation using multilayer neural networks." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706976.

9

Zhu, Shuguang, Fangzhou Zhu, Weibing Fan, et al. "Discussion on the Relation Between SVM Training Sample Size and Correct Forecast Ratio for Simulation Experiment Results." In 2010 International Conference on Intelligent Computation Technology and Automation (ICICTA). IEEE, 2010. http://dx.doi.org/10.1109/icicta.2010.301.

10

Daniyal, Wei-Jen Wang, Mu-Chun Su, Si-Huei Lee, Ching-Sui Hung, and Chun-Chuan Chen. "A guideline to determine the training sample size when applying big data mining methods in clinical decision making." In 2018 IEEE International Conference on Applied System Innovation (ICASI). IEEE, 2018. http://dx.doi.org/10.1109/icasi.2018.8394347.


Reports on the topic "Training Sample Size"

1

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41034.

Abstract:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. PINN is a recent innovation in the application of deep learning to simulate physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model using less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition. Training data are obtained from Crank-Nicolson solutions of the parabolic equation with homogeneous ground impedance and Monin-Obukhov similarity theory for the effective sound speed in the moving atmosphere. Training data are random samples from an ensemble of solutions for combinations of parameters governing the impedance and the effective sound speed. PINN output is processed to produce realizations of transmission loss that look much like the Crank-Nicolson solutions. We describe the framework for implementing PINN for outdoor sound, and we outline practical matters related to network architecture, the size of the training set, the physics-informed loss function, and the challenge of managing the spatial complexity of the complex pressure.
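The loss structure this abstract describes, a data-misfit term plus a penalty for deviations from the governing physics, can be illustrated on a much simpler problem. The sketch below substitutes the toy ODE u'(x) = u(x) for the parabolic equation; it is a minimal illustration of the PINN loss idea under that assumption, not the report's model, and every name in it is hypothetical.

```python
import numpy as np

def pinn_style_loss(u, x, x_obs, u_obs, lam=1.0):
    """Composite PINN-style loss for the toy ODE u'(x) = u(x):
    data misfit at sparse observations plus a physics-residual penalty."""
    # Data-driven term: misfit at the observation points.
    data_loss = np.mean((np.interp(x_obs, x, u) - u_obs) ** 2)
    # Physics penalty: residual of the governing equation u' - u = 0,
    # approximated on the collocation grid by finite differences.
    physics_loss = np.mean((np.gradient(u, x) - u) ** 2)
    return data_loss + lam * physics_loss

x = np.linspace(0.0, 1.0, 201)        # collocation grid
x_obs = np.array([0.0, 0.5, 1.0])     # sparse "training data" locations
u_obs = np.exp(x_obs)                 # observations from the true solution

good = pinn_style_loss(np.exp(x), x, x_obs, u_obs)               # exact solution
bad = pinn_style_loss(1.0 + (np.e - 1.0) * x, x, x_obs, u_obs)   # fits endpoints only
```

The exact solution scores near zero on both terms, while a straight line through the endpoints is penalized mainly by the physics residual; that penalty is what lets a PINN regularize a surrogate trained on little data.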