Academic literature on the topic 'Nearest Neighbor Classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Nearest Neighbor Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Nearest Neighbor Classification"

1

Onyezewe, Anozie, Armand F. Kana, Fatimah B. Abdullahi, and Aminu O. Abdulsalami. "An Enhanced Adaptive k-Nearest Neighbor Classifier Using Simulated Annealing." International Journal of Intelligent Systems and Applications 13, no. 1 (February 8, 2021): 34–44. http://dx.doi.org/10.5815/ijisa.2021.01.03.

Abstract:
The k-Nearest Neighbor classifier is a simple and widely applied data classification algorithm which does well in real-world applications. The overall classification accuracy of the k-Nearest Neighbor algorithm largely depends on the choice of the number of nearest neighbors (k). The use of a constant k value does not always yield the best solutions, especially for real-world datasets with an irregular class and density distribution of data points, as it totally ignores the class and density distribution of a test point's k-environment or neighborhood. A resolution to this problem is to dynamically choose k for each test instance to be classified. However, given a large dataset, it becomes very demanding to maximize the k-Nearest Neighbor performance by tuning k. This work proposes the use of Simulated Annealing, a metaheuristic search algorithm, to select an optimal k, thus eliminating the prospect of an exhaustive search for optimal k. The results obtained in four different classification tasks demonstrate a significant improvement in computational efficiency against the k-Nearest Neighbor methods that perform an exhaustive search for k, as accurate nearest neighbors are returned faster for k-Nearest Neighbor classification, thus reducing the computation time.
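For readers who want to experiment with the idea summarized above, here is a minimal Python sketch (not the authors' implementation) that uses simulated annealing to search for a good k, scoring each candidate with cross-validated accuracy; the dataset, cooling schedule, and perturbation step are illustrative assumptions.

```python
# Illustrative sketch only: simulated-annealing search for k in k-NN,
# scored by cross-validated accuracy (not the paper's exact procedure).
import math
import random

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def cv_accuracy(k, X, y):
    """Mean 5-fold cross-validated accuracy of k-NN for a given k."""
    return cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()


def anneal_k(X, y, k_max=30, steps=100, temp0=1.0, cooling=0.95, seed=0):
    rng = random.Random(seed)
    k = rng.randint(1, k_max)
    best_k = k
    best_score = current_score = cv_accuracy(k, X, y)
    temp = temp0
    for _ in range(steps):
        # Perturb k by a small random step (illustrative neighborhood move).
        candidate = min(k_max, max(1, k + rng.choice([-3, -2, -1, 1, 2, 3])))
        score = cv_accuracy(candidate, X, y)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if score > current_score or rng.random() < math.exp((score - current_score) / temp):
            k, current_score = candidate, score
            if score > best_score:
                best_k, best_score = candidate, score
        temp *= cooling
    return best_k, best_score


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    k, acc = anneal_k(X, y)
    print(f"selected k = {k}, cross-validated accuracy = {acc:.3f}")
```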
2

Murphy, O. J. "Nearest neighbor pattern classification perceptrons." Proceedings of the IEEE 78, no. 10 (1990): 1595–98. http://dx.doi.org/10.1109/5.58344.

3

Hastie, T., and R. Tibshirani. "Discriminant adaptive nearest neighbor classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 18, no. 6 (June 1996): 607–16. http://dx.doi.org/10.1109/34.506411.

4

Geva, S., and J. Sitte. "Adaptive nearest neighbor pattern classification." IEEE Transactions on Neural Networks 2, no. 2 (March 1991): 318–22. http://dx.doi.org/10.1109/72.80344.

5

Wu, Yingquan, Krassimir Ianakiev, and Venu Govindaraju. "Improved k-nearest neighbor classification." Pattern Recognition 35, no. 10 (October 2002): 2311–18. http://dx.doi.org/10.1016/s0031-3203(01)00132-7.

6

Gou, Jianping, Yongzhao Zhan, Yunbo Rao, Xiangjun Shen, Xiaoming Wang, and Wu He. "Improved pseudo nearest neighbor classification." Knowledge-Based Systems 70 (November 2014): 361–75. http://dx.doi.org/10.1016/j.knosys.2014.07.020.

7

Gursoy, Mehmet Emre, Ali Inan, Mehmet Ercan Nergiz, and Yucel Saygin. "Differentially private nearest neighbor classification." Data Mining and Knowledge Discovery 31, no. 5 (July 21, 2017): 1544–75. http://dx.doi.org/10.1007/s10618-017-0532-z.

8

Rajarajeswari, Perepi. "Hyperspectral Image Classification by Using K-Nearest Neighbor Algorithm." International Journal of Psychosocial Rehabilitation 24, no. 5 (April 20, 2020): 5068–74. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020214.

9

Hall, Peter, Byeong U. Park, and Richard J. Samworth. "Choice of neighbor order in nearest-neighbor classification." Annals of Statistics 36, no. 5 (October 2008): 2135–52. http://dx.doi.org/10.1214/07-aos537.

10

Sugesti, Annisa, Moch Abdul Mukid, and Tarno Tarno. "PERBANDINGAN KINERJA MUTUAL K-NEAREST NEIGHBOR (MKNN) DAN K-NEAREST NEIGHBOR (KNN) DALAM ANALISIS KLASIFIKASI KELAYAKAN KREDIT." Jurnal Gaussian 8, no. 3 (August 30, 2019): 366–76. http://dx.doi.org/10.14710/j.gauss.v8i3.26681.

Abstract:
Credit feasibility analysis is important for lenders to avoid risk amid the increase in credit applications. This analysis can be carried out with classification techniques. The classification technique used in this research is instance-based classification. These techniques tend to be simple, but are very dependent on the determination of the K value. K is the number of nearest neighbors considered when classifying new data. A small value of K is very sensitive to outliers. This weakness can be overcome using an algorithm that is able to handle outliers, one of them being Mutual K-Nearest Neighbor (MKNN). MKNN removes outliers first, then predicts new observation classes based on the majority class of their mutual nearest neighbors. The algorithm is compared with KNN without outliers. The model is evaluated by 10-fold cross-validation and the classification performance is measured by the Geometric Mean of sensitivity and specificity. Based on the analysis, the optimal value of K is 9 for MKNN and 3 for KNN, with the highest G-Mean produced by KNN equal to 0.718, while the G-Mean produced by MKNN is 0.702. The best alternative for classifying credit feasibility in this study is the K-Nearest Neighbor (KNN) algorithm with K=3. Keywords: Classification, Credit, MKNN, KNN, G-Mean.
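As a rough illustration of the MKNN procedure described in this abstract, the sketch below (not the authors' code) keeps only training points that are mutual k-nearest neighbors of one another before running ordinary k-NN; the synthetic dataset and the k = 9 setting are assumptions taken loosely from the abstract.

```python
# Illustrative sketch of the mutual-k-nearest-neighbor idea: keep only training points
# that appear in the neighbor lists of their own neighbors, then classify with plain k-NN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors


def mutual_knn_filter(X, y, k=9):
    # Neighbor lists exclude the query point itself (hence k + 1 below).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    neighbors = nn.kneighbors(X, return_distance=False)[:, 1:]
    keep = []
    for i, nbrs in enumerate(neighbors):
        # Point i is "mutual" with j if each lists the other among its k neighbors.
        if any(i in neighbors[j] for j in nbrs):
            keep.append(i)
    keep = np.array(keep)
    return X[keep], y[keep]


if __name__ == "__main__":
    X, y = make_classification(n_samples=600, n_features=8, weights=[0.7, 0.3], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    X_f, y_f = mutual_knn_filter(X_tr, y_tr, k=9)        # MKNN-style outlier removal
    clf = KNeighborsClassifier(n_neighbors=9).fit(X_f, y_f)
    print("accuracy after mutual-kNN filtering:", clf.score(X_te, y_te))
```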

Dissertations / Theses on the topic "Nearest Neighbor Classification"

1

Karo, Ciril. "Two new nearest neighbor classification rules." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA354997.

Abstract:
Thesis (M.S. in Operations Research) Naval Postgraduate School, September 1998.
"September 1998." Thesis advisor(s): Samuel E. Buttrey. Includes bibliographical references (p. 69-71). Also available online.
2

Moraski, Ashley M. "Classification via distance profile nearest neighbors." Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-theses/703.

Abstract:
Most classification rules can be expressed in terms of a distance (or dissimilarity) from the point to be classified to each of the candidate classes. For example, linear discriminant analysis classifies points into the class for which the (sample) Mahalanobis distance is smallest. However, dependence among these point-to-group distance measures is generally ignored. The primary goal of this project is to investigate the properties of a general non-parametric classification rule which takes this dependence structure into account. A review of classification procedures and applications is presented. The distance profile nearest-neighbor classification rule is defined. Properties of the rule are then explored via application to both real and simulated data and comparisons to other classification rules are discussed.
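The abstract's opening observation, that many rules reduce to a profile of point-to-class distances, with LDA picking the class of smallest sample Mahalanobis distance, can be illustrated with a short sketch; this shows only the plain minimum-distance rule, not the thesis's dependence-aware rule.

```python
# Illustrative sketch: compute a "distance profile" of sample Mahalanobis distances
# from a test point to each class, then assign the nearest class.
import numpy as np
from sklearn.datasets import load_wine


def mahalanobis_profile(x, X, y):
    profile = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))  # pseudo-inverse for stability
        d = x - mu
        profile[c] = float(np.sqrt(d @ cov_inv @ d))
    return profile


if __name__ == "__main__":
    X, y = load_wine(return_X_y=True)
    profile = mahalanobis_profile(X[0], X[1:], y[1:])      # leave the test point out
    print("distance profile:", profile)
    print("predicted class:", min(profile, key=profile.get))
```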
3

Gupta, Nidhi. "Mutual k Nearest Neighbor based Classifier." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1289937369.

4

Burkholder, Joshua Jeremy. "Nearest neighbor classification using a density sensitive distance measurement [electronic resource]." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FBurkholder.pdf.

Abstract:
Thesis (M.S. in Modeling, Virtual Environments, And Simulations (MOVES))--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Squire, Kevin. "September 2009." Description based on title screen as viewed on November 03, 2009. Author(s) subject terms: Classification, Supervised Learning, k-Nearest Neighbor Classification, Euclidean Distance, Mahalanobis Distance, Density Sensitive Distance, Parzen Windows, Manifold Parzen Windows, Kernel Density Estimation Includes bibliographical references (p. 99-100). Also available in print.
5

Ozsakabasi, Feray. "Classification Of Forest Areas By K Nearest Neighbor Method: Case Study, Antalya." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609548/index.pdf.

Abstract:
Among the various remote sensing methods that can be used to map forest areas, the K Nearest Neighbor (KNN) supervised classification method is becoming increasingly popular for creating forest inventories in some countries. In this study, the utility of the KNN algorithm is evaluated for forest/non-forest/water stratification. Antalya is selected as the study area. The data used are composed of Landsat TM and Landsat ETM satellite images, acquired in 1987 and 2002, respectively, SRTM 90 meters digital elevation model (DEM) and land use data from the year 2003. The accuracies of different modifications of the KNN algorithm are evaluated using Leave One Out, which is a special case of K-fold cross-validation, and traditional accuracy assessment using error matrices. The best parameters are found to be Euclidean distance metric, inverse distance weighting, and k equal to 14, while using bands 4, 3 and 2. With these parameters, the cross-validation error is 0.009174, and the overall accuracy is around 86%. The results are compared with those from the Maximum Likelihood algorithm. KNN results are found to be accurate enough for practical applicability of this method for mapping forest areas.
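A hedged scikit-learn analogue of the reported best configuration (Euclidean metric, inverse-distance weighting, k = 14) evaluated with leave-one-out cross-validation might look as follows; a synthetic dataset stands in for the Landsat band values, so the numbers will not match the thesis.

```python
# Illustrative analogue of the reported KNN settings, scored by leave-one-out cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 3-band, 3-class stand-in for forest / non-forest / water pixels.
X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
knn = KNeighborsClassifier(n_neighbors=14, metric="euclidean", weights="distance")
loo_accuracy = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {loo_accuracy:.3f}")
```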
6

PORFIRIO, DAVID JONATHAN. "SINGLE-SEQUENCE PROTEIN SECONDARY STRUCTURE PREDICTION BY NEAREST-NEIGHBOR CLASSIFICATION OF PROTEIN WORDS." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613449.

Abstract:
Predicting protein secondary structure is the process by which, given a sequence of amino acids as input, the secondary structure class of each position in the sequence is predicted. Our approach is built on the extraction of protein words of a fixed length from protein sequences, followed by nearest-neighbor classification in order to predict the secondary structure class of the middle position in each word. We present a new formulation for learning a distance function on protein words based on position-dependent substitution scores on amino acids. These substitution scores are learned by solving a large linear programming problem on examples of words with known secondary structures. We evaluated this approach by using a database of 5519 proteins with a total amino acid length of approximately 3000000. Presently, a test scheme using words of length 23 achieved a uniform average over word position of 65.2%. The average accuracy for alpha-classified words in the test was 63.1%, for beta-classified words was 56.6%, and for coil classified words was 71.6%.
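The distance described in this abstract, a sum of position-dependent substitution scores over fixed-length protein words, can be sketched as below; the random score table stands in for the scores learned by linear programming, and the word length of 23 follows the abstract.

```python
# Illustrative sketch of nearest-neighbor classification of protein words using a
# position-dependent substitution-score distance (random scores stand in for learned ones).
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
WORD_LEN = 23

rng = np.random.default_rng(0)
# scores[p, a, b] = substitution score between residues a and b at word position p.
scores = rng.random((WORD_LEN, len(AMINO_ACIDS), len(AMINO_ACIDS)))
scores = (scores + scores.transpose(0, 2, 1)) / 2          # keep scores symmetric
idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}


def word_distance(w1, w2):
    """Sum of position-dependent substitution scores between two equal-length words."""
    return sum(scores[p, idx[a], idx[b]] for p, (a, b) in enumerate(zip(w1, w2)))


def nearest_neighbor_class(query, train_words, train_labels):
    """Predict the secondary-structure class of the query word's middle position."""
    best = min(range(len(train_words)), key=lambda i: word_distance(query, train_words[i]))
    return train_labels[best]


if __name__ == "__main__":
    train = ["".join(rng.choice(list(AMINO_ACIDS), WORD_LEN)) for _ in range(100)]
    labels = list(rng.choice(["H", "E", "C"], 100))          # alpha / beta / coil
    query = "".join(rng.choice(list(AMINO_ACIDS), WORD_LEN))
    print("predicted class:", nearest_neighbor_class(query, train, labels))
```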
7

Ali, Khan Syed Irteza. "Classification using residual vector quantization." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50300.

Abstract:
Residual vector quantization (RVQ) is a 1-nearest neighbor (1-NN) type of technique. RVQ is a multi-stage implementation of regular vector quantization. An input is successively quantized to the nearest codevector in each stage codebook. In classification, nearest neighbor techniques are very attractive since they model the ideal Bayes class boundaries very accurately. However, nearest neighbor classification techniques require a large, representative dataset. Since in such techniques a test input is assigned a class membership after an exhaustive search of the entire training set, a reasonably large training set can make the implementation of the nearest neighbor classifier unfeasibly costly. Although the k-d tree structure offers a far more efficient implementation of 1-NN search, the cost of storing the data points can become prohibitive, especially in higher dimensionality. RVQ also offers a nice solution to a cost-effective implementation of 1-NN-based classification. Because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries compared to an equivalent 1-NN system, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
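A minimal sketch of the residual vector quantization encoding described above, in which each stage performs a 1-NN search against its own codebook and passes the residual to the next stage, is given below; the k-means stage codebooks and blob data are illustrative assumptions, and the class-labeling step is omitted.

```python
# Illustrative sketch of residual vector quantization (RVQ): each stage quantizes the
# residual left by the previous stage, so a point is represented by a direct sum of one
# codevector per stage. K-means codebooks stand in for whatever training was used.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs


def train_rvq(X, n_stages=3, codebook_size=8, seed=0):
    codebooks, residual = [], X.copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]   # pass residuals to next stage
    return codebooks


def rvq_encode(x, codebooks):
    """Return the stage-wise codevector indices for a single input vector."""
    indices, residual = [], x.copy()
    for cb in codebooks:
        i = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # 1-NN search per stage
        indices.append(i)
        residual = residual - cb[i]
    return indices


if __name__ == "__main__":
    X, _ = make_blobs(n_samples=500, centers=6, n_features=4, random_state=0)
    codebooks = train_rvq(X)
    print("stage indices for X[0]:", rvq_encode(X[0], codebooks))
```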
8

Liu, Dongqing. "GENETIC ALGORITHMS FOR SAMPLE CLASSIFICATION OF MICROARRAY DATA." University of Akron / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=akron1125253420.

9

Rudin, Pierre. "Football result prediction using simple classification algorithms, a comparison between k-Nearest Neighbor and Linear Regression." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187659.

Abstract:
Ever since humans started competing with each other, people have tried to accurately predict the outcome of such events. Football is no exception to this and is extra interesting as a subject for a project like this with the ever-growing amount of data gathered from matches these days. Previously, predictors had to make their predictions using their own knowledge and small amounts of data. This report will use this growing amount of data to find out whether it is possible to accurately predict the outcome of a football match using the k-Nearest Neighbor algorithm and Linear Regression. The algorithms are compared on how accurately they predict the winner of a match, how precisely they predict how many goals each team will score, and the accuracy of the predicted goal difference. The results are graphed and presented in tables. A discussion analyzes the results and draws the conclusion that both algorithms could be useful if used with a good model, and that Linear Regression outperforms k-NN.
10

Blinn, Christine Elizabeth. "Increasing the Precision of Forest Area Estimates through Improved Sampling for Nearest Neighbor Satellite Image Classification." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28694.

Abstract:
The impacts of training data sample size and sampling method on the accuracy of forest/nonforest classifications of three mosaicked Landsat ETM+ images with the nearest neighbor decision rule were explored. Large training data pools of single pixels were used in simulations to create samples with three sampling methods (random, stratified random, and systematic) and eight sample sizes (25, 50, 75, 100, 200, 300, 400, and 500). Two forest area estimation techniques were used to estimate the proportion of forest in each image and to calculate forest area precision estimates. Training data editing was explored to remove problem pixels from the training data pools. All possible band combinations of the six non-thermal ETM+ bands were evaluated for every sample draw. Comparisons were made between classification accuracies to determine if all six bands were needed. The utility of separability indices, minimum and average Euclidian distances, and cross-validation accuracies for the selection of band combinations, prediction of classification accuracies, and assessment of sample quality were determined. Larger training data sample sizes produced classifications with higher average accuracies and lower variability. All three sampling methods had similar performance. Training data editing improved the average classification accuracies by a minimum of 5.45%, 5.31%, and 3.47%, respectively, for the three images. Band combinations with fewer than all six bands almost always produced the maximum classification accuracy for a single sample draw. The number of bands and combination of bands, which maximized classification accuracy, was dependent on the characteristics of the individual training data sample draw, the image, sample size, and, to a lesser extent, the sampling method. All three band selection measures were unable to select band combinations that produced higher accuracies on average than all six bands. Cross-validation accuracies with sample size 500 had high correlations with classification accuracies, and provided an indication of sample quality. Collection of a high quality training data sample is key to the performance of the nearest neighbor classifier. Larger samples are necessary to guarantee classifier performance and the utility of cross-validation accuracies. Further research is needed to identify the characteristics of "good" training data samples.
Ph. D.
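To make the three sampling designs named in the abstract concrete, the sketch below draws a training sample from a labeled pixel pool by simple random, stratified random, and systematic sampling; the pool, class proportions, and sample size are made-up stand-ins, not the dissertation's data.

```python
# Illustrative sketch of the three training-sample designs compared in the dissertation
# (simple random, stratified random, and systematic), drawn from a labeled pixel pool.
import numpy as np

rng = np.random.default_rng(0)
pool_labels = rng.choice(["forest", "nonforest"], size=5000, p=[0.6, 0.4])
n = 100  # training sample size


def simple_random(labels, n):
    return rng.choice(len(labels), size=n, replace=False)


def stratified_random(labels, n):
    idx = []
    classes, counts = np.unique(labels, return_counts=True)
    for c, count in zip(classes, counts):
        members = np.flatnonzero(labels == c)
        take = int(round(n * count / len(labels)))     # proportional allocation
        idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(idx)


def systematic(labels, n):
    step = len(labels) // n
    start = rng.integers(step)
    return np.arange(start, len(labels), step)[:n]     # every step-th pixel from a random start


for name, sampler in [("random", simple_random), ("stratified", stratified_random),
                      ("systematic", systematic)]:
    sample = pool_labels[sampler(pool_labels, n)]
    print(name, dict(zip(*np.unique(sample, return_counts=True))))
```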

Books on the topic "Nearest Neighbor Classification"

1

Dasarathy, Belur V., ed. Nearest neighbor (NN) norms: NN pattern classification techniques. Los Alamitos, Calif.: IEEE Computer Society Press, 1991.

2

Two New Nearest Neighbor Classification Rules. Storming Media, 1998.

3

Dasarathy, Belur V. Nearest Neighbor: Pattern Classification Techniques (NN Norms: NN Pattern Classification Techniques). IEEE Computer Society, 1990.

4

Baillo, Amparo, Antonio Cuevas, and Ricardo Fraiman. Classification methods for functional data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.10.

Abstract:
This article reviews the literature concerning supervised and unsupervised classification of functional data. It first explains the meaning of unsupervised classification vs. supervised classification before discussing the supervised classification problem in the infinite-dimensional case, showing that its formal statement generally coincides with that of discriminant analysis in the classical multivariate case. It then considers the optimal classifier and plug-in rules, empirical risk and empirical minimization rules, linear discrimination rules, the k nearest neighbor (k-NN) method, and kernel rules. It also describes classification based on partial least squares, classification based on reproducing kernels, and depth-based classification. Finally, it examines unsupervised classification methods, focusing on K-means for functional data, K-means for data in a Hilbert space, and impartial trimmed K-means for functional data. Some practical issues, in particular real-data examples and simulations, are reviewed and some selected proofs are given.
5

Min, Renqiang. A non-linear dimensionality reduction method for improving nearest neighbour classification. 2005.

6

López, César Pérez. DATA MINING and MACHINE LEARNING. CLASSIFICATION PREDICTIVE TECHNIQUES : NAIVE BAYES, NEAREST NEIGHBORS and NEURAL NETWORKS: Examples with MATLAB. Lulu Press, Inc., 2021.


Book chapters on the topic "Nearest Neighbor Classification"

1

Seidl, Thomas. "Nearest Neighbor Classification." In Encyclopedia of Database Systems, 1–7. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_561-2.

2

Seidl, Thomas. "Nearest Neighbor Classification." In Encyclopedia of Database Systems, 1885–90. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_561.

3

Seidl, Thomas. "Nearest Neighbor Classification." In Encyclopedia of Database Systems, 2472–78. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_561.

4

Mucherino, Antonio, Petraq J. Papajorgji, and Panos M. Pardalos. "k-Nearest Neighbor Classification." In Data Mining in Agriculture, 83–106. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-88615-2_4.

5

Thomas, Tony, Athira P. Vijayaraghavan, and Sabu Emmanuel. "Nearest Neighbor and Fingerprint Classification." In Machine Learning Approaches in Cyber Security Analytics, 107–28. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1706-8_6.

6

Ma, Hongxing, Xili Wang, and Jianping Gou. "Pseudo Nearest Centroid Neighbor Classification." In Lecture Notes in Electrical Engineering, 103–15. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0539-8_12.

7

Ishii, Naohiro, Yuta Hoki, Yuki Okada, and Yongguang Bao. "Nearest Neighbor Classification by Relearning." In Intelligent Data Engineering and Automated Learning - IDEAL 2009, 42–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04394-9_6.

8

Biau, Gérard, and Luc Devroye. "Basics of classification." In Lectures on the Nearest Neighbor Method, 223–31. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25388-6_17.

9

Globig, Christoph, and Stefan Wess. "Symbolic Learning and Nearest-Neighbor Classification." In Studies in Classification, Data Analysis, and Knowledge Organization, 17–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/978-3-642-46808-7_2.

10

Wu, Yingquan, Krasimir G. Ianakiev, and Venu Govindaraju. "Improvements in K-Nearest Neighbor Classification." In Lecture Notes in Computer Science, 224–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44732-6_23.


Conference papers on the topic "Nearest Neighbor Classification"

1

Zeng Yong, Wang Bing, Zhao Liang, and Yang Yupu. "The extended nearest neighbor classification." In 2008 Chinese Control Conference (CCC). IEEE, 2008. http://dx.doi.org/10.1109/chicc.2008.4605575.

2

Ishii, N., I. Torii, Y. Bao, and H. Tanaka. "Modified Reduct: Nearest Neighbor Classification." In 2012 IEEE/ACIS 11th International Conference on Computer and Information Science (ICIS). IEEE, 2012. http://dx.doi.org/10.1109/icis.2012.72.

3

Chen, Qiaona, and Shiliang Sun. "Hierarchical Large Margin Nearest Neighbor Classification." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.228.

4

Qamar, Ali Mustafa, Eric Gaussier, Jean-Pierre Chevallet, and Joo Hwee Lim. "Similarity Learning for Nearest Neighbor Classification." In 2008 Eighth IEEE International Conference on Data Mining (ICDM). IEEE, 2008. http://dx.doi.org/10.1109/icdm.2008.81.

5

Domeniconi, Carlotta, and Dimitrios Gunopulos. "Efficient Local Flexible Nearest Neighbor Classification." In Proceedings of the 2002 SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2002. http://dx.doi.org/10.1137/1.9781611972726.21.

6

Kriminger, Evan, Jose C. Principe, and Choudur Lakshminarayan. "Nearest Neighbor Distributions for imbalanced classification." In 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane). IEEE, 2012. http://dx.doi.org/10.1109/ijcnn.2012.6252718.

7

Gong, Chaoyu, Yongbin Li, Yong Liu, Pei-hong Wang, and Yang You. "Joint Evidential $K$-Nearest Neighbor Classification." In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 2022. http://dx.doi.org/10.1109/icde53745.2022.00204.

8

Ishii, Naohiro, Ippei Torii, Yongguang Bao, and Hidekazu Tanaka. "Mapping of nearest neighbor for classification." In 2013 IEEE/ACIS 12th International Conference on Computer and Information Science (ICIS). IEEE, 2013. http://dx.doi.org/10.1109/icis.2013.6607819.

9

Wang, Bing, Yong Zeng, and Yupu Yang. "Generalized nearest neighbor rule for pattern classification." In 2008 7th World Congress on Intelligent Control and Automation. IEEE, 2008. http://dx.doi.org/10.1109/wcica.2008.4594258.

10

Tomasev, Nenad, Miloš Radovanović, Dunja Mladenić, and Mirjana Ivanović. "A probabilistic approach to nearest-neighbor classification." In the 20th ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2063576.2063919.


Reports on the topic "Nearest Neighbor Classification"

1

Han, Euihong, George Karypis, and Vipin Kumar. Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada439688.

2

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Abstract:
This project includes two main parts: development of a “Selective Wavelength Imaging Sensor” and an “Adaptive Classifier System” for adaptive imaging and sorting of agricultural products, respectively. Three different technologies were investigated for building a selectable wavelength imaging sensor: diffraction gratings, tunable filters, and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor. Acousto-optic tunable filters were found to be most suitable for the selective wavelength imaging sensor. Consequently, a selectable wavelength imaging sensor was constructed and tested using the selected technology. The sensor was tested and algorithms for multispectral image acquisition were developed. A high-speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing on a DSP and a PC-based host CPU, in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface technique to promote the accessibility of the difficult theoretical subjects. Typical grade classifiers based on k-Nearest Neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier. Simulations were run on artificial data with two dimensions, four populations, and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size, and stack length determined optimal values for the different parameters involved. The technique was also applied to a real produce sorting problem, namely an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4-month period. In all cases, the results showed a clear reduction in classification error by using the adaptive technique versus non-adaptive sorting.
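The population-classification idea at the end of this abstract, comparing an incoming sample's empirical CDF against stored prototype populations with the Kolmogorov-Smirnov statistic, can be sketched as follows; the prototype distributions and feature values are synthetic placeholders, not the project's data.

```python
# Illustrative sketch: assign an incoming sample to the prototype population whose
# empirical distribution is closest under the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
prototypes = {
    "grade_A": rng.normal(10.0, 1.0, size=500),   # stored feature samples per grade
    "grade_B": rng.normal(12.5, 1.5, size=500),
}
incoming = rng.normal(12.3, 1.4, size=80)          # new batch of measurements

# A smaller KS statistic means the empirical CDFs are closer, i.e. a better match.
stats = {name: ks_2samp(incoming, ref).statistic for name, ref in prototypes.items()}
print("KS distances:", stats)
print("assigned population:", min(stats, key=stats.get))
```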