Academic literature on the topic 'K-Nearest Neighbors algorithm'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'K-Nearest Neighbors algorithm.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "K-Nearest Neighbors algorithm"
Zhai, Junhai, Jiaxing Qi, and Sufang Zhang. "An instance selection algorithm for fuzzy K-nearest neighbor." Journal of Intelligent & Fuzzy Systems 40, no. 1 (January 4, 2021): 521–33. http://dx.doi.org/10.3233/jifs-200124.
Houben, I., L. Wehenkel, and M. Pavella. "Genetic Algorithm Based k Nearest Neighbors." IFAC Proceedings Volumes 30, no. 6 (May 1997): 1075–80. http://dx.doi.org/10.1016/s1474-6670(17)43506-3.
Onyezewe, Anozie, Armand F. Kana, Fatimah B. Abdullahi, and Aminu O. Abdulsalami. "An Enhanced Adaptive k-Nearest Neighbor Classifier Using Simulated Annealing." International Journal of Intelligent Systems and Applications 13, no. 1 (February 8, 2021): 34–44. http://dx.doi.org/10.5815/ijisa.2021.01.03.
Piegl, Les A., and Wayne Tiller. "Algorithm for finding all k nearest neighbors." Computer-Aided Design 34, no. 2 (February 2002): 167–72. http://dx.doi.org/10.1016/s0010-4485(00)00141-x.
Tu, Ching Ting, Hsiau Wen Lin, Hwei-Jen Lin, and Yue Shen Li. "Super-Resolution Based on Clustered Examples." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 06 (May 9, 2016): 1655015. http://dx.doi.org/10.1142/s0218001416550156.
Song, Yunsheng, Xiaohan Kong, and Chao Zhang. "A Large-Scale k-Nearest Neighbor Classification Algorithm Based on Neighbor Relationship Preservation." Wireless Communications and Mobile Computing 2022 (January 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/7409171.
Prasetio, Rizki Tri, Ali Akbar Rismayadi, and Iedam Fardian Anshori. "Implementasi Algoritma Genetika pada k-nearest neighbours untuk Klasifikasi Kerusakan Tulang Belakang." Jurnal Informatika 5, no. 2 (September 29, 2018): 186–94. http://dx.doi.org/10.31311/ji.v5i2.4123.
Li, Xiaoguang. "Research and Implementation of Digital Media Recommendation System Based on Semantic Classification." Advances in Multimedia 2022 (March 27, 2022): 1–6. http://dx.doi.org/10.1155/2022/4070827.
Prasad, Devendra, Sandip Kumar Goyal, Avinash Sharma, Amit Bindal, and Virendra Singh Kushwah. "System Model for Prediction Analytics Using K-Nearest Neighbors Algorithm." Journal of Computational and Theoretical Nanoscience 16, no. 10 (October 1, 2019): 4425–30. http://dx.doi.org/10.1166/jctn.2019.8536.
Dissertations / Theses on the topic "K-Nearest Neighbors algorithm"
Li, Zheng, and Zheng Li. "Improving Estimation Accuracy of GPS-Based Arterial Travel Time Using K-Nearest Neighbors Algorithm." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625901.
Piro, Paolo. "Learning prototype-based classification rules in a boosting framework: application to real-world and medical image categorization." PhD thesis, Université de Nice Sophia-Antipolis, 2010. http://tel.archives-ouvertes.fr/tel-00590403.
Gupta, Nidhi. "Mutual k Nearest Neighbor based Classifier." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1289937369.
Olivares, Javier. "Scaling out-of-core k-nearest neighbors computation on single machines." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S073/document.
Full textThe K-Nearest Neighbors (KNN) is an efficient method to find similar data among a large set of it. Over the years, a huge number of applications have used KNN's capabilities to discover similarities within the data generated in diverse areas such as business, medicine, music, and computer science. Despite years of research have brought several approaches of this algorithm, its implementation still remains a challenge, particularly today where the data is growing at unthinkable rates. In this context, running KNN on large datasets brings two major issues: huge memory footprints and very long runtimes. Because of these high costs in terms of computational resources and time, KNN state-of the-art works do not consider the fact that data can change over time, assuming always that the data remains static throughout the computation, which unfortunately does not conform to reality at all. In this thesis, we address these challenges in our contributions. Firstly, we propose an out-of-core approach to compute KNN on large datasets, using a commodity single PC. We advocate this approach as an inexpensive way to scale the KNN computation compared to the high cost of a distributed algorithm, both in terms of computational resources as well as coding, debugging and deployment effort. Secondly, we propose a multithreading out-of-core approach to face the challenges of computing KNN on data that changes rapidly and continuously over time. After a thorough evaluation, we observe that our main contributions address the challenges of computing the KNN on large datasets, leveraging the restricted resources of a single machine, decreasing runtimes compared to that of the baselines, and scaling the computation both on static and dynamic datasets
Wong, Wing Sing. "K-nearest-neighbor queries with non-spatial predicates on range attributes /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20WONGW.
Aikes Junior, Jorge. "Estudo da influência de diversas medidas de similaridade na previsão de séries temporais utilizando o algoritmo KNN-TSP." Universidade Estadual do Oeste do Parana, 2012. http://tede.unioeste.br:8080/tede/handle/tede/1084.
Full textTime series can be understood as any set of observations which are time ordered. Among the many possible tasks appliable to temporal data, one that has attracted increasing interest, due to its various applications, is the time series forecasting. The k-Nearest Neighbor - Time Series Prediction (kNN-TSP) algorithm is a non-parametric method for forecasting time series. One of its advantages, is its easiness application when compared to parametric methods. Even though its easier to define kNN-TSP s parameters, some issues remain opened. This research is focused on the study of one of these parameters: the similarity measure. This parameter was empirically evaluated using various similarity measures in a large set of time series, including artificial series with seasonal and chaotic characteristics, and several real world time series. It was also carried out a case study comparing the predictive accuracy of the kNN-TSP algorithm with the Moving Average (MA), univariate Seasonal Auto-Regressive Integrated Moving Average (SARIMA) and multivariate SARIMA methods in a time series of a Korean s hospital daily patients flow in the Emergency Department. This work also proposes an approach to the development of a hybrid similarity measure which combines characteristics from several measures. The research s result demonstrated that the Lp Norm s measures have an advantage over other measures evaluated, due to its lower computational cost and for providing, in general, greater accuracy in temporal data forecasting using the kNN-TSP algorithm. Although the literature in general adopts the Euclidean similarity measure to calculate de similarity between time series, the Manhattan s distance can be considered an interesting candidate for defining similarity, due to the absence of statistical significant difference and to its lower computational cost when compared to the Euclidian measure. 
The measure proposed in this work does not show significant results, but it is promising for further research. Regarding the case study, the kNN-TSP algorithm with only the similarity measure parameter optimized achieves a considerably lower error than the MA s best configuration, and a slightly greater error than the univariate e multivariate SARIMA s optimal settings presenting less than one percent of difference.
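The role of the similarity measure in kNN-TSP can be illustrated with a minimal sketch (our own illustrative code under stated assumptions, not the thesis's implementation): the last `window` observations form the query, the k historical windows most similar under a chosen Lp norm are located, and the values that followed them are averaged as the forecast.

```python
import numpy as np

def knn_tsp_forecast(series, window, k, p=2):
    """Minimal kNN-TSP sketch: forecast the next value of `series`
    by finding the k historical windows most similar to the last
    `window` observations under the Lp norm, and averaging the
    values that immediately followed those windows."""
    series = np.asarray(series, dtype=float)
    query = series[-window:]
    # Candidate windows: every window whose successor is known
    # and which does not overlap the query itself.
    candidates = [
        (np.linalg.norm(series[i:i + window] - query, ord=p), series[i + window])
        for i in range(len(series) - 2 * window)
    ]
    candidates.sort(key=lambda c: c[0])          # most similar first
    return np.mean([succ for _, succ in candidates[:k]])

# A noiseless seasonal series: the forecast should track the cycle.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)
pred_euclid = knn_tsp_forecast(series, window=10, k=3, p=2)   # Euclidean (L2)
pred_manhat = knn_tsp_forecast(series, window=10, k=3, p=1)   # Manhattan (L1)
```

Varying `p` swaps the similarity measure without touching the rest of the algorithm, which is the kind of comparison the thesis performs across many series; Manhattan (`p=1`) avoids the squaring and square root of the Euclidean measure, hence its lower computational cost.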
Johansson, David. "Price Prediction of Vinyl Records Using Machine Learning Algorithms." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96464.
Mestre, Ricardo Jorge Palheira. "Improvements on the KNN classifier." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10923.
Full textThe object classification is an important area within the artificial intelligence and its application extends to various areas, whether or not in the branch of science. Among the other classifiers, the K-nearest neighbor (KNN) is among the most simple and accurate especially in environments where the data distribution is unknown or apparently not parameterizable. This algorithm assigns the classifying element the major class in the K nearest neighbors. According to the original algorithm, this classification implies the calculation of the distances between the classifying instance and each one of the training objects. If on the one hand, having an extensive training set is an element of importance in order to obtain a high accuracy, on the other hand, it makes the classification of each object slower due to its lazy-learning algorithm nature. Indeed, this algorithm does not provide any means of storing information about the previous calculated classifications,making the calculation of the classification of two equal instances mandatory. In a way, it may be said that this classifier does not learn. This dissertation focuses on the lazy-learning fragility and intends to propose a solution that transforms the KNNinto an eager-learning classifier. In other words, it is intended that the algorithm learns effectively with the training set, thus avoiding redundant calculations. In the context of the proposed change in the algorithm, it is important to highlight the attributes that most characterize the objects according to their discriminating power. In this framework, there will be a study regarding the implementation of these transformations on data of different types: continuous and/or categorical.
Liu, Dongqing. "Genetic Algorithms for Sample Classification of Microarray Data." University of Akron / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=akron1125253420.
Neo, TohKoon. "A Direct Algorithm for the K-Nearest-Neighbor Classifier via Local Warping of the Distance Metric." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2168.pdf.
Book chapters on the topic "K-Nearest Neighbors algorithm"
Rekha Sundari, M., G. Siva Rama Krishna, V. Sai Naveen, and G. Bharathi. "Crop Recommendation System Using K-Nearest Neighbors Algorithm." In Proceedings of 6th International Conference on Recent Trends in Computing, 581–89. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4501-0_54.
Agarwal, Pankaj K., and Sandeep Sen. "Selection in monotone matrices and computing kth nearest neighbors." In Algorithm Theory — SWAT '94, 13–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58218-5_2.
Hamraz, Seyed Hamid, and Seyed Shams Feyzabadi. "General-Purpose Learning Machine Using K-Nearest Neighbors Algorithm." In RoboCup 2005: Robot Soccer World Cup IX, 529–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11780519_50.
Wang, Xiaochun, Xiali Wang, and Don Mitchell Wilkes. "A New Fast K-Nearest Neighbors-Based Clustering Algorithm." In Machine Learning-based Natural Scene Recognition for Mobile Robot Localization in An Unknown Environment, 129–51. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9217-7_7.
Yin, Shihao, Runxiu Wu, Peiwu Li, Baohong Liu, and Xuefeng Fu. "Density Peaks Clustering Algorithm Based on K Nearest Neighbors." In Advances in Intelligent Systems and Computing, 129–44. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8048-9_13.
Rathore, Manjari Singh, Praneet Saurabh, Ritu Prasad, and Pradeep Mewada. "Text Classification with K-Nearest Neighbors Algorithm Using Gain Ratio." In Advances in Intelligent Systems and Computing, 23–31. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2414-1_3.
Guarracino, Mario R., and Adriano Nebbia. "Predicting Protein-Protein Interactions with K-Nearest Neighbors Classification Algorithm." In Computational Intelligence Methods for Bioinformatics and Biostatistics, 139–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14571-1_10.
Goswami, Partha P., Sandip Das, and Subhas C. Nandy. "Simplex Range Searching and k Nearest Neighbors of a Line Segment in 2D." In Algorithm Theory — SWAT 2002, 69–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45471-3_8.
Zhang, Yan, Yan Jia, Xiaobin Huang, Bin Zhou, and Jian Gu. "An Adaptive k-Nearest Neighbors Clustering Algorithm for Complex Distribution Dataset." In Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence, 398–407. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74205-0_44.
Agrawal, Rashmi. "Integrated Effect of Nearest Neighbors and Distance Measures in k-NN Algorithm." In Advances in Intelligent Systems and Computing, 759–66. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6620-7_74.
Conference papers on the topic "K-Nearest Neighbors algorithm"
Gauza, Dariusz, Anna Żukowska, and Robert Nowak. "K-nearest neighbors clustering algorithm." In Symposium on Photonics Applications in Astronomy, Communications, Industry and High-Energy Physics Experiments, edited by Ryszard S. Romaniuk. SPIE, 2014. http://dx.doi.org/10.1117/12.2074124.
Li, Chengjie, Zheng Pei, Bo Li, and Zhen Zhang. "A New Fuzzy K-Nearest Neighbors Algorithm." In Proceedings of the 4th International ISKE Conference on Intelligent Systems and Knowledge Engineering. World Scientific, 2009. http://dx.doi.org/10.1142/9789814295062_0039.
Gao, Yunlong, Jin-yan Pan, and Feng Gao. "Improved boosting algorithm through weighted k-nearest neighbors classifier." In 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccsit.2010.5563551.
Asadi, Meysam, and Kazem Pourhossein. "Locating Renewable Energy Generators Using K-Nearest Neighbors (KNN) Algorithm." In 2019 Iranian Conference on Renewable Energy & Distributed Generation (ICREDG). IEEE, 2019. http://dx.doi.org/10.1109/icredg47187.2019.190179.
Lu, Fang, and Qingyuan Bai. "A refined weighted K-Nearest Neighbors algorithm for text categorization." In 2010 IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE). IEEE, 2010. http://dx.doi.org/10.1109/iske.2010.5680854.
Young, Barrington, and Raj Bhatnagar. "Secure algorithm for finding K nearest neighbors in distributed databases." In the 2006 International Conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1501434.1501514.
Yang, Shu-Bo, Julaiti Alafate, Xi Wang, and Zhen Tian. "A Self-Tuning Model Framework Using K-Nearest Neighbors Algorithm." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15262.
Gauhar, Noushin, Sunanda Das, and Khadiza Sarwar Moury. "Prediction of Flood in Bangladesh using k-Nearest Neighbors Algorithm." In 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST). IEEE, 2021. http://dx.doi.org/10.1109/icrest51555.2021.9331199.
Japa, Arialdis, and Yong Shi. "Parallelizing the Bounded K-Nearest Neighbors Algorithm for Distributed Computing Systems." In 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2020. http://dx.doi.org/10.1109/ccwc47524.2020.9031198.
Ling, Wang, and Fu Dong-Mei. "Estimation of Missing Values Using a Weighted K-Nearest Neighbors Algorithm." In 2009 International Conference on Environmental Science and Information Application Technology, ESIAT. IEEE, 2009. http://dx.doi.org/10.1109/esiat.2009.206.
Reports on the topic "K-Nearest Neighbors algorithm"
Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.