Journal articles on the topic 'KNN classification'

To see the other types of publications on this topic, follow the link: KNN classification.

Consult the top 50 journal articles for your research on the topic 'KNN classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gweon, Hyukjun, Matthias Schonlau, and Stefan H. Steiner. "The k conditional nearest neighbor algorithm for classification and class probability estimation." PeerJ Computer Science 5 (May 13, 2019): e194. http://dx.doi.org/10.7717/peerj-cs.194.

Abstract:
The k nearest neighbor (kNN) approach is a simple and effective nonparametric algorithm for classification. One of the drawbacks of kNN is that the method can only give coarse estimates of class probabilities, particularly for low values of k. To avoid this drawback, we propose a new nonparametric classification method based on nearest neighbors conditional on each class: the proposed approach calculates the distance between a new instance and the kth nearest neighbor from each class, estimates posterior probabilities of class memberships using the distances, and assigns the instance to the cl…
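The class-conditional mechanism described in this abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' estimator: the function name and the inverse-distance normalization are our own stand-ins.

```python
import numpy as np

def kcnn_predict(X_train, y_train, x_new, k=3):
    """Class-conditional kNN sketch: for each class, measure the distance from
    x_new to that class's k-th nearest training point, then convert the
    distances into scores (inverse distance here, purely for illustration)."""
    classes = np.unique(y_train)
    kth = np.array([
        np.sort(np.linalg.norm(X_train[y_train == c] - x_new, axis=1))[
            min(k, np.sum(y_train == c)) - 1]
        for c in classes
    ])
    scores = 1.0 / (kth + 1e-12)      # a closer k-th neighbor gives a higher score
    probs = scores / scores.sum()     # normalize into a probability vector
    return classes[np.argmax(probs)], dict(zip(classes, probs))
```

A new instance close to one class's cluster receives most of the probability mass for that class, which is the behavior the abstract's posterior estimates aim for.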
2

Zhang, Shichao. "Cost-sensitive KNN classification." Neurocomputing 391 (May 2020): 234–42. http://dx.doi.org/10.1016/j.neucom.2018.11.101.

3

Zhao, Puning, and Lifeng Lai. "Efficient Classification with Adaptive KNN." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 11007–14. http://dx.doi.org/10.1609/aaai.v35i12.17314.

Abstract:
In this paper, we propose an adaptive kNN method for classification, in which different k are selected for different test samples. Our selection rule is easy to implement since it is completely adaptive and does not require any knowledge of the underlying distribution. The convergence rate of the risk of this classifier to the Bayes risk is shown to be minimax optimal for various settings. Moreover, under some special assumptions, the convergence rate is especially fast and does not decay with the increase of dimensionality.
4

Zhang, Shichao, Xuelong Li, Ming Zong, Xiaofeng Zhu, and Debo Cheng. "Learning k for kNN Classification." ACM Transactions on Intelligent Systems and Technology 8, no. 3 (2017): 1–19. http://dx.doi.org/10.1145/2990508.

5

Khairina, Nurul, Theofil Tri Saputra Sibarani, Rizki Muliono, Zulfikar Sembiring, and Muhathir Muhathir. "Identification of Pneumonia using The K-Nearest Neighbors Method using HOG Fitur Feature Extraction." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 5, no. 2 (2022): 562–68. http://dx.doi.org/10.31289/jite.v5i2.6216.

Abstract:
Pneumonia is a wet lung disease, generally caused by viruses, bacteria or fungi. Not infrequently, pneumonia can cause death. The K-Nearest Neighbors method is a classification method that assigns the majority class among the k closest neighbors. People are often not very worried about pneumonia because its symptoms resemble a normal cough; however, fast and accurate information from health experts is necessary so that pneumonia symptoms can be recognized early and dealt with faster. In this study, the researchers diagnose pn…
6

Raeisi Shahraki, Hadi, Saeedeh Pourahmad, and Najaf Zare. "K Important Neighbors: A Novel Approach to Binary Classification in High Dimensional Data." BioMed Research International 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/7560807.

Abstract:
K nearest neighbors (KNN) is known as one of the simplest nonparametric classifiers, but in a high dimensional setting the accuracy of KNN is affected by nuisance features. In this study, we proposed the K important neighbors (KIN) as a novel approach for binary classification in high dimensional problems. To avoid the curse of dimensionality, we implemented smoothly clipped absolute deviation (SCAD) logistic regression at the initial stage and considered the importance of each feature in the construction of the dissimilarity measure, imposing feature contributions as a function of the SCAD coefficients on…
7

Yang, Zhida, Peng Liu, and Yi Yang. "Convective/Stratiform Precipitation Classification Using Ground-Based Doppler Radar Data Based on the K-Nearest Neighbor Algorithm." Remote Sensing 11, no. 19 (2019): 2277. http://dx.doi.org/10.3390/rs11192277.

Abstract:
Stratiform and convective rain types are associated with different cloud physical processes, vertical structures, thermodynamic influences and precipitation types. Distinguishing convective and stratiform systems is beneficial to meteorology research and weather forecasting. However, there is no clear boundary between stratiform and convective precipitation. In this study, a machine learning algorithm, K-nearest neighbor (KNN), is used to classify precipitation types. Six Doppler radar (WSR-98D/SA) data sets from Jiangsu, Guangzhou and Anhui Provinces in China were used as training and classif…
8

Lu, Jiaxuan, and Hyukjun Gweon. "Random k conditional nearest neighbor for high-dimensional data." PeerJ Computer Science 11 (January 24, 2025): e2497. https://doi.org/10.7717/peerj-cs.2497.

Abstract:
The k nearest neighbor (kNN) approach is a simple and effective algorithm for classification and a number of variants have been proposed based on the kNN algorithm. One of the limitations of kNN is that the method may be less effective when data contains many noisy features due to their non-informative influence in calculating distance. Additionally, information derived from nearest neighbors may be less meaningful in high-dimensional data. To address the limitation of nearest-neighbor based approaches in high-dimensional data, we propose to extend the k conditional nearest neighbor (kCNN) met…
9

Su, Yixin, and Sheng-Uei Guan. "Density and Distance Based KNN Approach to Classification." International Journal of Applied Evolutionary Computation 7, no. 2 (2016): 45–60. http://dx.doi.org/10.4018/ijaec.2016040103.

Abstract:
The KNN algorithm is a simple and efficient algorithm developed to solve classification problems. However, it encounters problems when classifying datasets with non-uniform density distributions. The existing KNN voting mechanism may lose essential information by considering the majority only, and performance degrades when a dataset has an uneven distribution. The other drawback comes from the way that KNN treats all the participating candidates equally when judging one test datum. To overcome the weaknesses of KNN, a Region of Influence Based KNN (RI-KNN) is proposed. RI-KNN computes for each tra…
10

Ganatra, Dhimant. "Improving Classification Accuracy: The KNN Approach." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 4 (2020): 6147–50. http://dx.doi.org/10.30534/ijatcse/2020/287942020.

11

Boyko, Nataliya I., and Mykhaylo V. Muzyka. "Methods of analysis of multimodal data to increase the accuracy of classification." Applied Aspects of Information Technology 5, no. 2 (2022): 147–60. http://dx.doi.org/10.15276/aait.05.2022.11.

Abstract:
This paper proposes methods for analyzing multimodal data that help improve the overall accuracy of the results, and plans for K-Nearest Neighbor (KNN) classification that minimize its risk. The mechanism for increasing the accuracy of KNN classification is considered. The research methods used in this work are comparison, analysis, induction, and experiment. This work aimed to improve the accuracy of KNN classification by comparing existing algorithms and applying new methods. Many literary and media sources on classification according to the k nearest neighbors algorithm were anal…
12

McHUGH, E. S., A. P. SHINN, and J. W. KAY. "Discrimination of the notifiable pathogen Gyrodactylus salaris from G. thymalli (Monogenea) using statistical classifiers applied to morphometric data." Parasitology 121, no. 3 (2000): 315–23. http://dx.doi.org/10.1017/s0031182099006381.

Abstract:
The identification and discrimination of 2 closely related and morphologically similar species of Gyrodactylus, G. salaris and G. thymalli, were assessed using the statistical classification methodologies Linear Discriminant Analysis (LDA) and k-Nearest Neighbours (KNN). These statistical methods were applied to morphometric measurements made on the gyrodactylid attachment hooks. The mean estimated classification percentages of correctly identifying each species were 98·1% (LDA) and 97·9% (KNN) for G. salaris and 99·9% (LDA) and 73·2% (KNN) for G. thymalli. The analysis was expanded to include…
13

Mathivanan, Norsyela Muhammad Noor, Nor Azura Md. Ghani, and Roziah Mohd Janor. "Improving Classification Accuracy Using Clustering Technique." Bulletin of Electrical Engineering and Informatics 7, no. 3 (2018): 465–70. https://doi.org/10.11591/eei.v7i3.1272.

Abstract:
Product classification is a key issue in e-commerce domains. Many products are released to the market rapidly, and selecting the correct category in the taxonomy for each product has become a challenging task. The application of a classification model is useful to precisely classify the products. The study proposed a method that applies clustering prior to classification. This study used a large-scale real-world data set to assess the efficiency of the clustering technique in improving the classification model. Conventional text classification procedures are used in the study, such as preprocessing, …
14

Pandey, Shubham, Vivek Sharma, and Garima Agrawal. "Modification of KNN Algorithm." International Journal of Engineering and Computer Science 8, no. 11 (2019): 24869–77. http://dx.doi.org/10.18535/ijecs/v8i11.4383.

Abstract:
K-Nearest Neighbor (KNN) classification is one of the most fundamental and simple classification methods. It is among the most frequently used classification algorithms when there is little or no prior knowledge about the distribution of the data. In this paper a modification is proposed to improve the performance of KNN. The main idea of KNN is to use a set of robust neighbors in the training data. The modified KNN proposed in this paper improves on traditional KNN in both terms: robustness and performance. Inspired by the traditional KNN algorithm, the main idea is to classify…
15

Boro, Debojit, and Dhruba K. Bhattacharyya. "Particle swarm optimisation-based KNN for improving KNN and ensemble classification performance." International Journal of Innovative Computing and Applications 6, no. 3/4 (2015): 145. http://dx.doi.org/10.1504/ijica.2015.073004.

16

Kumar, N. Suresh, and Pothina Praveena. "Evolution of hybrid distance based kNN classification." International Journal of Artificial Intelligence (IJ-AI) 10, no. 2 (2021): 510–18. https://doi.org/10.11591/ijai.v10.i2.pp510-518.

Abstract:
The evolution of classification for opinion mining and user review analysis spans decades, reaching into ubiquitous computing in efforts such as movie review analysis. The performance of linear and non-linear models is discussed for classifying the positive and negative reviews of movie data sets. The effectiveness of linear and non-linear algorithms is tested and compared in terms of average accuracy. The performance of the various algorithms is tested by implementing them on the internet movie database (IMDB). The hybrid kNN model optimizes classification performance in terms of accuracy. The ac…
17

Najadat, Hassan, Rasha Obeidat, and Ismail Hmeidi. "Clustering Generalised Instances Set Approaches for Text Classification." Journal of Information & Knowledge Management 10, no. 01 (2011): 91–107. http://dx.doi.org/10.1142/s0219649211002857.

Abstract:
This paper introduces three new text classification methods: Clustering-Based Generalised Instances Set (CB-GIS), Multilevel Clustering-Based Generalised Instances Set (MLC_GIS) and Multilevel Clustering-Based k Nearest Neighbours (MLC-kNN). These new methods aim to unify the strengths and overcome the drawbacks of the three similarity-based text classification methods, namely, kNN, centroid-based and GIS. The new methods utilise a clustering technique called spherical K-means to represent each class by a representative set of generalised instances to be used later in the classification. The…
18

Nascimento, Eduardo Soares, Fernanda Sayuri Yoshino Watanabe, Maria de Lourdes Bueno Trindade Galo, and Erivaldo Antonio da Silva. "Assessing Land Use and Cover Changes arising from the 2022 water crisis in Southeast China: A comparative analysis of Remote Sensing Imagery classifications and Machine Learning algorithms." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-3-2024 (November 4, 2024): 253–60. http://dx.doi.org/10.5194/isprs-annals-x-3-2024-253-2024.

Abstract:
The water crisis in the southeast region of China in 2022, caused by one of the worst heatwaves on record, was characterized by severe shortages of water resources, leading to challenges for local communities, agriculture, and industry. To analyze changes in land use and land cover (LULC) in the Jialing River region, Chongqing, China, we compared Remote Sensing (RS) imagery classifications before and after the intense heat waves of 2022. We evaluated the performance of two machine learning algorithms, KDTree KNN and Random Forest (RF), in LULC classifications. The classifications wer…
19

Desiani, Anita, Adinda Ayu Lestari, M. Al-Ariq, Ali Amran, and Yuli Andriani. "Comparison of Support Vector Machine and K-Nearest Neighbors in Breast Cancer Classification." Pattimura International Journal of Mathematics (PIJMath) 1, no. 1 (2022): 33–42. http://dx.doi.org/10.30598/pijmathvol1iss1pp33-42.

Abstract:
Cancer is one of the leading causes of death, and breast cancer is the second leading cause of cancer death in women. One method to assess the malignancy of breast cancer at an early stage is to classify the cancer malignancy using data mining. Among the widely used data mining methods with a good level of accuracy are the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN). The percentage split and cross-validation evaluation techniques were used to evaluate and compare the SVM and KNN classification models. The result was that the accuracy level of the SVM classification m…
20

Kumar, N. Suresh, and Pothina Praveena. "Evolution of hybrid distance based kNN classification." IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 2 (2021): 510. http://dx.doi.org/10.11591/ijai.v10.i2.pp510-518.

Abstract:
The evolution of classification for opinion mining and user review analysis spans decades, reaching into ubiquitous computing in efforts such as movie review analysis. The performance of linear and non-linear models is discussed for classifying the positive and negative reviews of movie data sets. The effectiveness of linear and non-linear algorithms is tested and compared in terms of average accuracy. The performance of the various algorithms is tested by implementing them on the internet movie database (IMDB). The…
21

Abdul, Zrar Kh, Abdulbasit K. Al‑Talabani, Chnoor M. Rahman, and Safar M. Asaad. "Electrocardiogram Heartbeat Classification using Convolutional Neural Network-k Nearest Neighbor." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 12, no. 1 (2024): 61–67. http://dx.doi.org/10.14500/aro.11444.

Abstract:
Electrocardiogram (ECG) analysis is widely used by cardiologists and medical practitioners for monitoring cardiac health. A high-performance automatic ECG classification system is challenging because of the difficulty of detecting and categorizing different waveforms in the signal, especially in manual analysis of ECG signals, which means a better classification system is needed in terms of performance and accuracy. Hence, in this paper, the authors propose an accurate ECG classification and monitoring system called convolutional neural network-k nearest neighbor (CNN-kNN). The proposed met…
22

Xing, Wenchao, and Yilin Bei. "Medical Health Big Data Classification Based on KNN Classification Algorithm." IEEE Access 8 (2020): 28808–19. http://dx.doi.org/10.1109/access.2019.2955754.

23

Ma, Manfu, Wei Deng, Hongtong Liu, and Xinmiao Yun. "An Intrusion Detection Model based on Hybrid Classification algorithm." MATEC Web of Conferences 246 (2018): 03027. http://dx.doi.org/10.1051/matecconf/201824603027.

Abstract:
Because a single classification algorithm cannot meet the performance requirements of intrusion detection, an intrusion detection model, KNN-NB, based on a hybrid of KNN and Naive Bayes is proposed, combining the strengths of KNN on numerical values with the advantages of Naive Bayes on structured data. The model first preprocesses the NSL-KDD intrusion detection data set. Then, exploiting the advantages of the KNN algorithm on numerical data, the model calculates the distance between samples according to the feature items and selects the K sample data with the smallest…
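The hybrid idea in this abstract (neighborhood selection by KNN, then a Bayes decision on the selected samples) can be illustrated with a short sketch. This is a rough stand-in, not the paper's exact model: we fit a Gaussian naive Bayes scorer on just the K nearest samples, and the function name is ours.

```python
import numpy as np

def knn_nb_predict(X, y, x_new, k=5):
    """Hybrid sketch: restrict attention to the k nearest training samples,
    then score classes with Gaussian naive Bayes fitted on that neighborhood."""
    idx = np.argsort(np.linalg.norm(X - x_new, axis=1))[:k]
    Xk, yk = X[idx], y[idx]
    best_c, best_score = None, -np.inf
    for c in np.unique(yk):
        Xc = Xk[yk == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-6   # smoothed variance
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x_new - mu) ** 2 / var)
        score = np.log(len(Xc) / k) + loglik               # log prior + log likelihood
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```

The variance smoothing term guards against zero variance when the neighborhood is small, which is the main practical wrinkle of fitting a Bayes model on only K samples.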
24

Vichare, Srushti, Gaurav Nanda, and Raji Sundararajan. "Probabilistic Ensemble Framework for Injury Narrative Classification." AI 5, no. 3 (2024): 1684–94. http://dx.doi.org/10.3390/ai5030082.

Abstract:
In this research, we analyzed narratives from the National Electronic Injury Surveillance System (NEISS) dataset to predict the top two injury codes using a comparative study of ensemble machine learning (ML) models. Four ensemble models were evaluated: Random Forest (RF) combined with Logistic Regression (LR), K-Nearest Neighbor (KNN) paired with RF, LR combined with KNN, and a model integrating LR, RF, and KNN, all utilizing a probabilistic likelihood-based approach to improve decision-making across different classifiers. The combined KNN + LR ensemble achieved an accuracy of 90.47% for the…
25

Wang, Shi Min, and Xian Zhe Cao. "Research on Text Classification Technique." Applied Mechanics and Materials 278-280 (January 2013): 2081–84. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.2081.

Abstract:
The paper first introduces the basic concepts and the generic process of text classification, then presents three text classification algorithms in detail, and finally evaluates NB, SVM and KNN experimentally with the data mining software Weka. The experimental results show that KNN is more efficient than the other two algorithms in recall and precision.
26

Demidova, Liliya A. "Two-Stage Hybrid Data Classifiers Based on SVM and kNN Algorithms." Symmetry 13, no. 4 (2021): 615. http://dx.doi.org/10.3390/sym13040615.

Abstract:
The paper considers a solution to the problem of developing two-stage hybrid SVM-kNN classifiers with the aim of increasing data classification quality by refining the classification decisions near the class boundary defined by the SVM classifier. In the first stage, the SVM classifier with default parameter values is developed. Here, the training dataset is designed on the basis of the initial dataset. When developing the SVM classifier, a binary SVM algorithm or one-class SVM algorithm is used. Based on the results of the training of the SVM classifier, two variants of the training datase…
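The two-stage refinement idea can be sketched with a fixed linear decision score standing in for the trained SVM (an assumption made to keep the sketch dependency-free); only samples falling inside the margin band are re-decided by kNN. The function name and the `band` parameter are ours.

```python
import numpy as np

def two_stage_predict(X, y, x_new, w, b, k=3, band=1.0):
    """Two-stage hybrid sketch: a fixed linear score w.x + b stands in for the
    SVM stage; kNN refines only the cases that land near the class boundary."""
    score = float(x_new @ w + b)
    if abs(score) >= band:                    # stage 1: confident region, keep the linear decision
        return 1 if score > 0 else 0
    d = np.linalg.norm(X - x_new, axis=1)     # stage 2: ambiguous band, kNN majority vote
    votes = y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

Points far from the boundary are decided cheaply by the first-stage score; the more expensive neighbor search runs only where the first stage is least reliable.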
27

Sajib, AH, AZM Shafiullah, and AH Sumon. "An Alternative Algorithm for Classification Based on Robust Mahalanobis Distance." Dhaka University Journal of Science 61, no. 1 (2013): 81–85. http://dx.doi.org/10.3329/dujs.v61i1.15101.

Abstract:
This study considers the classification problem for a binary output attribute when the input attributes are drawn from a multivariate normal distribution, in both the clean and the contaminated case. Classical metrics are affected by outliers, while robust metrics are computationally inefficient. In order to achieve robustness and computational efficiency at the same time, we propose a new robust distance metric for the K-Nearest Neighbor (KNN) method. We call our proposed metric the Alternative Robust Mahalanobis Distance (ARMD) metric; thus, KNN using ARMD is an alternative KNN method. The classical metrics use non…
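The role of the metric is easy to see in a sketch of kNN under a Mahalanobis distance. This uses the classical (non-robust) covariance estimate, where the paper's ARMD would substitute a robust and cheaper one; the function name is ours.

```python
import numpy as np

def mahalanobis_knn_predict(X, y, x_new, k=3):
    """kNN under a Mahalanobis metric (classical covariance estimate here)."""
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized for invertibility
    VI = np.linalg.inv(cov)
    diff = X - x_new
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, VI, diff))  # per-row quadratic form
    votes = y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

Swapping the covariance estimate changes only the `cov` line; the neighbor search and vote are unchanged, which is what makes metric-level robustness attractive for KNN.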
28

Wang, Chun Ye, and Xiao Feng Zhou. "The MapReduce Parallel Study of KNN Algorithm." Advanced Materials Research 989-994 (July 2014): 2123–27. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2123.

Abstract:
Although parallelizing the KNN algorithm improves its classification efficiency, the computation of the parallel algorithm grows with the scale of the training sample data, affecting the classification efficiency of the algorithm. To address this shortcoming, this paper improves the original parallelized KNN algorithm in the MapReduce model, adding a text pretreatment process to improve the classification efficiency of the algorithm.
29

Aung, Swe Swe, Itaru Nagayama, and Shiro Tamaki. "Dual-kNN for a Pattern Classification Approach." IEIE Transactions on Smart Processing & Computing 6, no. 5 (2017): 326–33. http://dx.doi.org/10.5573/ieiespc.2017.6.5.326.

30

Kaur, Gagandeep, Anshu Sharma, and Anurag Sharma. "Heart Disease Prediction using KNN classification approach." International Journal of Computer Sciences and Engineering 7, no. 5 (2019): 416–20. http://dx.doi.org/10.26438/ijcse/v7i5.416420.

31

Jia, Bin-Bin, and Min-Ling Zhang. "Multi-Dimensional Classification via kNN Feature Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3975–82. http://dx.doi.org/10.1609/aaai.v33i01.33013975.

Abstract:
Multi-dimensional classification (MDC) deals with the problem where one instance is associated with multiple class variables, each of which specifies its class membership w.r.t. one specific class space. Existing approaches learn from MDC examples by focusing on modeling dependencies among class variables, while the potential usefulness of manipulating feature space hasn't been investigated. In this paper, a first attempt towards feature manipulation for MDC is proposed which enriches the original feature space with kNN-augmented features. Specifically, simple counting statistics on the class m…
32

Yousif, Yousif Elfatih. "Weather Prediction System Using KNN Classification Algorithm." European Journal of Information Technologies and Computer Science 2, no. 1 (2022): 10–13. http://dx.doi.org/10.24018/compute.2022.2.1.44.

Abstract:
Data mining is a technology that facilitates extracting relevant data and common factors from a set of data. It is the process of analyzing data from different perspectives and discovering problems, patterns, and correlations in data sets that are useful for predicting outcomes and making correct decisions. Weather prediction is a field of meteorology in which forecasts are created by collecting dynamic data related to the current state of the weather, such as temperature, humidity, rainfall and wind. In this paper, we design a system using a classification method based on the k-Nearest Neighbors algo…
33

Ghauri, Sajjad Ahmed. "KNN BASED CLASSIFICATION OF DIGITAL MODULATED SIGNALS." IIUM Engineering Journal 17, no. 2 (2016): 71–82. http://dx.doi.org/10.31436/iiumej.v17i2.641.

Abstract:
Demodulation without knowledge of the modulation scheme requires Automatic Modulation Classification (AMC). When the receiver has limited information about the received signal, AMC becomes an essential process. AMC finds an important place in many civil and military fields, such as modern electronic warfare, interfering source recognition, frequency management, link adaptation, etc. In this paper we explore the use of the K-nearest neighbor (KNN) for modulation classification with different distance measurement methods. Five modulation schemes are used for classification, namely Bina…
34

Jia, Bin-Bin, and Min-Ling Zhang. "Multi-dimensional classification via kNN feature augmentation." Pattern Recognition 106 (October 2020): 107423. http://dx.doi.org/10.1016/j.patcog.2020.107423.

35

Kette, Efraim Kurniawan Dairo. "MODIFIED CORRELATION WEIGHT K-NEAREST NEIGHBOR CLASSIFIER USING TRAINING DATASET CLEANING METHOD." Indonesian Journal of Physics 32, no. 2 (2021): 20–25. http://dx.doi.org/10.5614/itb.ijp.2021.32.2.5.

Abstract:
In pattern recognition, the k-Nearest Neighbor (kNN) algorithm is the simplest non-parametric algorithm. Due to its simplicity, the model cases and the quality of the training data itself usually influence kNN algorithm classification performance. Therefore, this article proposes a sparse correlation weight model, combined with the Training Data Set Cleaning (TDC) method by Classification Ability Ranking (CAR), called the CAR classification method based on Coefficient-Weighted kNN (CAR-CWKNN), to improve kNN classifier performance. Correlation weight in Sparse Representation (SR) has been proven…
36

Sasikala, R., and S. P. Swornambiga. "Novel K-Nearest Neighbor With Convolutional Neural Networks (KNN-CNN) For Accurate Brain Tumor Detection In Image Mining." Tuijin Jishu/Journal of Propulsion Technology 44, no. 4 (2023): 2090–99. http://dx.doi.org/10.52783/tjjpt.v44.i4.1184.

Abstract:
Brain tumor classification plays a crucial role in early diagnosis and effective treatment planning. In this paper, we propose a novel approach, K-Nearest Neighbor with Convolutional Neural Networks (KNN-CNN), for accurate brain tumor classification. The proposed method combines the strengths of K-Nearest Neighbor (KNN) and Convolutional Neural Networks (CNNs) to leverage both traditional feature-based classification and deep learning-based feature extraction. We use CNNs to learn high-level features from brain tumor images, and KNN is employed to classify tumors based on the extracted feature…
37

Taguelmimt, Redha, and Rachid Beghdad. "DS-kNN." International Journal of Information Security and Privacy 15, no. 2 (2021): 131–44. http://dx.doi.org/10.4018/ijisp.2021040107.

Abstract:
On one hand, there are many proposed intrusion detection systems (IDSs) in the literature. On the other hand, many studies try to deduce the important features that can best detect attacks. This paper presents a new and easy-to-implement approach to intrusion detection, named distance sum-based k-nearest neighbors (DS-kNN), which is an improved version of the k-NN classifier. Given a data sample to classify, DS-kNN computes the distance sum of the k-nearest neighbors of the data sample in each of the possible classes of the dataset. Then, the data sample is assigned to the class having the smal…
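The decision rule in this abstract is concrete enough to sketch directly: per class, sum the distances of the k nearest same-class neighbors and pick the class with the smallest sum. The function name is ours; the rule follows the abstract's description.

```python
import numpy as np

def ds_knn_predict(X, y, x_new, k=3):
    """DS-kNN rule as described in the abstract: assign x_new to the class
    whose k nearest same-class neighbors have the smallest distance sum."""
    classes = np.unique(y)
    sums = [np.sort(np.linalg.norm(X[y == c] - x_new, axis=1))[:k].sum()
            for c in classes]
    return classes[int(np.argmin(sums))]
```

Unlike a plain majority vote, the distance sum lets several moderately close neighbors of one class outweigh a single very close neighbor of another.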
38

Naveena, M., and Hemantha Kumar G. "Classification of Birds Using KNN and SVM Classifier." International Journal of Computer Science Issues 17, no. 1 (2020): 27–31. https://doi.org/10.5281/zenodo.3987114.

Abstract:
This paper aims to develop a bird classification system based on classifier fusion to easily identify birds. It is based on image processing, which can control the classification, qualification and segmentation of images and hence recognize the birds. From the captured images, multiple shape features such as area, centroid, angle at centroid, maximum angle and minimum angle can be extracted and analyzed to classify and recognize the birds. The extracted features are then classified using the KNN (K-Nearest Neighbor) and SVM (Support Vector Machine) classifiers.
39

Rahmawan, Fachrudin Okta, Hanafi, and Windha Mega Pradnya Dhuita. "Comparison of The Accuracy of K-Nearest Neighbor and Roberta Algorithm in Analysis of Sentiment on Miawaug Youtube Channel Comments." Teknika 14, no. 1 (2025): 26–33. https://doi.org/10.34148/teknika.v14i1.1117.

Abstract:
This study aims to evaluate the accuracy of two algorithms, K-Nearest Neighbor (KNN) and Robustly Optimized BERT Approach (RoBERTa), in analyzing sentiment within comments on MiawAug’s YouTube channel. Sentiment analysis was conducted on two sentiment categories: binary classification (positive and negative) and multi-class classification (positive, neutral, and negative). Using KNN, the binary classification yielded an accuracy of 86.12%, F1-score of 87.44%, recall of 96.64%, and precision of 79.89%. In contrast, the multi-class classification achieved 98.21% accuracy, F1-score, and recall wi…
40

Muhammad Noor Mathivanan, Norsyela, Nor Azura Md. Ghani, and Roziah Mohd Janor. "Improving Classification Accuracy Using Clustering Technique." Bulletin of Electrical Engineering and Informatics 7, no. 3 (2018): 465–70. http://dx.doi.org/10.11591/eei.v7i3.1272.

Abstract:
Product classification is a key issue in e-commerce domains. Many products are released to the market rapidly, and selecting the correct category in the taxonomy for each product has become a challenging task. The application of a classification model is useful to precisely classify the products. The study proposed a method that applies clustering prior to classification. This study used a large-scale real-world data set to assess the efficiency of the clustering technique in improving the classification model. Conventional text classification procedures are used in the study, such as preprocessing, …
41

Chen, Guobin, Xianzhong Xie, and Shijin Li. "Research on Complex Classification Algorithm of Breast Cancer Chip Based on SVM-RFE Gene Feature Screening." Complexity 2020 (June 13, 2020): 1–12. http://dx.doi.org/10.1155/2020/1342874.

Full text
Abstract:
Screening and classification of characteristic genes is a complex classification problem, and the characteristic sequences of gene expression show high-dimensional characteristics. How to select an effective gene screening algorithm is the main problem to be solved by analyzing gene chips. The combination of KNN, SVM, and SVM-RFE is selected to screen complex classification problems, and a new method to solve complex classification problems is provided. In the process of gene chip pretreatment, LogFC and P value equivalents in the gene expression matrix are screened, and different gene feature
APA, Harvard, Vancouver, ISO, and other styles
42

Prasanti, Annisya Aprilia, M. Ali Fauzi, and Muhammad Tanzil Furqon. "Neighbor Weighted K-Nearest Neighbor for Sambat Online Classification." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 1 (2018): 155. http://dx.doi.org/10.11591/ijeecs.v12.i1.pp155-160.

Full text
Abstract:
Sambat Online is one of the implementations of E-Government for complaints management provided by the Malang City Government. All of the complaints will be classified into their intended department. In this study, an automatic complaint classification system using Neighbor Weighted K-Nearest Neighbor (NW-KNN) is proposed because Sambat Online has imbalanced data. The system developed consists of three main stages: preprocessing, N-Gram feature extraction, and classification using NW-KNN. Based on the experiment results, it can be concluded that the NW-KNN algorithm is able to classify t
APA, Harvard, Vancouver, ISO, and other styles
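The abstract above describes Neighbor Weighted KNN for imbalanced classes: votes from larger classes are down-weighted so minority classes are not drowned out. A minimal sketch of that idea (not the authors' implementation; the function name `nwknn_predict` and the exponent parameter `p` are illustrative assumptions):

```python
import math
from collections import Counter

def nwknn_predict(train, labels, query, k=3, p=1.0):
    """Neighbor-Weighted KNN sketch: each neighbor's vote is scaled by a
    class weight that shrinks for larger classes; p controls the strength."""
    sizes = Counter(labels)
    smallest = min(sizes.values())
    weight = {c: (smallest / n) ** (1.0 / p) for c, n in sizes.items()}
    ranked = sorted(range(len(train)), key=lambda i: math.dist(train[i], query))
    score = Counter()
    for i in ranked[:k]:
        score[labels[i]] += weight[labels[i]]
    return score.most_common(1)[0][0]

# Toy imbalanced data: 6 "big" examples vs. 2 "small" ones.
X = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (2.0, 2.0), (2.1, 2.0), (2.0, 2.1),
     (0.5, 0.5), (0.6, 0.5)]
y = ["big"] * 6 + ["small"] * 2

# Two of the three nearest neighbors of (0.3, 0.3) are "big", so a plain
# majority vote would say "big"; the class weighting flips it to "small".
print(nwknn_predict(X, y, (0.3, 0.3), k=3))  # -> "small"
```

With a very large p the weights approach 1 and the method degenerates to an ordinary majority-vote KNN.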
43

Barth, Jackson, Duwani Katumullage, Chenyu Yang, and Jing Cao. "Classification of Wines Using Principal Component Analysis." Journal of Wine Economics 16, no. 1 (2021): 56–67. http://dx.doi.org/10.1017/jwe.2020.35.

Full text
Abstract:
Classification of wines with a large number of correlated covariates may lead to classification results that are difficult to interpret. In this study, we use a publicly available dataset on wines from three known cultivars, where there are 13 highly correlated variables measuring chemical compounds of wines. The goal is to produce an efficient classifier with straightforward interpretation to shed light on the important features of wines in the classification. To achieve the goal, we incorporate principal component analysis (PCA) in the k-nearest neighbor (kNN) classification to deal
APA, Harvard, Vancouver, ISO, and other styles
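The PCA-plus-kNN pipeline described in the abstract can be sketched in a few lines: project the correlated features onto the leading principal components, then run kNN in the reduced space. This is a generic illustration under assumed toy data, not the authors' wine analysis:

```python
import numpy as np

def pca_fit(X, n_components):
    """Return the mean and the top principal directions of X (rows = samples)."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return mu, vecs[:, order]

def knn_predict(Xtr, ytr, x, k=3):
    """Plain Euclidean kNN majority vote."""
    idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    lab, cnt = np.unique(ytr[idx], return_counts=True)
    return lab[np.argmax(cnt)]

# Two well-separated toy classes with correlated features.
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [0.0, 0.3],
              [5.0, 5.1], [5.2, 5.0], [5.1, 5.2], [4.9, 5.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

mu, W = pca_fit(X, n_components=1)
Z = (X - mu) @ W                                # training data on the first PC
q = (np.array([0.1, 0.1]) - mu) @ W             # project the query the same way
print(knn_predict(Z, y, q, k=3))                # -> 0
```

Reducing 13 correlated chemical measurements to a few components before kNN is exactly the kind of simplification the study aims at; the number of retained components is a tuning choice.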
44

Pulungan, Annisa Fadhillah, Muhammad Zarlis, and Saib Suwilo. "Analysis of Braycurtis, Canberra and Euclidean Distance in KNN Algorithm." SinkrOn 4, no. 1 (2019): 74. http://dx.doi.org/10.33395/sinkron.v4i1.10207.

Full text
Abstract:
Classification is a technique used to build a classification model from a sample of training data. One of the most popular classification techniques is the K-Nearest Neighbor (KNN). The KNN algorithm has important parameters that affect its performance: the value of k and the distance metric. The distance between two points is computed before the classification process in KNN. The purpose of this study was to analyze and compare the performance of KNN using different distance functions. The distance functions are Braycur
APA, Harvard, Vancouver, ISO, and other styles
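A minimal sketch (not the authors' code) of the three distance functions the abstract compares, plugged into a plain kNN majority vote; the data and function names here are illustrative:

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def canberra(a, b):
    # Sum of |x-y| / (|x|+|y|), skipping coordinates where both are zero.
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if abs(x) + abs(y) > 0)

def braycurtis(a, b):
    denom = sum(abs(x + y) for x, y in zip(a, b))
    return sum(abs(x - y) for x, y in zip(a, b)) / denom if denom else 0.0

def knn_predict(train, labels, query, k=3, dist=euclidean):
    """kNN vote: rank training points by the chosen distance, vote over top k."""
    ranked = sorted(range(len(train)), key=lambda i: dist(train[i], query))
    return Counter(labels[i] for i in ranked[:k]).most_common(1)[0][0]

X = [(1.0, 1.0), (1.2, 0.9), (4.0, 4.2), (3.8, 4.0), (4.1, 3.9)]
y = ["a", "a", "b", "b", "b"]
print(knn_predict(X, y, (1.1, 1.0), k=3))                 # -> "a"
print(knn_predict(X, y, (4.0, 4.0), k=3, dist=canberra))  # -> "b"
```

Swapping the `dist` argument is all it takes to rerun the same classifier under each metric, which mirrors the comparison the study performs.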
45

Sneha B. Paymal. "Activity Recognition in MATLAB Using KNN." Journal of Information Systems Engineering and Management 10, no. 31s (2025): 964–70. https://doi.org/10.52783/jisem.v10i31s.5150.

Full text
Abstract:
Activity recognition is a crucial task in machine learning, widely used in healthcare, sports analytics, and human-computer interaction. This study explores the implementation of K-Nearest Neighbors (KNN), a simple yet effective machine learning algorithm, to classify different human activities based on sensor data. The classification is performed using features extracted from accelerometer and gyroscope readings, which capture motion patterns associated with various activities. MATLAB is utilized as the development platform due to its powerful built-in functions for data preprocessing, featur
APA, Harvard, Vancouver, ISO, and other styles
46

Prabavathy, S. "Classification of Musical Instruments Sound Using Pre-Trained Model with Machine Learning Techniques." Asian Journal of Electrical Sciences 9, no. 1 (2020): 45–48. http://dx.doi.org/10.51983/ajes-2020.9.1.2369.

Full text
Abstract:
Classifying musical instruments by machine is a challenging task, and musical data classification has become very popular as a research field; otherwise, a huge manual process is required to classify musical instruments. The proposed system classifies musical instruments using GoogleNet, a pretrained network model; SVM and kNN are the two techniques used to classify the features. This paper simplifies musical instrument classification based on features extracted from various instruments using recent algorithms. The performance of kNN is compared with SVM in this proposed wor
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Yuchun. "Handwritten Digit Recognition Using K Nearest-Neighbor, Radial-Basis Function, and Backpropagation Neural Networks." Neural Computation 3, no. 3 (1991): 440–49. http://dx.doi.org/10.1162/neco.1991.3.3.440.

Full text
Abstract:
Results of recent research suggest that carefully designed multilayer neural networks with local “receptive fields” and shared weights may be unique in providing low error rates on handwritten digit recognition tasks. This study, however, demonstrates that these networks, radial basis function (RBF) networks, and k nearest-neighbor (kNN) classifiers, all provide similar low error rates on a large handwritten digit database. The backpropagation network is overall superior in memory usage and classification time but can provide “false positive” classifications when the input is not a digit. The
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Zongying, Shaoxi Li, Jiangling Hao, Jingfeng Hu, and Mingyang Pan. "An Efficient and Fast Model Reduced Kernel KNN for Human Activity Recognition." Journal of Advanced Transportation 2021 (June 2, 2021): 1–9. http://dx.doi.org/10.1155/2021/2026895.

Full text
Abstract:
With accumulation of data and development of artificial intelligence, human activity recognition attracts lots of attention from researchers. Many classic machine learning algorithms, such as artificial neural network, feed forward neural network, K-nearest neighbors, and support vector machine, achieve good performance for detecting human activity. However, these algorithms have their own limitations and their prediction accuracy still has space to improve. In this study, we focus on K-nearest neighbors (KNN) and solve its limitations. Firstly, kernel method is employed in model KNN, which tr
APA, Harvard, Vancouver, ISO, and other styles
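The kernel KNN idea the abstract introduces can be sketched via the kernel trick: distances are computed in the kernel-induced feature space as d²(x, y) = K(x,x) − 2K(x,y) + K(y,y), without ever forming that space explicitly. This is a generic illustration under an assumed RBF kernel, not the paper's reduced-model variant:

```python
import math
from collections import Counter

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel; gamma is an illustrative choice."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def kernel_dist_sq(a, b, kernel=rbf):
    # Squared distance between a and b in the kernel-induced feature space.
    return kernel(a, a) - 2.0 * kernel(a, b) + kernel(b, b)

def kernel_knn(train, labels, query, k=3, kernel=rbf):
    ranked = sorted(range(len(train)),
                    key=lambda i: kernel_dist_sq(train[i], query, kernel))
    return Counter(labels[i] for i in ranked[:k]).most_common(1)[0][0]

# Toy "activity" data: two clusters of sensor-like feature vectors.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (3.0, 3.0), (3.1, 3.2), (2.9, 3.0)]
y = ["rest", "rest", "rest", "walk", "walk", "walk"]
print(kernel_knn(X, y, (0.1, 0.1), k=3))  # -> "rest"
```

Note that for the RBF kernel the induced distance is monotone in the Euclidean one, so predictions match plain kNN; other kernels (e.g., polynomial) genuinely change the neighborhood geometry, which is where the kernelization pays off.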
49

Talakua, Andrew H., Gabriella Haumahu, and Marlon S. Noya Van Delsen. "Klasifikasi Penggunaan Alat Kontrasepsi di Kecamatan Salahutu Kabupaten Maluku Tengah Menggunakan Metode K-Nearest Neighbor (KNN)." Proximal: Jurnal Penelitian Matematika dan Pendidikan Matematika 7, no. 2 (2024): 752–59. http://dx.doi.org/10.30605/proximal.v7i2.4088.

Full text
Abstract:
This study applies the KNN classifier algorithm to develop an automatic classification system. The K-Nearest Neighbor (KNN) algorithm was chosen for the classification process because it is simple and easy to implement. This study aims to determine the characteristics of contraceptive choices in Salahutu District, Central Maluku Regency, and to classify those choices using the KNN method. A total of 1393 respondent data were used as a sample, with 11 predictor variables and 1 response variable, calculating the distance between doc
APA, Harvard, Vancouver, ISO, and other styles
50

Aprihartha, Moch Anjas, and Idham Idham. "Optimization of Classification Algorithms Performance with k-Fold Cross Validation." EIGEN MATHEMATICS JOURNAL 7, no. 2 (2024): 61–66. http://dx.doi.org/10.29303/emj.v7i2.212.

Full text
Abstract:
Supervised learning is a predictive method used to make predictions or classifications. Supervised learning algorithms work by building a model from training data that includes both independent and dependent variables. Several methods for building classification models include Logistic Regression, Naive Bayes, K-Nearest Neighbor (KNN), decision trees, etc. The inability of a classification algorithm to generalize to certain data can be associated with the problem of overfitting or underfitting. K-fold cross-validation is a method that can help avoid overfitting or underfitting and produce a algor
APA, Harvard, Vancouver, ISO, and other styles
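The k-fold procedure the abstract describes can be sketched directly: shuffle the indices once, split them into k folds, and average the held-out accuracy of the classifier (here a plain kNN, as a stand-in) across folds. The data and function name below are illustrative assumptions:

```python
import math
import random
from collections import Counter

def kfold_accuracy(X, y, n_folds=5, k=3, seed=0):
    """Mean kNN accuracy over n_folds cross-validation splits (a sketch)."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    accs = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        correct = 0
        for i in fold:
            # Classify the held-out point using only the training indices.
            nn = sorted(train, key=lambda j: math.dist(X[j], X[i]))[:k]
            pred = Counter(y[j] for j in nn).most_common(1)[0][0]
            correct += (pred == y[i])
        accs.append(correct / len(fold))
    return sum(accs) / len(accs)

# Two well-separated classes -> every fold classifies perfectly.
X = [(i * 0.1, i * 0.1) for i in range(10)] + \
    [(5 + i * 0.1, 5 + i * 0.1) for i in range(10)]
y = ["a"] * 10 + ["b"] * 10
print(kfold_accuracy(X, y))  # -> 1.0
```

Because every point is scored exactly once on a fold it was not trained on, the averaged accuracy is a less optimistic estimate than training-set accuracy, which is how cross-validation flags overfitting.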