
Journal articles on the topic 'Nearest Neighbor Classification'


Consult the top 50 journal articles for your research on the topic 'Nearest Neighbor Classification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Onyezewe, Anozie, Armand F. Kana, Fatimah B. Abdullahi, and Aminu O. Abdulsalami. "An Enhanced Adaptive k-Nearest Neighbor Classifier Using Simulated Annealing." International Journal of Intelligent Systems and Applications 13, no. 1 (February 8, 2021): 34–44. http://dx.doi.org/10.5815/ijisa.2021.01.03.

Abstract:
The k-Nearest Neighbor classifier is a simple and widely applied data classification algorithm that performs well in real-world applications. The overall classification accuracy of the k-Nearest Neighbor algorithm largely depends on the choice of the number of nearest neighbors (k). The use of a constant k value does not always yield the best solutions, especially for real-world datasets with an irregular class and density distribution of data points, as it totally ignores the class and density distribution of a test point's k-environment or neighborhood. A resolution to this problem is to dynamically choose k for each test instance to be classified. However, given a large dataset, it becomes very taxing to maximize the k-Nearest Neighbor performance by tuning k. This work proposes the use of Simulated Annealing, a metaheuristic search algorithm, to select an optimal k, thus eliminating the prospect of an exhaustive search for optimal k. The results obtained in four different classification tasks demonstrate a significant improvement in computational efficiency over k-Nearest Neighbor methods that perform an exhaustive search for k, as accurate nearest neighbors are returned faster for k-Nearest Neighbor classification, thus reducing the computation time.
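The search the abstract describes can be illustrated with a short, self-contained sketch: simulated annealing over the integer k, with cross-validated accuracy as the quantity being maximized. This is an illustrative reading of the idea, not the authors' implementation; the dataset, proposal step, and cooling schedule are assumptions.

```python
# Sketch only: choosing k for kNN with simulated annealing, using
# cross-validated accuracy as the objective. Dataset and schedule are
# illustrative assumptions, not the paper's setup.
import math
import random

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def cv_accuracy(k, X, y):
    """Cross-validated accuracy of kNN for a given k (the quantity to maximize)."""
    return cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()


def anneal_k(X, y, k_max=50, steps=100, t0=1.0, cooling=0.95, seed=0):
    rng = random.Random(seed)
    k = rng.randint(1, k_max)
    acc = cv_accuracy(k, X, y)
    best_k, best_acc, t = k, acc, t0
    for _ in range(steps):
        # Propose a nearby k; accept worse moves with a temperature-dependent probability.
        cand = min(k_max, max(1, k + rng.choice([-3, -2, -1, 1, 2, 3])))
        cand_acc = cv_accuracy(cand, X, y)
        if cand_acc > acc or rng.random() < math.exp((cand_acc - acc) / t):
            k, acc = cand, cand_acc
            if acc > best_acc:
                best_k, best_acc = k, acc
        t *= cooling
    return best_k, best_acc


X, y = load_iris(return_X_y=True)
print(anneal_k(X, y))
```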
2

Murphy, O. J. "Nearest neighbor pattern classification perceptrons." Proceedings of the IEEE 78, no. 10 (1990): 1595–98. http://dx.doi.org/10.1109/5.58344.

3

Hastie, T., and R. Tibshirani. "Discriminant adaptive nearest neighbor classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 18, no. 6 (June 1996): 607–16. http://dx.doi.org/10.1109/34.506411.

4

Geva, S., and J. Sitte. "Adaptive nearest neighbor pattern classification." IEEE Transactions on Neural Networks 2, no. 2 (March 1991): 318–22. http://dx.doi.org/10.1109/72.80344.

5

Wu, Yingquan, Krassimir Ianakiev, and Venu Govindaraju. "Improved k-nearest neighbor classification." Pattern Recognition 35, no. 10 (October 2002): 2311–18. http://dx.doi.org/10.1016/s0031-3203(01)00132-7.

6

Gou, Jianping, Yongzhao Zhan, Yunbo Rao, Xiangjun Shen, Xiaoming Wang, and Wu He. "Improved pseudo nearest neighbor classification." Knowledge-Based Systems 70 (November 2014): 361–75. http://dx.doi.org/10.1016/j.knosys.2014.07.020.

7

Gursoy, Mehmet Emre, Ali Inan, Mehmet Ercan Nergiz, and Yucel Saygin. "Differentially private nearest neighbor classification." Data Mining and Knowledge Discovery 31, no. 5 (July 21, 2017): 1544–75. http://dx.doi.org/10.1007/s10618-017-0532-z.

8

Rajarajeswari, Perepi. "Hyperspectral Image Classification by Using K-Nearest Neighbor Algorithm." International Journal of Psychosocial Rehabilitation 24, no. 5 (April 20, 2020): 5068–74. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020214.

9

Hall, Peter, Byeong U. Park, and Richard J. Samworth. "Choice of neighbor order in nearest-neighbor classification." Annals of Statistics 36, no. 5 (October 2008): 2135–52. http://dx.doi.org/10.1214/07-aos537.

10

Sugesti, Annisa, Moch Abdul Mukid, and Tarno Tarno. "PERBANDINGAN KINERJA MUTUAL K-NEAREST NEIGHBOR (MKNN) DAN K-NEAREST NEIGHBOR (KNN) DALAM ANALISIS KLASIFIKASI KELAYAKAN KREDIT." Jurnal Gaussian 8, no. 3 (August 30, 2019): 366–76. http://dx.doi.org/10.14710/j.gauss.v8i3.26681.

Abstract:
Credit feasibility analysis is important for lenders to avoid risk amid the increase in credit applications. This analysis can be carried out with classification techniques. The classification technique used in this research is instance-based classification. These techniques tend to be simple, but are very dependent on the determination of K, the number of nearest neighbors considered when classifying new data. A small value of K is very sensitive to outliers. This weakness can be overcome using an algorithm that is able to handle outliers, one of which is Mutual K-Nearest Neighbor (MKNN). MKNN removes outliers first, then predicts new observation classes based on the majority class of their mutual nearest neighbors. The algorithm is compared with KNN without outliers. The model is evaluated by 10-fold cross validation and the classification performance is measured by the Geometric Mean of sensitivity and specificity. Based on the analysis, the optimal value of K is 9 for MKNN and 3 for KNN; the highest G-Mean, 0.718, is produced by KNN, while the G-Mean produced by MKNN is 0.702. The best alternative for classifying credit feasibility in this study is the K-Nearest Neighbor (KNN) algorithm with K=3. Keywords: Classification, Credit, MKNN, KNN, G-Mean.
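A rough sketch of the mutual-neighbor idea summarized above (not the authors' code): training points that are nobody's mutual neighbor are treated as outliers and removed before an ordinary kNN classifier is fitted. The variable names and the value of k are placeholders.

```python
# Illustrative sketch of mutual-kNN outlier removal followed by ordinary kNN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors


def mutual_knn_fit(X, y, k=9):
    # k nearest neighbors of every training point (column 0 is the point itself).
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nbrs.kneighbors(X, return_distance=False)[:, 1:]
    # i and j are mutual neighbors when each lies in the other's k-neighborhood.
    is_neighbor = np.zeros((len(X), len(X)), dtype=bool)
    for i, row in enumerate(idx):
        is_neighbor[i, row] = True
    mutual = is_neighbor & is_neighbor.T
    keep = mutual.any(axis=1)  # points with no mutual neighbor are dropped as outliers
    return KNeighborsClassifier(n_neighbors=k).fit(X[keep], y[keep])


# Usage (hypothetical arrays): clf = mutual_knn_fit(X_train, y_train, k=9); clf.predict(X_test)
```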
11

Song, Yunsheng, Xiaohan Kong, and Chao Zhang. "A Large-Scale k-Nearest Neighbor Classification Algorithm Based on Neighbor Relationship Preservation." Wireless Communications and Mobile Computing 2022 (January 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/7409171.

Abstract:
Owing to the absence of hypotheses about the underlying distributions of the data and its strong generalization ability, the k-nearest neighbor (kNN) classification algorithm is widely used in face recognition, text classification, emotional analysis, and other fields. However, kNN needs to compute the similarity between the unlabeled instance and all the training instances during the prediction process, so it is difficult to deal with large-scale data. To overcome this difficulty, an increasing number of acceleration algorithms based on data partition have been proposed. However, they lack theoretical analysis of the effect of data partition on classification performance. This paper makes a theoretical analysis of that effect using empirical risk minimization and proposes a large-scale k-nearest neighbor classification algorithm based on neighbor relationship preservation. The process of searching the nearest neighbors is converted to a constrained optimization problem. The paper then estimates the difference in the objective function value at the optimal solution with and without data partition. According to the obtained estimation, minimizing the similarity of the instances in the different divided subsets can largely reduce the effect of data partition. The minibatch k-means clustering algorithm is chosen to perform data partition for its effectiveness and efficiency. Finally, the nearest neighbors of the test instance are continuously searched from the set generated by successively merging the candidate subsets until they do not change anymore, where the candidate subsets are selected based on the similarity between the test instance and the cluster centers. Experimental results on public datasets show that the proposed algorithm largely keeps the same nearest neighbors as the original kNN classification algorithm, with no significant difference in classification accuracy, and gives better results than two state-of-the-art algorithms.
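The partition-then-merge search described in the closing sentences can be sketched as below. This is only one plausible reading of the abstract, with assumed interfaces, cluster count, and stopping rule; it is not the paper's implementation.

```python
# Sketch: partition training data with minibatch k-means, then merge clusters in
# order of center proximity until the query's k nearest neighbors stop changing.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import NearestNeighbors


def fit_partition(X, n_clusters=50, seed=0):
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed).fit(X)
    members = [np.where(km.labels_ == c)[0] for c in range(n_clusters)]
    return km, members


def knn_search(x, X, km, members, k=5):
    order = np.argsort(np.linalg.norm(km.cluster_centers_ - x, axis=1))
    candidates = np.array([], dtype=int)
    prev = None
    for c in order:
        candidates = np.concatenate([candidates, members[c]])
        if len(candidates) < k:
            continue
        nn = NearestNeighbors(n_neighbors=k).fit(X[candidates])
        local = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
        idx = candidates[local]
        if prev is not None and set(idx) == set(prev):
            return idx  # adding one more cluster changed nothing: stop searching
        prev = idx
    return prev
```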
12

Peng, Jing, D. R. Heisterkamp, and H. K. Dai. "LDA/SVM driven nearest neighbor classification." IEEE Transactions on Neural Networks 14, no. 4 (July 2003): 940–42. http://dx.doi.org/10.1109/tnn.2003.813835.

13

Domeniconi, C., Jing Peng, and D. Gunopulos. "Locally adaptive metric nearest-neighbor classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 9 (September 2002): 1281–85. http://dx.doi.org/10.1109/tpami.2002.1033219.

14

Peng, Jing, D. R. Heisterkamp, and H. K. Dai. "Adaptive quasiconformal kernel nearest neighbor classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 26, no. 5 (May 2004): 656–61. http://dx.doi.org/10.1109/tpami.2004.1273978.

15

Cérou, Frédéric, and Arnaud Guyader. "Nearest neighbor classification in infinite dimension." ESAIM: Probability and Statistics 10 (September 2006): 340–55. http://dx.doi.org/10.1051/ps:2006014.

16

Huang, Y. S., C. C. Chiang, J. W. Shieh, and E. Grimson. "Prototype optimization for nearest-neighbor classification." Pattern Recognition 35, no. 6 (June 2002): 1237–45. http://dx.doi.org/10.1016/s0031-3203(01)00124-8.

17

Buttrey, Samuel E. "Nearest-neighbor classification with categorical variables." Computational Statistics & Data Analysis 28, no. 2 (August 1998): 157–69. http://dx.doi.org/10.1016/s0167-9473(98)00032-2.

18

Murphy, O. "Computing nearest neighbor pattern classification perceptrons." Information Sciences 83, no. 3-4 (March 1995): 133–42. http://dx.doi.org/10.1016/0020-0255(94)00066-k.

19

Hajizadeh, Zahra, Mohammad Taheri, and Mansoor Zolghadri Jahromi. "Nearest Neighbor Classification with Locally Weighted Distance for Imbalanced Data." International Journal of Computer and Communication Engineering 3, no. 2 (2014): 81–86. http://dx.doi.org/10.7763/ijcce.2014.v3.296.

20

Prasanti, Annisya Aprilia, M. Ali Fauzi, and Muhammad Tanzil Furqon. "Neighbor Weighted K-Nearest Neighbor for Sambat Online Classification." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 1 (October 1, 2018): 155. http://dx.doi.org/10.11591/ijeecs.v12.i1.pp155-160.

Abstract:
Sambat Online is one of the implementations of E-Government for complaints management provided by the Malang City Government. All of the complaints will be classified into their intended department. In this study, an automatic complaint classification system using Neighbor Weighted K-Nearest Neighbor (NW-KNN) is proposed because Sambat Online has imbalanced data. The system developed consists of three main stages: preprocessing, N-Gram feature extraction, and classification using NW-KNN. Based on the experiment results, it can be concluded that the NW-KNN algorithm is able to classify the imbalanced data well, with the most optimal k-neighbor value of 3 and unigrams as the best features, achieving 77.85% precision, 74.18% recall, and a 75.25% f-measure. Compared to conventional KNN, the NW-KNN algorithm also proved to be better for imbalanced data problems, although with very slight differences.
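Neighbor-weighted kNN addresses imbalance by giving neighbors from rare classes larger votes. The sketch below assumes the commonly used class weight (N_min / N_c)^(1/exp); the variable names and parameter values are placeholders, and the study's preprocessing and N-gram features are omitted.

```python
# Sketch of neighbor-weighted kNN (NW-KNN) voting; not the study's implementation.
from collections import Counter

import numpy as np
from sklearn.neighbors import NearestNeighbors


def nwknn_predict(X_train, y_train, X_test, k=3, exp=2.0):
    counts = Counter(y_train)
    n_min = min(counts.values())
    # Rare classes get larger voting weights, counteracting class imbalance.
    weight = {c: (n_min / n) ** (1.0 / exp) for c, n in counts.items()}
    idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(X_test, return_distance=False)
    preds = []
    for row in idx:
        votes = Counter()
        for j in row:
            votes[y_train[j]] += weight[y_train[j]]
        preds.append(votes.most_common(1)[0][0])
    return np.array(preds)
```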
21

WILFONG, GORDON. "NEAREST NEIGHBOR PROBLEMS." International Journal of Computational Geometry & Applications 02, no. 04 (December 1992): 383–416. http://dx.doi.org/10.1142/s0218195992000226.

Abstract:
Suppose E is a set of labeled points (examples) in some metric space. A subset C of E is said to be a consistent subset of E if it has the property that for any example e∈E, the label of the closest example in C to e is the same as the label of e. We consider the problem of computing a minimum cardinality consistent subset. Consistent subsets have applications in pattern classification schemes that are based on the nearest neighbor rule. The idea is to replace the training set of examples with as small a consistent subset as possible so as to improve the efficiency of the system while not significantly affecting its accuracy. The problem of finding a minimum size consistent subset of a set of examples is shown to be NP-complete. A special case is described and is shown to be equivalent to an optimal disc cover problem. A polynomial time algorithm for this optimal disc cover problem is then given.
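Finding a minimum-cardinality consistent subset is NP-complete, as the paper shows; the notion of consistency itself, though, is easy to illustrate with Hart's classical condensed-nearest-neighbor heuristic, which greedily grows a subset consistent with the training data under the 1-NN rule. The sketch below is that heuristic, not the paper's algorithm, and assumes NumPy arrays.

```python
# Hart's condensed 1-NN heuristic: grow a subset until every training point is
# classified correctly by its nearest neighbor within the subset.
import numpy as np


def condensed_1nn(X, y, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    subset = [order[0]]
    changed = True
    while changed:
        changed = False
        for i in order:
            d = np.linalg.norm(X[subset] - X[i], axis=1)
            if y[subset][np.argmin(d)] != y[i]:
                subset.append(i)  # misclassified point: add it to the subset
                changed = True
    return np.array(subset)


# keep = condensed_1nn(X_train, y_train)  ->  reduced set X_train[keep], y_train[keep]
```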
22

Algamal, Zakariya, Shaimaa Mahmood, and Ghalia Basheer. "Classification of Chronic Kidney Disease Data via Three Algorithms." Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 1 (October 1, 2021): 414–20. http://dx.doi.org/10.55562/jrucs.v46i1.92.

Abstract:
Pattern recognition can be defined as the classification of data based on knowledge already gained or on statistical information extracted from patterns. The classification of objects is an important area for research and application in a variety of fields. In this paper, the k-Nearest Neighbor, Fuzzy k-Nearest Neighbor, and Modified k-Nearest Neighbor algorithms are used to classify chronic kidney disease (CKD) data with different choices of the value k. The experimental results show that the Fuzzy k-Nearest Neighbor and Modified k-Nearest Neighbor algorithms are very effective for classifying CKD data, with high classification accuracy.
23

Putra, Purwa Hasan, Muhammad Syahputra Novelan, and Muhammad Rizki. "Analysis K-Nearest Neighbor Method in Classification of Vegetable Quality Based on Color." Journal of Applied Engineering and Technological Science (JAETS) 3, no. 2 (June 24, 2022): 126–32. http://dx.doi.org/10.37385/jaets.v3i2.763.

Abstract:
In this research, the K-Nearest Neighbor (KNN) method is applied: a classification method that labels a collection of data based on the majority category, with the goal of classifying new objects based on attributes and samples from the training data, so that the desired output target is close in accuracy to the learning and testing results. For K values of 1 to 10, the analysis shows how the performance of the K-NN method varies. Among the K values tested, K=2 and K=9 have the largest percentage, so the accuracy is also more precise. The author's tests, which vary the K value of K-Nearest Neighbor over 3, 4, 5, 6, 7, 8, and 9, show a very good percentage accuracy. The test results show that the K-Nearest Neighbor method has good percentage accuracy in data classification when using random data; the variations of K over 3, 4, 5, 6, 7, 8, and 9 reach a percentage of 100%.
24

Suliztia, Mega Luna, and Achmad Fauzan. "COMPARING NAIVE BAYES, K-NEAREST NEIGHBOR, AND NEURAL NETWORK CLASSIFICATION METHODS OF SEAT LOAD FACTOR IN LOMBOK OUTBOUND FLIGHTS." Jurnal Matematika, Statistika dan Komputasi 16, no. 2 (December 19, 2019): 187. http://dx.doi.org/10.20956/jmsk.v16i2.7864.

Abstract:
Classification is the process of grouping data based on observed variables to predict new data whose class is unknown. There are several classification methods, such as Naïve Bayes, K-Nearest Neighbor, and Neural Network. Naïve Bayes classifies based on the probability values of the existing properties. K-Nearest Neighbor classifies based on the character of its nearest neighbors, where the number of neighbors is k, while the Neural Network classifies using a model inspired by human neural networks. This study compares the three classification methods for the Seat Load Factor, which is the percentage of aircraft load and also a measure used in determining airline profit. The affecting factors are the number of passengers, ticket prices, flight routes, and flight times. Based on the analysis of 47 records, the Naïve Bayes method misclassifies 14 records, so its accuracy rate is 70%. The K-Nearest Neighbor method with k=5 misclassifies 5 records, so its accuracy rate is 89%, and the Neural Network misclassifies 10 records, with an accuracy rate of 78%. The method with the highest accuracy rate is chosen as the best method, which in this case is the K-Nearest Neighbor method, with 42 records classified correctly: 14 low, 10 medium, and 18 high. Based on the best method, predictions can be made for new data; for example, a new record consists of the Bali flight route (2), an afternoon flight time (2), an estimated 140 passengers, and a ticket price of Rp 700,000. Using the K-Nearest Neighbor method, the predicted Seat Load Factor is high, that is, in the interval of 80%-100%.
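A generic way to run this kind of three-way comparison with cross-validated accuracy is sketched below; the dataset and model settings are illustrative placeholders, not the study's data or configuration.

```python
# Sketch: compare Naive Bayes, kNN, and a neural network by cross-validated accuracy.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
models = {
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbor (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.2%}")
```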
25

Atiya, Amir F. "Estimating the Posterior Probabilities Using the K-Nearest Neighbor Rule." Neural Computation 17, no. 3 (March 1, 2005): 731–40. http://dx.doi.org/10.1162/0899766053019971.

Abstract:
In many pattern classification problems, an estimate of the posterior probabilities (rather than only a classification) is required. This is usually the case when some confidence measure in the classification is needed. In this article, we propose a new posterior probability estimator. The proposed estimator considers the K-nearest neighbors. It attaches a weight to each neighbor that contributes in an additive fashion to the posterior probability estimate. The weights corresponding to the K-nearest-neighbors (which add to 1) are estimated from the data using a maximum likelihood approach. Simulation studies confirm the effectiveness of the proposed estimator.
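The estimator described above attaches a rank-dependent weight to each of the K nearest neighbors and sums the weights per class. The sketch below shows that estimator form for weights supplied by the caller (uniform weights recover the ordinary kNN vote fraction); the maximum-likelihood fitting of the weights described in the paper is omitted.

```python
# Sketch of a weighted-neighbor posterior estimate; the weights are given, not fitted.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_posterior(X_train, y_train, x, weights):
    """P(class | x) as a dict, for rank weights that sum to 1."""
    k = len(weights)
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    idx = nn.kneighbors(np.atleast_2d(x), return_distance=False)[0]
    posterior = {}
    for rank, j in enumerate(idx):
        posterior[y_train[j]] = posterior.get(y_train[j], 0.0) + weights[rank]
    return posterior


# Uniform weights give the usual kNN class-frequency estimate:
# knn_posterior(X_train, y_train, x_query, weights=np.full(5, 0.2))
```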
26

Mehta, Sumet, Xiangjun Shen, Jianping Gou, and Dejiao Niu. "A New Nearest Centroid Neighbor Classifier Based on K Local Means Using Harmonic Mean Distance." Information 9, no. 9 (September 14, 2018): 234. http://dx.doi.org/10.3390/info9090234.

Abstract:
The K-nearest neighbour classifier is a very effective and simple non-parametric technique in pattern classification; however, it only considers distance closeness, not the geometrical placement of the k neighbors. Also, its classification performance is highly influenced by the neighborhood size k and existing outliers. In this paper, we propose a new local mean based k-harmonic nearest centroid neighbor (LMKHNCN) classifier in order to consider both distance-based proximity and the spatial distribution of the k neighbors. In our method, the k nearest centroid neighbors in each class are first found and used to compute k different local mean vectors, which are then employed to compute their harmonic mean distance to the query sample. Lastly, the query sample is assigned to the class with the minimum harmonic mean distance. The experimental results based on twenty-six real-world datasets show that the proposed LMKHNCN classifier achieves lower error rates, particularly in small sample-size situations, and that it is less sensitive to the parameter k when compared to the four related KNN-based classifiers.
27

Widiarti, Anastasia Rita. "K-nearest neighbor performance for Nusantara scripts image transliteration." Jurnal Teknologi dan Sistem Komputer 8, no. 2 (March 13, 2020): 150–56. http://dx.doi.org/10.14710/jtsiskom.8.2.2020.150-156.

Abstract:
The concept of classification using the k-nearest neighbor (KNN) method is simple, easy to understand, and easy to implement in a system. The main challenge in classification with KNN is determining the proximity measure of an object and how to make a compact reference class. This paper studied the implementation of KNN for the automatic transliteration of Javanese, Sundanese, and Bataknese script images into Roman script. The study used the KNN algorithm with the number k set to 1, 3, 5, 7, and 9. Tests used an image dataset of 2520 samples. With 3-fold and 10-fold cross-validation, the results revealed accuracy differences when the area of the extracted image, the number of neighbors in the classification, and the amount of training data were varied.
28

Purba, Windania, Fando Marehitno Salim, Antoni Antoni, Yuni Suhendrik, and Jeanie Winata. "APPLICATION OF DATA MINING TO CLASSIFY HATE SPEECH ON SOCIAL MEDIA BY USING THE K NEAREST NEIGHBOR ALGORITHM." Jurnal Handayani 10, no. 1 (July 23, 2019): 102. http://dx.doi.org/10.24114/jh.v10i1.14144.

Abstract:
Social media is currently one of the biggest sources of information available. However, in the use and dissemination of information, there are still many social media users who spread hateful words (hate speech). Therefore, classification needs to be done to reduce the appearance of hate speech sentences, using K Nearest Neighbor. The K Nearest Neighbor algorithm classifies based on the results of learning on the given objects. In the research carried out, the KNN algorithm succeeded in classifying hate speech in the given tweet data. Keywords: K Nearest Neighbor, Classification, Data Mining, Hate Speech, Social Media
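A generic text-classification pipeline of the kind described here pairs a TF-IDF representation with a kNN classifier; the toy tweets and labels below are placeholders, not the study's data or preprocessing.

```python
# Sketch: TF-IDF features + kNN with cosine distance for short-text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

tweets = ["an ordinary tweet", "a hateful insult aimed at a group",
          "another harmless post", "more hateful abuse"]
labels = ["neutral", "hate", "neutral", "hate"]

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3, metric="cosine"))
model.fit(tweets, labels)
print(model.predict(["a new tweet to classify"]))
```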
29

Wang, Yue, Cong Guo, Chunjing Xiao, and Wei Yang. "Combining Imputation Method and Feature Weighting Algorithms to Improve the Classification Accuracy of Incomplete Data." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012038. http://dx.doi.org/10.1088/1742-6596/2171/1/012038.

Abstract:
It is a common problem that datasets in pattern classification tasks contain missing values. In order to deal with missing values, the K-Nearest Neighbor Imputation method with Euclidean distance is often used to find the K nearest neighbors, and then the mean of these samples is taken as the filling value. Although this method can deal with the problem of missing values well, it cannot effectively deal with classification tasks with noise and redundant features because it ignores the importance of features. To address this, the paper proposes a novel imputation method. In our method, missing values are first filled by using the K-Nearest Neighbor Imputation method based on Euclidean distance; then a feature weighting vector is obtained by using a feature weighting algorithm trained on the initially filled data, and finally the K-Nearest Neighbor Imputation with weighted Euclidean distance is used to fill the missing values. The experimental results on six datasets show that the proposed method can improve the classification accuracy of incomplete data.
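The two-pass scheme above can be sketched as follows. A random-forest importance score stands in for the (unspecified) feature-weighting algorithm, which is an assumption on my part, and the weighted Euclidean distance is obtained by rescaling columns before the second imputation pass.

```python
# Sketch: KNN imputation, learn feature weights on the filled data, re-impute
# with a weighted Euclidean distance via column rescaling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer


def weighted_knn_impute(X_missing, y, k=5, seed=0):
    # Pass 1: plain KNN imputation with ordinary Euclidean distance.
    X_filled = KNNImputer(n_neighbors=k).fit_transform(X_missing)
    # Stand-in feature weighting: random-forest importances on the filled data.
    w = RandomForestClassifier(random_state=seed).fit(X_filled, y).feature_importances_
    scale = np.sqrt(w / max(w.max(), 1e-12))
    # Pass 2: KNN imputation in the rescaled space equals a weighted Euclidean distance.
    X_final = KNNImputer(n_neighbors=k).fit_transform(X_missing * scale)
    return X_final / np.where(scale > 0, scale, 1.0)
```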
30

Milania, Surayya Safira, Cucu Suhery, and Tedy Rismawan. "Fever Classification Using the Neighbor Weighted K-Nearest Neighbor Method." CESS (Journal of Computer Engineering, System and Science) 8, no. 2 (July 1, 2023): 250. http://dx.doi.org/10.24114/cess.v8i2.43267.

Abstract:
Fever is a symptom of the body's reaction to an infection or disease. Fever can be caused by viral, bacterial, or parasitic infections, as well as by mosquito bites. Several diseases that cause fever need to be watched out for, including Dengue Hemorrhagic Fever (DHF), typhoid fever, and malaria, because the clinical symptoms of these three diseases are very similar and difficult to distinguish. The similar symptoms often make it difficult to obtain an early diagnosis, so treatment may not be appropriate. Therefore, in this study, a system was developed that can classify fever using the Neighbor Weighted K-Nearest Neighbor method. The data used totaled 300 records, with a 70%:30% ratio of training data to test data, giving 210 training records and 90 test records. The research was conducted by observing the effect of varying the neighborhood value (K) and the exponent value (E) on the accuracy of the fever classification system. The training results show that varying the K and E values has no effect on that accuracy. The tests carried out obtained an accuracy of 100% for every variation of the K and E values.
31

Assegaf, Alwi, Moch Abdul Mukid, and Abdul Hoyyi. "Analisis Kesehatan Bank Menggunakan Local Mean K-Nearest Neighbor dan Multi Local Means K-Harmonic Nearest Neighbor." Jurnal Gaussian 8, no. 3 (August 30, 2019): 343–55. http://dx.doi.org/10.14710/j.gauss.v8i3.26679.

Abstract:
The classification method continues to develop in order to get more accurate classification results than before. The purpose of this research is to compare two k-Nearest Neighbor (KNN) methods that have been developed, namely the Local Mean k-Nearest Neighbor (LMKNN) and the Multi Local Means k-Harmonic Nearest Neighbor (MLM-KHNN), using a case study of listed banks whose financial statements were completely recorded at Bank Indonesia in 2017. LMKNN is a method that aims to improve classification performance and reduce the influence of outliers, and MLM-KHNN is a method that aims to reduce sensitivity to a single value. This study uses seven indicators to measure the soundness of a bank, including the Capital Adequacy Ratio, Non Performing Loans, Loan to Deposit Ratio, Return on Assets, Return on Equity, Net Interest Margin, and Operating Expenses to Operating Income, with bank health status classified as very good (class 1), good (class 2), quite good (class 3), and poor (class 4). The measure of the accuracy of the classification results used is the Apparent Error Rate (APER). The best classification results of the LMKNN method are obtained with 80% training data and 20% test data and k=7, which produces the smallest APER of 0.0556 and an accuracy of 94.44%, while the best classification results of the MLM-KHNN method are obtained with 80% training data and 20% test data and k=3, which produces the smallest APER of 0.1667 and an accuracy of 83.33%. The APER calculation shows that the LMKNN method is better than MLM-KHNN at classifying the health status of banks in Indonesia. Keywords: Classification, Local Mean k-Nearest Neighbor (LMKNN), Multi Local Means k-Harmonic Nearest Neighbor (MLM-KHNN), Measure of accuracy of classification
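The LMKNN rule compared here assigns a query to the class whose local mean of the k nearest same-class neighbors lies closest. A minimal sketch, assuming NumPy arrays, is shown below; the MLM-KHNN variant, which combines several local means with a harmonic mean, is omitted.

```python
# Sketch of local mean k-nearest neighbor (LMKNN) classification.
import numpy as np


def lmknn_predict(X_train, y_train, x, k=7):
    best_class, best_dist = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)  # mean of the k nearest points of class c
        dist = np.linalg.norm(local_mean - x)
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class
```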
32

Widyadhana, Arya, Cornelius Bagus Purnama Putra, Rarasmaya Indraswari, and Agus Zainal Arifin. "A Bonferroni Mean Based Fuzzy K Nearest Centroid Neighbor Classifier." Jurnal Ilmu Komputer dan Informasi 14, no. 1 (February 28, 2021): 65–71. http://dx.doi.org/10.21609/jiki.v14i1.959.

Abstract:
K-nearest neighbor (KNN) is an effective nonparametric classifier that determines the neighbors of a point based only on distance proximity. The classification performance of KNN is disadvantaged by the presence of outliers in small sample size datasets and its performance deteriorates on datasets with class imbalance. We propose a local Bonferroni Mean based Fuzzy K-Nearest Centroid Neighbor (BM-FKNCN) classifier that assigns class label of a query sample dependent on the nearest local centroid mean vector to better represent the underlying statistic of the dataset. The proposed classifier is robust towards outliers because the Nearest Centroid Neighborhood (NCN) concept also considers spatial distribution and symmetrical placement of the neighbors. Also, the proposed classifier can overcome class domination of its neighbors in datasets with class imbalance because it averages all the centroid vectors from each class to adequately interpret the distribution of the classes. The BM-FKNCN classifier is tested on datasets from the Knowledge Extraction based on Evolutionary Learning (KEEL) repository and benchmarked with classification results from the KNN, Fuzzy-KNN (FKNN), BM-FKNN and FKNCN classifiers. The experimental results show that the BM-FKNCN achieves the highest overall average classification accuracy of 89.86% compared to the other four classifiers.
33

NOCK, RICHARD, MARC SEBBAN, and DIDIER BERNARD. "A SIMPLE LOCALLY ADAPTIVE NEAREST NEIGHBOR RULE WITH APPLICATION TO POLLUTION FORECASTING." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 08 (December 2003): 1369–82. http://dx.doi.org/10.1142/s0218001403002952.

Abstract:
In this paper, we propose a thorough investigation of a nearest neighbor rule which we call the "Symmetric Nearest Neighbor (sNN) rule". Basically, it symmetrises the classical nearest neighbor relationship from which are computed the points voting for some instances. Experiments on 29 datasets, most of which are readily available, show that the method significantly outperforms the traditional Nearest Neighbors methods. Experiments on a domain of interest related to tropical pollution normalization also show the greater potential of this method. We finally discuss the reasons for the rule's efficiency, provide methods for speeding-up the classification time, and derive from the sNN rule a reliable and fast algorithm to fix the parameter k in the k-NN rule, a longstanding problem in this field.
34

Viola, Rémi, Rémi Emonet, Amaury Habrard, Guillaume Metzler, Sébastien Riou, and Marc Sebban. "A Nearest Neighbor Algorithm for Imbalanced Classification." International Journal on Artificial Intelligence Tools 30, no. 03 (May 2021): 2150013. http://dx.doi.org/10.1142/s0218213021500135.

Abstract:
Due to the inability of the accuracy-driven methods to address the challenging problem of learning from imbalanced data, several alternative measures have been proposed in the literature, like the Area Under the ROC Curve (AUC), the Average Precision (AP), the F-measure, the G-Mean, etc. However, these latter measures are neither smooth, convex nor separable, making their direct optimization hard in practice. In this paper, we tackle the challenging problem of imbalanced learning from a nearest-neighbor (NN) classification perspective, where the minority examples typically belong to the class of interest. Based on simple geometrical ideas, we introduce an algorithm that rescales the distance between a query sample and any positive training example. This leads to a modification of the Voronoi regions and thus of the decision boundaries of the NN classifier. We provide a theoretical justification about this scaling scheme which inherently aims at reducing the False Negative rate while controlling the number of False Positives. We further formally establish a link between the proposed method and cost-sensitive learning. An extensive experimental study is conducted on many public imbalanced datasets showing that our method is very effective with respect to popular Nearest-Neighbor algorithms, comparable to state-of-the-art sampling methods and even yields the best performance when combined with them.
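The core idea above is to shrink the measured distance between a query and every positive (minority-class) training example by a factor gamma, which enlarges the positive class's decision regions. The sketch below illustrates it for a plain 1-NN rule; gamma, the label convention, and the restriction to 1-NN are assumptions made for brevity.

```python
# Sketch: 1-NN with distances to positive examples rescaled by gamma < 1.
import numpy as np


def rescaled_1nn_predict(X_train, y_train, X_test, positive_label=1, gamma=0.7):
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        d = np.where(y_train == positive_label, gamma * d, d)  # shrink distances to positives
        preds.append(y_train[np.argmin(d)])
    return np.array(preds)
```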
35

Kumar, Ganesh, and A. Arivazhagan. "Mammogram Classification Based on k-Nearest Neighbor Classifier." Indian Journal of Public Health Research & Development 8, no. 3s (2017): 83. http://dx.doi.org/10.5958/0976-5506.2017.00245.5.

36

Bay, Stephen D. "Nearest neighbor classification from multiple feature subsets." Intelligent Data Analysis 3, no. 3 (May 1, 1999): 191–209. http://dx.doi.org/10.3233/ida-1999-3304.

37

Zhu, Qingsheng. "Classification Algorithm Based on Natural Nearest Neighbor." Journal of Information and Computational Science 12, no. 2 (January 20, 2015): 573–80. http://dx.doi.org/10.12733/jics20105267.

38

Cucala, Lionel, Jean-Michel Marin, Christian P. Robert, and D. M. Titterington. "A Bayesian Reassessment of Nearest-Neighbor Classification." Journal of the American Statistical Association 104, no. 485 (March 2009): 263–73. http://dx.doi.org/10.1198/jasa.2009.0125.

39

Angiulli, Fabrizio, and Fabio Fassetti. "Nearest Neighbor-Based Classification of Uncertain Data." ACM Transactions on Knowledge Discovery from Data 7, no. 1 (March 2013): 1–35. http://dx.doi.org/10.1145/2435209.2435210.

40

Ghosh, A. K., P. Chaudhuri, and C. A. Murthy. "Multiscale Classification Using Nearest Neighbor Density Estimates." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 5 (October 2006): 1139–48. http://dx.doi.org/10.1109/tsmcb.2006.873186.

41

Winiarti, S., F. I. Indikawati, A. Oktaviana, and H. Yuliansyah. "Consumable Fish Classification Using k-Nearest Neighbor." IOP Conference Series: Materials Science and Engineering 821 (May 29, 2020): 012039. http://dx.doi.org/10.1088/1757-899x/821/1/012039.

42

Bandyopadhyay, Sanghamitra, and Ujjwal Maulik. "Efficient prototype reordering in nearest neighbor classification." Pattern Recognition 35, no. 12 (December 2002): 2791–99. http://dx.doi.org/10.1016/s0031-3203(01)00234-5.

43

Bressan, M., and J. Vitrià. "Nonparametric discriminant analysis and nearest neighbor classification." Pattern Recognition Letters 24, no. 15 (November 2003): 2743–49. http://dx.doi.org/10.1016/s0167-8655(03)00117-x.

44

Bay, S. "Nearest neighbor classification from multiple feature subsets." Intelligent Data Analysis 3, no. 3 (September 1999): 191–209. http://dx.doi.org/10.1016/s1088-467x(99)00018-9.

45

Zhang, Shichao, Debo Cheng, Ming Zong, and Lianli Gao. "Self-representation nearest neighbor search for classification." Neurocomputing 195 (June 2016): 137–42. http://dx.doi.org/10.1016/j.neucom.2015.08.115.

46

Zheng, Wenming, Li Zhao, and Cairong Zou. "Locally nearest neighbor classifiers for pattern classification." Pattern Recognition 37, no. 6 (June 2004): 1307–9. http://dx.doi.org/10.1016/j.patcog.2003.11.004.

47

Masip, David, and Jordi Vitrià. "Boosted discriminant projections for nearest neighbor classification." Pattern Recognition 39, no. 2 (February 2006): 164–70. http://dx.doi.org/10.1016/j.patcog.2005.06.004.

48

Sarkar, Manish. "Fuzzy-rough nearest neighbor algorithms in classification." Fuzzy Sets and Systems 158, no. 19 (October 2007): 2134–52. http://dx.doi.org/10.1016/j.fss.2007.04.023.

49

Ma, Hongxing, Jianping Gou, Xili Wang, Jia Ke, and Shaoning Zeng. "Sparse Coefficient-Based k-Nearest Neighbor Classification." IEEE Access 5 (2017): 16618–34. http://dx.doi.org/10.1109/access.2017.2739807.

50

Zeng, Yong, Yupu Yang, and Liang Zhao. "Pseudo nearest neighbor rule for pattern classification." Expert Systems with Applications 36, no. 2 (March 2009): 3587–95. http://dx.doi.org/10.1016/j.eswa.2008.02.003.
