
Journal articles on the topic 'K-nearest-neighbor'


Consult the top 50 journal articles for your research on the topic 'K-nearest-neighbor.'


Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Peterson, Leif. "K-nearest neighbor." Scholarpedia 4, no. 2 (2009): 1883. http://dx.doi.org/10.4249/scholarpedia.1883.

2. Wilujeng, Dian Tri, Mohamat Fatekurohman, and I. Made Tirta. "Analisis Risiko Kredit Perbankan Menggunakan Algoritma K-Nearest Neighbor dan Nearest Weighted K-Nearest Neighbor." Indonesian Journal of Applied Statistics 5, no. 2 (2023): 142. http://dx.doi.org/10.13057/ijas.v5i2.58426.

Abstract: A bank is a business entity that collects public funds in the form of savings and distributes them back to the public in the form of credit or other products. Credit risk analysis can be done in various ways, such as marketing analysis and big-data analysis using machine learning. One example of a machine learning algorithm is K-Nearest Neighbor (KNN), and a development of the K-Nearest Neighbor algorithm is Neighbor Weighted K-Nearest Neighbor (NWKNN). The K-Nearest Neighbor (KNN) algorithm is one of the machine learning methods that can be used to facilitate the classification…

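As a reference point for the many KNN variants listed below, here is a minimal sketch of the plain majority-vote rule that NWKNN and the other methods modify. The function name and toy data are illustrative, not drawn from the cited paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Plain KNN: majority vote among the k training points nearest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to x
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# illustrative toy data
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array(["low-risk", "low-risk", "high-risk", "high-risk"])
print(knn_predict(X, y, np.array([0.95, 1.0]), k=3))  # -> "high-risk"
```
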
3. Ertuğrul, Ömer Faruk, and Mehmet Emin Tağluk. "A novel version of k nearest neighbor: Dependent nearest neighbor." Applied Soft Computing 55 (June 2017): 480–90. http://dx.doi.org/10.1016/j.asoc.2017.02.020.

4. Syaliman, Khairul Umam, M. Zulfahmi, and Aldi Abdillah Nababan. "Perbandingan Rapid Centroid Estimation (RCE) — K Nearest Neighbor (K-NN) Dengan K Means — K Nearest Neighbor (K-NN)." InfoTekJar (Jurnal Nasional Informatika dan Teknologi Jaringan) 2, no. 1 (2017): 79–89. http://dx.doi.org/10.30743/infotekjar.v2i1.166.

Abstract: Clustering techniques have been shown to improve classification accuracy, especially for the K-Nearest Neighbor (K-NN) algorithm. The data of each class are grouped into K clusters, and the final centroid of each cluster in each class is then used as the reference data for classification with the K-NN algorithm. However, the drawback of many clustering techniques is their high computational cost; Rapid Centroid Estimation (RCE) and K-Means are among the clustering techniques with a low computational cost. To see which of the two algorithms…

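A hedged sketch of the cluster-then-classify pipeline this abstract describes, assuming scikit-learn and using K-Means as the centroid estimator (the paper also evaluates RCE, which scikit-learn does not ship); all parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def centroid_reduced_knn(X, y, n_clusters=3, k=1):
    """Summarize each class by its K-Means centroids, then fit KNN on them."""
    centroids, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X[y == c])
        centroids.append(km.cluster_centers_)
        labels.extend([c] * n_clusters)
    return KNeighborsClassifier(n_neighbors=k).fit(np.vstack(centroids), np.array(labels))
```

Replacing each class's raw points with a handful of centroids shrinks the reference set, which is the speed gain the abstract targets.
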
5. Wu, Yingquan, Krassimir Ianakiev, and Venu Govindaraju. "Improved k-nearest neighbor classification." Pattern Recognition 35, no. 10 (2002): 2311–18. http://dx.doi.org/10.1016/s0031-3203(01)00132-7.

6. Chung, Yu-Chi, I.-Fang Su, Chiang Lee, and Pei-Chi Liu. "Multiple k nearest neighbor search." World Wide Web 20, no. 2 (2016): 371–98. http://dx.doi.org/10.1007/s11280-016-0392-2.

7. Yu, Zhiwen, Hantao Chen, Jiming Liu, Jane You, Hareton Leung, and Guoqiang Han. "Hybrid k-Nearest Neighbor Classifier." IEEE Transactions on Cybernetics 46, no. 6 (2016): 1263–75. http://dx.doi.org/10.1109/tcyb.2015.2443857.

8. Bezdek, James C., Siew K. Chuah, and David Leep. "Generalized k-nearest neighbor rules." Fuzzy Sets and Systems 18, no. 3 (1986): 237–56. http://dx.doi.org/10.1016/0165-0114(86)90004-7.

9. Lee, Heesung. "K-Nearest Neighbor rule using Distance Information Fusion." Journal of Korean Institute of Intelligent Systems 28, no. 2 (2018): 160–63. http://dx.doi.org/10.5391/jkiis.2018.28.2.160.

10. Algamal, Zakariya, Shaimaa Mahmood, and Ghalia Basheer. "Classification of Chronic Kidney Disease Data via Three Algorithms." Journal of Al-Rafidain University College for Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 1 (October 1, 2021): 414–20. http://dx.doi.org/10.55562/jrucs.v46i1.92.

Abstract: Pattern recognition can be defined as the classification of data based on knowledge already gained or on statistical information extracted from patterns. The classification of objects is an important area for research and application in a variety of fields. In this paper, the k-Nearest Neighbor, Fuzzy k-Nearest Neighbor and Modified k-Nearest Neighbor algorithms are used to classify the chronic kidney disease (CKD) data with different choices of the value k. The experiment results prove that the Fuzzy k-Nearest Neighbor and Modified k-Nearest Neighbor algorithms are very effective for classifying CKD…

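A minimal sketch of the fuzzy KNN idea used here (in the style of Keller et al., entry 19 below): neighbors vote with inverse-distance weights on pre-assigned class memberships rather than with crisp labels. The function, the membership matrix layout, and the fuzzifier m are illustrative assumptions.

```python
import numpy as np

def fuzzy_knn(X_train, U_train, x, k=5, m=2.0):
    """Fuzzy KNN: class memberships for x are a distance-weighted average of
    the memberships of its k nearest neighbors. U_train[i, c] holds the
    pre-assigned membership of training point i in class c."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1))  # inverse-distance weights
    u = (w[:, None] * U_train[idx]).sum(axis=0) / w.sum()   # fused memberships
    return u.argmax(), u   # crisp label plus the full membership vector
```
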
11. Ren, Zelin, Yongqiang Tang, and Wensheng Zhang. "Quality-related fault diagnosis based on k-nearest neighbor rule for non-linear industrial processes." International Journal of Distributed Sensor Networks 17, no. 11 (2021): 155014772110559. http://dx.doi.org/10.1177/15501477211055931.

Abstract: Fault diagnosis approaches based on the k-nearest neighbor rule have been widely researched for industrial processes and achieve excellent performance. However, for quality-related fault diagnosis, approaches using the k-nearest neighbor rule have still not been sufficiently studied. To tackle this problem, in this article we propose a novel quality-related fault diagnosis framework, which is made up of two parts: fault detection and fault isolation. In the fault detection stage, we propose a novel non-linear quality-related fault detection method called kernel partial least squares…

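The paper's kernel-PLS framework is beyond a snippet, but the generic k-NN detection rule it builds on is easy to sketch: score a new sample by its distance to the k nearest normal-operation samples and flag it when the score exceeds a threshold estimated from training data. Everything below is an illustrative reconstruction of that generic rule, not the authors' method.

```python
import numpy as np

def knn_fault_scores(X_train, X_new, k=5):
    """Mean squared distance from each new sample to its k nearest
    normal-operation training samples; large scores suggest a fault."""
    scores = []
    for x in np.atleast_2d(X_new):
        d = np.sort(np.linalg.norm(X_train - x, axis=1))[:k]
        scores.append(np.mean(d ** 2))
    return np.array(scores)

# threshold: e.g. a high percentile of scores on held-out normal data
# threshold = np.percentile(knn_fault_scores(X_train, X_valid), 99)
```
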
12. Zhai, Junhai, Jiaxing Qi, and Sufang Zhang. "An instance selection algorithm for fuzzy K-nearest neighbor." Journal of Intelligent & Fuzzy Systems 40, no. 1 (2021): 521–33. http://dx.doi.org/10.3233/jifs-200124.

Abstract: The condensed nearest neighbor (CNN) is a pioneering instance selection algorithm for the 1-nearest neighbor rule. Many variants of CNN for K-nearest neighbor have been proposed by different researchers. However, few studies have been conducted on condensed fuzzy K-nearest neighbor. In this paper, we present a condensed fuzzy K-nearest neighbor (CFKNN) algorithm that starts from an initial instance set S and iteratively selects informative instances from the training set T, moving them from T to S. Specifically, CFKNN consists of three steps. First, for each instance x ∈ T, it finds the K nearest neighbors in S…

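For context, a compact sketch of Hart's classic condensed NN loop that CFKNN-style methods refine: an instance stays in the reduced set S only if the current S misclassifies it. Integer labels are assumed; the paper's fuzzy selection criterion is not reproduced here.

```python
import numpy as np

def condensed_nn(X, y, k=1):
    """Hart's condensed NN: add to S every instance that the current
    subset S misclassifies, until a full pass adds nothing."""
    S = [0]                                   # seed S with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in S:
                continue
            d = np.linalg.norm(X[S] - X[i], axis=1)
            nearest = np.array(S)[np.argsort(d)[:k]]
            if np.bincount(y[nearest]).argmax() != y[i]:  # misclassified: keep it
                S.append(i)
                changed = True
    return np.array(S)
```
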
13. Violita, Putri, Gomal Juni Yanris, and Mila Nirmala Sari Hasibuan. "Analysis of Visitor Satisfaction Levels Using the K-Nearest Neighbor Method." SinkrOn 8, no. 2 (2023): 898–914. http://dx.doi.org/10.33395/sinkron.v8i2.12257.

Abstract: Visitors are people who come to a place such as an entertainment, shopping, or tourism venue, and they are one of the important factors in the progress and development of such a place. With visitors, an entertainment, tourism, or shopping area can grow and develop, so the researchers study the level of visitor satisfaction. This research aims to improve the quality of an entertainment and shopping venue and to increase the number of visitors. The study was conducted using the K-Nearest Neighbor method, a classification method based on training data (a dataset)…

14. Tomasev, Nenad, and Dunja Mladenic. "Nearest neighbor voting in high dimensional data: Learning from past occurrences." Computer Science and Information Systems 9, no. 2 (2012): 691–712. http://dx.doi.org/10.2298/csis111211014t.

Abstract: Hubness is a recently described aspect of the curse of dimensionality inherent to nearest-neighbor methods. This paper describes a new approach for exploiting the hubness phenomenon in k-nearest neighbor classification. We argue that some neighbor occurrences carry more information than others, by virtue of being less frequent events. This observation is related to the hubness phenomenon, and we explore how it affects high-dimensional k-nearest neighbor classification. We propose a new algorithm, Hubness Information k-Nearest Neighbor (HIKNN), which introduces the k-occurrence information…

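A small sketch of the k-occurrence statistic the abstract refers to: count how often each point appears in other points' k-NN lists. In high dimensions this distribution becomes skewed, with a few "hub" points occurring very often. Illustrative code only, not the HIKNN algorithm itself.

```python
import numpy as np

def k_occurrences(X, k=5):
    """Count how often each point appears among the k nearest neighbors
    of the other points; strongly skewed counts indicate hubness."""
    n = len(X)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        counts[np.argsort(d)[:k]] += 1
    return counts
```
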
15. Salim, Axel Natanael, Ade Adryani, and Tata Sutabri. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K-Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 991–1001. http://dx.doi.org/10.46799/syntax-idea.v6i2.3052.

Abstract: There are many cases of email abuse that can potentially harm others. Such misused email is commonly known as spam, containing advertisements, scams, and even malware. This study aims to detect spam and non-spam email based on its content using the K-Nearest Neighbor and Support Vector Machine methods, with the best results of the K-Nearest Neighbor algorithm obtained using the Euclidean distance measure. Support Vector Machine and K-Nearest Neighbor can classify and detect spam and non-spam email; K-Nearest Neighbor uses…

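A hedged sketch of the content-based spam detection pipeline described: vectorize message text with TF-IDF, then fit both a Euclidean-distance KNN and an SVM. This assumes scikit-learn; the toy messages and parameters are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

texts = ["win a free prize now", "meeting agenda for monday",
         "cheap pills online", "lunch at noon?"]          # illustrative messages
labels = ["spam", "non-spam", "spam", "non-spam"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)                              # sparse TF-IDF features
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean").fit(X, labels)
svm = SVC(kernel="linear").fit(X, labels)
print(knn.predict(vec.transform(["free prize pills"])))   # likely "spam"
```
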
16. Rajarajeswari, Perepi. "Hyperspectral Image Classification by Using K-Nearest Neighbor Algorithm." International Journal of Psychosocial Rehabilitation 24, no. 5 (2020): 5068–74. http://dx.doi.org/10.37200/ijpr/v24i5/pr2020214.

17. Choi, Byung-In, and Chung-Hoon Rhee. "Fuzzy Kernel K-Nearest Neighbor Algorithm for Image Segmentation." Journal of Fuzzy Logic and Intelligent Systems 15, no. 7 (2005): 828–33. http://dx.doi.org/10.5391/jkiis.2005.15.7.828.

18. Agrawal, Rashmi. "Extensions of k-Nearest Neighbor Algorithm." Research Journal of Applied Sciences, Engineering and Technology 13, no. 1 (2016): 24–29. http://dx.doi.org/10.19026/rjaset.13.2886.

19. Keller, James M., Michael R. Gray, and James A. Givens. "A fuzzy K-nearest neighbor algorithm." IEEE Transactions on Systems, Man, and Cybernetics SMC-15, no. 4 (1985): 580–85. http://dx.doi.org/10.1109/tsmc.1985.6313426.

20. Samet, H. "K-Nearest Neighbor Finding Using MaxNearestDist." IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 2 (2008): 243–52. http://dx.doi.org/10.1109/tpami.2007.1182.

21. Bax, Eric. "Validation of k-Nearest Neighbor Classifiers." IEEE Transactions on Information Theory 58, no. 5 (2012): 3225–34. http://dx.doi.org/10.1109/tit.2011.2180887.

22. Magnussen, Steen, and Erkki Tomppo. "Model-calibrated k-nearest neighbor estimators." Scandinavian Journal of Forest Research 31, no. 2 (2015): 183–93. http://dx.doi.org/10.1080/02827581.2015.1073348.

23. Chen, Lu, Yunjun Gao, Gang Chen, and Haida Zhang. "Metric All-k-Nearest-Neighbor Search." IEEE Transactions on Knowledge and Data Engineering 28, no. 1 (2016): 98–112. http://dx.doi.org/10.1109/tkde.2015.2453954.

24. Bermejo, Sergio, and Joan Cabestany. "Adaptive soft k-nearest-neighbor classifiers." Pattern Recognition 32, no. 12 (1999): 2077–79. http://dx.doi.org/10.1016/s0031-3203(99)00120-x.

25. Emrich, Tobias, Hans-Peter Kriegel, Peer Kröger, Johannes Niedermayer, Matthias Renz, and Andreas Züfle. "On reverse-k-nearest-neighbor joins." GeoInformatica 19, no. 2 (2014): 299–330. http://dx.doi.org/10.1007/s10707-014-0215-5.

26. Gil-Pita, Roberto, and Xin Yao. "Evolving edited k-nearest neighbor classifiers." International Journal of Neural Systems 18, no. 6 (2008): 459–67. http://dx.doi.org/10.1142/s0129065708001725.

Abstract: The k-nearest neighbor method is a classifier based on the evaluation of the distances to each pattern in the training set. The edited version of this method applies the classifier with a subset of the complete training set from which some of the training patterns are excluded, in order to reduce the classification error rate. In recent works, genetic algorithms have been successfully applied to determine which patterns must be included in the edited subset. In this paper we propose a novel implementation of a genetic algorithm for designing edited k-nearest neighbor classifiers…

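The paper evolves edited subsets with a genetic algorithm; as a reference point, here is the simpler fixed editing rule (Wilson editing) that such searches generalize: keep only training points that their own k nearest neighbors classify correctly. Illustrative code assuming integer labels, not the authors' GA.

```python
import numpy as np

def wilson_edit(X, y, k=3):
    """Wilson editing: drop training points misclassified by their own
    k nearest neighbors (leave-one-out); GA-based editing searches over
    such subsets instead of applying a fixed rule."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # leave the point itself out
        nn = np.argsort(d)[:k]
        if np.bincount(y[nn]).argmax() == y[i]:
            keep.append(i)
    return np.array(keep)
```
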
27. Miao, Xiaoye, Yunjun Gao, Gang Chen, Baihua Zheng, and Huiyong Cui. "Processing Incomplete k Nearest Neighbor Search." IEEE Transactions on Fuzzy Systems 24, no. 6 (2016): 1349–63. http://dx.doi.org/10.1109/tfuzz.2016.2516562.

28. Achtert, Elke, Christian Böhm, Peer Kröger, Peter Kunath, Alexey Pryakhin, and Matthias Renz. "Efficient reverse k-nearest neighbor estimation." Informatik - Forschung und Entwicklung 21, no. 3-4 (2007): 179–95. http://dx.doi.org/10.1007/s00450-007-0027-z.

29. Steele, Brian M. "Exact bootstrap k-nearest neighbor learners." Machine Learning 74, no. 3 (2008): 235–55. http://dx.doi.org/10.1007/s10994-008-5096-0.

30. Rodiana, Rosdiana. "Classification of Hypertension Patients in Palembang by K-Nearest Neighbor and Local Mean K-Nearest Neighbor." Journal of Statistics and Data Science 3, no. 1 (2024): 27–35. https://doi.org/10.33369/jsds.v3i1.32381.

Abstract: Classification is a multivariate technique for separating different data sets from an object and allocating new objects into predefined groups. Methods that can be used for classification include the k-Nearest Neighbor (KNN) and Local Mean k-Nearest Neighbor (LMKNN) methods. The KNN method classifies objects based on the majority-voting principle, while LMKNN classifies objects based on the local average vector of the k nearest neighbors in each class. In this study, a comparison was made of the results of classifying hypertensive patient data at the Merdeka Health Center in Palembang City…

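A minimal sketch of the LMKNN rule contrasted in the abstract: within every class, average the k training points nearest to the query, then assign the query to the class whose local mean is closest. The function name and defaults are illustrative.

```python
import numpy as np

def lmknn_predict(X_train, y_train, x, k=3):
    """Local Mean KNN: assign x to the class whose local mean vector
    (average of its k nearest members to x) is closest to x."""
    best_class, best_dist = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)
        dist = np.linalg.norm(local_mean - x)
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class
```
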
31. Onyezewe, Anozie, Armand F. Kana, Fatimah B. Abdullahi, and Aminu O. Abdulsalami. "An Enhanced Adaptive k-Nearest Neighbor Classifier Using Simulated Annealing." International Journal of Intelligent Systems and Applications 13, no. 1 (2021): 34–44. http://dx.doi.org/10.5815/ijisa.2021.01.03.

Abstract: The k-Nearest Neighbor classifier is a non-complex and widely applied data classification algorithm which does well in real-world applications. The overall classification accuracy of the k-Nearest Neighbor algorithm largely depends on the choice of the number of nearest neighbors (k). The use of a constant k value does not always yield the best solutions, especially for real-world datasets with an irregular class and density distribution of data points, as it totally ignores the class and density distribution of a test point's k-environment or neighborhood. A resolution to this problem is to dynamically…

32. Prasanti, Annisya Aprilia, M. Ali Fauzi, and Muhammad Tanzil Furqon. "Neighbor Weighted K-Nearest Neighbor for Sambat Online Classification." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 1 (2018): 155. http://dx.doi.org/10.11591/ijeecs.v12.i1.pp155-160.

Abstract: Sambat Online is one of the implementations of E-Government for complaints management provided by the Malang City Government. All complaints are classified into the intended department. In this study, an automatic complaint classification system using Neighbor Weighted K-Nearest Neighbor (NW-KNN) is proposed because Sambat Online has imbalanced data. The system developed consists of three main stages: preprocessing, N-Gram feature extraction, and classification using NW-KNN. Based on the experiment results, it can be concluded that the NW-KNN algorithm is able to classify…

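A sketch of the neighbor-weighting idea behind NW-KNN (formalized in Tan's paper, entry 33 below): each class's votes are scaled by a factor that shrinks with class size, so minority classes are not drowned out. The exponent p and all names are illustrative.

```python
import numpy as np
from collections import Counter

def nwknn_predict(X_train, y_train, x, k=5, p=2.0):
    """Neighbor-Weighted KNN: votes are scaled so that small classes get
    weight > 1 relative to large classes, countering class imbalance."""
    sizes = Counter(y_train)
    min_size = min(sizes.values())
    weight = {c: (sizes[c] / min_size) ** (-1.0 / p) for c in sizes}  # per-class weights
    d = np.linalg.norm(X_train - x, axis=1)
    scores = Counter()
    for i in np.argsort(d)[:k]:
        scores[y_train[i]] += weight[y_train[i]]
    return scores.most_common(1)[0][0]
```
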
33

TAN, S. "Neighbor-weighted K-nearest neighbor for unbalanced text corpus." Expert Systems with Applications 28, no. 4 (2005): 667–71. http://dx.doi.org/10.1016/j.eswa.2004.12.023.

34. Prasanti, Annisya Aprilia, M. Ali Fauzi, and Muhammad Tanzil Furqon. "Neighbor Weighted K-Nearest Neighbor for Sambat Online Classification." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 1 (2018): 155–60. https://doi.org/10.11591/ijeecs.v12.i1.pp155-160.

Abstract: Sambat Online is one of the implementations of E-Government for complaints management provided by the Malang City Government. All complaints are classified into the intended department. In this study, an automatic complaint classification system using Neighbor Weighted K-Nearest Neighbor (NW-KNN) is proposed because Sambat Online has imbalanced data. The system developed is composed of three major phases: preprocessing, N-Gram feature extraction, and classification using NW-KNN. Based on the experiment results, it can be concluded that the NW-KNN algorithm is able to classify the imbalanced…

35. Krishandhie, Syuja Zhafran Rakha, and Aji Purwinarko. "Random Forest Algorithm Optimization using K-Nearest Neighbor and SMOTE on Diabetes Disease." Recursive Journal of Informatics 3, no. 1 (2025): 43–50. https://doi.org/10.15294/rji.v3i1.1576.

Abstract: Diabetes is a chronic disease that can cause long-term damage, dysfunction, and failure of various organs in the body. Diabetes occurs when blood sugar (glucose) levels rise above normal values. Early diagnosis is crucial, especially for a chronic illness such as diabetes. Purpose: This study aims to find out how the K-Nearest Neighbor algorithm, together with the Synthetic Minority Oversampling Technique (SMOTE), can optimize the Random Forest algorithm for diabetes prediction. Methods/Study design…

36. Andryani, Ade. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 1–14. http://dx.doi.org/10.46799/syntax-idea.v6i2.3058.

Abstract: There are many cases of email abuse that can potentially harm others. Such misused email is commonly known as spam, containing advertisements, scams, and even malware. This study aims to detect spam and non-spam email based on its content using the K-Nearest Neighbor and Support Vector Machine methods, with the best results of the K-Nearest Neighbor algorithm obtained using the Euclidean distance measure. Support Vector Machine and K-Nearest Neighbor can classify and detect spam and non-spam email; K-Nearest Neighbor uses…

37. Setiyorini, Tyas, and Rizky Tri Asmono. "PENERAPAN METODE K-NEAREST NEIGHBOR DAN GINI INDEX PADA KLASIFIKASI KINERJA SISWA." Jurnal Techno Nusa Mandiri 16, no. 2 (2019): 121–26. http://dx.doi.org/10.33480/techno.v16i2.747.

Abstract: Predicting student academic performance is one of the important applications of data mining in education. However, existing work is not enough to identify which factors affect student performance. Information on academic scores or progress in student learning is not sufficient as a factor for predicting student performance and for helping students and educators to make improvements in learning and teaching. K-Nearest Neighbor is a simple method for classifying student performance, but it has problems with high feature dimensionality. To solve this problem, we need a method of…

38. Romli, Ikhsan, Shanti Prameswari R, and Antika Zahrotul Kamalia. "Sentiment Analysis about Large-Scale Social Restrictions in Social Media Twitter Using Algoritm K-Nearest Neighbor." Jurnal Online Informatika 6, no. 1 (2021): 96. http://dx.doi.org/10.15575/join.v6i1.670.

Abstract: Sentiment analysis is data processing that recognizes the topics people talk about and their sentiments toward those topics; the topic in this study is large-scale social restrictions (PSBB). This study classifies negative and positive sentiments by applying the K-Nearest Neighbor algorithm and compares the accuracy of three distance calculations (cosine similarity, Euclidean, and Manhattan distance) on Indonesian-language tweets about large-scale social restrictions (PSBB) from Twitter. With the results obtained, the K-Nearest Neighbor accuracy with the cosine…

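A hedged sketch of the metric comparison the study performs, assuming scikit-learn; the synthetic features below stand in for the TF-IDF tweet vectors used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in data; in the study these would be vectorized tweets
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

for metric in ["cosine", "euclidean", "manhattan"]:
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric)
    acc = cross_val_score(knn, X, y, cv=5).mean()   # 5-fold accuracy
    print(metric, round(float(acc), 3))
```
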
39. Dhiya'atulhaq, Annisa Reza, Oliver S. Simanjuntak, and Heriyanto Heriyanto. "Classification of prospective borrowing customers to reduce the risk of bad deposits in sharia cooperatives using the FK-NNC method." Computing and Information Processing Letters 1, no. 1 (2021): 8. http://dx.doi.org/10.31315/cip.v1i1.6125.

Abstract: Objective: Assisting cooperatives in classifying prospective financing members to reduce non-performing deposits in sharia cooperatives. Design/method/approach: The Fuzzy K-Nearest Neighbor in Every Class method is used to classify prospective financing members; system development uses the waterfall method. Results: Based on the implementation and the tests carried out using a confusion matrix, the Fuzzy K-Nearest Neighbor in Every Class method can classify prospective financing members with an average accuracy rate of 80%…

40. Setiyorini, Tyas, and Rizky Tri Asmono. "IMPLEMENTATION OF GAIN RATIO AND K-NEAREST NEIGHBOR FOR CLASSIFICATION OF STUDENT PERFORMANCE." Jurnal Pilar Nusa Mandiri 16, no. 1 (2020): 19–24. http://dx.doi.org/10.33480/pilar.v16i1.813.

Abstract: Predicting student performance is very useful for analyzing weak students and providing support to students who face difficulties. However, the work done by educators has not been effective enough in identifying the factors that affect student performance. The main predictor is an informative student academic score, but that alone is not good enough for predicting student performance. Educators utilize Educational Data Mining (EDM) to predict student performance. K-Nearest Neighbor is often used to classify student performance because of its simplicity, but the K-Nearest Neighbor has a weakness…

41. Lee, Hee-Sung, Jae-Hun Lee, and Eun-Tai Kim. "Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS." Journal of Korean Institute of Intelligent Systems 18, no. 6 (2008): 871–75. http://dx.doi.org/10.5391/jkiis.2008.18.6.871.

42. Setianingsih, Susi, Maria Ulfa Chasanah, Yogiek Indra Kurniawan, and Lasmedi Afuan. "IMPLEMENTATION OF PARTICLE SWARM OPTIMIZATION IN K-NEAREST NEIGHBOR ALGORITHM AS OPTIMIZATION HEPATITIS C CLASSIFICATION." Jurnal Teknik Informatika (Jutif) 4, no. 2 (2023): 457–65. http://dx.doi.org/10.52436/1.jutif.2023.4.2.980.

Abstract: Hepatitis has become a public health problem that is generally caused by infection with a hepatitis virus. One type of hepatitis caused by a virus is Hepatitis C. This disease can cause patients to experience inflammation of the liver and, in the worst conditions, can even lead to death. Initial predictions need to be made to increase each individual's awareness of the threat of Hepatitis C, using the K-Nearest Neighbor method. K-Nearest Neighbor is a classification method that can give quite good results, especially when using large training data. However…

43. Rahmawati, Fitri, Fitri Amanah, and Sefri Imanuel Fallo. "Studi Komparasi Regresi Logistik Biner dan K-Nearest Neighbor Pada Kasus Prediksi Curah Hujan." Statistika 24, no. 1 (2024): 20–30. http://dx.doi.org/10.29313/statistika.v24i1.2739.

Abstract: Climate change occurring in various parts of the world as a result of global warming has led to weather uncertainty. One of the changes felt is in rainfall intensity, which makes rainfall prediction important. Several data analysis techniques are used for rainfall prediction, including classification. In this study, using temperature, humidity, duration of sunshine, and wind speed as variables, the classification of rainfall in Bogor City is predicted. The models used…

44. Widiarti, Anastasia Rita. "K-nearest neighbor performance for Nusantara scripts image transliteration." Jurnal Teknologi dan Sistem Komputer 8, no. 2 (2020): 150–56. http://dx.doi.org/10.14710/jtsiskom.8.2.2020.150-156.

Abstract: The concept of classification using the k-nearest neighbor (KNN) method is simple, easy to understand, and easy to implement in a system. The main challenge in classification with KNN is determining the proximity measure of an object and how to make a compact reference class. This paper studies the implementation of KNN for the automatic transliteration of Javanese, Sundanese, and Bataknese script images into Roman script. The study used the KNN algorithm with the number k set to 1, 3, 5, 7, and 9. Tests used an image dataset of 2520 items. With 3-fold and 10-fold cross-validation…

45. Setiyorini, Tyas, and Rizky Tri Asmono. "PENERAPAN METODE K-NEAREST NEIGHBOR DAN INFORMATION GAIN PADA KLASIFIKASI KINERJA SISWA." JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer) 5, no. 1 (2019): 7–14. http://dx.doi.org/10.33480/jitk.v5i1.613.

Abstract: Education is a very important factor in the development of a country. One way to reach a high quality of education is to predict student academic performance. The methods used so far are ineffective because evaluation is based solely on the educator's assessment of information on the progress of student learning. Such information is not enough to form indicators for evaluating student performance and for helping students and educators to make improvements in learning and teaching. K-Nearest Neighbor is an effective method for classifying student performance…

46. Wu, Wei, Jingbo Guo, Jie Li, Ji Sun, Haoran Qi, and Ximing Chen. "Research on Stratum Identification Method Based on TBM Tunneling Characteristic Parameters." Complexity 2022 (October 29, 2022): 1–12. http://dx.doi.org/10.1155/2022/8540985.

Abstract: In order to obtain continuous stratum information during TBM tunneling, stratum recognition is carried out with a K-nearest neighbor model over TBM tunneling parameters, and the model is improved by the entropy weight method to raise the stratum recognition rate. By analyzing the correlation between TBM tunneling characteristic parameters and the stratum, the characteristic parameter vector most sensitive to the stratum is obtained by sensitivity analysis, and a stratum recognition model based on the K-nearest neighbor algorithm is established. Aiming at the problem…

47. Pavithraa, G., and S. Sivaprasad. "Analysis and Comparison of Prediction of Heart Disease Using Novel K Nearest Neighbor and Decision Tree Algorithm." CARDIOMETRY, no. 25 (February 14, 2023): 773–77. http://dx.doi.org/10.18137/cardiometry.2022.25.773777.

Abstract: Aim: Prediction of heart disease using the novel k-nearest neighbor (KNN) algorithm and comparison of its accuracy with the decision tree algorithm. Materials and Methods: Two groups are compared for predicting the accuracy (%) of heart disease, namely the novel k-nearest neighbor and the decision tree algorithm. We take 20 samples for each for evaluation and comparison. The sample size was calculated using G*Power with the pretest power at 80% and an alpha of 0.05. Result: The decision tree gives better accuracy (84.95%) compared with the novel k-nearest neighbor accuracy (76.29%)…

48. Kurniawan, Muchamad, Gusti Eka Yuliastuti, Andy Rachman, Adib Pakar Budi, and Hafida Nur Zaqiyah. "Implementing K-Nearest Neighbors (k-NN) Algorithm and Backward Elimination on Cardiotocography Datasets." JOIV: International Journal on Informatics Visualization 8, no. 3 (2024): 1239. http://dx.doi.org/10.62527/joiv.8.3.1996.

Abstract: Having a healthy baby is a dream for mothers. Unfortunately, high maternal and fetal mortality has become a vital problem that requires early risk detection for pregnant women. A cardiotocography examination is necessary to maintain maternal and fetal health. One method that can address this problem is classification. This research analyzes the optimal choice of k values and distance measures in the k-NN method and is expected to become a primary reference for other studies examining the same dataset or developing k-NN. Feature selection is needed to optimize the classification method…

49. Assegaf, Alwi, Moch Abdul Mukid, and Abdul Hoyyi. "Analisis Kesehatan Bank Menggunakan Local Mean K-Nearest Neighbor dan Multi Local Means K-Harmonic Nearest Neighbor." Jurnal Gaussian 8, no. 3 (2019): 343–55. http://dx.doi.org/10.14710/j.gauss.v8i3.26679.

Abstract: Classification methods continue to develop in order to get more accurate results than before. The purpose of this research is to compare two k-Nearest Neighbor (KNN) developments, namely Local Mean k-Nearest Neighbor (LMKNN) and Multi Local Means k-Harmonic Nearest Neighbor (MLM-KHNN), taking as a case study the financial statements of listed banks completely recorded at Bank Indonesia in 2017. LMKNN is a method that aims to improve classification performance and reduce the influence of outliers, and MLM-KHNN is a method that aims to…

50. Sugesti, Annisa, Moch Abdul Mukid, and Tarno Tarno. "PERBANDINGAN KINERJA MUTUAL K-NEAREST NEIGHBOR (MKNN) DAN K-NEAREST NEIGHBOR (KNN) DALAM ANALISIS KLASIFIKASI KELAYAKAN KREDIT." Jurnal Gaussian 8, no. 3 (2019): 366–76. http://dx.doi.org/10.14710/j.gauss.v8i3.26681.

Abstract: Credit feasibility analysis is important for lenders to avoid risk amid the increase in credit applications. This analysis can be carried out with classification techniques; the technique used in this research is instance-based classification. Such techniques tend to be simple but are very dependent on the choice of K, the number of nearest neighbors considered when classifying new data. A small value of K is very sensitive to outliers. This weakness can be overcome using an algorithm that is able to handle outliers, one of them being Mutual K-Nearest Neighbor (MKNN)…

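A small sketch of the mutual-neighbor criterion behind MKNN: a training point counts as a neighbor of the query only if the query would also appear in that point's own k-NN list, which filters out outlier links. Illustrative code for one reasonable reading of the rule; the full MKNN classifier in the paper builds its vote on such mutual neighbors.

```python
import numpy as np
from collections import Counter

def mutual_knn_predict(X_train, y_train, x, k=5, default=None):
    """Mutual KNN: keep only neighbors of x that also have x among their
    own k nearest points (over the training set plus x); vote on those."""
    X_all = np.vstack([X_train, x])              # x gets index len(X_train)
    d_x = np.linalg.norm(X_train - x, axis=1)
    cand = np.argsort(d_x)[:k]                   # x's k nearest training points
    mutual = []
    for i in cand:
        d_i = np.linalg.norm(X_all - X_train[i], axis=1)
        d_i[i] = np.inf                          # ignore self-distance
        if len(X_all) - 1 in np.argsort(d_i)[:k]:  # is x in i's own k-NN list?
            mutual.append(i)
    if not mutual:
        return default                           # no mutual neighbors found
    return Counter(y_train[i] for i in mutual).most_common(1)[0][0]
```
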