Journal articles on the topic 'K-Support vector nearest neighbor'

Consult the top 50 journal articles for your research on the topic 'K-Support vector nearest neighbor.'


1

Wijaya, Aditya Surya, Nurul Chamidah, and Mayanda Mega Santoni. "Pengenalan Karakter Tulisan Tangan Dengan K-Support Vector Nearest Neighbor." IJEIS (Indonesian Journal of Electronics and Instrumentation Systems) 9, no. 1 (2019): 33. http://dx.doi.org/10.22146/ijeis.38729.

Abstract:
Handwritten characters are difficult for machines to recognize because people have their own varied writing styles. This research recognizes handwritten character patterns of digits and letters using the K-Nearest Neighbour (KNN) algorithm. Handwriting recognition proceeds by preprocessing the handwritten image, segmenting it to obtain separate single characters, extracting features, and classifying. Feature extraction uses the Zone method, and the resulting feature data are split into training data and testing data. The training data from the extracted features are reduced with K-Support Vector Nearest Neighbor (K-SVNN), and K-Nearest Neighbor (KNN) is used to recognize the handwritten patterns in the testing data. Testing results show that reducing the training data using K-SVNN is able to improve handwritten character recognition accuracy.
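The pipeline this abstract describes, zone-based feature extraction followed by KNN classification, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation; the zone count, k, and the toy data are hypothetical choices:

```python
import math
from collections import Counter

def zone_features(image, zones=2):
    """Zone method: split a square binary image into zones x zones cells
    and use the ink density (fraction of 1-pixels) of each cell as a feature."""
    n = len(image)
    step = n // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            cell = [image[r][c]
                    for r in range(zr * step, (zr + 1) * step)
                    for c in range(zc * step, (zc + 1) * step)]
            feats.append(sum(cell) / len(cell))
    return feats

def knn_predict(train_X, train_y, x, k=3):
    """Classic KNN: majority label among the k nearest training vectors."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# A 4x4 "image" with ink only in the top-left quadrant:
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(zone_features(img))  # [1.0, 0.0, 0.0, 0.0]
```

K-SVNN would then discard training vectors that are unlikely to sit near a class boundary before `knn_predict` is ever called, which is where the reported speed and accuracy gains come from.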
2

Salim, Axel Natanael, Ade Adryani, and Tata Sutabri. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K-Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 991–1001. http://dx.doi.org/10.46799/syntax-idea.v6i2.3052.

Abstract:
There are many cases of email misuse that can potentially harm others. Misused email of this kind is commonly known as spam, and typically contains advertisements, scams, and even malware. This study aims to detect spam and non-spam emails from their content using the K-Nearest Neighbor and Support Vector Machine methods, with the best configuration of the K-Nearest Neighbor algorithm found using the Euclidean distance measure. Both Support Vector Machine and K-Nearest Neighbor can classify and detect spam and non-spam email; K-Nearest Neighbor uses Euclidean distance with K = 1, 3, and 5. Evaluation with a confusion matrix shows that the K-Nearest Neighbor method with k = 3 achieves 92% accuracy, 91% precision, 100% recall, and a 95% F1-score. The Support Vector Machine method achieves 97% accuracy, 100% recall, and a 98% F1-score, making it superior to the K-Nearest Neighbor method in this study. In addition, the resulting model can already be used to predict spam and non-spam from the content of new emails.
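The accuracy, precision, recall, and F1-score figures quoted in abstracts like this one all derive from a confusion matrix. A minimal Python sketch of that computation (the toy labels below are illustrative):

```python
def confusion_metrics(y_true, y_pred, positive="spam"):
    """Accuracy, precision, recall, and F1-score from the confusion matrix,
    treating `positive` as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = confusion_metrics(
    ["spam", "spam", "ham", "ham"], ["spam", "spam", "spam", "ham"])
print(acc, prec, rec, f1)
```

Note the pattern visible in the study's own numbers: 100% recall with 91% precision means every spam email was caught, at the cost of some ham being flagged.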
3

Andryani, Ade. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 1–14. http://dx.doi.org/10.46799/syntax-idea.v6i2.3058.

Abstract:
There are many cases of email misuse that can potentially harm others. Misused email of this kind is commonly known as spam, and typically contains advertisements, scams, and even malware. This study aims to detect spam and non-spam emails from their content using the K-Nearest Neighbor and Support Vector Machine methods, with the best configuration of the K-Nearest Neighbor algorithm found using the Euclidean distance measure. Both Support Vector Machine and K-Nearest Neighbor can classify and detect spam and non-spam email; K-Nearest Neighbor uses Euclidean distance with K = 1, 3, and 5. Evaluation with a confusion matrix shows that the K-Nearest Neighbor method with k = 3 achieves 92% accuracy, 91% precision, 100% recall, and a 95% F1-score. The Support Vector Machine method achieves 97% accuracy, 100% recall, and a 98% F1-score, making it superior to the K-Nearest Neighbor method in this study. In addition, the resulting model can already be used to predict spam and non-spam from the content of new emails.

Keywords: Confusion Matrix, Email, KNN, Spam, SVM
4

Basedt, Ngabdul, Eko Supriyadi, and Agus Susilo Nugroho. "Perbandingan Algoritma Klasifikasi dalam Analisis Sentimen Opini Masyarakat tentang Kenaikan Harga Bbm." Joined Journal (Journal of Informatics Education) 6, no. 2 (2024): 219. http://dx.doi.org/10.31331/joined.v6i2.2893.

Abstract:
The increase in fuel (BBM) prices has become a complex and controversial issue. Rising fuel prices affect various economic and social aspects in Indonesia, including inflation, production costs, and transport fares. This study compares the Naïve Bayes, Support Vector Machine, and K-Nearest Neighbors algorithms for sentiment classification of public opinion, to determine which performs best. The comparison shows that the highest accuracy is obtained by the Naïve Bayes algorithm, at 80.28%; second is the Support Vector Machine (SVM) algorithm, at 73.89%; the lowest accuracy belongs to the K-Nearest Neighbor (KNN) algorithm, at 50.00%.
5

Srinivasulureddy, Ch, and N. S. Kumar. "Analysis and Comparison for Innovative Prediction Technique of Breast Cancer Tumor using k Nearest Neighbor Algorithm over Support Vector Machine Algorithm with Improved Accuracy." CARDIOMETRY, no. 25 (February 14, 2023): 878–94. http://dx.doi.org/10.18137/cardiometry.2022.25.878884.

Abstract:
Aim: The main objective of this study is to compare the efficiency of the k-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms in detecting breast cancer tumors and to examine their improved accuracy, sensitivity, and precision. Materials and Methods: The data for this research on innovative breast cancer prediction using machine learning algorithms is taken from the UCI Machine Learning Repository. The sample involves two groups, KNN (N=20) and SVM (N=20), sized according to clincalc.com with the alpha error threshold at 0.05, the confidence interval at 95%, the enrollment ratio at 0:1, and power at 80%. Accuracy, sensitivity, and precision are calculated using MATLAB software. Result: Accuracy (%), sensitivity (%), and precision (%) are compared in SPSS using an independent-sample t-test. The accuracy of k-Nearest Neighbor is 93.38% (p<0.001) while the accuracy of the Support Vector Machine is 97.50%. The sensitivity is 90.85% (p<0.001) for k-Nearest Neighbor whereas the Support Vector Machine's sensitivity is 95.83%. The precision of k-Nearest Neighbor is 98.48% (p<0.001) whereas the Support Vector Machine's precision is 100%. Conclusion: The Support Vector Machine algorithm appears to have performed better than k-Nearest Neighbor, with improved accuracy, in innovative breast cancer prediction.
6

Kumar, V. S., and K. Vidhya. "Heart Plaque Detection with Improved Accuracy using K-Nearest Neighbors classifier Algorithm in comparison with Least Squares Support Vector Machine." CARDIOMETRY, no. 25 (February 14, 2023): 1590–94. http://dx.doi.org/10.18137/cardiometry.2022.25.15901594.

Abstract:
Aim: The objective of the work is to evaluate the performance of the k-Nearest Neighbor classifier in detecting heart plaque with high accuracy, comparing it with the Least Squares Support Vector Machine. Materials and Methods: The Kaggle dataset on heart plaque disease yielded a total of 20 samples. ClinCalc (using alpha, power, and the enrollment ratio) is used to assess a G power of 0.08 with a 95% confidence interval for the samples. The data are divided into a training dataset (n = 489 [70 percent]) and a test dataset (n = 277 [30 percent]). Accuracy is used to assess the performance of the k-Nearest Neighbor algorithm and the Least Squares Support Vector Machine. Results: The accuracy of the k-Nearest Neighbor algorithm was 86%, versus 67.3% for the Least Squares Support Vector Machine technique. Since p (2-tailed) < 0.05 in the SPSS statistical analysis, a significant difference exists between the two groups. Conclusion: In this work, the k-Nearest Neighbor algorithm outperformed the Least Squares Support Vector Machine algorithm in detecting heart plaque disease in the dataset under consideration.
7

Pan, Xianli, Yao Luo, and Yitian Xu. "K-nearest neighbor based structural twin support vector machine." Knowledge-Based Systems 88 (November 2015): 34–44. http://dx.doi.org/10.1016/j.knosys.2015.08.009.

8

Xu, Yitian, and Laisheng Wang. "K-nearest neighbor-based weighted twin support vector regression." Applied Intelligence 41, no. 1 (2014): 299–309. http://dx.doi.org/10.1007/s10489-014-0518-0.

9

Nasiri, Jalal al-Din, and Amirmahmoud Mir. "An enhanced KNN-based twin support vector machine with stable learning rules." Neural Computing and Applications 32, no. 16 (2020): 12949–69. https://doi.org/10.1007/s00521-020-04740-x.

Abstract:
Among the extensions of the twin support vector machine (TSVM), some scholars have utilized the K-nearest neighbor (KNN) graph to enhance TSVM's classification accuracy. However, these KNN-based TSVM classifiers have two major issues: high computational cost and overfitting. In order to address these issues, this paper presents an enhanced regularized K-nearest-neighbor-based twin support vector machine (RKNN-TSVM). It has three additional advantages: (1) a weight is given to each sample by considering the distance from its nearest neighbors, which further reduces the effect of noise and outliers on the output model; (2) an extra stabilizer term is added to each objective function, so the learning rules of the proposed method are stable; (3) to reduce the computational cost of finding the KNNs of all samples, the location-difference-of-multiple-distances-based K-nearest-neighbors algorithm (LDMDBA) is embedded into the learning process. Extensive experimental results on several synthetic and benchmark datasets show the effectiveness of the proposed RKNN-TSVM in both classification accuracy and computational time. Moreover, the largest speedup in the proposed method reaches 14 times.
10

Mahfouz, Mohamed A. "INCORPORATING DENSITY IN K-NEAREST NEIGHBORS REGRESSION." International Journal of Advanced Research in Computer Science 14, no. 03 (2023): 144–49. http://dx.doi.org/10.26483/ijarcs.v14i3.6989.

Abstract:
The application of traditional k-nearest neighbours to regression analysis suffers from several difficulties when only a limited number of samples are available. In this paper, two density-based decision models are proposed. In order to reduce testing time, a k-nearest-neighbours table (kNN-Table) is maintained to keep the neighbours of each object x along with their weighted Manhattan distance to x and a binary vector representing the increase or decrease in each dimension compared to x's values. In the first decision model, if the unseen sample has a distance to one of its neighbours x less than that of the farthest neighbour of x's neighbours, its label is estimated using linear interpolation; otherwise linear extrapolation is used. In the second decision model, for each neighbour x of the unseen sample, the distance of the unseen sample to x and the binary vector are computed. Also, the set S of nearest neighbours of x is identified from the kNN-Table. For each sample in S, a normalized distance to the unseen sample is computed using the information stored in the kNN-Table, and it is used to compute the weight of each neighbour of the neighbours of the unseen object. In both models, a weighted average of the label computed for each neighbour is assigned to the unseen object. The diversity between the two proposed decision models and the traditional kNN regressor motivates the development of an ensemble of the two proposed models along with the traditional kNN regressor. The ensemble is evaluated, and the results show that it achieves a significant increase in performance compared to its base regressors and several related algorithms.
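For contrast with the two density-based decision models, the traditional kNN regressor they are ensembled with can be sketched in a few lines of Python. This is a minimal inverse-distance-weighted variant under illustrative assumptions (the data, weighting scheme, and k below are not from the paper):

```python
import math

def knn_regress(train_X, train_y, x, k=3):
    """Traditional kNN regression with inverse-distance weights:
    the prediction is a weighted average of the k nearest labels."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    weights = []
    for i in nearest:
        d = math.dist(train_X[i], x)
        if d == 0:                      # exact match: return its label
            return train_y[i]
        weights.append((1.0 / d, train_y[i]))
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 2.0, 4.0, 6.0]              # samples of y = 2x
print(knn_regress(X, y, [1.5], k=2))  # 3.0
```

The paper's models refine exactly this averaging step: they choose between interpolation and extrapolation per neighbour, and reweight using the precomputed kNN-Table instead of raw distances.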
11

Eko, Prasetyo, Dimas Adityo R., Suciat Nanik, and Fatichah Chastine. "Multi-class K-support Vector Nearest Neighbor for Mango Leaf Classification." TELKOMNIKA Telecommunication, Computing, Electronics and Control 16, no. 4 (2018): 1826–37. https://doi.org/10.12928/TELKOMNIKA.v16i4.8482.

Abstract:
K-Support Vector Nearest Neighbor (K-SVNN) is one of the methods for training-data reduction that works only for binary classes. This method uses a Left Value (LV) and Right Value (RV) to calculate a Significant Degree (SD) property. This research modifies K-SVNN for the multi-class training-data reduction problem by using entropy to calculate the SD property. Entropy can measure the impurity of the data's class distribution, so SD selection can be based on high entropy. In order to measure the performance of the modified K-SVNN on mango leaf classification, an experiment is conducted using the multi-class Support Vector Machine (SVM) method on training data with and without reduction. The experiment uses 300 mango leaf images, each represented by 260 features: 256 Weighted Rotation- and Scale-Invariant Local Binary Pattern texture features with average weights (WRSI-LBPavg), 2 color features, and 2 shape features. The results show that the highest accuracy for data with and without reduction is 71.33% and 71.00% respectively, so K-SVNN can be used to reduce data in multi-class classification problems while preserving accuracy. In addition, the performance of the modified K-SVNN is compared with two other multi-class data reduction methods, the Condensed Nearest Neighbor Rule (CNN) and Template Reduction KNN (TRKNN); the comparison shows that the modified K-SVNN achieves better accuracy.
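The entropy used here as the multi-class impurity measure for the Significant Degree is the standard Shannon entropy of a class distribution. A minimal sketch (the toy label sets are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a class-label distribution.
    Zero for a pure set; maximal when classes are evenly mixed."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

print(entropy(["a", "a", "b", "b"]))       # 1.0 (two evenly mixed classes)
print(entropy(["a", "b", "c", "d"]))       # 2.0 (four evenly mixed classes)
```

A sample whose neighborhood has high entropy sits in a mixed-class region, i.e. near a decision boundary, which is why high entropy is the selection criterion for keeping it during reduction.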
12

Nugrahadi, Dodon Turianto, Tri Mulyani, Dwi Kartini, et al. "Efek Transformasi Wavelet Diskrit Pada Klasifikasi Aritmia Dari Data Elektrokardiogram Menggunakan Machine Learning." JURNAL MEDIA INFORMATIKA BUDIDARMA 7, no. 1 (2023): 13. http://dx.doi.org/10.30865/mib.v7i1.4859.

Abstract:
Arrhythmia is one of the abnormalities of heart rhythm, and some patients who suffer from arrhythmia do not feel any symptoms, so automating the early detection of arrhythmia from the electrocardiogram is necessary. Previous research conducted classifications using several data mining methods. In this research, the signal-processing transformation used is the Discrete Wavelet Transformation, in which a filtering process separates signals into high- and low-frequency components without losing the information in the signals, carried out with a two-level decomposition. After that, data normalization was performed using min-max normalization, and the data were put into classification models: the Support Vector Machine with a Gaussian Radial Basis Function kernel, Naïve Bayes, and K-Nearest Neighbor. The data consisted of 140 records, 35 for each label. This research shows that at level-1 decomposition the highest accuracy was obtained at db7: 73.57% for the Support Vector Machine, 68.57% for Naïve Bayes, 59.64% for K-Nearest Neighbor with k=3, and 63.57% for K-Nearest Neighbor with k=5. At level-2 decomposition the highest accuracy was obtained at db6 and db8: 70.71% for the Support Vector Machine, 67.50% for Naïve Bayes, 66.07% for K-Nearest Neighbor with k=3, and 65% for K-Nearest Neighbor with k=5. It can be concluded that the highest accuracy is produced by decomposition level 1 with Support Vector Machine classification, and that the Daubechies wavelet type gives better results than the Haar wavelet.
13

Fauziah, Muhammad Arif Tiro, and Ruliana. "Comparison of k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM) Methods for Classification of Poverty Data in Papua." ARRUS Journal of Mathematics and Applied Science 2, no. 2 (2022): 83–91. http://dx.doi.org/10.35877/mathscience741.

Abstract:
Classification is the task of assessing data objects in order to assign them to one of a number of available classes. The classification methods used here are k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). The data used in this study are poverty data for Papua, with categories for low/high numbers of poor people. Of the 29 regencies/cities sampled, 15 represent low numbers of poor people and 14 represent high numbers. The k-Nearest Neighbor (k-NN) method with k=15 produces an accuracy of 58.62%, while the Support Vector Machine (SVM) method with cost parameter 1 and the RBF kernel produces an accuracy of 93.1%. Based on the Root Mean Square Error (RMSE) criterion for choosing the best method, the Support Vector Machine (SVM) method is better than the k-Nearest Neighbor (k-NN) method.
14

Hu, Jie, and Sier Deng. "Data-Driven Fatigue Damage Monitoring and Early Warning Model for Bearings." Wireless Communications and Mobile Computing 2022 (March 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/7611670.

Abstract:
Since manually extracted features are not sufficient to accurately characterize the health status of rolling bearings, machine learning algorithms, which can adaptively learn the required features from the input data, are gradually being used for bearing fault diagnosis. In this paper, k-nearest neighbor, support vector machines, and convolutional neural networks are successfully applied to the fault diagnosis of bearings, for the benefit of achieving detection and early warning of bearing fatigue damage. The original samples are segmented into semi-overlapping samples. When using k-nearest neighbor and support vector machines as early-warning models, their hyperparameters were searched with random search and grid search; the results showed that support vector machines could achieve 87.1% bearing-detection accuracy and k-nearest neighbor could achieve 100%. When a convolutional neural network is used as the early-warning model, the accuracy reaches 99.75%.
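Random search over a single kNN hyperparameter, as used for the early-warning models above, can be sketched as follows. The candidate k values and the toy scoring function are hypothetical stand-ins for a real cross-validated accuracy:

```python
import random

def random_search_k(evaluate, k_values, trials=5, seed=42):
    """Random search: sample candidate k values and keep the one that
    scores best under `evaluate` (e.g. cross-validated accuracy)."""
    rng = random.Random(seed)
    best_k, best_score = None, float("-inf")
    for _ in range(trials):
        k = rng.choice(k_values)
        score = evaluate(k)
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Toy scorer that peaks at k=7 (stand-in for real validation accuracy):
best_k, best_score = random_search_k(lambda k: 1.0 - abs(k - 7) / 10,
                                     k_values=[1, 3, 5, 7, 9, 11], trials=6)
```

Grid search differs only in iterating over every candidate instead of sampling; for one or two cheap hyperparameters the grid is usually affordable, while random search scales better when each evaluation is expensive.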
15

Al-Dabagh, Mustafa ZuhaerNayef, Mustafa H. Mohammed Alhabib, and Firas H. AL-Mukhtar. "Face Recognition System Based on Kernel Discriminant Analysis, K-Nearest Neighbor and Support Vector Machine." International Journal of Research and Engineering 5, no. 2 (2018): 335–38. http://dx.doi.org/10.21276/ijre.2018.5.3.3.

16

Antonio, Roy, and Hironimus Leong. "PERFORMANCE OF SYNTHETIC MINORITY OVER-SAMPLING TECHNIQUE ON SUPPORT VECTOR MACHINE AND K-NEAREST NEIGHBOR FOR SENTIMENT ANALYSIS OF METAVERSE IN INDONESIA." Proxies : Jurnal Informatika 6, no. 2 (2024): 160–70. http://dx.doi.org/10.24167/proxies.v6i2.12459.

Abstract:
The metaverse is one of the most discussed topics on social media, including Twitter in Indonesia. Public opinion can be both positive and negative, hence the need for sentiment analysis. However, training a sentiment classification model on imbalanced data reduces performance. For this reason, the Synthetic Minority Over-sampling Technique is applied with the Support Vector Machine and K-Nearest Neighbor algorithms. The results show that Synthetic Minority Over-sampling can improve the accuracy of both the Support Vector Machine and K-Nearest Neighbor algorithms.
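The Synthetic Minority Over-sampling Technique generates new minority-class samples by interpolating between a minority point and one of its k nearest minority neighbors. A minimal sketch (the k value and toy data are illustrative, not from the paper):

```python
import math
import random

def smote_sample(minority, k=2, rng=None):
    """Create one synthetic sample: pick a random minority point, pick one
    of its k nearest minority neighbors, and interpolate between them at a
    random fraction along the connecting segment."""
    rng = rng or random.Random(0)
    x = rng.choice(minority)
    neighbors = sorted((p for p in minority if p != x),
                       key=lambda p: math.dist(x, p))[:k]
    n = rng.choice(neighbors)
    lam = rng.random()  # position along the segment x -> n
    return [xi + lam * (ni - xi) for xi, ni in zip(x, n)]

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
synthetic = smote_sample(minority)
```

Because the synthetic points lie between real minority samples rather than duplicating them, the oversampled class occupies a region of feature space instead of a handful of repeated points, which is what helps SVM and KNN generalize.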
17

Akinshola-Awe, Funmilayo Jumoke, A. A. Obiniyi, Gilbert I. O. Aimufua, and Tochukwu Kene Anyachebelu. "Framework for the detection and classification of malware using machine learning." Dutse Journal of Pure and Applied Sciences 10, no. 3a (2024): 177–86. http://dx.doi.org/10.4314/dujopas.v10i3a.17.

Abstract:
Malware constitutes a major threat to network infrastructure, which is vulnerable to several devastating malware attacks such as viruses and ransomware. Traditional antimalware software offers limited efficiency against malware removal because of evolving evasion capabilities such as polymorphism: antimalware only removes malware it has signatures for, and is ineffective and helpless against zero-day attacks. Several research works have used supervised and unsupervised learning algorithms to detect and classify malware, but false positives prevail. This research used machine learning to detect and classify malware, employing feature selection techniques as well as Grid Search hyperparameter optimization. Principal Component Analysis was combined with Chi-Square to cure the curse of dimensionality. Support Vector Machine, K-Nearest Neighbor, and Decision Tree were used to train the model separately on two datasets, and the model was evaluated with a confusion matrix, precision, recall, and F1 score. Accuracies of 99%, 98.64%, and 100% were achieved with K-Nearest Neighbor, Decision Tree, and Support Vector Machine respectively on the CICMalmem dataset, which has equal numbers of malware and benign files; K-Nearest Neighbor achieved no false positives. Accuracies of 97.7%, 70%, and 96% were achieved with K-Nearest Neighbor, Decision Tree, and Support Vector Machine respectively on the Dataset_Malware.csv dataset, where K-Nearest Neighbor produced 38 false positives. The model was trained separately with the default hyperparameters of the chosen algorithms and with the optimal hyperparameters obtained from Grid Search; optimizing hyperparameters and combining features obtained with Principal Component Analysis and Chi-Square to train the model on the dataset with equal numbers of benign and malicious files (the CICMalmem dataset) yielded optimal performance with the Support Vector Machine. Future work includes employing deep learning and ensemble learning as classifiers, as well as implementing other hyperparameter optimization techniques.
18

Hartono, Seno, Anggi Perwitasari, and Herry Sujaini. "Komparasi Algoritma Nonparametrik untuk Klasifikasi Citra Wajah Berdasarkan Suku di Indonesia." Jurnal Edukasi dan Penelitian Informatika (JEPIN) 6, no. 3 (2020): 337. http://dx.doi.org/10.26418/jp.v6i3.43268.

Abstract:
Classification is a data mining method for organizing and categorizing data into different classes. This study compares nonparametric classification algorithms, namely k-Nearest Neighbor (kNN), Support Vector Machine (SVM), Decision Tree, and AdaBoost, to determine the best one for classifying face images of Indonesians from the Batak, Dayak, Javanese, Malay, and Chinese ethnic groups. The Orange Data Mining Tool was used to carry out the data mining process. Of the four algorithms, SVM gave better accuracy than the others. The average precision values of the four algorithms were, in order: Support Vector Machine 37.5%, followed by k-Nearest Neighbor 31.55%, AdaBoost 30.25%, and Decision Tree 29.75%.
19

Zuo, Chaoji, and Dong Deng. "ARKGraph: All-Range Approximate K-Nearest-Neighbor Graph." Proceedings of the VLDB Endowment 16, no. 10 (2023): 2645–58. http://dx.doi.org/10.14778/3603581.3603601.

Abstract:
Given a collection of vectors, the approximate K-nearest-neighbor graph (KGraph for short) connects every vector to its approximate K nearest neighbors (KNN for short). KGraph plays an important role in high-dimensional data visualization, semantic search, manifold learning, and machine learning. The vectors are typically vector representations of real-world objects (e.g., images and documents), which often come with a few structured attributes, such as timestamps and locations. In this paper, we study the all-range approximate K-nearest-neighbor graph (ARKGraph) problem. Specifically, given a collection of vectors, each associated with a numerical search key (e.g., a timestamp), we aim to build an index that takes a search key range as the query and returns the KGraph of the vectors whose search keys are within the query range. ARKGraph can facilitate interactive high-dimensional data visualization, data mining, etc. A key challenge of this problem is the huge index size: given n vectors, a brute-force index stores a KGraph for every search key range, which results in O(Kn^3) index size, as there are O(n^2) search key ranges and each KGraph takes O(Kn) space. We observe that the KNN of a vector in nearby ranges are often the same and can be grouped together to save space. Based on this observation, we propose a series of novel techniques that reduce the index size significantly, to just O(Kn log n) in the average case. Furthermore, we develop an efficient indexing algorithm that constructs the optimized ARKGraph index directly, without exhaustively calculating the distance between every pair of vectors. To process a query, for each vector in the query range, we only need O(log log n + K log K) time to restore its KNN in the query range from the optimized ARKGraph index. We conducted extensive experiments on real-world datasets. Experimental results show that our optimized ARKGraph index achieved a small index size, low query latency, and good scalability. Specifically, our approach was 1000x faster than the baseline method that builds a KGraph for all the vectors in the query range on the fly.
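The K-nearest-neighbor graph at the core of this work connects each vector to its K closest vectors. A brute-force O(n^2) construction can be sketched as follows (the optimized ARKGraph index itself is far more involved; the toy points are illustrative):

```python
import math

def kgraph(vectors, k):
    """Exact KGraph by brute force: for every vector, sort all the other
    vectors by distance and keep the k closest as its out-edges."""
    graph = {}
    for i, v in enumerate(vectors):
        others = sorted((j for j in range(len(vectors)) if j != i),
                        key=lambda j: math.dist(v, vectors[j]))
        graph[i] = others[:k]
    return graph

points = [[0.0], [1.0], [10.0], [11.0]]
print(kgraph(points, k=1))  # {0: [1], 1: [0], 2: [3], 3: [2]}
```

The ARKGraph problem asks for this graph restricted to an arbitrary search-key range; materializing `kgraph` per range is exactly the O(Kn^3) brute-force baseline the paper's compressed index avoids.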
20

Tjikdaphia, Nadya Bethry Balqies, and Sulastri Sulastri. "COMPARISON OF NBC, SVM, KNN CLASSIFICATION RESULTS IN SENTIMENT ANALYSIS OF MOBILE JKN." JURTEKSI (Jurnal Teknologi dan Sistem Informasi) 9, no. 4 (2023): 665–72. http://dx.doi.org/10.33330/jurteksi.v9i4.2539.

Abstract:
The JKN Mobile application is a mobile application created to facilitate healthcare administration in Indonesia since 2017. The application has been downloaded by over 10 million users and has received 484,000 diverse reviews, including positive, negative, and neutral feedback. The average rating given by users is 4.5 out of 5 stars. This research performs sentiment analysis on the user reviews found in the Google Play Store review column. The methods used for sentiment analysis are Naive Bayes, K-Nearest Neighbor (K-NN), and Support Vector Machine (SVM). The test results show that with a 10% test data and 90% training data proportion, the SVM method achieves the highest accuracy of 95%, followed by Naive Bayes with an accuracy of 87% and K-NN with an accuracy of 75%.

Keywords: JKN mobile application, sentiment analysis, naive bayes, k-nearest neighbor (K-NN), support vector machine (SVM)
21

Al-Thwaib, Eman, and Waseem Al-Romimah. "Support Vector Machine versus k-Nearest Neighbor for Arabic Text Classification." International Journal of Sciences Volume 3, no. 2014-06 (2014): 1–5. https://doi.org/10.5281/zenodo.3348731.

Abstract:
Text Classification (TC), or text categorization, can be described as the act of assigning text documents to predefined classes or categories. The need for automatic text classification came from the large number of electronic documents on the web. Classification accuracy is affected by the documents' content and the classification technique being used. In this research, automatic Support Vector Machine (SVM) and k-Nearest Neighbor (kNN) classifiers are developed and compared in classifying 800 Arabic documents into four categories (sport, politics, religion, and economy). The experimental results are presented in terms of F1-measure, precision, and recall.
22

Umar, Rusydi, Imam Riadi, and Dewi Astria Faroek. "A Komparasi Image Matching Menggunakan Metode K-Nearest Neightbor (KNN) dan Support Vector Machine (SVM)." Journal of Applied Informatics and Computing 4, no. 2 (2020): 124–31. http://dx.doi.org/10.30871/jaic.v4i2.2226.

Abstract:
Image matching is the process of finding digital images that have a degree of similarity, here approached with classification methods. The images used are an original logo image and manipulated versions of it. Two classification algorithms are compared, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) with Sequential Minimal Optimization (SMO), with the match computed from the accuracy value. The K-Nearest Neighbor (KNN) classification method is based on proximity, i.e. the K computation, while the Support Vector Machine (SVM) classification method measures the distance between the hyperplane and the closest data. Image-match quality is measured with Precision, Recall, F1-Score, and Accuracy. The image-matching steps run from data preprocessing, through HSV color and shape feature extraction, to the classification stage. Ten digital images were used, consisting of one original logo and 9 manipulated logos. The classification testing stage used the WEKA application with 10-fold cross-validation. The tests show that the K-Nearest Neighbor (KNN) classification method reached 80%, with k = 0.889, which is quite good at measuring proximity, while the SVM classification method reached 70%. From this image-matching comparison it can be concluded that the K-Nearest Neighbor classification method works better than SVM for image matching.
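The 10-fold cross-validation used in the WEKA tests partitions the sample indices into ten folds, each serving once as the test set. A minimal sketch of that bookkeeping (assigning folds by index stride is an illustrative choice; WEKA shuffles and can stratify):

```python
def kfold_splits(n, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation over
    n samples: each fold is the held-out test set exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        yield train, test

splits = list(kfold_splits(20, k=10))
```

Averaging the per-fold accuracies gives the single cross-validated figure that the comparison above reports for each classifier.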
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Hongxiao, Mian Xiang Xiang, Bingtao Zhou, Li Zhu, Yaqiong Duan, and Xiaoyan Zhang. "Partial Discharge Detection Method for Distribution Network Based on Feature Engineering." Journal of Physics: Conference Series 2456, no. 1 (2023): 012048. http://dx.doi.org/10.1088/1742-6596/2456/1/012048.

Full text
Abstract:
Abstract Partial discharge on overhead lines in a distribution network is usually caused by the concentration of a local electric field inside or on the surface of electrical equipment. To address the partial discharge problem, a machine learning model for detecting partial discharge on distribution network overhead lines is proposed. Feature engineering is first used to extract signal features characterizing different aspects of the signal; the K-Nearest Neighbor, back propagation, and Support Vector Machine (SVM) classification algorithms are then tested. Experimental results show that, when classifying time-domain feature signals based on the selected feature engineering, the K-Nearest Neighbor algorithm has a better classification and recognition effect than the back propagation and support vector machine algorithms, with an accuracy of 97.20%, a recall of 96.30%, and an F value of 96.73%. In frequency-domain feature recognition and classification, the K-Nearest Neighbor algorithm achieves 98.95% accuracy, 99.42% recall, and an F value of 97.61%. Compared with the back propagation and support vector machine algorithms, the K-Nearest Neighbor algorithm has the highest detection accuracy in frequency-domain feature detection of partial discharge on overhead lines of a distribution network.
APA, Harvard, Vancouver, ISO, and other styles
24

Wibowo, V. V. P., Z. Rustam, S. Hartini, F. Maulidina, I. Wirasati, and W. Sadewo. "Ovarian cancer classification using K-Nearest Neighbor and Support Vector Machine." Journal of Physics: Conference Series 1821, no. 1 (2021): 012007. http://dx.doi.org/10.1088/1742-6596/1821/1/012007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Prasetyo, Eko, R. Dimas Adityo, Nanik Suciati, and Chastine Fatichah. "Multi-class K-support Vector Nearest Neighbor for Mango Leaf Classification." TELKOMNIKA (Telecommunication Computing Electronics and Control) 16, no. 4 (2018): 1826. http://dx.doi.org/10.12928/telkomnika.v16i4.8482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Xu, Yitian. "K-nearest neighbor-based weighted multi-class twin support vector machine." Neurocomputing 205 (September 2016): 430–38. http://dx.doi.org/10.1016/j.neucom.2016.04.024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Xie, Fan, and Yitian Xu. "An efficient regularized K-nearest neighbor structural twin support vector machine." Applied Intelligence 49, no. 12 (2019): 4258–75. http://dx.doi.org/10.1007/s10489-019-01505-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Angula, Taapopi John, and Valerianus Hashiyana. "Detection of Structured Query Language Injection Attacks Using Machine Learning Techniques." International Journal of Computer Science and Information Technology 15, no. 4 (2023): 13–26. http://dx.doi.org/10.5121/ijcsit.2023.15402.

Full text
Abstract:
This paper presents a comparative analysis of various machine learning classification models for structured query language injection prevention. The objective is to identify the best-performing model in terms of accuracy on a given dataset. The study utilizes popular classifiers such as Logistic Regression, Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbors, and Support Vector Machine. Based on the tests used to evaluate the classifiers, Naïve Bayes achieves the highest detection accuracy. The results show a 97.06% detection rate for Naïve Bayes, followed by Logistic Regression (96.10%), Support Vector Machine (95.86%), Random Forest (95.30%), Decision Tree (90.69%), and K-Nearest Neighbor (69.37%). The code snippet provided demonstrates the implementation and evaluation of these models.
APA, Harvard, Vancouver, ISO, and other styles
29

Pamungkas, Adji Surya, and Nuri Cahyono. "Analisis Sentimen Review ChatGPT di Play Store menggunakan Support Vector Machine dan K-Nearest Neighbor." Edumatic: Jurnal Pendidikan Informatika 8, no. 1 (2024): 1–10. http://dx.doi.org/10.29408/edumatic.v8i1.24114.

Full text
Abstract:
The ChatGPT application for Android was launched on July 25, 2023, and the language model from OpenAI achieved a rating of 4.8 until early 2024. Despite the majority of positive reviews, user reports stating that ChatGPT provides inaccurate answers raise concerns about the reliability of this application. This research aims to compare the models of the Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) algorithms in analyzing the sentiment of ChatGPT application reviews. Utilizing text mining methods to extract information from text, data was collected from Google Play Store reviews using data scraping techniques and analyzed with Support Vector Machine and K-Nearest Neighbor algorithms. Cross-validation with 5 folds and data split using 80% training and 20% testing data were applied to evaluate the performance of both algorithms. The sentiment classification results showed that the Support Vector Machine algorithm achieved an average accuracy of 80%, while K-Nearest Neighbor reached 71%. SVM excels due to its ability to overcome KNN's limitations regarding less relevant features that do not significantly contribute to predictions. The findings of this study are expected to help developers understand and respond to user feedback regarding the reliability of ChatGPT.
APA, Harvard, Vancouver, ISO, and other styles
30

Putra, Raymond Chandra. "Pembangunan Perangkat Pendeteksi Jenis Gerakan Raket Bulu Tangkis Dengan Algoritma KNN dan SVM." Teknika 9, no. 2 (2020): 113–20. http://dx.doi.org/10.34148/teknika.v9i2.291.

Full text
Abstract:
The Internet of Things (IoT) can be applied in many fields, including badminton training. In badminton, beginners in particular find it difficult to know whether their strokes are performed correctly. In this research, an embedded system mounted on the racket was built to capture stroke motion data. The stroke data are sent to software that can detect the type of badminton racket motion. The embedded system consists of an Arduino with accelerometer and gyroscope sensors. Stroke data are stored in a database through a web service. The software applies supervised machine learning, namely classification. The classification algorithm used is k-Nearest Neighbor, and its results are compared with another algorithm, Support Vector Machine. Testing was carried out by collecting training data used by the classification algorithms to predict the motion, and the performance of both classification algorithms was measured and compared. From the test results, it is concluded that the Support Vector Machine algorithm performs better than k-Nearest Neighbor in detecting racket motion. Moreover, the better performance of the Support Vector Machine was achieved with less training data than k-Nearest Neighbor.
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Xiang, Shang Bing Gao, and Ying Quan Chen. "Application of Fuzzy Support Vector Machine in Chalky Rice Identification." Advanced Materials Research 507 (April 2012): 202–7. http://dx.doi.org/10.4028/www.scientific.net/amr.507.202.

Full text
Abstract:
In order to improve the identification accuracy of the fuzzy support vector machine for chalky rice, this paper puts forward a fuzzy support vector machine method based on fuzzy K-nearest neighbors. The method first obtains a center for each class by computing the class sample mean; it then computes each sample's initial membership from its distance to the center; finally, it finds the K neighbor points of each sample, computes a membership according to the fuzzy K-neighbor method, and blends the initial membership with the fuzzy K-neighbor membership at a fixed proportion to obtain the final membership values. The validity of the method is verified on rice image detection problems. Experiments show that this method improves both the accuracy and the speed of identification, with better results than the common fuzzy support vector machine.
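The two-stage membership scheme this abstract describes can be sketched in a few lines of numpy. The 1/(1+d) membership form, the equal blend weight, and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuzzy_memberships(X, y, k=3, alpha=0.5):
    """Two-stage membership: an initial membership from the distance to
    the class center, blended with a fuzzy k-nearest-neighbor membership.
    The 1/(1+d) form and the blend weight alpha are assumptions."""
    classes = np.unique(y)
    centers = {c: X[y == c].mean(axis=0) for c in classes}
    # Stage 1: initial membership from distance to the sample's own class center
    d_center = np.array([np.linalg.norm(x - centers[c]) for x, c in zip(X, y)])
    init = 1.0 / (1.0 + d_center)           # closer to the center -> higher membership
    # Stage 2: fuzzy k-NN membership = share of the k neighbors with the same label
    knn = np.zeros(len(X))
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                    # exclude the sample itself
        nn = np.argsort(dist)[:k]
        knn[i] = np.mean(y[nn] == y[i])
    # Final membership: blend the two stages at a fixed proportion alpha
    return alpha * init + (1 - alpha) * knn
```

The resulting memberships would then weight each sample's contribution in the fuzzy SVM training objective.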
APA, Harvard, Vancouver, ISO, and other styles
32

Yuan, Xinpan, Qunfeng Liu, Jun Long, Lei Hu, and Songlin Wang. "Multi-PQTable for Approximate Nearest-Neighbor Search." Information 10, no. 6 (2019): 190. http://dx.doi.org/10.3390/info10060190.

Full text
Abstract:
Image retrieval, or content-based image retrieval (CBIR), can be transformed into the calculation of distances between image feature vectors: the closer the vectors, the higher the image similarity. In an image retrieval system for a large-scale dataset, approximate nearest-neighbor (ANN) search can quickly obtain the top k images closest to the query image, the Top-k problem in information retrieval. With traditional ANN algorithms such as KD-Tree, R-Tree, and M-Tree, the computing time increases exponentially as the dimension of the image feature vector grows, due to the curse of dimensionality. In order to reduce the calculation time and improve the efficiency of image retrieval, we propose an ANN search algorithm based on the Product Quantization Table (PQTable). After quantizing and compressing the image feature vectors with the product quantization algorithm, we construct the PQTable image index structure, which speeds up image retrieval. We also propose a multi-PQTable query strategy for ANN search, and we generate several nearest-neighbor vectors for each sub-compressed vector of the query vector to reduce the failure rate and improve recall in image retrieval. Theoretical analysis and experimental verification show that the multi-PQTable query strategy and the generation of several nearest-neighbor vectors are correct and efficient.
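The product-quantization step behind the PQTable can be sketched as follows. The codebooks here are given directly rather than learned by k-means, and all names are illustrative, not the paper's code:

```python
import numpy as np

def pq_encode(X, codebooks):
    """Product quantization: split each D-dim vector into M sub-vectors
    and store, per sub-vector, the index of its nearest codeword."""
    subs = np.split(X, len(codebooks), axis=1)
    return np.stack(
        [np.argmin(((s[:, None, :] - cb[None, :, :]) ** 2).sum(-1), axis=1)
         for s, cb in zip(subs, codebooks)], axis=1)

def pq_search(q, codes, codebooks, topk=1):
    """Asymmetric distance computation: build one distance table per
    subspace for the query, then each database item costs only M table
    lookups instead of a full D-dim distance."""
    qs = np.split(q, len(codebooks))
    tables = [((cb - qq) ** 2).sum(-1) for qq, cb in zip(qs, codebooks)]
    dists = sum(t[codes[:, m]] for m, t in enumerate(tables))
    return np.argsort(dists)[:topk]
```

A real PQTable additionally hashes the codes into a lookup table so candidates are found without scanning all codes; the sketch shows only the encoding and the asymmetric distance computation.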
APA, Harvard, Vancouver, ISO, and other styles
33

Ansori, Yusuf, and Khadijah Fahmi Hayati Holle. "Perbandingan Metode Machine Learning dalam Analisis Sentimen Twitter." Jurnal Sistem dan Teknologi Informasi (JustIN) 10, no. 4 (2022): 429. http://dx.doi.org/10.26418/justin.v10i4.51784.

Full text
Abstract:
Differences in understanding among the public often arise when the government issues new policies, among them the policy on handling sexual violence cases on campuses set out in Regulation of the Minister of Education, Culture, Research and Technology Number 30 of 2021, so an in-depth study through sentiment analysis is needed. Many algorithms are used in sentiment analysis research; in this study, four machine learning classification algorithms (Support Vector Machine, K-Nearest Neighbor, Naïve Bayes Classifier, and Logistic Regression) were compared on performance. The research data consist of 470 tweets, 236 labeled positive and 238 labeled negative, collected between October and December. The study used the RapidMiner software with k-fold cross-validation to split training and test data randomly. The machine learning algorithms performed differently on sentiment analysis: among those tested, the highest accuracy was obtained by Support Vector Machine at 69.15%, the highest precision by K-Nearest Neighbor at 69.07%, the highest recall by Support Vector Machine at 71.98%, and the highest f-measure by K-Nearest Neighbor at 68.08%.
APA, Harvard, Vancouver, ISO, and other styles
34

Saidah, Sofia, Muhammad Bayu Adinegara, Rita Magdalena, and Nor Kumalasari Caecar. "Identifikasi Kualitas Beras Menggunakan Metode k-Nearest Neighbor dan Support Vector Machine." TELKA - Telekomunikasi, Elektronika, Komputasi dan Kontrol 5, no. 2 (2019): 114–21. http://dx.doi.org/10.15575/telka.v5n2.114-121.

Full text
Abstract:
Rice is the staple food of the majority of the Indonesian population. The varying quality of rice on the market demands supervision of rice quality standards. Visual observation of rice quality is prone to error because each observer's subjectivity differs. This research detects rice quality based on image morphology. The system was designed with two different classification methods, k-Nearest Neighbor (K-NN) and Support Vector Machine (SVM), to obtain the system with the better method. The results show that the system can identify rice quality with a best accuracy of 96.67% using the Euclidean K-NN method with k = 1, and 96.67% using the SVM OAO and OAA parameters with a Polynomial kernel and kernel option 7.
APA, Harvard, Vancouver, ISO, and other styles
35

Utami, Lila Dini. "KOMPARASI ALGORITMA KLASIFIKASI PADA ANALISIS REVIEW HOTEL." Jurnal Pilar Nusa Mandiri 14, no. 2 (2018): 261. http://dx.doi.org/10.33480/pilar.v14i2.1023.

Full text
Abstract:
Nowadays it is very easy to express opinions, oral or written, about anything. These opinions can be used for decision-making by business people, especially service providers such as hotels, and will be very useful in developing the hotel business itself. The review data, however, must be processed with the right algorithm, so this study was conducted to find out which algorithm is more feasible to use to obtain the highest accuracy. The methods used are Naïve Bayes (NB), Support Vector Machine (SVM), and k-Nearest Neighbor (k-NN). From the process carried out, the accuracy of Naïve Bayes is 71.50% with an AUC of 0.500, Support Vector Machine 72.50% with an AUC of 0.936, and k-Nearest Neighbor 75.00% with an AUC of 0.500. Using the k-Nearest Neighbor algorithm can help in making more appropriate decisions for hotel reviews at this time.
APA, Harvard, Vancouver, ISO, and other styles
36

Prasetyo, Eko. "K- Support Vector Nearest Neighbor: Classification Method, Data Reduction, and Performance Comparison." JEECS (Journal of Electrical Engineering and Computer Sciences) 1, no. 1 (2016): 1–6. http://dx.doi.org/10.54732/jeecs.v1i1.180.

Full text
Abstract:
The use of data mining over the past two decades has made harnessing data sets important, because the resulting information is very valuable; the big obstacle for data mining tasks, however, is the very large amount of data. A very large volume is indeed characteristic of data mining when extracting information, but too much data also degrades performance. In classification problems, data not positioned on the decision boundary are less useful and make the classification method inefficient. K-Support Vector Nearest Neighbor (K-SVNN) is presented to answer this problem of very large data. K-SVNN is able to reduce a very large amount of data with good accuracy and without degrading performance. Performance comparisons with a number of classification methods also prove that K-SVNN provides good accuracy: among the five compared methods, K-SVNN ranked in the top three. The difference in accuracy between K-SVNN and the other methods is less than 0.66% on the Iris data set and 20.29% on the Wine data set.
APA, Harvard, Vancouver, ISO, and other styles
37

Alifta Putri Ramadhani. "Analisis Performa Algoritma Support Vector Machine dan Algoritma K-Nearest Neighbors untuk Kasus Penyakit Mulut dan Kuku pada Sapi di Jawa Timur." JOURNAL ZETROEM 6, no. 1 (2024): 73–78. http://dx.doi.org/10.36526/ztr.v6i1.3489.

Full text
Abstract:
Foot-and-mouth disease (FMD) is currently spreading in Indonesia. The disease generally attacks even-toed (cloven-hoofed) animals such as cattle, buffalo, sheep, and goats. The disease is not transmitted to humans; it is not zoonotic. Predicting foot-and-mouth disease in cattle is a problem that can be solved with machine learning, and different methods yield different accuracies. This study aims to compare the performance of the Support Vector Machine and K-Nearest Neighbor algorithms. The dataset consists of 540 rows and 12 columns. The Support Vector Machine was run with several kernels, namely the rbf, linear, poly, and sigmoid kernels, while K-Nearest Neighbors used K = 1 to K = 20. Several train-test split scenarios were also used: first 70% training and 30% testing data, second 80% training and 20% testing, and third 90% training and 10% testing. Both the Support Vector Machine and K-Nearest Neighbors algorithms were used to obtain relevant and accurate predictions of foot-and-mouth disease in cattle. The results for both algorithms can be considered good because both achieved a high accuracy of 100%.
APA, Harvard, Vancouver, ISO, and other styles
38

Solikin, Steven, Angga Dwinovantyo, Henry Munandar Manik, Sri Pujiyati, and Susilohadi Susilohadi. "Combining Two Classification Methods for Predicting Jakarta Bay Seabed Type Using Multibeam Echosounder Data." Journal of Applied Geospatial Information 7, no. 2 (2023): 898–903. http://dx.doi.org/10.30871/jagi.v7i2.6363.

Full text
Abstract:
Classification of seabed types from multibeam echosounder data using machine learning techniques, such as Random Forest (RF), Artificial Neural Network (ANN), Support Vector Machine (SVM), and Nearest Neighbor (NN), has been widely used in recent decades. This study combines the two most frequently used machine learning techniques to classify and map seabed sediment types from multibeam echosounder data. The classification model developed in this study combines two machine learning classification techniques, Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN), in a technique called SV-KNN. Simply put, SV-KNN adopts both techniques in its classification process: it begins by determining the test data through specifying support vectors and hyperplanes, as in the SVM method, and then executes the classification using K-NN. Clay, fine silt, medium silt, coarse silt, and fine sand are the five main classes produced by SV-KNN. The SV-KNN method achieves an overall accuracy of 87.38% and a Kappa coefficient of 0.3093.
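The two-stage pipeline described here (an SVM step to pick support vectors, then K-NN on the reduced set) can be sketched as below. The sub-gradient SVM loop is a minimal Pegasos-style stand-in for a real solver, and the margin threshold and fallback are assumptions, not the authors' implementation:

```python
import numpy as np

def svknn_predict(X, y, x_query, k=1, epochs=200, lam=0.01):
    """SV-KNN sketch: train a tiny linear SVM by sub-gradient descent,
    keep only training points on or inside the margin as 'support
    vectors', then run k-NN on that reduced set."""
    y_pm = np.where(y == np.unique(y)[0], -1.0, 1.0)   # map labels to {-1, +1}
    w, b = np.zeros(X.shape[1]), 0.0
    for t in range(1, epochs + 1):
        eta = 1.0 / (lam * t)                          # decaying step size
        for xi, yi in zip(X, y_pm):
            if yi * (xi @ w + b) < 1:                  # margin violation
                w = (1 - eta * lam) * w + eta * yi * xi
                b += eta * yi
            else:
                w = (1 - eta * lam) * w
    margins = y_pm * (X @ w + b)
    sv = margins <= 1.0 + 1e-6                         # points on/inside the margin
    if len(np.unique(y[sv])) < 2:                      # fallback: keep everything
        sv[:] = True
    Xs, ys = X[sv], y[sv]
    d = np.linalg.norm(Xs - x_query, axis=1)
    nn = np.argsort(d)[:min(k, len(Xs))]
    labels, counts = np.unique(ys[nn], return_counts=True)
    return labels[np.argmax(counts)]
```

The appeal of the combination is that the K-NN step only searches the boundary points the SVM retained, rather than the full training set.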
APA, Harvard, Vancouver, ISO, and other styles
39

Setiyorini, Tyas, and Rizky Tri Asmono. "PENERAPAN METODE K-NEAREST NEIGHBOR DAN INFORMATION GAIN PADA KLASIFIKASI KINERJA SISWA." JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer) 5, no. 1 (2019): 7–14. http://dx.doi.org/10.33480/jitk.v5i1.613.

Full text
Abstract:
Education is a very important issue in the development of a country. One way to reach a high quality of education is to predict students' academic performance. The methods currently used are ineffective because evaluation is based solely on the educator's assessment of information about students' learning progress, and such information is not enough to form indicators for evaluating student performance and helping students and educators make improvements in learning and teaching. K-Nearest Neighbor is an effective method for classifying student performance, but it has problems with large vector dimensions. This study aims to predict students' academic performance using the K-Nearest Neighbor algorithm with the Information Gain feature selection method to reduce the vector dimensionality. Several experiments were conducted to obtain an optimal architecture and produce accurate classifications. The results of 10 experiments with k values (1 to 10) on the student performance dataset showed a largest average accuracy of 74.068 for the K-Nearest Neighbor method alone, while K-Nearest Neighbor with Information Gain obtained a highest average accuracy of 76.553. From these tests it can be concluded that Information Gain can reduce the vector dimensionality, so that applying K-Nearest Neighbor with Information Gain improves the accuracy of student performance classification over using the K-Nearest Neighbor method alone.
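For a discrete feature, the Information Gain criterion used for feature selection reduces to the class entropy minus the split-weighted entropy. A minimal numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def entropy(y):
    """Shannon entropy (in bits) of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, y):
    """Info gain of one discrete feature: class entropy minus the
    weighted entropy of the label subsets after splitting on the
    feature's values.  Features are ranked by this score and the
    lowest-scoring dimensions are dropped before running K-NN."""
    values, counts = np.unique(feature, return_counts=True)
    weighted = sum(c / len(y) * entropy(y[feature == v])
                   for v, c in zip(values, counts))
    return entropy(y) - weighted
```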
APA, Harvard, Vancouver, ISO, and other styles
40

Rodiana, Rosdiana. "Classification of Hypertension Patients in Palembang by K-Nearest Neighbor and Local Mean K-Nearest Neighbor." Journal of Statistics and Data Science 3, no. 1 (2024): 27–35. https://doi.org/10.33369/jsds.v3i1.32381.

Full text
Abstract:
Classification is a multivariate technique for separating different data sets from an object and allocating new objects into predefined groups. Methods that can be used for classification include k-Nearest Neighbor (KNN) and Local Mean k-Nearest Neighbor (LMKNN). The KNN method classifies objects based on the majority-voting principle, while LMKNN classifies objects based on the local mean vector of the k nearest neighbors in each class. In this study, the results of classifying hypertensive patient data at the Merdeka Health Center in Palembang City with the KNN and LMKNN methods were compared by examining the accuracy and the smallest APER value produced. The results showed that, with the same proportion of training and testing data and different choices of k, classifying the hypertensive patient data with the KNN and LMKNN methods produced the same APER (error rate) of 0.0303 and the same accuracy of 96.97%.
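The difference between the two classifiers compared here can be shown directly. Illustrative numpy sketches of majority-vote KNN and local-mean LMKNN (names and tie-breaking are assumptions):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN: majority vote among the k nearest training points."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]

def lmknn_predict(X_train, y_train, x, k=3):
    """LMKNN: for each class, average its k nearest members (the local
    mean vector) and assign the class whose local mean is closest to x."""
    best, best_d = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        kk = min(k, len(Xc))
        local_mean = Xc[np.argsort(d)[:kk]].mean(axis=0)
        dm = np.linalg.norm(local_mean - x)
        if dm < best_d:
            best, best_d = c, dm
    return best
```

Because LMKNN compares per-class local means, it is less sensitive than majority voting to a single outlier among the k neighbors.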
APA, Harvard, Vancouver, ISO, and other styles
41

Labolo, Abdul Yunus, Sarlis Mooduto, Andi Bode, and Ivo Colanus Rally Drajana. "Penerapan Algoritma Spport Vector Machine dan K-Nearest Neighbor Menggunkan Feature Selection Backward Elimination Untuk Prediksi Status Penderita Stunting Pada Balita." JURNAL TECNOSCIENZA 6, no. 2 (2022): 374–88. http://dx.doi.org/10.51158/tecnoscienza.v6i2.713.

Full text
Abstract:
Stunting is malnutrition characterized by short height, measured against WHO standard deviations. The Gorontalo Provincial Health Office, particularly its nutrition division, has so far monitored stunting through each community health center (puskesmas) and integrated health post (posyandu). Monitoring and data collection on stunting at health centers across the Gorontalo region is an important factor in determining growth and development, both in the womb and after birth. A recurring problem is that the monthly data collected are inaccurate, because they are only estimates computed from health center cases. Accurate prediction is needed to overcome this problem. Data mining is defined as the extraction of valuable or useful information from very large databases. This study uses the K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) algorithms with backward elimination feature selection. Based on the experiments, the number of stunting cases was predicted using the Support Vector Machine (SVM) and k-Nearest Neighbor (K-NN) algorithms with Backward Elimination (BE). The smallest error, an RMSE of 2.476, was obtained by the k-nearest neighbor algorithm. The predicted number of stunting cases in January was 23, against an actual count of 26, for a prediction accuracy of 88.46%.
APA, Harvard, Vancouver, ISO, and other styles
42

Assegie, Tsehay Admassu. "Support Vector Machine And K-Nearest Neighbor Based Liver Disease Classification Model." Indonesian Journal of electronics, electromedical engineering, and medical informatics 3, no. 1 (2021): 9–14. http://dx.doi.org/10.35882/ijeeemi.v3i1.2.

Full text
Abstract:
Machine-learning approaches have become widely applicable in disease diagnosis and prediction because of the accuracy and precision of machine learning models in disease prediction. However, different machine learning models have different accuracy and precision, and selecting the model that yields better prediction accuracy and precision is an open research problem. In this study, we propose machine learning models for liver disease prediction using the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) learning algorithms, and we evaluate their accuracy and precision on liver disease prediction using the Indian liver disease data repository. The analysis of results showed 82.90% accuracy for SVM and 72.64% accuracy for the KNN algorithm. Based on these experimental accuracy scores, SVM performs better than the KNN algorithm on liver disease prediction.
APA, Harvard, Vancouver, ISO, and other styles
43

Waleed Naji, Ghaidaa, and Jamal Mustafa. "Satellite Images Scene Classification Based Support Vector Machines and K-Nearest Neighbor." Diyala Journal For Pure Science 15, no. 3 (2019): 70–87. http://dx.doi.org/10.24237/djps.15.03.486b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tanveer, M., K. Shubham, M. Aldhaifallah, and S. S. Ho. "An efficient regularized K-nearest neighbor based weighted twin support vector regression." Knowledge-Based Systems 94 (February 2016): 70–87. http://dx.doi.org/10.1016/j.knosys.2015.11.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Minarno, Agus Eko, Fauzi Dwi Setiawan Sumadi, Hardianto Wibowo, and Yuda Munarko. "Classification of batik patterns using K-Nearest neighbor and support vector machine." Bulletin of Electrical Engineering and Informatics 9, no. 3 (2020): 1260–67. http://dx.doi.org/10.11591/eei.v9i3.1971.

Full text
Abstract:
This study compares which of two methods, K-Nearest Neighbor and Support Vector Machine, better classifies Batik images using minimal GLCM features. The proposed steps start by converting the image to grayscale and extracting features using four GLCM statistics, Energy, Entropy, Contrast, and Correlation, each at 0°, 45°, 90°, and 135°, for 16 classifier features in total. The experimental section compares against previous work on KNN and SVM classification using the multi texton histogram (MTH). The experiments compute accuracy under data-sharing and cross-validation scenarios. From the test results, the average accuracy in the cross-validation scenario is 78.3% for KNN and 92.3% for SVM. In the data-sharing scenario, the highest accuracy is 70% for KNN and 100% for SVM. Thus, the application of the GLCM and SVM methods for extracting and classifying batik motifs is effective and better than previous work.
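The GLCM statistics listed in the abstract can be sketched for a single pixel offset as below. This is illustrative only; a real pipeline would quantize the grayscale image to a fixed number of levels and repeat the computation for the four angles to build the 16-feature vector:

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset, plus the
    four statistics named in the abstract (energy, entropy, contrast,
    correlation).  img must be an integer image with values < levels."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                g[img[i, j], img[i2, j2]] += 1        # count co-occurring pairs
    p = g / g.sum()                                   # normalize to probabilities
    ii, jj = np.indices(p.shape)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum(p * (ii - jj) ** 2)
    mu_i, mu_j = np.sum(ii * p), np.sum(jj * p)
    sd_i = np.sqrt(np.sum(p * (ii - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (jj - mu_j) ** 2))
    correlation = np.sum(p * (ii - mu_i) * (jj - mu_j)) / (sd_i * sd_j)
    return energy, entropy, contrast, correlation
```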
APA, Harvard, Vancouver, ISO, and other styles
46

Aziz, Aqliima, Cik Feresa Mohd Foozy, Palaniappan Shamala, and Zurinah Suradi. "YouTube Spam Comment Detection Using Support Vector Machine and K–Nearest Neighbor." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 2 (2018): 612. http://dx.doi.org/10.11591/ijeecs.v12.i2.pp612-619.

Full text
Abstract:
Social networking sites such as YouTube, Facebook and others are very popular nowadays. The best thing about YouTube is that users can subscribe and also give opinions in the comment section. However, this attracts spammers, who spam the comments on those videos. Thus, this study develops a YouTube spam detection framework using Support Vector Machine (SVM) and K-Nearest Neighbor (k-NN). There are five (5) phases involved in this research: Data Collection, Pre-processing, Feature Selection, Classification, and Detection. The experiments are done using Weka and RapidMiner. The accuracy results of SVM and KNN with both machine learning tools are good. Another way to avoid spam attacks is not to click the links in comments, to avoid any problems.
APA, Harvard, Vancouver, ISO, and other styles
47

Aqliima, Aziz, Feresa Mohd Foozy Cik, Shamala Palaniappan, and Suradi Zurinah. "YouTube Spam Comment Detection Using Support Vector Machine and K–Nearest Neighbor." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 2 (2018): 607–11. https://doi.org/10.11591/ijeecs.v12.i2.pp607-611.

Full text
Abstract:
Social networking sites such as YouTube, Facebook and others are very popular nowadays. The best thing about YouTube is that users can subscribe and also give opinions in the comment section. However, this attracts spammers, who spam the comments on those videos. Thus, this study develops a YouTube spam detection framework using Support Vector Machine (SVM) and K-Nearest Neighbor (k-NN). There are five (5) phases involved in this research: Data Collection, Pre-processing, Feature Selection, Classification, and Detection. The experiments are done using Weka and RapidMiner. The accuracy results of SVM and KNN with both machine learning tools are good. Another way to avoid spam attacks is not to click the links in comments, to avoid any problems.
APA, Harvard, Vancouver, ISO, and other styles
48

Agus, Eko Minarno, Dwi Setiawan Sumadi Fauzi, Wibowo Hardianto, and Munarko Yuda. "Classification of batik patterns using K-Nearest neighbor and support vector machine." Bulletin of Electrical Engineering and Informatics 9, no. 3 (2020): 1260–67. https://doi.org/10.11591/eei.v9i3.1971.

Full text
Abstract:
This study compares which of two methods, K-Nearest Neighbor and Support Vector Machine, better classifies Batik images using minimal GLCM features. The proposed steps start by converting the image to grayscale and extracting features using four GLCM statistics, energy, entropy, contrast, and correlation, each at 0°, 45°, 90°, and 135°, for 16 classifier features in total. The experimental section compares against previous work on KNN and SVM classification using the multi texton histogram (MTH). The experiments compute accuracy under data-sharing and cross-validation scenarios. From the test results, the average accuracy in the cross-validation scenario is 78.3% for KNN and 92.3% for SVM. In the data-sharing scenario, the highest accuracy is 70% for KNN and 100% for SVM. Thus, the application of the GLCM and SVM methods for extracting and classifying batik motifs is effective and better than previous work.
APA, Harvard, Vancouver, ISO, and other styles
49

Widyadhana, Arya, Cornelius Bagus Purnama Putra, Rarasmaya Indraswari, and Agus Zainal Arifin. "A Bonferroni Mean Based Fuzzy K Nearest Centroid Neighbor Classifier." Jurnal Ilmu Komputer dan Informasi 14, no. 1 (2021): 65–71. http://dx.doi.org/10.21609/jiki.v14i1.959.

Full text
Abstract:
K-nearest neighbor (KNN) is an effective nonparametric classifier that determines the neighbors of a point based only on distance proximity. The classification performance of KNN is disadvantaged by the presence of outliers in small sample size datasets, and its performance deteriorates on datasets with class imbalance. We propose a local Bonferroni Mean based Fuzzy K-Nearest Centroid Neighbor (BM-FKNCN) classifier that assigns the class label of a query sample based on the nearest local centroid mean vector, to better represent the underlying statistics of the dataset. The proposed classifier is robust to outliers because the Nearest Centroid Neighborhood (NCN) concept also considers the spatial distribution and symmetrical placement of the neighbors. The proposed classifier can also overcome class domination by neighbors in datasets with class imbalance because it averages all the centroid vectors of each class to adequately represent the distribution of the classes. The BM-FKNCN classifier is tested on datasets from the Knowledge Extraction based on Evolutionary Learning (KEEL) repository and benchmarked against classification results from the KNN, Fuzzy-KNN (FKNN), BM-FKNN, and FKNCN classifiers. The experimental results show that BM-FKNCN achieves the highest overall average classification accuracy, 89.86%, compared to the other four classifiers.
APA, Harvard, Vancouver, ISO, and other styles
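A much-simplified sketch of the local-centroid idea behind BM-FKNCN: classify by the nearest per-class centroid of the query's k nearest same-class samples. The paper's full method adds fuzzy memberships and the Bonferroni mean, both omitted here; the toy data is invented for illustration:

```python
import math

def local_centroid_predict(train, query, k=3):
    """Assign the class whose local centroid (mean of that class's k
    nearest samples to the query) is closest. Averaging damps the
    effect of a single outlier on the class representative."""
    by_class = {}
    for vec, label in train:
        by_class.setdefault(label, []).append(vec)
    best_label, best_dist = None, float("inf")
    for label, vecs in by_class.items():
        nearest = sorted(vecs, key=lambda v: math.dist(v, query))[:k]
        dim = len(query)
        centroid = [sum(v[i] for v in nearest) / len(nearest)
                    for i in range(dim)]
        d = math.dist(centroid, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

train = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((3.0, 3.0), "B"), ((3.1, 2.9), "B"), ((2.8, 3.2), "B"),
    ((9.0, 9.0), "B"),  # a class-B outlier, excluded by the k-nearest cut
]
print(local_centroid_predict(train, (1.1, 1.0)))  # near cluster A -> "A"
print(local_centroid_predict(train, (3.0, 3.0)))  # near cluster B -> "B"
```

Because each class contributes a centroid, no class can dominate the vote by sheer neighbor count, which is the imbalance argument the abstract makes.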
50

Amalia, Resti, Ahmad Faiz Zaidan, Syahrul Ramadhan, Farhan Septian, Ananta Mikail Aqsha, and Perani Rosyani. "Classification of Autoimmune Diseases Using the K-Nearest Neighbors Algorithm." Formosa Journal of Science and Technology 4, no. 1 (2025): 337–48. https://doi.org/10.55927/fjst.v4i1.13443.

Full text
Abstract:
Autoimmune diseases occur when the immune system attacks the body’s own tissues, causing serious complications and overlapping symptoms that challenge early detection. This study reviews the use of the K-Nearest Neighbors (K-NN) algorithm for classifying autoimmune diseases through a systematic literature review of five articles. Compared to methods like Genetic Algorithms, Support Vector Machines (SVM), and Single Layer Perceptrons (SLP), K-NN shows high accuracy when optimal parameters and neighbor counts are used. However, challenges include sensitivity to imbalanced data and high computational demands for large datasets. Combining K-NN with optimization techniques, such as Genetic Algorithms, enhances accuracy and stability. The study concludes that K-NN is effective for classifying autoimmune diseases, especially with hybrid approaches, and recommends further research with larger datasets.
APA, Harvard, Vancouver, ISO, and other styles
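One common mitigation for the class-imbalance sensitivity the review notes is distance-weighted voting, where closer neighbors count for more. This illustrative sketch (with invented toy data) is a generic technique, not a method taken from the reviewed articles:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=5):
    """k-NN with inverse-distance vote weighting: a nearby minority-class
    neighbor can outweigh several distant majority-class neighbors."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    scores = defaultdict(float)
    for vec, label in neighbors:
        scores[label] += 1.0 / (math.dist(vec, query) + 1e-9)
    return max(scores, key=scores.get)

# Imbalanced toy set: four "healthy" samples, two "autoimmune" samples.
train = [
    ((0.1, 0.2), "healthy"), ((0.2, 0.1), "healthy"),
    ((0.3, 0.3), "healthy"), ((0.2, 0.4), "healthy"),
    ((0.9, 0.9), "autoimmune"), ((1.0, 0.8), "autoimmune"),
]
print(weighted_knn(train, (0.95, 0.85), k=5))
```

With k=5, a plain majority vote on this query would return "healthy" (three of the five nearest neighbors), but the two very close autoimmune samples dominate the weighted score, so "autoimmune" wins.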