Academic literature on the topic 'K-Support vector nearest neighbor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'K-Support vector nearest neighbor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "K-Support vector nearest neighbor"

1

Wijaya, Aditya Surya, Nurul Chamidah, and Mayanda Mega Santoni. "Pengenalan Karakter Tulisan Tangan Dengan K-Support Vector Nearest Neighbor." IJEIS (Indonesian Journal of Electronics and Instrumentation Systems) 9, no. 1 (2019): 33. http://dx.doi.org/10.22146/ijeis.38729.

Abstract:
Handwritten characters are difficult for machines to recognize because every person has their own writing style. This research recognizes handwritten character patterns of numbers and letters using the K-Nearest Neighbour (KNN) algorithm. The handwriting recognition process works by preprocessing the handwritten image, segmenting it to obtain separate single characters, extracting features, and classifying. Feature extraction is done with the Zone method; the resulting feature data are split into training data and testing data for classification. The training data from the extracted features are reduced by K-Support Vector Nearest Neighbor (K-SVNN), and for recognizing handwritten patterns from the testing data we used K-Nearest Neighbor (KNN). Testing results show that reducing the training data using K-SVNN is able to improve handwritten character recognition accuracy.
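The K-SVNN step in this entry reduces the training set before a plain KNN classifier labels the test samples. As a minimal illustration of that second stage only, here is a sketch of a KNN classifier with Euclidean distance and majority vote in plain Python; the K-SVNN reduction itself is not shown, and the data and names are illustrative, not the paper's.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, test_point, k=3):
    """Label `test_point` by majority vote among its k nearest training samples.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbors = sorted(train, key=lambda s: euclidean(s[0], test_point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D data standing in for zone features of handwritten characters.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_predict(train, (0.05, 0.1), k=3))  # → "A"
```

In the paper's pipeline, the list passed as `train` would be the K-SVNN-reduced subset rather than the full training set.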
2

Salim, Axel Natanael, Ade Adryani, and Tata Sutabri. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K-Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 991–1001. http://dx.doi.org/10.46799/syntax-idea.v6i2.3052.

Abstract:
Many cases of email misuse can potentially harm others. Misused email is commonly known as spam, containing advertisements, scams, and even malware. This research aims to detect spam and non-spam emails based on their content using the K-Nearest Neighbor and Support Vector Machine methods, with the best value for the K-Nearest Neighbor algorithm determined using the Euclidean distance measure. Support Vector Machine and K-Nearest Neighbor can classify and detect spam or non-spam emails; K-Nearest Neighbor uses Euclidean distance with K = 1, 3, and 5. Evaluation with a confusion matrix shows that the K-Nearest Neighbor method with k = 3 achieves 92% accuracy, 91% precision, 100% recall, and a 95% F1-score. The Support Vector Machine method achieves 97% accuracy, 100% recall, and a 98% F1-score. This makes the Support Vector Machine method superior to the K-Nearest Neighbor method in this study. In addition, the model built can already be used to predict spam and non-spam from the content of new emails.
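The evaluation in this entry rests on confusion-matrix metrics. As a small stdlib-only sketch, the following shows how accuracy, precision, recall, and F1 follow from the four confusion-matrix counts; the counts here are made up for illustration and are not the paper's data.

```python
def confusion_metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example: 45 spam caught, 5 false alarms, 0 spam missed, 50 ham correct.
acc, prec, rec, f1 = confusion_metrics(tp=45, fp=5, fn=0, tn=50)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 3))  # 0.95 0.9 1.0 0.947
```

Note how, as in the abstract, recall can be 100% while precision stays below it: the classifier misses no spam but raises a few false alarms.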
3

Andryani, Ade. "Deteksi Email Spam dan Non-Spam Berdasarkan Isi Konten Menggunakan Metode K Nearest Neighbor dan Support Vector Machine." Syntax Idea 6, no. 2 (2024): 1–14. http://dx.doi.org/10.46799/syntax-idea.v6i2.3058.

Abstract:
The abstract is identical to that of entry 2 above.
Keywords: Confusion Matrix, Email, KNN, Spam, SVM
4

Basedt, Ngabdul, Eko Supriyadi, and Agus Susilo Nugroho. "Perbandingan Algoritma Klasifikasi dalam Analisis Sentimen Opini Masyarakat tentang Kenaikan Harga Bbm." Joined Journal (Journal of Informatics Education) 6, no. 2 (2024): 219. http://dx.doi.org/10.31331/joined.v6i2.2893.

Abstract:
The increase in fuel (BBM) prices has become a rather complex and controversial issue. Rising fuel prices affect various economic and social aspects in Indonesia, including inflation, production costs, and transport fares. Sentiment classification was performed with the Naïve Bayes, Support Vector Machine, and K-Nearest Neighbors algorithms to determine which sentiment classification algorithm is best. In the comparison of these sentiment classification algorithms, the highest accuracy was obtained by the Naïve Bayes algorithm at 80.28%. Second was the Support Vector Machine (SVM) algorithm with an accuracy of 73.89%. The algorithm with the lowest accuracy was K-Nearest Neighbor (KNN) at 50.00%.
5

Srinivasulureddy, Ch, and N. S. Kumar. "Analysis and Comparison for Innovative Prediction Technique of Breast Cancer Tumor using k Nearest Neighbor Algorithm over Support Vector Machine Algorithm with Improved Accuracy." CARDIOMETRY, no. 25 (February 14, 2023): 878–94. http://dx.doi.org/10.18137/cardiometry.2022.25.878884.

Abstract:
Aim: The main objective of this study is to compare the efficiency of the k-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms in detecting breast cancer tumors and to examine their accuracy, sensitivity, and precision. Materials and Methods: The data for this research on breast cancer prediction using machine learning algorithms are taken from the UCI Machine Learning Repository. The sample comprises two groups, KNN (N=20) and SVM (N=20), sized according to clincalc.com with an alpha error threshold of 0.05, a confidence interval of 95%, an enrollment ratio of 0:1, and power of 80%. Accuracy, sensitivity, and precision are calculated using MATLAB software. Result: Accuracy (%), sensitivity (%), and precision (%) are compared in SPSS using an independent-sample t-test. The accuracy of the k-Nearest Neighbor is 93.38% (p<0.001), while the accuracy of the Support Vector Machine is 97.50%. The sensitivity is 90.85% (p<0.001) for the k-Nearest Neighbor, whereas the sensitivity of the Support Vector Machine is 95.83%. The precision of the k-Nearest Neighbor is 98.48% (p<0.001), whereas the precision of the Support Vector Machine is 100%. Conclusion: The Support Vector Machine algorithm appears to have performed better than the k-Nearest Neighbor, with improved accuracy in breast cancer prediction.
6

Kumar, V. S., and K. Vidhya. "Heart Plaque Detection with Improved Accuracy using K-Nearest Neighbors classifier Algorithm in comparison with Least Squares Support Vector Machine." CARDIOMETRY, no. 25 (February 14, 2023): 1590–94. http://dx.doi.org/10.18137/cardiometry.2022.25.15901594.

Abstract:
Aim: The objective of this work is to evaluate the performance of the k-Nearest Neighbor classifier in detecting heart plaque with high accuracy and to compare it with the Least Squares Support Vector Machine. Materials and Methods: The Kaggle dataset on heart plaque disease yielded a total of 20 samples. Clincalc, with the parameters alpha, power, and enrollment ratio, is used to assess a G power of 0.08 with a 95% confidence interval for the samples. The data are divided into a training dataset (n = 489 [70 percent]) and a test dataset (n = 277 [30 percent]). Accuracy is used to assess the performance of the k-Nearest Neighbor algorithm and the Least Squares Support Vector Machine. Results: The accuracy was 86% for the k-Nearest Neighbor algorithm and 67.3% for the Least Squares Support Vector Machine technique. Since p (2-tailed) < 0.05 in the SPSS statistical analysis, a significant difference exists between the two groups. Conclusion: In this work, the k-Nearest Neighbor algorithm outperformed the Least Squares Support Vector Machine algorithm in detecting heart plaque disease in the dataset under consideration.
7

Pan, Xianli, Yao Luo, and Yitian Xu. "K-nearest neighbor based structural twin support vector machine." Knowledge-Based Systems 88 (November 2015): 34–44. http://dx.doi.org/10.1016/j.knosys.2015.08.009.

8

Xu, Yitian, and Laisheng Wang. "K-nearest neighbor-based weighted twin support vector regression." Applied Intelligence 41, no. 1 (2014): 299–309. http://dx.doi.org/10.1007/s10489-014-0518-0.

9

Nasiri, Jalal al-Din, and Amirmahmoud Mir. "An enhanced KNN-based twin support vector machine with stable learning rules." Neural Computing and Applications 32, no. 16 (2020): 12949–69. https://doi.org/10.1007/s00521-020-04740-x.

Abstract:
Among the extensions of the twin support vector machine (TSVM), some scholars have utilized the K-nearest neighbor (KNN) graph to enhance TSVM's classification accuracy. However, these KNN-based TSVM classifiers have two major issues: high computational cost and overfitting. In order to address these issues, this paper presents an enhanced regularized K-nearest-neighbor-based twin support vector machine (RKNN-TSVM). It has three additional advantages: (1) A weight is given to each sample by considering the distance to its nearest neighbors, which further reduces the effect of noise and outliers on the output model. (2) An extra stabilizer term is added to each objective function; as a result, the learning rules of the proposed method are stable. (3) To reduce the computational cost of finding the KNNs of all samples, the location-difference-of-multiple-distances-based K-nearest neighbors algorithm (LDMDBA) is embedded into the learning process of the proposed method. Extensive experimental results on several synthetic and benchmark datasets show the effectiveness of the proposed RKNN-TSVM in both classification accuracy and computational time. Moreover, the largest speedup in the proposed method reaches 14 times.
10

Mahfouz, Mohamed A. "INCORPORATING DENSITY IN K-NEAREST NEIGHBORS REGRESSION." International Journal of Advanced Research in Computer Science 14, no. 03 (2023): 144–49. http://dx.doi.org/10.26483/ijarcs.v14i3.6989.

Abstract:
The application of the traditional k-nearest neighbours to regression analysis suffers from several difficulties when only a limited number of samples is available. In this paper, two density-based decision models are proposed. To reduce testing time, a k-nearest-neighbours table (kNN-Table) is maintained to keep the neighbours of each object x along with their weighted Manhattan distance to x and a binary vector representing the increase or decrease in each dimension compared to x's values. In the first decision model, if the unseen sample has a distance to one of its neighbours x less than that of the farthest neighbour of x's neighbours, its label is estimated using linear interpolation; otherwise linear extrapolation is used. In the second decision model, for each neighbour x of the unseen sample, the distance of the unseen sample to x and the binary vector are computed. Also, the set S of nearest neighbours of x is identified from the kNN-Table. For each sample in S, a normalized distance to the unseen sample is computed using the information stored in the kNN-Table, and it is used to compute the weight of each neighbour of the neighbours of the unseen object. In both models, a weighted average of the label computed for each neighbour is assigned to the unseen object. The diversity between the two proposed decision models and the traditional kNN regressor motivates the development of an ensemble of the two proposed models along with the traditional kNN regressor. The ensemble was evaluated, and the results showed that it achieves a significant increase in performance compared to its base regressors and several related algorithms.
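The baseline that this entry's density-based models extend is the traditional kNN regressor: a weighted average of the targets of the nearest neighbours. The sketch below shows only that baseline, with plain Manhattan distance and inverse-distance weights; the kNN-Table and the two density-based decision rules from the paper are not reproduced, and the data are illustrative.

```python
def knn_regress(train, query, k=3):
    """Distance-weighted k-nearest-neighbour regression (the traditional
    baseline that density-based variants build on).

    `train` is a list of (feature_vector, target) pairs.
    """
    def dist(a, b):
        # Weighted Manhattan distance reduces to plain Manhattan with unit weights.
        return sum(abs(x - y) for x, y in zip(a, b))

    neighbors = sorted(train, key=lambda s: dist(s[0], query))[:k]
    weights = []
    for features, target in neighbors:
        d = dist(features, query)
        if d == 0:
            return target  # exact match decides the value outright
        weights.append((1.0 / d, target))
    total = sum(w for w, _ in weights)
    return sum(w * t for w, t in weights) / total

train = [((0.0,), 0.0), ((1.0,), 2.0), ((2.0,), 4.0), ((3.0,), 6.0)]
print(knn_regress(train, (1.5,), k=2))  # midway between 2.0 and 4.0 → 3.0
```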

Dissertations / Theses on the topic "K-Support vector nearest neighbor"

1

Amlathe, Prakhar. "Standard Machine Learning Techniques in Audio Beehive Monitoring: Classification of Audio Samples with Logistic Regression, K-Nearest Neighbor, Random Forest and Support Vector Machine." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7050.

Abstract:
Honeybees are one of the most important pollinating species in agriculture; three out of every four crops have the honeybee as their sole pollinator. Since 2006 there has been a drastic decrease in the bee population, which is attributed to Colony Collapse Disorder (CCD). Bee colonies fail or die without showing any traditional health symptoms that could otherwise alert beekeepers in advance about their situation. An electronic beehive monitoring system has various sensors embedded in it to extract video, audio, and temperature data that can provide critical information on colony behavior and health without invasive beehive inspections. Previously, significant patterns and information have been extracted by processing the video/image data, but no work had been done using audio data. This research takes the first step towards the use of audio data in the Electronic Beehive Monitoring System (BeePi) by enabling a path towards the automatic classification of audio samples into different classes and categories. The experimental results give initial support to the claim that monitoring bee buzzing signals from the hive is feasible, can be a good indicator for estimating hive health, and can help to differentiate normal behavior from any deviation in honeybees.
2

Sakouvogui, Kekoura. "Comparative Classification of Prostate Cancer Data using the Support Vector Machine, Random Forest, Dualks and k-Nearest Neighbours." Thesis, North Dakota State University, 2015. https://hdl.handle.net/10365/27698.

Abstract:
This paper compares four classification tools, Support Vector Machine (SVM), Random Forest (RF), DualKS, and k-Nearest Neighbors (kNN), that are based on different statistical learning theories. The dataset used is a microarray gene expression of 596 male patients with prostate cancer. After treatment, the patients were classified into a phenotype with three levels: PSA (Prostate-Specific Antigen), Systematic, and NED (No Evidence of Disease). The purpose of this research is to determine the performance rate of each classifier by selecting the optimal kernels and parameters that give the best prediction rate for the phenotype. The paper begins with a discussion of previous implementations of the tools and their mathematical theories. The results showed that three classifiers achieved a comparable performance that was above average, while DualKS did not. We also observed that SVM outperformed the kNN, RF, and DualKS classifiers.
3

Naram, Hari Prasad. "Classification of Dense Masses in Mammograms." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1528.

Abstract:
This dissertation details techniques developed to aid in the classification of tumors, non-tumors, and dense masses in a mammogram. Certain characteristics, such as texture in a mammographic image, are used to identify regions of interest as part of classification. Pattern recognition techniques such as the nearest-mean classifier and the support vector machine classifier are used to classify the features. The initial stages include processing the mammographic image to extract the relevant features necessary for classification; in the final stage the features are classified using the pattern recognition techniques mentioned above. The goal of this research is to provide medical experts and researchers an effective method to aid them in identifying tumors, non-tumors, and dense masses in a mammogram. First, the breast region is extracted from the entire mammogram by creating masks and using those masks to extract the region of interest pertaining to the tumor. A chain code is employed to extract the various regions; the extracted regions could potentially be classified as tumors, non-tumors, or dense regions. Adaptive histogram equalization is employed to enhance the contrast of the image; applying it several times produces a saturated image containing only the bright spots of the mammographic image, which appear like dense regions of the mammogram. These dense masses could be potential tumors requiring treatment. Texture characteristics of the mammographic image are used for feature extraction with the nearest-mean and support vector machine classifiers, and a total of thirteen Haralick features are used to classify the three classes.
The support vector machine classifier is used for two-class problems, and a radial basis function (RBF) kernel is used to find the best possible c and gamma values. The results suggest the best classification accuracy was achieved by the support vector machine for both tumor vs. non-tumor and tumor vs. dense masses. The maximum accuracy achieved is above 90% for tumor vs. non-tumor and 70.8% for dense masses, using 11 features with the support vector machine, which performed better than the nearest-mean classifier. Case studies were performed using two distinct datasets, each consisting of 24 patients' data in two individual views: the cranio-caudal view and the medio-lateral oblique view. From these views the region of interest could possibly be a tumor, non-tumor, or dense region (mass).
4

Janson, Lisa, and Minna Mathisson. "Data mining inom tillverkningsindustrin : En fallstudie om möjligheten att förutspå kvalitetsutfall i produktionslinjer." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301246.

Abstract:
As the adaptation towards Industry 4.0 proceeds, the possibility of using machine learning as a tool for further development of industrial production becomes increasingly profound. In this paper, a case study has been conducted at Volvo Group in Köping in order to investigate the possibility of predicting quality outcomes in the compression of hub and mainshaft. Three different machine learning models were implemented and compared against each other. A dataset containing data from Volvo's production site in Köping was used to train and evaluate the models. However, the low evaluation scores acquired indicate that the quality outcome of the compression could not be predicted given solely the variables included in that dataset. Therefore, a dataset containing three additional variables, consisting of fabricated values and a known causality between two of the variables and the quality outcome, was also used. The purpose was to investigate whether the poor evaluation metrics resulted from a non-existent pattern between the included variables and the quality outcome, or from the models not being able to find the pattern. The performance of the models when trained and evaluated on the fabricated dataset indicates that the models were in fact able to find the pattern that was known to exist. The support vector machine was the model that performed best, given the evaluation metrics chosen in this study. Consequently, if the traceability of the components were enhanced in the future and more machines in the production line transmitted production data to a connected system, it would be possible to conduct the study again with additional variables and a larger dataset. The fact that the models included in this study succeeded in finding patterns in the dataset when such patterns were known to exist motivates the use of the same models. Furthermore, it can be concluded that with enhanced traceability of the components and a larger number of machines transmitting production data to a connected system, machine learning models could be used as components in larger business monitoring systems in order to achieve efficiencies.
5

Pai, Chih-Yun. "Automatic Pain Assessment from Infants’ Crying Sounds." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6560.

Abstract:
Crying is the means infants use to express their emotional state. It provides parents and nurses a criterion for understanding an infant's physiological state. Many researchers have analyzed infants' crying sounds to diagnose specific diseases or determine the reasons for crying. This thesis presents an automatic crying-level assessment system to classify infants' crying sounds, recorded under realistic conditions in the Neonatal Intensive Care Unit (NICU), as whimpering or vigorous crying. To analyze the crying signal, Welch's method and Linear Predictive Coding (LPC) are used to extract spectral features; the average and standard deviation of the frequency signal and the maximum power spectral density are the other spectral features used in classification. Three state-of-the-art classifiers, namely K-Nearest Neighbors, Random Forests, and Least Squares Support Vector Machine, are tested in this work. The highest accuracy in classifying whimpering and vigorous crying on the clean dataset, which is sampled 10 seconds before scoring and 5 seconds after scoring, is 90%, using K-Nearest Neighbors as the classifier.
6

Björk, Gabriella. "Evaluation of system design strategies and supervised classification methods for fruit recognition in harvesting robots." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217859.

Abstract:
This master's thesis project was carried out by one student at the Royal Institute of Technology in collaboration with Cybercom Group. The aim was to evaluate and compare system design strategies for fruit recognition in harvesting robots and the performance of supervised machine learning classification methods when applied to this specific task. The thesis covers the basics of these systems, for which parameters, constraints, requirements, and design decisions have been investigated. The framework is used as a foundation for the implementation of both the sensing system and the processing and classification algorithms. A plastic tomato plant with fruit of varying maturity was used as a basis for training and testing, and a Kinect v2 for Windows, including sensors for high-resolution color, depth, and IR data, was used for image acquisition. The obtained data were processed, and features of objects of interest were extracted, using MATLAB and an SDK for Kinect provided by Microsoft. Multiple views of the plant were acquired by having the plant rotate on a platform controlled by a stepper motor and an Arduino Uno. The algorithms tested were binary classifiers, including Support Vector Machine, Decision Tree, and k-Nearest Neighbor. The models were trained and validated using five-fold cross-validation in MATLAB's Classification Learner application. Performance metrics such as precision, recall, and the F1-score, used for accuracy comparison, were calculated. The statistical models k-NN and SVM achieved the best scores. The method considered most promising for fruit recognition purposes was the SVM.
7

VANCE, DANNY W. "AN ALL-ATTRIBUTES APPROACH TO SUPERVISED LEARNING." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1162335608.

8

Carls, Fredrik. "Evaluation of machine learning methods for anomaly detection in combined heat and power plant." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255006.

Full text
Abstract:
In the hope of increasing the detection rate of faults in combined heat and power plant boilers, and thereby lowering unplanned maintenance, three machine learning models are constructed and evaluated. The algorithms (k-Nearest Neighbor, One-Class Support Vector Machine, and Auto-encoder) have a proven track record in anomaly-detection research, but are relatively unexplored for industrial applications such as this one, owing to the difficulty of collecting non-artificial labeled data in the field. The baseline versions of the k-Nearest Neighbor and the Auto-encoder performed very similarly. The Auto-encoder was slightly better, reaching an area under the precision-recall curve (AUPRC) of 0.966 and 0.615 on the training and test periods, respectively. No sufficiently good results were reached with the One-Class Support Vector Machine. The Auto-encoder was then made more sophisticated to see how much performance could be increased; the AUPRC rose to 0.987 and 0.801 on the training and test periods, respectively. Additionally, the model detected and generated one alarm for each incident period that occurred during the test period. The conclusion is that ML can successfully be used to detect faults at an earlier stage and potentially avert otherwise costly unplanned maintenance, although there is still considerable room for improvement in both the model and the data collection.
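The AUPRC metric this abstract reports can be illustrated with a minimal, self-contained sketch of average precision, a standard approximation of the area under the precision-recall curve; the scores and labels below are invented toy data, not values from the thesis:

```python
def average_precision(scores, labels):
    """Average precision: approximates the area under the precision-recall curve.

    Ranks examples by anomaly score (descending) and averages the precision
    at each rank where a true positive (label == 1) is found.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    cum_pos, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            cum_pos += 1
            ap += cum_pos / rank  # precision at this rank
    return ap / total_pos

# Toy anomaly scores and ground-truth labels (1 = fault, 0 = normal).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]
labels = [1, 1, 0, 1, 0, 0]
print(round(average_precision(scores, labels), 3))  # 0.917
```

Perfect ranking of all faults above all normal samples gives an AUPRC of 1.0, which is why the thesis's test-period value of 0.801 indicates a usable but imperfect ranking.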
APA, Harvard, Vancouver, ISO, and other styles
9

Veras, Ricardo da Costa. "Utilização de métodos de machine learning para identificação de instrumentos musicais de sopro pelo timbre." reponame:Repositório Institucional da UFABC, 2018.

Find full text
Abstract:
Advisor: Prof. Dr. Ricardo Suyama<br>Dissertation (master's) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, Santo André, 2018.<br>In general, Pattern Classification for Signal Processing has been studied and used to interpret diverse kinds of information, manifested as images, audio, geophysical data, electrical impulses, and more. In this project we study Machine Learning techniques applied to the problem of musical instrument identification, aiming at an automatic timbre-recognition system. These techniques were applied to five instruments of the Woodwind family (Clarinet, Bassoon, Flute, Oboe, and Sax). The techniques used were kNN (with k = 3) and SVM (in a non-linear configuration), together with audio features such as MFCC (Mel-Frequency Cepstral Coefficients), ZCR (Zero Crossing Rate), and entropy, among others, as the data source for the training and testing processes. We chose instruments whose timbres are close to one another in order to examine how a classifier behaves under these specific conditions. We also observed the behavior of these techniques on audio unseen during training, as well as on passages containing a mixture of elements (generating interference for each classifier model) that could skew the results, or mixtures of elements belonging to the observed classes added together in the same audio. The results indicate that the selected features carry relevant information about the timbre of each evaluated instrument (as observed for the solos), although the accuracy obtained for some instruments was lower than expected (as observed for the duets).
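The kNN classifier (k = 3) this abstract describes can be sketched in a few lines; the 2-D "feature vectors" below are purely illustrative toy data standing in for real audio features such as MFCC means, and the instrument labels are hypothetical:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points
    under Euclidean distance."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training set: (feature vector, instrument label).
train = [((0.10, 0.20), "clarinet"), ((0.15, 0.25), "clarinet"),
         ((0.90, 0.80), "flute"), ((0.85, 0.75), "flute"),
         ((0.20, 0.10), "clarinet")]
print(knn_predict(train, (0.12, 0.22)))  # prints "clarinet"
```

In the thesis's setting, each point would instead be a feature vector extracted from an audio frame (MFCC, ZCR, entropy, etc.), but the voting mechanism is the same.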
APA, Harvard, Vancouver, ISO, and other styles
10

Wahab, Nor-Ul. "Evaluation of Supervised Machine LearningAlgorithms for Detecting Anomalies in Vehicle’s Off-Board Sensor Data." Thesis, Högskolan Dalarna, Mikrodataanalys, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-28962.

Full text
Abstract:
A diesel particulate filter (DPF) is designed to physically remove diesel particulate matter, or soot, from the exhaust gas of a diesel engine. Frequently replacing the DPF wastes resources, while waiting for full utilization is risky and very costly; so what is the optimal time/mileage at which to change the DPF? Answering this question is very difficult without knowing when the DPF was changed in a vehicle. We find the answer using supervised machine learning algorithms for detecting anomalies in vehicles' off-board sensor data (operational data of vehicles). A filter change is considered an anomaly because it is rare compared to normal data. Non-sequential machine learning algorithms for anomaly detection, namely one-class support vector machine (OC-SVM), k-nearest neighbor (K-NN), and random forest (RF), are applied for the first time to the DPF dataset. The dataset is unbalanced, and accuracy proves misleading as a performance measure for the algorithms. Precision, recall, and F1-score are found to be better measures of performance when the data is unbalanced. RF gave the highest F1-score (0.55), ahead of K-NN (0.52) and OC-SVM (0.51), meaning RF performs better than K-NN and OC-SVM; after further investigation, however, the results were judged unsatisfactory. A sequential approach might yield better results.
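The precision/recall/F1 computation this abstract relies on for unbalanced data is straightforward to reproduce; the confusion counts below are illustrative values chosen so that F1 comes out to 0.55, and are not taken from the thesis:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts.

    F1 is the harmonic mean of precision and recall; unlike accuracy,
    it ignores true negatives, so it is not inflated by a dominant
    majority class in unbalanced data.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a rare-event detector (e.g. filter changes).
p, r, f = precision_recall_f1(tp=11, fp=9, fn=9)
print(round(f, 2))  # 0.55
```

Note that with, say, 10,000 true negatives added, accuracy would be near 1.0 while F1 stays at 0.55, which is exactly why the abstract calls accuracy misleading here.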
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "K-Support vector nearest neighbor"

1

Peng, J., D. R. Heisterkamp, and H. K. Dai. "Adaptive Discriminant and Quasiconformal Kernel Nearest Neighbor Classification." In Support Vector Machines: Theory and Applications. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/10984697_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cerulli, Giovanni. "Discriminant Analysis, Nearest Neighbor, and Support Vector Machine." In Fundamentals of Supervised Machine Learning. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-41337-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bartz-Beielstein, Thomas, and Martin Zaefferer. "Models." In Hyperparameter Tuning for Machine and Deep Learning with R. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-5170-1_3.

Full text
Abstract:
This chapter presents a unique overview and a comprehensive explanation of Machine Learning (ML) and Deep Learning (DL) methods. Frequently used ML and DL methods are presented, along with their hyperparameter configurations and their features, such as type, sensitivity, and robustness, as well as heuristics for their determination, constraints, and possible interactions. In particular, we cover the following methods: k-Nearest Neighbor (KNN), Elastic Net (EN), Decision Tree (DT), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), and DL. This chapter in itself might serve as a stand-alone handbook already. It contains years of experience in transferring theoretical knowledge into a practical guide.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Yuangui, Zhonghui Hu, Yunze Cai, and Weidong Zhang. "Support Vector Based Prototype Selection Method for Nearest Neighbor Rules." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11539087_68.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sim, Doreen Ying Ying. "Redefining the White-Box of k-Nearest Neighbor Support Vector Machine for Better Classification." In Lecture Notes in Electrical Engineering. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0058-9_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pardamean, Bens, Teddy Suparyanto, Gokma Sahat Tua Sinaga, et al. "Comparison of K-Nearest Neighbor and Support Vector Regression for Predicting Oil Palm Yield." In Lecture Notes in Electrical Engineering. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29078-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pavani, S., and P. Augusta Sophy Beulet. "Prediction of Jowar Crop Yield Using K-Nearest Neighbor and Support Vector Machine Algorithms." In Futuristic Communication and Network Technologies. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4625-6_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Devak, Manjula, and C. T. Dhanya. "Downscaling of Precipitation in Mahanadi Basin, India Using Support Vector Machine, K-Nearest Neighbour and Hybrid of Support Vector Machine with K-Nearest Neighbour." In Geostatistical and Geospatial Approaches for the Characterization of Natural Resources in the Environment. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-18663-4_100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nova, David, Pablo A. Estévez, and Pablo Huijse. "K-Nearest Neighbor Nonnegative Matrix Factorization for Learning a Mixture of Local SOM Models." In Advances in Self-Organizing Maps and Learning Vector Quantization. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07695-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dash, Ritesh, and Sarat Chandra Swain. "A Review on Nearest-Neighbor and Support Vector Machine Algorithms and Its Applications." In AI in Manufacturing and Green Technology. CRC Press, 2020. http://dx.doi.org/10.1201/9781003032465-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "K-Support vector nearest neighbor"

1

Barus, Okky Putra, Felix Billie, Jusin, Jefri Junifer Pangaribuan, and Ade Maulana. "Obesity Prediction: K-Nearest Neighbor vs. Support Vector Machine." In 2024 2nd International Conference on Technology Innovation and Its Applications (ICTIIA). IEEE, 2024. https://doi.org/10.1109/ictiia61827.2024.10761704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thorpe, Balvin. "Comparing Support Vector Machine and K Nearest Neighbor Algorithms in Classifying Speech." In SoutheastCon 2025. IEEE, 2025. https://doi.org/10.1109/southeastcon56624.2025.10971498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mackenzie, Gabriel, Meyliana, Rezki Yunanda, and Kristien Margi Suryaningrum. "Comparison of a Combined Model (K-Nearest Neighbor Algorithm and Support Vector Machine Algorithm), K-Nearest Neighbor Algorithm, and Support Vector Machine Algorithm to Detect Hate Speech on Social Media." In 2024 International Conference on Information Management and Technology (ICIMTech). IEEE, 2024. https://doi.org/10.1109/icimtech63123.2024.10780891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Uwazie, Emmanuel Chinanu, Afolayan A. Obiniyi, Morufu Olalere, and Perpetua N. Achi. "Comparison of Random Forest, K-Nearest Neighbor, and Support Vector Machine Classifiers for Intrusion Detection System." In 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG). IEEE, 2024. http://dx.doi.org/10.1109/seb4sdg60871.2024.10629939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Varotto, Matteo, Florian Heinrichs, Timo Schürg, Stefano Tomasin, and Stefan Valentin. "Detecting 5G Narrowband Jammers with CNN, k-nearest Neighbors, and Support Vector Machines." In 2024 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2024. https://doi.org/10.1109/wifs61860.2024.10810672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zuhri, Fadli, Aditya Firman Ihsan, and Widi Astuti. "Performance Analysis of K-Nearest Neighbor and Support Vector Machine in Anomaly Detection in Oil and Gas Pipelines." In 2024 International Conference on Data Science and Its Applications (ICoDSA). IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10652065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sunori, S. K., Chethan M, V. S. Gaikwad, G. Ravivarman, Preetjot Singh, and P. Sharmila. "Classify Text Using K-Nearest Neighbor Algorithm to Reduce the Term Vector Space." In 2024 Global Conference on Communications and Information Technologies (GCCIT). IEEE, 2024. https://doi.org/10.1109/gccit63234.2024.10862171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rajput, G. G., and Vanita Bhimappa Doddamani. "Exploring Support Vector Machine and K-Nearest Neighbors for Pigeonpea Leaf Image Disease Detection and Classification." In 2024 4th Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2024. https://doi.org/10.1109/asiancon62057.2024.10837841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yeshvant.V and Smitha G.L. "Enhancing the Accuracy in Predicting Snow Avalanche with Support Vector Machine Algorithm Compared with K-Nearest Neighbour." In 2024 Second International Conference Computational and Characterization Techniques in Engineering & Sciences (IC3TES). IEEE, 2024. https://doi.org/10.1109/ic3tes62412.2024.10877515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Buslon Malarejes, John Stephen, Vanesa Bea Man-On Salvaleon, Joseph Espina Mission, and Max Angelo Dapitilla Perin. "A Comparative Study of Bird Species Classification Using K-Nearest Neighbors, Convolutional Neural Networks, and Support Vector Machines." In 2025 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream). IEEE, 2025. https://doi.org/10.1109/estream66938.2025.11016879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
