
Journal articles on the topic 'Naive Bayes classifier'


Consult the top 50 journal articles for your research on the topic 'Naive Bayes classifier'.


1

Hairani, Hairani, Gibran Satya Nugraha, Mokhammad Nurkholis Abdillah, and Muhammad Innuddin. "Komparasi Akurasi Metode Correlated Naive Bayes Classifier dan Naive Bayes Classifier untuk Diagnosis Penyakit Diabetes." InfoTekJar (Jurnal Nasional Informatika dan Teknologi Jaringan) 3, no. 1 (September 13, 2018): 6–11. http://dx.doi.org/10.30743/infotekjar.v3i1.558.

Abstract:
Diabetes is one of the diseases most widely suffered by people around the world, and the number of deaths it causes increases every year. Diabetes occurs because the body does not produce insulin in sufficient amounts. One way to reduce the number of deaths caused by diabetes is early diagnosis, and one technique that can be used for this is data mining. Diagnosing diabetes requires a method with the best possible accuracy. This study compares the Correlated Naive Bayes Classifier (C-NBC) and the Naive Bayes Classifier (NBC) to determine which gives the better accuracy for diagnosing diabetes. The tests show that the Correlated Naive Bayes Classifier obtains better accuracy than the Naive Bayes Classifier on the Pima Indians Diabetes dataset: 67.15% for C-NBC versus 64.33% for NBC. The C-NBC method is more accurate because it takes into account the correlation of each dataset attribute with its class. The Correlated Naive Bayes Classifier can therefore be used to diagnose diabetes, as it achieves better accuracy than the Naive Bayes Classifier.
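The abstract does not spell out how the correlation enters the model; as a rough, hypothetical illustration of the general idea, the sketch below weights each feature's Gaussian log-likelihood by the absolute correlation between that feature and the class label. The stand-in dataset (scikit-learn's built-in breast cancer data, used only because it ships with the library) and the weighting scheme are assumptions, not the authors' exact C-NBC formulation.

```python
# Hypothetical sketch of a correlation-weighted Gaussian naive Bayes.
# The weighting scheme and dataset are assumptions, not the cited C-NBC method.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in for the Pima Indians data
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Per-feature weight: absolute Pearson correlation between the feature and the class.
w = np.array([abs(np.corrcoef(X_tr[:, j], y_tr)[0, 1]) for j in range(X_tr.shape[1])])

classes = np.unique(y_tr)
priors = np.array([np.mean(y_tr == c) for c in classes])
means = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
vars_ = np.array([X_tr[y_tr == c].var(axis=0) + 1e-9 for c in classes])

def predict(X):
    # Weighted sum of per-feature Gaussian log-likelihoods plus the log prior.
    scores = []
    for p, mu, var in zip(priors, means, vars_):
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(np.log(p) + (ll * w).sum(axis=1))
    return classes[np.argmax(np.column_stack(scores), axis=1)]

print("correlation-weighted NB accuracy:", np.mean(predict(X_te) == y_te))
```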
2

Ahmad, Amir, Hamza Abujabal, and C. Aswani Kumar. "Random Subclasses Ensembles by Using 1-Nearest Neighbor Framework." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 10 (February 24, 2017): 1750031. http://dx.doi.org/10.1142/s0218001417500318.

Abstract:
A classifier ensemble is a combination of diverse and accurate classifiers. Generally, a classifier ensemble performs better than any single classifier in the ensemble. Naive Bayes classifiers are simple but popular classifiers for many applications. As it is difficult to create diverse naive Bayes classifiers, naive Bayes ensembles are not very successful. In this paper, we propose Random Subclasses (RS) ensembles for Naive Bayes classifiers. In the proposed method, new subclasses for each class are created by using 1-Nearest Neighbor (1-NN) framework that uses randomly selected points from the training data. A classifier considers each subclass as a class of its own. As the method to create subclasses is random, diverse datasets are generated. Each classifier in an ensemble learns on one dataset from the pool of diverse datasets. Diverse training datasets ensure diverse classifiers in the ensemble. New subclasses create easy to learn decision boundaries that in turn create accurate naive Bayes classifiers. We developed two variants of RS, in the first variant RS(2), two subclasses per class were created whereas in the second variant RS(4), four subclasses per class were created. We studied the performance of these methods against other popular ensemble methods by using naive Bayes as the base classifier. RS(4) outperformed other popular ensemble methods. A detailed study was carried out to understand the behavior of RS ensembles.
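A minimal sketch of the random-subclass idea, under assumptions the abstract leaves open (scikit-learn, a toy dataset, two subclasses per class, ten ensemble members): each class is split into subclasses by nearest randomly chosen seed points, a naive Bayes member is trained on the subclass labels, and subclass probabilities are folded back into the original classes.

```python
# Hedged sketch of a Random-Subclasses-style ensemble with naive Bayes members.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
classes = np.unique(y_tr)
col = {c: i for i, c in enumerate(classes)}   # class label -> vote column
K, MEMBERS = 2, 10                            # subclasses per class, ensemble size (assumed)

def make_member():
    sub_y = np.empty_like(y_tr)
    mapping = []                              # subclass id -> original class
    next_id = 0
    for c in classes:
        idx = np.where(y_tr == c)[0]
        seeds = X_tr[rng.choice(idx, size=K, replace=False)]
        # 1-NN to the random seed points defines each point's subclass.
        d = ((X_tr[idx, None, :] - seeds[None, :, :]) ** 2).sum(-1)
        sub_y[idx] = next_id + d.argmin(axis=1)
        mapping.extend([c] * K)
        next_id += K
    return GaussianNB().fit(X_tr, sub_y), np.array(mapping)

members = [make_member() for _ in range(MEMBERS)]

def predict(X):
    votes = np.zeros((len(X), len(classes)))
    for clf, mapping in members:
        proba = clf.predict_proba(X)
        for j, sub in enumerate(clf.classes_):
            votes[:, col[mapping[sub]]] += proba[:, j]  # fold subclasses back to classes
    return classes[votes.argmax(axis=1)]

print("ensemble accuracy:", np.mean(predict(X_te) == y_te))
```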
3

Nakra, Abhilasha, and Manoj Duhan. "Comparative Analysis of Bayes Net Classifier, Naive Bayes Classifier and Combination of both Classifiers using WEKA." International Journal of Information Technology and Computer Science 11, no. 3 (March 8, 2019): 38–45. http://dx.doi.org/10.5815/ijitcs.2019.03.04.

4

., Pooja, and Komal Kumar Bhatia. "Spam Detection using Naive Bayes Classifier." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 712–16. http://dx.doi.org/10.26438/ijcse/v6i7.712716.

5

., Pooja, and Komal Kumar Bhatia. "Spam Detection using Naive Bayes Classifier." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 934–38. http://dx.doi.org/10.26438/ijcse/v6i7.934938.

6

Setianingrum, Anif Hanifa, Dea Herwinda Kalokasari, and Imam Marzuki Shofi. "IMPLEMENTASI ALGORITMA MULTINOMIAL NAIVE BAYES CLASSIFIER." JURNAL TEKNIK INFORMATIKA 10, no. 2 (January 26, 2018): 109–18. http://dx.doi.org/10.15408/jti.v10i2.6822.

Abstract:
An estimated 80% or more of information is stored in the form of unstructured text. A text management system is therefore needed, namely text mining, which is believed to have high potential commercial value. One implementation of text mining is text classification, which is applied not only to documents but also to official letters. The researchers examined the Multinomial Naive Bayes Classifier for classifying outgoing letters so that the letter classification code can be determined automatically. The classification system is supported by a confix-stripping stemmer to find root words and TF-IDF for term weighting, and the tests are measured with a confusion matrix. The results show that the Multinomial Naive Bayes Classifier applied to the letter classification system achieves accuracy, precision, recall, and F-measure of 89.58%, 79.17%, 78.72%, and 77.05%, respectively.
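For readers who want to see the core of such a pipeline, a minimal scikit-learn sketch of TF-IDF weighting feeding a Multinomial Naive Bayes classifier, evaluated with a confusion matrix, is shown below; the tiny corpus and categories are invented, and the confix-stripping stemming step is omitted.

```python
# Minimal TF-IDF + Multinomial naive Bayes text classification sketch.
# The toy corpus and labels are invented; the paper's letters are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix, classification_report

docs = [
    "request for new equipment procurement",
    "invitation to the annual coordination meeting",
    "budget report for the second quarter",
    "meeting schedule change announcement",
    "equipment maintenance budget request",
    "official invitation to the graduation ceremony",
]
labels = ["procurement", "meeting", "finance", "meeting", "finance", "meeting"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB(alpha=1.0))
model.fit(docs, labels)

print(model.predict(["budget request for maintenance", "invitation to project meeting"]))
print(confusion_matrix(labels, model.predict(docs)))
print(classification_report(labels, model.predict(docs), zero_division=0))
```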
7

Utami, Putri Dinda, and Risna Sari. "Filtering Hoax Menggunakan Naive Bayes Classifier." MULTINETICS 4, no. 1 (May 30, 2018): 57. http://dx.doi.org/10.32722/multinetics.vol4.no.1.2018.pp.57-61.

8

Utami, Putri Dinda, and Risna Sari. "Filtering Hoax Menggunakan Naive Bayes Classifier." MULTINETICS 4, no. 1 (May 30, 2018): 57. http://dx.doi.org/10.32722/vol4.no1.2018.pp57-61.

9

Utami, Putri Dinda, and Risna Sari. "Filtering Hoax Menggunakan Naive Bayes Classifier." MULTINETICS 4, no. 1 (May 30, 2018): 57–61. http://dx.doi.org/10.32722/multinetics.v4i1.1179.

10

Sugahara, Shouta, and Maomi Ueno. "Exact Learning Augmented Naive Bayes Classifier." Entropy 23, no. 12 (December 20, 2021): 1703. http://dx.doi.org/10.3390/e23121703.

Abstract:
Earlier studies have shown that classification accuracies of Bayesian networks (BNs) obtained by maximizing the conditional log likelihood (CLL) of a class variable, given the feature variables, were higher than those obtained by maximizing the marginal likelihood (ML). However, differences between the performances of the two scores in the earlier studies may be attributed to the fact that they used approximate learning algorithms, not exact ones. This paper compares the classification accuracies of BNs with approximate learning using CLL to those with exact learning using ML. The results demonstrate that the classification accuracies of BNs obtained by maximizing the ML are higher than those obtained by maximizing the CLL for large data. However, the results also demonstrate that the classification accuracies of exact learning BNs using the ML are much worse than those of other methods when the sample size is small and the class variable has numerous parents. To resolve the problem, we propose an exact learning augmented naive Bayes classifier (ANB), which ensures a class variable with no parents. The proposed method is guaranteed to asymptotically estimate the identical class posterior to that of the exactly learned BN. Comparison experiments demonstrated the superior performance of the proposed method.
11

Pradeepika, Aluru, and Sabitha R. "Examination of Diabetes Mellitus for Early Forecast Using Decision Tree Classifier and an Innovative Dependent Feature Vector Based Naive Bayes Classifier." ECS Transactions 107, no. 1 (April 24, 2022): 12937–52. http://dx.doi.org/10.1149/10701.12937ecst.

Abstract:
The proposed study aims to evaluate the accuracy and precision of earlier diabetes mellitus detection using the Decision Tree and Naive Bayes classification algorithms. Materials and methods: The Naive Bayes classifier is applied to a Pima Indian diabetes dataset that consists of 769 records. A machine learning technique for earlier prediction of diabetes, comparing the Decision Tree and Naive Bayes classification algorithms, has been proposed and developed. The sample size was measured as 27 per group. The accuracy and precision of the classifiers were evaluated and recorded. Results: Accuracy was highest when predicting diabetes with the Naive Bayes classifier (76.46%), with minimum mean error, compared with the Decision Tree classifier (70.09%). There is a significant difference of 0.006 between the groups. Hence, Naive Bayes appears to be better than the Decision Tree classifier. Conclusion: The study shows that the Naive Bayes classifier exhibits better accuracy than the Decision Tree classifier in predicting diabetes.
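A comparison of this kind is straightforward to reproduce; the sketch below assumes a locally available Pima-style CSV (the file name and column layout are placeholders, not the authors' data) and reports cross-validated accuracy and precision for both classifiers.

```python
# Hedged sketch: compare Gaussian naive Bayes and a decision tree on Pima-style data.
# 'pima.csv' and its column layout are placeholders, not the authors' data file.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("pima.csv")            # last column assumed to be the 0/1 outcome
X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    prec = cross_val_score(clf, X, y, cv=10, scoring="precision").mean()
    print(f"{name}: accuracy={acc:.4f}  precision={prec:.4f}")
```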
12

Zheng, Jian, Wei Yang, Changchun Wang, Dandan Jiang, Dan Wang, Qi Yang, and Yijiao Zhang. "Research on complaint prediction based on feature weighted Naive Bayes." IOP Conference Series: Earth and Environmental Science 983, no. 1 (February 1, 2022): 012117. http://dx.doi.org/10.1088/1755-1315/983/1/012117.

Abstract:
Complaint prediction is a hot topic in current research. Because complaint prediction involves randomness, its accuracy is not high. To solve this problem, this paper proposes a complaint prediction method based on feature-weighted Naive Bayes. Based on rough set theory, we first calculate the importance of each conditional attribute and combine that importance, used as a weight, with the naive Bayes classifier to form a weighted naive Bayes classifier. We use multiple classifiers to make predictions: each later classifier performs iterative learning on the basis of the previous classifier, and finally all classifiers are given different weights to make the decision. The experimental results show that the proposed method effectively combines the weighting method and the Naive Bayes method to achieve reliable prediction of complaints.
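The abstract does not give the weighting formula; one common way to realize a feature-weighted naive Bayes is to scale each feature's log-likelihood by its weight, as in the hypothetical sketch below, where the rough-set importance computation is replaced by externally supplied weights and the toy data are invented.

```python
# Sketch of feature-weighted naive Bayes over categorical features.
# In the paper the weights come from rough-set attribute importance;
# here they are passed in as plain numbers for illustration only.
import numpy as np
from collections import defaultdict

def fit(X, y, alpha=1.0):
    model = {"priors": {}, "cond": defaultdict(dict)}
    for c in set(y):
        idx = [i for i, t in enumerate(y) if t == c]
        model["priors"][c] = len(idx) / len(y)
        for j in range(len(X[0])):
            counts = defaultdict(float)
            for i in idx:
                counts[X[i][j]] += 1
            vals = {x[j] for x in X}
            model["cond"][c][j] = {v: (counts[v] + alpha) / (len(idx) + alpha * len(vals))
                                   for v in vals}
    return model

def predict(model, x, weights):
    best, best_score = None, -np.inf
    for c, prior in model["priors"].items():
        score = np.log(prior)
        for j, v in enumerate(x):
            p = model["cond"][c][j].get(v, 1e-9)
            score += weights[j] * np.log(p)   # the weight scales each feature's influence
        if score > best_score:
            best, best_score = c, score
    return best

X = [["low", "yes"], ["high", "no"], ["high", "yes"], ["low", "no"]]
y = ["no_complaint", "complaint", "complaint", "no_complaint"]
print(predict(fit(X, y), ["high", "yes"], weights=[0.8, 0.2]))
```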
13

Pidmohylʹnyy, O. O., O. M. Tkachenko, O. I. Holubenko, and O. V. Drobyk. "Naive Bayes Classifier as one way to filter spam mail." Connectivity 142, no. 6 (2019): 58–60. http://dx.doi.org/10.31673/2412-9070.2019.065860.

14

Taheri, Sona, and Musa Mammadov. "Learning the naive Bayes classifier with optimization models." International Journal of Applied Mathematics and Computer Science 23, no. 4 (December 1, 2013): 787–95. http://dx.doi.org/10.2478/amcs-2013-0059.

Abstract:
Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with the known structure, class probabilities and conditional probabilities are calculated using training data, and then values of these probabilities are used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier where both class probabilities and conditional probabilities are considered as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real world binary classification data sets, where continuous features are discretized by applying three different methods. The performances of these models are compared with the naive Bayes classifier, tree augmented naive Bayes, the SVM, C4.5 and the nearest neighbor classifier. The obtained results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier, yet at the same time maintain its simple structure.
15

Saptono, Ristu, Meianto Eko Sulistyo, and Nur Shobriana Trihabsari. "TEXT CLASSIFICATION USING NAIVE BAYES UPDATEABLE ALGORITHM IN SBMPTN TEST QUESTIONS." Telematika 13, no. 2 (January 29, 2017): 123. http://dx.doi.org/10.31315/telematika.v13i2.1728.

Abstract:
Document classification is a growing interest in text mining research. Classification can be done based on topics, languages, and so on. This study was conducted to determine how Naive Bayes Updateable performs in classifying SBMPTN exam questions by theme. As an incremental variant of the Naive Bayes classifier, an algorithm often used in text classification, it can keep learning from new data introduced to the system even after the classifier has been built from existing data. The Naive Bayes classifier assigns exam questions to the theme of a field of study by analyzing keywords that appear in the questions. The DF-thresholding feature selection method is implemented to improve classification performance. Evaluation of the classification with the Naive Bayes classifier algorithm produces 84.61% accuracy.
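Naive Bayes Updateable is Weka's incremental naive Bayes; an analogous incremental workflow in scikit-learn uses partial_fit, as in the hedged sketch below. The toy question batches and themes are invented, since the SBMPTN questions themselves are not reproduced here.

```python
# Incremental ("updateable") naive Bayes sketch using scikit-learn's partial_fit.
# The toy question batches and theme labels are invented for illustration.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vec = HashingVectorizer(n_features=2**12, alternate_sign=False)
clf = MultinomialNB()
themes = ["math", "biology", "history"]

batch1 = (["solve the quadratic equation", "photosynthesis in green plants"],
          ["math", "biology"])
batch2 = (["the independence proclamation of 1945", "derivative of a polynomial"],
          ["history", "math"])

# The first call must list all classes; later calls keep learning from new data.
clf.partial_fit(vec.transform(batch1[0]), batch1[1], classes=themes)
clf.partial_fit(vec.transform(batch2[0]), batch2[1])

print(clf.predict(vec.transform(["integral of a polynomial function"])))
```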
16

Ji, Jie, and Qiangfu Zhao. "Applying Naive Bayes Classifier to Document Clustering." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 6 (September 20, 2010): 624–30. http://dx.doi.org/10.20965/jaciii.2010.p0624.

Abstract:
Document clustering partitions sets of unlabeled documents so that documents in clusters share common concepts. A Naive Bayes Classifier (BC) is a simple probabilistic classifier based on applying Bayes’ theorem with strong (naive) independence assumptions. BC requires a small amount of training data to estimate parameters required for classification. Since training data must be labeled, we propose an Iterative Bayes Clustering (IBC) algorithm. To improve IBC performance, we propose combining IBC with Comparative Advantage-based (CA) initialization method. Experimental results show that our proposal improves performance significantly over classical clustering methods.
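As a rough illustration of using a supervised naive Bayes model inside an unsupervised loop, the sketch below starts from random cluster labels and alternates fitting and relabeling until the assignment stops changing; the random initialization and stopping rule are assumptions, not the paper's CA-based initialization.

```python
# Hedged sketch of an iterative "Bayes clustering" loop:
# fit naive Bayes on the current labels, relabel by prediction, repeat.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, _ = load_iris(return_X_y=True)
k = 3
labels = rng.integers(0, k, size=len(X))   # random init (the paper proposes CA init)

for _ in range(50):
    clf = GaussianNB().fit(X, labels)
    new_labels = clf.predict(X)
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels

print(np.bincount(labels))                 # resulting cluster sizes
```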
17

Afeni, Babajide, Thomas Aruleba, and Iyanuoluwa Oloyede. "Hypertension Prediction System Using Naive Bayes Classifier." Journal of Advances in Mathematics and Computer Science 24, no. 2 (January 10, 2017): 1–11. http://dx.doi.org/10.9734/jamcs/2017/35610.

18

Indraja, Baisani, and K. Annapurani. "Classification of medicines using naive bayes classifier." Research Journal of Pharmacy and Technology 11, no. 5 (2018): 1940. http://dx.doi.org/10.5958/0974-360x.2018.00360.8.

19

CHEN, Lei, Wei LU, Ergude BAO, Liqiang WANG, Weiwei XING, and Yuanyuan CAI. "Naive Bayes Classifier Based Partitioner for MapReduce." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101.A, no. 5 (May 1, 2018): 778–86. http://dx.doi.org/10.1587/transfun.e101.a.778.

20

Zhang, Yi-Chen, and Lyudmila Sakhanenko. "The naive Bayes classifier for functional data." Statistics & Probability Letters 152 (September 2019): 137–46. http://dx.doi.org/10.1016/j.spl.2019.04.017.

21

Shiri Harzevili, Nima, and Sasan H. Alizadeh. "Mixture of latent multinomial naive Bayes classifier." Applied Soft Computing 69 (August 2018): 516–27. http://dx.doi.org/10.1016/j.asoc.2018.04.020.

22

Tütüncü, G. Yazgı, and Necla Kayaalp. "An Aggregated Fuzzy Naive Bayes Data Classifier." Journal of Computational and Applied Mathematics 286 (October 2015): 17–27. http://dx.doi.org/10.1016/j.cam.2015.02.004.

23

Hsu, Chung-Chian, Yan-Ping Huang, and Keng-Wei Chang. "Extended Naive Bayes classifier for mixed data." Expert Systems with Applications 35, no. 3 (October 2008): 1080–83. http://dx.doi.org/10.1016/j.eswa.2007.08.031.

24

Oladipo, Francisca Onaolapo, Ogunsanya Funmilayo Blessing, and Ezendu Ariwa. "Terrorism Detection Model using Naive Bayes Classifier." International Journal of Computer Science and Engineering 7, no. 12 (December 25, 2020): 9–15. http://dx.doi.org/10.14445/23488387/ijcse-v7i12p103.

25

Rizqa, Ifan. "KLASIFIKASI ASPIRASI MAHASISWA DENGAN NAIVE BAYES CLASSIFIER." CSRID (Computer Science Research and Its Development Journal) 11, no. 1 (March 3, 2021): 01. http://dx.doi.org/10.22303/csrid.11.1.2019.01-09.

Abstract:
Student aspirations are the various demands of students, packaged as creative ideas, that propose a process of change on some matter. Most of the aspirations submitted are complaints and expectations. Aspirations are useful for evaluation and for early detection of weaknesses in the quality system, so that the university can improve. At UDINUS this activity is handled by the DPM, the body that manages student aspirations. Aspirations are collected through a defined mechanism such as distributing questionnaires manually or via Google Forms, and the questionnaire requires students to enter their aspiration under one of the provided categories. The problem is that students sometimes choose a category that does not match the aspiration they wrote, so a system is needed that can categorize student aspirations automatically. Text document classification is the most suitable way to determine the category from the content of an aspiration. The Naive Bayes Classifier method is used because it can produce high accuracy: with 1,000 training documents, 250 for each of the categories 'Facilities and Infrastructure', 'Lecturers', 'Staffing and Academic Systems', and 'Suggestions and Feedback', it achieves 90.20% accuracy, so the method can be considered feasible to implement in this study.
26

LIDYA, WINDA, HAZMIRA YOZZA, and FERRA YANUAR. "KLASIFIKASI DAERAH TERTINGGAL DI INDONESIA MENGGUNAKAN METODE NAIVE BAYEA CLASSIFIER." Jurnal Matematika UNAND 9, no. 1 (May 29, 2020): 23. http://dx.doi.org/10.25077/jmu.9.1.23-29.2020.

Abstract:
Regional status can be predicted through classification with the Naive Bayes method. Naive Bayes is a simple probability-based prediction technique built on Bayes' theorem that achieves fairly high accuracy. The classification of regional status is determined from the indicators involved in determining that status, and for continuous indicators the Naive Bayes classification is computed using the normal distribution. Classification performance is measured with a confusion matrix, giving an accuracy of 0.905, which means the accuracy obtained is quite good for classifying regional status. Keywords: regional status, classification, Naive Bayes, normal distribution, confusion matrix, accuracy.
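For a continuous indicator, the normal-distribution variant referred to above uses the textbook Gaussian class-conditional density (this is the standard form, not a formula quoted from the article):

$$
P(x_j \mid C = c) = \frac{1}{\sqrt{2\pi\sigma_{jc}^{2}}}\,
\exp\!\left(-\frac{(x_j-\mu_{jc})^{2}}{2\sigma_{jc}^{2}}\right),
\qquad
\hat{c} = \arg\max_{c}\; P(C=c)\prod_{j} P(x_j \mid C = c),
$$

where $\mu_{jc}$ and $\sigma_{jc}^{2}$ are the mean and variance of indicator $j$ estimated from the training regions with status $c$.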
27

Adi, Sumarni, and Edi Winarko. "Klasifikasi Data NAP (Nota Analisis Pembiayaan) untuk Prediksi Tingkat Keamanan Pemberian Kredit (Studi Kasus : Bank Syariah Mandiri Cabang Luwuk Sulawesi Tengah)." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 9, no. 1 (January 31, 2015): 1. http://dx.doi.org/10.22146/ijccs.6635.

Abstract:
Every month, the Bank Syariah Mandiri branch office in Luwuk receives a growing number of credit proposals (NAP) from customers that require a quick response. A system therefore needs to be developed to mine this accumulated data for specific purposes, one of which is analyzing the risk of granting credit. Data mining techniques are used in this study to classify the level of safety of credit allowance by applying the Naive Bayes classification algorithm. The Naive Bayes classifier is an approach based on Bayes' theorem that combines prior knowledge with new knowledge, making it a simple classification algorithm with high accuracy. Before classification, the debtor data goes through preprocessing, and classification with the Naive Bayes classifier then produces a probabilistic classification model for predicting the class of subsequent debtors. Model accuracy was measured using the bootstrap, which shows that the smallest accuracy, 80%, was obtained on a sample of 100 records and the largest, 98.66%, on a sample of 463 records. Keywords: accuracy, naive bayes, data mining, classification, preprocessing, NAP
28

Finki Dona Marleny and Mambang. "COMPARISON OF K-NN AND NAÏVE BAYES CLASSIFIER FOR ASPHYXIA FACTOR." Jurnal Teknologi Informasi Universitas Lambung Mangkurat (JTIULM) 3, no. 1 (April 20, 2018): 13–17. http://dx.doi.org/10.20527/jtiulm.v3i1.23.

Abstract:
Asphyxia is influenced by several factors, including maternal factors related to the mother's condition during pregnancy and childbirth, such as maternal hypoxia. Asphyxia factor data can be modeled using a classification approach. This paper compares the k-nearest neighbor algorithm and the Naive Bayes classifier for classifying asphyxia factors. Naive Bayes uses Bayes' theorem under the assumption of independence between predictors; essentially, Bayes' theorem is used to compute the posterior probabilities. The two algorithms are analyzed on several measures such as the kappa statistic, classification error, precision, recall, F-measure, and AUC. The best classification accuracy was achieved with the KNN algorithm: 92.27% for k = 4, compared with 83.19% for Naive Bayes.
29

Gede Widnyana Putra, Ida Bagus, Made Sudarma, and I. Nyoman Satya Kumara. "Klasifikasi Teks Bahasa Bali dengan Metode Information Gain dan Naive Bayes Classifier." Majalah Ilmiah Teknologi Elektro 15, no. 2 (December 15, 2016): 81–86. http://dx.doi.org/10.24843/mite.1502.12.

30

Rahamat Basha, S., and J. K. Rani. "A Comparative Approach of Dimensionality Reduction Techniques in Text Classification." Engineering, Technology & Applied Science Research 9, no. 6 (December 1, 2019): 4974–79. http://dx.doi.org/10.48084/etasr.3146.

Abstract:
This work deals with document classification. It is a supervised learning method (it needs a labeled document set for training and a test set of documents to be classified). The procedure of document categorization includes a sequence of steps consisting of text preprocessing, feature extraction, and classification. In this work, a self-made data set was used to train the classifiers in every experiment. This work compares the accuracy, average precision, precision, and recall with or without combinations of some feature selection techniques and two classifiers (KNN and Naive Bayes). The results concluded that the Naive Bayes classifier performed better in many situations.
31

Nugroho, Kuncahyo Setyo, Istiadi Istiadi, and Fitri Marisa. "Naive Bayes classifier optimization for text classification on e-government using particle swarm optimization." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (October 17, 2019): 21–26. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.21-26.

Abstract:
One of the public e-government services is a web-based online complaints portal. Text of complaint needs to be classified so that it can be forwarded to the responsible office quickly and accurately. The standard classification approach commonly used is the Naive Bayes Classifier (NBC) and k-Nearest Neighbor (k-NN), which still classifies one label and needs to be optimized. This research aims to classify the complaint text of more than one label at the same time with NBC, which is optimized using Particle Swarm Optimization (PSO). The data source comes from the Sambat Online portal and is divided into 70 % as training data and 30 % as testing data to be classified into seven labels. NBC and k-NN algorithms are used as a comparison method to find out the performance of PSO optimization. The 10-fold cross-validation shows that NBC optimization using PSO achieves an accuracy of 87.44 % better than k-NN of 75 % and NBC of 64.38 %. The optimization model can be used to increase the effectiveness of services to e-government in society.
32

Uddin, M. A., and M. S. Ahmed. "Modified naive Bayes classifier for classification of protein-protein interaction sites." Journal of Bioscience and Agriculture Research 26, no. 02 (December 10, 2020): 2177–84. http://dx.doi.org/10.18801/jbar.260220.266.

Abstract:
The prediction of protein-protein interaction sites (PPIs) is of vital importance in biology for understanding the physical and functional interactions between molecules in living systems. There are several classification approaches for the prediction of PPI sites; the naive Bayes classifier is one of the most popular candidates. However, the ordinary naive Bayes classifier is sensitive to unusual protein sequence profiling features and sometimes gives ambiguous prediction results. To overcome this problem, we modified the naive Bayes classifier with a radial basis function (RBF) kernel for the prediction of PPI sites. We investigate the performance of the proposed method compared with popular classifiers such as linear discriminant analysis (LDA), the naive Bayes classifier (NBC), the support vector machine (SVM), AdaBoost, and k-nearest neighbor (KNN) on protein sequence profiling data. The mNBC method showed sensitivity (86%), specificity (81%), accuracy (83%), and MCC (65%) for prediction of PPI sites.
33

Susilawati, Desi Susilawati, and Dwiza Riana. "Optimization the Naive Bayes Classifier Method to diagnose diabetes Mellitus." IAIC Transactions on Sustainable Digital Innovation (ITSDI) 1, no. 1 (November 13, 2019): 78–86. http://dx.doi.org/10.34306/itsdi.v1i1.21.

Abstract:
The World Health Organization (WHO) states that diabetes mellitus is among the world's top deadly diseases. Several studies in the health sector, including on diabetes mellitus, have been carried out to detect the disease early. In this study, optimization of the naive Bayes classifier using particle swarm optimization was applied to patient data with two classes (positive and negative for diabetes mellitus) and with three classes (type 1 diabetes mellitus, type 2 diabetes mellitus, and negative). After testing the Naive Bayes Classifier and the Naive Bayes Classifier based on Particle Swarm Optimization, the Naive Bayes Classifier alone produced accuracy values of 78.88% for 2 classes and 68.50% for 3 classes; after adding Particle Swarm Optimization, accuracy increased to 82.58% and 71.29%, respectively. The classification results for 2 classes have an accuracy value higher than for 3 classes, with a difference of 11.29%.
34

Zhang, Yihong, and Adam Jatowt. "Estimating a one-class naive Bayes text classifier." Intelligent Data Analysis 24, no. 3 (May 21, 2020): 567–79. http://dx.doi.org/10.3233/ida-194669.

35

Chavan, Ronak. "A Literature Review on Hierarchical Naive Bayes Classifier." International Journal for Research in Applied Science and Engineering Technology 7, no. 4 (April 30, 2019): 179–80. http://dx.doi.org/10.22214/ijraset.2019.4032.

36

Kohonen, Jukka, Sarish Talikota, Jukka Corander, Petri Auvinen, and Elja Arjas. "A Naive Bayes Classifier for Protein Function Prediction." In Silico Biology 9, no. 1,2 (2009): 23–34. http://dx.doi.org/10.3233/isb-2009-0382.

37

Geetha, S., and R. Maniyosai. "An Improved Naive Bayes Classifier on Imbalanced Attributes." International Journal of Organizational and Collective Intelligence 9, no. 2 (April 2019): 1–15. http://dx.doi.org/10.4018/ijoci.2019040101.

Abstract:
Data plays a major and prominent role in this modern information era. Classification is a data mining task to discover the hidden information from large amounts of data stored in the repository. This process becomes extremely challenging in case of highly imbalanced dataset. Prediction from imbalanced attributes cannot be done accurately in the following case: During the training phase, the categorical variable is not observed but the test phase encounters the categorical variable and hence it assigns zero probability which leads to false prediction. To overcome this scenario, this article proposes a novel smoothing technique called optimized laplace smoothing estimation. This technique adds a bias value function to improve the accuracy of imbalanced attributes. For example, a child dataset has more attributes and the classification model is used to predict the child weight. Some of the attribute values may not be present in the child dataset due to which Naive Bayes assigns a zero for incomplete and an empty attribute. This leads to inaccurate prediction. In such cases, Naive Bayes can be further tuned by adding some new parameters as well as altering the existing optimization method. Experimental analysis shows that this novel smoothing technique enhances the classification accuracy by means of accurate predictions for imbalanced attributes.
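The zero-frequency problem described here, and the standard Laplace (add-alpha) remedy the article builds on, can be written in one line: for a value never seen with class c in training, the smoothed estimate is (count + alpha) / (n_c + alpha * V), where V is the number of distinct values. A tiny sketch with invented counts:

```python
# Zero-frequency problem and Laplace smoothing in one naive Bayes term.
# The toy counts are invented; 'alpha' is the constant the article tunes further.
def smoothed_prob(count_v_in_c, n_c, n_values, alpha=1.0):
    return (count_v_in_c + alpha) / (n_c + alpha * n_values)

# Value never seen together with this class in training:
print(smoothed_prob(0, 50, 4, alpha=0.0))   # 0.0   -> the whole product collapses to zero
print(smoothed_prob(0, 50, 4, alpha=1.0))   # ~0.0185 -> small but non-zero
```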
38

YA, Divya. "Prediction of Diabetic Retinopathy using Naive Bayes Classifier." IJIREEICE 5, no. 5 (May 15, 2017): 49–51. http://dx.doi.org/10.17148/ijireeice.2017.5509.

39

Jadon, Ekta, and Roopesh Sharma. "Data Mining: Document Classification using Naive Bayes Classifier." International Journal of Computer Applications 167, no. 6 (June 15, 2017): 13–16. http://dx.doi.org/10.5120/ijca2017913925.

40

Marcos de Moraes, Ronei, Elaine Anita de Melo Gomes Soares, and Liliane dos Santos Machado. "A double weighted fuzzy gamma naive bayes classifier." Journal of Intelligent & Fuzzy Systems 38, no. 1 (January 9, 2020): 577–88. http://dx.doi.org/10.3233/jifs-179431.

41

Suryono, Sigit, Ema Utami, and Emha Taufiq Luthfi. "KLASIFIKASI SENTIMEN PADA TWITTER DENGAN NAIVE BAYES CLASSIFIER." Angkasa: Jurnal Ilmiah Bidang Teknologi 10, no. 1 (May 23, 2018): 89. http://dx.doi.org/10.28989/angkasa.v10i1.218.

42

Feng, Xianda, Shuchen Li, Chao Yuan, Peng Zeng, and Yang Sun. "Prediction of Slope Stability using Naive Bayes Classifier." KSCE Journal of Civil Engineering 22, no. 3 (March 2018): 941–50. http://dx.doi.org/10.1007/s12205-018-1337-3.

43

Al-juaifari, Mohammad Khalaf Rahim, Abdulhussein Abdulmohson, and Radhwan Hussein Abdulzhraa Al_Sagheer. "Iraqi newborns mortality prediction using naive bayes classifier." International Journal of Research in Engineering and Innovation 05, no. 02 (2021): 123–27. http://dx.doi.org/10.36037/ijrei.2021.5205.

44

NAHAR, JESMIN, YI-PING PHOEBE CHEN, and SHAWKAT ALI. "KERNEL-BASED NAIVE BAYES CLASSIFIER FOR BREAST CANCER PREDICTION." Journal of Biological Systems 15, no. 01 (March 2007): 17–25. http://dx.doi.org/10.1142/s0218339007002076.

Abstract:
The classification of breast cancer patients is of great importance in cancer diagnosis. Most classical cancer classification methods are clinical-based and have limited diagnostic ability. Recent advances in machine learning techniques have made a great impact on cancer diagnosis. In this research, we develop a new algorithm, Kernel-Based Naive Bayes (KBNB), to classify breast cancer tumors based on mammography data. The performance of the proposed algorithm is compared with that of the classical naive Bayes algorithm and the kernel-based decision tree algorithm C4.5. The proposed algorithm is found to outperform both. We recommend the proposed algorithm as a tool to classify breast cancer patients for early cancer diagnosis.
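The abstract does not detail the kernel construction; one standard way to build a kernel-based naive Bayes is to replace each per-feature Gaussian with a kernel density estimate, as in the hedged sketch below on scikit-learn's built-in breast cancer data (a stand-in, not the paper's mammography dataset).

```python
# Hedged sketch: naive Bayes with per-feature kernel density estimates
# (a generic kernel NB, not necessarily the paper's KBNB construction).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
classes = np.unique(y_tr)
priors = {c: np.mean(y_tr == c) for c in classes}
# One univariate KDE per (class, feature) pair replaces the usual Gaussian density.
kdes = {c: [gaussian_kde(X_tr[y_tr == c][:, j]) for j in range(X_tr.shape[1])]
        for c in classes}

def predict(X):
    scores = np.zeros((len(X), len(classes)))
    for ci, c in enumerate(classes):
        s = np.log(priors[c])
        for j, kde in enumerate(kdes[c]):
            s = s + np.log(kde(X[:, j]) + 1e-300)   # avoid log(0)
        scores[:, ci] = s
    return classes[scores.argmax(axis=1)]

print("kernel naive Bayes accuracy:", np.mean(predict(X_te) == y_te))
```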
45

Arifin, Toni, and Daniel Ariesta. "PREDIKSI PENYAKIT GINJAL KRONIS MENGGUNAKAN ALGORITMA NAIVE BAYES CLASSIFIER BERBASIS PARTICLE SWARM OPTIMIZATION." Jurnal Tekno Insentif 13, no. 1 (April 16, 2019): 26–30. http://dx.doi.org/10.36787/jti.v13i1.97.

Abstract:
Chronic kidney disease (CKD) is a global public health problem with increasing prevalence and incidence of kidney failure, a poor prognosis, and high cost. The prevalence of kidney failure across Indonesia averages roughly 0.2 percent. The first step in managing kidney disease is establishing the correct diagnosis, so a method is needed to predict chronic kidney disease. Naive Bayes has several advantages: fast computation, a simple algorithm, and high accuracy. The Naive Bayes Classifier is well suited to large datasets, can handle incomplete data (missing values), and is robust to irrelevant attributes and noise in the data. To improve accuracy, Particle Swarm Optimization is used for attribute weighting. The results show that the Naive Bayes classifier based on Particle Swarm Optimization achieves a confusion matrix accuracy of 98.75% and an AUC of 99%, while plain Naive Bayes achieves a confusion matrix accuracy of 97.00% and an AUC of 99.8%.
46

Yaokumah, Winfred, and Isaac Wiafe. "Analysis of Machine Learning Techniques for Anomaly-Based Intrusion Detection." International Journal of Distributed Artificial Intelligence 12, no. 1 (January 2020): 20–38. http://dx.doi.org/10.4018/ijdai.2020010102.

Abstract:
Determining the machine learning (ML) technique that performs best on new datasets is an important factor in the design of effective anomaly-based intrusion detection systems. This study therefore evaluated four machine learning algorithms (naive Bayes, k-nearest neighbors, decision tree, and random forest) on UNSW-NB 15 dataset for intrusion detection. The experiment results showed that random forest and decision tree classifiers are effective for detecting intrusion. Random forest had the highest weighted average accuracy of 89.66% and a mean absolute error (MAE) value of 0.0252 whereas decision tree recorded 89.20% and 0.0242, respectively. Naive Bayes classifier had the worst results on the dataset with 56.43% accuracy and a MAE of 0.0867. However, contrary to existing knowledge, naïve Bayes was observed to be potent in classifying backdoor attacks. Observably, naïve Bayes performed relatively well in classes where tree-based classifiers demonstrated abysmal performance.
47

Nurdin, Nurdin, M. Suhendri, Yesy Afrilia, and Rizal Rizal. "Klasifikasi Karya Ilmiah (Tugas Akhir) Mahasiswa Menggunakan Metode Naive Bayes Classifier (NBC)." SISTEMASI 10, no. 2 (May 30, 2021): 268. http://dx.doi.org/10.32520/stmsi.v10i2.1193.

Abstract:
The final project or thesis is the result of research that addresses a problem in the student's field of study. As the number of graduates grows, the number of final project documents produced also grows, and a large collection of scientific papers is difficult to search by topic if it is not grouped; manual classification is not effective for large numbers of documents. This study builds an application for classifying students' scientific work (final projects) in the field of Informatics Engineering. The application implements the Naive Bayes Classifier algorithm based on the background section of each document and classifies documents into five categories: image processing, data mining, decision support systems, geographic information systems, and expert systems. The research stages are data collection, preprocessing, Naive Bayes Classifier computation, implementation, and system testing. The study uses 170 scientific papers, split into 150 for training and 20 for testing. The results show that the Naive Bayes Classifier is a simple algorithm that can classify scientific papers with an average accuracy of 86.68% and an average processing time of 5.7406 seconds per test. Keywords: scientific work, naive bayes classifier, classification, training, testing
48

Haryono, Haryono, Pritasari Palupiningsih, Yessy Asri, and Andi Nikma Sri Handayani. "KLASIFIKASI PESAN GANGGUAN PELANGGAN MENGGUNAKAN METODE NAIVE BAYES CLASSIFIER." KILAT 7, no. 2 (October 30, 2018): 100–108. http://dx.doi.org/10.33322/kilat.v7i2.354.

Abstract:
The customer disturbance message classifier was built because, in the current reporting process, disturbance reports submitted by customers must be sorted one by one by an administrator before they can be followed up. Naive Bayes is a machine learning method that uses probability calculations: the algorithm takes advantage of probability and statistical methods to predict future probabilities based on past experience. Applying the Naive Bayes classifier method, with text mining as the initial processor of the disturbance message data, this study achieves an accuracy of 95 percent and shows that the Naive Bayes method can help classify the disturbance messages sent by customers.
49

Kumar Bhowmik, Tapan. "Naive Bayes vs Logistic Regression: Theory, Implementation and Experimental Validation." Inteligencia Artificial 18, no. 56 (December 18, 2015): 14. http://dx.doi.org/10.4114/intartif.vol18iss56pp14-30.

Abstract:
This article presents the theoretical derivation as well as practical steps for implementing Naive Bayes (NB) and Logistic Regression (LR) classifiers. A generative learning under Gaussian Naive Bayes assumption and two discriminative learning techniques based on gradient ascent and Newton-Raphson methods are described to estimate the parameters of LR. Some limitation of learning techniques and implementation issues are discussed as well. A set of experiments are performed for both the classifiers under different learning circumstances and their performances are compared. From the experiments, it is observed that LR learning with gradient ascent technique outperforms general NB classifier. However, under Gaussian Naive Bayes assumption, both classifiers NB and LR perform similar.
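As a compact companion to that comparison, the sketch below fits a logistic regression by plain gradient ascent on the log-likelihood and compares it with Gaussian naive Bayes on synthetic data; the step size and iteration count are arbitrary illustration choices, not the paper's settings.

```python
# Gaussian naive Bayes vs. logistic regression fitted by gradient ascent (sketch).
# Synthetic data; learning rate and iteration count are illustration choices only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Logistic regression: maximize the log-likelihood by gradient ascent.
Xb = np.hstack([np.ones((len(X_tr), 1)), X_tr])        # prepend a bias column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))                   # sigmoid
    w += 0.1 * Xb.T @ (y_tr - p) / len(y_tr)            # ascent step along the gradient

Xb_te = np.hstack([np.ones((len(X_te), 1)), X_te])
lr_pred = (1.0 / (1.0 + np.exp(-Xb_te @ w)) >= 0.5).astype(int)
nb_pred = GaussianNB().fit(X_tr, y_tr).predict(X_te)

print("logistic regression accuracy:", np.mean(lr_pred == y_te))
print("gaussian naive Bayes accuracy:", np.mean(nb_pred == y_te))
```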
50

Morariu, Daniel, Radu Crețulescu, and Lucian Vințan. "Improving a SVM Meta-classifier for Text Documents by using Naive Bayes." International Journal of Computers Communications & Control 5, no. 3 (September 1, 2010): 351. http://dx.doi.org/10.15837/ijccc.2010.3.2487.

Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. In this paper, we investigated two approaches: a) to develop a classifier for text document based on Naive Bayes Theory and b) to integrate this classifier into a meta-classifier in order to increase the classification accuracy. The basic idea is to learn a meta-classifier to optimally select the best component classifier for each data point. The experimental results show that combining classifiers can significantly improve the classification accuracy and that our improved meta-classification strategy gives better results than each individual classifier. For Reuters2000 text documents we obtained classification accuracies up to 93.87%