Journal articles on the topic 'Bayes Algorithm'

Consult the top 50 journal articles for your research on the topic 'Bayes Algorithm.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wu, Qinghua, Bin Wu, Chengyu Hu, and Xuesong Yan. "Evolutionary Multilabel Classification Algorithm Based on Cultural Algorithm." Symmetry 13, no. 2 (February 16, 2021): 322. http://dx.doi.org/10.3390/sym13020322.

Abstract:
As one of the common methods for constructing classifiers, naïve Bayes has become one of the most popular classification methods because of its solid theoretical basis, strong prior-knowledge learning characteristics, unique knowledge expression form, and high classification accuracy. This classification method exhibits a symmetry phenomenon in the process of data classification. Although the naïve Bayes classifier performs well on single-label classification problems, whether this still holds for multilabel classification problems is worth studying. In this paper, taking the naïve Bayes classifier as the basic research object and addressing the shortcomings of its conditional independence assumption and label class selection strategy, a weighted naïve Bayes framework is adopted to build a better multilabel classifier, and a cultural algorithm is introduced to search for and determine the optimal weights, yielding the proposed weighted naïve Bayes multilabel classification algorithm. Experimental results show that the algorithm proposed in this paper is superior to other algorithms in classification performance.
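
The weighted naïve Bayes model referred to in this abstract can be written compactly. Under the usual conditional independence assumption and with one non-negative weight per attribute (in the paper these weights are found by a cultural algorithm; here they are just symbols), the predicted label is:

```latex
\hat{c} = \arg\max_{c \in \mathcal{C}} P(c) \prod_{i=1}^{n} P(x_i \mid c)^{\,w_i}, \qquad w_i \ge 0 .
```

Setting every weight to 1 recovers standard naïve Bayes, so the weight search only relaxes, never discards, the original model.
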
2

Zonyfar, Candra. "Student Enrollment: Data Mining Using Naïve Bayes Algorithm." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (July 25, 2020): 1077–83. http://dx.doi.org/10.5373/jardcs/v12sp7/20202205.

3

M, Harshitha, and Dr B. M. Sagar. "Smart Health Care Implementation Using Naïve Bayes Algorithm." International Journal of Innovative Research in Computer Science & Technology 7, no. 3 (May 2019): 90–93. http://dx.doi.org/10.21276/ijircst.2019.7.3.11.

4

Noviriandini, Astrid, and Nurajijah Nurajijah. "ANALISIS KINERJA ALGORITMA C4.5 DAN NAÏVE BAYES UNTUK MEMPREDIKSI PRESTASI SISWA SEKOLAH MENENGAH KEJURUAN." JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer) 5, no. 1 (August 7, 2019): 23–28. http://dx.doi.org/10.33480/jitk.v5i1.607.

Abstract:
This research helps students and teachers anticipate early in the learning period in order to obtain maximum learning outcomes. The methods used are the C4.5 decision tree algorithm and the Naïve Bayes algorithm. The purpose of this study was to compare and evaluate the C4.5 decision tree model as the selected algorithm and Naïve Bayes, to find out which algorithm has higher accuracy in predicting student achievement. Learning achievement can be measured by report card grades. After comparing the two algorithms, the learning achievement predictions were obtained. The results showed that the Naïve Bayes algorithm had an accuracy of 95.67% and an AUC of 0.999, which falls into the Excellent Classification category, while the C4.5 algorithm had an accuracy of 90.91% and an AUC of 0.639, which falls into the Poor Classification category. Thus the Naïve Bayes algorithm can better predict student achievement.
5

Dinesh, T. "Higher Classification of Fake Political News Using Decision Tree Algorithm Over Naive Bayes Algorithm." Revista Gestão Inovação e Tecnologias 11, no. 2 (June 5, 2021): 1084–96. http://dx.doi.org/10.47059/revistageintec.v11i2.1738.

Abstract:
Aim: The main aim of the proposed study is to perform higher classification of fake political news by implementing fake news detectors using machine learning classifiers and comparing their performance. Materials and Methods: Two groups are considered, the Decision Tree algorithm and the Naive Bayes algorithm. The algorithms were implemented and tested on a dataset of 44,000 records. The programming experiment was performed using N=10 iterations on each algorithm to identify various scales of fake news and true news classification. Result: The experiment gave a mean accuracy of 99.6990 using the Decision Tree algorithm and 95.3870 using the Naive Bayes algorithm for fake political news. There is a statistically significant difference in accuracy between the two algorithms (p<0.05) based on independent samples t-tests. Conclusion: This paper implements an innovative fake news detection approach with recent machine learning classifiers for the prediction of fake political news, testing the algorithms' performance and accuracy on fake political news detection. The comparison results show that the Decision Tree algorithm has better performance than the Naive Bayes algorithm.
6

Pizzo, Anaïs, Pascal Teyssere, and Long Vu-Hoang. "Boosted Gaussian Bayes Classifier and its application in bank credit scoring." Journal of Advanced Engineering and Computation 2, no. 2 (June 30, 2018): 131. http://dx.doi.org/10.25073/jaec.201822.193.

Abstract:
With the explosion of computer science in the last decade, data banks and networks management present a huge part of tomorrow's problems. One of them is the development of the best classification method possible in order to exploit the databases. In classification problems, a representative successful method of the probabilistic model is the Naïve Bayes classifier. However, the effectiveness of Naïve Bayes still needs to be upgraded. Indeed, Naïve Bayes ignores misclassified instances instead of using them to become an adaptive algorithm. Different works have presented solutions for using Boosting to improve the Gaussian Naïve Bayes algorithm by combining the Naïve Bayes classifier and Adaboost methods. But despite these works, the Boosted Gaussian Naïve Bayes algorithm is still neglected in the resolution of classification problems. One of the reasons could be the complexity of implementing the algorithm compared to a standard Gaussian Naïve Bayes. We present in this paper one approach to a suitable solution, with a pseudo-algorithm that uses Boosting and Gaussian Naïve Bayes principles while having the lowest possible complexity.
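
The abstract above combines boosting with Gaussian Naïve Bayes. As a rough illustration of the principle (not the authors' exact pseudo-algorithm), the following sketch runs a SAMME-style AdaBoost loop over scikit-learn's GaussianNB, re-weighting misclassified instances each round; the dataset and the number of rounds are placeholder assumptions.

```python
# Hedged sketch of a boosted Gaussian Naive Bayes classifier: an AdaBoost-style
# (SAMME) loop over GaussianNB base learners fitted with instance weights.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)   # class labels here are already 0..K-1
classes = np.unique(y)
n, K = len(y), len(classes)

weights = np.full(n, 1.0 / n)        # instance weights, updated each round
learners, alphas = [], []

for _ in range(25):                  # 25 boosting rounds (arbitrary choice)
    gnb = GaussianNB().fit(X, y, sample_weight=weights)
    pred = gnb.predict(X)
    err = np.sum(weights * (pred != y)) / np.sum(weights)
    if err >= 1.0 - 1.0 / K:         # worse than chance: stop boosting
        break
    alpha = np.log((1.0 - err) / max(err, 1e-10)) + np.log(K - 1)
    weights *= np.exp(alpha * (pred != y))   # emphasise misclassified instances
    weights /= weights.sum()
    learners.append(gnb)
    alphas.append(alpha)

def boosted_predict(X_new):
    """Weighted vote of all boosted GaussianNB learners."""
    votes = np.zeros((len(X_new), K))
    for gnb, alpha in zip(learners, alphas):
        votes[np.arange(len(X_new)), gnb.predict(X_new)] += alpha
    return classes[np.argmax(votes, axis=1)]

print("training accuracy:", np.mean(boosted_predict(X) == y))
```
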
7

Utami, Dwi Yuni, Elah Nurlelah, and Noer Hikmah. "Attribute Selection in Naive Bayes Algorithm Using Genetic Algorithms and Bagging for Prediction of Liver Disease." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 4, no. 1 (July 20, 2020): 76–85. http://dx.doi.org/10.31289/jite.v4i1.3793.

Abstract:
Liver disease is an inflammatory disease of the liver that can prevent the liver from functioning as usual and can even cause death. According to WHO (World Health Organization) data, almost 1.2 million people per year, especially in Southeast Asia and Africa, die from liver disease. The problem that usually occurs is the difficulty of recognizing liver disease early on, even when the disease has already spread. This study aims to compare and evaluate the Naive Bayes algorithm as the selected algorithm and the Naive Bayes algorithm optimized with a Genetic Algorithm (GA) and Bagging, to find out which algorithm has higher accuracy in predicting liver disease, by processing a dataset taken from the UCI (University of California Irvine) Machine Learning Repository. From the results of testing and evaluating both the confusion matrix and the ROC curve, it was proven that the Naive Bayes algorithm optimized with a Genetic Algorithm and Bagging has a higher accuracy value than the Naive Bayes algorithm alone. The accuracy value for the Naive Bayes model is 66.66%, and the accuracy value for the Naive Bayes model with attribute selection using Genetic Algorithms and Bagging is 72.02%. Based on these values, the difference in accuracy is 5.36%. Keywords: Liver Disease, Naïve Bayes, Genetic Algorithms, Bagging.
8

M. V., Ishwarya, and K. Ramesh Kumar. "Selective Colligation and Selective Scrambling for Privacy Preservation in Data Mining." Indonesian Journal of Electrical Engineering and Computer Science 10, no. 2 (May 1, 2018): 778. http://dx.doi.org/10.11591/ijeecs.v10.i2.pp778-785.

Abstract:
The work aims to enhance the time efficiency of retrieving data from an enormous bank database. The major drawback in retrieving data from large databases is time delay. This hindrance arises because the existing methods, SVM and the Abstract Data Type (ADT) tree, follow long sequential steps. These techniques take additional space and are slower in training and testing. Another major negative aspect of these techniques is their algorithmic complexity. The classification algorithms fall into five categories: ID3, k-nearest neighbour, decision tree, ANN, and the Naïve Bayes algorithm. To overcome the drawbacks of the SVM techniques, we use a technique called the Naïve Bayes Classification (NBC) algorithm, which works in a parallel rather than sequential manner. For further enhancement we introduce a Naïve Bayes updatable algorithm, which is an advanced version of the Naïve Bayes classification algorithm. Thus the proposed Naïve Bayes technique ensures that the miner can mine more efficiently from the enormous database.
9

Lasulika, Mohamad Efendi. "KOMPARASI NAÏVE BAYES, SUPPORT VECTOR MACHINE DAN K-NEAREST NEIGHBOR UNTUK MENGETAHUI AKURASI TERTINGGI PADA PREDIKSI KELANCARAN PEMBAYARAN TV KABEL." ILKOM Jurnal Ilmiah 11, no. 1 (May 8, 2019): 11–16. http://dx.doi.org/10.33096/ilkom.v11i1.408.11-16.

Abstract:
One obstacle behind payment defaults is the lack of analysis in the new customer acceptance process, which is reviewed only from the form provided at registration. The purpose of this study is to find the highest accuracy from a comparison of the Naïve Bayes, SVM and K-NN algorithms. The Naïve Bayes algorithm has the highest accuracy value, namely 96%, while the K-Nearest Neighbor algorithm has its highest accuracy at K = 3, namely 92%, and the Support Vector Machine only reaches an accuracy of 66%. The ROC curve results show that Naïve Bayes achieved the best AUC value of 0.99. Comparing the data mining classification algorithms Naïve Bayes, K-Nearest Neighbor and Support Vector Machine for predicting smooth payment using multivariate data types, the Naïve Bayes method is an accurate algorithm and is also dominant over the other methods. Based on accuracy, AUC and t-tests this method falls into the best classification category.
10

ZHANG, HARRY. "EXPLORING CONDITIONS FOR THE OPTIMALITY OF NAÏVE BAYES." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 02 (March 2005): 183–98. http://dx.doi.org/10.1142/s0218001405003983.

Abstract:
Naïve Bayes is one of the most efficient and effective inductive learning algorithms for machine learning and data mining. Its competitive performance in classification is surprising, because the conditional independence assumption on which it is based is rarely true in real-world applications. An open question is: what is the true reason for the surprisingly good performance of Naïve Bayes in classification? In this paper, we propose a novel explanation for the good classification performance of Naïve Bayes. We show that, essentially, dependence distribution plays a crucial role. Here dependence distribution means how the local dependence of an attribute distributes in each class, evenly or unevenly, and how the local dependences of all attributes work together, consistently (supporting a certain classification) or inconsistently (canceling each other out). Specifically, we show that no matter how strong the dependences among attributes are, Naïve Bayes can still be optimal if the dependences distribute evenly in classes, or if the dependences cancel each other out. We propose and prove a sufficient and necessary condition for the optimality of Naïve Bayes. Further, we investigate the optimality of Naïve Bayes under the Gaussian distribution. We present and prove a sufficient condition for the optimality of Naïve Bayes, in which the dependences among attributes exist. This provides evidence that dependences may cancel each other out. Our theoretic analysis can be used in designing learning algorithms. In fact, a major class of learning algorithms for Bayesian networks are conditional independence-based (or CI-based), which are essentially based on dependence. We design a dependence distribution-based algorithm by extending the Chow-Liu algorithm, a widely used CI-based algorithm. Our experiments show that the new algorithm outperforms the Chow-Liu algorithm, which also provides empirical evidence to support our new explanation.
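
For reference, the classifier whose optimality Zhang analyses is the standard naïve Bayes decision rule, which scores each class by its prior times the product of class-conditional attribute probabilities:

```latex
c^{*}(\mathbf{x}) = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(x_i \mid c) .
```

The article's point is that this rule can remain optimal even when the independence assumption behind the product is violated, provided the attribute dependences are distributed evenly across classes or cancel each other out.
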
11

Taha, Ahmed Majid, Aida Mustapha, and Soong-Der Chen. "Naive Bayes-Guided Bat Algorithm for Feature Selection." Scientific World Journal 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/325973.

Abstract:
When the amount of data and information is said to double every 20 months or so, feature selection becomes highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, or signal processing. A bio-inspired method called the Bat Algorithm, hybridized with a Naive Bayes classifier, is presented in this work. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. The discussion focuses on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB is also proven to be more stable than the other methods and is capable of producing more general feature subsets.
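
The hybrid described above wraps a Naive Bayes classifier inside a metaheuristic search over feature subsets. The sketch below keeps the wrapper idea but, for brevity, replaces the bat-inspired search with plain random subset sampling; the dataset, subset count, and scoring are illustrative assumptions.

```python
# Hedged sketch: wrapper feature selection scored by a Naive Bayes classifier.
# A real BANB implementation would move candidate subsets with bat-algorithm
# dynamics; here random candidate subsets stand in for that search step.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_wine

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(0)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated NB accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

best_mask, best_score = None, -1.0
for _ in range(50):                          # 50 candidate subsets (placeholder)
    mask = rng.random(n_features) < 0.5      # random inclusion of each feature
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print("selected features:", np.flatnonzero(best_mask), "accuracy:", round(best_score, 3))
```
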
12

Suyadi, Suyadi, Arief Setyanto, and Hanif Al Fattah. "Analisis Perbandingan Algoritma Decision Tree (C4.5) Dan K-Naive Bayes Untuk Mengklasifikasi Penerimaan Mahasiswa Baru Tingkat Universitas." Indonesian Journal of Applied Informatics 2, no. 1 (December 16, 2017): 59. http://dx.doi.org/10.20961/ijai.v2i1.13258.

Abstract:
Profiles of PMB (New Student Admissions) students from several periods provide abundant data that can be used for research. The data take the form of student information such as the school of origin, NEM and current major. Classifying the PMB profile data of students at the university level in Yogyakarta reveals the majority of learners. Comparing several algorithms is needed to find the best one. Classification is a grouping approach with several algorithms, such as Decision Tree (C4.5) and K-Naive Bayes. Decision Tree (C4.5) is an algorithm based on a decision tree, while K-Naive Bayes is a probability-based algorithm. This analysis uses RapidMiner, a data analysis software with easy-to-operate implementations of several algorithms. With a large dataset of 1504 students, Decision Tree (C4.5) has an accuracy of 81.84% and an error rate of 18.16%, while K-Naive Bayes reaches 85.12% accuracy and an error rate of 14.88%. With smaller data, Decision Tree (C4.5) has 100% accuracy, and K-Naive Bayes has the same 100% accuracy as Decision Tree (C4.5).
13

Mathapati, Pramod M., A. S. Shahapurkar, and K. D. Hanabaratti. "Sentiment Analysis using Naïve bayes Algorithm." International Journal of Computer Sciences and Engineering 5, no. 7 (July 2017): 75–77. http://dx.doi.org/10.26438/ijcse/v5i7.7577.

14

Chen, Shenglei, Geoffrey I. Webb, Linyuan Liu, and Xin Ma. "A novel selective naïve Bayes algorithm." Knowledge-Based Systems 192 (March 2020): 105361. http://dx.doi.org/10.1016/j.knosys.2019.105361.

15

Kulczycki, Piotr. "An Algorithm for Bayes Parameter Identification." Journal of Dynamic Systems, Measurement, and Control 123, no. 4 (December 6, 1999): 611–14. http://dx.doi.org/10.1115/1.1409552.

Abstract:
This paper deals with the task of parameter identification using the Bayes estimation method, which makes it possible to take into account the differing consequences of positive and negative estimation errors. The calculation procedures are based on the kernel estimators technique. The final result constitutes a complete algorithm usable for obtaining the value of the Bayes estimator on the basis of an experimentally obtained random sample. An elaborated method is provided for numerical computations.
16

Lebrun, Marc, Antoni Buades, and Jean-Michel Morel. "Implementation of the "Non-Local Bayes" (NL-Bayes) Image Denoising Algorithm." Image Processing On Line 3 (June 17, 2013): 1–42. http://dx.doi.org/10.5201/ipol.2013.16.

17

Xiao, Shou Bai. "Prediction of Road Congestion Level Based on Bayes Algorithm." Applied Mechanics and Materials 599-601 (August 2014): 1593–96. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1593.

Abstract:
Traffic jams increasingly threaten normal city traffic, so this paper analyzes the state of existing road traffic congestion and finds that road traffic congestion is a relatively vague and random dynamic data model. Based on these two characteristics, we propose a road traffic congestion degree assessment model based on a Bayesian algorithm. Building on a theoretical analysis of Bayesian algorithms, we improve the processing efficiency of the algorithm, construct the road traffic congestion degree evaluation model based on the Bayesian algorithm, and carry out simulation experiments.
18

Winarti, Titin, Henny Indriyawati, Vensy Vydia, and Febrian Wahyu Christanto. "Performance comparison between naive bayes and k- nearest neighbor algorithm for the classification of Indonesian language articles." IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 2 (June 1, 2021): 452. http://dx.doi.org/10.11591/ijai.v10.i2.pp452-457.

Abstract:
The match between the contents of an article and the article theme is the main factor in whether or not an article is accepted. Many people are still confused about determining the appropriate theme for the article they have. For that reason, we need a document classification algorithm that can group articles automatically and accurately. Many classification algorithms can be used. The algorithm used in this study is naive Bayes, with the k-nearest neighbor algorithm as the baseline. The naive Bayes algorithm was chosen because it can produce maximum accuracy with little training data, while the k-nearest neighbor algorithm was chosen because it is robust against data noise. The performance of the two algorithms is compared, so it can be seen which algorithm is better at classifying documents. The results obtained show that the naive Bayes algorithm has better performance, with an accuracy rate of 88%, while the k-nearest neighbor algorithm has a fairly low accuracy rate of 60%.
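
A minimal sketch of the comparison described above, assuming a bag-of-words representation and scikit-learn; the toy documents and labels below are placeholders, not the Indonesian article corpus used in the paper.

```python
# Hedged sketch: naive Bayes vs. k-nearest neighbor for document classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder corpus: article texts and their theme labels.
docs = ["ekonomi pasar saham naik", "teknologi ponsel baru dirilis",
        "ekonomi inflasi turun bulan ini", "teknologi aplikasi kecerdasan buatan",
        "ekonomi bank sentral suku bunga", "teknologi komputer kuantum riset"]
labels = ["ekonomi", "teknologi", "ekonomi", "teknologi", "ekonomi", "teknologi"]

nb = make_pipeline(TfidfVectorizer(), MultinomialNB())
knn = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))

for name, model in [("naive bayes", nb), ("k-NN", knn)]:
    acc = cross_val_score(model, docs, labels, cv=3).mean()
    print(name, "accuracy:", round(acc, 2))
```
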
19

Finki Dona Marleny and Mambang. "COMPARISON OF K-NN AND NAÏVE BAYES CLASSIFIER FOR ASPHYXIA FACTOR." Jurnal Teknologi Informasi Universitas Lambung Mangkurat (JTIULM) 3, no. 1 (April 20, 2018): 13–17. http://dx.doi.org/10.20527/jtiulm.v3i1.23.

Abstract:
Asphyxia is influenced by several factors, including immediate maternal factors relating to the mother's condition during pregnancy and childbirth, such as maternal hypoxia. Asphyxia factor data can be modeled using a classification approach. In this paper the k-nearest neighbor algorithm and the Naive Bayes classifier are compared for classifying asphyxia factors. Naive Bayes uses Bayes' theorem while assuming independence between predictors; basically, Bayes' theorem is used to compute the posterior probabilities. Analysis of the two algorithms was done on several parameters such as Kappa statistics, classification error, precision, recall, F-measure and AUC. We achieved the best classification accuracy with the KNN algorithm, 92.27% for k=4, while the rate achieved with Naïve Bayes, 83.19%, is lower.
20

Manna, Subhankar, and Malathi G. "PERFORMANCE ANALYSIS OF CLASSIFICATION ALGORITHM ON DIABETES HEALTHCARE DATASET." International Journal of Research -GRANTHAALAYAH 5, no. 8 (August 31, 2017): 260–66. http://dx.doi.org/10.29121/granthaalayah.v5.i8.2017.2229.

Abstract:
The healthcare industry collects a huge amount of unclassified data every day. For effective diagnosis and decision making, we need to discover hidden data patterns. An instance of such a dataset is associated with a group of metabolic diseases that vary greatly in their range of attributes. The objective of this paper is to classify the diabetes dataset using classification techniques like Naive Bayes, ID3 and k-means classification. The secondary objective is to study the performance of the various classification algorithms used in this work. We propose to implement the classification algorithms using the R package. This work used a dataset imported from the UCI Machine Learning Repository, the Diabetes 130-US hospitals for years 1999-2008 Data Set. Motivation/Background: Naïve Bayes is a probabilistic classifier based on Bayes' theorem. It provides useful insight for understanding many algorithms. In this paper, when the Bayesian algorithm is applied to the diabetes dataset, it shows high accuracy. It assumes that variables are independent of each other. We also construct a decision tree from the diabetes dataset, in which an attribute is selected at each node of the tree-like graph, each branch represents an outcome of the test, and each leaf holds a class label. This technique separates observations into branches to construct the tree, splitting it in a recursive way called recursive partitioning. Decision trees are widely used in various areas because they handle dataset distribution well. For example, by using the ID3 (decision tree) algorithm we get a result indicating whether a record belongs to the diabetes class or not. Method: We use Naïve Bayes for probabilistic classification and ID3 for the decision tree. Results: The dataset is a Diabetes dataset with 18 columns, such as Races, Gender, Take_metformin, Take_repaglinide, Insulin, Body_mass_index, Self_reported_health etc., and 623 rows. The Naive Bayes classifier algorithm is used to obtain the probability of having diabetes or not. Here Diabetes is the class attribute, with the two values "Yes" and "No", alongside personal information about the patient such as Races, Gender, Take_metformin, Take_repaglinide, Insulin, Body_mass_index, Self_reported_health etc. We report the probability of each attribute value for "Yes" and for "No". For example, for Gender, Female has 0.4964 for "No" and 0.5581 for "Yes", while Male has 0.5035 for "No" and 0.4418 for "Yes". Conclusions: In this paper two algorithms were implemented, the Naive Bayes classifier algorithm and the ID3 algorithm. With the Naive Bayes classifier algorithm the probability of having diabetes is predicted, and with the ID3 algorithm a decision tree is generated.
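
The per-class conditional probabilities quoted above (e.g. Gender given "Yes"/"No") are exactly what a categorical naive Bayes model estimates from counts. A hedged sketch of that computation with a placeholder table follows; the real column names and rows come from the paper's 623-record diabetes dataset, which is not reproduced here.

```python
# Hedged sketch: estimating P(attribute value | class) from a categorical table,
# as a naive Bayes classifier does. The tiny table below is a placeholder.
import pandas as pd

df = pd.DataFrame({
    "Gender":   ["Female", "Male", "Female", "Female", "Male", "Male"],
    "Diabetes": ["Yes", "No", "No", "Yes", "No", "Yes"],
})

# Conditional probability of each Gender value within each Diabetes class.
cond = pd.crosstab(df["Gender"], df["Diabetes"], normalize="columns")
print(cond)

# Class priors P(Diabetes).
print(df["Diabetes"].value_counts(normalize=True))
```
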
21

Azhari, Mulkan, Zakaria Situmorang, and Rika Rosnelly. "Perbandingan Akurasi, Recall, dan Presisi Klasifikasi pada Algoritma C4.5, Random Forest, SVM dan Naive Bayes." JURNAL MEDIA INFORMATIKA BUDIDARMA 5, no. 2 (April 25, 2021): 640. http://dx.doi.org/10.30865/mib.v5i2.2937.

Abstract:
This study aims to compare the performance of several classification algorithms, namely C4.5, Random Forest, SVM, and naive Bayes. The research data are JISC participant data amounting to 200 records; the training data amount to 140 records (70%) and the testing data to 60 records (30%). The classification simulation uses RapidMiner as the data mining tool. The results show that the C4.5 algorithm obtained an accuracy of 86.67%, the Random Forest algorithm 83.33%, the SVM algorithm 95%, and the Naive Bayes algorithm 86.67%. The highest accuracy is obtained with the SVM algorithm and the lowest with the Random Forest algorithm.
22

Lawend, Haider O., Anuar Muad, and Aini Hussain. "An Improved Flexible Partial Histogram Bayes Learning Algorithm." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (September 1, 2018): 975. http://dx.doi.org/10.11591/ijeecs.v11.i3.pp975-986.

Abstract:
This paper presents a proposed supervised classification technique, namely the flexible partial histogram Bayes (fPHBayes) learning algorithm. In our previous work, the partial histogram Bayes (PHBayes) learning algorithm showed some advantages in speed and accuracy in classification tasks. However, its accuracy declines when dealing with a small number of instances or when the class feature is distributed over a wide area. In this work, the proposed fPHBayes solves these limitations in order to increase classification accuracy. fPHBayes was analyzed and compared with PHBayes and other standard learning algorithms like first nearest neighbor, nearest subclass mean, nearest class mean, naive Bayes and the Gaussian mixture model classifier. The experiments were performed using both real and synthetic data, considering different numbers of instances and different variances of the Gaussians. The results showed that fPHBayes is more accurate and flexible in dealing with different numbers of instances and different variances of Gaussians compared to PHBayes.
23

Ardianto, Rian, Tri Rivanie, Yuris Alkhalifi, Fitra Septia Nugraha, and Windu Gata. "SENTIMENT ANALYSIS ON E-SPORTS FOR EDUCATION CURRICULUM USING NAIVE BAYES AND SUPPORT VECTOR MACHINE." Jurnal Ilmu Komputer dan Informasi 13, no. 2 (July 1, 2020): 109–22. http://dx.doi.org/10.21609/jiki.v13i2.885.

Abstract:
The development of e-sports education is not just about playing games, but encompasses creation, development, marketing, research and other forms of education aimed at training skills and providing knowledge that fosters character. The opinions expressed by the public can take the form of support, criticism and input. A very large volume of comments needs to be analyzed accurately in order to separate positive and negative sentiments. This research was conducted to measure opinions, separating positive and negative sentiments towards e-sports education, so that valuable information can be extracted from social media. The data used in this study were obtained by crawling the social media platform Twitter. This study uses the classification algorithms Naïve Bayes and Support Vector Machine. Comparing the two algorithms, the Naïve Bayes algorithm with SMOTE obtains an accuracy of 70.32% and an AUC of 0.954, while the Support Vector Machine with SMOTE obtains an accuracy of 66.92% and an AUC of 0.832. From these results it can be concluded that the Naïve Bayes algorithm has a higher accuracy than the Support Vector Machine algorithm; the accuracy difference between naïve Bayes and the support vector machine is 3.4%. The Naïve Bayes algorithm can thus better predict sentiment about e-sports for the students' learning curriculum.
24

Nugroho, Agung, and Yoga Religia. "Analisis Optimasi Algoritma Klasifikasi Naive Bayes menggunakan Genetic Algorithm dan Bagging." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 5, no. 3 (June 19, 2021): 504–10. http://dx.doi.org/10.29207/resti.v5i3.3067.

Abstract:
The increasing demand for credit applications to banks has motivated the banking world to switch to more sophisticated techniques for analyzing the level of credit risk. One technique for analyzing the level of credit risk is the data mining approach. Data mining provides a technique for finding meaningful information from large amounts of data by way of classification. However, bank marketing data is imbalanced, so plain classification gives less than optimal results. Naïve Bayes is a classification algorithm that can be used on imbalanced data and performs well in terms of classification, but optimization is needed to obtain more optimal classification results. Optimization techniques for handling imbalanced data have been developed with several approaches; Bagging and Genetic Algorithms can be used to overcome imbalanced data. This study aims to compare the accuracy of the naïve Bayes algorithm after optimization using Bagging and a Genetic Algorithm. The results show that the combination of Bagging and a Genetic Algorithm can improve the performance of Naive Bayes by 4.57%.
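
As a rough illustration of the bagging half of the optimization above (the genetic-algorithm component is omitted), the sketch below trains Gaussian Naive Bayes on bootstrap resamples and combines them by majority vote; the dataset and ensemble size are placeholder assumptions.

```python
# Hedged sketch: bagging a Gaussian Naive Bayes classifier by hand.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
ensemble = []
for _ in range(15):                               # 15 bootstrap replicates (arbitrary)
    idx = rng.integers(0, len(y_tr), len(y_tr))   # sample rows with replacement
    ensemble.append(GaussianNB().fit(X_tr[idx], y_tr[idx]))

# Majority vote over the ensemble's predictions (binary labels 0/1).
votes = np.stack([m.predict(X_te) for m in ensemble])
pred = (votes.mean(axis=0) >= 0.5).astype(int)

print("single NB accuracy :", GaussianNB().fit(X_tr, y_tr).score(X_te, y_te))
print("bagged NB accuracy :", np.mean(pred == y_te))
```
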
25

Gai, Yu Lian, and Ya Ping Wang. "Data Fusion and Bayes Estimation Algorithm Research." Applied Mechanics and Materials 347-350 (August 2013): 2620–24. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2620.

Abstract:
This paper starts from the perspective of data processing and the use of information, and analyzes the meaning and realistic background of processing integrated data using Data Fusion technology. On the basis of a clear statement of the basic ideas and principles of Data Fusion, it studies and discusses its hierarchical levels from three aspects. A relatively comprehensive description of the Data Fusion process is given in the paper. Together with a description of the basic principles and ideas of the Bayes estimation algorithm, it identifies the limitations of the Bayes estimation algorithm. The practical significance of Data Fusion technology in dealing with information uncertainty and incompleteness is summarized.
26

Gouthami, Shiramshetty. "Ranking Popular Items By Naive Bayes Algorithm." International Journal of Computer Science and Information Technology 4, no. 1 (February 29, 2012): 147–63. http://dx.doi.org/10.5121/ijcsit.2012.4112.

27

syah, Ardian, Boy Yuliadi, and Riad Sahara. "Music Genre Classification using Naïve Bayes Algorithm." International Journal of Computer Trends and Technology 62, no. 1 (August 25, 2018): 50–57. http://dx.doi.org/10.14445/22312803/ijctt-v62p107.

28

Li, Jingmei, Weifei Wu, and Di Xue. "Transfer Naive Bayes algorithm with group probabilities." Applied Intelligence 50, no. 1 (June 24, 2019): 61–73. http://dx.doi.org/10.1007/s10489-019-01512-6.

29

Samsir, Deci Irmayani, Firman Edi, Junaidi Mustapa Harahap, Jupriaman, Rizki Kurniawan Rangkuti, Basyarul Ulya, and Ronal Watrianthos. "Naives Bayes Algorithm for Twitter Sentiment Analysis." Journal of Physics: Conference Series 1933, no. 1 (June 1, 2021): 012019. http://dx.doi.org/10.1088/1742-6596/1933/1/012019.

30

Palupi, Endang Sri. "EMPLOYEE TURNOVER CLASSIFICATION USING PSO-BASED NAÏVE BAYES AND NAÏVE BAYES ALGORITHM IN PT. MASTERSYSTEM INFOTAMA." Jurnal Riset Informatika 3, no. 3 (June 1, 2021): 233–40. http://dx.doi.org/10.34288/jri.v3i3.232.

Abstract:
Turnover occurs when many employees leave and new employees enter, so the flow of employees in and out is quite high; turnover can be controlled with a strategy to increase employee engagement. PT. Mastersystem Infotama is a system integrator, better known as a fairly large IT company with approximately 600 employees. Turnover is high enough to leave some divisions lacking human resources, and the human capital management division finds it quite difficult to recruit candidates who meet the various required criteria in a short time. Competition in the IT world is quite tight, both between companies and for employees with good experience and abilities, especially sales staff who hold a database of potential customers and engineers who already have certificates of expertise that are widely used in the IT business. Therefore, it is necessary to classify the factors that make employee turnover high using the Naïve Bayes algorithm and the Naïve Bayes algorithm based on Particle Swarm Optimization, so that the results can be used as material for internal evaluation to increase employee engagement. In this study, classification using the Naïve Bayes algorithm has an accuracy of 79.17%, while classification using the Naïve Bayes algorithm based on Particle Swarm Optimization reaches 94.17%.
31

Akhmad Fahmi Alfa’izy, Erick, Khairil Anam, Naidah Naing, Rosanita Tritias Utami, Nur Anim Jauhariyah, Ahmad Munib Syafa’at, Lely Ana Ferawati Ekaningsih, Mohammad Roesli, Yanna Ika Pratiwi, and Yeni Ika Pratiwi. "Application of the Naïve Bayes Algorithm for Student Graduation Analysis." International Journal of Engineering & Technology 7, no. 4.15 (October 7, 2018): 421. http://dx.doi.org/10.14419/ijet.v7i4.15.23596.

Abstract:
We design an analysis system to determine graduation by comparing previous data with current data, to overcome errors in a college system, taking data records that are already available and processing them with the naïve Bayes algorithm. This research was conducted at Universitas Maarif Hasyim Latif, where the object of research is to analyze student data with the naïve Bayes algorithm to find out their graduation. For sampling, the data taken are previous Faculty of Law student data used as training data, while the complete data are taken from records already available in the Directorate of Information Systems. The naïve Bayes algorithm can be used to classify data in string or textual form, based on the researchers' trials with example calculations done beforehand. To evaluate the graduation analysis classification using the naïve Bayes algorithm, testing is done with a sample of data in the form of training data compared with testing data. From the calculations that have been made, the accuracy is 77.78%.
32

Gadebe, Moses L., Okuthe P. Kogeda, and Sunday O. Ojo. "Smartphone Naïve Bayes Human Activity Recognition Using Personalized Datasets." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 5 (September 20, 2020): 685–702. http://dx.doi.org/10.20965/jaciii.2020.p0685.

Abstract:
Recognizing human activity in real time with a limited dataset is possible on a resource-constrained device. However, most classification algorithms such as Support Vector Machines, C4.5, and K Nearest Neighbor require a large dataset to accurately predict human activities. In this paper, we present a novel real-time human activity recognition model based on Gaussian Naïve Bayes (GNB) algorithm using a personalized JavaScript object notation dataset extracted from the publicly available Physical Activity Monitoring for Aging People dataset and University of Southern California Human Activity dataset. With the proposed method, the personalized JSON training dataset is extracted and compressed into a 12×8 multi-dimensional array of the time-domain features extracted using a signal magnitude vector and tilt angles from tri-axial accelerometer sensor data. The algorithm is implemented on the Android platform using the Cordova cross-platform framework with HTML5 and JavaScript. Leave-one-activity-out cross validation is implemented as a testTrainer() function, the results of which are presented using a confusion matrix. The testTrainer() function leaves category K as the testing subset and the remaining K-1 as the training dataset to validate the proposed GNB algorithm. The proposed model is inexpensive in terms of memory and computational power owing to the use of a compressed small training dataset. Each K category was repeated five times and the algorithm consistently produced the same result for each test. The result of the simulation using the tilted angle features shows overall precision, recall, F-measure, and accuracy rates of 90%, 99.6%, 94.18%, and 89.51% respectively, in comparison to rates of 36.9%, 75%, 42%, and 36.9% when the signal magnitude vector features were used. The results of the simulations confirmed and proved that when using the tilt angle dataset, the GNB algorithm is superior to Support Vector Machines, C4.5, and K Nearest Neighbor algorithms.
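
The recognition model above feeds time-domain features from a tri-axial accelerometer (signal magnitude vector and tilt angles) into Gaussian Naïve Bayes. The sketch below shows that general idea with synthetic accelerometer windows; the feature definitions and data are illustrative assumptions, not the paper's exact JSON pipeline.

```python
# Hedged sketch: Gaussian Naive Bayes over simple tri-axial accelerometer features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(window):
    """Per-window features: signal magnitude vector and tilt angle statistics."""
    ax, ay, az = window[:, 0], window[:, 1], window[:, 2]
    smv = np.sqrt(ax**2 + ay**2 + az**2)                        # signal magnitude vector
    tilt = np.degrees(np.arctan2(np.sqrt(ax**2 + ay**2), az))   # tilt from vertical
    return [smv.mean(), smv.std(), tilt.mean(), tilt.std()]

# Synthetic windows for two placeholder activities: "still" (0) and "walking" (1).
X, y = [], []
for label, noise in [(0, 0.05), (1, 0.6)]:          # walking shows larger variation
    for _ in range(100):
        window = np.array([0.0, 0.0, 1.0]) + rng.normal(0, noise, size=(50, 3))
        X.append(features(window))
        y.append(label)

acc = cross_val_score(GaussianNB(), np.array(X), np.array(y), cv=5).mean()
print("cross-validated accuracy:", round(acc, 3))
```
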
33

Alhussan, Amel, and Khalil El Hindi. "Selectively Fine-Tuning Bayesian Network Learning Algorithm." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 08 (July 17, 2016): 1651005. http://dx.doi.org/10.1142/s0218001416510058.

Abstract:
In this work, we propose a Selective Fine-Tuning algorithm for Bayesian Networks (SFTBN). The aim is to enhance the accuracy of Bayesian Network (BN) classifiers by finding better estimations for the probability terms used by the classifiers. The algorithm augments a BN learning algorithm with a fine-tuning stage that aims to more accurately estimate the probability terms used by the BN. If the value of a probability term causes a misclassification of a training instance and falls outside its valid range, then we update (fine-tune) that value. The amount of such an update is proportional to the distance between the value and its valid range. We use the algorithm to fine-tune several forms of BNs: the Naive Bayes (NB), Tree Augmented Naive Bayes (TAN), and Bayesian Augmented Naive Bayes (BAN) models. Our empirical experiments indicate that the SFTBN algorithm improves the classification accuracy of BN classifiers. We also generalized the original fine-tuning algorithm for Naive Bayes (FTNB) to BN models. We empirically compare the two algorithms, and the results show that while FTNB is more accurate than SFTBN for fine-tuning NB classifiers, SFTBN is more accurate for fine-tuning BNs than the adapted version of FTNB.
34

Santiko, Irfan, and Ikhsan Honggo. "Naive Bayes Algorithm Using Selection of Correlation Based Featured Selections Features for Chronic Diagnosis Disease." IJIIS: International Journal of Informatics and Information Systems 2, no. 2 (September 1, 2019): 56–60. http://dx.doi.org/10.47738/ijiis.v2i2.14.

Abstract:
Chronic kidney disease is a disease that can cause death, because its pathophysiological etiology results in a progressive decline in renal function and ends in kidney failure. Chronic Kidney Disease (CKD) has now become a serious problem in the world: kidney and urinary tract diseases cause the death of 850,000 people each year, making the disease the 12th highest cause of mortality. Several studies in the health field, including on chronic kidney disease, have been carried out to detect the disease early. In this study, the Naive Bayes algorithm is tested to detect the disease in patients who tested positive or negative for CKD. The accuracy of the algorithm is compared before and after feature selection using Correlation Based Feature Selection (CFS): the Naive Bayes algorithm after feature selection reaches 93.58% accuracy, while naive Bayes without feature selection reaches 93.54%. Looking at both accuracy values, both tests of the Naive Bayes algorithm, with and without feature selection, fall into the excellent classification category, because the accuracy values lie between 0.90 and 1.00.
35

Rajesh, N., Maneesha T, Shaik Hafeez, and Hari Krishna. "Prediction of Heart Disease Using Machine Learning Algorithms." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 363. http://dx.doi.org/10.14419/ijet.v7i2.32.15714.

Abstract:
Heart disease is one of the most common diseases nowadays. We used different attributes that relate to heart disease in order to find a better prediction method, and we applied several algorithms for prediction. The Naive Bayes algorithm is analyzed on a dataset based on risk factors. We also used decision trees and combinations of algorithms for the prediction of heart disease based on the above attributes. The results show that when the dataset is small the naive Bayes algorithm gives accurate results, and when the dataset is large decision trees give accurate results.
36

Su, Ya, and Mengyao Wang. "Age-Variation Face Recognition Based on Bayes Inference." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 07 (April 10, 2017): 1756013. http://dx.doi.org/10.1142/s0218001417560134.

Abstract:
Studies have discovered that face recognition benefits from age information. However, since age estimation is unstable in practice, how to improve face recognition with the help of automatic age estimation techniques remains an open question. This paper presents a way to improve the performance of face recognition by automatic age estimation. The main contribution is a new age-variational face recognition algorithm based on a Bayesian framework (FRAB). By introducing the age estimation result as a prior, the recognition problem is divided into several age-specific sub-problems. As a result, the proposed approach leads to two algorithms according to how the age is given. The first one is FRAB-AE, which introduces the age estimation result as the age prior. The second one is FRAB-GT, which assumes that the ground truth of the age information is given. Experiments are conducted on the FG-NET and Morph datasets to evaluate the performance of the proposed framework. They show that the proposed algorithms are able to make use of age priors to improve face recognition.
37

Walshe, Brian, Rob Brennan, and Declan O'Sullivan. "Bayes-ReCCE." International Journal on Semantic Web and Information Systems 12, no. 2 (April 2016): 25–52. http://dx.doi.org/10.4018/ijswis.2016040102.

Abstract:
Linked Open Data consists of a large set of structured data knowledge bases which have been linked together, typically using equivalence statements. These equivalences usually take the form of owl:sameAs statements linking individuals, but links between classes are far less common. Often, the lack of linking between classes is because the relationships cannot be described as elementary one to one equivalences. Instead, complex correspondences referencing multiple entities in logical combinations are often necessary if we want to describe how the classes in one ontology are related to classes in a second ontology. In this paper the authors introduce a novel Bayesian Restriction Class Correspondence Estimation (Bayes-ReCCE) algorithm, an extensional approach to detecting complex correspondences between classes. Bayes-ReCCE operates by analysing features of matched individuals in the knowledge bases, and uses Bayesian inference to search for complex correspondences between the classes these individuals belong to. Bayes-ReCCE is designed to be capable of providing meaningful results even when only small amounts of matched instances are available. They demonstrate this capability empirically, showing that the complex correspondences generated by Bayes-ReCCE have a median F1 score of over 0.75 when compared against a gold standard set of complex correspondences between Linked Open Data knowledge bases covering the geographical and cinema domains. In addition, the authors discuss how metadata produced by Bayes-ReCCE can be included in the correspondences to encourage reuse by allowing users to make more informed decisions on the meaning of the relationship described in the correspondences.
38

Xia, Xiongjun, and Jin Yan. "Construction of Music Teaching Evaluation Model Based on Weighted Naïve Bayes." Scientific Programming 2021 (September 16, 2021): 1–9. http://dx.doi.org/10.1155/2021/7196197.

Abstract:
Evaluation of music teaching is a highly subjective task often depending upon experts to assess both the technical and artistic characteristics of performance from the audio signal. This article explores the task of building computational models for evaluating music teaching using machine learning algorithms. As one of the widely used methods to build classifiers, the Naïve Bayes algorithm has become one of the most popular music teaching evaluation methods because of its strong prior knowledge, learning features, and high classification performance. In this article, we propose a music teaching evaluation model based on the weighted Naïve Bayes algorithm. Moreover, a weighted Bayesian classification incremental learning approach is employed to improve the efficiency of the music teaching evaluation system. Experimental results show that the algorithm proposed in this paper is superior to other algorithms in the context of music teaching evaluation.
39

Shashank Reddy, Danda, Chinta Naga Harshitha, and Carmel Mary Belinda. "Brain tumor prediction using naïve Bayes’ classifier and decision tree algorithms." International Journal of Engineering & Technology 7, no. 1.7 (February 5, 2018): 137. http://dx.doi.org/10.14419/ijet.v7i1.7.10634.

Abstract:
Nowadays many advanced techniques are available for diagnosing brain tumors, such as magnetic resonance imaging, computed tomography scan, angiogram, spinal tap and biopsy. Based on the diagnosis it is easier to decide on treatment. All types of brain tumor have been officially reclassified by the World Health Organization. There are about 120 types of brain tumors, and since many tumors share the same symptoms it is difficult to choose a treatment. For this reason we propose more accurate and efficient algorithms for predicting the type of brain tumor: Naïve Bayes' classification and the decision tree algorithm. The main focus is on solving the tumor classification problem using these algorithms. Here the main goal is to show that prediction through the decision tree algorithm is simpler and easier than with the Naïve Bayes' algorithm.
40

Derisma, D. "Perbandingan Kinerja Algoritma untuk Prediksi Penyakit Jantung dengan Teknik Data Mining." Journal of Applied Informatics and Computing 4, no. 1 (July 13, 2020): 84–88. http://dx.doi.org/10.30871/jaic.v4i1.2152.

Abstract:
Heart disease is a disease that contributes to a relatively high mortality rate; death caused by disease of the heart is a widespread problem in the world. The main objective of this study is to predict people with heart disease using the publicly available Heart Disease dataset in the UCI Repository. To obtain the best classification algorithm, three algorithms frequently used to predict heart disease are compared: Naive Bayes, Random Forest and Neural Network. The comparison results show that the Naive Bayes algorithm is a precise and accurate algorithm for predicting people with heart disease, with an accuracy of 83%.
41

Athaillah, Muhammad, Yufiz Azhar, and Yuda Munarko. "Perbandingan Metode Klasifikasi Berita Hoaks Berbahasa Indonesia Berbasis Pembelajaran Mesin." Jurnal Repositor 2, no. 5 (March 11, 2020): 675. http://dx.doi.org/10.22219/repositor.v2i5.692.

Abstract:
Classification of hoax news is one application of text categorization. Hoax news must be classified because it can influence the readers' actions and thinking patterns. The classification process in this research uses several stages, namely preprocessing, feature extraction, feature selection and classification. This research compares the Naïve Bayes algorithm and the Multinomial Naïve Bayes algorithm to see which of the two is more effective at classifying hoax news. The data in this research come from turnbackhoax.id for the hoax news (100 articles) and from kompas.com and detik.com for the non-hoax news (100 articles), with 140 articles as training data and 60 articles as test data. The comparison results give the Naïve Bayes algorithm an F1-score of 0.93 and the Multinomial Naïve Bayes algorithm an F1-score of 0.92.
42

Safri, Yofi Firdan, Riza Arifudin, and Much Aziz Muslim. "K-Nearest Neighbor and Naive Bayes Classifier Algorithm in Determining The Classification of Healthy Card Indonesia Giving to The Poor." Scientific Journal of Informatics 5, no. 1 (May 21, 2018): 18. http://dx.doi.org/10.15294/sji.v5i1.12057.

Abstract:
Health is a human right and one of the elements of welfare that must be realized through the provision of various health efforts to all the people of Indonesia. Poverty in Indonesia has become a national problem, and the government is seeking ways to alleviate it; poor families, for example, have relatively low levels of livelihood and health. One of the new policies of the government's Sakti Card Program includes three cards, namely the Indonesia Smart Card (KIP), the Healthy Indonesia Card (KIS) and the Prosperous Family Card (KKS). In this study, determining eligibility for the Healthy Indonesia Card (KIS) requires a method with optimal accuracy. The data used in this study are KIS data comprising 200 records with 15 eligibility determinants from 2017, taken at the Social Service of Pekalongan Regency. The data were processed using the K-Nearest Neighbor algorithm and a combination of the K-Nearest Neighbor and Naive Bayes Classifier algorithms. The accuracy of eligibility determination with the K-Nearest Neighbor algorithm is 64%, while the combined K-Nearest Neighbor-Naive Bayes Classifier algorithm reaches 96%, so the combination is the optimal algorithm for determining the eligibility of Healthy Indonesia Card recipients, with a 32% increase in accuracy. This study shows that determining eligibility with the combined K-Nearest Neighbor-Naive Bayes Classifier algorithm gives better results than the K-Nearest Neighbor algorithm alone.
43

Cinarer, Gokalp, and Bulent Gursel Emiroglu. "Classification of brain tumours using radiomic features on MRI." New Trends and Issues Proceedings on Advances in Pure and Applied Sciences, no. 12 (April 30, 2020): 80–90. http://dx.doi.org/10.18844/gjpaas.v0i12.4989.

Abstract:
Glioma is one of the most common brain tumours among existing brain tumour diagnoses. Glioma grade is an important factor that should be known in the treatment of brain tumours. In this study, the radiomic features of gliomas were analysed and glioma grades were classified with a Gaussian Naive Bayes algorithm. Glioma tumours of 121 patients of Grade II and Grade III were examined. The glioma tumours were segmented with the Grow Cut algorithm and the 3D features of the tumour magnetic resonance images were obtained with the 3D Slicer programme. The obtained quantitative values were statistically analysed with Spearman and Mann–Whitney U tests, and 21 features with statistically significant properties were selected from 107 features. The results showed that the best performing algorithm was the Gaussian Naive Bayes algorithm with 80% accuracy. Machine learning and feature selection techniques can be used in the analysis of gliomas, alongside pathological evaluations, in glioma grading processes. Keywords: Radiomics, glioma, naive Bayes.
44

Saputra, Muhammad Firman Aji, Triyanna Widiyaningtyas, and Aji Prasetya Wibawa. "Illiteracy Classification Using K Means-Naïve Bayes Algorithm." JOIV : International Journal on Informatics Visualization 2, no. 3 (May 15, 2018): 153. http://dx.doi.org/10.30630/joiv.2.3.129.

Abstract:
Illiteracy is the inability to recognize characters, either for reading or for writing. It is a significant problem for countries all around the world, including Indonesia, where the illiteracy rate is generally used as an indicator of whether or not education is successful. If this problem is not overcome, it will affect people's prosperity. One approach that has been used to overcome the problem is to prioritize treatment of the areas with the highest illiteracy rate, followed by areas with lower illiteracy rates. This approach is much easier to apply if it is supported by a classification process. Since the classification process needs classes, and there has not yet been any good classification of illiteracy rates, a clustering process is needed before classification. This research aims to obtain the optimal number of classes through a clustering process and to report the result of the illiteracy classification process. The clustering process is conducted using the k-means algorithm, and the classification process using the Naïve Bayes algorithm. The testing method used to assess the success of the classification process is the 10-fold method. Based on the results, it can be concluded that the optimal number of illiteracy classes is three, with a classification accuracy of 96.4912% and an error rate of 3.5088%, whereas classification with two classes gives an accuracy of 93.8596% and an error rate of 6.1404%, and classification with five classes gives an accuracy of 90.3509% and an error rate of 9.6491%.
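
A compact sketch of the two-stage pipeline described above: k-means clustering first assigns class labels, then Naïve Bayes is evaluated on those labels with 10-fold cross-validation. Everything concrete below (the synthetic data, four features, three clusters) is a placeholder assumption, not the paper's illiteracy dataset.

```python
# Hedged sketch: derive classes with k-means, then classify them with Naive Bayes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder "illiteracy rate" style features for 300 regions.
X = rng.normal(size=(300, 4))

# Stage 1: cluster the regions into three classes (the optimal count in the paper).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stage 2: evaluate Naive Bayes on the derived classes with 10-fold cross-validation.
acc = cross_val_score(GaussianNB(), X, labels, cv=10).mean()
print("10-fold accuracy on k-means classes:", round(acc, 4))
```
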
45

O. Lawend, Haider, Anuar M. Muad, and Aini Hussain. "Partial Histogram Bayes Learning Algorithm for Classification Applications." International Journal of Engineering & Technology 7, no. 4.11 (October 2, 2018): 126. http://dx.doi.org/10.14419/ijet.v7i4.11.20787.

Abstract:
This paper presents a proposed supervised classification technique, namely the partial histogram Bayes (PHBayes) learning algorithm. A conventional classifier based on a Gaussian function has limitations when dealing with different probability distribution functions and requires large memory for a large number of instances. Alternatively, histogram-based classifiers are flexible for different probability density functions. The aims of PHBayes are to handle a large number of instances in datasets with a smaller memory requirement and to be fast in the training and testing phases. PHBayes depends on the portion of the observed histogram that is similar to the probability density function. PHBayes was analyzed using synthetic and real data, and several factors affecting classification accuracy were considered. PHBayes was compared with other established classifiers and demonstrated more accurate classification, lower memory use even when dealing with large numbers of instances, and faster training and testing phases.
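
The histogram-based Bayes idea above can be illustrated generically: estimate each class-conditional feature density with a histogram and classify by the product of bin probabilities. This is a plain histogram Bayes sketch under simplifying assumptions (fixed equal-width bins, Laplace smoothing), not the partial-histogram variant the paper proposes.

```python
# Hedged sketch: a simple histogram-based Bayes classifier.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_bins = 10
classes = np.unique(y_tr)
# Equal-width bin edges per feature, covering the training range.
edges = [np.linspace(X_tr[:, j].min(), X_tr[:, j].max(), n_bins + 1)
         for j in range(X_tr.shape[1])]

# P(feature bin | class) with Laplace smoothing, plus class priors.
hist, priors = {}, {}
for c in classes:
    Xc = X_tr[y_tr == c]
    priors[c] = len(Xc) / len(X_tr)
    hist[c] = [(np.histogram(Xc[:, j], bins=edges[j])[0] + 1.0) / (len(Xc) + n_bins)
               for j in range(X_tr.shape[1])]

def predict(x):
    """Pick the class with the highest sum of log prior and log bin probabilities."""
    scores = {}
    for c in classes:
        log_p = np.log(priors[c])
        for j, v in enumerate(x):
            b = np.clip(np.searchsorted(edges[j], v) - 1, 0, n_bins - 1)
            log_p += np.log(hist[c][j][b])
        scores[c] = log_p
    return max(scores, key=scores.get)

pred = np.array([predict(x) for x in X_te])
print("test accuracy:", np.mean(pred == y_te))
```
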
46

Thanuja Nishadi, A. S. "Text Analysis: Naïve Bayes Algorithm using Python JupyterLab." International Journal of Scientific and Research Publications (IJSRP) 9, no. 11 (November 6, 2019): p9515. http://dx.doi.org/10.29322/ijsrp.9.11.2019.p9515.

47

Shedriko, Shedriko. "NAIVE BAYES ALGORITHM IN PREDICT GRADUATION OF STUDENTS." ZERO: Jurnal Sains, Matematika dan Terapan 3, no. 1 (July 1, 2020): 45. http://dx.doi.org/10.30829/zero.v3i1.7665.

Abstract:
The University of XYZ is a well established university with five faculties, one of which is postgraduate. Its very low tuition fees have attracted many high school graduates and have made it a university with a large number of students, so that one subject is taught by more than one lecturer. This research uses a quantitative analysis method with the Naive Bayes algorithm to decide passing on the PTI (Pengantar Teknologi Informasi) subject. The result gives a pattern of the training data, in terms of mean and standard deviation, for three attributes, i.e. task, mid-semester and final exam scores, which can classify or predict graduation for new test data. This pattern can also be used for classifying other classes, on the same or different subjects.
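
The "pattern of training data in mean and standard deviation" mentioned above is exactly what a Gaussian Naïve Bayes model stores per class and attribute. A hedged sketch with made-up task, mid-semester and final exam scores (the real grading data are not public) follows.

```python
# Hedged sketch: Gaussian Naive Bayes over three score attributes, exposing the
# per-class means and variances that form the learned "pattern".
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder rows: [task score, mid-semester score, final exam score]; 1 = pass.
X = np.array([[80, 75, 85], [90, 88, 92], [55, 60, 40], [65, 50, 45],
              [85, 70, 78], [40, 55, 50], [70, 72, 75], [50, 45, 52]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = GaussianNB().fit(X, y)
print("class means (theta_):\n", model.theta_)   # mean per class and attribute
print("class variances (var_):\n", model.var_)   # named sigma_ in older scikit-learn

# Predict graduation for a new student.
print("prediction for [75, 68, 70]:", model.predict([[75, 68, 70]])[0])
```
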
48

Ma, Jun. "Indirect density estimation using the iterative Bayes algorithm." Computational Statistics & Data Analysis 55, no. 3 (March 2011): 1180–95. http://dx.doi.org/10.1016/j.csda.2010.09.018.

49

Viet, Tran Ngoc, Hoang Le Minh, Le Cong Hieu, and Tong Hung Anh. "THE NAÏVE BAYES ALGORITHM FOR LEARNING DATA ANALYTICS." Indian Journal of Computer Science and Engineering 12, no. 4 (August 20, 2021): 1038–43. http://dx.doi.org/10.21817/indjcse/2021/v12i4/211204191.

50

Sharazita Dyah Anggita and Ikmah. "Algorithm Comparation of Naive Bayes and Support Vector Machine based on Particle Swarm Optimization in Sentiment Analysis of Freight Forwarding Services." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 4, no. 2 (April 20, 2020): 362–69. http://dx.doi.org/10.29207/resti.v4i2.1840.

Abstract:
The community's need for freight forwarding is now starting to increase along with the growth of the marketplace. User opinions about freight forwarding services are currently expressed by the public through many channels, one of them being the social media platform Twitter. With sentiment analysis, the tendency of an opinion can be seen, whether it is positive or negative. Methods that can be applied to sentiment analysis are the Naive Bayes algorithm and the Support Vector Machine (SVM). This research implements the two algorithms, optimized using the PSO algorithm, for sentiment analysis. Testing is done by setting the PSO parameters for each classifier algorithm. The results show an accuracy increase of 15.11% for the PSO-based optimization of the Naive Bayes algorithm, and an accuracy improvement of 1.74% for the PSO-based SVM algorithm with the sigmoid kernel.