
Journal articles on the topic 'Bayesian classification'


Consult the top 50 journal articles for your research on the topic 'Bayesian classification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. Yazdi, Hadi Sadoghi, Mehri Sadoghi Yazdi, and Abedin Vahedian. "Fuzzy Bayesian Classification of LR Fuzzy Numbers." International Journal of Engineering and Technology 1, no. 5 (2009): 415–23. http://dx.doi.org/10.7763/ijet.2009.v1.78.

2. Wang, ShuangCheng, GuangLin Xu, and RuiJie Du. "Restricted Bayesian classification networks." Science China Information Sciences 56, no. 7 (January 9, 2013): 1–15. http://dx.doi.org/10.1007/s11432-012-4729-x.

3. Berrett, Candace, and Catherine A. Calder. "Bayesian spatial binary classification." Spatial Statistics 16 (May 2016): 72–102. http://dx.doi.org/10.1016/j.spasta.2016.01.004.

4. Dojer, Norbert, Paweł Bednarz, Agnieszka Podsiadło, and Bartek Wilczyński. "BNFinder2: Faster Bayesian network learning and Bayesian classification." Bioinformatics 29, no. 16 (July 1, 2013): 2068–70. http://dx.doi.org/10.1093/bioinformatics/btt323.

5. Reguzzoni, M., F. Sansò, G. Venuti, and P. A. Brivio. "Bayesian classification by data augmentation." International Journal of Remote Sensing 24, no. 20 (January 2003): 3961–81. http://dx.doi.org/10.1080/0143116031000103817.

6. Wang, Xiaohui, Shubhankar Ray, and Bani K. Mallick. "Bayesian Curve Classification Using Wavelets." Journal of the American Statistical Association 102, no. 479 (September 2007): 962–73. http://dx.doi.org/10.1198/016214507000000455.

7. Williams, C. K. I., and D. Barber. "Bayesian classification with Gaussian processes." IEEE Transactions on Pattern Analysis and Machine Intelligence 20, no. 12 (1998): 1342–51. http://dx.doi.org/10.1109/34.735807.

8. Dellaportas, Petros. "Bayesian classification of Neolithic tools." Journal of the Royal Statistical Society: Series C (Applied Statistics) 47, no. 2 (June 28, 2008): 279–97. http://dx.doi.org/10.1111/1467-9876.00112.

9. Hernández-Lobato, José Miguel, Daniel Hernández-Lobato, and Alberto Suárez. "Network-based sparse Bayesian classification." Pattern Recognition 44, no. 4 (April 2011): 886–900. http://dx.doi.org/10.1016/j.patcog.2010.10.016.

10. Hunter, L., and D. J. States. "Bayesian classification of protein structure." IEEE Expert 7, no. 4 (August 1992): 67–75. http://dx.doi.org/10.1109/64.153466.

11. Gupal, A. M., S. V. Pashko, and I. V. Sergienko. "Efficiency of Bayesian classification procedure." Cybernetics and Systems Analysis 31, no. 4 (July 1995): 543–54. http://dx.doi.org/10.1007/bf02366409.

12. Martinez, Matthew, Phillip L. De Leon, and David Keeley. "Bayesian classification of falls risk." Gait & Posture 67 (January 2019): 99–103. http://dx.doi.org/10.1016/j.gaitpost.2018.09.028.

13. Baram, Yoram. "Bayesian classification by iterated weighting." Neurocomputing 25, no. 1-3 (April 1999): 73–79. http://dx.doi.org/10.1016/s0925-2312(98)00110-6.

14. Lee, Michael D. "Bayesian outcome-based strategy classification." Behavior Research Methods 48, no. 1 (February 20, 2015): 29–41. http://dx.doi.org/10.3758/s13428-014-0557-9.

15. Davig, Troy, and Aaron Smalter Hall. "Recession forecasting using Bayesian classification." International Journal of Forecasting 35, no. 3 (July 2019): 848–67. http://dx.doi.org/10.1016/j.ijforecast.2018.08.005.

16. Shaarawy, Samir, and Ahmed Haroun. "Bayesian Classification with ARMA Sources." Egyptian Statistical Journal 38, no. 2 (December 1, 1994): 165–76. http://dx.doi.org/10.21608/esju.1994.314823.

17. Ershadi, Mohammad Mahdi, and Abbas Seifi. "An efficient Bayesian network for differential diagnosis using experts' knowledge." International Journal of Intelligent Computing and Cybernetics 13, no. 1 (March 9, 2020): 103–26. http://dx.doi.org/10.1108/ijicc-10-2019-0112.

Abstract:
Purpose: This study aims to support effective medical treatment through the differential diagnosis of diseases using classification methods. For this purpose, classification methods based on data, on experts' knowledge, and on both are considered; feature reduction and clustering methods are also used to improve their performance.
Design/methodology/approach: First, the performance of the classification methods is evaluated for the differential diagnosis of different diseases. Then, experts' knowledge is used to modify the structures of the Bayesian networks. Analysis of the results shows that using experts' knowledge is more effective than the other algorithms for increasing the accuracy of Bayesian network classification. Ten diseases are used for testing, taken from the Machine Learning Repository datasets of the University of California at Irvine (UCI).
Findings: The proposed method improves both the computation time and the accuracy of the classification methods used in this paper. Among all classification methods, Bayesian networks based on experts' knowledge achieve the maximum average accuracy of 87 percent, with the minimum average standard deviation of 0.04 over the sample datasets.
Practical implications: The proposed methodology can be applied to disease differential diagnosis.
Originality/value: This study demonstrates the usefulness of experts' knowledge in diagnosis while proposing an improvement method for classification. The Bayesian network based on experts' knowledge is also useful for diseases neglected by previous papers.

18. Xu, Shuo. "Bayesian Naïve Bayes classifiers to text classification." Journal of Information Science 44, no. 1 (November 1, 2016): 48–59. http://dx.doi.org/10.1177/0165551516677946.

Abstract:
Text classification is the task of assigning predefined categories to natural language documents, and it can provide conceptual views of document collections. The Naïve Bayes (NB) classifier is a family of simple probabilistic classifiers based on the common assumption that all features are independent of each other given the category variable, and it is often used as the baseline in text classification. However, classical NB classifiers with multinomial, Bernoulli and Gaussian event models are not fully Bayesian. This study proposes three Bayesian counterparts, and it turns out that the classical NB classifier with the Bernoulli event model is equivalent to its Bayesian counterpart. Finally, experimental results on the 20 newsgroups and WebKB data sets show that the performance of the Bayesian NB classifier with the multinomial event model is similar to that of its classical counterpart, but the Bayesian NB classifier with the Gaussian event model is clearly better than its classical counterpart.
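To make the baseline in the abstract above concrete, here is a minimal sketch of a classical multinomial NB text classifier with Laplace smoothing. This is not the paper's code; the toy documents and labels are invented for illustration:

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs, labels, alpha=1.0):
    """Fit class log-priors and smoothed per-class word counts."""
    vocab = {w for d in docs for w in d.split()}
    by_class = defaultdict(list)
    for d, y in zip(docs, labels):
        by_class[y].append(d)
    priors = {y: math.log(len(ds) / len(docs)) for y, ds in by_class.items()}
    counts = {y: Counter(w for d in ds for w in d.split()) for y, ds in by_class.items()}
    totals = {y: sum(c.values()) for y, c in counts.items()}
    return vocab, priors, counts, totals, alpha

def predict(model, doc):
    """Return the class with the highest posterior log-probability."""
    vocab, priors, counts, totals, alpha = model

    def log_post(y):
        lp = priors[y]
        for w in doc.split():
            if w in vocab:  # ignore out-of-vocabulary words
                lp += math.log((counts[y][w] + alpha) / (totals[y] + alpha * len(vocab)))
        return lp

    return max(priors, key=log_post)

docs = ["buy cheap pills now", "cheap offer buy now",
        "meeting agenda for project", "project deadline meeting"]
labels = ["spam", "spam", "ham", "ham"]
model = train_multinomial_nb(docs, labels)
print(predict(model, "cheap pills offer"))         # spam
print(predict(model, "project meeting deadline"))  # ham
```

The "not fully Bayesian" point in the abstract refers to the fact that this classifier plugs in point estimates of the word probabilities rather than integrating over their posterior.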

19. Puspitowati, Yenita Endah, Mumun Nurwilawati, and Daniel Swanjaya. "Klasifikasi Siswa Menggunakan Bayesian Classification Di Uptd Smp Negeri 2 Baron" [Classification of Students Using Bayesian Classification at UPTD SMP Negeri 2 Baron]. Melek IT: Information Technology Journal 1, no. 1 (April 21, 2021): 15–22. https://doi.org/10.30742/melekitjournal.v1i1.36.

Abstract:
Supporting students academically is not enough; non-academic support such as counselling is also needed. Through counselling, students' psychological states and the problems they are experiencing become known, so student counselling services are required. Giving students a questionnaire is a way to discover their underlying problems. This research develops a classification of students' problems using Bayesian classification. Data obtained from a computerized questionnaire are weighted and processed with a Bayesian classifier to determine appropriate student services. Classifying students in this way matches services to each student's underlying problems, lets students raise personal problems without difficulty or embarrassment, and helps administrators process the data needed to deliver appropriate services. The Bayesian classification algorithm is well suited to improving the performance and efficiency of guidance counsellors in serving students.

20. Siagian, Novriadi Antonius, Sutarman Wage, and Sawaluddin. "Dataset Weighting Features Using Gain Ratio To Improve Method Accuracy Naïve Bayesian Classification." IOP Conference Series: Earth and Environmental Science 748, no. 1 (April 1, 2021): 012034. http://dx.doi.org/10.1088/1755-1315/748/1/012034.

Abstract:
The Naïve Bayes method is proven to be fast when applied to large datasets, but it is weak at attribute selection: as a statistical classification method based only on Bayes' theorem, it predicts the probability of class membership under the assumption that attributes are independent, and it cannot select attributes that are highly correlated with one another, which can affect accuracy. Weighted Naïve Bayes has been shown to provide better accuracy than conventional Naïve Bayes. The highest accuracy, 88.57%, is obtained on the Water Quality dataset with the Weighted Naïve Bayesian classification model, while the lowest, 78.95%, is obtained on the Haberman dataset with the conventional Naïve Bayesian model. The accuracy gain of the Weighted model is 2.9% on the Water Quality dataset and 1.8% on the Haberman dataset, an average improvement of 2.35% across the datasets. Based on testing on all test data, the Weighted Naïve Bayesian classification model provides better accuracy than the conventional Naïve Bayesian classification model.
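The gain-ratio weighting referred to in the abstract above can be sketched generically (this is not the paper's implementation; the toy attribute/label data are invented): gain ratio is the information gain of an attribute divided by its split information.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Information gain of an attribute divided by its split information."""
    n = len(labels)
    gain = entropy(labels)
    split_info = 0.0
    for v in set(values):
        subset = [y for x, y in zip(values, labels) if x == v]
        p = len(subset) / n
        gain -= p * entropy(subset)       # subtract conditional entropy
        split_info -= p * math.log2(p)    # penalize many-valued splits
    return gain / split_info if split_info else 0.0

# A perfectly predictive attribute scores 1.0; an irrelevant one scores 0.0.
print(gain_ratio(["a", "a", "b", "b"], [0, 0, 1, 1]))  # 1.0
print(gain_ratio(["a", "b", "a", "b"], [0, 0, 1, 1]))  # 0.0
```

In a weighted Naïve Bayes scheme along these lines, each attribute's per-class likelihood would be raised to a power proportional to its gain ratio, so informative attributes dominate the posterior.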
21

DEL ÁGUILA, ISABEL MARÍA, and JOSÉ DEL SAGRADO. "REQUIREMENT RISK LEVEL FORECAST USING BAYESIAN NETWORKS CLASSIFIERS." International Journal of Software Engineering and Knowledge Engineering 21, no. 02 (March 2011): 167–90. http://dx.doi.org/10.1142/s0218194011005219.

Full text
Abstract:
Requirement engineering is a key issue in the development of a software project. Like any other development activity it is not without risks. This work is about the empirical study of risks of requirements by applying machine learning techniques, specifically Bayesian networks classifiers. We have defined several models to predict the risk level for a given requirement using three dataset that collect metrics taken from the requirement specifications of different projects. The classification accuracy of the Bayesian models obtained is evaluated and compared using several classification performance measures. The results of the experiments show that the Bayesians networks allow obtaining valid predictors. Specifically, a tree augmented network structure shows a competitive experimental performance in all datasets. Besides, the relations established between the variables collected to determine the level of risk in a requirement, match with those set by requirement engineers. We show that Bayesian networks are valid tools for the automation of risks assessment in requirement engineering.
APA, Harvard, Vancouver, ISO, and other styles

22. Reddy, Soma Datta, and Sunitha Palissery. "Uncertainty-Aware Seismic Signal Discrimination using Bayesian Convolutional Neural Networks." International Journal on Cybernetics & Informatics 13, no. 5 (October 7, 2024): 207–18. http://dx.doi.org/10.5121/ijci.2024.130513.

Abstract:
Seismic signal classification plays a crucial role in mitigating the impact of seismic events on human lives and infrastructure. Traditional methods in seismic hazard assessment often overlook the inherent uncertainties associated with the prediction of this complex geological phenomenon. This work introduces a probabilistic framework that leverages Bayesian principles to model and quantify uncertainty in seismic signal classification by applying a Bayesian Convolutional Neural Network (BCNN). The BCNN was trained on a dataset that comprises waveforms detected in the Southern California region and achieved an accuracy of 99.1%. Monte Carlo sampling subsequently creates a 95% prediction interval for probabilities that considers epistemic and aleatoric uncertainties. The ability to visualize both aleatoric and epistemic uncertainties provides decision-makers with information to determine the reliability of seismic signal classifications. Further, the use of a Bayesian CNN for seismic signal classification provides a more robust foundation for decision-making and risk assessment in earthquake-prone regions.
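The Monte Carlo interval step described in the abstract above can be illustrated generically: repeated stochastic forward passes yield a sample of predicted probabilities, from which an empirical 95% interval is read off. The Gaussian `noisy_predict` below is only a stand-in for a Bayesian CNN's stochastic predictions, not the paper's model:

```python
import random

def mc_prediction_interval(sample_fn, n=2000, level=0.95):
    """Empirical central prediction interval from n Monte Carlo draws."""
    samples = sorted(sample_fn() for _ in range(n))
    lo_idx = int((1 - level) / 2 * n)
    hi_idx = int((1 + level) / 2 * n) - 1
    return samples[lo_idx], samples[hi_idx]

random.seed(0)
# Stand-in for stochastic forward passes of a Bayesian CNN: the predicted
# probability of one class, jittered by (assumed) weight uncertainty.
noisy_predict = lambda: min(1.0, max(0.0, random.gauss(0.9, 0.02)))

lo, hi = mc_prediction_interval(noisy_predict)
print(f"95% prediction interval: [{lo:.3f}, {hi:.3f}]")
```

A wide interval signals high epistemic uncertainty, which is exactly the information the abstract argues decision-makers need before acting on a classification.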

23. Long, Yuqi, and Xingzhong Xu. "Bayesian decision rules to classification problems." Australian & New Zealand Journal of Statistics 63, no. 2 (May 24, 2021): 394–415. http://dx.doi.org/10.1111/anzs.12325.

24. Mittal, Amit Kumar, Shivangi Mittal, and Digendra Singh Rathore. "Bayesian Classification for Social Media Text." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 641–46. http://dx.doi.org/10.26438/ijcse/v6i7.641646.

25. dos Santos, Edimilson B., Estevam R. Hruschka, Eduardo R. Hruschka, and Nelson F. F. Ebecken. "Bayesian network classifiers: Beyond classification accuracy." Intelligent Data Analysis 15, no. 3 (May 4, 2011): 279–98. http://dx.doi.org/10.3233/ida-2010-0468.

26. Ruiz, Pablo, Javier Mateos, Gustavo Camps-Valls, Rafael Molina, and Aggelos K. Katsaggelos. "Bayesian Active Remote Sensing Image Classification." IEEE Transactions on Geoscience and Remote Sensing 52, no. 4 (April 2014): 2186–96. http://dx.doi.org/10.1109/tgrs.2013.2258468.

27. Cruz-Mesía, Rolando De la, Fernando A. Quintana, and Peter Müller. "Semiparametric Bayesian classification with longitudinal markers." Journal of the Royal Statistical Society: Series C (Applied Statistics) 56, no. 2 (March 2007): 119–37. http://dx.doi.org/10.1111/j.1467-9876.2007.00569.x.

28. Lin, Tein-Hsiang, and Kang G. Shin. "A Bayesian approach to fault classification." ACM SIGMETRICS Performance Evaluation Review 18, no. 1 (April 1990): 58–66. http://dx.doi.org/10.1145/98460.98505.

29. Akhtar, Naveed, Faisal Shafait, and Ajmal Mian. "Discriminative Bayesian Dictionary Learning for Classification." IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 12 (December 1, 2016): 2374–88. http://dx.doi.org/10.1109/tpami.2016.2527652.

30. Hurn, M. A., K. V. Mardia, T. J. Hainsworth, J. Kirkbride, and E. Berry. "Bayesian fused classification of medical images." IEEE Transactions on Medical Imaging 15, no. 6 (1996): 850–58. http://dx.doi.org/10.1109/42.544502.

31. Hong, Euy-Seok. "Software Quality Classification using Bayesian Classifier." Journal of the Korea Society of IT Services 11, no. 1 (March 31, 2012): 211–21. http://dx.doi.org/10.9716/kits.2012.11.1.211.

32. Wan, E. A. "Neural network classification: a Bayesian interpretation." IEEE Transactions on Neural Networks 1, no. 4 (1990): 303–5. http://dx.doi.org/10.1109/72.80269.

33. Dellaportas, Petros. "Corrigendum: Bayesian classification of Neolithic tools." Journal of the Royal Statistical Society: Series C (Applied Statistics) 47, no. 4 (January 6, 2002): 620. http://dx.doi.org/10.1111/1467-9876.00133.

34. Vinay, A., Abhijay Gupta, Aprameya Bharadwaj, Arvind Srinivasan, K. N. Balasubramanya Murthy, and S. Natarajan. "Unconstrained Face Recognition using Bayesian Classification." Procedia Computer Science 143 (2018): 519–27. http://dx.doi.org/10.1016/j.procs.2018.10.425.

35. Kabán, Ata. "On Bayesian classification with Laplace priors." Pattern Recognition Letters 28, no. 10 (July 2007): 1271–82. http://dx.doi.org/10.1016/j.patrec.2007.02.010.

36. Ni, Yang, Peter Müller, Maurice Diesendruck, Sinead Williamson, Yitan Zhu, and Yuan Ji. "Scalable Bayesian Nonparametric Clustering and Classification." Journal of Computational and Graphical Statistics 29, no. 1 (July 19, 2019): 53–65. http://dx.doi.org/10.1080/10618600.2019.1624366.

37. Dadaneh, Siamak Zamani, Edward R. Dougherty, and Xiaoning Qian. "Optimal Bayesian Classification With Missing Values." IEEE Transactions on Signal Processing 66, no. 16 (August 15, 2018): 4182–92. http://dx.doi.org/10.1109/tsp.2018.2847660.

38. Klocker, Johanna, Bettina Wailzer, Gerhard Buchbauer, and Peter Wolschann. "Bayesian Neural Networks for Aroma Classification." Journal of Chemical Information and Computer Sciences 42, no. 6 (November 2002): 1443–49. http://dx.doi.org/10.1021/ci0202640.

39. Zhang, Xunan, Shiji Song, and Cheng Wu. "Robust Bayesian Classification with Incomplete Data." Cognitive Computation 5, no. 2 (September 21, 2012): 170–87. http://dx.doi.org/10.1007/s12559-012-9188-6.

40. Bielza, C., G. Li, and P. Larrañaga. "Multi-dimensional classification with Bayesian networks." International Journal of Approximate Reasoning 52, no. 6 (September 2011): 705–27. http://dx.doi.org/10.1016/j.ijar.2011.01.007.

41. Gutiérrez, Luis, Eduardo Gutiérrez-Peña, and Ramsés H. Mena. "Bayesian nonparametric classification for spectroscopy data." Computational Statistics & Data Analysis 78 (October 2014): 56–68. http://dx.doi.org/10.1016/j.csda.2014.04.010.

42. Krzanowski, Wojtek J., Trevor C. Bailey, Derek Partridge, Jonathan E. Fieldsend, Richard M. Everson, and Vitaly Schetinin. "Confidence in Classification: A Bayesian Approach." Journal of Classification 23, no. 2 (September 2006): 199–220. http://dx.doi.org/10.1007/s00357-006-0013-3.

43. Carhart, Gary W., Bret F. Draayer, and Michael K. Giles. "Optical pattern recognition using Bayesian classification." Pattern Recognition 27, no. 4 (April 1994): 587–606. http://dx.doi.org/10.1016/0031-3203(94)90039-6.

44. Kehagias, A. "Bayesian classification of Hidden Markov Models." Mathematical and Computer Modelling 23, no. 5 (March 1996): 25–43. http://dx.doi.org/10.1016/0895-7177(96)00010-6.

45. Cazorla, M. A., and F. Escolano. "Two Bayesian methods for junction classification." IEEE Transactions on Image Processing 12, no. 3 (March 2003): 317–27. http://dx.doi.org/10.1109/tip.2002.806242.

46. Nykaza, Edward T., Matthew G. Blevins, Carl R. Hart, and Anton Netchaev. "Bayesian classification of environmental noise sources." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3522. http://dx.doi.org/10.1121/1.4987416.

47. Flach, Peter A., and Nicolas Lachiche. "Naive Bayesian Classification of Structured Data." Machine Learning 57, no. 3 (December 2004): 233–69. http://dx.doi.org/10.1023/b:mach.0000039778.69032.ab.

48. Klein, Ruben, and S. James Press. "Adaptive Bayesian Classification of Spatial Data." Journal of the American Statistical Association 87, no. 419 (September 1992): 844–51. http://dx.doi.org/10.1080/01621459.1992.10475287.

49. Li, Xiuqi, and Subhashis Ghosal. "Bayesian classification of multiclass functional data." Electronic Journal of Statistics 12, no. 2 (2018): 4669–96. http://dx.doi.org/10.1214/18-ejs1522.

50. Konomi, Bledar A., Soma S. Dhavala, Jianhua Z. Huang, Subrata Kundu, David Huitink, Hong Liang, Yu Ding, and Bani K. Mallick. "Bayesian object classification of gold nanoparticles." Annals of Applied Statistics 7, no. 2 (June 2013): 640–68. http://dx.doi.org/10.1214/12-aoas616.