Academic literature on the topic 'Unsupervised and supervised learning'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unsupervised and supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Unsupervised and supervised learning"

1

Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.

Abstract:
Traditionally, supervised machine learning (ML) algorithms rely heavily on large sets of annotated data. This is especially true for deep learning (DL) neural networks, which need huge annotated data sets for good performance. However, large volumes of annotated data are not always readily available. In addition, some of the best-performing ML and DL algorithms lack explainability; it is often difficult even for domain experts to interpret the results. This is an important consideration especially in safety-critical applications, such as AI-assisted medical endeavors, in which a DL model's failure mode is not well understood. This lack of explainability also increases the risk of malicious attacks by adversarial actors, because such attacks can be obscured in a decision-making process that lacks transparency. This paper describes an intensional learning approach that uses boosting to enhance prediction performance while minimizing reliance on the availability of annotated data. The intensional information is derived from an unsupervised preprocessing step involving clustering. Preliminary evaluation on the MNIST data set has shown encouraging results. Specifically, using the proposed approach, it is now possible to achieve accuracy similar to extensional learning alone while using only a small fraction of the original training data set.
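The paper's boosting machinery is out of scope here, but the unsupervised preprocessing idea the abstract describes can be sketched generically: cluster the raw inputs, then append each sample's distances to the cluster centroids as extra "intensional" features for a downstream supervised (e.g., boosted) learner. The data, the choice of k, and the k-means loop below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))  # stand-in for an unlabeled feature matrix

# Unsupervised preprocessing step: a few Lloyd iterations of k-means (k = 3).
k = 3
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    dists = np.linalg.norm(X[:, None] - centers, axis=2)  # (n, k) distances
    assign = dists.argmin(axis=1)
    centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])

# "Intensional" features: each sample's distance to every cluster centroid,
# appended to the raw features; a boosted supervised learner would then train
# on X_aug instead of X, needing fewer labeled examples.
dists = np.linalg.norm(X[:, None] - centers, axis=2)
X_aug = np.hstack([X, dists])
print(X_aug.shape)  # (200, 7)
```

Any supervised learner can consume `X_aug`; the clustering step itself never sees a label.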
2

Liu, MengYang, MingJun Li, and XiaoYang Zhang. "The Application of the Unsupervised Migration Method Based on Deep Learning Model in the Marketing Oriented Allocation of High Level Accounting Talents." Computational Intelligence and Neuroscience 2022 (June 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/5653942.

Abstract:
Deep learning is a branch of machine learning that uses neural networks to mimic the behaviour of the human brain. Various types of models are used in deep learning technology. This article looks at two important classes of model, with particular focus on unsupervised learning methodology: supervised and unsupervised models. The main difference is the method of training they undergo. Supervised models are trained on a particular dataset together with its outcomes; unsupervised models are given only input data, with no set outcome from which to learn. The prediction/forecasting column present in a supervised model is absent in an unsupervised one. Supervised models use regression to predict continuous quantities and classification to predict discrete class labels; unsupervised models use clustering to group similar samples and association learning to find associations between items. Unsupervised migration combines the unsupervised learning method with migration. In unsupervised learning, there is no need to supervise the models. Migration is an effective tool in processing and imaging data. Unsupervised learning allows the model to work independently on unlabeled data to discover previously undetected patterns and information, and it can tackle more complex processing tasks than supervised learning, although it is also more unpredictable. Popular unsupervised learning algorithms include k-means clustering, hierarchical clustering, the Apriori algorithm, anomaly detection, association mining, and neural networks. In this research article, we apply this deep learning model to the marketing-oriented asset allocation of high-level accounting talents.
When the proposed unsupervised migration algorithm was compared to the existing Fractional Hausdorff Grey Model, the proposed system achieved 99.12% accuracy for high-level accounting talent in market-oriented asset allocation.
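As a concrete illustration of the supervised/unsupervised distinction this abstract draws (not the paper's model), the sketch below fits a label-aware classifier and a label-free k-means on the same synthetic data; the blobs, the nearest-centroid classifier, and the deterministic initialization are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs; y is the outcome column a supervised
# model sees and an unsupervised model does not.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: learn from (X, y) pairs -- here a nearest-class-centroid rule.
sup_centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(X[:, None] - sup_centroids, axis=2).argmin(axis=1)

# Unsupervised: k-means sees only X and must discover the two groups itself.
centers = X[[0, 50]].copy()  # simple deterministic init, one point per half
for _ in range(10):
    assign = np.linalg.norm(X[:, None] - centers, axis=2).argmin(axis=1)
    centers = np.array([X[assign == j].mean(axis=0) for j in (0, 1)])

sup_acc = (pred == y).mean()
# Cluster ids are arbitrary, so score agreement up to label permutation.
unsup_agree = max((assign == y).mean(), (assign != y).mean())
print(sup_acc, unsup_agree)
```

On data this separable both approaches recover the same grouping; the difference is only in what each one is allowed to see during training.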
4

Sharma, Ritu. "Study of Supervised Learning and Unsupervised Learning." International Journal for Research in Applied Science and Engineering Technology 8, no. 6 (2020): 588–93. http://dx.doi.org/10.22214/ijraset.2020.6095.

5

Lok, Lai Kai, Vazeerudeen Abdul Hameed, and Muhammad Ehsan Rana. "Hybrid machine learning approach for anomaly detection." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 2 (2022): 1016–24. http://dx.doi.org/10.11591/ijeecs.v27.i2.pp1016-1024.

Abstract:
This research aims to improve anomaly detection performance by developing two variants of hybrid models combining supervised and unsupervised machine learning techniques. Supervised models cannot detect new or unseen types of anomaly. Hence, in variant 1, a supervised model that detects normal samples is followed by an unsupervised learning model that screens for anomalies. The unsupervised model is weak at differentiating between noise and fraud. Hence, in variant 2, the hybrid model incorporates an unsupervised model that detects anomalies, followed by a supervised model that validates them. Three different datasets are used for model evaluation. The experiment began with five supervised models and three unsupervised models. After performance evaluation, two supervised models with the highest F1-score and one unsupervised model with the best recall value were selected for hybrid model development. The variant 1 hybrid model recorded the best recall value across all the experiments, indicating that it is the best at detecting actual fraud and less likely to miss it compared to other models. The variant 2 hybrid model improves the precision score significantly compared to the original unsupervised model, indicating that it is better at separating noise from fraud.
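The variant-1 pipeline described above (a supervised model passes confident normals; an unsupervised screen inspects the rest) can be sketched with simple stand-ins for both stages. The data, thresholds, and scoring rules below are illustrative assumptions, not the models evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, (300, 2))
anomaly = rng.normal(6, 1, (12, 2))
X = np.vstack([normal, anomaly])
y = np.array([0] * 300 + [1] * 12)  # ground truth, used only for scoring

# Stage 1 -- stand-in for the supervised "normal" detector: accept samples
# close to the centroid of a known-normal training split.
mu = normal[:200].mean(axis=0)
is_normal = np.linalg.norm(X - mu, axis=1) < 3.0

# Stage 2 -- stand-in for the unsupervised screen on the remainder: a simple
# per-coordinate z-score outlier test against the stage-1 "normal" population.
rest = X[~is_normal]
z = np.abs((rest - X[is_normal].mean(0)) / X[is_normal].std(0)).max(axis=1)
flagged = z > 3.0  # final anomaly verdict for the survivors

pred = np.zeros(len(X), dtype=int)
pred[np.flatnonzero(~is_normal)[flagged]] = 1
recall = pred[y == 1].mean()
print(recall)
```

The two-stage design mirrors the abstract's logic: stage 1 removes what a supervised model is confident about, so stage 2 only has to judge the residue.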
6

Love, Bradley C. "Comparing supervised and unsupervised category learning." Psychonomic Bulletin & Review 9, no. 4 (2002): 829–35. http://dx.doi.org/10.3758/bf03196342.

7

Liu, Jianran, Chan Li, and Wenyuan Yang. "Supervised Learning via Unsupervised Sparse Autoencoder." IEEE Access 6 (2018): 73802–14. http://dx.doi.org/10.1109/access.2018.2884697.

8

Sun, Jinghan, Dong Wei, Kai Ma, Liansheng Wang, and Yefeng Zheng. "Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2307–15. http://dx.doi.org/10.1609/aaai.v36i2.20129.

Abstract:
Unsupervised pretraining based on contrastive learning has made significant progress recently and has shown comparable or even superior transfer learning performance to traditional supervised pretraining on various tasks. In this work, we first empirically investigate when and why unsupervised pretraining surpasses its supervised counterparts for image classification tasks with a series of control experiments. Besides the commonly used accuracy, we further analyze the results qualitatively with class activation maps and assess the learned representations quantitatively with representation entropy and uniformity. Our core finding is that it is the amount of information effectively perceived by the learning model, rather than the absolute size of the dataset, that is crucial to transfer learning. Based on this finding, we propose Classification Activation Map guided contrastive (CAMtrast) learning, which better utilizes label supervision to strengthen supervised pretraining by making the networks perceive more information from the training images. CAMtrast is evaluated on three fundamental visual learning tasks: image recognition, object detection, and semantic segmentation, on various public datasets. Experimental results show that CAMtrast effectively improves the performance of supervised pretraining, and that its performance is superior both to unsupervised counterparts and to a recent related work that similarly attempted to improve supervised pretraining.
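CAMtrast itself is not reproduced here, but the contrastive objective underlying this line of work (the InfoNCE / NT-Xent-style loss) is easy to state: paired "views" of the same image should be close, all other batch items far. The random embeddings below are stand-ins for network outputs, used only to show the loss behaving as expected.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss for a batch of paired embeddings (positives on the
    diagonal of the similarity matrix, all other batch items as negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(4)
z = rng.normal(size=(8, 16))
# Slightly perturbed copies act as the "second view" of each sample.
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
# Unrelated embeddings should score much worse.
shuffled = info_nce(z, rng.normal(size=z.shape))
print(aligned, shuffled)
```

A pretraining run would minimize this loss over encoder parameters; here we only check that aligned pairs score lower than random ones.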
9

Reddy, Y. C. A. Padmanabha, P. Viswanath, and B. Eswara Reddy. "Semi-supervised learning: a brief review." International Journal of Engineering & Technology 7, no. 1.8 (2018): 81. http://dx.doi.org/10.14419/ijet.v7i1.8.9977.

Abstract:
Most application domains suffer from a lack of sufficient labeled data, whereas unlabeled data is available cheaply. Obtaining labeled instances is difficult because experienced domain experts are required to label the unlabeled data patterns. Semi-supervised learning addresses this problem and acts as a halfway point between supervised and unsupervised learning. This paper covers several semi-supervised learning (SSL) techniques, such as self-training, co-training, multi-view learning, and transductive SVMs (TSVMs). Traditionally, SSL is classified into semi-supervised classification and semi-supervised clustering, which achieve better accuracy than traditional supervised and unsupervised learning techniques. The paper also addresses the issue of scalability and the applications of semi-supervised learning.
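Of the SSL techniques this review lists, self-training is the simplest to sketch: fit on the labeled pool, adopt high-confidence pseudo-labels for unlabeled points, and refit. The nearest-centroid classifier, the margin-based confidence, and the data below are illustrative assumptions rather than any specific method from the review.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two blobs; only 5 points per class carry labels, the rest are unlabeled.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)
labeled = np.zeros(len(X), dtype=bool)
labeled[[0, 1, 2, 3, 4, 100, 101, 102, 103, 104]] = True
y = np.where(labeled, y_true, -1)  # -1 marks "unlabeled"

# Self-training loop: fit nearest-centroid on everything labeled so far,
# pseudo-label only the points the model is confident about, repeat.
for _ in range(5):
    centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X[:, None] - centroids, axis=2)
    margin = np.abs(d[:, 0] - d[:, 1])     # crude confidence measure
    cand = (y == -1) & (margin > 2.0)      # adopt high-confidence labels only
    y[cand] = d[cand].argmin(axis=1)

acc = (y[y != -1] == y_true[y != -1]).mean()
print(acc)
```

The margin threshold is what keeps self-training from reinforcing its own early mistakes: ambiguous points near the class boundary simply stay unlabeled.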
10

Xu, Mingle, Sook Yoon, Jaesu Lee, and Dong Sun Park. "Unsupervised Transfer Learning for Plant Anomaly Recognition." Korean Institute of Smart Media 11, no. 4 (2022): 30–37. http://dx.doi.org/10.30693/smj.2022.11.4.30.

Abstract:
Disease threatens plant growth, and recognizing the type of disease is essential to finding a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required for decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Therefore, designing an efficient and effective strategy with few labeled data is one of the challenges in this area. Transfer learning, which assumes knowledge can be taken from a source domain to a target domain, has been borrowed to address this issue, with comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they hypothesize that there are many labeled images in the source domain. In contrast, unsupervised transfer learning, using only images in a source domain, is more convenient, as collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition, achieving better performance than supervised transfer learning in many cases. Besides, a vision transformer, with a bigger model capacity than a convolutional network, is utilized to obtain a better pretrained feature space. With vision transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope that our work can encourage the community to pay attention to vision transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
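The transfer recipe the abstract relies on (reuse an encoder pretrained without target labels, then fit a small head on the few labeled target images) can be sketched in miniature. Here a frozen random-feature map stands in for the pretrained vision transformer, and a least-squares linear probe stands in for fine-tuning; both are assumptions for the demo, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for an encoder pretrained without labels: a fixed (frozen) random
# projection followed by a ReLU nonlinearity.
W = rng.normal(size=(8, 32))
encode = lambda X: np.maximum(X @ W, 0.0)

# Target domain: two classes, only a handful of labeled samples per class.
X_train = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])
y_train = np.array([0] * 30 + [1] * 30)
X_test = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y_test = np.array([0] * 50 + [1] * 50)

# "Fine-tuning" reduced to a linear probe: least squares on frozen features.
F = np.hstack([encode(X_train), np.ones((len(X_train), 1))])
w, *_ = np.linalg.lstsq(F, 2.0 * y_train - 1.0, rcond=None)
Ft = np.hstack([encode(X_test), np.ones((len(X_test), 1))])
acc = ((Ft @ w > 0).astype(int) == y_test).mean()
print(acc)
```

The point the sketch makes is the same one the paper argues at scale: when the frozen representation is good, very few labeled target examples suffice for the final classifier.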