To see the other types of publications on this topic, follow the link: Self-supervised learning.

Journal articles on the topic 'Self-supervised learning'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Self-supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Taeheon, Jaewon Hur, and Youkyung Han. "Very High-Resolution Satellite Image Registration Based on Self-supervised Deep Learning." Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography 41, no. 4 (2023): 217–25. http://dx.doi.org/10.7848/ksgpc.2023.41.4.217.

2

Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning approach." Korean Institute of Smart Media 12, no. 1 (2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.

Abstract:
Natural Language Processing (NLP) has grown tremendously in recent years. Typically, bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. On the contrary, few studies have focused on translating between spoken and sign languages, especially non-English languages. Prior works on Sign Language Translation (SLT) have shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Ko…
3

Han, Xizhen, Zhengang Jiang, Yuanyuan Liu, Jian Zhao, Qiang Sun, and Jianzhuo Liu. "Self-Supervised Hyperspectral Image Classification under the BYOL Framework" [in Chinese]. Infrared and Laser Engineering 53, no. 10 (2024): 20240215. https://doi.org/10.3788/irla20240215.

4

Liang, Dan, Haimiao Zhang, and Jun Qiu. "Spatial-Domain Super-Resolution Imaging of Light Fields Based on Self-Supervised Learning" [in Chinese]. Laser & Optoelectronics Progress 61, no. 4 (2024): 0411007. http://dx.doi.org/10.3788/lop231188.

5

Burlacu, Alexandru. "OVERVIEW OF COMPUTER VISION SUPERVISED LEARNING TECHNIQUES FOR LOW-DATA TRAINING." Journal of Social Sciences III (3) (September 1, 2020): 18–28. https://doi.org/10.5281/zenodo.3971950.

Abstract:
In the age of big data and machine learning, the costs of turning data into fuel for algorithms are prohibitively high. Organizations that can train better models with fewer annotation efforts will have a competitive edge. This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning for computer vision algorithms. The paper starts by describing various methods to ease the need for a big labeled dataset, giving some background on supervised, weakly-supervised, and then self-supervised learning in general and in computer vision…
6

Burlacu, Alexandru. "OVERVIEW OF COMPUTER VISION SUPERVISED LEARNING TECHNIQUES FOR LOW-DATA TRAINING." Journal of Engineering Science XXVII (4) (December 15, 2020): 197–207. https://doi.org/10.5281/zenodo.4298709.

Abstract:
In the age of big data and machine learning, the costs of turning data into fuel for algorithms are prohibitively high. Organizations that can train better models with fewer annotation efforts will have a competitive edge. This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning for computer vision algorithms. The paper starts by describing various methods to ease the need for a big labeled dataset, giving some background on supervised, weakly-supervised, and then self-supervised learning in general and in computer vision…
7

Zhao, Qingyu, Zixuan Liu, Ehsan Adeli, and Kilian M. Pohl. "Longitudinal self-supervised learning." Medical Image Analysis 71 (July 2021): 102051. http://dx.doi.org/10.1016/j.media.2021.102051.

8

Feng, Fan, Yongsheng Zhang, Jin Zhang, Bing Liu, and Ying Yu. "Self-Supervised Feature Learning Method for Hyperspectral Images Based on a Hybrid Convolutional Network" [in Chinese]. Acta Optica Sinica 44, no. 18 (2024): 1828007. http://dx.doi.org/10.3788/aos231776.

9

Huang, Junjie, Feng Xu, Liang Luo, and Tianbao Chen. "Three-Dimensional Reconstruction of Ocean Waves Based on Masks and Self-Supervised Learning" [in Chinese]. Laser & Optoelectronics Progress 61, no. 14 (2024): 1437008. http://dx.doi.org/10.3788/lop231953.

10

Gao, Xiaoling, Muhammad Izzad Ramli, Marshima Mohd Rosli, Nursuriati Jamil, and Syed Mohd Zahid Syed Zainal Ariffin. "Revisiting self-supervised contrastive learning for imbalanced classification." International Journal of Electrical and Computer Engineering (IJECE) 15, no. 2 (2025): 1949–60. https://doi.org/10.11591/ijece.v15i2.pp1949-1960.

Abstract:
Class imbalance remains a formidable challenge in machine learning, particularly affecting fields that depend on accurate classification across skewed datasets, such as medical imaging and software defect prediction. Traditional approaches often fail to adequately address the underrepresentation of minority classes, leading to models that exhibit high performance on majority classes but have poor performance on critical minority classes. Self-supervised contrastive learning has become an extremely encouraging method for this issue, enabling the utilization of unlabeled data to generate robust…
11

Zhang, Weiyi, Haoran Zhang, Qi Lan, et al. "Self-supervised PSF-informed deep learning enables real-time deconvolution for optical coherence tomography." Advanced Imaging 2, no. 2 (2025): 021001. https://doi.org/10.3788/ai.2025.10026.

12

Gaurav, Kashyap. "Self-Supervised Learning: How Self-Supervised Learning Approaches Can Reduce Dependence on Labeled Data." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 10, no. 4 (2024): 1–10. https://doi.org/10.5281/zenodo.14507625.

Abstract:
A promising paradigm that lessens the need for sizable labeled datasets for machine learning model training is self-supervised learning (SSL). SSL models are able to learn data representations through pretext tasks by utilizing unlabeled data. These representations can then be refined for tasks that come after. The development of self-supervised learning, its underlying techniques, and its potential to address the difficulties associated with obtaining labeled data are all examined in this paper. We go over the main self-supervised methods, their uses, and how they might improve the generaliza…
13

Kim, Daehak. "A study on a semi-supervised learning using self-supervised learning." Journal of the Korean Data And Information Science Society 34, no. 6 (2023): 967–77. http://dx.doi.org/10.7465/jkdi.2023.34.6.967.

14

Akande, Jamiu Olamilekan. "Designing AI-Augmented Intrusion Detection Systems Using Self-Supervised Learning and Adversarial Threat Signal Modeling." International Journal of Research Publication and Reviews 6, no. 6 (2025): 568–89. https://doi.org/10.55248/gengpi.6.0625.2338.

15

Wang, Ze-Hao, Tong-Tian Weng, Xiang-Dong Chen, Li Zhao, and Fang-Wen Sun. "SSL Depth: self-supervised learning enables 16× speedup in confocal microscopy-based 3D surface imaging [Invited]." Chinese Optics Letters 22, no. 6 (2024): 060002. http://dx.doi.org/10.3788/col202422.060002.

16

Höppe, Tobias, Agnieszka Miszkurka, and Dennis Bogatov Wilkman. "[Re] Understanding Self-Supervised Learning Dynamics without Contrastive Pairs." ReScience C 8, no. 2 (2022): #17. https://doi.org/10.5281/zenodo.6574659.

17

Li, Shuokai, Ruobing Xie, Yongchun Zhu, et al. "Self-Supervised learning for Conversational Recommendation." Information Processing & Management 59, no. 6 (2022): 103067. http://dx.doi.org/10.1016/j.ipm.2022.103067.

18

Wu, Haiping, Khimya Khetarpal, and Doina Precup. "Self-Supervised Attention-Aware Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10311–19. http://dx.doi.org/10.1609/aaai.v35i12.17235.

Abstract:
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analyzing tool rather than an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can 1. learn to select regions of interest without explicit annotations, and 2. act as a plug for existing deep RL methods to improve the learning performance. We empirically show that the self-supervised attention-aware d…
19

Salazar, Domingos S. P. "Nonequilibrium thermodynamics of self-supervised learning." Physics Letters A 419 (December 2021): 127756. http://dx.doi.org/10.1016/j.physleta.2021.127756.

20

Chauhan, Mihir, Mohammad Abuzar Hashemi, Abhishek Satbhai, et al. "Self-supervised learning based handwriting verification." IET Conference Proceedings 2024, no. 10 (2024): 170–77. https://doi.org/10.1049/icp.2024.3302.

21

Shenoy, Jayanth, Xingjian Davis Zhang, Bill Tao, et al. "Self-Supervised Learning across the Spectrum." Remote Sensing 16, no. 18 (2024): 3470. http://dx.doi.org/10.3390/rs16183470.

Abstract:
Satellite image time series (SITS) segmentation is crucial for many applications, like environmental monitoring, land cover mapping, and agricultural crop type classification. However, training models for SITS segmentation remains a challenging task due to the lack of abundant training data, which requires fine-grained annotation. We propose S4, a new self-supervised pretraining approach that significantly reduces the requirement for labeled training data by utilizing two key insights of satellite imagery: (a) Satellites capture images in different parts of the spectrum, such as radio frequenc…
22

Li, Zhongnian, Jiayu Wang, Qingcong Geng, and Xinzheng Xu. "Group-based siamese self-supervised learning." Electronic Research Archive 32, no. 8 (2024): 4913–25. http://dx.doi.org/10.3934/era.2024226.

Abstract:
In this paper, we introduced a novel group self-supervised learning approach designed to improve visual representation learning. This new method aimed to rectify the limitations observed in conventional self-supervised learning. Traditional methods tended to focus on embedding distortion-invariant in single-view features. However, our belief was that a better representation can be achieved by creating a group of features derived from multiple views. To expand the siamese self-supervised architecture, we increased the number of image instances in each crop, enabling us to obtain an ave…
23

Hrycej, Tomas. "Supporting supervised learning by self-organization." Neurocomputing 4, no. 1-2 (1992): 17–30. http://dx.doi.org/10.1016/0925-2312(92)90040-v.

24

Wang, Fei, and Changshui Zhang. "Robust self-tuning semi-supervised learning." Neurocomputing 70, no. 16-18 (2007): 2931–39. http://dx.doi.org/10.1016/j.neucom.2006.11.004.

25

Liu, Shuo, Adria Mallol-Ragolta, Emilia Parada-Cabaleiro, et al. "Audio self-supervised learning: A survey." Patterns 3, no. 12 (2022): 100616. http://dx.doi.org/10.1016/j.patter.2022.100616.

26

Han, Kyoungmin, and Minsik Lee. "Self-supervised learning with ensemble representations." Engineering Applications of Artificial Intelligence 143 (March 2025): 110007. https://doi.org/10.1016/j.engappai.2025.110007.

27

Singh, Arunima. "Self-supervised learning of molecular representations." Nature Methods 22, no. 7 (2025): 1395. https://doi.org/10.1038/s41592-025-02757-5.

28

Sharma, Mohak, Neeraj Gandhi, Supreme Datta, Bhavani Annarapu, Krutika Arvind Tomanvar, and Mayuresh Bhovardhan. "Reinforcement Learning and its application in making Recommendation System." International Journal of All Research Education and Scientific Methods (IJARESM) 11, no. 2 (2023): 278–83. https://doi.org/10.5281/zenodo.8076574.

Abstract:
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents take actions in an environment in order to maximize the total reward. RL works on the Markov Decision Process (MDP), which leads to Q-learning. An MDP provides a mechanism to maximize the reward in a given environment. Deep reinforcement learning (DRL) is the combination of reinforcement learning and deep learning. DRL has applications in many fields like medicine, robotics, games, etc. Combining DL and RL leads to the formation of Deep Q-Networks. Another application of RL and the focus of t…
29

Xi, Liang, Zichao Yun, Han Liu, Ruidong Wang, Xunhua Huang, and Haoyi Fan. "Semi-supervised Time Series Classification Model with Self-supervised Learning." Engineering Applications of Artificial Intelligence 116 (November 2022): 105331. http://dx.doi.org/10.1016/j.engappai.2022.105331.

30

Indris, Christopher, Fady Ibrahim, Hatem Ibrahem, et al. "Supervised and Self-Supervised Learning for Assembly Line Action Recognition." Journal of Imaging 11, no. 1 (2025): 17. https://doi.org/10.3390/jimaging11010017.

Abstract:
The safety and efficiency of assembly lines are critical to manufacturing, but human supervisors cannot oversee all activities simultaneously. This study addresses this challenge by performing a comparative study to construct an initial real-time, semi-supervised temporal action recognition setup for monitoring worker actions on assembly lines. Various feature extractors and localization models were benchmarked using a new assembly dataset, with the I3D model achieving an average mAP@IoU=0.1:0.7 of 85% without optical flow or fine-tuning. The comparative study was extended to self-supervised l…
31

Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (2021): 3043. http://dx.doi.org/10.3390/app11073043.

Abstract:
We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn the general representation of natural images without labels. In this study, a convolutional neural network was pre-trained with natural images (ImageNet) via self-supervised learning; subsequently, it was fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved the sound classification performance when validated on the follo…
32

Liu, Yuanyuan, and Qianqian Liu. "Research on Self-Supervised Comparative Learning for Computer Vision." Journal of Electronic Research and Application 5, no. 3 (2021): 5–17. http://dx.doi.org/10.26689/jera.v5i3.2320.

Abstract:
In recent years, self-supervised learning, which does not require a large number of manual labels, generates supervised signals through the data itself to attain the characterization learning of samples. Self-supervised learning solves the problem of learning semantic features from unlabeled data and realizes pre-training of models on large data sets. Its significant advantages have been extensively studied by scholars in recent years. There are usually three types of self-supervised learning: “Generative, Contrastive, and Generative-Contrastive.” The model of the comparative learning method is…
33

Xu, Jiashu, and Sergii Stirenko. "Denoising Self-Distillation Masked Autoencoder for Self-Supervised Learning." International Journal of Image, Graphics and Signal Processing 15, no. 5 (2023): 29–38. http://dx.doi.org/10.5815/ijigsp.2023.05.03.

34

Wei, Longhui, Lingxi Xie, Jianzhong He, Xiaopeng Zhang, and Qi Tian. "Can Semantic Labels Assist Self-Supervised Visual Representation Learning?" Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 2642–50. http://dx.doi.org/10.1609/aaai.v36i3.20166.

Abstract:
Recently, contrastive learning has largely advanced the progress of unsupervised visual representation learning. Pre-trained on ImageNet, some self-supervised algorithms reported higher transfer learning performance compared to fully-supervised methods, seeming to deliver the message that human labels hardly contribute to learning transferrable visual features. In this paper, we defend the usefulness of semantic labels but point out that fully-supervised and self-supervised methods are pursuing different kinds of features. To alleviate this issue, we present a new algorithm named Supervised Co…
35

Shwartz Ziv, Ravid, and Yann LeCun. "To Compress or Not to Compress—Self-Supervised Learning and Information Theory: A Review." Entropy 26, no. 3 (2024): 252. http://dx.doi.org/10.3390/e26030252.

Abstract:
Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory has shaped deep neural networks, particularly the information bottleneck principle. This principle optimizes the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work,…
36

Taherdoost, Hamed. "Beyond Supervised: The Rise of Self-Supervised Learning in Autonomous Systems." Information 15, no. 8 (2024): 491. http://dx.doi.org/10.3390/info15080491.

Abstract:
Supervised learning has been the cornerstone of many successful medical imaging applications. However, its reliance on large labeled datasets poses significant challenges, especially in the medical domain, where data annotation is time-consuming and expensive. In response, self-supervised learning (SSL) has emerged as a promising alternative, leveraging unlabeled data to learn meaningful representations without explicit supervision. This paper provides a detailed overview of supervised learning and its limitations in medical imaging, underscoring the need for more efficient and scalable approa…
37

Zhuang, Benhui, Chunhong Zhang, and Zheng Hu. "Self-Supervised Skill Learning for Semi-Supervised Long-Horizon Instruction Following." Electronics 12, no. 7 (2023): 1587. http://dx.doi.org/10.3390/electronics12071587.

Abstract:
Language as an abstraction for hierarchical agents is promising to solve compositional long-time horizon decision-making tasks. The learning of the agent poses significant challenges, as it typically requires plenty of trajectories annotated with languages. This paper addresses the challenge of learning such an agent under the scarcity of language annotations. One approach for leveraging unannotated data is to generate pseudo-labels for unannotated trajectories using sparse seed annotations. However, as the scenes of the environment and tasks assigned to the agent are diverse, the inference of…
38

Sheng, Jinrong, Jiaruo Yu, Ziqiang Li, Ao Li, and Yongxin Ge. "Self-supervised temporal adaptive learning for weakly-supervised temporal action localization." Information Sciences 705 (July 2025): 121986. https://doi.org/10.1016/j.ins.2025.121986.

39

Sabiri, Bihi, Amal Khtira, Bouchra El Asri, and Maryem Rhanoui. "Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning." Journal of Imaging 10, no. 8 (2024): 196. http://dx.doi.org/10.3390/jimaging10080196.

Abstract:
In recent years, contrastive learning has been a highly favored method for self-supervised representation learning, which significantly improves the unsupervised training of deep image models. Self-supervised learning is a subset of unsupervised learning in which the learning process is supervised by creating pseudolabels from the data themselves. Using supervised final adjustments after unsupervised pretraining is one way to take the most valuable information from a vast collection of unlabeled data and teach from a small number of labeled instances. This study aims firstly to compare contras…
40

Epstein, Sean C., Timothy J. P. Bray, Margaret Hall-Craggs, and Hui Zhang. "Choice of training label matters: how to best use deep learning for quantitative MRI parameter estimation." Machine Learning for Biomedical Imaging 2, January 2024 (2024): 586–610. http://dx.doi.org/10.59275/j.melba.2024-geb5.

Abstract:
Deep learning (DL) is gaining popularity as a parameter estimation method for quantitative MRI. A range of competing implementations have been proposed, relying on either supervised or self-supervised learning. Self-supervised approaches, sometimes referred to as unsupervised, have been loosely based on auto-encoders, whereas supervised methods have, to date, been trained on groundtruth labels. These two learning paradigms have been shown to have distinct strengths. Notably, self-supervised approaches offer lower-bias parameter estimates than their supervised alternatives. This result is count…
41

Li, Shanshan, Yutong Jia, You Wu, Ning Wei, Liyan Zhang, and Jingfeng Guo. "Knowledge-Aware Graph Self-Supervised Learning for Recommendation." Electronics 12, no. 23 (2023): 4869. http://dx.doi.org/10.3390/electronics12234869.

Abstract:
Collaborative filtering (CF) based on graph neural networks (GNN) can capture higher-order relationships between nodes, which in turn improves recommendation performance. Although effective, GNN-based methods still face the challenges of sparsity and noise in real scenarios. In recent years, researchers have introduced graph self-supervised learning (SSL) techniques into CF to alleviate the sparse supervision problem. The technique first augments the data to obtain contrastive views and then utilizes the mutual information maximization to provide self-supervised signals for the contrastive vie…
42

Tripathi, Achyut Mani, and Aakansha Mishra. "Self-supervised learning for Environmental Sound Classification." Applied Acoustics 182 (November 2021): 108183. http://dx.doi.org/10.1016/j.apacoust.2021.108183.

43

ITO, Seiya, Naoshi KANEKO, and Kazuhiko SUMI. "Self-Supervised Learning for Multi-View Stereo." Journal of the Japan Society for Precision Engineering 86, no. 12 (2020): 1042–50. http://dx.doi.org/10.2493/jjspe.86.1042.

44

Hayat, Md Abul, George Stein, Peter Harrington, Zarija Lukić, and Mustafa Mustafa. "Self-supervised Representation Learning for Astronomical Images." Astrophysical Journal Letters 911, no. 2 (2021): L33. http://dx.doi.org/10.3847/2041-8213/abf2c7.

45

Che, Feihu, Guohua Yang, Dawei Zhang, Jianhua Tao, and Tong Liu. "Self-supervised graph representation learning via bootstrapping." Neurocomputing 456 (October 2021): 88–96. http://dx.doi.org/10.1016/j.neucom.2021.03.123.

46

Jaiswal, Ashish, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. "A Survey on Contrastive Self-Supervised Learning." Technologies 9, no. 1 (2020): 2. http://dx.doi.org/10.3390/technologies9010002.

Abstract:
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and use the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an e…
47

Polceanu, Mihai, Julie Porteous, Alan Lindsay, and Marc Cavazza. "Narrative Plan Generation with Self-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (2021): 5984–92. http://dx.doi.org/10.1609/aaai.v35i7.16747.

Abstract:
Narrative Generation has attracted significant interest as a novel application of Automated Planning techniques. However, the vast amount of narrative material available opens the way to the use of Deep Learning techniques. In this paper, we explore the feasibility of narrative generation through self-supervised learning, using sequence embedding techniques or auto-encoders to produce narrative sequences. We use datasets of well-formed plots generated by a narrative planning approach, using pre-existing, published, narrative planning domains, to train generative models. Our experiments demonst…
48

Zhao, Nanxuan, Zhirong Wu, Rynson W. H. Lau, and Stephen Lin. "Distilling Localization for Self-Supervised Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10990–98. http://dx.doi.org/10.1609/aaai.v35i12.17312.

Abstract:
Recent progress in contrastive learning has revolutionized unsupervised representation learning. Concretely, multiple views (augmentations) from the same image are encouraged to map to close embeddings, while views from different images are pulled apart. In this paper, through visualizing and diagnosing classification errors, we observe that current contrastive models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. This is due to the fact that the view generation process considers pixels in an image uniformly. To address this…
49

Fu, Zheren, Yan Li, Zhendong Mao, Quan Wang, and Yongdong Zhang. "Deep Metric Learning with Self-Supervised Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1370–78. http://dx.doi.org/10.1609/aaai.v35i2.16226.

Abstract:
Deep metric learning aims to learn a deep embedding space, where similar objects are pushed towards together and different objects are repelled against. Existing approaches typically use inter-class characteristics, e.g. class-level information or instance-level similarity, to obtain semantic relevance of data points and get a large margin between different classes in the embedding space. However, the intra-class characteristics, e.g. local manifold structure or relative relationship within the same class, are usually overlooked in the learning process. Hence the data structure cannot be fully…
50

Zeng, Jiaqi, and Pengtao Xie. "Contrastive Self-supervised Learning for Graph Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10824–32. http://dx.doi.org/10.1609/aaai.v35i12.17293.

Abstract:
Graph classification is a widely studied problem and has broad applications. In many real-world problems, the number of labeled graphs available for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting. In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels, then finetune the pretrained encoders on labeled graphs. In the second approach, we deve…