Academic literature on the topic 'Self-supervised learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Self-supervised learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Self-supervised learning"

1

Kim, Taeheon, Jaewon Hur, and Youkyung Han. "Very High-Resolution Satellite Image Registration Based on Self-supervised Deep Learning." Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography 41, no. 4 (2023): 217–25. http://dx.doi.org/10.7848/ksgpc.2023.41.4.217.

2

Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning approach." Korean Institute of Smart Media 12, no. 1 (2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.

Abstract:
Natural Language Processing (NLP) has grown tremendously in recent years. Typically, bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. On the contrary, few studies have focused on translating between spoken and sign languages, especially non-English languages. Prior works on Sign Language Translation (SLT) have shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Ko…
3

Han, Xizhen, Zhengang Jiang, Yuanyuan Liu, Jian Zhao, Qiang Sun, and Jianzhuo Liu. "Self-Supervised Hyperspectral Image Classification Under the BYOL Framework" [in Chinese]. Infrared and Laser Engineering 53, no. 10 (2024): 20240215. https://doi.org/10.3788/irla20240215.

4

Liang, Dan, Haimiao Zhang, and Jun Qiu. "Light-Field Super-Resolution Imaging in the Spatial Domain Based on Self-Supervised Learning" [in Chinese]. Laser & Optoelectronics Progress 61, no. 4 (2024): 0411007. http://dx.doi.org/10.3788/lop231188.

5

Burlacu, Alexandru. "OVERVIEW OF COMPUTER VISION SUPERVISED LEARNING TECHNIQUES FOR LOW-DATA TRAINING." Journal of Social Sciences III (3) (September 1, 2020): 18–28. https://doi.org/10.5281/zenodo.3971950.

Abstract:
In the age of big data and machine learning, the cost of turning data into fuel for algorithms is prohibitively high. Organizations that can train better models with fewer annotation efforts will have a competitive edge. This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning for computer vision algorithms. The paper starts by describing various methods that ease the need for a big labeled dataset, giving some background on supervised, weakly-supervised, and then self-supervised learning in general, and in computer vision…
6

Burlacu, Alexandru. "OVERVIEW OF COMPUTER VISION SUPERVISED LEARNING TECHNIQUES FOR LOW-DATA TRAINING." Journal of Engineering Science XXVII (4) (December 15, 2020): 197–207. https://doi.org/10.5281/zenodo.4298709.

Abstract:
In the age of big data and machine learning, the cost of turning data into fuel for algorithms is prohibitively high. Organizations that can train better models with fewer annotation efforts will have a competitive edge. This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning for computer vision algorithms. The paper starts by describing various methods that ease the need for a big labeled dataset, giving some background on supervised, weakly-supervised, and then self-supervised learning in general, and in computer vision…
7

Zhao, Qingyu, Zixuan Liu, Ehsan Adeli, and Kilian M. Pohl. "Longitudinal self-supervised learning." Medical Image Analysis 71 (July 2021): 102051. http://dx.doi.org/10.1016/j.media.2021.102051.

8

Feng, Fan, Yongsheng Zhang, Jin Zhang, Bing Liu, and Ying Yu. "A Self-Supervised Feature Learning Method for Hyperspectral Images Based on a Hybrid Convolutional Network" [in Chinese]. Acta Optica Sinica 44, no. 18 (2024): 1828007. http://dx.doi.org/10.3788/aos231776.

9

Huang, Junjie, Feng Xu, Liang Luo, and Tianbao Chen. "Three-Dimensional Reconstruction of Sea Waves Based on Masks and Self-Supervised Learning" [in Chinese]. Laser & Optoelectronics Progress 61, no. 14 (2024): 1437008. http://dx.doi.org/10.3788/lop231953.

10

Gao, Xiaoling, Muhammad Izzad Ramli, Marshima Mohd Rosli, Nursuriati Jamil, and Syed Mohd Zahid Syed Zainal Ariffin. "Revisiting self-supervised contrastive learning for imbalanced classification." International Journal of Electrical and Computer Engineering (IJECE) 15, no. 2 (2025): 1949–60. https://doi.org/10.11591/ijece.v15i2.pp1949-1960.

Abstract:
Class imbalance remains a formidable challenge in machine learning, particularly affecting fields that depend on accurate classification across skewed datasets, such as medical imaging and software defect prediction. Traditional approaches often fail to adequately address the underrepresentation of minority classes, leading to models that perform well on majority classes but poorly on critical minority classes. Self-supervised contrastive learning has emerged as a highly promising approach to this issue, enabling the use of unlabeled data to generate robust…

Dissertations / Theses on the topic "Self-supervised learning"

1

Vančo, Timotej. "Self-Supervised Learning in Computer Vision Applications" [in Czech]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442510.

Abstract:
The aim of the diploma thesis is to survey self-supervised learning in computer vision applications, choose a suitable test task with an extensive data set, apply self-supervised methods, and evaluate them. The theoretical part of the work describes methods in computer vision, gives a detailed description of neural and convolutional networks, and provides an extensive explanation and taxonomy of self-supervised methods. The conclusion of the theoretical part is devoted to practical applications of self-supervised methods. The practical part of the diploma thesis…
2

Wang, Zhaoqing. "Self-supervised Visual Representation Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29595.

Abstract:
In general, large-scale annotated data are essential to training deep neural networks in order to achieve better performance in visual feature learning for various computer vision applications. Unfortunately, the amount of annotations is challenging to obtain, requiring a high cost of money and human resources. The dependence on large-scale annotated data has become a crucial bottleneck in developing an advanced intelligence perception system. Self-supervised visual representation learning, a subset of unsupervised learning, has gained popularity because of its ability to avoid the high cost…
3

Zaiem, Mohamed Salah. "Informed Speech Self-supervised Representation Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT009.

Abstract:
Feature learning has been one of the main drivers of progress in machine learning. Self-supervised learning emerged in this context, enabling the use of unlabeled data for better performance on sparsely labeled tasks. The first part of my doctoral work aims to motivate the choices in self-supervised speech learning pipelines that learn unsupervised representations. In this thesis, I first show how a function based on conditional independence can be used…
4

Ermolov, Aleksandr. "Self-supervised Representation Learning in Computer Vision and Reinforcement Learning." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/360781.

Abstract:
This work is devoted to self-supervised representation learning (SSL). We consider both contrastive and non-contrastive methods and present a new loss function for SSL based on feature whitening. Our solution is conceptually simple and competitive with other methods. Self-supervised representations are beneficial for most areas of deep learning, and reinforcement learning is of particular interest because SSL can compensate for the sparsity of the training signal. We present two methods from this area. The first tackles the partial observability providing the agent with a history, represented…
5

Khan, Umair. "Self-supervised deep learning approaches to speaker recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671496.

Abstract:
In speaker recognition, i-vectors have been the state-of-the-art unsupervised technique over the last few years, whereas x-vectors are becoming the state-of-the-art supervised technique these days. Recent advances in Deep Learning (DL) approaches to speaker recognition have improved performance but are constrained by the need for labels for the background data. In practice, labeled background data is not easily accessible, especially when large training data is required. In i-vector based speaker recognition, cosine and Probabilistic Linear Discriminant Analysis (PLDA) are the two basic scoring…
6

Korecki, John Nicholas. "Semi-Supervised Self-Learning on Imbalanced Data Sets." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1686.

Abstract:
Semi-supervised self-learning algorithms have been shown to improve classifier accuracy under a variety of conditions. In this thesis, semi-supervised self-learning using ensembles of random forests and fuzzy c-means clustering similarity was applied to three data sets to show where improvement is possible over random forests alone. Two of the data sets are emulations of large simulations in which the data may be distributed. Additionally, the ratio of majority to minority class examples in the training set was altered to examine the effect of training set bias on performance when applying the…
7

Zhang, Kun. "Supervised and Self-Supervised Learning for Video Object Segmentation in the Compressed Domain." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29361.

Abstract:
Video object segmentation has attracted remarkable attention since it is increasingly critical in real video understanding scenarios. Raw videos are highly redundant, so using a heavy backbone network to extract features from every individual frame may be a waste of time. Also, the motion vectors and residuals in compressed videos provide motion information that can be used directly. Therefore, this thesis discusses semi-supervised video object segmentation methods that work directly on compressed videos. First, we discuss a supervised learning method for semi-supervised…
8

Govindarajan, Hariprasath. "Self-Supervised Representation Learning for Content Based Image Retrieval." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166223.

Abstract:
Automotive technologies and fully autonomous driving have seen a tremendous growth in recent times and have benefitted from extensive deep learning research. State-of-the-art deep learning methods are largely supervised and require labelled data for training. However, the annotation process for image data is time-consuming and costly in terms of human effort. It is of interest to find informative samples for labelling by Content Based Image Retrieval (CBIR). Generally, a CBIR method takes a query image as input and returns a set of images that are semantically similar to the query image. The…
9

Zangeneh, Kamali Fereidoon. "Self-supervised learning of camera egomotion using epipolar geometry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286286.

Abstract:
Visual odometry is one of the prevalent techniques for positioning autonomous agents equipped with cameras. Several recent works in this field have in various ways attempted to exploit the capabilities of deep neural networks to improve the performance of visual odometry solutions. One such approach is an end-to-end learning-based solution that infers the egomotion of the camera from a sequence of input images. The state-of-the-art end-to-end solutions employ a common self-supervised training strategy that minimises a notion of photometric error formed by the view synthesis of the…
10

Marsal, Rémi. "Motion analysis in videos with deep self-supervised learning." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS137.

Abstract:
This thesis explores self-supervised learning methods based on motion in videos, in order to reduce dependence on costly annotated datasets for the tasks of optical flow and monocular depth estimation. In the absence of ground truth, these two tasks are mainly learned by minimising an image reconstruction error under the assumption that brightness constancy holds. In practice, owing to brightness variations caused by moving shadows or non-Lambertian surfaces, this…

Books on the topic "Self-supervised learning"

1

Munro, Paul. Self-supervised learning of concepts by single units and "weakly local" representations. School of Library and Information Science, University of Pittsburgh, 1988.

2

Van Dijk, Tom. Self-Supervised Learning for Visual Obstacle Avoidance. TU Delft OPEN Publishing, 2022. http://dx.doi.org/10.34641/mg.19.

3

Sawarkar, Kunal, and Dheeraj Arremsetty. Deep Learning with PyTorch Lightning: Build and Train High-Performance Artificial Intelligence and Self-Supervised Models Using Python. Packt Publishing, Limited, 2021.


Book chapters on the topic "Self-supervised learning"

1

Nedelkoski, Sasho, Jasmin Bogatinovski, Alexander Acker, Jorge Cardoso, and Odej Kao. "Self-supervised Log Parsing." In Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67667-4_8.

2

Acar, Mert, Tolga Çukur, and İlkay Öksüz. "Self-supervised Dynamic MRI Reconstruction." In Machine Learning for Medical Image Reconstruction. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88552-6_4.

3

Yang, Shaojie, Hao Chen, Jianping Huang, Yong Yan, Jiewei Chen, and Ao Xiong. "Split Learning Based on Self-supervised Learning." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-6901-0_11.

4

Jamaludin, Amir, Timor Kadir, and Andrew Zisserman. "Self-supervised Learning for Spinal MRIs." In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67558-9_34.

5

Jawed, Shayan, Josif Grabocka, and Lars Schmidt-Thieme. "Self-supervised Learning for Semi-supervised Time Series Classification." In Advances in Knowledge Discovery and Data Mining. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_39.

6

Cao, Yun-Hao, Peiqin Sun, Yechang Huang, Jianxin Wu, and Shuchang Zhou. "Synergistic Self-supervised and Quantization Learning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20056-4_34.

7

Moon, WonJun, Ji-Hwan Kim, and Jae-Pil Heo. "Tailoring Self-Supervision for Supervised Learning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19806-9_20.

8

Qin, Wenkang, Shan Jiang, and Lin Luo. "Pathological Image Contrastive Self-supervised Learning." In Resource-Efficient Medical Image Analysis. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16876-5_9.

9

Wang, Yu, Wei Jin, and Tyler Derr. "Graph Neural Networks: Self-supervised Learning." In Graph Neural Networks: Foundations, Frontiers, and Applications. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2_18.

10

Tran, Manuel, Sophia J. Wagner, Melanie Boxberg, and Tingying Peng. "S5CL: Unifying Fully-Supervised, Self-supervised, and Semi-supervised Learning Through Hierarchical Contrastive Learning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16434-7_10.


Conference papers on the topic "Self-supervised learning"

1

Ernst, Markus R., Francisco M. López, Arthur Aubret, Roland W. Fleming, and Jochen Triesch. "Self-Supervised Learning of Color Constancy." In 2024 IEEE International Conference on Development and Learning (ICDL). IEEE, 2024. http://dx.doi.org/10.1109/icdl61372.2024.10644375.

2

Kalapos, András, and Bálint Gyires-Tóth. "Whitening Consistently Improves Self-Supervised Learning." In 2024 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2024. https://doi.org/10.1109/icmla61862.2024.00066.

3

Diskin, Tzvi, and Ami Wiesel. "Self-Supervised Learning for Covariance Estimation." In 2024 32nd European Signal Processing Conference (EUSIPCO). IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715219.

4

Zhao, Qing, Jielei Chu, Zhaoyu Li, Hua Yu, and Tianrui Li. "FedTMatch: Self-Supervised Federated Semi-Supervised Learning with Dynamic Threshold." In 2024 4th International Conference on Industrial Automation, Robotics and Control Engineering (IARCE). IEEE, 2024. https://doi.org/10.1109/iarce64300.2024.00061.

5

Liu, Yifei, Weizhi Song, You Zhou, Bo Xiong, and Xun Cao. "Hybrid Scanning Lensless Imaging by Diffractive Neural Field." In Computational Optical Sensing and Imaging. Optica Publishing Group, 2024. http://dx.doi.org/10.1364/cosi.2024.cf1a.4.

6

Choudhary, Shubham, Paul Masset, and Demba Ba. "Self Supervised Dictionary Learning Using Kernel Matching." In 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2024. http://dx.doi.org/10.1109/mlsp58920.2024.10734736.

7

An, Yuexuan, Hui Xue, Xingyu Zhao, and Lu Zhang. "Conditional Self-Supervised Learning for Few-Shot Classification." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/295.

Abstract:
How to learn a transferable feature representation from limited examples is a key challenge for few-shot classification. Self-supervision as an auxiliary task to the main supervised few-shot task is considered a conceivable way to solve the problem, since self-supervision can provide additional structural information easily ignored by the main task. However, learning a good representation with traditional self-supervised methods usually depends on large training samples. In few-shot scenarios, due to the lack of sufficient samples, these self-supervised methods might learn a biased representation…
8

Beyer, Lucas, Xiaohua Zhai, Avital Oliver, and Alexander Kolesnikov. "S4L: Self-Supervised Semi-Supervised Learning." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00156.

9

Kniaz, Vladimir Vladimirovich, Vladimir Alexandrovich Knyaz, Petr Vladislavovich Moshkantsev, and Sergey Melnikov. "DINONAT: Exploring Self-Supervised Training with Neighbourhood Attention Transformers." In 33rd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2023. http://dx.doi.org/10.20948/graphicon-2023-427-435.

Abstract:
Data-driven methods have achieved great progress in a wide variety of machine vision and data analysis applications due to new possibilities for collecting, annotating, and processing huge amounts of data, with supervised learning having the most impressive results. Unfortunately, the extremely time-consuming process of data annotation restricts the wide applicability of deep learning in many applications. Several approaches, such as unsupervised learning or weakly supervised learning, have been proposed recently to overcome this problem. Nowadays self-supervised learning demonstrates state-of-the-art performance…
10

Fuadi, Erland Hilman, Aristo Renaldo Ruslim, Putu Wahyu Kusuma Wardhana, and Novanto Yudistira. "Gated Self-supervised Learning for Improving Supervised Learning." In 2024 IEEE Conference on Artificial Intelligence (CAI). IEEE, 2024. http://dx.doi.org/10.1109/cai59869.2024.00120.


Reports on the topic "Self-supervised learning"

1

Second-Order Analysis of Beam-Columns by Machine Learning-Based Structural Analysis through Physics-Informed Neural Networks. The Hong Kong Institute of Steel Construction, 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.10.

Abstract:
The second-order analysis of slender steel members can be challenging, especially when large deflection is involved. This paper proposes a novel machine learning-based structural analysis (MLSA) method for second-order analysis of beam-columns, which could be a promising alternative to the prevailing solutions using over-simplified analytical equations or traditional finite-element-based methods. The effectiveness of a conventional machine learning method heavily depends on both the quality and the quantity of the provided data. However, such data are typically scarce and expensive…