Academic literature on the topic 'Cross-Domain Few-Shot Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cross-Domain Few-Shot Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
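
For instance, generating an APA and a Chicago reference from the same metadata differs mainly in how the author list, year, volume, and issue are arranged. Below is a minimal sketch of such a formatter (an illustration only, not this site's actual implementation; the dictionary fields and the simplified author strings are assumptions):

```python
# Minimal sketch of a citation formatter; illustrative only, not this
# site's implementation. Field names and author formatting are assumed.

def format_apa(m):
    # APA 7 pattern: Author (Year). Title. Journal, Volume(Issue), Pages. DOI
    return (f"{m['authors_apa']} ({m['year']}). {m['title']}. "
            f"{m['journal']}, {m['volume']}({m['issue']}), {m['pages']}. {m['doi']}")

def format_chicago(m):
    # Chicago bibliography pattern: Author. "Title." Journal Vol, no. Issue (Year): Pages. DOI.
    return (f"{m['authors_chicago']}. \"{m['title']}.\" {m['journal']} {m['volume']}, "
            f"no. {m['issue']} ({m['year']}): {m['pages']}. {m['doi']}.")

entry = {  # metadata of one of the journal articles listed below
    "authors_apa": "Zhang, Q., Jiang, Y., & Wen, Z.",
    "authors_chicago": "Zhang, Qi, Yingluo Jiang, and Zhijie Wen",
    "year": 2022,
    "title": "TACDFSL: Task adaptive cross domain few-shot learning",
    "journal": "Symmetry",
    "volume": 14, "issue": 6, "pages": "1097",
    "doi": "https://doi.org/10.3390/sym14061097",
}
print(format_apa(entry))
print(format_chicago(entry))
```

A real generator also handles italics, name-inversion rules, and missing fields (for example, book chapters have no volume or issue), which this sketch omits.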

Journal articles on the topic "Cross-Domain Few-Shot Learning"

1

Hassani, Kaveh. "Cross-Domain Few-Shot Graph Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6856–64. https://doi.org/10.1609/aaai.v36i6.20642.

Abstract:
We study the problem of few-shot graph classification across domains with nonequivalent feature spaces by introducing three new cross-domain benchmarks constructed from publicly available datasets. We also propose an attention-based graph encoder that uses three congruent views of graphs, one contextual and two topological views, to learn representations of task-specific information for fast adaptation, and task-agnostic information for knowledge transfer. We run exhaustive experiments to evaluate the performance of contrastive and meta-learning strategies. We show that when coupled with metri…
2

Zhang, Qi, Yingluo Jiang, and Zhijie Wen. "TACDFSL: Task Adaptive Cross Domain Few-Shot Learning." Symmetry 14, no. 6 (2022): 1097. https://doi.org/10.3390/sym14061097.

Abstract:
Cross Domain Few-Shot Learning (CDFSL) has attracted the attention of many scholars since it is closer to reality. The domain shift between the source domain and the target domain is a crucial problem for CDFSL. The essence of domain shift is the marginal distribution difference between two domains which is implicit and unknown. So the empirical marginal distribution measurement is proposed, that is, WDMDS (Wasserstein Distance for Measuring Domain Shift) and MMDMDS (Maximum Mean Discrepancy for Measuring Domain Shift). Besides this, pre-training a feature extractor and fine-tuning a classifie…
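
The abstract above characterizes domain shift as the (implicit) difference between the marginal feature distributions of the source and target domains, and proposes Wasserstein distance and maximum mean discrepancy as empirical measurements. As a rough, generic illustration of the MMD side of that idea (not the paper's WDMDS/MMDMDS code; the Gaussian kernel and bandwidth are assumptions):

```python
import numpy as np

def mmd2_gaussian(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples
    X (n, d) and Y (m, d) under a Gaussian kernel. Generic sketch,
    not the paper's WDMDS/MMDMDS implementation."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then Gaussian kernel values.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 16))  # source-domain features
tgt = rng.normal(0.5, 1.0, size=(200, 16))  # mean-shifted target features
print(mmd2_gaussian(src, tgt))              # larger value => larger shift
```

A larger estimate indicates a larger marginal-distribution gap between the two feature sets.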
3

Paeedeh, Naeem, Mahardhika Pratama, Muhammad Anwar Ma’sum, Wolfgang Mayer, Zehong Cao, and Ryszard Kowalczyk. "Cross-domain few-shot learning via adaptive transformer networks." Knowledge-Based Systems 288 (March 2024): 111458. https://doi.org/10.1016/j.knosys.2024.111458.

4

Kang, Suhyun, Jungwon Park, Wonseok Lee, and Wonjong Rhee. "Task-Specific Preconditioner for Cross-Domain Few-Shot Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 17760–69. https://doi.org/10.1609/aaai.v39i17.33953.

Abstract:
Cross-Domain Few-Shot Learning (CDFSL) methods typically parameterize models with task-agnostic and task-specific parameters. To adapt task-specific parameters, recent approaches have utilized fixed optimization strategies, despite their potential sub-optimality across varying domains or target tasks. To address this issue, we propose a novel adaptation mechanism called Task-Specific Preconditioned gradient descent (TSP). Our method first meta-learns Domain-Specific Preconditioners (DSPs) that capture the characteristics of each meta-training domain, which are then linearly combined using task…
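
The abstract above describes meta-learning Domain-Specific Preconditioners and combining them linearly into a task-specific preconditioner for gradient descent. A generic preconditioned update of that shape (a sketch of the idea only, not the authors' TSP code; the stand-in DSP matrices and combination weights are hypothetical):

```python
import numpy as np

def preconditioned_step(theta, grad, dsps, weights, lr=0.01):
    """One step of theta <- theta - lr * P @ grad, where P is a convex
    combination of domain-specific preconditioners (DSPs).
    Illustrative sketch, not the paper's TSP implementation."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize combination weights
    P = sum(wi * D for wi, D in zip(w, dsps))    # task-specific preconditioner
    return theta - lr * P @ grad

dim = 4
dsps = [np.eye(dim) * s for s in (0.5, 1.0, 2.0)]  # stand-in "meta-learned" DSPs
theta = np.zeros(dim)
grad = np.ones(dim)
print(preconditioned_step(theta, grad, dsps, weights=[0.2, 0.3, 0.5]))
```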
5

Wawer, Aleksander. "Few-Shot Methods for Aspect-Level Sentiment Analysis." Information 15, no. 11 (2024): 664. https://doi.org/10.3390/info15110664.

Abstract:
In this paper, we explore the approaches to the problem of cross-domain few-shot classification of sentiment aspects. By cross-domain few-shot, we mean a setting where the model is trained on large data in one domain (for example, hotel reviews) and is intended to perform on another (for example, restaurant reviews) with only a few labelled examples in the target domain. We start with pre-trained monolingual language models. Using the Polish language dataset AspectEmo, we compare model training using standard gradient-based learning to a zero-shot approach and two dedicated few-shot methods: P…
6

Yuan, Wang, Zhizhong Zhang, Cong Wang, Haichuan Song, Yuan Xie, and Lizhuang Ma. "Task-Level Self-Supervision for Cross-Domain Few-Shot Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 3215–23. https://doi.org/10.1609/aaai.v36i3.20230.

Abstract:
Learning with limited labeled data is a long-standing problem. Among various solutions, episodic training progressively classifies a series of few-shot tasks and thereby is assumed to be beneficial for improving the model’s generalization ability. However, recent studies show that it is even inferior to the baseline model when facing domain shift between base and novel classes. To tackle this problem, we propose a domain-independent task-level self-supervised (TL-SS) method for cross-domain few-shot learning. TL-SS strategy promotes the general idea of label-based instance-level supervision to…
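
The episodic training that the abstract above refers to repeatedly samples small N-way K-shot classification tasks from the training data. A minimal sketch of such an episode sampler (generic illustration, not the TL-SS method):

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, q_query=15, rng=None):
    """Sample one N-way K-shot episode: a support set for adaptation and
    a query set for evaluation. Generic sketch, not the TL-SS method."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(features[idx[:k_shot]])
        query.append(features[idx[k_shot:k_shot + q_query]])
    return np.stack(support), np.stack(query)  # (N, K, d) and (N, Q, d)

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))           # toy embeddings
labs = rng.integers(0, 20, size=1000)         # 20 toy classes
s, q = sample_episode(feats, labs, rng=rng)
print(s.shape, q.shape)                       # (5, 1, 64) (5, 15, 64)
```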
7

Wu, Jiamin, Xin Liu, Xiaotian Yin, Tianzhu Zhang, and Yongdong Zhang. "Task-Adaptive Prompted Transformer for Cross-Domain Few-Shot Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6012–20. https://doi.org/10.1609/aaai.v38i6.28416.

Abstract:
Cross-Domain Few-Shot Learning (CD-FSL) aims at recognizing samples in novel classes from unseen domains that are vastly different from training classes, with few labeled samples. However, the large domain gap between training and novel classes makes previous FSL methods perform poorly. To address this issue, we propose MetaPrompt, a Task-adaptive Prompted Transformer model for CD-FSL, by jointly exploiting prompt learning and the parameter generation framework. The proposed MetaPrompt enjoys several merits. First, a task-conditioned prompt generator is established upon attention mechanisms. I…
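
Prompt learning, which the abstract above builds on, amounts to prepending learned prompt vectors to the token sequence a transformer consumes; MetaPrompt generates those prompts per task. The basic mechanics (a generic sketch, not the paper's task-conditioned generator):

```python
import numpy as np

def prepend_prompts(tokens, prompts):
    """Prepend learned prompt vectors to a token-embedding sequence.
    tokens: (seq_len, d); prompts: (n_prompts, d). Generic prompt-learning
    mechanics, not the paper's prompt generator."""
    return np.concatenate([prompts, tokens], axis=0)  # (n_prompts + seq_len, d)

d = 32
patch_tokens = np.random.randn(49, d)   # e.g., 7x7 grid of image-patch embeddings
task_prompts = np.random.randn(4, d)    # hypothetical task-conditioned prompts
print(prepend_prompts(patch_tokens, task_prompts).shape)  # (53, 32)
```

In actual prompt learning the prompt vectors are trained (or, here, generated per task) while the backbone can stay frozen.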
8

Li, Xueying, Zihang He, Lingyan Zhang, Shaojun Guo, Bin Hu, and Kehua Guo. "CDCNet: Cross-domain few-shot learning with adaptive representation enhancement." Pattern Recognition 162 (June 2025): 111382. https://doi.org/10.1016/j.patcog.2025.111382.

9

Cui, Xiaodong, Zhuofan He, Yangtao Xue, Keke Tang, Peican Zhu, and Jing Han. "Cross-Domain Contrastive Learning-Based Few-Shot Underwater Acoustic Target Recognition." Journal of Marine Science and Engineering 12, no. 2 (2024): 264. https://doi.org/10.3390/jmse12020264.

Abstract:
Underwater Acoustic Target Recognition (UATR) plays a crucial role in underwater detection devices. However, due to the difficulty and high cost of collecting data in the underwater environment, UATR still faces the problem of small datasets. Few-shot learning (FSL) addresses this challenge through techniques such as Siamese networks and prototypical networks. However, it also suffers from the issue of overfitting, which leads to catastrophic forgetting and performance degradation. Current underwater FSL methods primarily focus on mining similar information within sample pairs, ignoring the un…
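
Prototypical networks, mentioned in the abstract above as a standard few-shot technique, classify a query by the nearest class prototype, the mean of each class's support embeddings. A generic inference sketch (not the paper's cross-domain contrastive method):

```python
import numpy as np

def prototypical_predict(support, query):
    """Nearest-prototype classification: prototypes are per-class means of
    support embeddings. support: (N, K, d); query: (M, d). Generic
    prototypical-network inference, not the paper's method."""
    prototypes = support.mean(axis=1)                            # (N, d)
    d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                                     # class per query

rng = np.random.default_rng(0)
support = rng.normal(size=(5, 3, 8)) + np.arange(5)[:, None, None]  # 5-way 3-shot
query = support.mean(axis=1) + 0.1 * rng.normal(size=(5, 8))        # near each prototype
print(prototypical_predict(support, query))                         # [0 1 2 3 4]
```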
10

Du, Yandong, Feng Lin, Tao Peng, Gong Xun, and Wang Jun. "Meta-transfer learning in cross-domain image classification with few-shot learning." Journal of Image and Graphics 28, no. 9 (2023): 2899–912. https://doi.org/10.11834/jig.220664.


Book chapters on the topic "Cross-Domain Few-Shot Learning"

1

Guan, Jiechao, Manli Zhang, and Zhiwu Lu. "Large-Scale Cross-Domain Few-Shot Learning." In Computer Vision – ACCV 2020. Springer International Publishing, 2021. https://doi.org/10.1007/978-3-030-69535-4_29.

2

Guo, Yunhui, Noel C. Codella, Leonid Karlinsky, et al. "A Broader Study of Cross-Domain Few-Shot Learning." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-58583-9_8.

3

Chen, Wentao, Zhang Zhang, Wei Wang, Liang Wang, Zilei Wang, and Tieniu Tan. "Cross-Domain Cross-Set Few-Shot Learning via Learning Compact and Aligned Representations." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. https://doi.org/10.1007/978-3-031-20044-1_22.

4

Hatano, Masashi, Ryo Hachiuma, Ryo Fujii, and Hideo Saito. "Multimodal Cross-Domain Few-Shot Learning for Egocentric Action Recognition." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-73414-4_11.

5

Wang, Hongyu, Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael Mayo. "A Comparison of Machine Learning Methods for Cross-Domain Few-Shot Learning." In AI 2020: Advances in Artificial Intelligence. Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-64984-5_35.

6

Ma, Yixiao, and Fanzhang Li. "Task-Aware Adversarial Feature Perturbation for Cross-Domain Few-Shot Learning." In Artificial Neural Networks and Machine Learning – ICANN 2023. Springer Nature Switzerland, 2023. https://doi.org/10.1007/978-3-031-44213-1_47.

7

Xiang, Ziwei, Luming Chen, Kai Lei, and Xu-Yao Zhang. "Cross-Domain Few-Shot Learning with Equiangular Embedding and Dynamic Adversarial Augmentation." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-6582-2_7.

8

Aboutahoun, Dina, Rami Zewail, Keiji Kimura, and Mostafa I. Soliman. "Cross-Domain Few-Shot Sparse-Quantization Aware Learning for Lymphoblast Detection in Blood Smear Images." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. https://doi.org/10.1007/978-3-031-47665-5_18.

9

Saha, Punyajoy, Divyanshu Sheth, Kushal Kedia, Binny Mathew, and Animesh Mukherjee. "Rationale-Guided Few-Shot Classification to Detect Abusive Language." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. https://doi.org/10.3233/faia230497.

Abstract:
Abusive language is a concerning problem in online social media. Past research on detecting abusive language covers different platforms, languages, demographies, etc. However, models trained using these datasets do not perform well in cross-domain evaluation settings. To overcome this, a common strategy is to use a few samples from the target domain to train models to get better performance in that domain (cross-domain few-shot training). However, this might cause the models to overfit the artefacts of those samples. A compelling solution could be to guide the models toward rationales, i.e., s…
10

Zhang, Yafeng, Zilan Yu, Yuang Huang, and Jing Tang. "CLLMFS: A Contrastive Learning Enhanced Large Language Model Framework for Few-Shot Named Entity Recognition." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. https://doi.org/10.3233/faia240714.

Abstract:
Few-shot Named Entity Recognition (NER), the task of identifying named entities with only a limited amount of labeled data, has gained increasing significance in natural language processing. While existing methodologies have shown some effectiveness, such as enriching label semantics through various prompting modes or employing metric learning techniques, their performance exhibits limited robustness across diverse domains due to the lack of rich knowledge in their pre-trained models. To address this issue, we propose CLLMFS, a Contrastive Learning enhanced Large Language Model (LLM) Framework…
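
The contrastive learning that CLLMFS's abstract refers to is commonly implemented with an InfoNCE-style objective, where each anchor embedding must match its own positive against the rest of the batch. A generic sketch of that loss (an illustration of the general technique, not the paper's framework):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss over a batch of (anchor, positive) embedding
    pairs; off-diagonal pairs act as negatives. Illustration only, not
    the CLLMFS objective."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 32))                      # anchor embeddings
z_pos = z + 0.05 * rng.normal(size=(16, 32))       # slightly perturbed positives
print(info_nce_loss(z, z_pos))                     # low loss for well-aligned pairs
```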

Conference papers on the topic "Cross-Domain Few-Shot Learning"

1

Balakrishnan, T. Suresh, Gururama Senthilvel P, U. Samson Ebenezar, L. Karthikeyan, and Kishan B. S. "Exploring Few-Shot Learning to Enhance NLP's Cross-Domain Capabilities." In 2025 International Conference on Computing for Sustainability and Intelligent Future (COMP-SIF). IEEE, 2025. https://doi.org/10.1109/comp-sif65618.2025.10969937.

2

Zou, Yixiong, Yicong Liu, Yiman Hu, Yuhua Li, and Ruixuan Li. "Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. https://doi.org/10.1109/cvpr52733.2024.02225.

3

Shi, Jiaji, Yuan Liu, Wen Yi, and Xiaochen Lu. "Semantic-Guided Cross-Modal Feature Alignment for Cross-Domain Few-Shot Hyperspectral Image Classification." In 2025 6th International Conference on Computer Vision, Image and Deep Learning (CVIDL). IEEE, 2025. https://doi.org/10.1109/cvidl65390.2025.11085853.

4

Gan, Yaozong, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, and Miki Haseyama. "Cross-Domain Few-Shot In-Context Learning For Enhancing Traffic Sign Recognition." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. https://doi.org/10.1109/icip51287.2024.10647129.

5

Yue, Ling, Lin Feng, Qiuping Shuai, Lingxiao Xu, and Zihao Li. "Diversified Task Augmentation with Redundancy Reduction for Cross-Domain Few-Shot Learning." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. https://doi.org/10.1109/icip51287.2024.10647969.

6

Shang, Zhenduo, Xiyao Liu, Xing Xie, and Zhi Han. "Collaborative Teaching with Attention Distillation for Multiple Cross-Domain Few-Shot Learning." In 2024 IEEE 14th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2024. https://doi.org/10.1109/cyber63482.2024.10749467.

7

Zhang, Haochen, Wenxiu Diao, Lingwu Meng, Zhongjie Qian, Zhixin Zhao, and Jingzhou Chen. "Cross-Domain Few-Shot Learning with Label Propagation for Hyperspectral Image Classification." In 2024 IEEE International Conference on Progress in Informatics and Computing (PIC). IEEE, 2024. https://doi.org/10.1109/pic62406.2024.10892799.

8

Aimen, Aroof, Arsh Verma, Makarand Tapaswi, and Narayanan C. Krishnan. "Generalized Cross-Domain Multi-Label Few-Shot Learning for Chest X-Rays." In 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI). IEEE, 2025. https://doi.org/10.1109/isbi60581.2025.10980999.

9

Shuai, Qiuping, Lin Feng, Ling Yue, Zihao Li, and Lingxiao Xu. "De-Redundancy Distillation And Feature Shift Correction For Cross-Domain Few-Shot Learning." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. https://doi.org/10.1109/ijcnn60899.2024.10651484.

10

Lv, Guohua, Xiang Gao, Qiang Chi, Guixin Zhao, Aimei Dong, and Wei Li. "SSFSL: Self-Supervised and Few-Shot Learning for Cross-Domain Hyperspectral Image Classification." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889995.
