
Journal articles on the topic 'Multi-labels'


Consult the top 50 journal articles for your research on the topic 'Multi-labels.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Seongmin, Hyunsik Jeon, and U. Kang. "Multi-EPL: Accurate multi-source domain adaptation." PLOS ONE 16, no. 8 (2021): e0255754. http://dx.doi.org/10.1371/journal.pone.0255754.

Abstract:
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels. MSDA is a crucial problem applicable to many practical cases where labels for the target data are unavailable due to privacy issues. Existing MSDA frameworks are limited since they align data without considering labels of the features of each domain. They also do not fully utilize the target data without labels and rely on limited feature
2

Hao, Pingting, Kunpeng Liu, and Wanfu Gao. "Double-Layer Hybrid-Label Identification Feature Selection for Multi-View Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12295–303. http://dx.doi.org/10.1609/aaai.v38i11.29120.

Abstract:
Multi-view multi-label feature selection aims to select informative features where the data are collected from multiple sources with multiple interdependent class labels. For fully exploiting multi-view information, most prior works mainly focus on the common part in the ideal circumstance. However, the inconsistent part hidden in each view, including noises and specific elements, may affect the quality of mapping between labels and feature representations. Meanwhile, ignoring the specific part might lead to a suboptimal result, as each label is supposed to possess specific characteristics of
3

Wang, Ziquan, Mingxuan Xia, Xiangyu Ren, et al. "Multi-Instance Multi-Label Classification from Crowdsourced Labels." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21438–46. https://doi.org/10.1609/aaai.v39i20.35445.

Abstract:
Multi-instance multi-label classification (MIML) is a fundamental task in machine learning, where each data sample comprises a bag containing several instances and multiple binary labels. Despite its wide applications, the data collection process involves matching multiple instances and labels, typically resulting in high annotation costs. In this paper, we study a novel yet practical crowdsourced multi-instance multi-label classification (CMIML) setup, where labels are collected from multiple crowd sources. To address this problem, we first propose a novel data generation process for CMIML, i
4

Sun, Kai-Wei, Chong Ho Lee, and Xiao-Feng Xie. "MLHN: A Hypernetwork Model for Multi-Label Classification." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 06 (2015): 1550020. http://dx.doi.org/10.1142/s0218001415500202.

Abstract:
Multi-label classification has attracted significant attention in machine learning. In multi-label classification, exploiting correlations among labels is an essential but nontrivial task. First, labels may be correlated to various degrees. Second, scalability may suffer from the large number of labels, because the number of combinations among labels grows exponentially as the number of labels increases. In this paper, a multi-label hypernetwork (MLHN) is proposed to deal with these problems. By extending the traditional hypernetwork model, MLHN can represent arbitrary order correlations
5

Guo, Hai-Feng, Lixin Han, Shoubao Su, and Zhou-Bao Sun. "Deep Multi-Instance Multi-Label Learning for Image Annotation." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 03 (2017): 1859005. http://dx.doi.org/10.1142/s021800141859005x.

Abstract:
Multi-Instance Multi-Label learning (MIML) is a popular framework for supervised classification where an example is described by multiple instances and associated with multiple labels. Previous MIML approaches have focused on predicting labels for instances. The idea of tackling the problem is to identify its equivalence in the traditional supervised learning framework. Motivated by the recent advancement in deep learning, in this paper, we still consider the problem of predicting labels and attempt to model deep learning in MIML learning framework. The proposed approach enables us to train de
6

Xing, Yuying, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang, and Maozu Guo. "Multi-View Multi-Instance Multi-Label Learning Based on Collaborative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5508–15. http://dx.doi.org/10.1609/aaai.v33i01.33015508.

Abstract:
Multi-view Multi-instance Multi-label Learning (M3L) deals with complex objects encompassing diverse instances, represented with different feature views, and annotated with multiple labels. Existing M3L solutions only partially explore the inter or intra relations between objects (or bags), instances, and labels, which can convey important contextual information for M3L. As such, they may have a compromised performance. In this paper, we propose a collaborative matrix factorization based solution called M3Lcmf. M3Lcmf first uses a heterogeneous network composed of nodes of bags, instanc
7

Chen, Tianshui, Tao Pu, Hefeng Wu, Yuan Xie, and Liang Lin. "Structured Semantic Transfer for Multi-Label Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 339–46. http://dx.doi.org/10.1609/aaai.v36i1.19910.

Abstract:
Multi-label image recognition is a fundamental yet practical task because real-world images inherently possess multiple semantic labels. However, it is difficult to collect large-scale multi-label annotations due to the complexity of both the input images and output label spaces. To reduce the annotation cost, we propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels, i.e., merely some labels are known while other labels are missing (also called unknown labels) per image. The framework consists of two complementary transf
8

Li, Lei, Yuqi Chu, Guanfeng Liu, and Xindong Wu. "Multi-Objective Optimization-Based Networked Multi-Label Active Learning." Journal of Database Management 30, no. 2 (2019): 1–26. http://dx.doi.org/10.4018/jdm.2019040101.

Abstract:
Along with the fast development of network applications, network research has attracted more and more attention, where one of the most important research directions is networked multi-label classification. Based on it, unknown labels of nodes can be inferred by known labels of nodes in the neighborhood. As both the scale and complexity of networks are increasing, the previously neglected problems of system overhead are becoming more and more serious. In this article, a novel multi-objective optimization-based networked multi-label seed node selection algorithm (named MOSS) is proposed to i
9

Tan, Z. M., J. Y. Liu, Q. Li, D. Y. Wang, and C. Y. Wang. "An approach to error label discrimination based on joint clustering." Journal of Physics: Conference Series 2294, no. 1 (2022): 012018. http://dx.doi.org/10.1088/1742-6596/2294/1/012018.

Abstract:
Inaccurate multi-label learning aims at dealing with multi-label data with wrong labels. Error labels in data sets usually result in cognitive bias for objects. Discriminating and correcting wrong labels is a significant issue in multi-label learning. In this paper, a joint discrimination model based on fuzzy C-means (FCM) and possible C-means (PCM) is proposed to find wrong labels in data sets. In this model, the connection between samples and their labels is analyzed based on the assumption of consistency between samples and their labels. Samples and labels are clustered by considerin
10

Huang, Jun, Linchuan Xu, Kun Qian, Jing Wang, and Kenji Yamanishi. "Multi-label learning with missing and completely unobserved labels." Data Mining and Knowledge Discovery 35, no. 3 (2021): 1061–86. http://dx.doi.org/10.1007/s10618-021-00743-x.

Abstract:
Multi-label learning deals with data examples which are associated with multiple class labels simultaneously. Despite the success of existing approaches to multi-label learning, there is still a problem neglected by researchers, i.e., not only are some of the values of observed labels missing, but also some of the labels are completely unobserved for the training data. We refer to the problem as multi-label learning with missing and completely unobserved labels, and argue that it is necessary to discover these completely unobserved labels in order to mine useful knowledge and make a de
11

Chen, Ze-Sen, Xuan Wu, Qing-Guo Chen, Yao Hu, and Min-Ling Zhang. "Multi-View Partial Multi-Label Learning with Graph-Based Disambiguation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3553–60. http://dx.doi.org/10.1609/aaai.v34i04.5761.

Abstract:
In multi-view multi-label learning (MVML), each training example is represented by different feature vectors and associated with multiple labels simultaneously. Nonetheless, the labeling quality of training examples tends to be affected by annotation noise. In this paper, the problem of multi-view partial multi-label learning (MVPML) is studied, where the set of associated labels is assumed to be candidate ones and only partially valid. To solve the MVPML problem, a two-stage graph-based disambiguation approach is proposed. Firstly, the ground-truth labels of each training example are esti
12

Huang, Jun, Haowei Rui, Guorong Li, Xiwen Qu, Tao Tao, and Xiao Zheng. "Multi-Label Learning With Hidden Labels." IEEE Access 8 (2020): 29667–76. http://dx.doi.org/10.1109/access.2020.2972599.

13

Liu, Xinda, and Lili Wang. "Multi-granularity sequence generation for hierarchical image classification." Computational Visual Media 10, no. 2 (2024): 243–60. http://dx.doi.org/10.1007/s41095-022-0332-2.

Abstract:
Hierarchical multi-granularity image classification is a challenging task that aims to tag each given image with multiple granularity labels simultaneously. Existing methods tend to overlook that different image regions contribute differently to label prediction at different granularities, and also insufficiently consider relationships between the hierarchical multi-granularity labels. We introduce a sequence-to-sequence mechanism to overcome these two problems and propose a multi-granularity sequence generation (MGSG) approach for the hierarchical multi-granularity image classificatio
14

Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.

Abstract:
Partial multi-label learning (PML) deals with problems where each instance is assigned with a candidate label set, which contains multiple relevant labels and some noisy labels. Recent studies usually solve PML problems with the disambiguation strategy, which recovers ground-truth labels from the candidate label set by simply assuming that the noisy labels are generated randomly. In real applications, however, noisy labels are usually caused by some ambiguous contents of the example. Based on this observation, we propose a partial multi-label learning approach to simultaneously recover the gro
15

Peng, Cheng, Ke Chen, Lidan Shou, and Gang Chen. "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14581–89. http://dx.doi.org/10.1609/aaai.v38i13.29374.

Abstract:
Multi-modal multi-label emotion recognition (MMER) aims to identify relevant emotions from multiple modalities. The challenge of MMER is how to effectively capture discriminative features for multiple labels from heterogeneous data. Recent studies are mainly devoted to exploring various fusion strategies to integrate multi-modal information into a unified representation for all labels. However, such a learning scheme not only overlooks the specificity of each modality but also fails to capture individual discriminative features for different labels. Moreover, dependencies of labels and modalit
16

Zhang, Ping, Wanfu Gao, Juncheng Hu, and Yonghao Li. "Multi-Label Feature Selection Based on High-Order Label Correlation Assumption." Entropy 22, no. 7 (2020): 797. http://dx.doi.org/10.3390/e22070797.

Abstract:
Multi-label data often involve features with high dimensionality and complicated label correlations, resulting in a great challenge for multi-label learning. Feature selection plays an important role in multi-label learning to address multi-label data. Exploring label correlations is crucial for multi-label feature selection. Previous information-theoretical-based methods employ the strategy of cumulative summation approximation to evaluate candidate features, which merely considers low-order label correlations. In fact, there exist high-order label correlations in label set, labels naturally
17

Li, Anqi, and Lin Zhang. "Multi-Label Text Classification Based on Label-Sentence Bi-Attention Fusion Network with Multi-Level Feature Extraction." Electronics 14, no. 1 (2025): 185. https://doi.org/10.3390/electronics14010185.

Abstract:
Multi-label text classification (MLTC) aims to assign the most appropriate label or labels to each input text. Previous studies have focused on mining textual information, ignoring the interdependence of labels and texts, thus leading to the loss of information about labels. In addition, previous studies have tended to focus on the single granularity of information in documents, ignoring the degree of inclination towards labels in different sentences in multi-labeled texts. In order to solve the above problems, this paper proposes a Label-Sentence Bi-Attention Fusion Network (LSBAFN) with mult
18

Lidén, Mats, Ola Hjelmgren, Jenny Vikgren, and Per Thunberg. "Multi-Reader–Multi-Split Annotation of Emphysema in Computed Tomography." Journal of Digital Imaging 33, no. 5 (2020): 1185–93. http://dx.doi.org/10.1007/s10278-020-00378-2.

Abstract:
Emphysema is visible on computed tomography (CT) as low-density lesions representing the destruction of the pulmonary alveoli. To train a machine learning model on the emphysema extent in CT images, labeled image data is needed. The provision of these labels requires trained readers, who are a limited resource. The purpose of the study was to test the reading time, inter-observer reliability and validity of the multi-reader–multi-split method for acquiring CT image labels from radiologists. The approximately 500 slices of each stack of lung CT images were split into 1-cm chunks, with
19

Wu, Xingyu, Bingbing Jiang, Kui Yu, Huanhuan Chen, and Chunyan Miao. "Multi-Label Causal Feature Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6430–37. http://dx.doi.org/10.1609/aaai.v34i04.6114.

Abstract:
Multi-label feature selection has received considerable attention during the past decade. However, existing algorithms do not attempt to uncover the underlying causal mechanism, and individually solve different types of variable relationships, ignoring the mutual effects between them. Furthermore, these algorithms lack interpretability: they can only select features for all labels, but cannot explain the correlation between a selected feature and a certain label. To address these problems, in this paper, we theoretically study the causal relationships in multi-label data, and propose a no
20

Huang, Shan, Wenlong Hu, Bin Lu, et al. "Application of Label Correlation in Multi-Label Classification: A Survey." Applied Sciences 14, no. 19 (2024): 9034. http://dx.doi.org/10.3390/app14199034.

Abstract:
Multi-Label Classification refers to the classification task where a data sample is associated with multiple labels simultaneously, which is widely used in text classification, image classification, and other fields. Different from the traditional single-label classification, each instance in Multi-Label Classification corresponds to multiple labels, and there is a correlation between these labels, which contains a wealth of information. Therefore, the ability to effectively mine and utilize the complex correlations between labels has become a key factor in Multi-Label Classification methods.
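The survey above (entry 20) concerns how label correlations are exploited in multi-label classification. As a generic illustration of that distinction, and not the survey's own method, the sketch below contrasts binary relevance, which trains one independent classifier per label, with a classifier chain, which feeds earlier label predictions into later classifiers so that correlations can be used. The synthetic data and the scikit-learn estimators are assumptions made for the example.

```python
# Minimal sketch: independent binary relevance vs. a classifier chain that
# passes earlier label predictions to later classifiers. Data and model
# choices are illustrative assumptions only.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=1000, n_features=20,
                                       n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: one independent classifier per label (ignores correlations).
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)

# Classifier chain: classifier j additionally sees the predictions for labels
# earlier in the chain, which lets it exploit label correlations.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0).fit(X_tr, Y_tr)

for name, model in [("binary relevance", br), ("classifier chain", chain)]:
    print(name, "micro-F1:", f1_score(Y_te, model.predict(X_te), average="micro"))
```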
21

Liu, Jin-Yu, Xian-Ling Mao, Tian-Yi Che, and Rong-Cheng Tu. "Distribution-Consistency-Guided Multi-modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12174–82. https://doi.org/10.1609/aaai.v39i11.33326.

Abstract:
Multi-modal hashing methods have gained popularity due to their fast speed and low storage requirements. Among them, the supervised methods demonstrate better performance by utilizing labels as supervisory signals compared with unsupervised methods. Currently, for almost all supervised multi-modal hashing methods, there is a hidden assumption that training sets have no noisy labels. However, labels are often annotated incorrectly due to manual labeling in real-world scenarios, which will greatly harm the retrieval performance. To address this issue, we first discover a significant distribution
22

Fang, Jun-Peng, and Min-Ling Zhang. "Partial Multi-Label Learning via Credible Label Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3518–25. http://dx.doi.org/10.1609/aaai.v33i01.33013518.

Abstract:
In partial multi-label learning (PML), each training example is associated with multiple candidate labels which are only partially valid. The task of PML naturally arises in learning scenarios with inaccurate supervision, and the goal is to induce a multi-label predictor which can assign a set of proper labels to unseen instances. To learn from PML training examples, the training procedure is prone to be misled by the false positive labels concealed in the candidate label set. In light of this major difficulty, a novel two-stage PML approach is proposed which works by eliciting credible labels fro
23

ZHANG, Yongwei. "Learning Label Correlations for Multi-Label Online Passive Aggressive Classification Algorithm." Wuhan University Journal of Natural Sciences 29, no. 1 (2024): 51–58. http://dx.doi.org/10.1051/wujns/2024291051.

Abstract:
Label correlations are an essential technique for data mining that addresses the possible correlation problem between different labels in multi-label classification. Although this technique is widely used in multi-label classification problems, most issues are handled with batch learning, which consumes a lot of time and space resources. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale datasets. However, existing online learning research has done little to consider correlations between labels.
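For readers unfamiliar with the passive-aggressive family referenced in entry 23, the following is a minimal sketch of a standard PA-I style online update applied independently to each label; it illustrates the base algorithm only and does not reproduce the paper's label-correlation mechanism. The aggressiveness parameter C, the data shapes, and the toy training loop are assumptions.

```python
import numpy as np

def pa1_multilabel_update(W, x, y, C=1.0):
    """One PA-I online step, applied independently to each label.

    W : (n_labels, n_features) weight matrix, updated in place
    x : (n_features,) feature vector of the incoming instance
    y : (n_labels,) label vector with entries in {-1, +1}
    C : aggressiveness parameter bounding the step size (assumed value)
    """
    for j in range(W.shape[0]):
        margin = y[j] * (W[j] @ x)
        loss = max(0.0, 1.0 - margin)        # hinge loss for label j
        if loss > 0.0:
            tau = min(C, loss / (x @ x))     # PA-I step size
            W[j] += tau * y[j] * x           # correct just enough to fix the margin
    return W

# Toy usage on random data.
rng = np.random.default_rng(0)
W = np.zeros((3, 5))
for _ in range(100):
    x = rng.normal(size=5)
    y = np.where(rng.random(3) < 0.5, -1.0, 1.0)
    W = pa1_multilabel_update(W, x, y)
```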
24

Zhu, Yue, Kai Ming Ting, and Zhi-Hua Zhou. "Multi-Label Learning with Emerging New Labels." IEEE Transactions on Knowledge and Data Engineering 30, no. 10 (2018): 1901–14. http://dx.doi.org/10.1109/tkde.2018.2810872.

25

Zhu, Pengfei, Qian Xu, Qinghua Hu, Changqing Zhang, and Hong Zhao. "Multi-label feature selection with missing labels." Pattern Recognition 74 (February 2018): 488–502. http://dx.doi.org/10.1016/j.patcog.2017.09.036.

26

Lin, Yaojin, Qinghua Hu, Jia Zhang, and Xindong Wu. "Multi-label feature selection with streaming labels." Information Sciences 372 (December 2016): 256–75. http://dx.doi.org/10.1016/j.ins.2016.08.039.

27

Xu, Miao, Yu-Feng Li, and Zhi-Hua Zhou. "Multi-Label Learning with PRO Loss." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 998–1004. http://dx.doi.org/10.1609/aaai.v27i1.8689.

Abstract:
Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met because most existing methods were designed to optimize existing criteria, yet there is no criterion which encodes the aforementioned requirement. In this paper, we present a new criterion, Pro Loss, concerning the prediction on all labels as well as the rankings of only relevant lab
28

Kolber, Anna, and Oliver Meixner. "Effects of Multi-Level Eco-Labels on the Product Evaluation of Meat and Meat Alternatives—A Discrete Choice Experiment." Foods 12, no. 15 (2023): 2941. http://dx.doi.org/10.3390/foods12152941.

Abstract:
Eco-labels are an instrument for enabling informed food choices and supporting a demand-sided change towards an urgently needed sustainable food system. Lately, novel eco-labels that depict a product’s environmental life cycle assessment on a multi-level scale are being tested across Europe’s retailers. This study elicits consumers’ preferences and willingness to pay (WTP) for a multi-level eco-label. A Discrete Choice Experiment was conducted; a representative sample (n = 536) for the Austrian population was targeted via an online survey. Individual partworth utilities were estimated by means
29

Song, Hwanjun, Minseok Kim, and Jae-Gil Lee. "Toward Robustness in Multi-Label Classification: A Data Augmentation Strategy against Imbalance and Noise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 21592–601. http://dx.doi.org/10.1609/aaai.v38i19.30157.

Abstract:
Multi-label classification poses challenges due to imbalanced and noisy labels in training data. In this paper, we propose a unified data augmentation method, named BalanceMix, to address these challenges. Our approach includes two samplers for imbalanced labels, generating minority-augmented instances with high diversity. It also refines multi-labels at the label-wise granularity, categorizing noisy labels as clean, re-labeled, or ambiguous for robust optimization. Extensive experiments on three benchmark datasets demonstrate that BalanceMix outperforms existing state-of-the-art methods. We r
30

Wang, Xiujuan, and Yuchen Zhou. "Multi-Label Feature Selection with Conditional Mutual Information." Computational Intelligence and Neuroscience 2022 (October 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/9243893.

Abstract:
Feature selection is an important way to optimize the efficiency and accuracy of classifiers. However, traditional feature selection methods cannot work with many kinds of data in the real world, such as multi-label data. To overcome this challenge, multi-label feature selection is developed. Multi-label feature selection plays an irreplaceable role in pattern recognition and data mining. This process can improve the efficiency and accuracy of multi-label classification. However, traditional multi-label feature selection based on mutual information does not fully consider the effect of redunda
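To make the mutual-information framing in entry 30 concrete, here is a simple, generic sketch that scores each feature by summing its estimated mutual information with every label and keeps the top-k. It is a relevance-only baseline, not the paper's criterion; in particular it omits the redundancy and conditional terms that dedicated multi-label methods add. The synthetic data, the value of k, and the use of scikit-learn's estimator are assumptions.

```python
# Illustrative sketch: rank features by summed mutual information with all
# labels and keep the top-k.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.feature_selection import mutual_info_classif

X, Y = make_multilabel_classification(n_samples=500, n_features=30,
                                       n_classes=4, random_state=0)

# Estimate I(f; y_j) separately for every label j, then sum per feature.
relevance = np.sum(
    [mutual_info_classif(X, Y[:, j], random_state=0) for j in range(Y.shape[1])],
    axis=0,
)

k = 10                                        # assumed number of features to keep
selected = np.argsort(relevance)[::-1][:k]    # indices of the top-k features
print("selected feature indices:", selected)
```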
31

Pu, Tao, Tianshui Chen, Hefeng Wu, and Liang Lin. "Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2091–98. http://dx.doi.org/10.1609/aaai.v36i2.20105.

Abstract:
Training the multi-label image recognition models with partial labels, in which merely some labels are known while others are unknown for each image, is a considerably challenging and practical task. To address this task, current algorithms mainly depend on pre-training classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms depend on sufficient multi-label annotations to train the models, leading to poor performance especially with low known label proportion. In this work, we propose to blend category-specific representation across diffe
32

Kang, Yujin, and Yoon-Sik Cho. "Beyond Single Emotion: Multi-label Approach to Conversational Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (2025): 24321–29. https://doi.org/10.1609/aaai.v39i23.34609.

Abstract:
Emotion recognition in conversation (ERC) has been promoted with diverse approaches in recent years. However, many studies have pointed out that emotion shift and confusing labels make it difficult for models to distinguish between different emotions. Existing ERC models suffer from these problems when the emotions are forced to be mapped to a single label. In this paper, we utilize our strategies for extending single labels to multi-labels. We then propose a multi-label classification framework for emotion recognition in conversation (ML-ERC). Specifically, we introduce weighted supervised
33

Wang, Zhen, Yiqun Duan, Liu Liu, and Dacheng Tao. "Multi-label Few-shot Learning with Semantic Inference (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (2021): 15917–18. http://dx.doi.org/10.1609/aaai.v35i18.17955.

Abstract:
Few-shot learning can adapt the classification model to new labels with only a few labeled examples. Previous studies mainly focus on the scenario of a single category label per example but have not solved the more challenging multi-label scenario with exponential-sized output space and low-data effectively. In this paper, we propose a semantic-aware meta-learning model for multi-label few-shot learning. Our approach can learn and infer the semantic correlation between unseen labels and historical labels to quickly adapt multi-label tasks from only a few examples. Specifically, features can be
34

Cui, Zijun, Yong Zhang, and Qiang Ji. "Label Error Correction and Generation through Label Relationships." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3693–700. http://dx.doi.org/10.1609/aaai.v34i04.5778.

Abstract:
For multi-label supervised learning, the quality of the label annotation is important. However, for many real-world multi-label classification applications, label annotations often lack quality, in particular when label annotation requires special expertise, such as annotating fine-grained labels. The relationships among labels, on the other hand, are usually stable and robust to errors. For this reason, we propose to capture and leverage label relationships at different levels to improve fine-grained label annotation quality and to generate labels. Two levels of labels, including object-level lab
35

Wu, Tianxiang, and Shuqun Yang. "Contrastive Enhanced Learning for Multi-Label Text Classification." Applied Sciences 14, no. 19 (2024): 8650. http://dx.doi.org/10.3390/app14198650.

Abstract:
Multi-label text classification (MLTC) aims to assign appropriate labels to each document from a given set. Prior research has acknowledged the significance of label information, but its utilization remains insufficient. Existing approaches often focus on either label correlation or label textual semantics, without fully leveraging the information contained within labels. In this paper, we propose a multi-perspective contrastive model (MPCM) with an attention mechanism to integrate labels and documents, utilizing contrastive methods to enhance label information from both textual semantic and c
36

Lyu, Gengyu, Bohang Sun, Xiang Deng, and Songhe Feng. "Addressing Multi-Label Learning with Partial Labels: From Sample Selection to Label Selection." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19251–59. https://doi.org/10.1609/aaai.v39i18.34119.

Abstract:
Multi-label Learning with Partial Labels (ML-PL) learns from training data where each sample is annotated with part of its positive labels while the rest of the positive labels are left unannotated. Existing methods mainly focus on extending multi-label losses to estimate unannotated labels, further inducing a missing-robust network. However, training with a single network could lead to confirmation bias (i.e., the model tends to confirm its mistakes). To tackle this issue, we propose a novel learning paradigm termed Co-Label Selection (CLS), where two networks feed forward all data and cooperate in a c
37

Xiao, Lin, Xiangliang Zhang, Liping Jing, Chi Huang, and Mingyang Song. "Does Head Label Help for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14103–11. http://dx.doi.org/10.1609/aaai.v35i16.17660.

Abstract:
Multi-label text classification (MLTC) aims to annotate documents with the most relevant labels from a number of candidate labels. In real applications, the distribution of label frequency often exhibits a long tail, i.e., a few labels are associated with a large number of documents (a.k.a. head labels), while a large fraction of labels are associated with a small number of documents (a.k.a. tail labels). To address the challenge of insufficient training data on tail label classification, we propose a Head-to-Tail Network (HTTN) to transfer the meta-knowledge from the data-rich head labels to
38

Siringoringo, Rimbun, Jamaluddin Jamaluddin, and Resianta Perangin-angin. "TEXT MINING DAN KLASIFIKASI MULTI LABEL MENGGUNAKAN XGBOOST" [Text Mining and Multi-Label Classification Using XGBoost]. METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 6, no. 6 (2022): 234–38. http://dx.doi.org/10.46880/jmika.vol6no2.pp234-238.

Abstract:
The conventional classification process is applied to find a single criterion or label. The multi-label classification process is more complex because a large number of labels results in more classes. Another aspect that must be considered in multi-label classification is the existence of mutual dependencies between data labels. In traditional binary classification, classification analysis only aims to determine the label in the text, whether positive or negative. This method is sub-optimal because the relationship between labels cannot be determined. To overcome the weaknesses of these tradit
39

Rottoli, Giovanni Daian, and Carlos Casanova. "Multi-criteria and Multi-expert Requirement Prioritization using Fuzzy Linguistic Labels." ParadigmPlus 3, no. 1 (2022): 1–18. http://dx.doi.org/10.55969/paradigmplus.v3n1a1.

Abstract:
Requirement prioritization in Software Engineering is the activity that helps to select and order the requirements to be implemented in each software development process iteration. Thus, requirement prioritization assists the decision-making process during iteration management. This work presents a method for requirement prioritization that considers many experts' opinions on multiple decision criteria provided using fuzzy linguistic labels, a tool that allows capturing the imprecision of each expert's judgment. These opinions are then aggregated using the fuzzy aggregation operator MLIOWA
40

Jiang, Ting, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, and Fuzhen Zhuang. "LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7987–94. http://dx.doi.org/10.1609/aaai.v35i9.16974.

Abstract:
Extreme multi-label text classification (XMC) is the task of finding the most relevant labels from a large label set. Nowadays deep learning-based methods have shown significant success in XMC. However, the existing methods (e.g., AttentionXML and X-Transformer) still suffer from 1) combining several models to train and predict for one dataset, and 2) sampling negative labels statically during the process of training the label ranking model, which harms the performance and accuracy of the model. To address the above problems, we propose LightXML, which adopts end-to-end training and dynamical ne
41

Zhang, Yi, Zhecheng Zhang, Mingyuan Chen, Hengyang Lu, Lei Zhang, and Chongjun Wang. "LAMB: A novel algorithm of label collaboration based multi-label learning." Intelligent Data Analysis 26, no. 5 (2022): 1229–45. http://dx.doi.org/10.3233/ida-215946.

Abstract:
Exploiting label correlation is crucially important in multi-label learning, where each instance is associated with multiple labels simultaneously. Multi-label learning is more complex than single-label learning because the labels tend to be correlated. Traditional multi-label learning algorithms learn independent classifiers for each label and employ ranking or thresholding on the classification results. Most existing methods take label correlation as prior knowledge, which has worked well, but they fail to make full use of label dependency. As a result, the real relationship among labels ma
42

Liu, Tianci, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, and Jing Gao. "SimFair: A Unified Framework for Fairness-Aware Multi-Label Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14338–46. http://dx.doi.org/10.1609/aaai.v37i12.26677.

Abstract:
Recent years have witnessed increasing concerns towards unfair decisions made by machine learning algorithms. To improve fairness in model decisions, various fairness notions have been proposed and many fairness-aware methods have been developed. However, most existing definitions and methods focus only on single-label classification. Fairness for multi-label classification, where each instance is associated with more than one label, has yet to be established. To fill this gap, we study fairness-aware multi-label classification in this paper. We start by extending Demographic Parity (DP) and Equ
43

Zheng, Maoji, Ziyu Xu, Qiming Xia, Hai Wu, Chenglu Wen, and Cheng Wang. "Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10591–98. https://doi.org/10.1609/aaai.v39i10.33150.

Abstract:
LIDAR-based 3D object detection and semantic segmentation are critical tasks in 3D scene understanding. Traditional detection and segmentation methods supervise their models through bounding box labels and semantic mask labels. However, these two independent labels inherently contain significant redundancy. This paper aims to eliminate the redundancy by supervising 3D object detection using only semantic labels. However, the challenge arises due to the incomplete geometry structure and boundary ambiguity of point cloud instances, leading to inaccurate pseudo-labels and poor detection results.
44

Xu, Pengyu, Lin Xiao, Bing Liu, Sijin Lu, Liping Jing, and Jian Yu. "Label-Specific Feature Augmentation for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10602–10. http://dx.doi.org/10.1609/aaai.v37i9.26259.

Abstract:
Multi-label text classification (MLTC) involves tagging a document with its most relevant subset of labels from a label set. In real applications, labels usually follow a long-tailed distribution, where most labels (called tail labels) only contain a small number of documents and limit the performance of MLTC. To alleviate this low-resource problem, researchers introduced a simple but effective strategy, data augmentation (DA). However, most existing DA approaches struggle in multi-label settings. The main reason is that the augmented documents for one label may inevitably influence the oth
45

Yu, Tianyu, Cuiwei Liu, Zhuo Yan, and Xiangbin Shi. "A Multi-Task Framework for Action Prediction." Information 11, no. 3 (2020): 158. http://dx.doi.org/10.3390/info11030158.

Abstract:
Predicting the categories of actions in partially observed videos is a challenging task in the computer vision field. The temporal progress of an ongoing action is of great importance for action prediction, since actions can present different characteristics at different temporal stages. To this end, we propose a novel multi-task deep forest framework, which treats temporal progress analysis as a relevant task to action prediction and takes advantage of observation ratio labels of incomplete videos during training. The proposed multi-task deep forest is a cascade structure of random forests an
46

Khandagale, Sujay, Han Xiao, and Rohit Babbar. "Bonsai: diverse and shallow trees for extreme multi-label classification." Machine Learning 109, no. 11 (2020): 2099–119. http://dx.doi.org/10.1007/s10994-020-05888-2.

Abstract:
Extreme multi-label classification (XMC) refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. In this paper, we develop a suite of algorithms, called Bonsai, which generalizes the notion of label representation in XMC, and partitions the labels in the representation space to learn shallow trees. We show three concrete realizations of this label representation space including: (i) the input space which is spanned by the input features, (ii) the output space spanned by label vectors based on their co-occurrence with other labels, and (iii) the
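Entry 46 describes partitioning labels in a representation space, one realization of which is the output space spanned by label co-occurrence vectors. The sketch below is a small, generic illustration of that idea, not the paper's algorithm: each label is represented by its co-occurrence counts with the other labels and the labels are then grouped with k-means. The toy data, the row normalisation, and the number of clusters are assumptions.

```python
# Illustrative sketch: represent every label by its co-occurrence counts with
# the other labels and partition the labels into groups with k-means. This
# mirrors the idea of clustering labels in an output-space representation; it
# is not the exact tree-building procedure of the cited paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_multilabel_classification

_, Y = make_multilabel_classification(n_samples=2000, n_features=10,
                                      n_classes=50, random_state=0)

co = Y.T @ Y                 # (n_labels, n_labels) co-occurrence counts
np.fill_diagonal(co, 0)      # ignore a label's co-occurrence with itself

# Row-normalise so very frequent labels do not dominate the clustering.
norms = np.linalg.norm(co, axis=1, keepdims=True)
co_norm = co / np.maximum(norms, 1e-12)

groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(co_norm)
for g in range(8):
    print("group", g, "-> labels", np.where(groups == g)[0].tolist())
```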
47

Mu, Dejun, Junhong Duan, Xiaoyu Li, Hang Dai, Xiaoyan Cai, and Lantian Guo. "Expede Herculem: Learning Multi Labels From Single Label." IEEE Access 6 (2018): 61410–18. http://dx.doi.org/10.1109/access.2018.2876014.

48

Ma, Jianghong, Zhaoyang Tian, Haijun Zhang, and Tommy W. S. Chow. "Multi-Label Low-dimensional Embedding with Missing Labels." Knowledge-Based Systems 137 (December 2017): 65–82. http://dx.doi.org/10.1016/j.knosys.2017.09.005.

49

Frasca, Marco, Simone Bassis, and Giorgio Valentini. "Learning node labels with multi-category Hopfield networks." Neural Computing and Applications 27, no. 6 (2015): 1677–92. http://dx.doi.org/10.1007/s00521-015-1965-1.

50

Li, Xinran, Wuyin Jin, Xiangyang Xu, and Hao Yang. "A Domain-Adversarial Multi-Graph Convolutional Network for Unsupervised Domain Adaptation Rolling Bearing Fault Diagnosis." Symmetry 14, no. 12 (2022): 2654. http://dx.doi.org/10.3390/sym14122654.

Abstract:
The transfer learning method, based on unsupervised domain adaptation (UDA), has been broadly utilized in research on fault diagnosis under variable working conditions with certain results. However, traditional UDA methods pay more attention to extracting information for the class labels and domain labels of data, ignoring the influence of data structure information on the extracted features. Therefore, we propose a domain-adversarial multi-graph convolutional network (DAMGCN) for UDA. A multi-graph convolutional network (MGCN), integrating three graph convolutional layers (multi-receptive fie