
Journal articles on the topic 'Multi-labels'


Consult the top 50 journal articles for your research on the topic 'Multi-labels.'


1

Lee, Seongmin, Hyunsik Jeon, and U. Kang. "Multi-EPL: Accurate multi-source domain adaptation." PLOS ONE 16, no. 8 (2021): e0255754. http://dx.doi.org/10.1371/journal.pone.0255754.

Full text
Abstract:
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels. MSDA is a crucial problem applicable to many practical cases where labels for the target data are unavailable due to privacy issues. Existing MSDA frameworks are limited since they align data without considering labels of the features of each domain. They also do not fully utilize the target data without labels and rely on limited feature extraction with a single extractor. In this paper, we propose Multi-EPL, a novel method for MSDA. Multi-EPL exploits label-wise moment matching to align the conditional distributions of the features for the labels, uses pseudolabels for the unavailable target labels, and introduces an ensemble of multiple feature extractors for accurate domain adaptation. Extensive experiments show that Multi-EPL provides the state-of-the-art performance for MSDA tasks in both image domains and text domains, improving the accuracy by up to 13.20%.
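The label-wise moment matching mentioned above can be illustrated with a toy first-order sketch, assuming only that "moment matching" means aligning per-class feature statistics (an illustration of the general idea, not the Multi-EPL implementation):

```python
import numpy as np

def labelwise_moment_distance(feat_a, labels_a, feat_b, labels_b, num_classes):
    """Average squared distance between per-class feature means of two domains.

    Toy first-order moment matching: instead of aligning the global
    feature mean across domains, align the mean of the features
    conditioned on each class label.
    """
    total = 0.0
    for c in range(num_classes):
        mu_a = feat_a[labels_a == c].mean(axis=0)
        mu_b = feat_b[labels_b == c].mean(axis=0)
        total += np.sum((mu_a - mu_b) ** 2)
    return total / num_classes

# Two tiny "domains": the class-0 means differ, the class-1 means agree.
feat_a = np.array([[0.0, 0.0], [2.0, 2.0]])
labels_a = np.array([0, 1])
feat_b = np.array([[1.0, 1.0], [2.0, 2.0]])
labels_b = np.array([0, 1])
print(labelwise_moment_distance(feat_a, labels_a, feat_b, labels_b, 2))  # → 1.0
```

Minimizing such a label-conditioned distance, rather than a marginal one, is what lets the conditional feature distributions line up across source domains instead of only the marginals.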
APA, Harvard, Vancouver, ISO, and other styles
2

Hao, Pingting, Kunpeng Liu, and Wanfu Gao. "Double-Layer Hybrid-Label Identification Feature Selection for Multi-View Multi-Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12295–303. http://dx.doi.org/10.1609/aaai.v38i11.29120.

Full text
Abstract:
Multi-view multi-label feature selection aims to select informative features where the data are collected from multiple sources with multiple interdependent class labels. To fully exploit multi-view information, most prior works focus on the common part under ideal circumstances. However, the inconsistent part hidden in each view, including noise and view-specific elements, may affect the quality of the mapping between labels and feature representations. Meanwhile, ignoring the specific part might lead to a suboptimal result, as each label is supposed to possess specific characteristics of its own. To deal with these dual problems in multi-view multi-label feature selection, we propose a unified loss function that fully splits the observed labels into hybrid labels, that is, common labels, view-to-all specific labels, and noisy labels; the view-to-all specific labels are further split into specific labels for each view. The proposed method simultaneously considers the consistency and complementarity of different views. By exploring the feature weights of hybrid labels, the mapping relationships between labels and features can be established sequentially based on their attributes. Additionally, the interrelatedness among hybrid labels is also investigated and injected into the loss function. For the specific labels of each view, we construct a novel regularization paradigm incorporating logic operations. Finally, convergence is proved after applying the multiplicative update rules. Experiments on six datasets demonstrate the effectiveness and superiority of our method compared with state-of-the-art methods.
3

Wang, Ziquan, Mingxuan Xia, Xiangyu Ren, et al. "Multi-Instance Multi-Label Classification from Crowdsourced Labels." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21438–46. https://doi.org/10.1609/aaai.v39i20.35445.

Full text
Abstract:
Multi-instance multi-label classification (MIML) is a fundamental task in machine learning, where each data sample comprises a bag containing several instances and multiple binary labels. Despite its wide applications, the data collection process involves matching multiple instances and labels, typically resulting in high annotation costs. In this paper, we study a novel yet practical crowdsourced multi-instance multi-label classification (CMIML) setup, where labels are collected from multiple crowd sources. To address this problem, we first propose a novel data generation process for CMIML, i.e., cross-label transition, in which cross-label annotation errors are more likely to appear than under the previous single-label transition assumption, due to the inherent similarity of localized instances from different classes. We then formally define cross-label transition by cross-label transition matrices that are dependent across classes. Subsequently, we establish the first unbiased risk estimator for CMIML and further improve it through aggregation techniques, along with a rigorous generalization error bound. We also provide a practical implementation of cross-label transition matrix estimation. Comprehensive experiments on six benchmark datasets under various scenarios demonstrate that our algorithm outperforms the baselines by a large margin, validating its effectiveness in handling the CMIML problem.
4

Sun, Kai-Wei, Chong Ho Lee, and Xiao-Feng Xie. "MLHN: A Hypernetwork Model for Multi-Label Classification." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 06 (2015): 1550020. http://dx.doi.org/10.1142/s0218001415500202.

Full text
Abstract:
Multi-label classification has attracted significant attention in machine learning. In multi-label classification, exploiting correlations among labels is an essential but nontrivial task. First, labels may be correlated to various degrees. Second, scalability may suffer from the large number of labels, because the number of label combinations grows exponentially as the number of labels increases. In this paper, a multi-label hypernetwork (MLHN) is proposed to deal with these problems. By extending the traditional hypernetwork model, MLHN can represent arbitrary-order correlations among labels. The classification model of MLHN is simple, and its computational complexity is linear with respect to the number of labels, which contributes to the good scalability of MLHN. We perform experiments on a variety of datasets. The results illustrate that the proposed MLHN achieves competitive performance against state-of-the-art multi-label classification algorithms in terms of both effectiveness and scalability with respect to the number of labels.
5

Guo, Hai-Feng, Lixin Han, Shoubao Su, and Zhou-Bao Sun. "Deep Multi-Instance Multi-Label Learning for Image Annotation." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 03 (2017): 1859005. http://dx.doi.org/10.1142/s021800141859005x.

Full text
Abstract:
Multi-Instance Multi-Label learning (MIML) is a popular framework for supervised classification where an example is described by multiple instances and associated with multiple labels. Previous MIML approaches have focused on predicting labels for instances; the usual idea is to identify an equivalent problem in the traditional supervised learning framework. Motivated by recent advances in deep learning, in this paper we likewise consider the problem of predicting labels and attempt to bring deep learning into the MIML framework. The proposed approach enables us to train a deep convolutional neural network with images from social networks, where images are well labeled, sometimes with several labels or with uncorrelated labels. Experiments on real-world datasets demonstrate the effectiveness of our proposed approach.
6

Xing, Yuying, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang, and Maozu Guo. "Multi-View Multi-Instance Multi-Label Learning Based on Collaborative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5508–15. http://dx.doi.org/10.1609/aaai.v33i01.33015508.

Full text
Abstract:
Multi-view Multi-instance Multi-label Learning (M3L) deals with complex objects encompassing diverse instances, represented with different feature views, and annotated with multiple labels. Existing M3L solutions only partially explore the inter- or intra-relations between objects (or bags), instances, and labels, which can convey important contextual information for M3L. As such, they may have compromised performance. In this paper, we propose a collaborative matrix factorization based solution called M3Lcmf. M3Lcmf first uses a heterogeneous network composed of nodes of bags, instances, and labels to encode different types of relations via multiple relational data matrices. To preserve the intrinsic structure of the data matrices, M3Lcmf collaboratively factorizes them into low-rank matrices, explores the latent relationships between bags, instances, and labels, and selectively merges the data matrices. An aggregation scheme is further introduced to aggregate the instance-level labels into bag-level labels and to guide the factorization. An empirical study on benchmark datasets shows that M3Lcmf outperforms other related competitive solutions in both instance-level and bag-level prediction.
7

Chen, Tianshui, Tao Pu, Hefeng Wu, Yuan Xie, and Liang Lin. "Structured Semantic Transfer for Multi-Label Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 339–46. http://dx.doi.org/10.1609/aaai.v36i1.19910.

Full text
Abstract:
Multi-label image recognition is a fundamental yet practical task because real-world images inherently possess multiple semantic labels. However, it is difficult to collect large-scale multi-label annotations due to the complexity of both the input images and the output label spaces. To reduce the annotation cost, we propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels, i.e., merely some labels are known while the others are missing (also called unknown labels) for each image. The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations to transfer knowledge of known labels and generate pseudo labels for unknown labels. Specifically, an intra-image semantic transfer module learns an image-specific label co-occurrence matrix and maps the known labels onto unknown labels based on this matrix. Meanwhile, a cross-image transfer module learns category-specific feature similarities and helps complement unknown labels with high similarities. Finally, both known and generated labels are used to train the multi-label recognition models. Extensive experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
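The intra-image transfer module above is built around label co-occurrence. A much-simplified, counting-based sketch of the idea follows (SST learns its co-occurrence matrix; here the matrix is just estimated from a toy label matrix):

```python
import numpy as np

# Toy training label matrix: rows = images, columns = labels (1 = present).
Y = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 0],
], dtype=float)

# Estimate P(label j present | label i present) by counting co-occurrences.
counts = Y.T @ Y
cooc = counts / counts.diagonal()[:, None]

# For an image where only label 0 is known, score the remaining labels
# by how often they co-occur with label 0 in the training data.
known = np.array([1.0, 0.0, 0.0])
scores = known @ cooc
print(scores.round(2))  # label 1 co-occurs with label 0 twice as often as label 2
```

Thresholding such scores yields pseudo labels for the unknown entries, which is the role the learned co-occurrence matrix plays in the framework.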
8

Li, Lei, Yuqi Chu, Guanfeng Liu, and Xindong Wu. "Multi-Objective Optimization-Based Networked Multi-Label Active Learning." Journal of Database Management 30, no. 2 (2019): 1–26. http://dx.doi.org/10.4018/jdm.2019040101.

Full text
Abstract:
Along with the fast development of network applications, network research has attracted more and more attention; one of the most important research directions is networked multi-label classification, in which unknown labels of nodes can be inferred from the known labels of nodes in their neighborhood. As both the scale and complexity of networks increase, previously neglected system-overhead problems are becoming more and more serious. In this article, a novel multi-objective optimization-based networked multi-label seed node selection algorithm (named MOSS) is proposed to improve both the prediction accuracy for unknown labels inferred from the labels of seed nodes during classification and the system overhead of mining the labels of seed nodes with third parties before classification. Compared with other algorithms on several real networked data sets, the MOSS algorithm not only greatly reduces the system overhead before classification but also improves the prediction accuracy during classification.
9

Tan, Z. M., J. Y. Liu, Q. Li, D. Y. Wang, and C. Y. Wang. "An approach to error label discrimination based on joint clustering." Journal of Physics: Conference Series 2294, no. 1 (2022): 012018. http://dx.doi.org/10.1088/1742-6596/2294/1/012018.

Full text
Abstract:
Inaccurate multi-label learning aims at dealing with multi-label data with wrong labels. Error labels in data sets usually result in cognitive bias for objects, so discriminating and correcting wrong labels is a significant issue in multi-label learning. In this paper, a joint discrimination model based on fuzzy C-means (FCM) and possibilistic C-means (PCM) is proposed to find wrong labels in data sets. In this model, the connection between samples and their labels is analyzed based on the assumption of consistency between samples and their labels. Samples and labels are clustered by considering this connection in the joint FCM-PCM clustering model, and an inconsistency measure between a sample and its label is established to recognize wrong labels. A series of simulated experiments are comparatively implemented on several real multi-label data sets, and the results show the superior performance of the proposed model in comparison with two state-of-the-art mislabeling-correction methods.
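The FCM side of the joint model rests on the standard fuzzy membership update, which can be sketched as follows (textbook FCM with fuzzifier m = 2, not the paper's joint FCM-PCM model):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """One fuzzy C-means membership update.

    u[i, k] is the degree to which sample i belongs to cluster k;
    each row sums to 1.
    """
    # Distance from every sample to every cluster center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)           # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0))      # standard FCM weighting
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0], [1.0], [10.0]])
centers = np.array([[0.0], [10.0]])
u = fcm_memberships(X, centers)
print(u.round(3))
```

A sample sitting on a center gets a membership near 1 for that cluster, while the in-between sample at x = 1 is split roughly 0.99 / 0.01; it is this graded membership signal that an inconsistency measure between a sample and its label can exploit.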
10

Huang, Jun, Linchuan Xu, Kun Qian, Jing Wang, and Kenji Yamanishi. "Multi-label learning with missing and completely unobserved labels." Data Mining and Knowledge Discovery 35, no. 3 (2021): 1061–86. http://dx.doi.org/10.1007/s10618-021-00743-x.

Full text
Abstract:
Multi-label learning deals with data examples which are associated with multiple class labels simultaneously. Despite the success of existing approaches to multi-label learning, there is still a problem neglected by researchers: not only are some of the values of observed labels missing, but some labels are also completely unobserved for the training data. We refer to this problem as multi-label learning with missing and completely unobserved labels, and argue that it is necessary to discover these completely unobserved labels in order to mine useful knowledge and gain a deeper understanding of what is behind the data. In this paper, we propose a new approach named MCUL to solve multi-label learning with Missing and Completely Unobserved Labels. We discover the unobserved labels of a multi-label data set with a clustering-based regularization term, describe their semantic meanings based on the label-specific features learned by MCUL, and overcome the problem of missing labels by exploiting label correlations. The proposed method MCUL can predict both the observed and newly discovered labels simultaneously for unseen data examples. Experimental results validated over ten benchmark datasets demonstrate that the proposed method can outperform other state-of-the-art approaches on observed labels and obtain acceptable performance on the newly discovered labels as well.
11

Chen, Ze-Sen, Xuan Wu, Qing-Guo Chen, Yao Hu, and Min-Ling Zhang. "Multi-View Partial Multi-Label Learning with Graph-Based Disambiguation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3553–60. http://dx.doi.org/10.1609/aaai.v34i04.5761.

Full text
Abstract:
In multi-view multi-label learning (MVML), each training example is represented by different feature vectors and associated with multiple labels simultaneously. Nonetheless, the labeling quality of training examples tends to be affected by annotation noise. In this paper, the problem of multi-view partial multi-label learning (MVPML) is studied, where the set of associated labels is assumed to consist of candidate labels that are only partially valid. To solve the MVPML problem, a two-stage graph-based disambiguation approach is proposed. Firstly, the ground-truth labels of each training example are estimated by disambiguating the candidate labels with a fused similarity graph. After that, the predictive model for each label is learned from embedding features generated by disambiguation-guided clustering analysis. Extensive experimental studies clearly validate the effectiveness of the proposed approach in solving the MVPML problem.
12

Huang, Jun, Haowei Rui, Guorong Li, Xiwen Qu, Tao Tao, and Xiao Zheng. "Multi-Label Learning With Hidden Labels." IEEE Access 8 (2020): 29667–76. http://dx.doi.org/10.1109/access.2020.2972599.

Full text
13

Liu, Xinda, and Lili Wang. "Multi-granularity sequence generation for hierarchical image classification." Computational Visual Media 10, no. 2 (2024): 243–60. http://dx.doi.org/10.1007/s41095-022-0332-2.

Full text
Abstract:
Hierarchical multi-granularity image classification is a challenging task that aims to tag each given image with multiple granularity labels simultaneously. Existing methods tend to overlook that different image regions contribute differently to label prediction at different granularities, and also insufficiently consider relationships between the hierarchical multi-granularity labels. We introduce a sequence-to-sequence mechanism to overcome these two problems and propose a multi-granularity sequence generation (MGSG) approach for the hierarchical multi-granularity image classification task. Specifically, we introduce a transformer architecture to encode the image into visual representation sequences. Next, we traverse the taxonomic tree, organize the multi-granularity labels into sequences, vectorize them, and add positional information. The proposed method builds a decoder that takes visual representation sequences and semantic label embeddings as inputs and outputs the predicted multi-granularity label sequence. The decoder models dependencies and correlations between multi-granularity labels through a masked multi-head self-attention mechanism, and relates visual information to semantic label information through a cross-modality attention mechanism. In this way, the proposed method preserves the relationships between labels at different granularity levels and takes into account the influence of different image regions on labels of different granularities. Evaluations on six public benchmarks qualitatively and quantitatively demonstrate the advantages of the proposed method. Our project is available at https://github.com/liuxindazz/mgsg.
14

Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.

Full text
Abstract:
Partial multi-label learning (PML) deals with problems where each instance is assigned a candidate label set, which contains multiple relevant labels and some noisy labels. Recent studies usually solve PML problems with the disambiguation strategy, which recovers ground-truth labels from the candidate label set by simply assuming that the noisy labels are generated randomly. In real applications, however, noisy labels are usually caused by ambiguous contents of the example. Based on this observation, we propose a partial multi-label learning approach that simultaneously recovers the ground-truth information and identifies the noisy labels. The two objectives are formalized in a unified framework with trace norm and ℓ1 norm regularizers. Under the supervision of the observed noise-corrupted label matrix, the multi-label classifier and noisy label identifier are jointly optimized by incorporating label correlation exploitation and a feature-induced noise model. Extensive experiments on synthetic as well as real-world data sets validate the effectiveness of the proposed approach.
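The trace-norm and ℓ1 regularizers mentioned in the abstract have closed-form proximal operators, which is how such joint objectives are typically optimized. A minimal sketch of those two building blocks (generic proximal steps under standard assumptions, not the authors' full alternating algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the trace norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, lam):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

# Observed noise-corrupted label matrix = (low-rank truth) + (sparse noise).
Y = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],   # trailing 1 plays the role of a noisy label
              [1.0, 1.0, 0.0]])

P = svt(Y, 0.5)        # low-rank estimate of the ground-truth part
E = soft(Y - P, 0.2)   # sparse residual flags suspect entries
```

Alternating these two proximal steps is the classic recipe for splitting an observed matrix into a low-rank part plus a sparse part, which mirrors the "ground truth plus noisy labels" decomposition described above.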
15

Peng, Cheng, Ke Chen, Lidan Shou, and Gang Chen. "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14581–89. http://dx.doi.org/10.1609/aaai.v38i13.29374.

Full text
Abstract:
Multi-modal multi-label emotion recognition (MMER) aims to identify relevant emotions from multiple modalities. The challenge of MMER is how to effectively capture discriminative features for multiple labels from heterogeneous data. Recent studies are mainly devoted to exploring various fusion strategies to integrate multi-modal information into a unified representation for all labels. However, such a learning scheme not only overlooks the specificity of each modality but also fails to capture individual discriminative features for different labels. Moreover, dependencies of labels and modalities cannot be effectively modeled. To address these issues, this paper presents ContrAstive feature Reconstruction and AggregaTion (CARAT) for the MMER task. Specifically, we devise a reconstruction-based fusion mechanism to better model fine-grained modality-to-label dependencies by contrastively learning modal-separated and label-specific features. To further exploit the modality complementarity, we introduce a shuffle-based aggregation strategy to enrich co-occurrence collaboration among labels. Experiments on two benchmark datasets CMU-MOSEI and M3ED demonstrate the effectiveness of CARAT over state-of-the-art methods. Code is available at https://github.com/chengzju/CARAT.
16

Zhang, Ping, Wanfu Gao, Juncheng Hu, and Yonghao Li. "Multi-Label Feature Selection Based on High-Order Label Correlation Assumption." Entropy 22, no. 7 (2020): 797. http://dx.doi.org/10.3390/e22070797.

Full text
Abstract:
Multi-label data often involve features with high dimensionality and complicated label correlations, posing a great challenge for multi-label learning. Feature selection plays an important role in multi-label learning, and exploring label correlations is crucial for multi-label feature selection. Previous information-theoretical methods employ a cumulative-summation approximation strategy to evaluate candidate features, which considers only low-order label correlations. In fact, there exist high-order label correlations in the label set: labels naturally cluster into several groups, similar labels tend to fall into the same group, and dissimilar labels into different groups. However, the cumulative-summation approximation strategy tends to select features related to the groups containing more labels while ignoring the classification information of groups containing fewer labels. As a result, many features related to similar labels are selected, which leads to poor classification performance. To this end, a Max-Correlation term considering high-order label correlations is proposed. Additionally, we combine the Max-Correlation term with a feature redundancy term to ensure that the selected features are relevant to different label groups. Finally, a new method named Multi-label Feature Selection considering Max-Correlation (MCMFS) is proposed. Experimental results demonstrate the classification superiority of MCMFS in comparison to eight state-of-the-art multi-label feature selection methods.
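The information-theoretic criteria discussed above all bottom out in estimating mutual information between discrete features and labels. A minimal empirical estimator (illustrative only; the Max-Correlation and redundancy terms of MCMFS are built on top of quantities like this):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete variables."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

labels = [0, 0, 1, 1]
print(round(mutual_information([0, 0, 1, 1], labels), 3))  # → 0.693 (= ln 2)
print(round(mutual_information([0, 1, 0, 1], labels), 3))  # → 0.0
```

A feature that perfectly predicts the label attains MI = ln 2 here, while an independent feature scores zero; relevance and redundancy criteria rank candidate features by sums of such terms.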
17

Li, Anqi, and Lin Zhang. "Multi-Label Text Classification Based on Label-Sentence Bi-Attention Fusion Network with Multi-Level Feature Extraction." Electronics 14, no. 1 (2025): 185. https://doi.org/10.3390/electronics14010185.

Full text
Abstract:
Multi-label text classification (MLTC) aims to assign the most appropriate label or labels to each input text. Previous studies have focused on mining textual information, ignoring the interdependence of labels and texts, thus leading to the loss of information about labels. In addition, previous studies have tended to focus on the single granularity of information in documents, ignoring the degree of inclination towards labels in different sentences in multi-labeled texts. In order to solve the above problems, this paper proposes a Label-Sentence Bi-Attention Fusion Network (LSBAFN) with multi-level feature extraction for mining multi-granularity information and label information in documents. Specifically, document-level and sentence-level word embeddings are first obtained. Then, the textual relevance of the labels to these two levels is utilized to construct sentence-level textual representations. Next, a multi-level feature extraction mechanism is utilized to acquire a sentence-level textual representation that incorporates contextual information and a document-level textual representation that reflects label features. Subsequently, the label-sentence bi-attention fusion mechanism is used to learn the feature relationships in the two text representations and fuse them. Label attention identifies text features related to labels from the document-level text representation, while sentence attention focuses on the tendency of sentences towards labels. Finally, the effective portion of the fused features is extracted for classification by a multi-layer perceptron. The experimental findings indicate that the LSBAFN can improve the effectiveness of the MLTC task. Compared with the baseline models, the LSBAFN obtains a significant improvement of 0.6% and 7.81% in Micro-F1 and Macro-F1 on the Article Topic dataset and improvements of 1.03% and 0.47% in P@k and 1.02% and 0.38% in nDCG@k on the Software Category dataset and RCV1 dataset.
18

Lidén, Mats, Ola Hjelmgren, Jenny Vikgren, and Per Thunberg. "Multi-Reader–Multi-Split Annotation of Emphysema in Computed Tomography." Journal of Digital Imaging 33, no. 5 (2020): 1185–93. http://dx.doi.org/10.1007/s10278-020-00378-2.

Full text
Abstract:
Emphysema is visible on computed tomography (CT) as low-density lesions representing the destruction of the pulmonary alveoli. To train a machine learning model on the emphysema extent in CT images, labeled image data is needed. The provision of these labels requires trained readers, who are a limited resource. The purpose of the study was to test the reading time, inter-observer reliability and validity of the multi-reader–multi-split method for acquiring CT image labels from radiologists. The approximately 500 slices of each stack of lung CT images were split into 1-cm chunks, with 17 thin axial slices per chunk. The chunks were randomly distributed to 26 readers, radiologists and radiology residents. Each chunk was given a quick score concerning emphysema type and severity in the left and right lung separately. A cohort of 102 subjects, with varying degrees of visible emphysema in the lung CT images, was selected from the SCAPIS pilot, performed in 2012 in Gothenburg, Sweden. In total, the readers created 9050 labels for 2881 chunks. Image labels were compared with regional annotations already provided at the SCAPIS pilot inclusion. The median reading time per chunk was 15 s. The inter-observer Krippendorff’s alpha was 0.40 and 0.53 for emphysema type and score, respectively, and higher in the apical part than in the basal part of the lungs. The multi-split emphysema scores were generally consistent with regional annotations. In conclusion, the multi-reader–multi-split method provided reasonably valid image labels, with an estimation of the inter-observer reliability.
19

Wu, Xingyu, Bingbing Jiang, Kui Yu, Huanhuan Chen, and Chunyan Miao. "Multi-Label Causal Feature Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6430–37. http://dx.doi.org/10.1609/aaai.v34i04.6114.

Full text
Abstract:
Multi-label feature selection has received considerable attention during the past decade. However, existing algorithms do not attempt to uncover the underlying causal mechanism; they solve different types of variable relationships individually, ignoring the mutual effects between them. Furthermore, these algorithms lack interpretability: they can only select features for all labels, but cannot explain the correlation between a selected feature and a certain label. To address these problems, in this paper we theoretically study the causal relationships in multi-label data and propose a novel Markov blanket based multi-label causal feature selection (MB-MCF) algorithm. MB-MCF first mines the causal mechanism of labels and features to obtain a complete representation of information about labels. Based on the causal relationships, MB-MCF then selects predictive features and simultaneously distinguishes common features shared by multiple labels from label-specific features owned by single labels. Experiments on real-world data sets validate that MB-MCF can automatically determine the number of selected features and simultaneously achieve the best performance compared with state-of-the-art methods. An experiment on the Emotions data set further demonstrates the interpretability of MB-MCF.
20

Huang, Shan, Wenlong Hu, Bin Lu, et al. "Application of Label Correlation in Multi-Label Classification: A Survey." Applied Sciences 14, no. 19 (2024): 9034. http://dx.doi.org/10.3390/app14199034.

Full text
Abstract:
Multi-Label Classification refers to classification tasks where a data sample is associated with multiple labels simultaneously; it is widely used in text classification, image classification, and other fields. Different from traditional single-label classification, each instance in Multi-Label Classification corresponds to multiple labels, and there are correlations between these labels that carry a wealth of information. Therefore, the ability to effectively mine and utilize the complex correlations between labels has become a key factor in Multi-Label Classification methods. In recent years, research on label correlations has shown a significant growth trend internationally, reflecting its importance. Given this, the paper presents a survey of label correlations in Multi-Label Classification to provide valuable references and insights for future researchers. The paper introduces multi-label datasets across various fields, elucidates and categorizes the concept of label correlations, emphasizes their utilization in Multi-Label Classification and associated subproblems, and provides an outlook on future work on label correlations.
21

Liu, Jin-Yu, Xian-Ling Mao, Tian-Yi Che, and Rong-Cheng Tu. "Distribution-Consistency-Guided Multi-modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12174–82. https://doi.org/10.1609/aaai.v39i11.33326.

Full text
Abstract:
Multi-modal hashing methods have gained popularity due to their fast speed and low storage requirements. Among them, supervised methods demonstrate better performance than unsupervised methods by utilizing labels as supervisory signals. Currently, almost all supervised multi-modal hashing methods carry a hidden assumption that training sets have no noisy labels. However, labels are often annotated incorrectly due to manual labeling in real-world scenarios, which greatly harms retrieval performance. To address this issue, we first discover a significant distribution consistency pattern through experiments, i.e., the 1-0 distribution of the presence or absence of each category in the label is consistent with the high-low distribution of similarity scores of the hash codes relative to category centers. Then, inspired by this pattern, we propose a novel Distribution-Consistency-Guided Multi-modal Hashing (DCGMH) method, which aims to filter and reconstruct noisy labels to enhance retrieval performance. Specifically, the proposed method first randomly initializes several category centers, each representing the centroid of its respective category's region, which are used to compute the high-low distribution of similarity scores; noisy and clean labels are then separately filtered out via the discovered distribution consistency pattern to mitigate the impact of noisy labels; subsequently, a correction strategy, designed indirectly via the distribution consistency pattern, is applied to the filtered noisy labels, correcting high-confidence ones while treating low-confidence ones as unlabeled for unsupervised learning, thereby further enhancing the model's performance. Extensive experiments on three widely used datasets demonstrate the superiority of the proposed method over state-of-the-art baselines in multi-modal retrieval tasks.
APA, Harvard, Vancouver, ISO, and other styles
22

Fang, Jun-Peng, and Min-Ling Zhang. "Partial Multi-Label Learning via Credible Label Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3518–25. http://dx.doi.org/10.1609/aaai.v33i01.33013518.

Full text
Abstract:
In partial multi-label learning (PML), each training example is associated with multiple candidate labels which are only partially valid. The task of PML naturally arises in learning scenarios with inaccurate supervision, and the goal is to induce a multi-label predictor which can assign a set of proper labels to unseen instances. When learning from PML training examples, the training procedure is prone to being misled by the false positive labels concealed in the candidate label set. In light of this major difficulty, a novel two-stage PML approach is proposed which works by eliciting credible labels from the candidate label set for model induction. In this way, most false positive labels are expected to be excluded from the training procedure. Specifically, in the first stage, the labeling confidence of each candidate label for each PML training example is estimated via iterative label propagation. In the second stage, by utilizing credible labels with high labeling confidence, a multi-label predictor is induced via pairwise label ranking with virtual label splitting or maximum a posteriori (MAP) reasoning. Extensive experiments on synthetic as well as real-world data sets clearly validate the effectiveness of credible label elicitation in learning from PML examples.
APA, Harvard, Vancouver, ISO, and other styles
23

ZHANG, Yongwei. "Learning Label Correlations for Multi-Label Online Passive Aggressive Classification Algorithm." Wuhan University Journal of Natural Sciences 29, no. 1 (2024): 51–58. http://dx.doi.org/10.1051/wujns/2024291051.

Full text
Abstract:
Label correlations are an essential technique in data mining that addresses the possible correlations between different labels in multi-label classification. Although this technique is widely used in multi-label classification problems, most existing work handles it with batch learning, which consumes substantial time and space resources. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale datasets. However, existing online learning research has done little to consider correlations between labels. Building on existing research, this paper proposes a multi-label online learning algorithm based on label correlations that maximizes the margin between related labels and unrelated labels in multi-label samples. We evaluate the performance of the proposed algorithm on several public datasets. Experiments show the effectiveness of our algorithm.
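The margin between related and unrelated labels described in this abstract can be illustrated with the classic multi-label passive-aggressive update (a generic sketch of the PA family, not this paper's exact algorithm; the data and function name are hypothetical):

```python
import numpy as np

def pa_multilabel_update(W, x, relevant, irrelevant, C=1.0):
    """One passive-aggressive step: enforce a unit margin between the
    worst-scored relevant label and the best-scored irrelevant label."""
    scores = W @ x
    r = min(relevant, key=lambda k: scores[k])    # lowest-scoring relevant label
    s = max(irrelevant, key=lambda k: scores[k])  # highest-scoring irrelevant label
    loss = max(0.0, 1.0 - (scores[r] - scores[s]))
    if loss > 0:
        tau = min(C, loss / (2 * np.dot(x, x)))   # PA-I style step size
        W[r] += tau * x                           # push the relevant score up
        W[s] -= tau * x                           # push the irrelevant score down
    return W

W = np.zeros((3, 2))                  # 3 labels, 2 features
x = np.array([1.0, 1.0])
W = pa_multilabel_update(W, x, relevant=[0, 1], irrelevant=[2])
```

After this single update, the worst relevant label and best irrelevant label are separated by exactly the unit margin on `x`, which is the "passive until violated, then minimally aggressive" behavior that characterizes this family of online learners.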
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Yue, Kai Ming Ting, and Zhi-Hua Zhou. "Multi-Label Learning with Emerging New Labels." IEEE Transactions on Knowledge and Data Engineering 30, no. 10 (2018): 1901–14. http://dx.doi.org/10.1109/tkde.2018.2810872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Zhu, Pengfei, Qian Xu, Qinghua Hu, Changqing Zhang, and Hong Zhao. "Multi-label feature selection with missing labels." Pattern Recognition 74 (February 2018): 488–502. http://dx.doi.org/10.1016/j.patcog.2017.09.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Yaojin, Qinghua Hu, Jia Zhang, and Xindong Wu. "Multi-label feature selection with streaming labels." Information Sciences 372 (December 2016): 256–75. http://dx.doi.org/10.1016/j.ins.2016.08.039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Xu, Miao, Yu-Feng Li, and Zhi-Hua Zhou. "Multi-Label Learning with PRO Loss." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 998–1004. http://dx.doi.org/10.1609/aaai.v27i1.8689.

Full text
Abstract:
Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met because most existing methods were designed to optimize existing criteria, yet there is no criterion which encodes the aforementioned requirement. In this paper, we present a new criterion, Pro Loss, concerning the prediction on all labels as well as the rankings of only the relevant labels. We then propose ProSVM, which optimizes Pro Loss efficiently using the alternating direction method of multipliers. We further improve its efficiency with an upper approximation that reduces the number of constraints from O(T²) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior on Pro Loss, but also highly competitive on existing evaluation criteria.
APA, Harvard, Vancouver, ISO, and other styles
28

Kolber, Anna, and Oliver Meixner. "Effects of Multi-Level Eco-Labels on the Product Evaluation of Meat and Meat Alternatives—A Discrete Choice Experiment." Foods 12, no. 15 (2023): 2941. http://dx.doi.org/10.3390/foods12152941.

Full text
Abstract:
Eco-labels are an instrument for enabling informed food choices and supporting a demand-sided change towards an urgently needed sustainable food system. Lately, novel eco-labels that depict a product's environmental life cycle assessment on a multi-level scale are being tested across Europe's retailers. This study elicits consumers' preferences and willingness to pay (WTP) for a multi-level eco-label. A Discrete Choice Experiment was conducted; a representative sample (n = 536) of the Austrian population was targeted via an online survey. Individual partworth utilities were estimated by means of Hierarchical Bayes. The results show a higher WTP for a positively evaluated multi-level label, revealing consumers' perceived benefits of colorful multi-level labels over binary black-and-white designs. Even a negatively evaluated multi-level label was associated with a higher WTP compared to no label at all, pointing towards the limited effectiveness of eco-labels. Respondents' preferences for eco-labels were independent of their subjective eco-label knowledge, health consciousness, and environmental concern. The attribute "protein source" was most important, and preference for an animal-based protein source (beef) was strongly correlated with consumers' meat attachment, implying that a shift towards more sustainable protein sources is challenging, and sustainability labels have only a small impact on the meat product choice of average consumers.
APA, Harvard, Vancouver, ISO, and other styles
29

Song, Hwanjun, Minseok Kim, and Jae-Gil Lee. "Toward Robustness in Multi-Label Classification: A Data Augmentation Strategy against Imbalance and Noise." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 21592–601. http://dx.doi.org/10.1609/aaai.v38i19.30157.

Full text
Abstract:
Multi-label classification poses challenges due to imbalanced and noisy labels in training data. In this paper, we propose a unified data augmentation method, named BalanceMix, to address these challenges. Our approach includes two samplers for imbalanced labels, generating minority-augmented instances with high diversity. It also refines multi-labels at the label-wise granularity, categorizing noisy labels as clean, re-labeled, or ambiguous for robust optimization. Extensive experiments on three benchmark datasets demonstrate that BalanceMix outperforms existing state-of-the-art methods. We release the code at https://github.com/DISL-Lab/BalanceMix.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Xiujuan, and Yuchen Zhou. "Multi-Label Feature Selection with Conditional Mutual Information." Computational Intelligence and Neuroscience 2022 (October 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/9243893.

Full text
Abstract:
Feature selection is an important way to optimize the efficiency and accuracy of classifiers. However, traditional feature selection methods cannot handle many kinds of real-world data, such as multi-label data. To overcome this challenge, multi-label feature selection was developed. Multi-label feature selection plays an irreplaceable role in pattern recognition and data mining, and can improve the efficiency and accuracy of multi-label classification. However, traditional multi-label feature selection based on mutual information does not fully consider the effect of redundancy among labels. This deficiency may lead to repeated computation of mutual information and leaves room to enhance the accuracy of multi-label feature selection. To address this challenge, this paper proposes a multi-label feature selection method based on conditional mutual information among labels (CRMIL). Firstly, we analyze how to reduce the redundancy among features based on existing work. Secondly, we propose a new approach to diminish the redundancy among labels. This method takes label sets as conditions when calculating the relevance between features and labels, which weakens the impact of label redundancy on feature selection results. Finally, we analyze this algorithm and balance the effects of relevance and redundancy on the evaluation function. To test CRMIL, we compare it with eight other multi-label feature selection algorithms on ten datasets and use four evaluation criteria to examine the results. Experimental results illustrate that CRMIL performs better than existing algorithms.
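Conditioning relevance on label sets, as this abstract describes, relies on conditional mutual information I(X; Y | Z). A plug-in estimator for discrete variables (a generic sketch, not CRMIL's exact evaluation function) can be written as:

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(x, y, z):
    """Plug-in estimate of I(X; Y | Z) in nats for discrete sequences."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))   # joint counts over (x, y, z)
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        # Each term: p(x,y,z) * log( p(x,y,z) p(z) / (p(x,z) p(y,z)) )
        cmi += (c / n) * np.log((c / n) * (pz[zi] / n)
                                / ((pxz[(xi, zi)] / n) * (pyz[(yi, zi)] / n)))
    return cmi

# With x identical to y and a constant condition, I(X; Y | Z) = H(X) = ln 2.
print(conditional_mutual_information([0, 1, 0, 1], [0, 1, 0, 1], [0, 0, 0, 0]))
```

In a feature-selection loop, `x` would be a discretized feature, `y` a candidate label, and `z` the already-known labels, so that redundancy with `z` is discounted from the relevance score.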
APA, Harvard, Vancouver, ISO, and other styles
31

Pu, Tao, Tianshui Chen, Hefeng Wu, and Liang Lin. "Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2091–98. http://dx.doi.org/10.1609/aaai.v36i2.20105.

Full text
Abstract:
Training multi-label image recognition models with partial labels, in which merely some labels are known while others are unknown for each image, is a considerably challenging and practical task. To address this task, current algorithms mainly depend on pre-trained classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms depend on sufficient multi-label annotations to train the models, leading to poor performance, especially with a low known label proportion. In this work, we propose to blend category-specific representations across different images to transfer information of known labels to complement unknown labels, which dispenses with pre-trained models and thus does not depend on sufficient annotations. To this end, we design a unified semantic-aware representation blending (SARB) framework that exploits instance-level and prototype-level semantic representations to complement unknown labels with two complementary modules: 1) an instance-level representation blending (ILRB) module blends the representations of the known labels in one image into the representations of the unknown labels in another image to complement these unknown labels; 2) a prototype-level representation blending (PLRB) module learns more stable representation prototypes for each category and blends the representations of unknown labels with the prototypes of the corresponding labels to complement these labels. Extensive experiments on the MS-COCO, Visual Genome, and Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors on all known label proportion settings, i.e., with mAP improvements of 4.6%, 4.6%, and 2.2% on these three datasets when the known label proportion is 10%. Codes are available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
APA, Harvard, Vancouver, ISO, and other styles
32

Kang, Yujin, and Yoon-Sik Cho. "Beyond Single Emotion: Multi-label Approach to Conversational Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (2025): 24321–29. https://doi.org/10.1609/aaai.v39i23.34609.

Full text
Abstract:
Emotion recognition in conversation (ERC) has been advanced by diverse approaches in recent years. However, many studies have pointed out that emotion shift and confusing labels make it difficult for models to distinguish between different emotions. Existing ERC models suffer from these problems when emotions are forced to be mapped onto a single label. In this paper, we present strategies for extending single labels to multi-labels. We then propose a multi-label classification framework for emotion recognition in conversation (ML-ERC). Specifically, we introduce weighted supervised contrastive learning tailored for multi-labels, which can be easily applied to previous ERC models. Empirical results on the existing single-label task support the efficacy of our approach, which is most effective in the most challenging settings: emotion shift or confusing labels. We also evaluate ML-ERC with the multi-labels we produced to support our contrastive learning scheme.
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Zhen, Yiqun Duan, Liu Liu, and Dacheng Tao. "Multi-label Few-shot Learning with Semantic Inference (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (2021): 15917–18. http://dx.doi.org/10.1609/aaai.v35i18.17955.

Full text
Abstract:
Few-shot learning can adapt a classification model to new labels with only a few labeled examples. Previous studies mainly focus on the scenario of a single category label per example, but have not effectively solved the more challenging multi-label scenario with its exponential-sized output space and scarce data. In this paper, we propose a semantic-aware meta-learning model for multi-label few-shot learning. Our approach can learn and infer the semantic correlation between unseen labels and historical labels to quickly adapt to multi-label tasks from only a few examples. Specifically, features can be mapped into the semantic embedding space via label word vectors to explore and exploit the label correlation, and thus cope with the challenge of the overwhelming size of the output space. Then a novel semantic inference mechanism is designed for leveraging prior knowledge learned from historical labels, which produces good generalization performance on new labels to alleviate the low-data problem. Finally, extensive empirical results show that the proposed method significantly outperforms existing state-of-the-art methods on multi-label few-shot learning tasks.
APA, Harvard, Vancouver, ISO, and other styles
34

Cui, Zijun, Yong Zhang, and Qiang Ji. "Label Error Correction and Generation through Label Relationships." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3693–700. http://dx.doi.org/10.1609/aaai.v34i04.5778.

Full text
Abstract:
For multi-label supervised learning, the quality of label annotation is important. However, in many real-world multi-label classification applications, label annotations often lack quality, particularly when annotation requires special expertise, such as annotating fine-grained labels. The relationships among labels, on the other hand, are usually stable and robust to errors. For this reason, we propose to capture and leverage label relationships at different levels to improve fine-grained label annotation quality and to generate labels. Two levels of labels are considered: object-level labels and property-level labels. The object-level labels characterize the object category based on its overall appearance, while the property-level labels describe specific local object properties. A Bayesian network (BN) is learned to capture the relationships among the multiple labels at the two levels. MAP inference is then performed to identify the most stable and consistent label relationships, which are then used to improve data annotations for the same dataset and to generate labels for a new dataset. Experimental evaluations on six benchmark databases for two different tasks (facial action unit and object attribute classification) demonstrate the effectiveness of the proposed method in improving data annotation and in generating effective new labels.
APA, Harvard, Vancouver, ISO, and other styles
35

Wu, Tianxiang, and Shuqun Yang. "Contrastive Enhanced Learning for Multi-Label Text Classification." Applied Sciences 14, no. 19 (2024): 8650. http://dx.doi.org/10.3390/app14198650.

Full text
Abstract:
Multi-label text classification (MLTC) aims to assign appropriate labels to each document from a given set. Prior research has acknowledged the significance of label information, but its utilization remains insufficient. Existing approaches often focus on either label correlation or label textual semantics, without fully leveraging the information contained within labels. In this paper, we propose a multi-perspective contrastive model (MPCM) with an attention mechanism to integrate labels and documents, utilizing contrastive methods to enhance label information from both textual semantic and correlation perspectives. Additionally, we introduce techniques for contrastive global representation learning and positive label representation alignment to improve the model’s perception of accurate labels. The experimental results demonstrate that our algorithm achieves superior performance compared to existing methods when evaluated on the AAPD and RCV1-V2 datasets.
APA, Harvard, Vancouver, ISO, and other styles
36

Lyu, Gengyu, Bohang Sun, Xiang Deng, and Songhe Feng. "Addressing Multi-Label Learning with Partial Labels: From Sample Selection to Label Selection." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19251–59. https://doi.org/10.1609/aaai.v39i18.34119.

Full text
Abstract:
Multi-label Learning with Partial Labels (ML-PL) learns from training data where each sample is annotated with part of its positive labels while the rest remain unannotated. Existing methods mainly focus on extending multi-label losses to estimate unannotated labels, further inducing a missing-robust network. However, training with a single network can lead to confirmation bias (i.e., the model tends to confirm its mistakes). To tackle this issue, we propose a novel learning paradigm termed Co-Label Selection (CLS), where two networks feed forward all data and cooperate in a co-training manner for critical label selection. Different from traditional co-training based methods, in which networks select confident samples for each other, we start from a new perspective: the two networks are encouraged to remove false-negative labels while keeping training samples reserved. Meanwhile, considering the extreme positive-negative label imbalance in ML-PL that leads the model to focus on negative labels, we force the model to concentrate on positive labels by abandoning non-informative negative labels. By shifting the cooperation strategy from "Sample Selection" to "Label Selection", CLS avoids directly dropping samples and preserves training data to the greatest extent, thus enhancing the utilization of supervised signals and the generalization of the learning model. Empirical results on various multi-label datasets demonstrate that our CLS is significantly superior to other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Xiao, Lin, Xiangliang Zhang, Liping Jing, Chi Huang, and Mingyang Song. "Does Head Label Help for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14103–11. http://dx.doi.org/10.1609/aaai.v35i16.17660.

Full text
Abstract:
Multi-label text classification (MLTC) aims to annotate documents with the most relevant labels from a number of candidate labels. In real applications, the distribution of label frequency often exhibits a long tail, i.e., a few labels are associated with a large number of documents (a.k.a. head labels), while a large fraction of labels are associated with a small number of documents (a.k.a. tail labels). To address the challenge of insufficient training data on tail label classification, we propose a Head-to-Tail Network (HTTN) to transfer the meta-knowledge from the data-rich head labels to data-poor tail labels. The meta-knowledge is the mapping from few-shot network parameters to many-shot network parameters, which aims to promote the generalizability of tail classifiers. Extensive experimental results on three benchmark datasets demonstrate that HTTN consistently outperforms the state-of-the-art methods. The code and hyper-parameter settings are released for reproducibility.
APA, Harvard, Vancouver, ISO, and other styles
38

Siringoringo, Rimbun, Jamaluddin Jamaluddin, and Resianta Perangin-angin. "TEXT MINING DAN KLASIFIKASI MULTI LABEL MENGGUNAKAN XGBOOST." METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 6, no. 6 (2022): 234–38. http://dx.doi.org/10.46880/jmika.vol6no2.pp234-238.

Full text
Abstract:
The conventional classification process is applied to find a single criterion or label. The multi-label classification process is more complex because a large number of labels results in more classes. Another aspect that must be considered in multi-label classification is the existence of mutual dependencies between data labels. In traditional binary classification, the analysis only aims to determine the label of a text, whether positive or negative. This method is sub-optimal because the relationships between labels cannot be determined. To overcome the weaknesses of these traditional methods, multi-label classification is one of the solutions for data labeling. Multi-label text classification allows a document to carry many labels, with semantic correlations between those labels. This research performs multi-label classification on research article texts using an ensemble classifier approach, namely XGBoost. Classification performance is evaluated with several criteria: the confusion matrix, accuracy, and F1 score. The model is also evaluated by comparing the performance of XGBoost with Logistic Regression. Using a train-test split and cross-validation, Logistic Regression obtained an average training and testing accuracy of 0.81 and an average F1 score of 0.47, while XGBoost obtained an average accuracy of 0.88 and an average F1 score of 0.78. The results show that the XGBoost classifier model achieves good classification performance.
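The F1 evaluation mentioned above generalizes to multi-label output as micro-averaged F1 over the whole binary label matrix; a minimal numpy sketch with toy predictions (not the paper's data):

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over a binary label matrix (samples x labels):
    pool true/false positives and false negatives across all labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])
print(micro_f1(y_true, y_pred))  # tp=2, fp=1, fn=1 -> 2/3
```

Micro-averaging weights frequent labels more heavily; macro-averaging (mean of per-label F1) is the usual alternative when rare labels matter equally.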
APA, Harvard, Vancouver, ISO, and other styles
39

Rottoli, Giovanni Daian, and Carlos Casanova. "Multi-criteria and Multi-expert Requirement Prioritization using Fuzzy Linguistic Labels." ParadigmPlus 3, no. 1 (2022): 1–18. http://dx.doi.org/10.55969/paradigmplus.v3n1a1.

Full text
Abstract:
Requirement prioritization in Software Engineering is the activity that helps select and order the requirements to be implemented in each software development process iteration. Thus, requirement prioritization assists the decision-making process during iteration management. This work presents a method for requirement prioritization that considers many experts' opinions on multiple decision criteria, provided using fuzzy linguistic labels, a tool that captures the imprecision of each expert's judgment. These opinions are then aggregated using the fuzzy aggregation operator MLIOWA, considering different weights for each expert. An order for the requirements is then derived from the aggregated opinions, with different weights for each evaluated dimension or criterion. The proposed method has been implemented and demonstrated using a synthetic dataset. A statistical evaluation of the results obtained using different t-norms was also carried out.
APA, Harvard, Vancouver, ISO, and other styles
40

Jiang, Ting, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, and Fuzhen Zhuang. "LightXML: Transformer with Dynamic Negative Sampling for High-Performance Extreme Multi-label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7987–94. http://dx.doi.org/10.1609/aaai.v35i9.16974.

Full text
Abstract:
Extreme multi-label text classification (XMC) is the task of finding the most relevant labels from a large label set. Deep learning-based methods have shown significant success in XMC. However, existing methods (e.g., AttentionXML and X-Transformer) still suffer from 1) combining several models to train and predict for one dataset, and 2) sampling negative labels statically while training the label ranking model, which harms the performance and accuracy of the model. To address these problems, we propose LightXML, which adopts end-to-end training and dynamic negative label sampling. In LightXML, we use GAN-like networks to recall and rank labels. The label recalling part generates negative and positive labels, and the label ranking part distinguishes the positive labels from among them. Based on these networks, negative labels are sampled dynamically during training of the label ranking part. By feeding both the label recalling and ranking parts the same text representation, LightXML can reach high performance. Extensive experiments show that LightXML outperforms state-of-the-art methods on five extreme multi-label datasets with a much smaller model size and lower computational complexity. In particular, on the Amazon dataset with 670K labels, LightXML reduces the model size by up to 72% compared to AttentionXML. Our code is available at http://github.com/kongds/LightXML.
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Yi, Zhecheng Zhang, Mingyuan Chen, Hengyang Lu, Lei Zhang, and Chongjun Wang. "LAMB: A novel algorithm of label collaboration based multi-label learning." Intelligent Data Analysis 26, no. 5 (2022): 1229–45. http://dx.doi.org/10.3233/ida-215946.

Full text
Abstract:
Exploiting label correlation is crucially important in multi-label learning, where each instance is associated with multiple labels simultaneously. Multi-label learning is more complex than single-label learning because the labels tend to be correlated. Traditional multi-label learning algorithms learn independent classifiers for each label and apply ranking or thresholding to the classification results. Most existing methods take label correlation as prior knowledge, which has worked well, but they fail to make full use of label dependency. As a result, the real relationships among labels may not be correctly characterized and the final prediction is not explicitly correlated. To address these problems, we propose a novel high-order multi-label learning algorithm, Label collAboration based Multi-laBel learning (LAMB). For each label, LAMB utilizes collaboration between its own prediction and the predictions of other labels. Extensive experiments on various datasets demonstrate that our proposed LAMB algorithm achieves superior performance over existing state-of-the-art algorithms. In addition, one real-world dataset of channelrhodopsin chimeras is assessed, which would be of great value as a pre-screen for membrane protein function.
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Tianci, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, and Jing Gao. "SimFair: A Unified Framework for Fairness-Aware Multi-Label Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14338–46. http://dx.doi.org/10.1609/aaai.v37i12.26677.

Full text
Abstract:
Recent years have witnessed increasing concern over unfair decisions made by machine learning algorithms. To improve fairness in model decisions, various fairness notions have been proposed and many fairness-aware methods have been developed. However, most existing definitions and methods focus only on single-label classification. Fairness for multi-label classification, where each instance is associated with more than one label, has yet to be established. To fill this gap, we study fairness-aware multi-label classification in this paper. We start by extending Demographic Parity (DP) and Equalized Opportunity (EOp), two popular fairness notions, to multi-label classification scenarios. Through a systematic study, we show that on multi-label data, because of unevenly distributed labels, EOp usually fails to construct a reliable estimate on labels with few instances. We then propose a new framework named Similarity-induced Fairness (sγ-SimFair). This framework utilizes data with similar labels when estimating fairness on a particular label group for better stability, and can unify DP and EOp. Theoretical analysis and experimental results on real-world datasets together demonstrate the advantage of sγ-SimFair over existing methods on multi-label classification tasks.
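Extending Demographic Parity to the multi-label case, as this abstract describes, amounts to comparing per-label positive prediction rates across demographic groups. A minimal numpy sketch of the per-label DP gap (toy data; the function name is hypothetical, and this illustrates plain DP rather than the paper's sγ-SimFair estimator):

```python
import numpy as np

def dp_gap_per_label(y_pred, group):
    """Demographic-parity gap for each label:
    |P(pred=1 | group=0) - P(pred=1 | group=1)|, computed column-wise."""
    g0 = y_pred[group == 0].mean(axis=0)  # positive rate per label, group 0
    g1 = y_pred[group == 1].mean(axis=0)  # positive rate per label, group 1
    return np.abs(g0 - g1)

y_pred = np.array([[1, 0], [1, 1], [0, 0], [0, 1]])  # predictions, 2 labels
group = np.array([0, 0, 1, 1])                       # binary group membership
print(dp_gap_per_label(y_pred, group))  # label 0 gap = 1.0, label 1 gap = 0.0
```

The instability the abstract attributes to EOp on rare labels comes from conditioning these rates additionally on the true label, which leaves very few samples per group when a label is infrequent.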
APA, Harvard, Vancouver, ISO, and other styles
43

Zheng, Maoji, Ziyu Xu, Qiming Xia, Hai Wu, Chenglu Wen, and Cheng Wang. "Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10591–98. https://doi.org/10.1609/aaai.v39i10.33150.

Full text
Abstract:
LiDAR-based 3D object detection and semantic segmentation are critical tasks in 3D scene understanding. Traditional detection and segmentation methods supervise their models with bounding box labels and semantic mask labels, respectively. However, these two independent kinds of labels inherently contain significant redundancy. This paper aims to eliminate this redundancy by supervising 3D object detection using only semantic labels. The challenge arises from the incomplete geometric structure and boundary ambiguity of point cloud instances, which lead to inaccurate pseudo-labels and poor detection results. To address these challenges, we propose a novel method, named Seg2Box. We first introduce a Multi-Frame Multi-Scale Clustering (MFMS-C) module, which leverages the spatio-temporal consistency of point clouds to generate accurate box-level pseudo-labels. Additionally, a Semantic-Guiding Iterative-Mining Self-Training (SGIM-ST) module is proposed to enhance performance by progressively refining the pseudo-labels and mining the instances for which no pseudo-labels were generated. Experiments on the Waymo Open Dataset and the nuScenes Dataset show that our method significantly outperforms other competitive methods by 23.7% and 10.3% in mAP, respectively. The results demonstrate the great label-efficient potential and advancement of our method.
44

Xu, Pengyu, Lin Xiao, Bing Liu, Sijin Lu, Liping Jing, and Jian Yu. "Label-Specific Feature Augmentation for Long-Tailed Multi-Label Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10602–10. http://dx.doi.org/10.1609/aaai.v37i9.26259.

Full text
Abstract:
Multi-label text classification (MLTC) involves tagging a document with its most relevant subset of labels from a label set. In real applications, labels usually follow a long-tailed distribution, where most labels (called tail labels) cover only a small number of documents and limit the performance of MLTC. To address this low-resource problem, researchers have introduced a simple but effective strategy: data augmentation (DA). However, most existing DA approaches struggle in multi-label settings. The main reason is that the documents augmented for one label may inevitably influence its co-occurring labels and further exaggerate the long-tailed problem. To mitigate this issue, we propose a new pair-level augmentation framework for MLTC, called Label-Specific Feature Augmentation (LSFA), which augments positive feature-label pairs only for the tail labels. LSFA contains two main parts: the first learns label-specific document representations in a high-level latent space; the second augments tail-label features in that latent space by transferring the documents' second-order statistics (intra-class semantic variations) from head labels to tail labels. Finally, we design a new loss function for adjusting classifiers based on the augmented datasets. The whole learning procedure can be trained effectively. Comprehensive experiments on benchmark datasets show that the proposed LSFA outperforms state-of-the-art counterparts.
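The second-order-statistics transfer described above can be sketched in a few lines: synthetic tail-label features keep tail examples as anchors but borrow the head label's intra-class covariance. This is our own simplification under stated assumptions (Gaussian intra-class variation, a single head label as donor), not the authors' exact procedure.

```python
import numpy as np

def augment_tail(tail_feats, head_feats, n_new, seed=0):
    """Generate synthetic tail-label features in latent space: sample anchors
    from the tail label and add noise drawn from the head label's
    intra-class covariance (its second-order statistics)."""
    rng = np.random.default_rng(seed)
    head_cov = np.cov(head_feats, rowvar=False)        # head-class variation
    anchors = tail_feats[rng.integers(len(tail_feats), size=n_new)]
    noise = rng.multivariate_normal(
        np.zeros(tail_feats.shape[1]), head_cov, size=n_new)
    return anchors + noise
```

The augmented pairs would then feed the adjusted classifier loss; because only positive pairs for tail labels are synthesized, co-occurring head labels are left untouched.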
45

Yu, Tianyu, Cuiwei Liu, Zhuo Yan, and Xiangbin Shi. "A Multi-Task Framework for Action Prediction." Information 11, no. 3 (2020): 158. http://dx.doi.org/10.3390/info11030158.

Full text
Abstract:
Predicting the categories of actions in partially observed videos is a challenging task in the computer vision field. The temporal progress of an ongoing action is of great importance for action prediction, since actions can present different characteristics at different temporal stages. To this end, we propose a novel multi-task deep forest framework, which treats temporal progress analysis as a task relevant to action prediction and takes advantage of the observation-ratio labels of incomplete videos during training. The proposed multi-task deep forest is a cascade structure of random forests and multi-task random forests. Unlike traditional single-task random forests, multi-task random forests are built upon incomplete training videos annotated with action labels as well as temporal progress labels. Meanwhile, incorporating both random forests and multi-task random forests increases the diversity of classifiers and improves the discriminative power of the multi-task deep forest. Experiments on the UT-Interaction and the BIT-Interaction datasets demonstrate the effectiveness of the proposed multi-task deep forest.
46

Khandagale, Sujay, Han Xiao, and Rohit Babbar. "Bonsai: diverse and shallow trees for extreme multi-label classification." Machine Learning 109, no. 11 (2020): 2099–119. http://dx.doi.org/10.1007/s10994-020-05888-2.

Full text
Abstract:
Extreme multi-label classification (XMC) refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. In this paper, we develop a suite of algorithms, called Bonsai, which generalizes the notion of label representation in XMC and partitions the labels in the representation space to learn shallow trees. We show three concrete realizations of this label representation space: (i) the input space, spanned by the input features; (ii) the output space, spanned by label vectors based on their co-occurrence with other labels; and (iii) the joint space, combining the input and output representations. Furthermore, the constraint-free multi-way partitions learnt iteratively in these spaces lead to shallow trees. By combining the effect of shallow trees and generalized label representation, Bonsai achieves the best of both worlds: fast training, comparable to state-of-the-art tree-based methods in XMC, and much better prediction accuracy, particularly on tail labels. On the benchmark Amazon-3M dataset with 3 million labels, Bonsai outperforms a state-of-the-art one-vs-rest method in terms of prediction accuracy while being approximately 200 times faster to train. The code for Bonsai is available at https://github.com/xmc-aalto/bonsai.
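The output-space representation named in (ii) can be illustrated with a toy partitioner: embed each label as its normalized co-occurrence column of the instance-label matrix and cluster those embeddings to form the first level of a shallow tree. This is a deliberately simplified sketch (deterministic initialization, plain spherical k-means), not Bonsai's actual partitioning code.

```python
import numpy as np

def partition_labels(Y, k, iters=10):
    """Partition labels via spherical k-means over their output-space
    representation: each label is its L2-normalized column of the binary
    instance-label matrix Y (shape: n_instances x n_labels)."""
    reps = Y.T.astype(float)                           # one row per label
    reps /= np.linalg.norm(reps, axis=1, keepdims=True) + 1e-12

    # deterministic farthest-point initialization
    centers = [reps[0]]
    for _ in range(k - 1):
        sims = np.max(np.stack([reps @ c for c in centers]), axis=0)
        centers.append(reps[np.argmin(sims)])
    centers = np.stack(centers)

    for _ in range(iters):
        assign = np.argmax(reps @ centers.T, axis=1)   # cosine similarity
        for j in range(k):
            if (assign == j).any():
                c = reps[assign == j].mean(axis=0)
                centers[j] = c / (np.linalg.norm(c) + 1e-12)
    return assign
```

Bonsai additionally allows the input-space and joint-space representations and recurses the multi-way partition only a few levels deep, which is what keeps the trees shallow.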
47

Mu, Dejun, Junhong Duan, Xiaoyu Li, Hang Dai, Xiaoyan Cai, and Lantian Guo. "Expede Herculem: Learning Multi Labels From Single Label." IEEE Access 6 (2018): 61410–18. http://dx.doi.org/10.1109/access.2018.2876014.

Full text
48

Ma, Jianghong, Zhaoyang Tian, Haijun Zhang, and Tommy W. S. Chow. "Multi-Label Low-dimensional Embedding with Missing Labels." Knowledge-Based Systems 137 (December 2017): 65–82. http://dx.doi.org/10.1016/j.knosys.2017.09.005.

Full text
49

Frasca, Marco, Simone Bassis, and Giorgio Valentini. "Learning node labels with multi-category Hopfield networks." Neural Computing and Applications 27, no. 6 (2015): 1677–92. http://dx.doi.org/10.1007/s00521-015-1965-1.

Full text
50

Li, Xinran, Wuyin Jin, Xiangyang Xu, and Hao Yang. "A Domain-Adversarial Multi-Graph Convolutional Network for Unsupervised Domain Adaptation Rolling Bearing Fault Diagnosis." Symmetry 14, no. 12 (2022): 2654. http://dx.doi.org/10.3390/sym14122654.

Full text
Abstract:
Transfer learning based on unsupervised domain adaptation (UDA) has been broadly utilized in research on fault diagnosis under variable working conditions, with some success. However, traditional UDA methods focus on extracting information from the class labels and domain labels of the data, ignoring the influence of the data's structure on the extracted features. We therefore propose a domain-adversarial multi-graph convolutional network (DAMGCN) for UDA. A multi-graph convolutional network (MGCN) integrating three graph convolutional layers (a multi-receptive-field graph convolutional (MRFConv) layer, a local extreme value convolutional (LEConv) layer, and a graph attention convolutional (GATConv) layer) is used to mine data structure information. Domain discriminators and classifiers are utilized to model domain labels and class labels, respectively, and the structural differences between domains are aligned through the correlation alignment (CORAL) index. Validation on two case studies shows that the classification and feature-extraction ability of the DAMGCN is significantly enhanced compared with other UDA algorithms, and that it can effectively achieve cross-domain rolling-bearing fault diagnosis.
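The CORAL index the abstract relies on has a standard closed form: the squared Frobenius distance between source and target feature covariances, scaled by 1/(4d²). A plain NumPy sketch (outside the authors' graph-network framework) is:

```python
import numpy as np

def coral_loss(source_feats, target_feats):
    """CORAL alignment index: squared Frobenius distance between the
    source and target feature covariance matrices, scaled by 1/(4 d^2),
    where d is the feature dimension."""
    d = source_feats.shape[1]
    cov_s = np.cov(source_feats, rowvar=False)
    cov_t = np.cov(target_feats, rowvar=False)
    return float(np.sum((cov_s - cov_t) ** 2) / (4 * d * d))
```

Minimizing this term alongside the adversarial domain loss pulls the second-order statistics of the two domains together; it vanishes exactly when the covariances match.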