Journal articles on the topic 'Multitask learning'

Consult the top 50 journal articles for your research on the topic 'Multitask learning.'

1

Liu, Qiuhua, Xuejun Liao, Hui Li, J. R. Stack, and L. Carin. "Semisupervised Multitask Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 6 (2009): 1074–86. http://dx.doi.org/10.1109/tpami.2008.296.

2

Li, Zhen Xing, and Wei Hua Li. "Multitask Similarity Cluster." Advanced Materials Research 765-767 (September 2013): 1662–66. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1662.

Abstract:
Single-task learning is the prevailing way of training artificial neural networks; other tasks handled by the same learning machine were traditionally treated as noise. Multitask learning, proposed by Rich Caruana, instead holds that training several correlated tasks simultaneously helps improve the performance of each individual task. In this paper, we propose a new neural-network multitask similarity cluster. Combined with the Hellinger distance, the multitask similarity cluster can estimate distances among clusters more accurately. Experimental results show that multitask learning helps improve single-task performance and that the multitask similarity cluster achieves satisfactory results.
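The Hellinger distance this abstract relies on is a bounded metric between probability distributions. A minimal sketch of its discrete form follows; this is the generic textbook definition, not the paper's clustering code:

```python
import math

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions.
    # It is 0 for identical distributions and 1 for distributions with
    # disjoint support, giving a bounded measure of cluster separation.
    assert len(p) == len(q)
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)
```

For example, `hellinger([0.5, 0.5], [0.5, 0.5])` is 0, while two distributions with disjoint support sit at the maximum distance of 1.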
3

Yang, Peng, Peilin Zhao, Jiayu Zhou, and Xin Gao. "Confidence Weighted Multitask Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5636–43. http://dx.doi.org/10.1609/aaai.v33i01.33015636.

Abstract:
Traditional online multitask learning only utilizes the first-order information of the data stream. To remedy this issue, we propose a confidence-weighted multitask learning algorithm, which maintains a Gaussian distribution over each task model to guide the online learning process. The mean (covariance) of the Gaussian distribution is the sum of a local component and a global component that is shared among all the tasks. In addition, this paper also addresses the challenge of active learning in the online multitask setting. Instead of requiring labels for all the instances, the proposed algorithm decides whether the learner should acquire a label by considering the confidence of its related tasks in the label prediction. Theoretical results show that the regret bounds can be significantly reduced. Empirical results demonstrate that the proposed algorithm achieves promising learning efficacy while simultaneously minimizing the labeling cost.
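The local-plus-global weight decomposition described above can be sketched as a simple online learner. This is a deliberately simplified, non-probabilistic illustration with perceptron-style updates; the class name is made up, and the actual algorithm additionally maintains covariances so that updates are weighted by confidence:

```python
class LocalGlobalOnline:
    # Online multitask linear learner: each task's effective weight vector
    # is the sum of a task-local component and a component shared across
    # all tasks, so mistakes on one task also nudge the shared part.
    def __init__(self, n_tasks, dim, lr=0.5):
        self.local = [[0.0] * dim for _ in range(n_tasks)]
        self.shared = [0.0] * dim
        self.lr = lr

    def predict(self, task, x):
        w = [l + g for l, g in zip(self.local[task], self.shared)]
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

    def update(self, task, x, y):
        # Mistake-driven update applied to both components.
        if self.predict(task, x) != y:
            for i, xi in enumerate(x):
                self.local[task][i] += self.lr * y * xi
                self.shared[i] += self.lr * y * xi
```

Because the shared component is updated by every task, progress on one task transfers to its peers, which is the intuition behind the shared global mean in the paper.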
4

Li, Guangxia, Steven C. H. Hoi, Kuiyu Chang, Wenting Liu, and Ramesh Jain. "Collaborative Online Multitask Learning." IEEE Transactions on Knowledge and Data Engineering 26, no. 8 (2014): 1866–76. http://dx.doi.org/10.1109/tkde.2013.139.

5

Li, Zhen Xing, and Wei Hua Li. "Multitask Fuzzy Learning with Rule Weight." Advanced Materials Research 774-776 (September 2013): 1883–86. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1883.

Abstract:
In a fuzzy learning system based on rule weights, the certainty grade, denoted by the membership function of a fuzzy set, defines how close a rule is to a classification; several rules can correspond to the same classification. However, this cannot reflect the changes that occur when several tasks are trained simultaneously. In this paper, we propose multitask fuzzy learning based on error correction and define a belonging grade that expresses how much a sample belongs to a rule. Experimental results demonstrate the efficiency of multitask fuzzy learning and show that multitask learning can help improve a learning machine's predictions.
6

Yin, Jichong, Fang Wu, Yue Qiu, Anping Li, Chengyi Liu, and Xianyong Gong. "A Multiscale and Multitask Deep Learning Framework for Automatic Building Extraction." Remote Sensing 14, no. 19 (2022): 4744. http://dx.doi.org/10.3390/rs14194744.

Abstract:
Detecting buildings, segmenting building footprints, and extracting building edges from high-resolution remote sensing images are vital in applications such as urban planning, change detection, smart cities, and map-making and updating. The tasks of building detection, footprint segmentation, and edge extraction affect each other to a certain extent. However, most previous works have focused on one of these three tasks and have lacked a multitask learning framework that can simultaneously solve the tasks of building detection, footprint segmentation and edge extraction, making it difficult to obtain smooth and complete buildings. This study proposes a novel multiscale and multitask deep learning framework to consider the dependencies among building detection, footprint segmentation, and edge extraction while completing all three tasks. In addition, a multitask feature fusion module is introduced into the deep learning framework to increase the robustness of feature extraction. A multitask loss function is also introduced to balance the training losses among the various tasks to obtain the best training results. Finally, the proposed method is applied to open-source building datasets and large-scale high-resolution remote sensing images and compared with other advanced building extraction methods. To verify the effectiveness of multitask learning, the performance of multitask learning and single-task training is compared in ablation experiments. The experimental results show that the proposed method has certain advantages over other methods and that multitask learning can effectively improve single-task performance.
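The multitask loss balancing described above is often implemented with homoscedastic-uncertainty weighting in the style of Kendall et al.; the sketch below shows that common pattern and is an assumption for illustration, not necessarily the exact loss this paper uses:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    # Each task loss is scaled by exp(-s) where s is a learnable
    # log-variance; s itself is added as a regularizer so a task cannot
    # be ignored by driving its effective weight toward zero.
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))
```

With all log-variances at zero this reduces to the plain sum of task losses; raising a task's log-variance down-weights its contribution, which is how training balances detection, segmentation, and edge-extraction objectives.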
7

Menghi, Nicholas, Kemal Kacar, and Will Penny. "Multitask learning over shared subspaces." PLOS Computational Biology 17, no. 7 (2021): e1009092. http://dx.doi.org/10.1371/journal.pcbi.1009092.

Abstract:
This paper uses constructs from machine learning to define pairs of learning tasks that either shared or did not share a common subspace. Human subjects then learnt these tasks using a feedback-based approach and we hypothesised that learning would be boosted for shared subspaces. Our findings broadly supported this hypothesis with either better performance on the second task if it shared the same subspace as the first, or positive correlations over task performance for shared subspaces. These empirical findings were compared to the behaviour of a Neural Network model trained using sequential Bayesian learning and human performance was found to be consistent with a minimal capacity variant of this model. Networks with an increased representational capacity, and networks without Bayesian learning, did not show these transfer effects. We propose that the concept of shared subspaces provides a useful framework for the experimental study of human multitask and transfer learning.
8

Kato, Tsuyoshi, Hisashi Kashima, Masashi Sugiyama, and Kiyoshi Asai. "Conic Programming for Multitask Learning." IEEE Transactions on Knowledge and Data Engineering 22, no. 7 (2010): 957–68. http://dx.doi.org/10.1109/tkde.2009.142.

9

Kong, Yu, Ming Shao, Kang Li, and Yun Fu. "Probabilistic Low-Rank Multitask Learning." IEEE Transactions on Neural Networks and Learning Systems 29, no. 3 (2018): 670–80. http://dx.doi.org/10.1109/tnnls.2016.2641160.

10

Szyszkowska, Joanna, Anna Kinga Zduńczyk-Kłos, Antonina Doroszewska, Barbara Banaszczak, Milena Michalska, and Katarzyna Potocka. "Zdolność do skupienia uwagi i wielozadaniowości u studentów uczelni wyższych w okresie pandemicznej nauki na odległość" [The ability to focus and multitask among university students during pandemic distance learning]. Kwartalnik Pedagogiczny 68, no. 3 (2023): 71–90. http://dx.doi.org/10.31338/2657-6007.kp.2023-3.4.

Abstract:
The study aimed to investigate the impact of the changes in higher education during the COVID-19 pandemic on Polish university students’ ability to focus and multitask, and the presumed disproportions in these skills between medical students and other students. We also analysed the differences in how medical students and students of other programmes evaluated the organisation of classes during the pandemic. The study consisted of a survey on distance learning during the COVID-19 pandemic, an assessment of cognitive and motivational functions based on the PDQ-20 questionnaire and the authors’ original questions, and a test examining the ability to multitask on the Psytoolkit platform. 201 students participated in the study – 111 medical students and 90 other students. The respondents’ answers indicate greater exposure to distracting stimuli and an increased tendency to multitask during distance learning. The results of the experimental test show that multitasking leads to longer task-processing times and higher error rates. Medical students were less satisfied with the quality of distance classes. The level of subjective cognitive deficits and multitasking intensity was similar in both respondent groups. These results suggest that methods that engage students in distance learning may support learning by enhancing focus. This is the first study investigating university students’ ability to focus and multitask during pandemic distance learning.
11

Saylam, Berrenur, and Özlem Durmaz İncel. "Multitask Learning for Mental Health: Depression, Anxiety, Stress (DAS) Using Wearables." Diagnostics 14, no. 5 (2024): 501. http://dx.doi.org/10.3390/diagnostics14050501.

Abstract:
This study investigates the prediction of mental well-being factors—depression, stress, and anxiety—using the NetHealth dataset from college students. The research addresses four key questions, exploring the impact of digital biomarkers on these factors, their alignment with conventional psychology literature, the time-based performance of applied methods, and potential enhancements through multitask learning. The findings reveal modality rankings aligned with psychology literature, validated against paper-based studies. Improved predictions are noted with temporal considerations, and further enhanced by multitasking. Mental health multitask prediction results show aligned baseline and multitask performances, with notable enhancements using temporal aspects, particularly with the random forest (RF) classifier. Multitask learning improves outcomes for depression and stress but not anxiety using RF and XGBoost.
12

Sun, Kai, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. "Progressive Multi-task Learning with Controlled Information Flow for Joint Entity and Relation Extraction." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (2021): 13851–59. http://dx.doi.org/10.1609/aaai.v35i15.17632.

Abstract:
Multitask learning has shown promising performance in learning multiple related tasks simultaneously, and variants of model architectures have been proposed, especially for supervised classification problems. One goal of multitask learning is to extract a good representation that sufficiently captures the relevant part of the input about the output for each learning task. To achieve this objective, in this paper we design a multitask learning architecture based on the observation that correlations exist between outputs of some related tasks (e.g. entity recognition and relation extraction tasks), and they reflect the relevant features that need to be extracted from the input. As outputs are unobserved, our proposed model exploits task predictions in lower layers of the neural model, also referred to as early predictions in this work. But we control the injection of early predictions to ensure that we extract good task-specific representations for classification. We refer to this model as a Progressive Multitask learning model with Explicit Interactions (PMEI). Extensive experiments on multiple benchmark datasets produce state-of-the-art results on the joint entity and relation extraction task.
13

Yu, Qingtian, Haopeng Wang, Fedwa Laamarti, and Abdulmotaleb El Saddik. "Deep Learning-Enabled Multitask System for Exercise Recognition and Counting." Multimodal Technologies and Interaction 5, no. 9 (2021): 55. http://dx.doi.org/10.3390/mti5090055.

Abstract:
Exercise is a prevailing topic in modern society as more people are pursuing a healthy lifestyle. Physical activities provide significant benefits to human well-being from the inside out. Human pose estimation, action recognition and repetitive counting fields developed rapidly in the past several years. However, few works combined them together to assist people in exercise. In this paper, we propose a multitask system covering the three domains. Different from existing methods, heatmaps, which are the byproducts of 2D human pose estimation models, are adopted for exercise recognition and counting. Recent heatmap processing methods have been proven effective in extracting dynamic body pose information. Inspired by this, we propose a deep-learning multitask model of exercise recognition and repetition counting. To the best of our knowledge, this is the first attempt at such an approach. To meet the needs of the multitask model, we create a new dataset, Rep-Penn, with action, counting and speed labels. Our multitask system can estimate human pose, identify physical activities and count repeated motions. We achieved 95.69% accuracy in exercise recognition on the Rep-Penn dataset. The multitask model also performed well in repetitive counting, with 0.004 mean absolute error (MAE) and 0.997 Off-By-One (OBO) accuracy on the Rep-Penn dataset. Compared with existing frameworks, our method obtained state-of-the-art results.
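Repetition counting over a 1-D motion signal (for example, a joint coordinate tracked over time from pose heatmaps) can be approximated by counting prominent peaks. The following is a generic sketch of that idea under stated assumptions about thresholds, not the paper's deep counting model:

```python
def count_repetitions(signal, min_height=0.5):
    # Count repeated motions as the number of local maxima that rise
    # above `min_height`; the strict > on the left neighbour and >= on
    # the right counts a flat-topped peak exactly once.
    count = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > min_height
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            count += 1
    return count
```

On a clean periodic signal such as `[0, 1, 0, 1, 0, 1, 0]` this returns 3; real pose trajectories would need smoothing first, which is one reason learned counters outperform simple peak detectors.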
14

Kim, Hyuncheol, and Joonki Paik. "Low-Rank Representation-Based Object Tracking Using Multitask Feature Learning with Joint Sparsity." Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/147353.

Abstract:
We address the object tracking problem as a multitask feature learning process based on a low-rank representation of features with joint sparsity. We first select features with low-rank representation within a number of initial frames to obtain the subspace basis. Next, the features represented by the low-rank and sparse property are learned using a modified joint-sparsity-based multitask feature learning framework. Both the features and sparse errors are then optimally updated using a novel incremental alternating direction method. The low-rank minimization problem for learning multitask features can be solved by a few sequences of efficient closed-form updates. Since the proposed method performs feature learning in both a multitask and a low-rank manner, it not only reduces the dimension but also improves tracking performance without drift. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art tracking methods on challenging image sequences.
15

Su, Fang, Hai-Yang Shang, and Jing-Yan Wang. "Low-Rank Deep Convolutional Neural Network for Multitask Learning." Computational Intelligence and Neuroscience 2019 (May 20, 2019): 1–10. http://dx.doi.org/10.1155/2019/7410701.

Abstract:
In this paper, we propose a novel multitask learning method based on a deep convolutional network. The proposed deep network has four convolutional layers, three max-pooling layers, and two parallel fully connected layers. To adapt the deep network to the multitask learning problem, we propose to learn a low-rank deep network so that the relations among different tasks can be explored. We minimize the number of independent parameter rows of one fully connected layer, measured by the nuclear norm of that layer's parameter matrix, to seek a low-rank parameter matrix that captures the relations among tasks. Meanwhile, we also regularize another fully connected layer with a sparsity penalty so that the useful features learned by the lower layers can be selected. The learning problem is solved by an iterative algorithm based on gradient descent and back-propagation. The proposed algorithm is evaluated on benchmark datasets for multiple face attribute prediction, multitask natural language processing, and joint economic index prediction. The evaluation results show the advantage of the low-rank deep CNN model on multitask problems.
16

Pan, Haixia, Yanan Li, Hongqiang Wang, and Xiaomeng Tian. "Railway Obstacle Intrusion Detection Based on Convolution Neural Network Multitask Learning." Electronics 11, no. 17 (2022): 2697. http://dx.doi.org/10.3390/electronics11172697.

Abstract:
The detection of obstacle intrusion is very important for the safe running of trains. In this paper, we design a multitask intrusion detection model to warn of intrusions by detected target obstacles in railway scenes. In addition, we design a multiobjective optimization algorithm that handles tasks of differing complexity. Through a shared structure-reparameterized backbone network, our multitask learning model utilizes resources effectively. Our work achieves competitive results on both object detection and line detection, with excellent inference-time performance (50 FPS). Our work is the first to introduce a multitask approach to realize an assisted-driving function in a railway scene.
17

Jaśkowski, Wojciech, Krzysztof Krawiec, and Bartosz Wieloch. "Multitask Visual Learning Using Genetic Programming." Evolutionary Computation 16, no. 4 (2008): 439–59. http://dx.doi.org/10.1162/evco.2008.16.4.439.

Abstract:
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
18

Chen, Jiangtao, Zijia Wang, and Zheng Kou. "Multitask Level-Based Learning Swarm Optimizer." Biomimetics 9, no. 11 (2024): 664. http://dx.doi.org/10.3390/biomimetics9110664.

Abstract:
Evolutionary multitasking optimization (EMTO) is currently one of the hottest research topics; it aims to utilize the correlation between tasks to optimize them simultaneously. Although many evolutionary multitask algorithms (EMTAs) based on traditional differential evolution (DE) and the genetic algorithm (GA) have been proposed, there are relatively few EMTAs based on particle swarm optimization (PSO). Compared with DE and GA, PSO has a faster convergence speed, especially during the later stages of the evolutionary process. Therefore, this paper proposes a multitask level-based learning swarm optimizer (MTLLSO). In MTLLSO, multiple populations are maintained, and each population optimizes one task separately using LLSO, leveraging high-level individuals with better fitness to guide the evolution of low-level individuals with worse fitness. When information transfer occurs, high-level individuals from a source population guide the evolution of low-level individuals in the target population to improve the effectiveness of knowledge transfer. In this way, MTLLSO achieves a satisfactory balance between self-evolution and knowledge transfer. We illustrate the effectiveness of MTLLSO on the CEC2017 benchmark, where it significantly outperformed the other compared algorithms on most problems.
19

Skolidis, Grigorios, and Guido Sanguinetti. "Semisupervised Multitask Learning With Gaussian Processes." IEEE Transactions on Neural Networks and Learning Systems 24, no. 12 (2013): 2101–12. http://dx.doi.org/10.1109/tnnls.2013.2272403.

20

Li, Cong, Michael Georgiopoulos, and Georgios C. Anagnostopoulos. "Pareto-Path Multitask Multiple Kernel Learning." IEEE Transactions on Neural Networks and Learning Systems 26, no. 1 (2015): 51–61. http://dx.doi.org/10.1109/tnnls.2014.2309939.

21

Lee, Jeong Yoon, Youngmin Oh, Sung Shin Kim, Robert A. Scheidt, and Nicolas Schweighofer. "Optimal Schedules in Multitask Motor Learning." Neural Computation 28, no. 4 (2016): 667–85. http://dx.doi.org/10.1162/neco_a_00823.

Abstract:
Although scheduling multiple tasks in motor learning to maximize long-term retention of performance is of great practical importance in sports training and motor rehabilitation after brain injury, it is unclear how to do so. We propose here a novel theoretical approach that uses optimal control theory and computational models of motor adaptation to determine schedules that maximize long-term retention predictively. Using Pontryagin’s maximum principle, we derived a control law that determines the trial-by-trial task choice that maximizes overall delayed retention for all tasks, as predicted by the state-space model. Simulations of a single session of adaptation with two tasks show that when task interference is high, there exists a threshold in relative task difficulty below which the alternating schedule is optimal. Only for large differences in task difficulties do optimal schedules assign more trials to the harder task. However, over the parameter range tested, alternating schedules yield long-term retention performance that is only slightly inferior to performance given by the true optimal schedules. Our results thus predict that in a large number of learning situations wherein tasks interfere, intermixing tasks with an equal number of trials is an effective strategy in enhancing long-term retention.
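The trial-by-trial state-space view behind this result can be sketched as a one-state-per-task adaptation model. The parameter values, and the simplification that only the scheduled task's state changes on a trial, are assumptions for illustration rather than the paper's full model:

```python
def simulate_schedule(schedule, targets, retention=0.99, learning=0.2):
    # One adaptation state per task; on each trial the scheduled task's
    # state decays by `retention` and is corrected by a fraction
    # `learning` of the current error (target minus state).
    states = [0.0] * len(targets)
    for task in schedule:
        error = targets[task] - states[task]
        states[task] = retention * states[task] + learning * error
    return states
```

An alternating schedule over two tasks with opposite targets, e.g. `simulate_schedule([0, 1] * 50, [1.0, -1.0])`, drives both states most of the way toward their targets, matching the paper's observation that intermixing tasks with equal trial counts is an effective default.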
22

Dahan, Elay, and Israel Cohen. "Deep-Learning-Based Multitask Ultrasound Beamforming." Information 14, no. 10 (2023): 582. http://dx.doi.org/10.3390/info14100582.

Abstract:
In this paper, we present a new method for multitask learning applied to ultrasound beamforming. Beamforming is a critical component in the ultrasound image formation pipeline. Ultrasound images are constructed using sensor readings from multiple transducer elements, with each element typically capturing multiple acquisitions per frame. Hence, the beamformer is crucial for framerate performance and overall image quality. Furthermore, post-processing, such as image denoising, is usually applied to the beamformed image to achieve high clarity for diagnosis. This work shows a fully convolutional neural network that can learn different tasks by applying a new weight normalization scheme. We adapt our model to both high frame rate requirements by fitting weight normalization parameters for the sub-sampling task and image denoising by optimizing the normalization parameters for the speckle reduction task. Our model outperforms single-angle delay and sum on pixel-level measures for speckle noise reduction, subsampling, and single-angle reconstruction.
23

Zhang, Wenzheng, Chenyan Xiong, Karl Stratos, and Arnold Overwijk. "Improving Multitask Retrieval by Promoting Task Specialization." Transactions of the Association for Computational Linguistics 11 (2023): 1201–12. http://dx.doi.org/10.1162/tacl_a_00597.

Abstract:
In multitask retrieval, a single retriever is trained to retrieve relevant contexts for multiple tasks. Despite its practical appeal, naive multitask retrieval lags behind task-specific retrieval, in which a separate retriever is trained for each task. We show that it is possible to train a multitask retriever that outperforms task-specific retrievers by promoting task specialization. The main ingredients are: (1) a better choice of pretrained model—one that is explicitly optimized for multitasking—along with compatible prompting, and (2) a novel adaptive learning method that encourages each parameter to specialize in a particular task. The resulting multitask retriever is highly performant on the KILT benchmark. Upon analysis, we find that the model indeed learns parameters that are more task-specialized compared to naive multitasking without prompting or adaptive learning.
24

Arefeen, Asiful, and Hassan Ghasemzadeh. "Cost-Effective Multitask Active Learning in Wearable Sensor Systems." Sensors 25, no. 5 (2025): 1522. https://doi.org/10.3390/s25051522.

Abstract:
Multitask learning models reduce model complexity and improve accuracy by concurrently learning multiple tasks with shared representations. Leveraging inductive knowledge transfer, these models mitigate the risk of overfitting on any specific task, leading to enhanced overall performance. However, supervised multitask learning models, like many neural networks, require substantial amounts of labeled data. Given the cost associated with data labeling, there is a need for an efficient label acquisition mechanism, known as multitask active learning (MTAL). In wearable sensor systems, the success of MTAL largely hinges on its query strategies, because active learning in such settings involves interaction with end-users (e.g., patients) for annotation. However, these strategies have not been studied in mobile health settings and wearable systems to date. While strategies like one-sided sampling, alternating sampling, and rank-combination-based sampling have been proposed in the past, their applicability in mobile sensor settings—a domain constrained by label deficit—remains largely unexplored. This study investigates MTAL querying approaches and addresses crucial questions related to the choice of sampling methods and the effectiveness of multitask learning in mobile health applications. Using two datasets on activity recognition and emotion classification, our findings reveal that rank-based sampling outperforms other techniques, particularly for tasks with high correlation. However, sole reliance on informativeness for sample selection may introduce biases into models. To address this issue, we also propose a Clustered Stratified Sampling (CSS) method used in tandem with the multitask active learning query process. CSS identifies clustered mini-batches of samples, optimizing budget utilization and maximizing performance. When employed alongside rank-based query selection, our proposed CSS algorithm demonstrates up to 9% improvement in accuracy over traditional querying approaches for a 2000-query budget.
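The rank-combination querying mentioned above can be sketched as: each task ranks the unlabeled pool by its own informativeness score, the per-task ranks are summed, and the samples with the best combined rank are queried. The function and score names below are illustrative assumptions, not the paper's implementation:

```python
def rank_combination_query(task_scores, budget):
    # task_scores: one list per task of per-sample informativeness scores
    # (higher = more informative). Each task ranks the pool, rank sums
    # are combined, and the `budget` samples with the lowest (best)
    # combined rank are selected for labeling.
    n = len(task_scores[0])
    combined = [0] * n
    for scores in task_scores:
        order = sorted(range(n), key=lambda i: scores[i], reverse=True)
        for rank, idx in enumerate(order):
            combined[idx] += rank
    return sorted(range(n), key=lambda i: combined[i])[:budget]
```

A sample that every task finds informative gets a small rank sum and is queried early, which is why this strategy tends to work best when tasks are highly correlated.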
25

Wang, Xiaoqi, Yingjie Cheng, Yaning Yang, Yue Yu, Fei Li, and Shaoliang Peng. "Multitask joint strategies of self-supervised representation learning on biomedical networks for drug discovery." Nature Machine Intelligence 5, no. 4 (2023): 445–56. http://dx.doi.org/10.1038/s42256-023-00640-6.

Full text
Abstract:
Self-supervised representation learning (SSL) on biomedical networks provides new opportunities for drug discovery; however, effectively combining multiple SSL models is still challenging and has rarely been explored. We therefore propose multitask joint strategies of SSL on biomedical networks for drug discovery, named MSSL2drug. We design six basic SSL tasks inspired by various modalities, including structures, semantics and attributes in heterogeneous biomedical networks. Importantly, fifteen combinations of multiple tasks are evaluated using a graph-attention-based multitask adversarial learning framework in two drug discovery scenarios. The results suggest two important findings: (1) combinations of multimodal tasks achieve better performance than other multitask joint models; (2) the local–global combination models yield higher performance than random two-task combinations when the same number of modalities is involved. We thus conjecture that the multimodal and local–global combination strategies can serve as guidelines for multitask SSL in drug discovery.
26

Li, Lu, Yongjiu Dai, Zhongwang Wei, et al. "Enforcing Water Balance in Multitask Deep Learning Models for Hydrological Forecasting." Journal of Hydrometeorology 25, no. 1 (2024): 89–103. http://dx.doi.org/10.1175/jhm-d-23-0073.1.

Full text
Abstract:
Accurate prediction of hydrological variables (HVs) is critical for understanding hydrological processes. Deep learning (DL) models have shown excellent forecasting abilities for different HVs. However, most DL models typically predicted HVs independently, without satisfying the principle of water balance. This missed the interactions between different HVs in the hydrological system and the underlying physical rules. In this study, we developed a DL model based on multitask learning and hybrid physically constrained schemes to simultaneously forecast soil moisture, evapotranspiration, and runoff. The models were trained using ERA5-Land data, which have water budget closure. We thoroughly assessed the advantages of the multitask framework and the proposed constrained schemes. Results showed that multitask models with different loss-weighted strategies produced comparable or better performance compared to the single-task model. The multitask model with a scaling factor of 5 performed best among all multitask models and outperformed the single-task model over 70.5% of grids. In addition, the hybrid constrained scheme took advantage of both soft and hard constrained models, providing physically consistent predictions with better model performance. The hybrid constrained models performed the best among different constrained models in terms of both general and extreme performance. Moreover, the hybrid model was affected the least as the training data were artificially reduced, and provided better spatiotemporal extrapolation ability under different artificial prediction challenges. These findings suggest that the hybrid model provides better performance compared to previously reported constrained models when facing limited training data and extrapolation challenges.
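A soft water-balance constraint of the kind described adds the residual of the budget identity P = ET + R + ΔS to the training loss. The sketch below illustrates the idea only; variable names, the loss form, and the penalty weight are assumptions, not the paper's exact scheme:

```python
def water_balance_penalty(precip, evap, runoff, d_storage):
    # Absolute residual of the water-balance identity P = ET + R + dS;
    # zero when the predicted budget closes exactly.
    return abs(precip - (evap + runoff + d_storage))

def constrained_loss(pred, target, precip, lam=1.0):
    # Mean squared error over the predicted (ET, runoff, d_storage)
    # triple, plus a weighted closure penalty that pushes the multitask
    # predictions toward physical consistency.
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return mse + lam * water_balance_penalty(precip, *pred)
```

A hard-constrained variant would instead solve for one variable from the other predictions so the budget closes by construction; the hybrid scheme in the paper combines the two ideas.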
27

Forouzannezhad, Parisa, Dominic Maes, Daniel S. Hippe, et al. "Multitask Learning Radiomics on Longitudinal Imaging to Predict Survival Outcomes following Risk-Adaptive Chemoradiation for Non-Small Cell Lung Cancer." Cancers 14, no. 5 (2022): 1228. http://dx.doi.org/10.3390/cancers14051228.

Full text
Abstract:
Medical imaging provides quantitative and spatial information to evaluate treatment response in the management of patients with non-small cell lung cancer (NSCLC). High throughput extraction of radiomic features on these images can potentially phenotype tumors non-invasively and support risk stratification based on survival outcome prediction. The prognostic value of radiomics from different imaging modalities and time points prior to and during chemoradiation therapy of NSCLC, relative to conventional imaging biomarker or delta radiomics models, remains uncharacterized. We investigated the utility of multitask learning of multi-time point radiomic features, as opposed to single-task learning, for improving survival outcome prediction relative to conventional clinical imaging feature model benchmarks. Survival outcomes were prospectively collected for 45 patients with unresectable NSCLC enrolled on the FLARE-RT phase II trial of risk-adaptive chemoradiation and optional consolidation PD-L1 checkpoint blockade (NCT02773238). FDG-PET, CT, and perfusion SPECT imaging pretreatment and week 3 mid-treatment was performed and 110 IBSI-compliant pyradiomics shape-/intensity-/texture-based features from the metabolic tumor volume were extracted. Outcome modeling consisted of a fused Laplacian sparse group LASSO with component-wise gradient boosting survival regression in a multitask learning framework. Testing performance under stratified 10-fold cross-validation was evaluated for multitask learning radiomics of different imaging modalities and time points. Multitask learning models were benchmarked against conventional clinical imaging and delta radiomics models and evaluated with the concordance index (c-index) and index of prediction accuracy (IPA). FDG-PET radiomics had higher prognostic value for overall survival in test folds (c-index 0.71 [0.67, 0.75]) than CT radiomics (c-index 0.64 [0.60, 0.71]) or perfusion SPECT radiomics (c-index 0.60 [0.57, 0.63]). 
Multitask learning of pre-/mid-treatment FDG-PET radiomics (c-index 0.71 [0.67, 0.75]) outperformed benchmark clinical imaging (c-index 0.65 [0.59, 0.71]) and FDG-PET delta radiomics (c-index 0.52 [0.48, 0.58]) models. Similarly, the IPA for multitask learning FDG-PET radiomics (30%) was higher than clinical imaging (26%) and delta radiomics (15%) models. Radiomics models performed consistently under different voxel resampling conditions. Multitask learning radiomics for outcome modeling provides a clinical decision support platform that leverages longitudinal imaging information. This framework can reveal the relative importance of different imaging modalities and time points when designing risk-adaptive cancer treatment strategies.
APA, Harvard, Vancouver, ISO, and other styles
28

Tseng, Shao-Yen, Brian Baucom, and Panayiotis Georgiou. "Unsupervised online multitask learning of behavioral sentence embeddings." PeerJ Computer Science 5 (June 10, 2019): e200. http://dx.doi.org/10.7717/peerj-cs.200.

Full text
Abstract:
Appropriate embedding transformation of sentences can aid in downstream tasks such as NLP and emotion and behavior analysis. Such efforts evolved from word vectors which were trained in an unsupervised manner using large-scale corpora. Recent research, however, has shown that sentence embeddings trained using in-domain data or supervised techniques, often through multitask learning, perform better than unsupervised ones. Representations have also been shown to be applicable in multiple tasks, especially when training incorporates multiple information sources. In this work we aspire to combine the simplicity of using abundant unsupervised data with transfer learning by introducing an online multitask objective. We present a multitask paradigm for unsupervised learning of sentence embeddings which simultaneously addresses domain adaptation. We show that embeddings generated through this process increase performance in subsequent domain-relevant tasks. We evaluate on the affective tasks of emotion recognition and behavior analysis and compare our results with state-of-the-art general-purpose supervised sentence embeddings. Our unsupervised sentence embeddings outperform the alternative universal embeddings in both identifying behaviors within couples therapy and in emotion recognition.
APA, Harvard, Vancouver, ISO, and other styles
29

Yan, Yuguang, Gan Li, Qingliang Li, and Jinlong Zhu. "Enhancing Hydrological Variable Prediction through Multitask LSTM Models." Water 16, no. 15 (2024): 2156. http://dx.doi.org/10.3390/w16152156.

Full text
Abstract:
Deep learning models possess the capacity to accurately forecast various hydrological variables, encompassing flow, temperature, and runoff, notably leveraging Long Short-Term Memory (LSTM) networks to exhibit exceptional performance in capturing long-term dynamics. Nonetheless, these deep learning models often fixate solely on singular predictive tasks, thus overlooking the interdependencies among variables within the hydrological cycle. To address this gap, our study introduces a model that amalgamates Multitask Learning (MTL) and LSTM, harnessing inter-variable information to achieve high-precision forecasting across multiple tasks. We evaluate our proposed model on the global ERA5-Land dataset and juxtapose the results against those of a single-task model predicting a sole variable. Furthermore, experiments explore the impact of task weight allocation on the performance of multitask learning. The results indicate that when there is positive transfer among variables, multitask learning aids in enhancing predictive performance. When jointly forecasting first-layer soil moisture (SM1) and evapotranspiration (ET), the Nash–Sutcliffe Efficiency (NSE) increases by 19.6% and 4.1%, respectively, compared to the single-task baseline model; Kling–Gupta Efficiency (KGE) improves by 8.4% and 6.1%. Additionally, the model exhibits greater forecast stability when confronted with extreme data variations in tropical monsoon regions (AM). In conclusion, our study substantiates the applicability of multitask learning in the realm of hydrological variable prediction.
APA, Harvard, Vancouver, ISO, and other styles
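The NSE and KGE skill scores reported in this abstract have standard definitions; a minimal numpy sketch (an illustration of the metrics themselves, not the paper's code):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency, combining correlation (r), variability
    ratio (alpha), and bias ratio (beta); 1 is a perfect fit."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Both scores are unitless, so percentage improvements like the 19.6% NSE gain quoted above can be compared across variables such as soil moisture and evapotranspiration.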
30

Wang, Yan, Lei Zhang, Lituan Wang, and Zizhou Wang. "Multitask Learning for Object Localization With Deep Reinforcement Learning." IEEE Transactions on Cognitive and Developmental Systems 11, no. 4 (2019): 573–80. http://dx.doi.org/10.1109/tcds.2018.2885813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Linjuan, Jiaqi Shi, Lili Wang, and Changqing Xu. "Electricity, Heat, and Gas Load Forecasting Based on Deep Multitask Learning in Industrial-Park Integrated Energy System." Entropy 22, no. 12 (2020): 1355. http://dx.doi.org/10.3390/e22121355.

Full text
Abstract:
Different energy systems are closely connected with each other in an industrial-park integrated energy system (IES). Energy demand forecasting has an important impact on IES dispatching and planning. This paper proposes an approach to short-term energy forecasting for electricity, heat, and gas by employing deep multitask learning, whose structure is constructed from a deep belief network (DBN) and a multitask regression layer. The DBN can extract abstract and effective characteristics in an unsupervised fashion, and the multitask regression layer above the DBN is used for supervised prediction. Then, subject to the conditions of practical demand and model integrity, the whole energy forecasting model is introduced, including preprocessing, normalization, input properties, the training stage, and evaluation indicators. Finally, the validity of the algorithm and the accuracy of the energy forecasts for an industrial-park IES are verified through simulations using actual operating data from the load system. The positive results show that deep multitask learning has great prospects for load forecasting.
APA, Harvard, Vancouver, ISO, and other styles
32

Zheng, Weiping, Zhenyao Mo, and Gansen Zhao. "Clustering by Errors: A Self-Organized Multitask Learning Method for Acoustic Scene Classification." Sensors 22, no. 1 (2021): 36. http://dx.doi.org/10.3390/s22010036.

Full text
Abstract:
Acoustic scene classification (ASC) tries to infer information about the environment using audio segments. Inter-class similarity is a significant issue in ASC, as acoustic scenes with different labels may sound quite similar. In this paper, the similarity relations amongst scenes are correlated with the classification error. A class hierarchy construction method using classification error is then proposed and integrated into a multitask learning framework. The experiments have shown that the proposed multitask learning method improves the performance of ASC. On the TUT Acoustic Scene 2017 dataset, we obtain an ensemble fine-grained accuracy of 81.4%, which is better than the state-of-the-art. By using multitask learning, the basic Convolutional Neural Network (CNN) model can be improved by about 2.0 to 3.5 percent depending on the spectrogram. The coarse category accuracies (for two to six super-classes) range from 77.0% to 96.2% with single models. On the revised version of the LITIS Rouen dataset, we achieve an ensemble fine-grained accuracy of 83.9%. The multitask learning models obtain an improvement of 1.6% to 1.8% compared to their basic models. The coarse category accuracies range from 94.9% to 97.9% for two to six super-classes with single models.
APA, Harvard, Vancouver, ISO, and other styles
33

Flamary, R., N. Jrad, R. Phlypo, M. Congedo, and A. Rakotomamonjy. "Mixed-Norm Regularization for Brain Decoding." Computational and Mathematical Methods in Medicine 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/317056.

Full text
Abstract:
This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed-norms. This framework is extended to the multitask learning situation, where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps alleviate the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when a small number of learning examples are available, yielding significant performance improvements, especially for subjects performing poorly.
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Junkai, Lianlei Lin, Zaiming Teng, and Yu Zhang. "Multitask Learning Based on Improved Uncertainty Weighted Loss for Multi-Parameter Meteorological Data Prediction." Atmosphere 13, no. 6 (2022): 989. http://dx.doi.org/10.3390/atmos13060989.

Full text
Abstract:
With the exponential growth in the amount of available data, traditional meteorological data processing algorithms have become overwhelmed. The application of artificial intelligence to the simultaneous prediction of multi-parameter meteorological data has attracted much attention. However, existing single-task network models are generally limited by the data correlation dependence problem. In this paper, we use a priori knowledge for network design and propose a multitask model based on an asymmetric sharing mechanism, which effectively solves the correlation dependence problem in multi-parameter meteorological data prediction and achieves simultaneous prediction of multiple meteorological parameters with complex correlations for the first time. The performance of the multitask model depends largely on the relative weights among the task losses, and manually adjusting these weights is a difficult and expensive process, which makes it difficult for multitask learning to achieve the expected results in practice. In this paper, we propose an improved multitask loss processing method based on the assumptions of homoscedastic uncertainty and the Laplace loss distribution and validate it using the German Jena dataset. The results show that the method can automatically balance the losses of each subtask and has better performance and robustness.
APA, Harvard, Vancouver, ISO, and other styles
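The homoscedastic-uncertainty weighting this abstract builds on (in the style of Kendall et al., here with the Laplace-distribution assumption, which pairs naturally with an L1 loss) can be sketched as follows. This is a simplified illustration, not the paper's implementation; the numeric values are hypothetical, and in practice the log-sigma terms are learnable parameters.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_sigmas):
    """Combine per-task losses using learned task uncertainties.

    Each task's loss is scaled by exp(-log_sigma) and regularized by
    +log_sigma, so a task with high uncertainty is down-weighted but
    pays a penalty, letting the optimizer balance tasks automatically.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_sigmas):
        total += np.exp(-s) * loss + s
    return total

# Hypothetical per-task L1 losses and (here fixed) log-sigma values.
task_losses = [0.8, 2.5]
log_sigmas = [0.0, 0.9]
combined = uncertainty_weighted_loss(task_losses, log_sigmas)
```

During training, gradient descent adjusts the log-sigma values jointly with the network weights, which is what removes the need for manual weight tuning.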
35

Cui, Mingxiu. "DQN and dynamic feedback for multitask scheduling optimization in engineering management." International Journal of Low-Carbon Technologies 19 (2024): 2279–86. http://dx.doi.org/10.1093/ijlct/ctae163.

Full text
Abstract:
Within the realm of complex project management, the prevailing multitask scheduling optimization algorithm grapples with the dual constraints of sluggish convergence and exorbitant computational overhead. To address this challenge, this paper introduces a novel multitask scheduling optimization algorithm anchored in Deep Q Network (DQN) and dynamic feedback mechanisms. This innovative approach endeavors to ameliorate the algorithm’s learning prowess and scheduling efficiency through a reinforcement learning framework and dynamic feedback mechanisms. To substantiate the efficacy of the proposed algorithm, comprehensive experiments are conducted utilizing a sizable project management dataset. Comparative analyses are performed against established algorithms such as the ant colony algorithm, particle swarm algorithm, and adaptive genetic algorithm. Evaluation metrics encompass convergence speed and algorithmic runtime. Notably, experimental results unveil the superiority of the proposed algorithm across these metrics, underscoring its prowess in multitask scheduling within project management contexts. Looking ahead, future endeavors may entail further optimization of reinforcement learning parameters and their application in larger-scale engineering projects. Such endeavors hold the promise of augmenting the algorithm’s stability and efficacy, thereby fostering advancements in multitask scheduling optimization within the project management domain.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Jiafei, Qingsong Wang, Jianda Cheng, Deliang Xiang, and Wenbo Jing. "Multitask Learning-Based for SAR Image Superpixel Generation." Remote Sensing 14, no. 4 (2022): 899. http://dx.doi.org/10.3390/rs14040899.

Full text
Abstract:
Most of the existing synthetic aperture radar (SAR) image superpixel generation methods are designed based on the raw SAR images or artificially designed features. However, such methods have the following limitations: (1) SAR images are severely affected by speckle noise, resulting in unstable pixel distance estimation. (2) Artificially designed features cannot be well-adapted to complex SAR image scenes, such as the building regions. Aiming to overcome these shortcomings, we propose a multitask learning-based superpixel generation network (ML-SGN) for SAR images. ML-SGN firstly utilizes a multitask feature extractor to extract deep features, and constructs a high-dimensional feature space containing intensity information, deep semantic information, and spatial information. Then, we define an effective pixel distance measure based on the high-dimensional feature space. In addition, we design a differentiable soft assignment operation instead of the non-differentiable nearest neighbor operation, so that the differentiable Simple Linear Iterative Clustering (SLIC) and multitask feature extractor can be combined into an end-to-end superpixel generation network. Comprehensive evaluations are performed on two real SAR images with different bands, which demonstrate that our proposed method outperforms other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Fang, Cheng, Feifei Liang, Tianchi Li, and Fangheng Guan. "Learning Modality Consistency and Difference Information with Multitask Learning for Multimodal Sentiment Analysis." Future Internet 16, no. 6 (2024): 213. http://dx.doi.org/10.3390/fi16060213.

Full text
Abstract:
The primary challenge in Multimodal sentiment analysis (MSA) lies in developing robust joint representations that can effectively learn mutual information from diverse modalities. Previous research in this field tends to rely on feature concatenation to obtain joint representations. However, these approaches fail to fully exploit interactive patterns to ensure consistency and differentiation across different modalities. To address this limitation, we propose a novel framework for multimodal sentiment analysis, named CDML (Consistency and Difference using a Multitask Learning network). Specifically, CDML uses an attention mechanism to assign the attention weights of each modality efficiently. Adversarial training is used to obtain consistent information between modalities. Finally, the difference among the modalities is acquired by the multitask learning framework. Experiments on two benchmark MSA datasets, CMU-MOSI and CMU-MOSEI, showcase that our proposed method outperforms the seven existing approaches by at least 1.3% for Acc-2 and 1.7% for F1.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Zhicheng, Ze Luo, Jian Li, Can Chen, and Yingchao Piao. "When Self-Supervised Learning Meets Scene Classification: Remote Sensing Scene Classification Based on a Multitask Learning Framework." Remote Sensing 12, no. 20 (2020): 3276. http://dx.doi.org/10.3390/rs12203276.

Full text
Abstract:
In recent years, the development of convolutional neural networks (CNNs) has promoted continuous progress in scene classification of remote sensing images. Compared with natural image datasets, however, the acquisition of remote sensing scene images is more difficult, and consequently the scale of remote sensing image datasets is generally small. In addition, many problems related to small objects and complex backgrounds arise in remote sensing image scenes, presenting great challenges for CNN-based recognition methods. In this article, to improve the feature extraction ability and generalization ability of such models and to enable better use of the information contained in the original remote sensing images, we introduce a multitask learning framework which combines the tasks of self-supervised learning and scene classification. Unlike previous multitask methods, we adopt a new mixup loss strategy to combine the two tasks with dynamic weight. The proposed multitask learning framework empowers a deep neural network to learn more discriminative features without increasing the amounts of parameters. Comprehensive experiments were conducted on four representative remote sensing scene classification datasets. We achieved state-of-the-art performance, with average accuracies of 94.21%, 96.89%, 99.11%, and 98.98% on the NWPU, AID, UC Merced, and WHU-RS19 datasets, respectively. The experimental results and visualizations show that our proposed method can learn more discriminative features and simultaneously encode orientation information while effectively improving the accuracy of remote sensing scene classification.
APA, Harvard, Vancouver, ISO, and other styles
39

Nimbal, Pratik, and Gopal Krishna Shyam. "Multitask sparse Learning based Facial Expression Classification." International Journal of Computer Sciences and Engineering 7, no. 6 (2019): 197–202. http://dx.doi.org/10.26438/ijcse/v7i6.197202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Yao, Chunhua, Xinyu Song, Xuelei Zhang, Weicheng Zhao, and Ao Feng. "Multitask Learning for Aspect-Based Sentiment Classification." Scientific Programming 2021 (November 29, 2021): 1–9. http://dx.doi.org/10.1155/2021/2055555.

Full text
Abstract:
Aspect-level sentiment analysis identifies the sentiment polarity of aspect terms in complex sentences, which is useful in a wide range of applications. It is a highly challenging task and attracts the attention of many researchers in the natural language processing field. In order to obtain a better aspect representation, a wide range of existing methods design complex attention mechanisms to establish the connection between entity words and their context. With the limited size of data collections in aspect-level sentiment analysis, mainly because of the high annotation workload, the risk of overfitting is greatly increased. In this paper, we propose a Shared Multitask Learning Network (SMLN), which jointly trains auxiliary tasks that are highly related to aspect-level sentiment analysis. Specifically, we use opinion term extraction due to its high correlation with the main task. Through a custom-designed Cross Interaction Unit (CIU), effective information of the opinion term extraction task is passed to the main task, with performance improvement in both directions. Experimental results on SemEval-2014 and SemEval-2015 datasets demonstrate the competitive performance of SMLN in comparison to baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
41

Jin, Ran, Tengda Hou, Tongrui Yu, Min Luo, and Haoliang Hu. "A Multitask Deep Learning Framework for DNER." Computational Intelligence and Neuroscience 2022 (April 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/3321296.

Full text
Abstract:
Over the years, the explosive growth of drug-related text information has resulted in heavy loads of manual data processing work. However, the hidden domain knowledge is believed to be crucial to biomedical research and applications. In this article, we propose the multi-DTR model, which can accurately recognize drug-specific names by jointly modeling DNER and DNEN. Character features were extracted by a CNN from the input text, and context-sensitive word vectors were obtained using ELMo. Next, the pretrained biomedical word embeddings were fed into a BiLSTM-CRF, and the output labels of the two tasks interacted to update the task parameters until DNER and DNEN supported each other. The proposed method was found to achieve better performance on the DDI2011 and DDI2013 datasets.
APA, Harvard, Vancouver, ISO, and other styles
42

Xiong, Fangzhou, Biao Sun, Xu Yang, et al. "Guided Policy Search for Sequential Multitask Learning." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 1 (2019): 216–26. http://dx.doi.org/10.1109/tsmc.2018.2800040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Pillonetto, G., F. Dinuzzo, and G. De Nicolao. "Bayesian Online Multitask Learning of Gaussian Processes." IEEE Transactions on Pattern Analysis and Machine Intelligence 32, no. 2 (2010): 193–205. http://dx.doi.org/10.1109/tpami.2008.297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Singh, Loitongbam Gyanendro, Akash Anil, and Sanasam Ranbir Singh. "SHE: Sentiment Hashtag Embedding Through Multitask Learning." IEEE Transactions on Computational Social Systems 7, no. 2 (2020): 417–24. http://dx.doi.org/10.1109/tcss.2019.2962718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Stambrouski, Tsimafei, and Rodrigo Alves. "Multitask learning for cognitive sciences triplet analysis." Expert Systems with Applications 267 (April 2025): 126187. https://doi.org/10.1016/j.eswa.2024.126187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Barbour, Dennis, Zhiting Zhou, Dom Marticorena, et al. "Multitask Machine Learning of Contrast Sensitivity Functions." Journal of Vision 24, no. 10 (2024): 1082. http://dx.doi.org/10.1167/jov.24.10.1082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Qian Xu, Sinno Jialin Pan, Hannah Hong Xue, and Qiang Yang. "Multitask Learning for Protein Subcellular Location Prediction." IEEE/ACM Transactions on Computational Biology and Bioinformatics 8, no. 3 (2011): 748–59. http://dx.doi.org/10.1109/tcbb.2010.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gibert, Xavier, Vishal M. Patel, and Rama Chellappa. "Deep Multitask Learning for Railway Track Inspection." IEEE Transactions on Intelligent Transportation Systems 18, no. 1 (2017): 153–64. http://dx.doi.org/10.1109/tits.2016.2568758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Habic, Vuk, Alexander Semenov, and Eduardo L. Pasiliao. "Multitask deep learning for native language identification." Knowledge-Based Systems 209 (December 2020): 106440. http://dx.doi.org/10.1016/j.knosys.2020.106440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ramsundar, Bharath, Bowen Liu, Zhenqin Wu, et al. "Is Multitask Deep Learning Practical for Pharma?" Journal of Chemical Information and Modeling 57, no. 8 (2017): 2068–76. http://dx.doi.org/10.1021/acs.jcim.7b00146.

Full text
APA, Harvard, Vancouver, ISO, and other styles