Title: Multi-Task Learning With Localized Generalization Error Model
Authors: Wendi Li, Yi Zhu, Ting Wang, Wing W. Y. Ng
Venue: 2019 International Conference on Machine Learning and Cybernetics (ICMLC)
Publication date: 2019-07-01
DOI: 10.1109/ICMLC48188.2019.8949255 (https://doi.org/10.1109/ICMLC48188.2019.8949255)
Citations: 0
Abstract
In many cases, the same or a similar network architecture is used to handle related but distinct tasks, where the tasks come from different statistical distributions in the sample input space yet share some common features. Multi-Task Learning (MTL) trains multiple related tasks simultaneously so as to learn a shared feature representation across tasks. However, it is difficult to improve every task when the statistical distributions of these related tasks differ greatly, because it becomes hard to extract a feature representation that generalizes effectively across the tasks. This difficulty also slows the convergence of MTL. We therefore propose an MTL method based on the Localized Generalization Error Model (L-GEM). The L-GEM improves the generalization capability of the trained model by minimizing an upper bound on its generalization error with respect to unseen samples that are similar to the training samples. It also helps narrow the gap between tasks that arises from their different statistical distributions in MTL. Experimental results show that the L-GEM speeds up the training process while significantly improving the final convergence results.
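To make the idea concrete, the sketch below illustrates one plausible way an L-GEM-style objective could be combined with multi-task training: each task contributes its empirical error plus a stochastic-sensitivity term (mean squared output change under small input perturbations within a Q-neighborhood), which together act as a surrogate for the upper bound on the localized generalization error. The model, task data, weighting `lam`, and neighborhood size `q` are all hypothetical toy choices, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two related regression tasks share a linear
# feature map W, and each task has its own output head.
W = rng.normal(size=(4, 3)) * 0.1          # shared representation
heads = [rng.normal(size=3) * 0.1 for _ in range(2)]

def forward(x, head):
    """Shared nonlinearity followed by a task-specific head."""
    return np.tanh(x @ W) @ head

def stochastic_sensitivity(x, head, q=0.1, n_perturb=16):
    """L-GEM-style sensitivity estimate: mean squared change of the
    output under uniform input perturbations of magnitude at most q."""
    base = forward(x, head)
    diffs = []
    for _ in range(n_perturb):
        delta = rng.uniform(-q, q, size=x.shape)
        diffs.append((forward(x + delta, head) - base) ** 2)
    return float(np.mean(diffs))

def lgem_mtl_objective(tasks, lam=1.0):
    """Sum over tasks of empirical MSE plus a weighted sensitivity
    penalty, i.e. a surrogate upper bound on generalization error."""
    total = 0.0
    for (x, y), head in zip(tasks, heads):
        emp = float(np.mean((forward(x, head) - y) ** 2))
        total += emp + lam * stochastic_sensitivity(x, head)
    return total

# Two toy tasks drawn from deliberately different input distributions,
# mimicking the distribution gap the abstract describes.
x1 = rng.normal(0.0, 1.0, size=(32, 4)); y1 = x1.sum(axis=1)
x2 = rng.normal(2.0, 0.5, size=(32, 4)); y2 = x2[:, 0]
obj = lgem_mtl_objective([(x1, y1), (x2, y2)])
print(round(obj, 4))
```

Minimizing such an objective penalizes models whose outputs vary sharply near the training samples, which is the mechanism by which L-GEM tightens the bound on error for unseen samples close to the training data.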