Robust Lifelong Multi-task Multi-view Representation Learning

Gan Sun, Yang Cong, Jun Li, Y. Fu
{"title":"Robust Lifelong Multi-task Multi-view Representation Learning","authors":"Gan Sun, Yang Cong, Jun Li, Y. Fu","doi":"10.1109/ICBK.2018.00020","DOIUrl":null,"url":null,"abstract":"The state-of-the-art multi-task multi-view learning (MTMV) tackles the learning scenario where multiple tasks are associated with each other via multiple shared feature views. However, in online practical scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, the state-of-the-arts with single view cannot work well. To tackle this issue, in this paper, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate the knowledge from online multi-view tasks. More specifically, we firstly design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonal promoting term to enforce libraries to be as independent as possible. When online new multi-view task is coming, rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of corresponding library. In this view-invariant space, capturing underlying inter-view correlation and identifying task-specific views for the new task are jointly employed via a robust multi-task learning formulation. Then the view-specific libraries can be refined over time to keep on improving across all tasks. For the model optimization, the proximal alternating linearized minimization algorithm is adopted to optimize our nonconvex model alternatively to achieve lifelong learning. Finally, extensive experiments on benchmark datasets shows that our proposed rLM2L model outperforms existing lifelong learning models, while it can discover task-specific views from sequential multi-view task with less computational burden.","PeriodicalId":144958,"journal":{"name":"2018 IEEE International Conference on Big Knowledge (ICBK)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Big Knowledge (ICBK)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICBK.2018.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

The state-of-the-art multi-task multi-view learning (MTMV) tackles the learning scenario where multiple tasks are associated with each other via multiple shared feature views. However, in practical online scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, state-of-the-art single-view methods cannot work well. To tackle this issue, in this paper we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate knowledge from online multi-view tasks. More specifically, we first design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonality-promoting term to encourage the libraries to be as independent as possible. When a new multi-view task arrives online, the rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of the corresponding libraries. In this view-invariant space, capturing the underlying inter-view correlation and identifying task-specific views for the new task are performed jointly via a robust multi-task learning formulation. The view-specific libraries are then refined over time so that performance keeps improving across all tasks. For model optimization, the proximal alternating linearized minimization algorithm is adopted to alternately optimize our nonconvex model and achieve lifelong learning. Finally, extensive experiments on benchmark datasets show that our proposed rLM2L model outperforms existing lifelong learning models, while it can discover task-specific views from sequential multi-view tasks with less computational burden.
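The abstract does not give the exact objective, but a minimal sketch may help make two of its ingredients concrete: an orthogonality-promoting penalty between view-specific libraries, and a generic PALM-style (proximal alternating linearized minimization) block update. Both functions below are illustrative assumptions, not the authors' implementation: the libraries are simplified to share a common representation dimension, and the nonsmooth regularizer is taken to be an L1 penalty purely to show a concrete proximal step.

```python
import numpy as np

# Illustrative sketch only (assumption): each view-specific library L_v is a
# d x k_v matrix whose columns live in a common d-dimensional space.
# The orthogonality-promoting term sum_{v < w} ||L_v^T L_w||_F^2 penalizes
# overlap between the column spaces of different libraries, pushing them
# to be as independent as possible.
def orthogonality_penalty(libraries):
    penalty = 0.0
    for v in range(len(libraries)):
        for w in range(v + 1, len(libraries)):
            penalty += np.linalg.norm(libraries[v].T @ libraries[w], "fro") ** 2
    return penalty

# Generic PALM block update: linearize the smooth loss at the current block,
# take a gradient step with a step size tied to the block-wise Lipschitz
# constant, then apply the proximal operator of the nonsmooth term (here an
# assumed L1 penalty, whose prox is soft-thresholding).
def palm_block_update(block, grad_fn, lipschitz, l1_weight):
    step = 1.0 / max(lipschitz, 1e-12)
    z = block - step * grad_fn(block)  # gradient step on the smooth part
    return np.sign(z) * np.maximum(np.abs(z) - step * l1_weight, 0.0)  # prox of step * L1
```

In a PALM scheme, each block of variables (e.g., the libraries and the task-specific representations) would be updated in turn with such a step, which is what makes alternating optimization of a nonconvex model of this kind tractable.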