{"title":"鲁棒终身多任务多视图表示学习","authors":"Gan Sun, Yang Cong, Jun Li, Y. Fu","doi":"10.1109/ICBK.2018.00020","DOIUrl":null,"url":null,"abstract":"The state-of-the-art multi-task multi-view learning (MTMV) tackles the learning scenario where multiple tasks are associated with each other via multiple shared feature views. However, in online practical scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, the state-of-the-arts with single view cannot work well. To tackle this issue, in this paper, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate the knowledge from online multi-view tasks. More specifically, we firstly design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonal promoting term to enforce libraries to be as independent as possible. When online new multi-view task is coming, rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of corresponding library. In this view-invariant space, capturing underlying inter-view correlation and identifying task-specific views for the new task are jointly employed via a robust multi-task learning formulation. Then the view-specific libraries can be refined over time to keep on improving across all tasks. For the model optimization, the proximal alternating linearized minimization algorithm is adopted to optimize our nonconvex model alternatively to achieve lifelong learning. Finally, extensive experiments on benchmark datasets shows that our proposed rLM2L model outperforms existing lifelong learning models, while it can discover task-specific views from sequential multi-view task with less computational burden.","PeriodicalId":144958,"journal":{"name":"2018 IEEE International Conference on Big Knowledge (ICBK)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Robust Lifelong Multi-task Multi-view Representation Learning\",\"authors\":\"Gan Sun, Yang Cong, Jun Li, Y. Fu\",\"doi\":\"10.1109/ICBK.2018.00020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The state-of-the-art multi-task multi-view learning (MTMV) tackles the learning scenario where multiple tasks are associated with each other via multiple shared feature views. However, in online practical scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, the state-of-the-arts with single view cannot work well. To tackle this issue, in this paper, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate the knowledge from online multi-view tasks. More specifically, we firstly design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonal promoting term to enforce libraries to be as independent as possible. When online new multi-view task is coming, rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of corresponding library. In this view-invariant space, capturing underlying inter-view correlation and identifying task-specific views for the new task are jointly employed via a robust multi-task learning formulation. 
Abstract: State-of-the-art multi-task multi-view (MTMV) learning tackles the scenario where multiple tasks are associated with each other via multiple shared feature views. However, in practical online scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, state-of-the-art single-view methods do not work well. To tackle this issue, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model that accumulates knowledge from a sequence of online multi-view tasks. More specifically, we first design a set of view-specific libraries to maintain the intra-view correlation information of each view, and impose an orthogonality-promoting term to keep the libraries as independent as possible. When a new multi-view task arrives online, the rLM2L model maps each view of the new task into a common view-invariant space by transferring knowledge from the corresponding library. In this view-invariant space, a robust multi-task learning formulation jointly captures the underlying inter-view correlations and identifies task-specific views for the new task. The view-specific libraries are then refined over time, so performance keeps improving across all tasks. For model optimization, a proximal alternating linearized minimization (PALM) algorithm alternately optimizes our nonconvex model to achieve lifelong learning. Finally, extensive experiments on benchmark datasets show that the proposed rLM2L model outperforms existing lifelong learning models, while discovering task-specific views from sequential multi-view tasks with a lower computational burden.
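The abstract does not give the exact form of the orthogonality-promoting term on the view-specific libraries. A common choice in the dictionary/library-learning literature is a squared-Frobenius penalty on the cross-library Gram matrices, sum over i<j of ||L_i^T L_j||_F^2, which vanishes only when every pair of libraries spans mutually orthogonal subspaces. Below is a minimal NumPy sketch of that penalty under this assumption; the function name and shapes are illustrative, not the authors' actual formulation:

```python
import numpy as np

def orthogonality_penalty(libraries):
    """Sum of squared Frobenius norms of cross-library Gram matrices.

    Encourages the view-specific libraries L_1, ..., L_V (each d x k_v)
    to be as independent as possible: the penalty is zero exactly when
    every pair of libraries is orthogonal, i.e., L_i^T L_j = 0.
    """
    total = 0.0
    for i in range(len(libraries)):
        for j in range(i + 1, len(libraries)):
            gram = libraries[i].T @ libraries[j]  # k_i x k_j cross-correlations
            total += np.sum(gram ** 2)            # squared Frobenius norm
    return total

# Example: three random view-specific libraries over a 50-dim feature space.
rng = np.random.default_rng(0)
libs = [rng.standard_normal((50, 10)) for _ in range(3)]
print(orthogonality_penalty(libs))  # large for random libs, near zero after training
```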
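Likewise, the paper's PALM optimization is not spelled out in the abstract. The sketch below shows only the generic two-block PALM pattern (Bolte et al., 2014) on a stand-in objective, min over L, W of 0.5||X - LW||_F^2 + lambda||W||_1: each block takes a gradient step on the smooth coupling term with a step size from its partial Lipschitz constant, followed by that block's proximal map. The objective, step sizes, and names here are placeholders, not the rLM2L model itself:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1 (the robustness/sparsity term)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def palm(X, L, W, lam=0.1, n_iters=100):
    """Generic PALM iteration for min_{L,W} 0.5||X - L W||_F^2 + lam ||W||_1.

    X: d x n data, L: d x k library block, W: k x n code block.
    Each block alternates a linearized (gradient) step on the smooth term
    with its prox: identity for L, soft-thresholding for the l1 term on W.
    """
    for _ in range(n_iters):
        # L-block: gradient of 0.5||X - L W||_F^2 w.r.t. L is (L W - X) W^T;
        # the partial Lipschitz constant is the spectral norm of W W^T.
        step_L = 1.0 / (np.linalg.norm(W @ W.T, 2) + 1e-8)
        L = L - step_L * (L @ W - X) @ W.T
        # W-block: gradient step, then prox of lam||.||_1 with step size step_W.
        step_W = 1.0 / (np.linalg.norm(L.T @ L, 2) + 1e-8)
        W = soft_threshold(W - step_W * L.T @ (L @ W - X), lam * step_W)
    return L, W
```

Because each subproblem is only linearized rather than solved exactly, PALM is well suited to nonconvex objectives like the one described above, with convergence to a critical point under standard assumptions.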