Multiple Latent Spaces Learning for Cross-Domain Text Classification

Jianhan Pan, Teng Cui, Mingjing Du, Qingyang Zhang, Bingbing Song, Qiaoli Qu

2021 5th Asian Conference on Artificial Intelligence Technology (ACAIT), 29 October 2021. DOI: 10.1109/acait53529.2021.9730891
When the training and test data are drawn from similar but different distributions, transfer learning (TL) can be exploited to learn a consistent representation for knowledge transfer. To reduce distribution differences, several recent transfer learning approaches construct a latent feature space to exploit latent information, learning multiple high-level concepts to model a shared latent structure. However, exploiting the latent information in only one latent space neglects information that exists in other latent feature spaces, and this neglected information may also help model the shared structures that act as bridges between domains. In this paper, we propose Multiple Latent Spaces Learning (MLSL), a novel approach that mines the rich latent information in multiple latent spaces, learning different high-level concepts to construct one or more shared bridges across domains. Our strategy recovers latent information residing in spaces that previous methods ignore and uses it to build knowledge-transfer bridges. Compared with TL methods that learn only a single latent space, our strategy is better suited to real-world scenarios and makes fuller use of the data. In addition, an iterative algorithm is developed to solve the resulting optimization problem. Finally, experiments on benchmark data sets demonstrate the superiority of the MLSL method.
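The abstract does not give MLSL's objective function or update rules, so the following is only a minimal illustrative sketch of the general idea, under assumptions of my own: each "latent space" is realized as an NMF-style factorization of the stacked source-plus-target term matrix (a common device in this line of work), the per-space concept matrices serve as the cross-domain bridges, and the per-space document representations are concatenated before classification. The ranks, the classifier, and the helper names nmf and mlsl_features are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the paper's actual MLSL formulation and its
# iterative solver are not specified in the abstract. Here, latent space k
# factorizes the stacked document-term matrix as X ~ H_k @ W_k; the concept
# matrix W_k is learned jointly over both domains and so acts as a bridge,
# and documents are represented by concatenating H_1, ..., H_K.
import numpy as np
from sklearn.linear_model import LogisticRegression

def nmf(X, rank, n_iter=200, seed=0, eps=1e-9):
    """Factorize a non-negative matrix X (docs x words) as H @ W using
    standard multiplicative updates (Lee & Seung)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    H = rng.random((n, rank)) + eps
    W = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (X @ W.T) / (H @ W @ W.T + eps)   # update document loadings
        W *= (H.T @ X) / (H.T @ H @ W + eps)   # update concept matrix
    return H, W

def mlsl_features(X_src, X_tgt, ranks=(10, 20, 40)):
    """Learn one latent space per rank on the stacked domains (shared
    vocabulary assumed) and concatenate the per-space representations."""
    X = np.vstack([X_src, X_tgt])
    feats = [nmf(X, r, seed=k)[0] for k, r in enumerate(ranks)]
    Z = np.hstack(feats)                       # docs x sum(ranks)
    return Z[: len(X_src)], Z[len(X_src):]

# Toy usage: random bag-of-words counts standing in for two domains.
rng = np.random.default_rng(0)
X_src = rng.poisson(1.0, size=(100, 300)).astype(float)
y_src = rng.integers(0, 2, size=100)
X_tgt = rng.poisson(1.0, size=(80, 300)).astype(float)

Z_src, Z_tgt = mlsl_features(X_src, X_tgt)
clf = LogisticRegression(max_iter=1000).fit(Z_src, y_src)
y_tgt_pred = clf.predict(Z_tgt)  # source labels transferred via the bridges
```

Using several ranks is one simple way to obtain genuinely different latent spaces, each capturing high-level concepts at a different granularity; the abstract's "multiple bridges" could equally come from different factorization objectives or regularizers.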