{"title":"基于多核学习的领域自适应","authors":"Liyan Han, Weixin Ling","doi":"10.1145/3579654.3579735","DOIUrl":null,"url":null,"abstract":"Abstract: Domain adaptation is used to solve the inconsistency in distributions of training samples (source domain) and test samples (target domain) and improve the accuracies of traditional learning machines. Domain adaptation methods attempt to map the two domains to a latent space where the distributions of them are aligned. The model trained in the source domain then can be effectively generalized to the target domain. However, the linear mapping adopted by the existing distribution matching methods has a limited ability to represent the complex transformation between source domain and target domain. In order to overcome this defect, we put forward Domain Adaptation based on Multi-Kernel learning (DAMK) method, which uses a nonlinear mapping. In order to satisfy the different requirements in the nonlinearity of the feature mapping of different datasets, DAMK uses the weighted sum of multiple mappings and optimizes the weighted coefficients. Because of the difficulty in obtaining nonlinear mapping directly, we adopt multi-kernel function instead of explicit expression to express the nonlinear mapping function. Experiments conducted on object recognition datasets and face recognition datasets show that DAMK is more effective that the existing linear mapping methods.","PeriodicalId":146783,"journal":{"name":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Domain Adaptation Based on Multi-Kernel Learning\",\"authors\":\"Liyan Han, Weixin Ling\",\"doi\":\"10.1145/3579654.3579735\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract: Domain adaptation is used to solve the inconsistency in distributions of training samples (source domain) and test samples (target domain) and improve the accuracies of traditional learning machines. Domain adaptation methods attempt to map the two domains to a latent space where the distributions of them are aligned. The model trained in the source domain then can be effectively generalized to the target domain. However, the linear mapping adopted by the existing distribution matching methods has a limited ability to represent the complex transformation between source domain and target domain. In order to overcome this defect, we put forward Domain Adaptation based on Multi-Kernel learning (DAMK) method, which uses a nonlinear mapping. In order to satisfy the different requirements in the nonlinearity of the feature mapping of different datasets, DAMK uses the weighted sum of multiple mappings and optimizes the weighted coefficients. Because of the difficulty in obtaining nonlinear mapping directly, we adopt multi-kernel function instead of explicit expression to express the nonlinear mapping function. 
Experiments conducted on object recognition datasets and face recognition datasets show that DAMK is more effective that the existing linear mapping methods.\",\"PeriodicalId\":146783,\"journal\":{\"name\":\"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence\",\"volume\":\"73 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579654.3579735\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579654.3579735","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
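The abstract describes combining several kernels as a weighted sum so the nonlinear mapping never has to be written explicitly. Below is a minimal NumPy sketch of that multi-kernel idea, assuming RBF base kernels and squared MMD as the distribution-matching criterion; the function names, bandwidth choices, and uniform starting weights are illustrative assumptions, not the paper's actual DAMK formulation or optimization procedure.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF base kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def multi_kernel(X, Y, gammas, betas):
    """Weighted sum of base kernels: k = sum_m beta_m * k_m (betas >= 0)."""
    return sum(b * rbf_kernel(X, Y, g) for b, g in zip(betas, gammas))

def mk_mmd2(Xs, Xt, gammas, betas):
    """Squared MMD between source Xs and target Xt under the combined kernel.

    A smaller value means the two domains look more alike in the
    (implicit) feature space induced by the weighted multi-kernel.
    """
    Kss = multi_kernel(Xs, Xs, gammas, betas)
    Ktt = multi_kernel(Xt, Xt, gammas, betas)
    Kst = multi_kernel(Xs, Xt, gammas, betas)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(100, 20))    # synthetic source features
    Xt = rng.normal(0.5, 1.2, size=(100, 20))    # shifted synthetic target features
    gammas = [0.01, 0.1, 1.0]                    # bandwidths of the base kernels
    betas = np.ones(len(gammas)) / len(gammas)   # uniform weights as a starting point
    print("MK-MMD^2:", mk_mmd2(Xs, Xt, gammas, betas))
```

In a full method along these lines, the weights `betas` would be treated as variables and optimized (subject to non-negativity and a normalization constraint) together with the learning objective, so that each dataset selects the degree of nonlinearity it needs; the uniform weights above only serve as a starting point for such an optimization.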