Cross-Domain Object Classification Via Successive Subspace Alignment
Kecheng Chen, Hao Li, Hong Yan
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Published 2023-06-04 · DOI: 10.1109/ICASSP49357.2023.10096792
Recently, successive subspace learning (SSL)-based methods have been shown to be effective for visual object classification, offering modest data requirements and mathematically transparent, interpretable models. However, existing SSL-based methods rely heavily on data-centric subspace representations, which can degrade performance when there is a domain shift between the training (source-domain) and testing (target-domain) data. To address this limitation, we propose an effective successive subspace learning method built on existing SSL-based methods. Specifically, we introduce a novel linear transformation layer that aligns the eigenvectors of the SSL module between the source and target domains; this reduces the discrepancy between the two domains and yields better cross-domain performance. The effectiveness of the proposed method is demonstrated on the Office-Caltech-10 and Office-31 benchmark datasets, using features extracted from pre-trained deep neural networks as input.
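The abstract does not specify the exact form of the linear transformation layer, but the general idea of aligning per-domain eigenvector bases can be illustrated with classic subspace alignment: compute a PCA basis for each domain and learn a linear map that rotates the source basis onto the target basis. The sketch below is a minimal, hypothetical illustration of that generic technique, not the authors' SSL-specific layer; all function names are our own.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions of X (rows = samples), as a (features, d) matrix."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def subspace_alignment(Xs, Xt, d):
    """Project both domains into d-dim subspaces, with the source basis
    linearly mapped toward the target basis (Fernando et al.-style alignment)."""
    Ps = pca_basis(Xs, d)          # source eigenvector basis
    Pt = pca_basis(Xt, d)          # target eigenvector basis
    M = Ps.T @ Pt                  # linear map aligning source basis to target basis
    Zs = (Xs - Xs.mean(axis=0)) @ Ps @ M   # aligned source features
    Zt = (Xt - Xt.mean(axis=0)) @ Pt       # target features in their own subspace
    return Zs, Zt

# Toy example: synthetic "source" and shifted "target" features.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 20))        # e.g., deep features of source images
Xt = rng.normal(size=(80, 20)) + 0.5   # target domain with a mean shift
Zs, Zt = subspace_alignment(Xs, Xt, d=5)
```

A source-domain classifier can then be trained on `Zs` and applied to `Zt`, since both now live in (approximately) aligned subspaces; the paper's contribution is performing such an alignment inside an SSL pipeline rather than on raw features.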