Zero-Shot Domain Adaptation for SAR Target Recognition Based on Cooperative Learning of Domain Alignment and Task Alignment

Guo Chen; Siqian Zhang; Zheng Zhou; Lingjun Zhao; Gangyao Kuang
IEEE Transactions on Radar Systems, vol. 3, pp. 890-904. Published 2025-06-17. DOI: 10.1109/TRS.2025.3580543. https://ieeexplore.ieee.org/document/11037615/
The objective of zero-shot synthetic aperture radar (SAR) image target recognition is to identify novel, unobserved targets for which no training samples are available. Zero-shot recognition of SAR targets therefore merits investigation, and using electromagnetically simulated images as training data is a viable approach. However, networks trained on simulated images generalize poorly to real images because of inherent discrepancies between the distributions of the simulated and real domains. Most existing research employs unsupervised domain adaptation to address such cross-domain recognition problems, but these methods are not applicable in zero-shot scenarios because they require unlabeled real data from the unknown classes during training. To address the challenging problem of zero-shot cross-domain recognition of SAR targets, a zero-shot domain adaptation (ZSDA) method for SAR target recognition based on cooperative learning of domain alignment and task alignment is proposed. Specifically, domain adaptation is performed using simulated and real data from the seen classes, in a way that allows the learned alignment to generalize to the unseen classes. First, a transfer-weighted domain adversarial learning method is proposed to achieve a more robust domain alignment of the seen classes. Second, a classification-based adversarial learning method is proposed to achieve task alignment between the seen and unseen classes within the two domains. Finally, a feature fusion refinement module is proposed for the cooperative learning of the two alignment processes; under this cooperative learning, task alignment enables the domain alignment learned from the seen classes to transfer to the unseen classes.
Experimental results demonstrate the efficacy of the proposed method for SAR zero-shot cross-domain recognition: it achieves recognition accuracies of 91.68%, 85.83%, 83.90%, and 77.73% on real images of three unseen classes across four distinct experimental groups, surpassing current state-of-the-art methods.
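To make the "domain adversarial learning" ingredient concrete, the sketch below shows the standard gradient-reversal layer on which domain-adversarial alignment is typically built: features pass through unchanged in the forward direction, while gradients from the domain classifier are negated (and scaled) on the way back, so the feature extractor learns domain-invariant features. This is a minimal illustrative sketch, not the paper's implementation; the class name, the `lambda_` scale, and the plain-list interface are assumptions for illustration, and the paper's transfer weighting of samples is not reproduced here.

```python
class GradientReversal:
    """Identity in the forward pass; negated, scaled gradient in the
    backward pass. Placed between the feature extractor and the domain
    classifier, it trains the extractor to confuse the domain classifier
    while the classifier itself is trained to separate the two domains."""

    def __init__(self, lambda_: float = 1.0):
        # lambda_ controls how strongly the adversarial signal is applied
        self.lambda_ = lambda_

    def forward(self, features):
        # forward pass: features are untouched
        return features

    def backward(self, grad):
        # backward pass: reverse (and scale) the gradient flowing back
        # to the feature extractor
        return [-self.lambda_ * g for g in grad]


grl = GradientReversal(lambda_=0.5)
features = [0.2, -1.3, 0.7]
print(grl.forward(features))            # [0.2, -1.3, 0.7]
print(grl.backward([1.0, -2.0, 0.5]))   # [-0.5, 1.0, -0.25]
```

In a full pipeline, the domain classifier's loss would be backpropagated through this layer into the feature extractor; the paper's transfer-weighted variant would additionally weight each sample's contribution to that adversarial loss.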