{"title":"基于自我训练的无监督领域适应,用于遥感图像中的物体检测","authors":"Sihao Luo;Li Ma;Xiaoquan Yang;Dapeng Luo;Qian Du","doi":"10.1109/TGRS.2024.3457789","DOIUrl":null,"url":null,"abstract":"We propose a novel two-stage cross-domain self-training (CDST) framework for unsupervised domain adaptive object detection in remote sensing. The first stage introduces the generative adversarial network (GAN)-based domain transfer strategy to preliminarily mitigate the domain shift for higher quality initial pseudo-labeled images, which utilizes the CycleGAN to transfer source-domain images to match the target domain. Moreover, the key issue in tailoring the self-training (ST) to unsupervised domain adaptive detection lies in the quality of pseudo-labeled images. To select high-quality pseudo-labeled images under the domain-shift circumstance, we propose hard example selection-based self-training (HES-ST) with the three key steps: 1) detector-based example division (DED), which divides the detected examples into easy examples and hard ones according to their confidence level; 2) confidence and relation joint score (CRJS)-based hard example selection, which combines two reliability levels calculated, respectively, by the detector and relation network (RN) module to mine reliable examples; and 3) union example (UE)-based training image selection, which combines both easy and reliable hard examples to choose target-domain images that may contain fewer detection errors. The experimental results on several remote sensing datasets demonstrate the effectiveness of our proposed framework. Compared with the baseline detector trained on the source dataset, our approach consistently improves the detection performance on the target dataset by 15.7%–16.8% mean average precision (mAP) and achieves the state-of-the-art (SOTA) results under various domain adaptation scenarios.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-Training-Based Unsupervised Domain Adaptation for Object Detection in Remote Sensing Imagery\",\"authors\":\"Sihao Luo;Li Ma;Xiaoquan Yang;Dapeng Luo;Qian Du\",\"doi\":\"10.1109/TGRS.2024.3457789\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a novel two-stage cross-domain self-training (CDST) framework for unsupervised domain adaptive object detection in remote sensing. The first stage introduces the generative adversarial network (GAN)-based domain transfer strategy to preliminarily mitigate the domain shift for higher quality initial pseudo-labeled images, which utilizes the CycleGAN to transfer source-domain images to match the target domain. Moreover, the key issue in tailoring the self-training (ST) to unsupervised domain adaptive detection lies in the quality of pseudo-labeled images. 
To select high-quality pseudo-labeled images under the domain-shift circumstance, we propose hard example selection-based self-training (HES-ST) with the three key steps: 1) detector-based example division (DED), which divides the detected examples into easy examples and hard ones according to their confidence level; 2) confidence and relation joint score (CRJS)-based hard example selection, which combines two reliability levels calculated, respectively, by the detector and relation network (RN) module to mine reliable examples; and 3) union example (UE)-based training image selection, which combines both easy and reliable hard examples to choose target-domain images that may contain fewer detection errors. The experimental results on several remote sensing datasets demonstrate the effectiveness of our proposed framework. Compared with the baseline detector trained on the source dataset, our approach consistently improves the detection performance on the target dataset by 15.7%–16.8% mean average precision (mAP) and achieves the state-of-the-art (SOTA) results under various domain adaptation scenarios.\",\"PeriodicalId\":13213,\"journal\":{\"name\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10677431/\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10677431/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Self-Training-Based Unsupervised Domain Adaptation for Object Detection in Remote Sensing Imagery
We propose a novel two-stage cross-domain self-training (CDST) framework for unsupervised domain adaptive object detection in remote sensing. The first stage introduces a generative adversarial network (GAN)-based domain transfer strategy, which uses CycleGAN to transfer source-domain images so that they match the target domain, preliminarily mitigating the domain shift and yielding higher-quality initial pseudo-labeled images. Moreover, the key issue in tailoring self-training (ST) to unsupervised domain adaptive detection lies in the quality of the pseudo-labeled images. To select high-quality pseudo-labeled images under domain shift, we propose hard example selection-based self-training (HES-ST), which consists of three key steps: 1) detector-based example division (DED), which divides the detected examples into easy and hard ones according to their confidence level; 2) confidence and relation joint score (CRJS)-based hard example selection, which combines two reliability scores, computed by the detector and the relation network (RN) module, respectively, to mine reliable hard examples; and 3) union example (UE)-based training image selection, which combines easy and reliable hard examples to choose target-domain images that are likely to contain fewer detection errors. Experimental results on several remote sensing datasets demonstrate the effectiveness of the proposed framework. Compared with the baseline detector trained on the source dataset, our approach consistently improves detection performance on the target dataset by 15.7%–16.8% mean average precision (mAP) and achieves state-of-the-art (SOTA) results under various domain adaptation scenarios.
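As a rough illustration of the HES-ST selection logic summarized in the abstract, the sketch below applies the three steps to the detections of a single target-domain image. The thresholds (tau_easy, tau_hard, tau_joint), the linear weighting alpha in the joint score, and the image-level acceptance criterion are illustrative assumptions, not values or formulas reported in the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates
    det_conf: float                          # confidence score from the detector
    rn_conf: float                           # reliability score from the relation network (RN) module


def keep_for_self_training(dets: List[Detection],
                           tau_easy: float = 0.9,
                           tau_hard: float = 0.5,
                           alpha: float = 0.5,
                           tau_joint: float = 0.7) -> bool:
    """Decide whether a target-domain image is kept as a pseudo-labeled training image.

    Illustrative only: the thresholds, the linear CRJS combination, and the
    acceptance rule are assumptions made for this sketch.
    """
    # 1) Detector-based example division (DED): split detections into easy and
    #    hard examples by detector confidence.
    easy = [d for d in dets if d.det_conf >= tau_easy]
    hard = [d for d in dets if tau_hard <= d.det_conf < tau_easy]

    # 2) CRJS-based hard example selection: combine detector and RN reliability
    #    scores (here a simple weighted sum) to mine reliable hard examples.
    reliable_hard = [d for d in hard
                     if alpha * d.det_conf + (1 - alpha) * d.rn_conf >= tau_joint]

    # 3) Union example (UE)-based image selection: keep the image only if it
    #    contributes at least one example and none of its hard examples was
    #    rejected as unreliable (an assumed criterion; the paper only states
    #    that easy and reliable hard examples are combined for selection).
    union = easy + reliable_hard
    return len(union) > 0 and len(reliable_hard) == len(hard)
```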
Journal description:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.