{"title":"利用多作物基因组(MultiCropGAN)进行标签空间差异的跨域早期作物绘图","authors":"Yiqun Wang, Hui Huang, Radu State","doi":"10.5194/isprs-annals-x-1-2024-241-2024","DOIUrl":null,"url":null,"abstract":"Abstract. Mapping target crops before the harvest season for regions lacking crop-specific ground truth is critical for global food security. Utilizing multispectral remote sensing and domain adaptation methods, prior studies strive to produce precise crop maps in these regions (target domain) with the help of the crop-specific labelled remote sensing data from the source regions (source domain). However, existing approaches assume identical label spaces across those domains, a challenge often unmet in reality, necessitating a more adaptable solution. This paper introduces the Multiple Crop Mapping Generative Adversarial Neural Network (MultiCropGAN) model, comprising a generator, discriminator, and classifier. The generator transforms target domain data into the source domain, employing identity losses to retain the characteristics of the target data. The discriminator aims to distinguish them and shares the structure and weights with the classifier, which locates crops in the target domain using the generator’s output. This model’s novel capability lies in locating target crops within the target domain, overcoming differences in crop type label spaces between the target and source domains. In experiments, MultiCropGAN is benchmarked against various baseline methods. Notably, when facing differing label spaces, MultiCropGAN significantly outperforms other baseline methods. The Overall Accuracy is improved by about 10%.\n","PeriodicalId":508124,"journal":{"name":"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences","volume":" 17","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross Domain Early Crop Mapping with Label Spaces Discrepancies using MultiCropGAN\",\"authors\":\"Yiqun Wang, Hui Huang, Radu State\",\"doi\":\"10.5194/isprs-annals-x-1-2024-241-2024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract. Mapping target crops before the harvest season for regions lacking crop-specific ground truth is critical for global food security. Utilizing multispectral remote sensing and domain adaptation methods, prior studies strive to produce precise crop maps in these regions (target domain) with the help of the crop-specific labelled remote sensing data from the source regions (source domain). However, existing approaches assume identical label spaces across those domains, a challenge often unmet in reality, necessitating a more adaptable solution. This paper introduces the Multiple Crop Mapping Generative Adversarial Neural Network (MultiCropGAN) model, comprising a generator, discriminator, and classifier. The generator transforms target domain data into the source domain, employing identity losses to retain the characteristics of the target data. The discriminator aims to distinguish them and shares the structure and weights with the classifier, which locates crops in the target domain using the generator’s output. This model’s novel capability lies in locating target crops within the target domain, overcoming differences in crop type label spaces between the target and source domains. In experiments, MultiCropGAN is benchmarked against various baseline methods. 
Notably, when facing differing label spaces, MultiCropGAN significantly outperforms other baseline methods. The Overall Accuracy is improved by about 10%.\\n\",\"PeriodicalId\":508124,\"journal\":{\"name\":\"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences\",\"volume\":\" 17\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5194/isprs-annals-x-1-2024-241-2024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5194/isprs-annals-x-1-2024-241-2024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cross Domain Early Crop Mapping with Label Spaces Discrepancies using MultiCropGAN
Abstract. Mapping target crops before the harvest season in regions lacking crop-specific ground truth is critical for global food security. Using multispectral remote sensing and domain adaptation methods, prior studies strive to produce precise crop maps for these regions (target domain) with the help of crop-specific labelled remote sensing data from source regions (source domain). However, existing approaches assume identical label spaces across the two domains, an assumption that often does not hold in practice and that necessitates a more adaptable solution. This paper introduces the Multiple Crop Mapping Generative Adversarial Neural Network (MultiCropGAN) model, comprising a generator, a discriminator, and a classifier. The generator transforms target-domain data into the source domain, employing identity losses to retain the characteristics of the target data. The discriminator aims to distinguish the generated data from real source-domain data and shares its structure and weights with the classifier, which locates crops in the target domain using the generator's output. The model's novel capability lies in locating target crops within the target domain while overcoming differences in crop-type label spaces between the target and source domains. In experiments, MultiCropGAN is benchmarked against various baseline methods. Notably, when label spaces differ, MultiCropGAN significantly outperforms the baselines, improving overall accuracy by about 10%.
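The abstract describes the model's structure only at a high level. The following is a minimal PyTorch sketch of how the three components and the identity loss could fit together; all module names, layer sizes, and the training details here are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a generator plus a weight-sharing discriminator/classifier,
# as outlined in the abstract. Layer sizes and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps target-domain spectral features toward the source-domain distribution."""
    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands),
        )

    def forward(self, x):
        return self.net(x)

class DiscriminatorClassifier(nn.Module):
    """Shared backbone with two heads: real/generated (discriminator) and crop type (classifier)."""
    def __init__(self, n_bands: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.disc_head = nn.Linear(hidden, 1)          # source vs. generated
        self.cls_head = nn.Linear(hidden, n_classes)   # crop-type logits

    def forward(self, x):
        h = self.backbone(x)
        return self.disc_head(h), self.cls_head(h)

if __name__ == "__main__":
    n_bands, n_classes = 10, 5          # hypothetical: 10 spectral features, 5 crop classes
    G = Generator(n_bands)
    DC = DiscriminatorClassifier(n_bands, n_classes)

    target_x = torch.randn(32, n_bands)  # unlabelled target-domain pixels
    source_x = torch.randn(32, n_bands)  # labelled source-domain pixels

    fake_source = G(target_x)
    # Identity loss: passing source-domain data through G should leave it unchanged,
    # encouraging the generator to preserve the characteristics of its input.
    identity_loss = F.l1_loss(G(source_x), source_x)

    d_fake, cls_fake = DC(fake_source)
    d_real, _ = DC(source_x)
    # Adversarial term for the generator: try to make generated data look like source data.
    adv_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    print(identity_loss.item(), adv_loss.item())
```

In this sketch the discriminator and classifier heads sit on one shared backbone, mirroring the abstract's statement that the two share structure and weights; the classifier head would then be applied to the generator's output to locate crops in the target domain.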