SAM-GAN: Supervised Learning-Based Aerial Image-to-Map Translation via Generative Adversarial Networks

Jian Xu, Xiaowen Zhou, Chaolin Han, Bing Dong, Hongwei Li

ISPRS International Journal of Geo-Information, published 2023-04-07. DOI: 10.3390/ijgi12040159
Accurately translating aerial imagery into maps is a valuable and challenging direction in cartography, as it generates maps without the vector data that traditional mapping methods require. Recent advances in image translation based on generative adversarial networks have driven rapid progress in aerial image-to-map translation, yet the generated results still fall short in quality, accuracy, and visual impact. This paper proposes a supervised model, SAM-GAN, based on generative adversarial networks (GANs) to improve the performance of aerial image-to-map translation. The model introduces a new generator and a multi-scale discriminator: the generator is a conditional GAN that extracts content and style spaces from aerial images and maps and learns to generalize the patterns of aerial image-to-map style transformation. We introduce an image style loss and a topological consistency loss to improve the model's pixel-level accuracy and topological performance. Using the Maps dataset and established evaluation metrics, a comprehensive qualitative and quantitative comparison is made between SAM-GAN and previous aerial image-to-map translation methods. Experiments show that SAM-GAN outperforms existing methods in both quantitative and qualitative results.
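The abstract names two auxiliary objectives, an image style loss and a topological consistency loss, without giving their definitions. The sketch below is a minimal, hypothetical PyTorch rendering of how such terms are commonly built: the style loss matches Gram matrices of pretrained VGG features (a standard formulation, not necessarily the paper's), and the topological term is approximated here by an L1 penalty on Sobel edge maps as an illustrative stand-in. The pix2pix-style adversarial + L1 core and all lambda weights are assumptions, not values from the paper.

```python
# Hypothetical sketch of SAM-GAN-style auxiliary losses. The paper's exact
# formulations are not given in the abstract; every name and weight below
# is illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor for the style loss (input normalization
# omitted for brevity).
_vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise feature correlations, normalized by feature size."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(fake_map: torch.Tensor, real_map: torch.Tensor) -> torch.Tensor:
    """Match Gram matrices of generated and ground-truth map tiles."""
    return F.l1_loss(gram_matrix(_vgg(fake_map)), gram_matrix(_vgg(real_map)))

def topo_consistency_loss(fake_map: torch.Tensor, real_map: torch.Tensor) -> torch.Tensor:
    """Illustrative proxy only: penalize differences between Sobel edge maps,
    encouraging roads and boundaries in the generated map to follow the
    connectivity of the reference map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    k = torch.stack([kx, kx.t()]).unsqueeze(1)  # (2, 1, 3, 3) Sobel pair

    def edges(img: torch.Tensor) -> torch.Tensor:
        gray = img.mean(dim=1, keepdim=True)    # collapse RGB to one channel
        return F.conv2d(gray, k.to(img.device), padding=1)

    return F.l1_loss(edges(fake_map), edges(real_map))

def generator_loss(d_fake_logits, fake_map, real_map,
                   lam_l1=100.0, lam_style=1.0, lam_topo=1.0):
    """Total generator objective, assuming a pix2pix-style adversarial + L1
    core plus the two auxiliary terms; the lambdas are hypothetical."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return (adv
            + lam_l1 * F.l1_loss(fake_map, real_map)
            + lam_style * style_loss(fake_map, real_map)
            + lam_topo * topo_consistency_loss(fake_map, real_map))
```

Under this reading, the style term pushes the generated tile toward the cartographic rendering conventions of the target map style, while the edge-based term is one cheap way to reward preserved road and boundary structure; a published method might instead use a learned or graph-based topology measure.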