An End-to-End Network for Remote Sensing Imagery Semantic Segmentation via Joint Pixel- and Representation-Level Domain Adaptation

Impact factor: 4.0 · CAS tier 3 (Earth Science) · JCR Q2 (Engineering, Electrical & Electronic)
Lukui Shi, Ziyuan Wang, Bin Pan, Zhenwei Shi
DOI: 10.1109/lgrs.2020.3010591
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 18, pp. 1896-1900
Published: 2021-11-01 (Journal Article)
Citations: 18

Abstract

Supervised remote sensing image segmentation requires pixel-by-pixel annotation to obtain sufficient training data, which is a quite time-consuming process. In recent years, a series of domain-adaptation methods has been developed for image semantic segmentation. In general, these methods are trained on the source domain and then applied to the target domain, so that new data need not be labeled repeatedly. However, most domain-adaptation algorithms only try to align the source and target domains at either the pixel level or the representation level, while ignoring their cooperation. In this letter, we propose an unsupervised domain-adaptation method, the Joint Pixel- and Representation-level Network (JPRNet). The major novelty of JPRNet is that it achieves joint domain adaptation in an end-to-end manner, so as to avoid the multisource problem in remote sensing images. JPRNet is composed of two branches, each of which is a generative adversarial network (GAN). In one branch, pixel-level domain adaptation is implemented by style transfer with a CycleGAN, which transfers source-domain images to the target-domain style. In the other branch, representation-level domain adaptation is realized by adversarial learning between the transferred source-domain images and the target-domain images. Experimental results on public data sets demonstrate the effectiveness of JPRNet.
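The two-branch scheme the abstract describes can be sketched roughly as a single training step: a generator performs pixel-level style transfer on source images, a segmenter is supervised with the source labels on those transferred images, and a feature discriminator aligns their representations with target-domain features. This is an illustrative PyTorch sketch, not the authors' implementation — the toy module sizes, the loss weight, and the omission of CycleGAN's second generator and cycle-consistency loss are simplifications of our own.

```python
# Illustrative sketch (not the paper's code): one joint-adaptation
# training step. Module shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Pixel-level branch: maps source images toward the target style
    (a full CycleGAN would add a reverse generator and cycle loss)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Segmenter(nn.Module):
    """Shared segmentation network; also exposes its feature maps."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), f

class Discriminator(nn.Module):
    """Representation-level branch: judges whether feature maps
    come from the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))
    def forward(self, f):
        return self.net(f)

def train_step(G, S, D, opt_g, opt_d, x_src, y_src, x_tgt):
    # --- generator/segmenter update ---
    opt_g.zero_grad()
    x_fake = G(x_src)                       # pixel-level adaptation
    logits, f_fake = S(x_fake)
    seg_loss = F.cross_entropy(logits, y_src)
    d_out = D(f_fake)                       # fool D: features look "target"
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    loss = seg_loss + 0.01 * adv_loss       # 0.01 is an assumed weight
    loss.backward()
    opt_g.step()
    # --- discriminator update (target = real, transferred source = fake) ---
    opt_d.zero_grad()
    _, f_tgt = S(x_tgt)
    d_real = D(f_tgt.detach())
    d_fake = D(f_fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    opt_d.step()
    return loss, d_loss

G, S, D = Generator(), Segmenter(), Discriminator()
opt_g = torch.optim.Adam(list(G.parameters()) + list(S.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
x_src, x_tgt = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
y_src = torch.randint(0, 5, (2, 32, 32))
loss, d_loss = train_step(G, S, D, opt_g, opt_d, x_src, y_src, x_tgt)
```

Because both updates run in one step, the pixel-level and representation-level branches are optimized jointly end to end, which is the cooperation the letter argues single-level methods ignore.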
Journal
IEEE Geoscience and Remote Sensing Letters (Engineering & Technology – Geochemistry & Geophysics)
CiteScore: 7.60
Self-citation rate: 12.50%
Articles per year: 1113
Review time: 3.4 months
Journal description: IEEE Geoscience and Remote Sensing Letters (GRSL) is a monthly publication for short papers (maximum length 5 pages) addressing new ideas and formative concepts in remote sensing, as well as important new and timely results and concepts. Papers should relate to the theory, concepts, and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space, and the processing, interpretation, and dissemination of this information. The technical content of papers must be both new and significant. Experimental data must be complete and include sufficient description of experimental apparatus, methods, and relevant experimental conditions. GRSL encourages the incorporation of "extended objects" or "multimedia" such as animations to enhance the shorter papers.