Adaptive Multitype Contrastive Views Generation for Remote Sensing Image Semantic Segmentation

Cheng Shi; Peiwen Han; Minghua Zhao; Li Fang; Qiguang Miao; Chi-Man Pun

IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-13, published January 10, 2025. DOI: 10.1109/TGRS.2024.3525133. Available: https://ieeexplore.ieee.org/document/10838338/
Self-supervised contrastive learning is a powerful pretraining framework for learning invariant features from different views of remote sensing images; the performance of contrastive learning therefore depends heavily on how views are generated. Views are currently produced by applying image transformations whose types and parameters must be hand-crafted, so the diversity and discriminability of the generated views cannot be guaranteed. To address this, we propose a multitype view optimization method that optimizes these transformations. We formulate contrastive learning as a min-max optimization problem in which the transformation parameters are optimized by maximizing the contrastive loss: the optimized transformations push positive sample pairs apart and pull negative sample pairs together, yielding harder views for the encoder. Unlike existing adversarial view generation methods, our method can optimize both photometric and geometric transformations. For remote sensing images, geometric transformations are especially important for view generation, yet existing view optimization methods cannot handle them. We consider hue, saturation, brightness, contrast, and geometric rotation transformations in contrastive learning, and evaluate the optimized views on the downstream remote sensing image semantic segmentation task. Extensive experiments are carried out on three remote sensing image segmentation datasets: the ISPRS Potsdam dataset, the ISPRS Vaihingen dataset, and the LoveDA dataset. The results show that the learned views offer clear advantages over hand-crafted views and other optimized views. The code associated with this article has been released and can be accessed at https://github.com/AAAA-CS/AMView.
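To make the min-max formulation concrete, below is a minimal PyTorch sketch of the idea, written under stated assumptions rather than taken from the authors' released code (which lives at https://github.com/AAAA-CS/AMView): each view is produced by a module with learnable brightness, contrast, saturation, and rotation parameters, an inner step raises the contrastive (NT-Xent) loss with respect to those parameters, and an outer step lowers it with respect to the encoder. The toy encoder, the hyperparameters, and the omission of the hue transform (a differentiable hue shift needs an HSV round-trip) are simplifications for illustration.

```python
# A minimal sketch of min-max contrastive view optimization, under the
# assumptions stated above. NOT the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableView(nn.Module):
    """Differentiable photometric + rotation transform with learnable parameters."""

    def __init__(self):
        super().__init__()
        self.brightness = nn.Parameter(torch.zeros(1))  # additive shift
        self.contrast = nn.Parameter(torch.ones(1))     # gain around the image mean
        self.saturation = nn.Parameter(torch.ones(1))   # blend with grayscale
        self.angle = nn.Parameter(torch.zeros(1))       # rotation angle in radians

    def forward(self, x):  # x: (B, 3, H, W), values in [0, 1]
        gray = x.mean(dim=1, keepdim=True)
        x = gray + self.saturation * (x - gray)          # saturation
        mu = x.mean(dim=(1, 2, 3), keepdim=True)
        x = mu + self.contrast * (x - mu)                # contrast
        x = (x + self.brightness).clamp(0.0, 1.0)        # brightness
        # Rotation via a differentiable affine grid, so gradients reach `angle`.
        cos, sin, zero = self.angle.cos(), self.angle.sin(), self.angle.new_zeros(1)
        theta = torch.stack([torch.cat([cos, -sin, zero]),
                             torch.cat([sin, cos, zero])]).expand(x.size(0), -1, -1)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


def info_nce(z1, z2, tau=0.5):
    """NT-Xent loss: each view's positive is the other view of the same image."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                          float('-inf'))  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


encoder = nn.Sequential(  # toy stand-in for the pretraining backbone
    nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128))
view1, view2 = LearnableView(), LearnableView()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_view = torch.optim.Adam([*view1.parameters(), *view2.parameters()], lr=1e-2)

images = torch.rand(8, 3, 64, 64)  # stand-in for a batch of remote sensing tiles
for step in range(100):
    # Max step: harden the views by raising the contrastive loss
    # (positives pushed apart, negatives pulled together).
    adv_loss = -info_nce(encoder(view1(images)), encoder(view2(images)))
    opt_view.zero_grad(); adv_loss.backward(); opt_view.step()
    # Min step: train the encoder on the hardened views.
    loss = info_nce(encoder(view1(images)), encoder(view2(images)))
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()
```

A full implementation would also project the transformation parameters back into valid ranges after each adversarial step (e.g., keeping the saturation gain nonnegative), could sample per-image rather than shared parameters, and would add the hue transform via a differentiable HSV conversion such as those provided by libraries like Kornia.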
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.