{"title":"基于伪暹罗对抗生成网络的多模态遥感图像对抗示例生成","authors":"Haifeng Li;Hang Cao;Jiahao Cui;Jing Geng","doi":"10.1109/JSTARS.2025.3602278","DOIUrl":null,"url":null,"abstract":"In the field of remote sensing, the increasing diversity of remote sensing image modalities has made the integration of multimodal remote sensing image information a prevailing trend to increase classification accuracy. Concurrently, the study of adversarial samples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. However, existing adversarial attack strategies designed for single-modal data often fail to extend effectively to multimodal adversarial attack tasks, mainly due to the following challenges: Multimodal correlation: Since multimodal data provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; directional differences in multimodal adversarial samples: The adversarial perturbation directions exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to simultaneously produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating the conflicts between multimodal perturbations and improving attack effectiveness on multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. 
Specifically, we show that our proposed pseudo-Siamese adversarial attack method considerably reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, thereby validating the efficacy of our approach.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"24588-24601"},"PeriodicalIF":5.3000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11134792","citationCount":"0","resultStr":"{\"title\":\"Adversarial Example Generation With Pseudo-Siamese Adversarial Generative Networks for Multimodal Remote Sensing Images\",\"authors\":\"Haifeng Li;Hang Cao;Jiahao Cui;Jing Geng\",\"doi\":\"10.1109/JSTARS.2025.3602278\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of remote sensing, the increasing diversity of remote sensing image modalities has made the integration of multimodal remote sensing image information a prevailing trend to increase classification accuracy. Concurrently, the study of adversarial samples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. However, existing adversarial attack strategies designed for single-modal data often fail to extend effectively to multimodal adversarial attack tasks, mainly due to the following challenges: Multimodal correlation: Since multimodal data provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; directional differences in multimodal adversarial samples: The adversarial perturbation directions exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. 
To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to simultaneously produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating the conflicts between multimodal perturbations and improving attack effectiveness on multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. Specifically, we show that our proposed pseudo-Siamese adversarial attack method considerably reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, thereby validating the efficacy of our approach.\",\"PeriodicalId\":13116,\"journal\":{\"name\":\"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing\",\"volume\":\"18 \",\"pages\":\"24588-24601\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11134792\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11134792/\",\"RegionNum\":2,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & 
ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11134792/","RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Adversarial Example Generation With Pseudo-Siamese Adversarial Generative Networks for Multimodal Remote Sensing Images
In the field of remote sensing, the increasing diversity of image modalities has made the integration of multimodal remote sensing information a prevailing trend for improving classification accuracy. Concurrently, the study of adversarial examples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. However, existing adversarial attack strategies designed for single-modality data often fail to extend effectively to multimodal attack tasks, mainly due to two challenges: (1) multimodal correlation: because the modalities provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; and (2) directional differences in multimodal adversarial examples: the adversarial perturbation directions across modalities exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating conflicts between multimodal perturbations and improving attack effectiveness against multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. Specifically, we show that our proposed pseudo-Siamese adversarial attack method reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, validating the efficacy of our approach.
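The abstract describes the collaborative adversarial loss only at a high level: a term that penalizes disagreement between the perturbation directions produced by the two modality-specific generators. A minimal sketch of one plausible directional-consistency term, using cosine similarity between flattened perturbations, is shown below. The function name, the cosine-based formulation, and the epsilon guard are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def collaborative_consistency_loss(delta_a, delta_b, eps=1e-8):
    """Hypothetical directional-consistency term (not the paper's exact loss).

    Returns 1 - cos(delta_a, delta_b) over the flattened perturbations:
    ~0 when the two modalities' perturbation directions agree,
    ~2 when they point in opposite (conflicting) directions.
    """
    a = np.asarray(delta_a, dtype=float).ravel()
    b = np.asarray(delta_b, dtype=float).ravel()
    # eps guards against division by zero for all-zero perturbations
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - float(cos)
```

In a full training loop, a term like this would presumably be added to the standard attack objective for each modality, so that the generators are jointly pushed toward perturbations that reinforce rather than cancel each other at the fusion stage.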
Journal introduction:
The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" areas encompass the societal benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these, including biodiversity, health, and climate, are areas not traditionally addressed in the IEEE context. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests, serving present members in new ways and expanding IEEE's visibility into new areas.