Adversarial Example Generation With Pseudo-Siamese Adversarial Generative Networks for Multimodal Remote Sensing Images

IF 5.3 · JCR Q1 (Engineering, Electrical & Electronic) · CAS Tier 2 (Earth Science)
Haifeng Li;Hang Cao;Jiahao Cui;Jing Geng
{"title":"基于伪暹罗对抗生成网络的多模态遥感图像对抗示例生成","authors":"Haifeng Li;Hang Cao;Jiahao Cui;Jing Geng","doi":"10.1109/JSTARS.2025.3602278","DOIUrl":null,"url":null,"abstract":"In the field of remote sensing, the increasing diversity of remote sensing image modalities has made the integration of multimodal remote sensing image information a prevailing trend to increase classification accuracy. Concurrently, the study of adversarial samples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. However, existing adversarial attack strategies designed for single-modal data often fail to extend effectively to multimodal adversarial attack tasks, mainly due to the following challenges: Multimodal correlation: Since multimodal data provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; directional differences in multimodal adversarial samples: The adversarial perturbation directions exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to simultaneously produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating the conflicts between multimodal perturbations and improving attack effectiveness on multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. Specifically, we show that our proposed pseudo-Siamese adversarial attack method considerably reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, thereby validating the efficacy of our approach.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"24588-24601"},"PeriodicalIF":5.3000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11134792","citationCount":"0","resultStr":"{\"title\":\"Adversarial Example Generation With Pseudo-Siamese Adversarial Generative Networks for Multimodal Remote Sensing Images\",\"authors\":\"Haifeng Li;Hang Cao;Jiahao Cui;Jing Geng\",\"doi\":\"10.1109/JSTARS.2025.3602278\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of remote sensing, the increasing diversity of remote sensing image modalities has made the integration of multimodal remote sensing image information a prevailing trend to increase classification accuracy. Concurrently, the study of adversarial samples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. 
However, existing adversarial attack strategies designed for single-modal data often fail to extend effectively to multimodal adversarial attack tasks, mainly due to the following challenges: Multimodal correlation: Since multimodal data provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; directional differences in multimodal adversarial samples: The adversarial perturbation directions exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to simultaneously produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating the conflicts between multimodal perturbations and improving attack effectiveness on multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. Specifically, we show that our proposed pseudo-Siamese adversarial attack method considerably reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, thereby validating the efficacy of our approach.\",\"PeriodicalId\":13116,\"journal\":{\"name\":\"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing\",\"volume\":\"18 \",\"pages\":\"24588-24601\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11134792\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11134792/\",\"RegionNum\":2,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11134792/","RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In the field of remote sensing, the increasing diversity of remote sensing image modalities has made the integration of multimodal remote sensing image information a prevailing trend to increase classification accuracy. Concurrently, the study of adversarial samples for multimodal remote sensing images has emerged as a crucial area for enhancing network robustness. However, existing adversarial attack strategies designed for single-modal data often fail to extend effectively to multimodal adversarial attack tasks, mainly due to the following challenges: Multimodal correlation: Since multimodal data provide complementary auxiliary information, attacking a single modality alone cannot disrupt the correlated features across modalities; directional differences in multimodal adversarial samples: The adversarial perturbation directions exhibit substantial discrepancies and conflicts, which considerably diminish the overall attack efficacy. To address the first challenge, we propose a pseudo-Siamese generative adversarial network that employs modality-specific generators to simultaneously produce perturbations for each modality from the latent feature space, enabling simultaneous attacks on multiple modalities. To address the second challenge, we introduce a collaborative adversarial loss that enforces consistency in the perturbation directions across modalities, thereby mitigating the conflicts between multimodal perturbations and improving attack effectiveness on multimodal classification networks. Extensive experiments demonstrate the vulnerability of multimodal fusion models to adversarial attacks, even when only a single modality is attacked. Specifically, we show that our proposed pseudo-Siamese adversarial attack method considerably reduces the overall accuracy of the U-Net and Deeplabv3 models from 81.92% and 82.20% to 0.22% and 4.16%, respectively, thereby validating the efficacy of our approach.
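The paper's code is not reproduced here; the following is a minimal PyTorch sketch of the two ideas the abstract describes: modality-specific generators that produce bounded perturbations for each input (pseudo-Siamese, i.e., identical architecture with unshared weights), and a collaborative loss that aligns the perturbation directions across modalities. All names (`PerturbationGenerator`, `collaborative_loss`, `attack_step`), the cosine-similarity formulation, the L-infinity bound `eps`, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code): two modality-specific
# generators produce bounded perturbations for the two inputs, and a
# collaborative term encourages their directions to agree before the perturbed
# inputs are fed to a frozen multimodal classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    """Small conv net mapping an image to an L_inf-bounded perturbation."""

    def __init__(self, in_ch: int, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, in_ch, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the perturbation inside [-eps, eps]
        return self.eps * torch.tanh(self.net(x))


def collaborative_loss(delta_a, delta_b):
    """One plausible reading of the collaborative adversarial loss:
    maximize cosine similarity between the flattened perturbations
    so their directions do not conflict."""
    cos = F.cosine_similarity(delta_a.flatten(1), delta_b.flatten(1), dim=1)
    return (1.0 - cos).mean()


def attack_step(gen_a, gen_b, fusion_model, x_a, x_b, y, lam=0.1):
    """One generator-training step against a frozen multimodal classifier."""
    delta_a, delta_b = gen_a(x_a), gen_b(x_b)
    logits = fusion_model(x_a + delta_a, x_b + delta_b)
    adv_loss = -F.cross_entropy(logits, y)            # push predictions away from y
    coop_loss = collaborative_loss(delta_a, delta_b)  # align perturbation directions
    return adv_loss + lam * coop_loss
```

In this reading, the cosine term penalizes perturbations whose directions disagree, which is one way to realize the "consistency in the perturbation directions" mentioned in the abstract; the loss actually used in the paper may differ.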
Source journal
CiteScore: 9.30
Self-citation rate: 10.90%
Articles published: 563
Review time: 4.7 months
Journal overview: The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" areas encompass the societal-benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these are areas not traditionally addressed in the IEEE context, including biodiversity, health, and climate. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests that serves present members in new ways and expands the IEEE's visibility into new areas.