S3OIL: Semi-Supervised SAR-to-Optical Image Translation via Multi-Scale and Cross-Set Matching.

Authors: Xi Yang, Haoyuan Shi, Ziyun Li, Maoying Qiao, Fei Gao, Nannan Wang
Journal: IEEE Transactions on Image Processing (Impact Factor 13.7; CAS Region 1, Computer Science; JCR Q1, Computer Science, Artificial Intelligence)
DOI: 10.1109/tip.2025.3616576
Publication date: 2025-10-07
Citations: 0

Abstract

Image-to-image translation has achieved great success, but still faces the significant challenge of limited paired data, particularly in translating Synthetic Aperture Radar (SAR) images to optical images. Furthermore, most existing semi-supervised methods place limited emphasis on leveraging the data distribution. To address these challenges, we propose a Semi-Supervised SAR-to-Optical Image Translation (S3OIL) method that achieves high-quality image generation using minimal paired data and extensive unpaired data while strategically exploiting the data distribution. To this end, we first introduce a Cross-Set Alignment Matching (CAM) mechanism to create local correspondences between the generated results of paired and unpaired data, ensuring cross-set consistency. In addition, for unpaired data, we apply weak and strong perturbations and establish intra-set Multi-Scale Matching (MSM) constraints. For paired data, intra-modal semantic consistency (ISC) is presented to ensure alignment with the ground truth. Finally, we propose local and global cross-modal semantic consistency (CSC) to boost structural identity during translation. We conduct extensive experiments on SAR-to-optical datasets and an additional sketch-to-anime task, demonstrating that S3OIL delivers competitive performance compared to state-of-the-art unsupervised, supervised, and semi-supervised methods, both quantitatively and qualitatively. Ablation studies further reveal that S3OIL preserves both the semantic content and the structural integrity of the generated images. Our code is available at: https://github.com/XduShi/SOIL.
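As a rough illustration of how such a semi-supervised objective can be assembled, the sketch below combines a supervised reconstruction term on paired data with a weak/strong-perturbation consistency term on unpaired data and a simple cross-set statistic-matching term. Everything here (the tiny generator, the perturbations, the multiscale_l1 helper, and the loss weights) is a hypothetical stand-in for the components named in the abstract, not the authors' released implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of one semi-supervised training step (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTranslator(nn.Module):
    """Placeholder encoder-decoder standing in for the SAR-to-optical generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def weak_perturb(x):
    # Weak view: small additive noise.
    return x + 0.01 * torch.randn_like(x)

def strong_perturb(x):
    # Strong view: heavier noise plus random masking.
    mask = (torch.rand_like(x) > 0.1).float()
    return mask * (x + 0.1 * torch.randn_like(x))

def multiscale_l1(a, b, scales=(1, 2, 4)):
    # Intra-set multi-scale matching: compare outputs at several resolutions.
    loss = 0.0
    for s in scales:
        loss = loss + F.l1_loss(F.avg_pool2d(a, s), F.avg_pool2d(b, s))
    return loss / len(scales)

G = TinyTranslator()
optimizer = torch.optim.Adam(G.parameters(), lr=2e-4)

sar_paired   = torch.randn(2, 1, 64, 64)  # paired SAR inputs
optical_gt   = torch.rand(2, 3, 64, 64)   # paired optical ground truth
sar_unpaired = torch.randn(2, 1, 64, 64)  # unpaired SAR inputs

# Supervised term on paired data (crude proxy for the ISC/CSC objectives).
pred_paired = G(sar_paired)
loss_sup = F.l1_loss(pred_paired, optical_gt)

# Intra-set consistency on unpaired data: strong view matches the weak view.
pred_weak = G(weak_perturb(sar_unpaired))
pred_strong = G(strong_perturb(sar_unpaired))
loss_msm = multiscale_l1(pred_strong, pred_weak.detach())

# Cross-set matching: pull unpaired predictions toward paired-batch statistics
# (a crude proxy for the local correspondences built by CAM).
loss_cam = F.l1_loss(pred_weak.mean(dim=(2, 3)),
                     pred_paired.detach().mean(dim=(2, 3)))

loss = loss_sup + 0.5 * loss_msm + 0.1 * loss_cam
optimizer.zero_grad()
loss.backward()
optimizer.step()
```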
Source Journal

IEEE Transactions on Image Processing (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 20.90
Self-citation rate: 6.60%
Articles published: 774
Review time: 7.6 months
Journal description: The IEEE Transactions on Image Processing delves into groundbreaking theories, algorithms, and structures concerning the generation, acquisition, manipulation, transmission, scrutiny, and presentation of images, video, and multidimensional signals across diverse applications. Topics span mathematical, statistical, and perceptual aspects, encompassing modeling, representation, formation, coding, filtering, enhancement, restoration, rendering, halftoning, search, and analysis of images, video, and multidimensional signals. Pertinent applications range from image and video communications to electronic imaging, biomedical imaging, image and video systems, and remote sensing.