Vehicle Target Detection Based on Cross-Modality Projective-Invariant Features Extracted from Unpaired SAR and Infrared Images

IF 0.7 · CAS Rank 4 (Engineering & Technology) · JCR Q4, Engineering, Electrical & Electronic
Zhe Geng, Chongqi Xu, Chen Xin, Xiang Yu, Daiyin Zhu
Electronics Letters, vol. 61, no. 1, published 2025-06-30
DOI: 10.1049/ell2.70336
URL: https://onlinelibrary.wiley.com/doi/10.1049/ell2.70336
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70336
Citations: 0

Abstract

Synthetic aperture radar (SAR) automatic target recognition (ATR) is remarkably challenging since the SAR image defies the foundation for human and computer vision, i.e., the Gestalt perceptual principles. We propose to address this problem by fusing the target features reflected in SAR and infrared (IR) images via a novel dual-channel context-guided feature-alignment network (CGFAN) that is capable of fusing the cross-modality projective-invariant features extracted from unpaired SAR and IR images. First, region of interest (ROI) matching between SAR and IR images is realized based on special landmarks exhibiting consistent cross-modality features. After that, generative models trained with historical SAR and IR images are used to synthesize SAR images based on the IR images collected in real time for the current mission. Since SAR imaging takes more time than IR imaging, by using these synthesized SAR images as auxiliary data, the spatial-coverage rate in a typical collaborative SAR/IR ATR mission carried out by drone swarms is effectively improved. The proposed CGFAN is tested against the proprietary monostatic-bistatic circular SAR and IR dataset constructed by the researchers at our institution, which consists of nine types of military vehicles. Experimental results show that the proposed CGFAN offers better ATR performance than the baseline networks.
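The abstract gives no implementation details for the landmark-based ROI-matching step. As a rough illustration only, the sketch below estimates a projective transform (homography) from hypothetical landmark pairs visible in both modalities (e.g. road intersections) and maps an IR-image ROI into SAR image coordinates; the DLT estimator and all names here are assumptions, not the authors' method.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    # Direct Linear Transform: stack two equations per point pair and
    # take the null-space vector of A as the flattened 3x3 homography.
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def project(H, pts):
    # Apply the homography to 2-D points via homogeneous coordinates.
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical cross-modality landmarks (pixel coordinates).
ir_landmarks = np.array([[10, 10], [200, 12], [205, 150], [8, 148]], float)
sar_landmarks = np.array([[30, 25], [220, 20], [230, 170], [25, 160]], float)

H = estimate_homography(ir_landmarks, sar_landmarks)

# Map an ROI detected in the IR image into SAR coordinates.
ir_roi = np.array([[50, 40], [120, 40], [120, 100], [50, 100]], float)
sar_roi = project(H, ir_roi)
```

Four non-degenerate correspondences determine the homography exactly; with more (noisy) landmarks the same SVD solve yields a least-squares fit, which is why landmark-based matching can tolerate the projective distortion between the two viewing geometries.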


Source journal: Electronics Letters (Engineering, Electrical & Electronic)
CiteScore: 2.70 · Self-citation rate: 0.00% · Articles per year: 268 · Review time: 3.6 months
Journal description: Electronics Letters is an internationally renowned peer-reviewed rapid-communication journal that publishes short original research papers every two weeks. Its broad and interdisciplinary scope covers the latest developments in all electronic engineering related fields, including communication, biomedical, optical and device technologies. Electronics Letters also provides further insight into some of the latest developments through special features and interviews.

Scope: As a journal at the forefront of its field, Electronics Letters publishes papers covering all themes of electronic and electrical engineering. The major themes of the journal are listed below.
- Antennas and Propagation
- Biomedical and Bioinspired Technologies, Signal Processing and Applications
- Control Engineering
- Electromagnetism: Theory, Materials and Devices
- Electronic Circuits and Systems
- Image, Video and Vision Processing and Applications
- Information, Computing and Communications
- Instrumentation and Measurement
- Microwave Technology
- Optical Communications
- Photonics and Opto-Electronics
- Power Electronics, Energy and Sustainability
- Radar, Sonar and Navigation
- Semiconductor Technology
- Signal Processing
- MIMO