FESAR: SAR ship detection model based on local spatial relationship capture and fused convolutional enhancement

IF 2.4 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence)
Chongchong Liu, Chunman Yan
DOI: 10.1007/s00138-024-01516-4
Journal: Machine Vision and Applications
Published: 2024-03-08 (Journal Article)
Citations: 0

Abstract


FESAR: SAR ship detection model based on local spatial relationship capture and fused convolutional enhancement

Synthetic aperture radar (SAR) is instrumental in ship monitoring owing to its all-weather capabilities and high resolution. In SAR images, ship targets frequently display blurred or mixed boundaries with the background, and instances of occlusion or partial occlusion may occur. Additionally, multi-scale transformations and small-target ships pose challenges for ship detection. To tackle these challenges, we propose a novel SAR ship detection model, FESAR. Firstly, in addressing multi-scale transformations in ship detection, we propose the Fused Convolution Enhancement Module (FCEM). This network incorporates distinct convolutional branches designed to capture local and global features, which are subsequently fused and enhanced. Secondly, a Spatial Relationship Analysis Module (SRAM) with a spatial-mixing layer is designed to analyze the local spatial relationship between the ship target and the background, effectively combining local information to discern feature distinctions between the ship target and the background. Finally, a new backbone network, SPD-YOLO, is designed to perform deep downsampling for the comprehensive extraction of semantic information related to ships. To validate the model’s performance, an extensive series of experiments was conducted on the public datasets HRSID, LS-SSDD-v1.0, and SSDD. The results demonstrate the outstanding performance of the proposed FESAR model compared to numerous state-of-the-art (SOTA) models. Relative to the baseline model, FESAR exhibits an improvement in mAP by 2.6% on the HRSID dataset, 5.5% on LS-SSDD-v1.0, and 0.2% on the SSDD dataset. In comparison with numerous SAR ship detection models, FESAR demonstrates superior comprehensive performance.
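The abstract describes FCEM as distinct convolutional branches capturing local and global features that are then fused and enhanced. The paper gives no code, so the following is only a minimal sketch of that two-branch fuse-and-enhance pattern: the local branch is stood in for by a 3x3 box filter, the global branch by a per-channel global average, and the enhancement by a sigmoid-gated residual. The function name, kernel choices, and gating are illustrative assumptions, not the paper's FCEM.

```python
import numpy as np

def fused_conv_enhance(x):
    """Illustrative two-branch fuse-and-enhance step (NOT the paper's FCEM).
    x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    # Local branch: 3x3 box filter with zero padding (stand-in for a local conv).
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    local = sum(pad[:, i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    # Global branch: per-channel global average, broadcast back to (C, H, W).
    global_ = x.mean(axis=(1, 2), keepdims=True) * np.ones_like(x)
    # Fuse the branches, then "enhance" via a sigmoid-gated residual.
    fused = local + global_
    gate = 1.0 / (1.0 + np.exp(-fused))
    return x * gate + fused

out = fused_conv_enhance(np.random.rand(2, 8, 8).astype(np.float32))
print(out.shape)  # (2, 8, 8)
```

In a real detector these branches would be learned convolutions; the sketch only shows how local and global context can be combined at matching spatial resolution before an element-wise enhancement.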
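The abstract also says the SPD-YOLO backbone performs deep downsampling. The name suggests a space-to-depth rearrangement (as in SPD-Conv), which downsamples spatially without discarding pixels by moving each 2x2 block into the channel dimension. This is an assumption about the mechanism, not the paper's code; a minimal numpy sketch:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Lossless downsampling: move each (block x block) spatial patch
    into channels. x: (C, H, W) with H, W divisible by block.
    Returns (C*block*block, H//block, W//block)."""
    c, h, w = x.shape
    x = x.reshape(c, h // block, block, w // block, block)
    x = x.transpose(0, 2, 4, 1, 3)  # (C, block, block, H//block, W//block)
    return x.reshape(c * block * block, h // block, w // block)

feat = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
out = space_to_depth(feat)
print(out.shape)  # (8, 2, 2)
```

Unlike strided convolution or pooling, every input value survives the rearrangement, which is why this style of downsampling is often favored for small targets such as the small-ship cases the abstract highlights.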

Source journal: Machine Vision and Applications (Engineering, Electrical & Electronic)
CiteScore: 6.30
Self-citation rate: 3.00%
Articles per year: 84
Review time: 8.7 months
Journal description: Machine Vision and Applications publishes high-quality technical contributions in machine vision research and development. Specifically, the editors encourage submittals in all applications and engineering aspects of image-related computing. In particular, original contributions dealing with scientific, commercial, industrial, military, and biomedical applications of machine vision are all within the scope of the journal. Particular emphasis is placed on engineering and technology aspects of image processing and computer vision. The following aspects of machine vision applications are of interest: algorithms, architectures, VLSI implementations, AI techniques and expert systems for machine vision, front-end sensing, multidimensional and multisensor machine vision, real-time techniques, image databases, virtual reality and visualization. Papers must include a significant experimental validation component.