A robust multi-scale feature fusion model for low-quality multi-source data in underwater environments

IF 4.0 · CAS Region 3 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Yixiang Luo , Ning Li , Yuting Zhang , Mengyun Liu , Yun Peng , Yuyan Luo , Xiaoying Wang
DOI: 10.1016/j.compeleceng.2025.110469
Journal: Computers & Electrical Engineering, Volume 126, Article 110469
Published: 2025-06-11 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0045790625004124
Citations: 0

Abstract

A robust multi-scale feature fusion model for low-quality multi-source data in underwater environments
Efficient and accurate perception of complex underwater scenes is crucial for ensuring the success of subsequent tasks. Multi-source image fusion techniques offer an effective solution; however, complex factors such as feature distortion, imaging blur, and lighting variations in low-quality multi-source (sonar–optical) underwater images significantly degrade fusion performance. To address this issue, we propose a novel underwater multi-source data fusion model incorporating multi-scale feature detection and fusion. First, we extract shallow and deep features from the multi-source data to capture rich local texture features and global structural features. Then, the designed multi-scale feature fusion module enhances detail features and semantic information during fusion, alleviating problems such as low saturation and partial feature loss in fused-image reconstruction. This provides accurate multi-source fusion capabilities for downstream tasks. Extensive experiments on a public dataset demonstrate that our fusion method improves task performance by 0.74% and 3.34%, surpassing related state-of-the-art methods.
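The paper itself does not include code. As a rough illustration of the general idea the abstract describes — extracting features at multiple scales from a sonar and an optical image, fusing them per scale, and reconstructing a single image — here is a minimal NumPy sketch. Every function name and the contrast-based fusion weight are our own assumptions, not the authors' method: the actual model uses learned shallow/deep features and a trained fusion module.

```python
import numpy as np

def multiscale_features(img, levels=3):
    """Hypothetical stand-in for feature extraction: a simple pyramid where
    each level is a 2x box-filtered downsample of the previous one. Shallow
    levels keep local texture; deeper levels keep coarse global structure."""
    feats = [img.astype(float)]
    for _ in range(levels - 1):
        f = feats[-1]
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2  # crop to even size
        f = f[:h, :w]
        down = (f[0::2, 0::2] + f[1::2, 0::2] +
                f[0::2, 1::2] + f[1::2, 1::2]) / 4.0
        feats.append(down)
    return feats

def upsample(f, shape):
    """Nearest-neighbour upsample of a pyramid level back to full size."""
    ry = (np.arange(shape[0]) * f.shape[0]) // shape[0]
    rx = (np.arange(shape[1]) * f.shape[1]) // shape[1]
    return f[np.ix_(ry, rx)]

def fuse(sonar, optical, levels=3):
    """Per-scale weighted fusion: at each level, weight each source by its
    local contrast (absolute deviation from the level mean), then average
    the upsampled fused levels into one reconstructed image."""
    fs = multiscale_features(sonar, levels)
    fo = multiscale_features(optical, levels)
    fused_levels = []
    for a, b in zip(fs, fo):
        wa = np.abs(a - a.mean())
        wb = np.abs(b - b.mean())
        w = wa / (wa + wb + 1e-8)          # convex combination weight
        fused_levels.append(w * a + (1 - w) * b)
    out = np.zeros_like(fs[0])
    for lv in fused_levels:
        out += upsample(lv, out.shape)
    return out / levels

# Toy usage on random "sonar" and "optical" frames of the same size.
rng = np.random.default_rng(0)
fused = fuse(rng.random((16, 16)), rng.random((16, 16)))
```

The sketch only conveys the structure (multi-scale decomposition, per-scale fusion, reconstruction); it has no learned parameters and makes no claim about the paper's actual architecture.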
Source journal: Computers & Electrical Engineering (Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 7.00%
Articles per year: 661
Review time: 47 days
Aims & scope: The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency. Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.