PGF-Net: fusing physical imaging model with self-attention for robust underwater feature detection

IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zheng Cong, Yifeng Zhou, Li Wu, Lin Tian, Zhipeng Chen, Minglei Guan, Li He
{"title":"PGF-Net:融合自关注物理成像模型的鲁棒水下特征检测","authors":"Zheng Cong ,&nbsp;Yifeng Zhou ,&nbsp;Li Wu ,&nbsp;Lin Tian ,&nbsp;Zhipeng Chen ,&nbsp;Minglei Guan ,&nbsp;Li He","doi":"10.1016/j.inffus.2025.103732","DOIUrl":null,"url":null,"abstract":"<div><div>Robust feature detection in underwater environments is severely impeded by image degradation from light absorption and scattering. Traditional algorithms fail in these low-contrast, blurred conditions, while deep learning methods suffer from the domain gap between terrestrial and underwater imagery and a scarcity of annotated data. To address these challenges, this paper introduces PGF-Net, a systematic framework that fuses physical imaging principles with deep learning. The framework leverages a dual-fusion strategy: First, a parametric underwater imaging model is proposed to guide the synthesis of a large-scale, physically realistic training dataset, effectively injecting prior knowledge of the degradation process into the data domain. Second, a novel detection network architecture is designed, which incorporates a self-attention mechanism to fuse local features with global contextual information, enhancing robustness against detail loss. This end-to-end network is trained on the synthesized data using a curriculum learning strategy, progressing from mild to severe degradation conditions. Extensive experiments on public datasets demonstrate that PGF-Net significantly outperforms classic and state-of-the-art deep learning methods in both keypoint detection and matching, particularly in turbid water. The proposed framework validates the efficacy of integrating physical priors with data-driven models for challenging computer vision tasks and provides a robust solution for underwater visual perception.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103732"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PGF-Net: fusing physical imaging model with self-attention for robust underwater feature detection\",\"authors\":\"Zheng Cong ,&nbsp;Yifeng Zhou ,&nbsp;Li Wu ,&nbsp;Lin Tian ,&nbsp;Zhipeng Chen ,&nbsp;Minglei Guan ,&nbsp;Li He\",\"doi\":\"10.1016/j.inffus.2025.103732\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Robust feature detection in underwater environments is severely impeded by image degradation from light absorption and scattering. Traditional algorithms fail in these low-contrast, blurred conditions, while deep learning methods suffer from the domain gap between terrestrial and underwater imagery and a scarcity of annotated data. To address these challenges, this paper introduces PGF-Net, a systematic framework that fuses physical imaging principles with deep learning. The framework leverages a dual-fusion strategy: First, a parametric underwater imaging model is proposed to guide the synthesis of a large-scale, physically realistic training dataset, effectively injecting prior knowledge of the degradation process into the data domain. Second, a novel detection network architecture is designed, which incorporates a self-attention mechanism to fuse local features with global contextual information, enhancing robustness against detail loss. This end-to-end network is trained on the synthesized data using a curriculum learning strategy, progressing from mild to severe degradation conditions. 
Extensive experiments on public datasets demonstrate that PGF-Net significantly outperforms classic and state-of-the-art deep learning methods in both keypoint detection and matching, particularly in turbid water. The proposed framework validates the efficacy of integrating physical priors with data-driven models for challenging computer vision tasks and provides a robust solution for underwater visual perception.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"127 \",\"pages\":\"Article 103732\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525007948\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525007948","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Robust feature detection in underwater environments is severely impeded by image degradation from light absorption and scattering. Traditional algorithms fail in these low-contrast, blurred conditions, while deep learning methods suffer from the domain gap between terrestrial and underwater imagery and a scarcity of annotated data. To address these challenges, this paper introduces PGF-Net, a systematic framework that fuses physical imaging principles with deep learning. The framework leverages a dual-fusion strategy: First, a parametric underwater imaging model is proposed to guide the synthesis of a large-scale, physically realistic training dataset, effectively injecting prior knowledge of the degradation process into the data domain. Second, a novel detection network architecture is designed, which incorporates a self-attention mechanism to fuse local features with global contextual information, enhancing robustness against detail loss. This end-to-end network is trained on the synthesized data using a curriculum learning strategy, progressing from mild to severe degradation conditions. Extensive experiments on public datasets demonstrate that PGF-Net significantly outperforms both classical algorithms and state-of-the-art deep learning methods in keypoint detection and matching, particularly in turbid water. The proposed framework validates the efficacy of integrating physical priors with data-driven models for challenging computer vision tasks and provides a robust solution for underwater visual perception.
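The abstract does not reproduce the parametric imaging model itself; the sketch below uses the widely adopted simplified underwater image formation model (per-channel attenuation plus backscatter), which is the standard physical prior such synthesis pipelines build on. The function name, β coefficients, and veiling-light values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def degrade_underwater(J, depth, beta=(0.8, 0.35, 0.2), B=(0.1, 0.35, 0.45)):
    """Synthesize underwater degradation with the simplified image
    formation model  I = J * t + B * (1 - t),  t = exp(-beta * d).

    J     : clean image, float array in [0, 1], shape (H, W, 3)
    depth : per-pixel scene depth in metres, shape (H, W)
    beta  : per-channel attenuation coefficients (R, G, B); red light
            attenuates fastest in water, hence the larger value
    B     : per-channel veiling light (background water colour)
    """
    t = np.exp(-np.asarray(beta) * depth[..., None])  # transmission map, (H, W, 3)
    B = np.asarray(B)
    return J * t + B * (1.0 - t)

# Illustrative usage: degrade a random scene observed at 2-10 m depth.
rng = np.random.default_rng(0)
J = rng.random((64, 64, 3))
depth = rng.uniform(2.0, 10.0, size=(64, 64))
I = degrade_underwater(J, depth)
```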
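The abstract states that the detector fuses local features with global context via self-attention but gives no architectural details. The block below is a generic sketch of that idea, treating each spatial position of a convolutional feature map as a token and applying PyTorch's stock MultiheadAttention; the module name, channel width, head count, and residual fusion are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GlobalContextFusion(nn.Module):
    """Fuse local conv features with global context via self-attention.

    Every spatial position attends to all others, so global structure
    can compensate for local detail lost to scattering and blur.
    """
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        ctx, _ = self.attn(tokens, tokens, tokens)  # global self-attention
        tokens = self.norm(tokens + ctx)            # residual fusion of local + global
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Illustrative usage on a 64-channel feature map.
feat = torch.randn(1, 64, 32, 32)
fused = GlobalContextFusion(64)(feat)  # same shape: (1, 64, 32, 32)
```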
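One plausible realization of the mild-to-severe curriculum is to widen the sampled attenuation range as training progresses, so early batches contain nearly clear water and late batches heavy turbidity. The scheduler below is hypothetical and reuses the degrade_underwater sketch above; the linear schedule and bounds are assumptions.

```python
import random

def curriculum_beta(epoch: int, total_epochs: int,
                    beta_min: float = 0.1, beta_max: float = 1.5) -> float:
    """Upper bound on the red-channel attenuation coefficient for this
    epoch: starts near beta_min (mild, clear water) and grows linearly
    to beta_max (severe, turbid water)."""
    progress = epoch / max(total_epochs - 1, 1)
    return beta_min + progress * (beta_max - beta_min)

# Each epoch, sample a degradation severity up to the current bound.
for epoch in range(100):
    hi = curriculum_beta(epoch, 100)
    beta_r = random.uniform(0.1, hi)  # mild early, severe late
    # ...generate a batch with degrade_underwater(J, depth,
    #    beta=(beta_r, beta_r * 0.4, beta_r * 0.25)) and train on it...
```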
Source journal: Information Fusion (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems are welcome.