STMFuse: spatiotemporal feature saliency change-driven mimic fusion for infrared and visible video

Yanchen Meng, Fengbao Yang, Jiangtao Xi, Xiaoxia Wang, Linna Ji, Bo Li, Xiaoming Guo

Journal: Optics and Laser Technology, Vol. 192, Article 113640 (IF 5.0, JCR Q1 in Optics, CAS Zone 2 in Physics and Astronomy)
DOI: 10.1016/j.optlastec.2025.113640
Published online: 2025-07-26
Full text: https://www.sciencedirect.com/science/article/pii/S0030399225012319
Citations: 0

Abstract

This paper addresses the variable and unpredictable distribution of spatiotemporal features in infrared–visible videos under complex dynamic environments, where traditional fusion algorithms with fixed architectures fail to adapt to continuous changes in feature distributions, resulting in blurred fusion outcomes and the loss of critical detail information. To overcome this bottleneck, a spatiotemporal feature saliency change-driven mimic fusion method, termed STMFuse, is proposed: spatiotemporal feature variations are continuously monitored and the fusion architecture is dynamically reconfigured to maintain optimal fusion performance. Specifically, a spatiotemporal dual-domain feature perception module extracts single-modal temporal change features and cross-modal difference features to comprehensively capture dynamic scene characteristics. To address the inadequacy of single threshold settings for measuring dynamic feature variations, a possibility distribution function and synthesis rules are constructed to quantify the degree of feature change, with significantly changed features serving as the driving factors for mimic variant adjustment. By investigating the dynamic correlation between feature changes and fusion quality metrics, a mimic variant evaluation function based on correlation coefficient weighting is established, mapping the relationship between features and mimic variants. Finally, a multi-feature collaborative decision mechanism based on energy functions is designed to determine the optimal mimic variant combination by integrating the various feature change characteristics. Extensive experimental validation demonstrates that STMFuse significantly outperforms existing methods in adaptive fusion performance and robustness in dynamic scenes, exhibiting superior preservation of edges and texture details and enhanced visual perception quality.
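To make the pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes frame differencing for the single-modal temporal change feature, a mean absolute intensity gap for the cross-modal difference feature, a simple triangular possibility function for quantifying the degree of change, and three hypothetical fusion rules standing in for the mimic variants; the energy function and its weights are likewise placeholders for the paper's correlation-coefficient-weighted evaluation.

import numpy as np

def temporal_change(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute frame difference as a single-modal temporal change score."""
    return float(np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32))))

def cross_modal_difference(ir: np.ndarray, vis: np.ndarray) -> float:
    """Mean absolute intensity gap between the two modalities."""
    return float(np.mean(np.abs(ir.astype(np.float32) - vis.astype(np.float32))))

def possibility(change: float, low: float = 2.0, high: float = 20.0) -> float:
    """Triangular possibility distribution mapping a change score into [0, 1].

    The (low, high) support is a hypothetical choice, not a value from the paper.
    """
    return float(np.clip((change - low) / (high - low), 0.0, 1.0))

# Candidate fusion rules ("mimic variants"); the paper selects among such variants,
# but this concrete rule set is purely illustrative.
VARIANTS = {
    "average": lambda ir, vis: 0.5 * ir + 0.5 * vis,
    "max":     lambda ir, vis: np.maximum(ir, vis),
    "ir_bias": lambda ir, vis: 0.7 * ir + 0.3 * vis,
}

def energy(fused: np.ndarray, ir: np.ndarray, vis: np.ndarray,
           w_temporal: float, w_cross: float) -> float:
    """Toy energy: change-weighted distance of the fused frame to both inputs.

    Lower is better; the weights come from the possibility scores of the temporal
    and cross-modal changes (a stand-in for correlation-coefficient weighting).
    """
    d_ir = float(np.mean((fused - ir) ** 2))
    d_vis = float(np.mean((fused - vis) ** 2))
    # A strong cross-modal difference penalises drifting away from the IR frame,
    # while a strong temporal change penalises drifting away from the visible frame.
    return w_cross * d_ir + w_temporal * d_vis

def select_variant(ir_prev, ir_curr, vis_prev, vis_curr):
    """Pick the variant whose fused frame minimises the energy for this frame pair."""
    ir = ir_curr.astype(np.float32)
    vis = vis_curr.astype(np.float32)
    w_t = possibility(max(temporal_change(ir_prev, ir_curr),
                          temporal_change(vis_prev, vis_curr)))
    w_c = possibility(cross_modal_difference(ir_curr, vis_curr))
    scored = {name: energy(rule(ir, vis), ir, vis, w_t, w_c)
              for name, rule in VARIANTS.items()}
    best = min(scored, key=scored.get)
    return best, VARIANTS[best](ir, vis)

Selecting the variant per frame pair, rather than fixing one rule for the whole sequence, is the sense in which the fusion architecture is "dynamically reconfigured" as scene statistics change.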

[Graphical abstract]

Source journal: Optics and Laser Technology
CiteScore: 8.50
Self-citation rate: 10.00%
Articles per year: 1060
Average review time: 3.4 months
Journal description: Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high-quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication. The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
• development in all types of lasers
• developments in optoelectronic devices and photonics
• developments in new photonics and optical concepts
• developments in conventional optics, optical instruments and components
• techniques of optical metrology, including interferometry and optical fibre sensors
• LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
• applications of lasers to materials processing, optical NDT display (including holography) and optical communication
• research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
• developments in optical computing and optical information processing
• developments in new optical materials
• developments in new optical characterization methods and techniques
• developments in quantum optics
• developments in light assisted micro and nanofabrication methods and techniques
• developments in nanophotonics and biophotonics
• developments in imaging processing and systems