Yanchen Meng , Fengbao Yang , Jiangtao Xi , Xiaoxia Wang , Linna Ji , Bo Li , Xiaoming Guo
{"title":"STMFuse:时空特征显著性变化驱动的红外和可见光视频模拟融合","authors":"Yanchen Meng , Fengbao Yang , Jiangtao Xi , Xiaoxia Wang , Linna Ji , Bo Li , Xiaoming Guo","doi":"10.1016/j.optlastec.2025.113640","DOIUrl":null,"url":null,"abstract":"<div><div>The variable and unpredictable distribution of spatiotemporal features in infrared–visible videos under complex dynamic environments is addressed, where traditional fusion algorithms with fixed architectures are found to fail when adapting to continuous dynamic changes in feature distributions, resulting in blurred fusion outcomes and loss of critical detail information. To overcome this bottleneck, a mimic fusion method based on spatiotemporal feature saliency change-driven approach, termed STMFuse, is proposed, whereby spatiotemporal feature variations are continuously monitored and the fusion architecture is dynamically reconfigured, thereby ensuring optimal fusion performance. Specifically, a spatiotemporal dual-domain feature perception module extracts single-modal temporal change features and cross-modal difference features to comprehensively capture dynamic scene characteristics. To address the inadequacy of single threshold settings in measuring dynamic feature variations, a possibility distribution function and synthesis rules is constructed to quantify feature change degree, utilizing significant change features as driving factors for mimic variant adjustments. By investigating the dynamic correlation between feature changes and fusion quality metrics, a mimic variant evaluation function based on correlation coefficient weighting is established, mapping the relationship between features and mimic variants. Finally, a multi-feature collaborative decision mechanism based on energy functions is designed to determine the optimal mimic variant combination by integrating various feature change characteristics. Extensive experimental validation demonstrates that STMFuse significantly outperforms existing methods in terms of adaptive fusion performance and robustness in dynamic scenes, exhibiting superior preservation of edges, texture details, and enhanced visual perception quality.</div></div>","PeriodicalId":19511,"journal":{"name":"Optics and Laser Technology","volume":"192 ","pages":"Article 113640"},"PeriodicalIF":5.0000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"STMFuse: spatiotemporal feature saliency change-driven mimic fusion for infrared and visible video\",\"authors\":\"Yanchen Meng , Fengbao Yang , Jiangtao Xi , Xiaoxia Wang , Linna Ji , Bo Li , Xiaoming Guo\",\"doi\":\"10.1016/j.optlastec.2025.113640\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The variable and unpredictable distribution of spatiotemporal features in infrared–visible videos under complex dynamic environments is addressed, where traditional fusion algorithms with fixed architectures are found to fail when adapting to continuous dynamic changes in feature distributions, resulting in blurred fusion outcomes and loss of critical detail information. To overcome this bottleneck, a mimic fusion method based on spatiotemporal feature saliency change-driven approach, termed STMFuse, is proposed, whereby spatiotemporal feature variations are continuously monitored and the fusion architecture is dynamically reconfigured, thereby ensuring optimal fusion performance. 
Specifically, a spatiotemporal dual-domain feature perception module extracts single-modal temporal change features and cross-modal difference features to comprehensively capture dynamic scene characteristics. To address the inadequacy of single threshold settings in measuring dynamic feature variations, a possibility distribution function and synthesis rules is constructed to quantify feature change degree, utilizing significant change features as driving factors for mimic variant adjustments. By investigating the dynamic correlation between feature changes and fusion quality metrics, a mimic variant evaluation function based on correlation coefficient weighting is established, mapping the relationship between features and mimic variants. Finally, a multi-feature collaborative decision mechanism based on energy functions is designed to determine the optimal mimic variant combination by integrating various feature change characteristics. Extensive experimental validation demonstrates that STMFuse significantly outperforms existing methods in terms of adaptive fusion performance and robustness in dynamic scenes, exhibiting superior preservation of edges, texture details, and enhanced visual perception quality.</div></div>\",\"PeriodicalId\":19511,\"journal\":{\"name\":\"Optics and Laser Technology\",\"volume\":\"192 \",\"pages\":\"Article 113640\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics and Laser Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0030399225012319\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Laser Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0030399225012319","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPTICS","Score":null,"Total":0}
STMFuse: spatiotemporal feature saliency change-driven mimic fusion for infrared and visible video
This work addresses the variable and unpredictable distribution of spatiotemporal features in infrared–visible videos captured in complex dynamic environments, where traditional fusion algorithms with fixed architectures fail to adapt to continuous changes in feature distributions, producing blurred fusion results and losing critical detail information. To overcome this bottleneck, a mimic fusion method driven by spatiotemporal feature saliency changes, termed STMFuse, is proposed: spatiotemporal feature variations are continuously monitored and the fusion architecture is dynamically reconfigured, thereby maintaining optimal fusion performance. Specifically, a spatiotemporal dual-domain feature perception module extracts single-modal temporal change features and cross-modal difference features to comprehensively capture dynamic scene characteristics. To address the inadequacy of single fixed thresholds for measuring dynamic feature variations, a possibility distribution function and synthesis rules are constructed to quantify the degree of feature change, with significantly changed features serving as driving factors for mimic variant adjustment. By investigating the dynamic correlation between feature changes and fusion quality metrics, a mimic variant evaluation function based on correlation coefficient weighting is established, mapping the relationship between features and mimic variants. Finally, a multi-feature collaborative decision mechanism based on energy functions is designed to determine the optimal mimic variant combination by integrating the change characteristics of multiple features. Extensive experiments demonstrate that STMFuse significantly outperforms existing methods in adaptive fusion performance and robustness in dynamic scenes, preserving edges and texture details better and delivering higher visual perception quality.
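To make the per-frame control loop the abstract outlines concrete, the following is a minimal, hypothetical Python sketch: change features are extracted, mapped to a significance degree by a possibility distribution (here a simple ramp standing in for a hard threshold), and an energy function selects the fusion variant whose assumed affinity best matches the observed changes. Every name here (significance_possibility, FUSION_VARIANTS, the affinity table, the weights) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def significance_possibility(delta, low, high):
    """Possibility that a feature change `delta` counts as significant:
    a ramp from 0 (below `low`) to 1 (above `high`), replacing a single
    hard threshold with a graded degree of change."""
    return float(np.clip((delta - low) / (high - low), 0.0, 1.0))

def temporal_change(curr, prev):
    """Single-modal temporal change feature: mean absolute frame difference."""
    return float(np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32))))

def cross_modal_difference(ir, vis):
    """Cross-modal difference feature between registered IR and visible frames."""
    return float(np.mean(np.abs(ir.astype(np.float32) - vis.astype(np.float32))))

# Placeholder pool of "mimic variants": simple pixel-level fusion rules.
FUSION_VARIANTS = {
    "average":     lambda ir, vis: 0.5 * ir + 0.5 * vis,
    "maximum":     lambda ir, vis: np.maximum(ir, vis),
    "ir_weighted": lambda ir, vis: 0.7 * ir + 0.3 * vis,
}

# Made-up affinity table: how well each variant is assumed to suit a given
# level of (temporal change, cross-modal difference) significance.
VARIANT_AFFINITY = {
    "average":     (0.2, 0.2),
    "maximum":     (0.8, 0.8),
    "ir_weighted": (0.5, 0.8),
}

def select_variant(significances, weights):
    """Energy-function decision: keep the variant with the lowest weighted
    mismatch between its assumed affinities and the observed significances."""
    def energy(name):
        return sum(w * abs(a - s)
                   for a, s, w in zip(VARIANT_AFFINITY[name], significances, weights))
    return min(FUSION_VARIANTS, key=energy)

# Usage on synthetic frames: monitor changes, then reconfigure the fusion rule.
rng = np.random.default_rng(0)
ir_prev, ir_curr = rng.random((64, 64)), rng.random((64, 64))
vis_curr = rng.random((64, 64))

s_temporal = significance_possibility(temporal_change(ir_curr, ir_prev), low=0.05, high=0.25)
s_cross = significance_possibility(cross_modal_difference(ir_curr, vis_curr), low=0.10, high=0.40)

name = select_variant((s_temporal, s_cross), weights=(0.6, 0.4))
fused = FUSION_VARIANTS[name](ir_curr, vis_curr)
print(name, fused.shape)
```

The design choice worth noting is the ramp in significance_possibility: rather than a binary "changed / unchanged" decision, it yields a degree of change in [0, 1], which is what lets the energy function trade off several feature channels when picking a variant.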
Journal introduction:
Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication.
The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
•development in all types of lasers
•developments in optoelectronic devices and photonics
•developments in new photonics and optical concepts
•developments in conventional optics, optical instruments and components
•techniques of optical metrology, including interferometry and optical fibre sensors
•LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
•applications of lasers to materials processing, optical NDT, displays (including holography) and optical communication
•research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
•developments in optical computing and optical information processing
•developments in new optical materials
•developments in new optical characterization methods and techniques
•developments in quantum optics
•developments in light assisted micro and nanofabrication methods and techniques
•developments in nanophotonics and biophotonics
•developments in image processing and systems