BSMEF: Optimized multi-exposure image fusion using B-splines and Mamba

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Jinyong Cheng, Qinghao Cui, Guohua Lv
{"title":"BSMEF: Optimized multi-exposure image fusion using B-splines and Mamba","authors":"Jinyong Cheng ,&nbsp;Qinghao Cui ,&nbsp;Guohua Lv","doi":"10.1016/j.imavis.2025.105660","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, multi-exposure image fusion has been widely applied to process overexposed or underexposed images due to its simplicity, effectiveness, and low cost. With the development of deep learning techniques, related fusion methods have been continuously optimized. However, retaining global information from source images while preserving fine local details remains challenging, especially when fusing images with extreme exposure differences, where boundary transitions often exhibit shadows and noise. To address this, we propose a multi-exposure image fusion network model, BSMEF, based on B-Spline basis functions and Mamba. The B-Spline basis function, known for its smoothness, reduces edge artifacts and enables smooth transitions between images with varying exposure levels. In BSMEF, the feature extraction module, combining B-Spline and deformable convolutions, preserves global features while effectively extracting fine-grained local details. Additionally, we design a feature enhancement module based on Mamba blocks, leveraging its powerful global perception ability to capture contextual information. Furthermore, the fusion module integrates three feature enhancement methods: B-Spline basis functions, attention mechanisms, and Fourier transforms, addressing shadow and noise issues at fusion boundaries and enhancing the focus on important features. Experimental results demonstrate that BSMEF outperforms existing methods across multiple public datasets.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"161 ","pages":"Article 105660"},"PeriodicalIF":4.2000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625002483","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, multi-exposure image fusion has been widely applied to process overexposed or underexposed images due to its simplicity, effectiveness, and low cost. With the development of deep learning techniques, related fusion methods have been continuously optimized. However, retaining global information from source images while preserving fine local details remains challenging, especially when fusing images with extreme exposure differences, where boundary transitions often exhibit shadows and noise. To address this, we propose a multi-exposure image fusion network model, BSMEF, based on B-Spline basis functions and Mamba. The B-Spline basis function, known for its smoothness, reduces edge artifacts and enables smooth transitions between images with varying exposure levels. In BSMEF, the feature extraction module, combining B-Spline and deformable convolutions, preserves global features while effectively extracting fine-grained local details. Additionally, we design a feature enhancement module based on Mamba blocks, leveraging its powerful global perception ability to capture contextual information. Furthermore, the fusion module integrates three feature enhancement methods: B-Spline basis functions, attention mechanisms, and Fourier transforms, addressing shadow and noise issues at fusion boundaries and enhancing the focus on important features. Experimental results demonstrate that BSMEF outperforms existing methods across multiple public datasets.
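The abstract does not specify how the B-spline basis is parameterized. For reference, B-spline basis functions of degree k over a knot sequence {t_i} are defined by the standard Cox-de Boor recursion; their piecewise-polynomial construction (C^{k-1} continuity at simple knots) is the smoothness property the abstract appeals to for soft transitions between exposure levels:

\[
N_{i,0}(t) = \begin{cases} 1, & t_i \le t < t_{i+1} \\ 0, & \text{otherwise} \end{cases}
\qquad
N_{i,k}(t) = \frac{t - t_i}{t_{i+k} - t_i}\, N_{i,k-1}(t) + \frac{t_{i+k+1} - t}{t_{i+k+1} - t_{i+1}}\, N_{i+1,k-1}(t)
\]

The abstract also names attention mechanisms and Fourier transforms as components of the fusion module but gives no implementation details. The sketch below is a minimal, hypothetical PyTorch illustration of blending two exposure feature maps with a frequency-domain branch and a channel-attention branch; the class name, tensor shapes, and gating design are assumptions for illustration, not the authors' BSMEF architecture.

# Illustrative sketch only (not the authors' BSMEF implementation):
# fuses under- and over-exposed feature maps of shape (B, C, H, W)
# using a Fourier-domain branch plus channel attention, the two
# non-B-spline components named in the abstract.
import torch
import torch.nn as nn

class ToyFourierAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-and-excitation style channel gate (an assumption).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # 1x1 convolution to merge the global and local branches.
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, under: torch.Tensor, over: torch.Tensor) -> torch.Tensor:
        # Global branch: blend the amplitude spectra of the two exposures
        # and keep the phase of their mean (a simple frequency-domain
        # fusion heuristic, chosen only for illustration).
        f_u, f_o = torch.fft.rfft2(under), torch.fft.rfft2(over)
        amplitude = 0.5 * (torch.abs(f_u) + torch.abs(f_o))
        phase = torch.angle(0.5 * (f_u + f_o))
        global_branch = torch.fft.irfft2(torch.polar(amplitude, phase), s=under.shape[-2:])
        # Local branch: spatial sum reweighted by channel attention.
        summed = under + over
        local_branch = self.gate(summed) * summed
        return self.mix(torch.cat([global_branch, local_branch], dim=1))

# Example: fuse two 64-channel feature maps.
# fused = ToyFourierAttentionFusion(64)(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))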
Source journal
Image and Vision Computing (Engineering Technology / Engineering: Electronics & Electrical)
CiteScore: 8.50
Self-citation rate: 8.50%
Annual article count: 143
Review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.