Dual-branch visible and infrared image fusion transformer

Xiao-jing Shi, Zhen Wang, Xinping Pan, Junjie Li, Ke Wang
Published in: International Conference on Optoelectronic Information and Computer Engineering (OICE), 2023-08-01
DOI: 10.1117/12.2691207

Abstract

The process of combining features from two images of different sources to generate a new image is called image fusion. Deep learning has been widely applied to adapt image fusion to different application scenarios. However, existing fusion networks focus on the extraction of local information and neglect long-range dependencies. To address this defect, a fusion network based on the Transformer is proposed. To accommodate our experimental equipment, we made some modifications to the Transformer. A dual-branch autoencoder network is designed with detail and semantic branches; the fusion layer consists of a CNN and a Transformer, and the decoder reconstructs the features to obtain the fused image. A new loss function is proposed to train the network. Based on the results, an infrared feature compensation network is designed to enhance the fusion effect. We compared our method with several other algorithms on the metrics we focus on. In experiments on several datasets, our method showed improvement on the SCD, SSIM, and MS-SSIM metrics, and was essentially equal to the other algorithms on saliency-based structural similarity, weighted quality assessment, and edge-based structural similarity. The experimental results show that our method is feasible.
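One of the metrics the abstract reports improvement on is SCD (sum of correlations of differences). As an illustration only, the sketch below computes SCD in NumPy using its common definition, corr(F − B, A) + corr(F − A, B); the authors' own evaluation code may differ in details (e.g. normalization or preprocessing), and the toy averaging fusion is purely for demonstration.

```python
import numpy as np

def _corr(x, y):
    # Pearson correlation coefficient between two images, flattened.
    x = x.ravel() - x.mean()
    y = y.ravel() - y.mean()
    denom = np.sqrt((x * x).sum() * (y * y).sum())
    return float((x * y).sum() / denom) if denom > 0 else 0.0

def scd(a, b, fused):
    """Sum of Correlations of Differences.

    Measures how much of each source image's unique content the fused
    image carries: corr(F - B, A) + corr(F - A, B). Higher is better,
    with 2.0 as the upper bound.
    """
    return _corr(fused - b, a) + _corr(fused - a, b)

# Toy example: "fusing" two random images by simple averaging.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = rng.random((32, 32))
fused = 0.5 * (a + b)
print(scd(a, b, fused))
```

Note that the ideal case F = A + B yields SCD = 2, since each difference image then equals the corresponding source exactly.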