Learning Stage-wise Fusion Transformer for light field saliency detection

IF 3.3 | JCR Q2, Computer Science, Artificial Intelligence | CAS Tier 3, Computer Science
Wenhui Jiang, Qi Shu, Hongwei Cheng, Yuming Fang, Yifan Zuo, Xiaowei Zhao
{"title":"Learning Stage-wise Fusion Transformer for light field saliency detection","authors":"Wenhui Jiang ,&nbsp;Qi Shu ,&nbsp;Hongwei Cheng ,&nbsp;Yuming Fang ,&nbsp;Yifan Zuo ,&nbsp;Xiaowei Zhao","doi":"10.1016/j.patrec.2025.07.005","DOIUrl":null,"url":null,"abstract":"<div><div>Light field salient object detection (SOD) has attracted tremendous research efforts recently. As the light field data contains multiple images with different characteristics, effectively integrating the valuable information from these images remains under-explored. Recent efforts focus on aggregating the complementary information from all-in-focus (AiF) and focal stack images (FS) late in the decoding stage. In this paper, we explore how learning the AiF and FS image encoders jointly can strengthen light field SOD. Towards this goal, we propose a Stage-wise Fusion Transformer (SF-Transformer) to aggregate the rich information from AiF image and FS images at different levels. Specifically, we present a Focal Stack Transformer (FST) for focal stacks encoding, which makes full use of the spatial-stack correlations for performant FS representation. We further introduce a Stage-wise Deep Fusion (SDF) which refines both AiF and FS image representation by capturing the multi-modal feature interactions in each encoding stage, thus effectively exploring the advantages of AiF and FS characteristics. We conduct comprehensive experiments on DUT-LFSD, HFUT-LFSD, and LFSD. The experimental results validate the effectiveness of the proposed method.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"197 ","pages":"Pages 117-123"},"PeriodicalIF":3.3000,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525002570","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Light field salient object detection (SOD) has attracted tremendous research effort recently. As light field data contains multiple images with different characteristics, effectively integrating the valuable information from these images remains under-explored. Recent efforts focus on aggregating the complementary information from all-in-focus (AiF) and focal stack (FS) images late in the decoding stage. In this paper, we explore how jointly learning the AiF and FS image encoders can strengthen light field SOD. Towards this goal, we propose a Stage-wise Fusion Transformer (SF-Transformer) to aggregate the rich information from the AiF image and FS images at different levels. Specifically, we present a Focal Stack Transformer (FST) for focal stack encoding, which makes full use of the spatial-stack correlations for a performant FS representation. We further introduce a Stage-wise Deep Fusion (SDF) module, which refines both the AiF and FS image representations by capturing the multi-modal feature interactions in each encoding stage, thus effectively exploiting the complementary characteristics of AiF and FS data. We conduct comprehensive experiments on DUT-LFSD, HFUT-LFSD, and LFSD. The experimental results validate the effectiveness of the proposed method.
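The abstract describes the architecture only at a high level. Below is a minimal PyTorch-style sketch of the two ideas it names, not the authors' implementation; all module and variable names (FocalStackAttention, StageWiseFusion, fs_feat, aif_feat) are hypothetical. The first block illustrates self-attention over joint spatial-stack tokens of the focal-stack features, in the spirit of the FST; the second illustrates a per-stage cross-attention exchange that refines the AiF features with focal-stack context and vice versa, in the spirit of the SDF.

```python
import torch
import torch.nn as nn

class FocalStackAttention(nn.Module):
    """Hypothetical spatial-stack self-attention: tokens are formed jointly over
    the focal slices and spatial positions, so each token can attend across
    focus levels as well as across locations."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fs_feat):
        # fs_feat: (B, S, C, H, W), S = number of focal slices
        b, s, c, h, w = fs_feat.shape
        tokens = fs_feat.permute(0, 1, 3, 4, 2).reshape(b, s * h * w, c)
        q = self.norm(tokens)
        out, _ = self.attn(q, q, q)
        tokens = tokens + out  # residual connection
        return tokens.reshape(b, s, h, w, c).permute(0, 1, 4, 2, 3)

class StageWiseFusion(nn.Module):
    """Hypothetical stage-wise fusion: bidirectional cross-attention refines the
    AiF features with focal-stack context, and the FS features with AiF context,
    at one encoder stage."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.aif_from_fs = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fs_from_aif = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)

    def forward(self, aif_feat, fs_feat):
        # aif_feat: (B, C, H, W); fs_feat: (B, S, C, H, W)
        b, c, h, w = aif_feat.shape
        s = fs_feat.shape[1]
        a = aif_feat.flatten(2).transpose(1, 2)                      # (B, HW, C)
        f = fs_feat.permute(0, 1, 3, 4, 2).reshape(b, s * h * w, c)  # (B, SHW, C)
        a_ref, _ = self.aif_from_fs(self.norm_a(a), self.norm_f(f), self.norm_f(f))
        f_ref, _ = self.fs_from_aif(self.norm_f(f), self.norm_a(a), self.norm_a(a))
        a = a + a_ref
        f = f + f_ref
        aif_out = a.transpose(1, 2).reshape(b, c, h, w)
        fs_out = f.reshape(b, s, h, w, c).permute(0, 1, 4, 2, 3)
        return aif_out, fs_out

# Toy shapes only check that the tensors line up.
aif = torch.randn(2, 64, 32, 32)
fs = torch.randn(2, 12, 64, 32, 32)   # 12 focal slices
fs = FocalStackAttention(dim=64)(fs)
aif_refined, fs_refined = StageWiseFusion(dim=64)(aif, fs)
```

In a full encoder, a block like StageWiseFusion would sit at the end of each encoding stage, so the refined AiF and FS features feed both the next stage and the decoder.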
Source journal: Pattern Recognition Letters
Category: Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 12.40
Self-citation rate: 5.90%
Articles published per year: 287
Average review time: 9.1 months
Aims and scope: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.