MMCFNet: Multi-scale and multi-modal complementary fusion network for light field salient object detection
Xin Hu, Fen Chen, Zongju Peng, Lian Huang, Jiawei Xu
Image and Vision Computing, Volume 162 (2025), Article 105680. DOI: 10.1016/j.imavis.2025.105680
Abstract
Light field salient object detection (LFSOD) has received growing attention in recent years. Light field cameras record both the direction and the intensity of light in a scene, providing focal stacks and all-focus images with different but complementary characteristics. Previous LFSOD models lack effective feature fusion for multi-scale and multi-modal information, which leads to background interference or incomplete salient objects. In this paper, we propose a new multi-scale and multi-modal complementary fusion network (MMCFNet) for LFSOD. For the focal stacks, we design a slice interweaving enhancement module (SIEM) to emphasize the useful features among different slices and reduce inter-slice inconsistency. In addition, we propose a new multi-scale and multi-modal fusion strategy, which comprises a high-level feature fusion module (HFFM), a cross attention module (CrossA), and a compact pyramid refinement (CPR) module. The HFFM fuses high-level multi-scale and multi-modal semantic information to accurately locate salient objects. The CrossA enhances low-level spatial-channel information and refines the edges of salient objects. Finally, the CPR module aggregates the multi-scale information and decodes it into high-quality saliency maps. Extensive experiments on public datasets show that our method outperforms 11 state-of-the-art LFSOD methods.
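The abstract gives no implementation details, but the kind of cross-modal fusion it describes can be illustrated with a minimal sketch. The PyTorch-style module below is a hypothetical cross attention block in the spirit of CrossA, assuming focal-stack and all-focus feature maps of matching shape; the class name, dimensions, and layer choices are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of cross attention between focal-stack and
# all-focus feature maps (names and shapes are illustrative, not the
# authors' implementation).
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse two modality feature maps with mutual (cross) attention."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Each modality attends to the other: queries come from one
        # stream, keys/values from the other.
        self.attn_fs = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.attn_af = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, focal: torch.Tensor, allfocus: torch.Tensor) -> torch.Tensor:
        # focal, allfocus: (B, C, H, W) feature maps from the two branches.
        b, c, h, w = focal.shape
        f = focal.flatten(2).transpose(1, 2)     # (B, H*W, C)
        a = allfocus.flatten(2).transpose(1, 2)  # (B, H*W, C)
        # Focal-stack features query the all-focus features, and vice versa.
        f2a, _ = self.attn_fs(query=f, key=a, value=a)
        a2f, _ = self.attn_af(query=a, key=f, value=f)
        fused = torch.cat([f2a, a2f], dim=2)     # (B, H*W, 2C)
        fused = fused.transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.proj(fused)                  # (B, C, H, W)


# Example usage on dummy feature maps:
fusion = CrossAttentionFusion(channels=64)
focal_feat = torch.randn(2, 64, 32, 32)
allfocus_feat = torch.randn(2, 64, 32, 32)
out = fusion(focal_feat, allfocus_feat)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Attending in both directions lets each modality borrow cues from the other, which is one plausible way complementary focal-stack and all-focus features could be combined before edge refinement.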
Journal overview:
The primary aim of Image and Vision Computing is to provide an effective medium for the interchange of results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to foster a deeper understanding of the discipline by encouraging the quantitative comparison and performance evaluation of proposed methodologies. Coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.