Jieyu Chen; Ping An; Xinpeng Huang; Yilei Chen; Chao Yang; Liquan Shen
IEEE Transactions on Multimedia, vol. 27, pp. 5296-5311. Published 2025-02-18. DOI: 10.1109/TMM.2025.3543048
Mask-Aware Light Field De-Occlusion With Gated Feature Aggregation and Texture-Semantic Attention
A light field image records rich information about a scene from multiple views, thereby providing complementary information for occlusion removal. However, current occlusion removal methods have several issues: 1) inefficient exploitation of spatial and angular complementary information among views; 2) failure to distinguish foreground-occlusion pixels from background pixels; and 3) insufficient exploration of spatial detail supplementation. Therefore, in this article, we propose a mask-aware de-occlusion network (MANet). Specifically, MANet is a joint training network that integrates an occlusion mask predictor (OMP) and an occlusion remover (OR). First, OMP is proposed to provide the locations of occluded regions for OR, as the occlusion removal task is ill-posed without occluded-region localization. In OR, we introduce gated spatial-angular feature aggregation, which uses a soft gating mechanism to focus on spatial-angular interaction features in non-occluded regions, extracting effective aggregated features specific to de-occlusion. Then, we design a complementary strategy to fully utilize spatial-angular information among views. Finally, we propose texture-semantic attention to improve the quality of detail generation. Experimental results demonstrate the superiority of MANet, with substantial improvements in both PSNR and SSIM metrics. Moreover, MANet is compact, with only 2.4 M parameters, making it a promising solution for real-world applications in public safety and security surveillance.
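To make the soft gating idea concrete: the abstract states that aggregation is gated so that spatial-angular features from non-occluded regions dominate. The sketch below is an illustrative NumPy toy, not the authors' implementation; the function name, the gate sharpness constant, and the simple additive fusion are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_aggregation(spatial_feat, angular_feat, occlusion_mask):
    """Illustrative soft-gated fusion: features in regions the mask
    predictor marks as occluded are down-weighted, so the aggregate
    is driven by non-occluded pixels.

    spatial_feat, angular_feat: (H, W, C) feature maps
    occlusion_mask: (H, W) predicted occlusion probabilities in [0, 1]
    """
    # Soft gate in (0, 1): high where the pixel is likely non-occluded.
    # The factor 4.0 only controls how sharp the soft transition is.
    gate = sigmoid(4.0 * (0.5 - occlusion_mask))[..., None]  # (H, W, 1)
    fused = spatial_feat + angular_feat  # naive complementary fusion
    return gate * fused

# Toy example: one pixel marked occluded passes less signal through.
H, W, C = 4, 4, 2
spatial = np.ones((H, W, C))
angular = np.ones((H, W, C))
mask = np.zeros((H, W))
mask[0, 0] = 1.0  # predicted occluded pixel
out = gated_aggregation(spatial, angular, mask)
# out[1, 1] (non-occluded) carries more signal than out[0, 0] (occluded)
```

In the paper the gate, fusion, and mask prediction are learned jointly; the point here is only the mechanism: a continuous mask-derived weight, rather than a hard binary mask, modulates the aggregated features.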
Journal introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.