{"title":"FCNet:夜间信号弹清除的功能互补网络","authors":"Kejing Qi , Bo Wang , Chongyi Li","doi":"10.1016/j.cviu.2025.104495","DOIUrl":null,"url":null,"abstract":"<div><div>Nighttime image flare removal is a very challenging task due to the presence of various types of unfavorable degrading effects, including glare, shimmer, streak and saturated blobs. Most of the existing methods focus on the spatial domain and limited perception field, resulting in incomplete flare removal and severe artifacts. To address these challenges, we propose a two-stage feature complementary network for nighttime flare removal, which is used for flare perception and removal, respectively. In the first stage, a Spatial-Frequency Complementary Module (SFCM) is designed to perceive the flare region from different domains to get a mask of the flare. In the second stage, the flare mask and image are fed into the Spatial-Frequency Complementary Gating Module (SFCGM) to preserve the background information, while removing the flares from different angles and restoring the detailed features. Finally the flare and non-flare regions are modeled by the Flare Interactive Module (FIM) to refine the flare regions at a fine-grained level to suppress the artifact problem. Extensive experiments on Flare 7K++ validate the superiority of the proposed approach over state-of-the-arts, both qualitatively and quantitatively.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"261 ","pages":"Article 104495"},"PeriodicalIF":3.5000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FCNet: A feature complementary network for nighttime flare removal\",\"authors\":\"Kejing Qi , Bo Wang , Chongyi Li\",\"doi\":\"10.1016/j.cviu.2025.104495\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Nighttime image flare removal is a very challenging task due to the presence of various types of unfavorable degrading effects, including glare, shimmer, streak and saturated blobs. Most of the existing methods focus on the spatial domain and limited perception field, resulting in incomplete flare removal and severe artifacts. To address these challenges, we propose a two-stage feature complementary network for nighttime flare removal, which is used for flare perception and removal, respectively. In the first stage, a Spatial-Frequency Complementary Module (SFCM) is designed to perceive the flare region from different domains to get a mask of the flare. In the second stage, the flare mask and image are fed into the Spatial-Frequency Complementary Gating Module (SFCGM) to preserve the background information, while removing the flares from different angles and restoring the detailed features. Finally the flare and non-flare regions are modeled by the Flare Interactive Module (FIM) to refine the flare regions at a fine-grained level to suppress the artifact problem. 
Extensive experiments on Flare 7K++ validate the superiority of the proposed approach over state-of-the-arts, both qualitatively and quantitatively.</div></div>\",\"PeriodicalId\":50633,\"journal\":{\"name\":\"Computer Vision and Image Understanding\",\"volume\":\"261 \",\"pages\":\"Article 104495\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision and Image Understanding\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1077314225002188\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314225002188","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FCNet: A feature complementary network for nighttime flare removal
Nighttime image flare removal is a very challenging task due to the presence of various types of unfavorable degrading effects, including glare, shimmer, streaks, and saturated blobs. Most existing methods operate only in the spatial domain with a limited receptive field, resulting in incomplete flare removal and severe artifacts. To address these challenges, we propose a two-stage feature complementary network for nighttime flare removal, whose two stages handle flare perception and flare removal, respectively. In the first stage, a Spatial-Frequency Complementary Module (SFCM) perceives the flare region from different domains to produce a flare mask. In the second stage, the flare mask and the image are fed into the Spatial-Frequency Complementary Gating Module (SFCGM), which preserves background information while removing flares from different angles and restoring detailed features. Finally, the flare and non-flare regions are modeled by the Flare Interactive Module (FIM), which refines the flare regions at a fine-grained level to suppress artifacts. Extensive experiments on Flare7K++ validate the superiority of the proposed approach over the state of the art, both qualitatively and quantitatively.
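The abstract describes a two-stage pipeline: a spatial-frequency module first predicts a flare mask, and a second stage restores the image while gating out the flare region. The paper does not include code, so the following is only a minimal PyTorch sketch of how such a pipeline might be wired up; the module internals (the FFT-based frequency branch, the mask-based gating, the channel widths, and all class and parameter names) are illustrative assumptions, not the authors' implementation of SFCM, SFCGM, or FIM.

# Hypothetical sketch of a two-stage spatial-frequency flare-removal pipeline.
# All design details here are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialFrequencyBlock(nn.Module):
    """Fuses a spatial conv branch with a frequency (FFT-domain) branch."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        # Frequency branch: 1x1 convs over the concatenated real/imag parts of the FFT.
        self.freq = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels * 2, channels * 2, 1),
        )
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.spatial(x)
        f = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([f.real, f.imag], dim=1)
        f = self.freq(f)
        real, imag = f.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))


class TwoStageFlareRemovalSketch(nn.Module):
    """Stage 1 predicts a flare mask; stage 2 restores the image gated by that mask."""

    def __init__(self, base: int = 32):
        super().__init__()
        self.embed = nn.Conv2d(3, base, 3, padding=1)
        self.mask_head = nn.Sequential(
            SpatialFrequencyBlock(base), nn.Conv2d(base, 1, 1), nn.Sigmoid()
        )
        self.embed2 = nn.Conv2d(3 + 1, base, 3, padding=1)  # image + predicted mask
        self.restore = nn.Sequential(SpatialFrequencyBlock(base), SpatialFrequencyBlock(base))
        self.out = nn.Conv2d(base, 3, 3, padding=1)

    def forward(self, img: torch.Tensor):
        feat = self.embed(img)
        mask = self.mask_head(feat)                        # stage 1: flare perception
        feat2 = self.embed2(torch.cat([img, mask], dim=1))  # stage 2: mask-conditioned removal
        feat2 = self.restore(feat2)
        # Simple gating: keep background pixels, replace pixels inside the predicted flare region.
        clean = img * (1 - mask) + self.out(feat2) * mask
        return clean, mask


if __name__ == "__main__":
    net = TwoStageFlareRemovalSketch()
    x = torch.rand(1, 3, 64, 64)
    y, m = net(x)
    print(y.shape, m.shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 1, 64, 64])

The gating in the last step is one plausible way to preserve background information while editing only the flare region, echoing the role the abstract attributes to the SFCGM and FIM; the actual modules in the paper are more elaborate.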
Journal introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems