Haishun Du, Wenzhe Zhang, Sen Wang, Zhengyang Zhang, Linbing Cao
{"title":"FFENet:一种用于伪装目标检测的频率融合和增强网络","authors":"Haishun Du , Wenzhe Zhang , Sen Wang , Zhengyang Zhang , Linbing Cao","doi":"10.1016/j.imavis.2025.105733","DOIUrl":null,"url":null,"abstract":"<div><div>The goal of camouflaged object detection (COD) is to accurately find camouflaged objects hidden in their surroundings. Although most of the existing frequency-domain based COD models can boost the performance of COD to a certain extent by utilizing the frequency domain information, the frequency feature fusion strategies they adopt tend to ignore the complementary effects between high-frequency features and low-frequency features. In addition, most of the existing frequency-domain based COD models also do not consider enhancing camouflaged objects using low-level frequency-domain features. In order to solve these problems, we present a frequency fusion and enhancement network (FFENet) for camouflaged object detection, which mainly includes three stages. In the frequency feature extraction stage, we design a frequency feature learning module (FLM) to extract corresponding high-frequency features and low-frequency features. In the frequency feature fusion stage, we design a frequency feature fusion module (FFM) that can increase the representation ability of the fused features by adaptively assigning weights to the high-frequency features and the low-frequency features using a cross-attention mechanism. In the frequency feature guidance information enhancement stage, we design a frequency feature guidance information enhancement module (FGIEM) to enhance the contextual information and detail information of camouflaged objects in the fused features under the guidance of the low-level frequency features. Extensive experimental results on the COD10K, CHAMELEON, NC4K and CAMO datasets show that our model is superior to most existing COD models.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"163 ","pages":"Article 105733"},"PeriodicalIF":4.2000,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FFENet: A frequency fusion and enhancement network for camouflaged object detection\",\"authors\":\"Haishun Du , Wenzhe Zhang , Sen Wang , Zhengyang Zhang , Linbing Cao\",\"doi\":\"10.1016/j.imavis.2025.105733\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The goal of camouflaged object detection (COD) is to accurately find camouflaged objects hidden in their surroundings. Although most of the existing frequency-domain based COD models can boost the performance of COD to a certain extent by utilizing the frequency domain information, the frequency feature fusion strategies they adopt tend to ignore the complementary effects between high-frequency features and low-frequency features. In addition, most of the existing frequency-domain based COD models also do not consider enhancing camouflaged objects using low-level frequency-domain features. In order to solve these problems, we present a frequency fusion and enhancement network (FFENet) for camouflaged object detection, which mainly includes three stages. In the frequency feature extraction stage, we design a frequency feature learning module (FLM) to extract corresponding high-frequency features and low-frequency features. 
In the frequency feature fusion stage, we design a frequency feature fusion module (FFM) that can increase the representation ability of the fused features by adaptively assigning weights to the high-frequency features and the low-frequency features using a cross-attention mechanism. In the frequency feature guidance information enhancement stage, we design a frequency feature guidance information enhancement module (FGIEM) to enhance the contextual information and detail information of camouflaged objects in the fused features under the guidance of the low-level frequency features. Extensive experimental results on the COD10K, CHAMELEON, NC4K and CAMO datasets show that our model is superior to most existing COD models.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"163 \",\"pages\":\"Article 105733\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S026288562500321X\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S026288562500321X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FFENet: A frequency fusion and enhancement network for camouflaged object detection
The goal of camouflaged object detection (COD) is to accurately locate camouflaged objects hidden in their surroundings. Although most existing frequency-domain-based COD models can improve COD performance to some extent by exploiting frequency-domain information, the frequency feature fusion strategies they adopt tend to ignore the complementary effects between high-frequency and low-frequency features. In addition, most existing frequency-domain-based COD models do not consider enhancing camouflaged objects using low-level frequency-domain features. To address these problems, we present a frequency fusion and enhancement network (FFENet) for camouflaged object detection, which consists of three main stages. In the frequency feature extraction stage, we design a frequency feature learning module (FLM) to extract the corresponding high-frequency and low-frequency features. In the frequency feature fusion stage, we design a frequency feature fusion module (FFM) that increases the representational ability of the fused features by adaptively assigning weights to the high-frequency and low-frequency features through a cross-attention mechanism. In the frequency feature guidance information enhancement stage, we design a frequency feature guidance information enhancement module (FGIEM) to enhance the contextual and detail information of camouflaged objects in the fused features under the guidance of the low-level frequency features. Extensive experimental results on the COD10K, CHAMELEON, NC4K and CAMO datasets show that our model outperforms most existing COD models.
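To make the two core ideas of the abstract concrete, the sketch below illustrates (1) an FLM-style split of a feature map into low- and high-frequency components and (2) an FFM-style cross-attention fusion that adaptively weights the two branches. The abstract does not specify the architecture, so the FFT-based radius mask, the use of `nn.MultiheadAttention`, the gating layer, and all names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a frequency split (FLM-style) and cross-attention
# fusion (FFM-style); details are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.fft


def frequency_split(x: torch.Tensor, radius: float = 0.25):
    """Split a feature map (B, C, H, W) into low- and high-frequency parts
    using a centred circular mask in the 2D Fourier domain (assumed design)."""
    b, c, h, w = x.shape
    freq = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h, device=x.device),
        torch.linspace(-1, 1, w, device=x.device),
        indexing="ij",
    )
    low_mask = ((yy ** 2 + xx ** 2).sqrt() <= radius).to(freq.dtype)
    low = torch.fft.ifft2(
        torch.fft.ifftshift(freq * low_mask, dim=(-2, -1)), norm="ortho"
    ).real
    high = x - low  # residual keeps the high-frequency detail
    return low, high


class CrossAttentionFusion(nn.Module):
    """Fuse high- and low-frequency features: each branch queries the other,
    then a learned gate adaptively weights the two refined branches."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn_low = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_high = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        b, c, h, w = low.shape
        lo = low.flatten(2).transpose(1, 2)    # (B, HW, C) token sequence
        hi = high.flatten(2).transpose(1, 2)
        lo_ref, _ = self.attn_low(lo, hi, hi)   # low-frequency branch queries high
        hi_ref, _ = self.attn_high(hi, lo, lo)  # high-frequency branch queries low
        g = self.gate(torch.cat([lo_ref, hi_ref], dim=-1))  # adaptive per-token weight
        fused = g * lo_ref + (1 - g) * hi_ref
        return fused.transpose(1, 2).reshape(b, c, h, w)


# Example usage on a dummy backbone feature map (channels must divide by heads):
feat = torch.randn(2, 64, 32, 32)
low, high = frequency_split(feat)
fused = CrossAttentionFusion(channels=64)(low, high)  # -> (2, 64, 32, 32)
```

The gating step is one simple way to realize the "adaptively assigning weights" behaviour described for the FFM; the paper may use a different weighting or attention formulation.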
About the journal:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding within the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.