Cross-frequency aware network for camouflaged object detection with octave-transformer

IF 7.6 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Feng Dong, Jinchao Zhu, Hongpeng Wang
Journal: Knowledge-Based Systems, Volume 330, Article 114638
DOI: 10.1016/j.knosys.2025.114638
Publication date: 2025-10-14
Citations: 0

Abstract

Camouflaged object detection (COD) aims to identify objects that are fully blended into their surrounding environments. Current mainstream COD methods primarily focus on pixel-level optimization using convolutional neural networks (CNNs), without sufficiently addressing the significance of frequency interactions between candidate targets and noisy backgrounds, which are crucial for obtaining accurate edge and localization information. This paper explores the integration of multi-frequency features and constructs a cross-frequency aware network (CFANet). The proposed network utilizes precisely learned deep-layer low-frequency features to guide other layers, achieving coarse localization. To further refine segmentation, the network employs both Transformer and CNN structures to facilitate the interaction and optimization of high- and low-frequency features at local and global levels. The model adopts a localization-guided decoder structure (LGS) that allows deep-layer low-frequency features to play a key role in guiding localization. The discussion module (DM) comprises three feature extraction experts, which engage in a teacher-student learning framework to derive more accurate deep-layer low-frequency features. In the Octave-Transformer module (OTM), high- and low-frequency fused features based on octave convolution (OctConv) and Transformer deeply mine semantic features and detailed information. Compared to 33 existing state-of-the-art COD methods, the proposed network achieves overall superior performance across four benchmark datasets. Additionally, the network demonstrates excellent performance in other downstream tasks, such as polyp segmentation and surface defect detection. Our code is available at https://github.com/wkkwll-df/CFANet.
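The OTM builds on octave convolution (OctConv), which stores features in two branches — a full-resolution high-frequency branch and a half-resolution low-frequency branch — and exchanges information between them through four paths (high-to-high, high-to-low, low-to-high, low-to-low). The following is a minimal NumPy sketch of that exchange, using 1x1 convolutions expressed as channel-mixing matrices; the function and weight names are illustrative, not taken from the paper's released code.

```python
import numpy as np

def avg_pool2(x):
    # Average-pool a (C, H, W) map by a factor of 2 in each spatial dimension.
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def octave_conv(x_h, x_l, W_hh, W_hl, W_lh, W_ll):
    """One OctConv layer with 1x1 kernels.

    x_h: high-frequency input, shape (C_h, H, W)
    x_l: low-frequency input,  shape (C_l, H/2, W/2)
    W_*: channel-mixing matrices of shape (C_out, C_in) for the
         four information paths (hh, hl, lh, ll).
    """
    conv = lambda W, x: np.einsum('oc,chw->ohw', W, x)
    # High-frequency output: intra-branch path + upsampled low->high path.
    y_h = conv(W_hh, x_h) + upsample2(conv(W_lh, x_l))
    # Low-frequency output: intra-branch path + pooled high->low path.
    y_l = conv(W_ll, x_l) + conv(W_hl, avg_pool2(x_h))
    return y_h, y_l

rng = np.random.default_rng(0)
x_h = rng.standard_normal((4, 8, 8))   # 4 high-freq channels at 8x8
x_l = rng.standard_normal((2, 4, 4))   # 2 low-freq channels at 4x4
y_h, y_l = octave_conv(
    x_h, x_l,
    rng.standard_normal((4, 4)),  # W_hh: high -> high
    rng.standard_normal((2, 4)),  # W_hl: high -> low
    rng.standard_normal((4, 2)),  # W_lh: low  -> high
    rng.standard_normal((2, 2)),  # W_ll: low  -> low
)
```

The key design point is that the low-frequency branch operates at half resolution, so it captures coarse localization cues cheaply, while the cross-branch paths let edge detail and global context inform each other — the interaction CFANet's OTM further couples with Transformer attention.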
Source journal: Knowledge-Based Systems (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Annual articles: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems is an international and interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research results in the field, focusing on knowledge-based and other AI-technique-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.