Adversarial Feature Training for Few-Shot Object Detection

Impact Factor 11.1 | CAS Region 1 (Engineering & Technology) | JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Tianxu Wu;Zhimeng Xin;Shiming Chen;Yixiong Zou;Xinge You
DOI: 10.1109/TCSVT.2025.3552138
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 9324-9336
Published: 2025-03-17
URL: https://ieeexplore.ieee.org/document/10930667/
Code: https://github.com/wutianxu/AFT
Citations: 0

Abstract

Currently, most few-shot object detection (FSOD) methods adopt a two-stage training strategy: the model is first trained on abundant base classes, and the learned prior knowledge is then transferred to the novel stage. However, due to the inherent imbalance between the base and novel classes, the trained model tends to be biased toward recognizing novel classes as base ones when the two are similar. To address this problem, we propose an adversarial feature training (AFT) strategy that calibrates the decision boundary between novel and base classes to alleviate classification confusion in FSOD. Specifically, we introduce the Classification-Level Fast Gradient Sign Method (CL-FGSM), which leverages gradient information from the classifier module to generate adversarial samples with extra feature attention. By attacking the high-level features, CL-FGSM creates adversarial feature samples, which are then combined with clean high-level features in a suitable range of proportions to train the few-shot detector. In this way, the novel-stage model is forced to learn extra class-specific features that improve the robustness of the classifier and establish a correct decision boundary, avoiding confusion between base and novel classes. Extensive experiments demonstrate that our proposed AFT strategy effectively calibrates the classification decision boundary, avoids confusion between base and novel classes, and significantly improves FSOD performance. Our code is available at https://github.com/wutianxu/AFT.
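The core mechanism described above, i.e. perturbing high-level features along the sign of the classification-loss gradient and then blending the result with clean features, can be sketched roughly as follows. This is a minimal NumPy illustration using a linear softmax classifier with an analytic gradient; the function names, the perturbation budget `eps`, and the mixing ratio `lam` are illustrative assumptions, not values or APIs from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cl_fgsm(features, labels, W, b, eps=0.05):
    """FGSM-style attack at the classification level: perturb the
    high-level features along the sign of the cross-entropy gradient
    with respect to the features (here derived analytically for a
    linear classifier, logits = features @ W + b)."""
    p = softmax(features @ W + b)
    onehot = np.eye(W.shape[1])[labels]
    # d(cross-entropy)/d(features) = (p - onehot) @ W.T
    grad = (p - onehot) @ W.T
    return features + eps * np.sign(grad)

def mix_features(clean, adv, lam=0.7):
    """Blend clean and adversarial features; the blend ratio stands in
    for the paper's 'suitable range of proportions' (an assumption)."""
    return lam * clean + (1.0 - lam) * adv
```

In a real detector the gradient would come from backpropagation through the classifier head (e.g. `torch.autograd.grad` in PyTorch), and the mixed features would feed the few-shot training loss in place of the clean ones.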
Source journal: IEEE Transactions on Circuits and Systems for Video Technology
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
Journal description: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued.