Adversarial Feature Training for Few-Shot Object Detection
Tianxu Wu; Zhimeng Xin; Shiming Chen; Yixiong Zou; Xinge You
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 9324-9336
DOI: 10.1109/TCSVT.2025.3552138
Published: 2025-03-17
Citations: 0
Abstract
Currently, most few-shot object detection (FSOD) methods apply a two-stage training strategy, which first trains on abundant base classes and then transfers the learned prior knowledge to the novel stage. However, due to the inherent imbalance between the base and novel classes, the trained model tends to be biased toward recognizing novel classes as base ones when they are similar. To address this problem, we propose an adversarial feature training (AFT) strategy that calibrates the decision boundary between novel and base classes to alleviate classification confusion in FSOD. Specifically, we introduce the Classification Level Fast Gradient Sign Method (CL-FGSM), which leverages gradient information from the classifier module to attack high-level features and generate adversarial feature samples with extra feature attention. These adversarial feature samples are then combined with clean high-level features in a suitable range of proportions to train the few-shot detector. In this way, the model is forced to learn additional class-specific features that improve the robustness of the classifier and help it establish a correct decision boundary, avoiding confusion between base and novel classes in FSOD. Extensive experiments demonstrate that the proposed AFT strategy effectively calibrates the classification decision boundary, avoids confusion between base and novel classes, and significantly improves FSOD performance. Our code is available at https://github.com/wutianxu/AFT.
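To make the described mechanism concrete, the following is a minimal PyTorch-style sketch of the idea in the abstract: perturb pooled high-level features with the sign of the classification gradient (an FGSM step at the feature level) and blend the result with the clean features. This is not the authors' implementation (see the linked repository for that); the function name, `epsilon`, and `mix_ratio` are illustrative assumptions.

```python
# Sketch of feature-level FGSM plus clean/adversarial mixing, as described in the abstract.
# Hypothetical helper; the official code is at https://github.com/wutianxu/AFT.
import torch
import torch.nn.functional as F


def cl_fgsm_mix(roi_features, labels, classifier, epsilon=0.05, mix_ratio=0.7):
    """Perturb high-level RoI features along the sign of the classification
    gradient, then blend the adversarial features with the clean ones.

    roi_features: (N, D) pooled features fed to the classification head
    labels:       (N,) ground-truth class indices
    classifier:   module mapping (N, D) -> (N, num_classes)
    epsilon:      attack step size (assumed hyperparameter)
    mix_ratio:    proportion of clean features in the blend (assumed hyperparameter)
    """
    feats = roi_features.detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(feats), labels)
    grad, = torch.autograd.grad(loss, feats)

    # FGSM step applied to features rather than input pixels,
    # guided by gradients from the classifier module.
    adv_feats = feats + epsilon * grad.sign()

    # Combine clean and adversarial features before they reach the detection head.
    return mix_ratio * roi_features + (1.0 - mix_ratio) * adv_feats.detach()
```

In such a setup, the blended features would replace the clean ones in the classification branch during novel-stage fine-tuning, so the classifier is pushed to rely on class-specific cues that survive the perturbation.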
About the journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.