{"title":"Class-Aware Prompting for Federated Few-Shot Class-Incremental Learning","authors":"Fang-Yi Liang;Yu-Wei Zhan;Jiale Liu;Chong-Yu Zhang;Zhen-Duo Chen;Xin Luo;Xin-Shun Xu","doi":"10.1109/TCSVT.2025.3551612","DOIUrl":null,"url":null,"abstract":"Few-Shot Class-Incremental Learning (FSCIL) aims to continuously learn new classes from limited samples while preventing catastrophic forgetting. With the increasing distribution of learning data across different clients and privacy concerns, FSCIL faces a more realistic scenario where few learning samples are distributed across different clients, thereby necessitating a Federated Few-Shot Class-Incremental Learning (FedFSCIL) scenario. However, this integration faces challenges from non-IID problem, which affects model generalization and training efficiency. The communication overhead in federated settings also presents a significant challenge. To address these issues, we propose Class-Aware Prompting for Federated Few-Shot Class-Incremental Learning (FedCAP). Our framework leverages pre-trained models enhanced by a class-wise prompt pool, where shared class-wise keys enable clients to utilize global class information during training. This unifies the understanding of base class features across clients and enhances model consistency. We further incorporate a class-level information fusion module to improve class representation and model generalization. Our approach requires very few parameter transmission during model aggregation, ensuring communication efficiency. To our knowledge, this is the first study to explore the scenario of FedFSCIL. Consequently, we designed comprehensive experimental setups and made the code publicly available.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 9","pages":"8520-8532"},"PeriodicalIF":11.1000,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10926539/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Few-Shot Class-Incremental Learning (FSCIL) aims to continuously learn new classes from limited samples while preventing catastrophic forgetting. As learning data is increasingly distributed across different clients and subject to privacy constraints, FSCIL faces a more realistic scenario in which the few available training samples are scattered across clients, motivating Federated Few-Shot Class-Incremental Learning (FedFSCIL). However, this integration is challenged by the non-IID problem, which degrades model generalization and training efficiency, and the communication overhead of federated settings poses a further significant challenge. To address these issues, we propose Class-Aware Prompting for Federated Few-Shot Class-Incremental Learning (FedCAP). Our framework leverages pre-trained models enhanced by a class-wise prompt pool, where shared class-wise keys enable clients to utilize global class information during training. This unifies the understanding of base class features across clients and enhances model consistency. We further incorporate a class-level information fusion module to improve class representation and model generalization. Our approach requires transmitting only a small number of parameters during model aggregation, ensuring communication efficiency. To our knowledge, this is the first study to explore the FedFSCIL scenario. Accordingly, we design comprehensive experimental setups and make our code publicly available.
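To make the class-wise prompt pool idea concrete, below is a minimal, hypothetical sketch of how prompts could be selected by matching frozen-encoder features against shared class-wise keys. All names (ClassWisePromptPool, prompt_len, embed_dim, top_k) are illustrative assumptions for exposition, not the authors' actual implementation; in a federated setting the keys (and prompts) would be the small set of parameters synchronized across clients.

```python
# Hypothetical sketch of a class-wise prompt pool with shared keys (assumption,
# not the FedCAP implementation). Each class owns one key and one prompt; a
# client queries the pool with features from a frozen pre-trained encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassWisePromptPool(nn.Module):
    def __init__(self, num_classes: int, prompt_len: int, embed_dim: int):
        super().__init__()
        # Keys shared across clients so all clients query the same class space.
        self.keys = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.prompts = nn.Parameter(torch.randn(num_classes, prompt_len, embed_dim))

    def forward(self, query: torch.Tensor, top_k: int = 1) -> torch.Tensor:
        # query: [batch, embed_dim] features from the frozen backbone.
        sim = F.normalize(query, dim=-1) @ F.normalize(self.keys, dim=-1).T
        idx = sim.topk(top_k, dim=-1).indices            # [batch, top_k]
        selected = self.prompts[idx]                     # [batch, top_k, prompt_len, embed_dim]
        # Flatten selected prompts into one token sequence to prepend to the input.
        return selected.flatten(1, 2)                    # [batch, top_k * prompt_len, embed_dim]


# Usage: select class-aware prompts for a batch of encoder features.
pool = ClassWisePromptPool(num_classes=100, prompt_len=5, embed_dim=768)
feats = torch.randn(4, 768)
prompts = pool(feats, top_k=2)
print(prompts.shape)  # torch.Size([4, 10, 768])
```

Because only the pool (keys and prompts) would need to be exchanged during aggregation rather than the full backbone, a design of this kind keeps the per-round communication cost small, which is the efficiency property the abstract highlights.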
Journal introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.