Class-Aware Prompting for Federated Few-Shot Class-Incremental Learning

IF 11.1 · JCR Region 1 (Engineering) · Q1 · ENGINEERING, ELECTRICAL & ELECTRONIC
Fang-Yi Liang;Yu-Wei Zhan;Jiale Liu;Chong-Yu Zhang;Zhen-Duo Chen;Xin Luo;Xin-Shun Xu
DOI: 10.1109/TCSVT.2025.3551612
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 8520-8532
Published: 2025-03-14 (Journal Article)
Source: https://ieeexplore.ieee.org/document/10926539/
Citations: 0

Abstract

Few-Shot Class-Incremental Learning (FSCIL) aims to continuously learn new classes from limited samples while preventing catastrophic forgetting. As learning data is increasingly distributed across different clients and privacy concerns grow, FSCIL faces a more realistic scenario in which the few learning samples are scattered across clients, necessitating Federated Few-Shot Class-Incremental Learning (FedFSCIL). However, this integration faces challenges from the non-IID problem, which degrades model generalization and training efficiency, and the communication overhead of federated settings also presents a significant challenge. To address these issues, we propose Class-Aware Prompting for Federated Few-Shot Class-Incremental Learning (FedCAP). Our framework leverages pre-trained models enhanced by a class-wise prompt pool, in which shared class-wise keys enable clients to utilize global class information during training. This unifies the understanding of base-class features across clients and enhances model consistency. We further incorporate a class-level information fusion module to improve class representation and model generalization. Our approach requires transmitting only a small number of parameters during model aggregation, ensuring communication efficiency. To our knowledge, this is the first study to explore the FedFSCIL scenario. Accordingly, we designed comprehensive experimental setups and made the code publicly available.
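The abstract does not spell out how the class-wise prompt pool or the prompt-only aggregation work internally. The following is a minimal, hypothetical sketch of the general idea as commonly used in prompt-pool methods: each class has a key and a prompt, an input feature selects prompts by key similarity, and the server averages only the small prompt tensors (FedAvg-style). All names, dimensions, and the selection rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper).
NUM_CLASSES, KEY_DIM, PROMPT_LEN, EMBED_DIM, TOP_K = 10, 16, 4, 16, 3

rng = np.random.default_rng(0)
keys = rng.normal(size=(NUM_CLASSES, KEY_DIM))            # shared class-wise keys
prompts = rng.normal(size=(NUM_CLASSES, PROMPT_LEN, EMBED_DIM))  # one prompt per class

def select_prompts(query, keys, prompts, top_k=TOP_K):
    """Pick the top-k class prompts whose keys are most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sims = k @ q                          # cosine similarity per class key
    idx = np.argsort(-sims)[:top_k]       # indices of the best-matching classes
    # Concatenate the selected prompts; in a transformer these tokens would be
    # prepended to the input sequence of the frozen pre-trained backbone.
    return idx, prompts[idx].reshape(-1, EMBED_DIM)

query = rng.normal(size=KEY_DIM)          # e.g. a [CLS] feature of an input image
idx, selected = select_prompts(query, keys, prompts)

# Communication-efficient aggregation sketch: each client uploads only its small
# prompt tensor, and the server averages them; backbone weights never move.
client_prompts = [prompts + 0.01 * rng.normal(size=prompts.shape) for _ in range(4)]
global_prompts = np.mean(client_prompts, axis=0)
```

Because only `prompts` (10 × 4 × 16 floats here) crosses the network rather than full backbone weights, per-round communication stays tiny, which is the kind of efficiency the abstract claims.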
Source journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles per year: 660
Review time: 5 months
Journal introduction: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. It encourages submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display, as well as contributions in processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; storage, retrieval, indexing, and search. Papers focusing on hardware and software design and implementation are also highly valued.