Visual Class Incremental Learning With Textual Priors Guidance Based on an Adapted Vision-Language Model

Wentao Zhang; Tong Yu; Ruixuan Wang; Jianhui Xie; Emanuele Trucco; Wei-Shi Zheng; Xiaobo Yang

IEEE Transactions on Multimedia, vol. 27, pp. 5426-5438. Published online: 2025-02-21.
DOI: 10.1109/TMM.2025.3543109
URL: https://ieeexplore.ieee.org/document/10897910/
Citations: 0
Abstract
An ideal artificial intelligence (AI) system should have the capability to continually learn like humans. However, when learning new knowledge, AI systems often suffer from catastrophic forgetting of old knowledge. Although many continual learning methods have been proposed, they often ignore the issue of misclassifying similar classes and make insufficient use of textual priors of visual classes to improve continual learning performance. In this study, we propose a continual learning framework based on a pre-trained vision-language model (VLM) that does not require storing old class data. This framework utilizes parameter-efficient fine-tuning of the VLM's text encoder for constructing a shared and consistent semantic textual space throughout the continual learning process. The textual priors of visual classes are encoded by the adapted VLM's text encoder to generate discriminative semantic representations, which are then used to guide the learning of visual classes. Additionally, fake out-of-distribution (OOD) images constructed from each training image further assist in the learning of visual classes. Extensive empirical evaluations on three natural datasets and one medical dataset demonstrate the superiority of the proposed framework.
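The pipeline the abstract describes (a frozen VLM text encoder with a parameter-efficient adapter producing textual anchors that guide visual classification, plus fake OOD images built from each training image) can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: the bottleneck-adapter design, the cosine-similarity classifier, and the patch-shuffling OOD construction are all assumptions standing in for unspecified details.

```python
# A minimal sketch, not the paper's implementation: the bottleneck adapter,
# the cosine-similarity classifier, and the patch-shuffling OOD construction
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter: a standard parameter-efficient fine-tuning module.
    Only these weights are trained; the VLM text encoder stays frozen."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the pre-trained text space dominant,
        # which helps maintain a shared, consistent semantic space across tasks.
        return x + self.up(F.relu(self.down(x)))


def class_text_anchors(text_encoder: nn.Module, adapter: Adapter,
                       class_tokens: torch.Tensor) -> torch.Tensor:
    """Encode textual priors (tokenized class names/descriptions) into
    L2-normalized semantic anchors for all classes seen so far."""
    with torch.no_grad():                      # frozen pre-trained encoder
        feats = text_encoder(class_tokens)     # (num_classes, dim)
    return F.normalize(adapter(feats), dim=-1)


def text_guided_logits(image_feats: torch.Tensor, anchors: torch.Tensor,
                       temperature: float = 0.01) -> torch.Tensor:
    """Cosine-similarity logits between image features and text anchors."""
    image_feats = F.normalize(image_feats, dim=-1)
    return image_feats @ anchors.t() / temperature


def fake_ood_image(image: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """One plausible 'fake OOD' construction: shuffle image patches so the
    sample keeps low-level statistics but loses class semantics. The paper's
    actual construction may differ."""
    c, h, w = image.shape
    ph, pw = h // grid, w // grid
    patches = (image.unfold(1, ph, ph)         # (c, grid, w, ph)
                    .unfold(2, pw, pw)         # (c, grid, grid, ph, pw)
                    .reshape(c, grid * grid, ph, pw))
    patches = patches[:, torch.randperm(grid * grid)]
    return (patches.reshape(c, grid, grid, ph, pw)
                   .permute(0, 1, 3, 2, 4)     # (c, grid, ph, grid, pw)
                   .reshape(c, h, w))
```

During training, one natural use of the fake OOD samples is to penalize confident predictions on them (e.g., push their logits toward a uniform distribution) so the classifier tightens its decision boundaries between similar classes; this use is our reading of the abstract, not a verified detail of the method.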
About the journal:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.