Visual Class Incremental Learning with Textual Priors Guidance based on an Adapted Vision-Language Model

Wentao Zhang, Tong Yu, Ruixuan Wang, Jianhui Xie, Manuel Trucco, Wei-Shi Zheng, Xiaobo Yang

Research output: Contribution to journal › Article › peer-review

Abstract

An ideal artificial intelligence (AI) system should be able to learn continually, as humans do. However, when learning new knowledge, AI systems often suffer from catastrophic forgetting of old knowledge. Although many continual learning methods have been proposed, they often ignore the issue of misclassifying similar classes and make insufficient use of the textual priors of visual classes to improve continual learning performance. In this study, we propose a continual learning framework based on a pre-trained vision-language model (VLM) that does not require storing old class data. The framework applies parameter-efficient fine-tuning to the VLM's text encoder to construct a shared and consistent semantic textual space throughout the continual learning process. The textual priors of visual classes are encoded by the adapted text encoder into discriminative semantic representations, which then guide the learning of visual classes. Additionally, fake out-of-distribution (OOD) images constructed from each training image further assist the learning of visual classes. Extensive empirical evaluations on three natural image datasets and one medical dataset demonstrate the superiority of the proposed framework. The source code is available at https://openi.pcl.ac.cn/OpenMedIA/CIL Adapterd VLM.
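
As a rough illustration of the ideas sketched in the abstract, the minimal example below shows one way a frozen text encoder's class embeddings could be refined by a small residual adapter, used as cosine-similarity classifier weights to guide image features, and paired with patch-shuffled "fake OOD" inputs. The adapter design, the patch-shuffling recipe, and all names and hyperparameters here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of textual-prior-guided
# classification with an adapted text encoder and fake-OOD construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAdapter(nn.Module):
    """Bottleneck residual adapter applied to frozen text-encoder outputs
    (a common parameter-efficient fine-tuning choice; assumed here)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(x)))  # residual refinement

def make_fake_ood(images: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Build 'fake OOD' images by shuffling patches of each training image
    (one plausible construction; the paper's exact recipe may differ)."""
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)   # (b, c, g, g, ph, pw)
    patches = patches.reshape(b, c, grid * grid, ph, pw)
    perm = torch.randperm(grid * grid, device=images.device)
    patches = patches[:, :, perm]                           # permute patch order
    patches = patches.reshape(b, c, grid, grid, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

def guidance_loss(img_feats, class_text_feats, labels, temperature=0.07):
    """Cross-entropy over cosine similarities between image features and the
    adapter-refined class text embeddings used as classifier weights."""
    img_feats = F.normalize(img_feats, dim=-1)
    txt_feats = F.normalize(class_text_feats, dim=-1)
    logits = img_feats @ txt_feats.t() / temperature
    return F.cross_entropy(logits, labels)
```

In such a setup only the adapter parameters would be trained, so the text embedding space stays consistent across incremental tasks, while the fake OOD images can serve as negatives that sharpen the decision boundaries between similar classes.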
Original language: English
Number of pages: 13
Journal: IEEE Transactions on Multimedia
DOIs
Publication status: E-pub ahead of print - 21 Feb 2025

Keywords

  • continual learning
  • OOD detection
  • vision-language model
