
revisitingcil's Introduction

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

¹State Key Laboratory for Novel Software Technology, Nanjing University

²S-Lab, Nanyang Technological University

The code repository for "Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need" in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

@article{zhou2023revisiting,
    author = {Zhou, Da-Wei and Ye, Han-Jia and Zhan, De-Chuan and Liu, Ziwei},
    title = {Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need},
    journal = {arXiv preprint arXiv:2303.07338},
    year = {2023}
}

Updates

[03/2023] arXiv paper has been released.

[03/2023] Code has been released.

Introduction

Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. Contrary to traditional methods, PTMs possess generalizable embeddings, which can be easily transferred for CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transferring. 1) We first reveal that frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL) which continually sets the classifiers of PTM to prototype features can beat state-of-the-art even without training on the downstream task. 2) Due to the distribution gap between pre-trained and downstream datasets, PTM can be further cultivated with adaptivity via model adaptation. We propose ADapt And Merge (ADAM), which aggregates the embeddings of PTM and adapted models for classifier construction. ADAM is a general framework that can be orthogonally combined with any parameter-efficient tuning method, which holds the advantages of PTM’s generalizability and adapted model’s adaptivity. 3) Additionally, we find previous benchmarks are unsuitable in the era of PTM due to data overlapping and propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of ADAM with a unified and concise framework.

TL;DR

A simple baseline (SimpleCIL) beats SOTA even without training on the downstream task. ADapt And Merge (ADAM) extends SimpleCIL with better adaptivity and generalizability. Four new benchmarks are proposed for assessment.
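
To make the SimpleCIL idea above concrete, below is a minimal sketch of a prototype-based classifier on top of a frozen pre-trained backbone. It is illustrative only and not the repository's actual implementation: the checkpoint name, the use of cosine similarity, and the helper functions are assumptions.

import timm
import torch
import torch.nn.functional as F

# Frozen pre-trained backbone; num_classes=0 makes timm return pooled features.
# The checkpoint name is an assumption -- the paper uses ImageNet pre-trained ViTs.
backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
backbone.eval()

prototypes = {}  # class id -> normalized mean embedding, grown task by task

@torch.no_grad()
def add_task(images, labels):
    # For each new class, set its "classifier" to the class prototype (mean feature).
    feats = F.normalize(backbone(images), dim=-1)
    for c in labels.unique().tolist():
        prototypes[c] = F.normalize(feats[labels == c].mean(0), dim=-1)

@torch.no_grad()
def predict(images):
    # Nearest-prototype classification by cosine similarity over all classes seen so far.
    feats = F.normalize(backbone(images), dim=-1)
    classes = sorted(prototypes)
    proto = torch.stack([prototypes[c] for c in classes])  # [num_classes, dim]
    return torch.tensor(classes)[(feats @ proto.t()).argmax(dim=-1)]

In each incremental stage, add_task would be called on the new classes' training data and predict on all classes seen so far; since old prototypes are never modified, the classifier itself does not forget. ADAM additionally adapts the backbone on the downstream data with parameter-efficient tuning and aggregates the pre-trained and adapted embeddings before constructing these prototypes.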

Requirements

Environment

  1. torch 1.11.0
  2. torchvision 0.12.0
  3. timm 0.6.12
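
For example, a matching environment can usually be set up with pip (these are the standard PyPI package names; pick the torch build that matches your CUDA version):

pip install torch==1.11.0 torchvision==0.12.0 timm==0.6.12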

Dataset

We provide the processed datasets as follows:

  • CIFAR100: will be automatically downloaded by the code.
  • CUB200: Google Drive: link or Onedrive: link
  • ImageNet-R: Google Drive: link or Onedrive: link
  • ImageNet-A: Google Drive: link or Onedrive: link
  • OmniBenchmark: Google Drive: link or Onedrive: link
  • VTAB: Google Drive: link or Onedrive: link
  • ObjectNet: Onedrive: link. If the file is too large to download, you can also refer to the filelist.

These subsets are sampled from the original datasets. Please note that I do not have the right to distribute these datasets. If the distribution violates the license, I shall provide the filenames instead.

The md5sum information can be found in this issue.
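
To verify a download against those hashes, compute the checksum locally and compare (the filename below is only a placeholder):

md5sum cub.zip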

You need to modify the dataset paths in ./utils/data.py so that they point to your own data folders.
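
As a rough illustration of the kind of edit this means (the paths below are hypothetical, and the actual structure of ./utils/data.py may differ), the dataset folders are assumed to follow an ImageFolder-style layout:

from torchvision import datasets

# Hypothetical paths -- edit the corresponding hard-coded strings in ./utils/data.py
# so that they point to wherever you extracted each dataset.
train_dir = "/your/path/imagenet-r/train/"
test_dir = "/your/path/imagenet-r/test/"

# Quick sanity check that the paths resolve and the classes are found.
train_set = datasets.ImageFolder(train_dir)
test_set = datasets.ImageFolder(test_dir)
print(len(train_set), len(test_set), len(train_set.classes))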

Running scripts

Please follow the settings in the exps folder to prepare your JSON config files, and then run:

python main.py --config ./exps/[configname].json
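
For reference, such a config file looks roughly like the sketch below; the keys and values here are illustrative assumptions, so use the actual files in the exps folder as the source of truth:

{
    "prefix": "reproduce",
    "dataset": "cifar224",
    "model_name": "simplecil",
    "backbone_type": "vit_base_patch16_224",
    "init_cls": 10,
    "increment": 10,
    "device": ["0"],
    "seed": [1993]
}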

Acknowledgment

This repo is based on CIL_Survey and PyCIL.

The implementations of the parameter-efficient tuning methods are based on VPT, AdaptFormer, and SSF.

Correspondence

If you have any questions, please contact me via email or open an issue.

revisitingcil's Issues

What if the new classes are all outside the pre-training domain?

Hello! I'm doing some applied work that could benefit from CIL, and I've read your paper. The method is quite simple and impressive.

However, there is one thing I'm curious about. In your work, the new classes seem to be close to the PTM domain, which makes the method look like a form of transfer learning. If the new classes are all far from the pre-training classes (far from anything in ImageNet or CIFAR), can the method handle the domain gap? If not, can the incremental classes still reach the metrics reported in the paper?

ObjectNet dataset

Hi,

Wonderful work, thanks for sharing the code. Could you please share the script that splits the original ObjectNet dataset into train and test sets according to the filenames you provided?

Thanks!

About the OmniBenchmark dataset

I found that at least three identical classes are treated as different classes in your 300-class train split (screenshots omitted); the filenames of the corresponding pictures are even the same. Why does this happen?

Hugging Face cannot connect

Hi, thank you very much for your work. When I try to reproduce your code, Hugging Face cannot connect; how can I solve this? Can you update the code to make it work?

md5sum check for the dataset

Your work is truly inspiring.

For the dataset, I was wondering if the authors could provide the md5sum hashes so that I can verify that I have downloaded it correctly.

Thanks in advance.

BN Tuning

Hi!

First of all, wonderful work; it's very interesting and useful. I'm working with CNNs and I'm interested in Batch Normalization tuning for CIL, but I can't find its implementation in the code, something like "adam_bn_tuning" in the "models" folder.

I don't know if it is in another part of the code, but I'm not able to find it. It would be fantastic if you could help me with this.

Thanks in advance!
