Comments (3)

KingSpencer commented on June 2, 2024

Hi Prateek,

Thanks for your interest in our work and your great questions!

  1. I did not evaluate the standard few-shot and zero-shot settings for the base ViT model, since they are not directly related to continual learning. However, one of the baselines -- GDumb -- can be treated as a few-shot learning method: to my understanding, GDumb trains on the buffered data only, which is a subsample of the full dataset (see the sketch after this list).

  2. I believe I conducted such experiments but did not show them in the L2P paper. However, I have to say that task-specific prompts are not directly applicable to class-incremental learning, since there is no way to choose the right task-specific prompt at inference when the task ID is unknown (the second sketch below makes this concrete). If I remember correctly, task-specific prompts did a bit worse than L2P on CIFAR-100, but were comparable to or better than L2P on 5-datasets. Intuitively, task-specific prompts have no way to share knowledge between tasks, which might be the reason. Nevertheless, feel free to run your own experiments if you are interested, and correct me if I am wrong.
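
For concreteness, GDumb's buffering from point 1 is roughly the following. This is a hypothetical sketch with my own names, not the official code; the real method is described in Prabhu et al., "GDumb" (ECCV 2020), and it trains a model from scratch on the buffer contents alone.

```python
# Hypothetical sketch of GDumb-style greedy class-balanced buffering.
import random
from collections import defaultdict

class GreedyBalancedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = defaultdict(list)   # label -> stored samples

    def add(self, x, y):
        if sum(len(v) for v in self.per_class.values()) < self.capacity:
            self.per_class[y].append(x)      # room left: always keep
            return
        # Buffer full: evict from the currently largest class, but only if
        # the incoming sample's class holds fewer samples than it does.
        largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
        if len(self.per_class[y]) < len(self.per_class[largest]):
            victim = random.randrange(len(self.per_class[largest]))
            self.per_class[largest].pop(victim)
            self.per_class[y].append(x)
```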
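
And to make point 2 concrete, here is an illustrative PyTorch sketch of the difference (the actual implementation in this repo is JAX, and all names below are mine, not the repo's API):

```python
# Illustrative sketch: shared prompt pool vs. task-specific prompts.
import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    """L2P-style shared pool: each input picks its own prompts by matching
    a query feature against learned keys, so no task ID is needed."""
    def __init__(self, pool_size=10, top_k=5, prompt_len=5, dim=768):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):                       # query: [B, dim], e.g. the
        q = F.normalize(query, dim=-1)              # frozen ViT [CLS] feature
        k = F.normalize(self.keys, dim=-1)
        idx = (q @ k.T).topk(self.top_k, dim=-1).indices     # [B, top_k]
        return self.prompts[idx].flatten(1, 2)      # [B, top_k*prompt_len, dim]

# A task-specific variant would instead do `prompts = task_prompts[task_id]`,
# which works for task-incremental learning but is undefined in the
# class-incremental setting, where task_id is unknown at test time.
```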

Best,
Zifeng

prateeky2806 commented on June 2, 2024

The reason I asked for the zero/few-shot numbers is that I suspect the model might perform well even when we prepend random (or only slightly trained) vectors along with the input image, because the ViT model is pre-trained on ImageNet-21k and CIFAR-100 is very similar to it but easier. If the model already has good zero/few-shot performance, then that invalidates some of the claims made in the paper regarding continual learning and preventing forgetting; a rough version of this check is sketched below.
Furthermore, if the performance is not better than task-specific prompts, then the claim regarding sharing knowledge might not be well supported either. This comparison is completely skipped in the paper, even though I think it is the most important one to make.
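
Concretely, the sanity check I have in mind looks something like this (the timm model name and the token handling are my assumptions, not the paper's protocol):

```python
# Rough sketch: prepend random, untrained "prompt" tokens to a frozen
# ImageNet-21k ViT and measure CIFAR-100 accuracy with a simple probe.
# Giving the extra tokens no positional embedding is an assumption here.
import torch, timm

vit = timm.create_model("vit_base_patch16_224_in21k", pretrained=True).eval()
prompt = torch.randn(1, 5, vit.embed_dim) * 0.02   # random, never trained

@torch.no_grad()
def cls_feature(images):                            # images: [B, 3, 224, 224]
    x = vit.patch_embed(images)                     # [B, N, D] patch tokens
    cls = vit.cls_token.expand(x.shape[0], -1, -1)
    x = torch.cat([cls, x], dim=1) + vit.pos_embed  # standard token layout
    p = prompt.expand(x.shape[0], -1, -1)
    x = torch.cat([p, x], dim=1)                    # prepend random prompts
    x = vit.norm(vit.blocks(x))
    return x[:, p.shape[1]]                         # the [CLS] token's feature

# Fit a linear probe (or nearest-class-mean) on these features with zero or
# a few labeled CIFAR-100 images per class; high accuracy here would mean
# the frozen backbone, not the prompts, is doing most of the work.
```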

Thanks,
Prateek

zhangyuanscall commented on June 2, 2024

Doesn't DualPrompt validate in Section 5.4 of its paper that shared prefix-tuning is better?
