
cil_survey's People

Contributors

vita-qzh, wangkiw, zhoudw-zdw


cil_survey's Issues

Bugs in DER and FOSTER?

Thanks for sharing your work!
I'd be grateful if you could help me with a bug I hit when reproducing der.json and foster.json on VTAB.

Details are as follows:
File "LAMDA-PILOT-main/models/base.py", line 260, in _construct_exemplar
i = np.argmin(np.sqrt(np.sum((class_mean - mu_p) ** 2, axis=1)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "envs/pilot/lib/python3.11/site-packages/numpy/core/fromnumeric.py", line 1325, in argmin
return _wrapfunc(a, 'argmin', axis=axis, out=out, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pilot/lib/python3.11/site-packages/numpy/core/fromnumeric.py", line 59, in _wrapfunc
return bound(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^
ValueError: attempt to get argmin of an empty sequence
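
For context, np.argmin raises exactly this ValueError when its input array is empty, which suggests the exemplar candidate pool is exhausted at that point. A minimal reproduction (512 is an arbitrary feature dimension, chosen only for illustration):

import numpy as np

vectors = np.empty((0, 512))   # no candidate feature vectors left to pick
class_mean = np.zeros(512)
# Same failure mode as in _construct_exemplar: argmin over an empty distance array.
np.argmin(np.sqrt(np.sum((class_mean - vectors) ** 2, axis=1)))
# ValueError: attempt to get argmin of an empty sequence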

Labels for task>0

Hi.

Thank you for this great library for continual learning.

I wanted to ask about the label mapping at task 2 of class-incremental training. Say I am training a simple finetuning model.

I am training on CIFAR100 with an increment of 10 classes per task. The first task has labels [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; the second task has labels [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Do you map the second task's labels to [0, 1, ..., 9], or use some loss other than cross-entropy? If you map them, could you point me to the relevant code? And how do you handle this at inference time? A sketch of the mapping I mean follows below.
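
For reference, a minimal sketch of the two options I have in mind (my own illustration with made-up names, not necessarily what this repo does):

import torch
from torch.nn import functional as F

# Task 2 of CIFAR100 with increment 10: raw labels arrive as 10..19.
# All names here are illustrative, not taken from the repo.
known_classes = 10                       # classes seen in previous tasks
logits = torch.randn(4, 20)              # head covering all 20 seen classes
labels = torch.tensor([10, 13, 17, 19])  # raw labels for the current task

# Option A: shift labels into [0, 10) and slice the head to the new classes.
loss_mapped = F.cross_entropy(logits[:, known_classes:], labels - known_classes)

# Option B: keep the raw labels and train against the full head.
loss_full = F.cross_entropy(logits, labels)
print(loss_mapped, loss_full)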

Thanks in advance.

Problems with the fake target in the loss function

I ran into a problem when using your code on different datasets: the loss decreased to 0, but the training accuracy did not increase. The cause is that the loss only uses a slice of the logits in the last layer (the fake-target implementation), so the loss reaches zero while the predictions over the full head are still incorrect. This affects the methods that use custom losses (finetune, lwf, memo, ...).

For example, with 2 classes per task, at task 3 (training on classes 4-5):

import torch
from torch.nn import functional as F

# Logits over all 6 classes seen so far (3 tasks of 2 classes each).
logits = torch.tensor(
    [
        [-3.6737,   3.6739,   7.9670,  -6.8142,   7.5660,  -7.4853],
        [-3.8698,   3.8700,   8.3494,  -7.1354,   7.9662,  -7.8834],
        [3.0694,  -3.0693,  -6.3453,   4.8575, -14.1109,  14.8372],
        [3.0781,  -3.0780,  -6.3650,   4.8734, -14.1494,  14.8778],
        [-3.7971,   3.7972,   8.2072,  -7.0160,   7.8165,  -7.7347],
        [-3.7736,   3.7738,   8.1614,  -6.9775,   7.7684,  -7.6869],
        [-3.7105,   3.7107,   8.0386,  -6.8743,   7.6404,  -7.5594],
        [-3.7717,   3.7719,   8.1577,  -6.9744,   7.7645,  -7.6830],
        [-3.8698,   3.8700,   8.3495,  -7.1355,   7.9662,  -7.8834],
        [3.1876,  -3.1874,  -6.6086,   5.0743, -14.6059,  15.3588],
        [-3.7871,   3.7873,   8.1883,  -7.0001,   7.7980,  -7.7159],
        [3.1329,  -3.1327,  -6.4858,   4.9743, -14.3710,  15.1113],
        [3.1295,  -3.1293,  -6.4794,   4.9677, -14.3636,  15.1034],
        [-3.6428,   3.6429,   7.9025,  -6.7601,   7.4887,  -7.4108],
        [3.0746,  -3.0745,  -6.3574,   4.8670, -14.1357,  14.8634],
        [3.2236,  -3.2235,  -6.6936,   5.1387, -14.7841,  15.5461],
        [3.1713,  -3.1712,  -6.5750,   5.0436, -14.5523,  15.3020],
        [-3.7287,   3.7289,   8.0736,  -6.9037,   7.6759,  -7.5950],
        [3.0622,  -3.0621,  -6.3294,   4.8443, -14.0822,  14.8070],
        [3.2201,  -3.2200,  -6.6833,   5.1333, -14.7547,  15.5154],
        [-3.4313,   3.4315,   7.4917,  -6.4150,   7.0624,  -6.9858],
        [-3.6644,   3.6645,   7.9488,  -6.7989,   7.5468,  -7.4662]
    ]
)
# Ground-truth labels for the current task (classes 4 and 5).
target = torch.tensor(
    [4, 4, 5, 5, 4, 4, 4, 4, 4, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5, 4, 4],
    dtype=torch.int64,
)
print(target)

# Predictions over the full head: for every target-4 row the argmax lands
# on old class 2, so the model is wrong on more than half of the batch.
prediction = torch.max(logits, dim=1)
print(prediction)

# Fake-target trick: shift the labels into [0, 2) and compute cross-entropy
# on the current task's logit slice only. This loss is near zero even though
# the full-head predictions above are largely incorrect.
fake_target = target - 4
print(F.cross_entropy(logits[:, 4:], fake_target))
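
To make the mismatch explicit, a short check continuing from the tensors defined above:

# Full-head accuracy stays low even though the sliced loss printed above is
# near zero: only the target-5 rows are predicted correctly.
full_pred = logits.argmax(dim=1)
print((full_pred == target).float().mean())  # ~0.45 on this batch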

Accuracy logging information

Hi, I found that the 'increment' parameter of the 'accuracy' method in 'toolkit.py' always uses its default value of 10. Should the actual increment be passed in when the method is called from the base model? A sketch of why the default matters is below.
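
For illustration, a minimal sketch of the kind of per-increment breakdown I mean, with made-up names (the actual toolkit.accuracy may differ in details). If increment stays at 10 while a run uses, say, 20 classes per task, the class-range buckets below no longer line up with the actual tasks:

import numpy as np

def accuracy_per_increment(y_pred, y_true, increment=10):
    # Group accuracy into buckets of `increment` consecutive class labels.
    grouped = {}
    for start in range(0, int(y_true.max()) + 1, increment):
        mask = (y_true >= start) & (y_true < start + increment)
        if mask.any():
            acc = (y_pred[mask] == y_true[mask]).mean() * 100
            grouped[f"{start}-{start + increment - 1}"] = round(float(acc), 2)
    return grouped

y_true = np.array([0, 5, 12, 19, 25])
y_pred = np.array([0, 5, 12, 18, 25])
print(accuracy_per_increment(y_pred, y_true, increment=10))
# {'0-9': 100.0, '10-19': 50.0, '20-29': 100.0}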

About select methods

Hello, I think your work is very meaningful!
I would like to ask about the 16 selected methods mentioned in your paper. The transformer-based methods, such as DyTox, L2P, and DualPrompt, have not been included in this project. How can I reproduce those comparisons? Is there a plan to release them later?

Looking forward to your reply, best wishes!
