
Comments (9)

daddydrac commented on June 5, 2024

GMvandeVen commented on June 5, 2024

Thanks for your interest in the code! It is indeed a good suggestion to create an option to run the method with arbitrary datasets. I’ll try to look into whether I can add something like that, although unfortunately I won’t have time until at least next week.

For now, let me point out a few things that might be helpful in this regard:

  • Most components of the brain-inspired replay method, as described in the paper, are not specific to a particular input domain and could be applied to any arbitrary dataset. The only exception is the “internal replay” component, as this component relies on pre-trained convolutional layers.
  • One option is to not use the internal replay component; for example, for our experiments on permuted MNIST we did not use internal replay. In the code this can be achieved by setting the option --depth=0, which means that no convolutional layers are used. However, a few other small changes need to be made for the code to work on an arbitrary 1D dataset. For example, you’ll need to add your own dataset in the get_multitask_experiment() function:

        def get_multitask_experiment(name, scenario, tasks, data_dir="./store/datasets", normalize=False, augment=False,

    and in the list of choices for the --experiment option:

        task_params.add_argument('--experiment', type=str, default=task_default, choices=task_choices)

    and probably at a few other places as well. You’ll also need to add your dataset to the DATASET_CONFIGS dictionary. The code is currently set up to only expect data in image format, but if you set --depth=0 this is not actually a requirement of the models themselves, and a “hack” here would be to set `size` to 1 and `channels` to the number of input features in your 1D dataset (see the sketch after this list).
  • Another option could be to replace the pre-trained convolutional layers that we used in our study with another pre-trained feature extractor suitable for the input modality you are working with. This will require more changes to the code, and probably means you’ll need to get a bit more familiar with it yourself. I’d be very interested to hear about your experiences if you try this!
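
To make the 1D “hack” from the second bullet concrete, here is a minimal sketch of what a new entry in the DATASET_CONFIGS dictionary might look like. The key 'my1d', the feature count 30, and the class count 10 are illustrative assumptions, not values from the repository:

    # Minimal sketch of registering a 1D dataset (illustrative values only).
    # Following the "hack" above: 'size' is set to 1 and 'channels' to the
    # number of input features, so the image-shaped code path still works.
    DATASET_CONFIGS['my1d'] = {
        'size': 1,       # height/width of the pseudo-"image"
        'channels': 30,  # number of input features in the 1D dataset
        'classes': 10,   # number of output classes (assumed key)
    }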

Hope this helps a bit.


hockman1 commented on June 5, 2024

Thank you very much for the explanation! I have a question though; it seems like get_multitask_experiment() returns something like:

    Dataset MNIST
        Number of datapoints: 60000
        Root location: ./store/datasets/mnist
        Split: Train
        StandardTransform
    Transform: Compose(
                   ToTensor()
               )

And after looking deeper, the code seems to be calling torchvision.datasets.MNIST. So does this mean that I need to convert my dataset to some form of module similar to that? Thanks!


GMvandeVen commented on June 5, 2024

Sorry for the late reply to your follow-up question! (To explain the late reply, I got a notification when you initially posted your reply, but not when you edited it.)

To use this code on another dataset, you will indeed need to modify the get_multitask_experiment() function. I guess there are two options.
The first option would be to convert your dataset to some form of module similar to torchvision.datasets.MNIST; then you could leave the structure of the get_multitask_experiment() function largely the same (a sketch of this option follows below).
The other option, if you want to avoid such a conversion, would be to rewrite the get_multitask_experiment() function itself.
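
As a rough illustration of the first option, here is a minimal sketch of a PyTorch Dataset that mimics the interface of torchvision.datasets.MNIST. The class name MyDataset is hypothetical, and the assumption that the surrounding code mainly needs indexing, a length, and a targets attribute (e.g., for splitting the data by class) should be checked against the repository:

    # Minimal sketch (hypothetical, not from the repository) of wrapping
    # custom data in a Dataset that behaves like torchvision.datasets.MNIST.
    from torch.utils.data import Dataset

    class MyDataset(Dataset):
        def __init__(self, data, targets, transform=None):
            self.data = data          # tensor or array of inputs
            self.targets = targets    # one class label per input; assumed to
                                      # be used for splitting data by class
            self.transform = transform

        def __len__(self):
            return len(self.data)

        def __getitem__(self, index):
            sample, target = self.data[index], int(self.targets[index])
            if self.transform is not None:
                sample = self.transform(sample)
            return sample, target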


puneater commented on June 5, 2024

Hi @GMvandeVen,

I had a doubt regarding this: let’s say I add a custom image dataset to your framework. Should I then use the pre-trained convolutional layers for the custom dataset? Or are they (the pre-trained layers) specific to the CIFAR-100 dataset?


GMvandeVen commented on June 5, 2024

Hi, the pre-trained convolutional layers used in this repository are not necessarily specific to the CIFAR-100 dataset, but at the same time they might also not be the best choice for other image datasets. The convolutional layers I used were pre-trained on the CIFAR-10 dataset, which contains images of a similar type to those in CIFAR-100. For other types of image datasets (e.g., with larger input images), it might thus be a good idea to replace the convolutional layers with a different feature extractor.
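
For illustration only (generic PyTorch, not code from this repository), one way to obtain an alternative pre-trained feature extractor is to take the convolutional part of a torchvision model; wiring it into this codebase would additionally require matching the feature shape that the rest of the model expects:

    # Generic sketch: the convolutional part of an ImageNet-trained ResNet-18
    # as a frozen feature extractor (requires torchvision >= 0.13).
    import torch.nn as nn
    from torchvision import models

    resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    feature_extractor = nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool & fc
    for p in feature_extractor.parameters():
        p.requires_grad = False  # keep the extractor fixed, like the internal-replay layers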


puneater commented on June 5, 2024

Thanks for the quick reply @GMvandeVen!

I get it now.
So, can we opt out of using the pre-trained convolutional layers by not using the --pre-convE flag? Or does brain-inspired replay use these layers by default in its internal replay component?
Also, is brain-inspired replay the only algorithm that uses these layers by default?


GMvandeVen commented on June 5, 2024

In principle, the flag --pre-convE controls whether or not pre-trained convolutional layers are used. But it is indeed the case that if you use the flag --brain-inspired, the pre-trained convolutional layers are selected by default as well. If you want, you could change this behaviour here:

args.pre_convE = True #--> internal replay
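
As a hedged sketch (the exact location of this default-setting code is an assumption), making --brain-inspired respect the user’s --pre-convE choice would amount to commenting out that line:

    # args.pre_convE = True  #--> internal replay   <-- comment out or delete,
    # so that args.pre_convE keeps whatever value the --pre-convE flag gave it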

In my code the other algorithms do not use pre-trained convolutional layers by default, but in the comparisons on CIFAR-100 reported in the paper all compared algorithms did use the same pre-trained convolutional layers.


puneater commented on June 5, 2024

I got it.

Thanks! @GMvandeVen
