Comments (9)
from brain-inspired-replay.
Thanks for your interest in the code! It is indeed a good suggestion to create an option to run the method with arbitrary datasets. I'll try to look into whether I can add something like that, although unfortunately I won't have time until at least next week.
For now, let me point out a few things that might be helpful in this regard:
- Most components of the brain-inspired replay method, as described in the paper, are not specific to a particular input domain and could be applied to any arbitrary dataset. The only exception to this is the “internal replay” component, as this component relies on pre-trained convolutional layers.
- One option is to not use the internal replay component; for example, for our experiments on permuted MNIST we did not use internal replay. In the code this can be achieved by setting the option `--depth=0`, which means that no convolutional layers are used. However, there are a few other small changes that need to be made for the code to work on an arbitrary 1D dataset. For example, you'll need to add your own dataset here (brain-inspired-replay/data/load.py, line 83 in 05d175d; brain-inspired-replay/options.py, line 86 in 05d175d; brain-inspired-replay/data/available.py, line 50 in 05d175d). Also, even with `--depth=0` the code expects each dataset to specify an image `size` and a number of `channels`, but this is not actually a requirement of the models themselves and a "hack" here would be to set `size` to 1 and `channels` to the number of input features in your 1D dataset.
- Another option could be to replace the pre-trained convolutional layers that we used in our study with another pre-trained feature extractor suitable for the input modality you are working with. This will require more changes to the code, and probably means you'll need to get a bit more familiar with it yourself. I'd be very interested to hear about your experiences if you try this!
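To illustrate the `size`/`channels` "hack" above, here is a minimal, hypothetical sketch (the class name and shapes are my own, not part of the repository) of wrapping an arbitrary 1D dataset so that each sample looks like a 1x1 "image" whose channels are the input features:

```python
import torch
from torch.utils.data import Dataset

class Flat1DDataset(Dataset):
    """Wrap a (n_samples, n_features) array so each sample looks like a
    1x1 'image' with n_features channels (i.e. size=1, channels=n_features)."""

    def __init__(self, data, labels):
        # reshape to (n_samples, n_features, 1, 1)
        self.data = torch.as_tensor(data, dtype=torch.float32).view(len(data), -1, 1, 1)
        self.labels = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        return self.data[index], self.labels[index]

n_features = 20
dataset = Flat1DDataset(torch.randn(100, n_features), torch.zeros(100))
x, y = dataset[0]
print(x.shape)  # torch.Size([20, 1, 1])
```

You would then pass `size=1` and `channels=n_features` wherever the model configuration expects image dimensions.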
Hope this helps a bit.
Thank you very much for the explanation! I have a question though: it seems like get_multitask_experiment() returns something like:
Dataset MNIST
Number of datapoints: 60000
Root location: ./store/datasets/mnist
Split: Train
StandardTransform
Transform: Compose(
ToTensor()
)
and after looking deeper, the code seems to be calling torchvision.datasets.MNIST. So does this mean that I need to convert my dataset to some form of module similar to that? Thanks!
Sorry for the late reply to your follow-up question! (To explain the late reply, I got a notification when you initially posted your reply, but not when you edited it.)
To use this code on another dataset, you will indeed need to modify the get_multitask_experiment() function. I guess there are two options.
The first option would be to convert your dataset to some form of module similar to torchvision.datasets.MNIST; then you could leave the structure of the get_multitask_experiment() function largely the same.
Another option, if you want to avoid such a conversion, would be to rewrite the get_multitask_experiment() function itself.
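For the first option, a minimal sketch of what such a module could look like (the class name is my own, and the `data`/`targets`/`transform` attributes are chosen here to mirror the torchvision.datasets.MNIST interface; this is not code from the repository):

```python
import torch
from torch.utils.data import Dataset

class MyCustomDataset(Dataset):
    """Minimal stand-in mimicking the torchvision.datasets.MNIST interface:
    exposes .data and .targets and applies an optional transform per sample."""

    def __init__(self, data, targets, transform=None):
        self.data = torch.as_tensor(data)        # e.g. (N, H, W) images
        self.targets = torch.as_tensor(targets)  # (N,) integer class labels
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        img, target = self.data[index], int(self.targets[index])
        if self.transform is not None:
            img = self.transform(img)
        return img, target

ds = MyCustomDataset(torch.zeros(10, 28, 28), torch.arange(10) % 2)
img, target = ds[3]
print(len(ds), target)  # 10 1
```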
Hi @GMvandeVen ,
I had a doubt regarding this: let's say I add a custom image dataset to your framework. Should I then use the pre-trained convolutional layers for the custom dataset? Or are they (the pre-trained layers) specific to the CIFAR-100 dataset?
Hi, the pre-trained convolutional layers used in this repository are not necessarily specific to the CIFAR-100 dataset, but at the same time they might also not be the best choice for other image datasets. The convolutional layers I used were pre-trained on the CIFAR-10 dataset, which has images of a similar type to those in CIFAR-100. For other types of image datasets (e.g., with larger input images), it might thus be a good idea to replace the convolutional layers with a different feature extractor.
Thanks for the quick reply @GMvandeVen!
I get it now.
So, can we opt out of using the pre-trained convolutional layers if the --pre-convE flag is not used? Or does brain-inspired replay use these layers by default in the internal replay component?
Also, is brain-inspired replay the only algorithm which uses these layers by default?
In principle, the flag `--pre-convE` controls whether or not pre-trained convolutional layers are used. But it is indeed the case that if you use the flag `--brain-inspired`, the pre-trained convolutional layers are selected by default as well. If you want, you could change this behaviour here: brain-inspired-replay/options.py, line 280 in cf35a50.
In my code the other algorithms do not use pre-trained convolutional layers by default, but in the comparisons on CIFAR-100 reported in the paper all compared algorithms did use the same pre-trained convolutional layers.
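The default-selection behaviour described here can be sketched with argparse (a hypothetical illustration of the pattern, not the repository's actual options.py):

```python
import argparse

# Hypothetical sketch: a --brain-inspired flag that also switches on
# pre-trained conv layers by default, mirroring the behaviour described above.
parser = argparse.ArgumentParser()
parser.add_argument('--pre-convE', action='store_true')
parser.add_argument('--brain-inspired', action='store_true')

args = parser.parse_args(['--brain-inspired'])
if args.brain_inspired:
    # selecting brain-inspired replay implies pre-trained conv layers
    args.pre_convE = True

print(args.pre_convE)  # True
```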
I got it.
Thanks! @GMvandeVen