Comments (4)
- Why do the results have 18 tasks instead of 20?
A large volume of CL works still tune hyperparameters in an offline manner: sweeping over the whole data sequence and selecting the best hyperparameter set with grid search on a validation set, then reporting metrics on the test set with the selected hyperparameters. This tuning protocol violates the online CL setting, where a classifier can make only a single pass over the data, which implies that the reported results in the CL literature may be overly optimistic and cannot be reproduced in real online CL applications.
Thus, we use the first two tasks for hyperparameter tuning and report results on the remaining 18 tasks. For more information, please see Section 4 of our paper.
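The protocol can be sketched as a grid search that only ever sees the first two tasks. This is a minimal illustration, not the repo's actual code: `score_fn` is a hypothetical stand-in for "train online on the tuning tasks, then evaluate on their validation data".

```python
import itertools

def grid_search_prefix(grid, score_fn):
    """Enumerate every hyperparameter combination and keep the one
    that scores best on the tuning prefix (the first two tasks).
    score_fn(cfg) is assumed to run the online training/evaluation."""
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

# Toy usage with a made-up scoring function (larger buffer, smaller lr wins):
grid = {"lr": [0.1, 0.01], "buffer_size": [100, 500]}
best = grid_search_prefix(grid, lambda c: c["buffer_size"] - 100 * c["lr"])
# best == {"lr": 0.01, "buffer_size": 500}
```

The selected configuration is then fixed for the remaining tasks, so no information from the later data sequence leaks into the hyperparameter choice.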
- Is there a specified random seed to arrange the class order for these two datasets?
Since the task order and task composition may impact the performance, we take the average over multiple runs
for each experiment with different task orders and compositions to reliably assess the robustness of the methods. For CORe50-NC and CORe50-NI, we follow the number of runs (i.e., 10), task order, and composition provided by the authors. For Split CIFAR-100 and Split MiniImagenet, we average over 15 runs, and the class composition in each task is randomly selected for each run. The random seed for each run is the same as the run id.
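The seeding scheme above can be sketched as follows. The function name and exact splitting logic are illustrative assumptions, not the repo's API; the key point is that seeding the shuffle with the run id makes each run's class composition reproducible.

```python
import random

def class_splits(num_classes, num_tasks, run_id):
    """Shuffle the class order with seed == run_id, then split the
    classes evenly across tasks (e.g. Split CIFAR-100: 100 classes,
    20 tasks, 5 classes per task)."""
    rng = random.Random(run_id)  # the seed is simply the run id
    classes = list(range(num_classes))
    rng.shuffle(classes)
    per_task = num_classes // num_tasks
    return [classes[i * per_task:(i + 1) * per_task]
            for i in range(num_tasks)]

splits = class_splits(100, 20, run_id=0)  # identical on every machine
```

Re-running with the same run id reproduces the same splits, while different run ids yield different class compositions to average over.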
Please let me know if your questions are answered.
from online-continual-learning.
Thanks for the reply, that helps a lot. By the way, in the provided code, does the cl type 'nc' refer to the class-incremental setting (single-head classifier)?
from online-continual-learning.
We implemented the class-incremental (nc, which stands for "new class") and domain-incremental (ni, which stands for "new instance") settings.
Both use a single-head classifier.
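The difference between the two settings can be illustrated by how the label space evolves across tasks (names and numbers here are mine, not the repo's); in both cases a single head serves all classes, with no task id at test time.

```python
def classes_per_task(setting, num_classes=10, num_tasks=5):
    """Sketch of the two incremental settings:
    'nc' (new class):    each task introduces previously unseen classes.
    'ni' (new instance): every task covers the same classes, but brings
                         new data (e.g. new backgrounds in CORe50-NI)."""
    if setting == "nc":
        step = num_classes // num_tasks
        return [list(range(i * step, (i + 1) * step))
                for i in range(num_tasks)]
    elif setting == "ni":
        return [list(range(num_classes)) for _ in range(num_tasks)]
    raise ValueError(f"unknown setting: {setting}")

nc_tasks = classes_per_task("nc")  # e.g. [[0, 1], [2, 3], ...]
ni_tasks = classes_per_task("ni")  # same 10 classes in every task
```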
from online-continual-learning.
Closing now, feel free to reopen if you have more questions.
from online-continual-learning.