raptormai / online-continual-learning

A collection of online continual learning paper implementations and tricks for computer vision in PyTorch, including our ASER (AAAI-21), SCR (CVPR21-W) and an online continual learning survey (Neurocomputing).

continual-learning computer-vision deep-learning convolutional-neural-networks lifelong-learning incremental-learning catastrophic-forgetting online-continual-learning incremental-continual-learning class-incremental-learning

online-continual-learning's Introduction

Online Continual Learning

Official repository of ASER (AAAI-21), SCR (CVPR21-W), and the online continual learning survey (Neurocomputing).

Requirements

Create a virtual environment

virtualenv online-cl

Activate the virtual environment

source online-cl/bin/activate

Install the required packages

pip install -r requirements.txt

Datasets

Online Class Incremental

  • Split CIFAR10
  • Split CIFAR100
  • CORe50-NC
  • Split Mini-ImageNet

Online Domain Incremental

  • NonStationary-MiniImageNet (Noise, Occlusion, Blur)
  • CORe50-NI

Data preparation

  • CIFAR10 & CIFAR100 will be downloaded during the first run
  • CORe50 download: source fetch_data_setup.sh
  • Mini-ImageNet: Download from https://www.kaggle.com/whitemoon/miniimagenet/download and place it in datasets/mini_imagenet/ (a quick placement check is sketched after this list)
  • NonStationary-MiniImageNet will be generated on the fly
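
Before the first run, it can help to sanity-check the Mini-ImageNet placement. Below is a minimal sketch; the pickle filenames are an assumption based on the common Kaggle distribution of this dataset and may need adjusting to match your download.

from pathlib import Path

# Check that the Mini-ImageNet download landed where the repo expects it.
# NOTE: the cache filenames below are an assumption based on the usual
# Kaggle distribution of this dataset; adjust them to match your download.
data_dir = Path("datasets/mini_imagenet")
for split in ("train", "val", "test"):
    path = data_dir / f"mini-imagenet-cache-{split}.pkl"
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")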

Algorithms

  • ASER: Adversarial Shapley Value Experience Replay (AAAI, 2021) [Paper]
  • EWC++: Efficient and online version of Elastic Weight Consolidation (EWC) (ECCV, 2018) [Paper]
  • iCaRL: Incremental Classifier and Representation Learning (CVPR, 2017) [Paper]
  • LwF: Learning without forgetting (ECCV, 2016) [Paper]
  • AGEM: Averaged Gradient Episodic Memory (ICLR, 2019) [Paper]
  • ER: Experience Replay (ICML Workshop, 2019) [Paper]
  • MIR: Maximally Interfered Retrieval (NeurIPS, 2019) [Paper]
  • GSS: Gradient-Based Sample Selection (NeurIPS, 2019) [Paper]
  • GDumb: Greedy Sampler and Dumb Learner (ECCV, 2020) [Paper]
  • CN-DPM: Continual Neural Dirichlet Process Mixture (ICLR, 2020) [Paper]
  • SCR: Supervised Contrastive Replay (CVPR Workshop, 2021) [Paper]

Tricks

Run commands

Detailed descriptions of options can be found in general_main.py

Sample commands to run algorithms on Split-CIFAR100

#ER
python general_main.py --data cifar100 --cl_type nc --agent ER --retrieve random --update random --mem_size 5000

#MIR
python general_main.py --data cifar100 --cl_type nc --agent ER --retrieve MIR --update random --mem_size 5000

#GSS
python general_main.py --data cifar100 --cl_type nc --agent ER --retrieve random --update GSS --eps_mem_batch 10 --gss_mem_strength 20 --mem_size 5000

#LwF
python general_main.py --data cifar100 --cl_type nc --agent LWF 

#iCaRL
python general_main.py --data cifar100 --cl_type nc --agent ICARL --retrieve random --update random --mem_size 5000

#EWC++
python general_main.py --data cifar100 --cl_type nc --agent EWC --fisher_update_after 50 --alpha 0.9 --lambda_ 100

#GDumb
python general_main.py --data cifar100 --cl_type nc --agent GDUMB --mem_size 1000 --mem_epoch 30 --minlr 0.0005 --clip 10

#AGEM
python general_main.py --data cifar100 --cl_type nc --agent AGEM --retrieve random --update random --mem_size 5000

#CN-DPM
python general_main.py --data cifar100 --cl_type nc --agent CNDPM --stm_capacity 1000 --classifier_chill 0.01 --log_alpha -300

#ASER
python general_main.py --data cifar100 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 5000 --aser_type asvm --n_smp_cls 1.5 --k 3 

#SCR
python general_main.py --data cifar100 --cl_type nc --agent SCR --retrieve random --update random --mem_size 5000 --head mlp --temp 0.07 --eps_mem_batch 100

Sample command to add a trick to memory-based methods

python general_main.py --review_trick True --data cifar100 --cl_type nc --agent ER --retrieve MIR --update random --mem_size 5000 

Sample command to run hyperparameter tuning

python main_tune.py --general config/general_1.yml --data config/data/cifar100/cifar100_nc.yml --default config/agent/mir/mir_1k.yml --tune config/agent/mir/mir_tune.yml

There are four config files that control the experiment (a minimal sketch of how they might be merged follows the list).

  • general config controls variables that are not changed during the experiment
  • data config controls variables related to the dataset
  • default method config controls variables for a specific method that are not changed during the experiment
  • method tuning config controls variables that are used for tuning during the experiment
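
For intuition, here is a minimal sketch of how such a layered setup might be merged into a single parameter namespace. The file paths come from the sample command above, but the loader and merge order (general, then data, then method defaults) are assumptions; main_tune.py is the authoritative implementation.

from types import SimpleNamespace

import yaml

def load_merged(*paths):
    # Later files override earlier ones (assumed order: general -> data -> default).
    merged = {}
    for path in paths:
        with open(path) as f:
            merged.update(yaml.safe_load(f) or {})
    return SimpleNamespace(**merged)

params = load_merged(
    "config/general_1.yml",
    "config/data/cifar100/cifar100_nc.yml",
    "config/agent/mir/mir_1k.yml",
)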

Repo Structure & Description

├──agents                       #Files for different algorithms
    ├──base.py                      #Abstract class for algorithms
    ├──agem.py                      #File for A-GEM
    ├──cndpm.py                     #File for CN-DPM
    ├──ewc_pp.py                    #File for EWC++
    ├──exp_replay.py                #File for ER, MIR and GSS
    ├──gdumb.py                     #File for GDumb
    ├──iCaRL.py                     #File for iCaRL
    ├──lwf.py                       #File for LwF
    ├──scr.py                       #File for SCR

├──continuum                    #Files for creating the data stream objects
    ├──dataset_scripts              #Files for processing each specific dataset
        ├──dataset_base.py              #Abstract class for dataset
        ├──cifar10.py                   #File for CIFAR10
        ├──cifar100.py                  #File for CIFAR100
        ├──core50.py                    #File for CORe50
        ├──mini_imagenet.py             #File for Mini_ImageNet
        ├──openloris.py                 #File for OpenLORIS
    ├──continuum.py             
    ├──data_utils.py
    ├──non_stationary.py

├──models                       #Files for backbone models
    ├──ndpm                         #Files for models of CN-DPM 
        ├──...
    ├──pretrained.py                #File for pre-trained models
    ├──resnet.py                    #File for ResNet

├──utils                        #Files for utilities
    ├──buffer                       #Files related to buffer
        ├──aser_retrieve.py             #File for ASER retrieval
        ├──aser_update.py               #File for ASER update
        ├──aser_utils.py                #File for utilities for ASER
        ├──buffer.py                    #Abstract class for buffer
        ├──buffer_utils.py              #General utilities for all the buffer files
        ├──gss_greedy_update.py         #File for GSS update
        ├──mir_retrieve.py              #File for MIR retrieval
        ├──random_retrieve.py           #File for random retrieval
        ├──reservoir_update.py          #File for random update

    ├──global_vars.py               #Global variables for CN-DPM
    ├──io.py                        #Code for loading and storing csv or yaml
    ├──kd_manager.py                #File for knowledge distillation
    ├──name_match.py                #Match name strings to objects 
    ├──setup_elements.py            #Set up and initialize basic elements
    ├──utils.py                     #File for general utilities

├──config                       #Config files for hyper-parameters tuning
    ├──agent                        #Config files related to agents
    ├──data                         #Config files related to dataset

    ├──general_*.yml                #General yml (fixed variables, not tuned)
    ├──global.yml                   #paths to store results 

Reproducing results

The hyperparameters used in the ASER and SCR papers can be found in the config_CVPR folder and can be used to reproduce the papers' results.

Citation

If you use this paper/code in your research, please consider citing us:

Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

Accepted at the CVPR 2021 Workshop.

@inproceedings{mai2021supervised,
  title={Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning},
  author={Mai, Zheda and Li, Ruiwen and Kim, Hyunwoo and Sanner, Scott},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3589--3599},
  year={2021}
}

Online Continual Learning in Image Classification: An Empirical Survey

Published in Neurocomputing (official version); preprint available on arXiv.

@article{MAI202228,
title = {Online continual learning in image classification: An empirical survey},
journal = {Neurocomputing},
volume = {469},
pages = {28--51},
year = {2022},
issn = {0925-2312},
doi = {https://doi.org/10.1016/j.neucom.2021.10.021},
url = {https://www.sciencedirect.com/science/article/pii/S0925231221014995},
author = {Zheda Mai and Ruiwen Li and Jihwan Jeong and David Quispe and Hyunwoo Kim and Scott Sanner}
}

Online Class-Incremental Continual Learning with Adversarial Shapley Value

Accepted at AAAI 2021.

@inproceedings{shim2021online,
  title={Online Class-Incremental Continual Learning with Adversarial Shapley Value},
  author={Shim, Dongsub and Mai, Zheda and Jeong, Jihwan and Sanner, Scott and Kim, Hyunwoo and Jang, Jongseong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={11},
  pages={9630--9638},
  year={2021}
}

Contact & Contribution

Acknowledgments

Note

The PyTorch implementation of ASER in this repository is more efficient than the original TensorFlow implementation and achieves better performance. The results of the ASER paper can be reproduced with the original TensorFlow implementation repository.

online-continual-learning's People

Contributors

raptormai


online-continual-learning's Issues

Error running main_tune.py - Trouble exporting results

I have just cloned the repo, successfully installed all the requirements in an Anaconda environment, and run the command for hyperparameter tuning as shown in the README:

python main_tune.py --general config/general_1.yml --data config/data/cifar100/cifar100_nc.yml --default config/agent/mir/mir_1k.yml --tune config/agent/mir/mir_tune.yml

The above command produced the following error:

AttributeError: 'types.SimpleNamespace' object has no attribute 'buffer_tracker'

Any idea how to resolve this? My main goal is to run the example commands and save the training and evaluation results locally.
I decided to run the hyperparameter tuning command because running general_main.py did not save any results even though it ran successfully (I tried setting the --store parameter to True).

The complete error from running the hyperparameter tuning command:

Traceback (most recent call last):
  File "main_tune.py", line 59, in <module>
    main(args)
  File "main_tune.py", line 41, in main
    multiple_run_tune_separate(final_default_params, tune_params, args.save_path)
  File "K:\online-continual-learning\experiment\run.py", line 218, in multiple_run_tune_separate
    single_tune(data_continuum, default_params, tune_params, params_keep, tmp_acc, run)
  File "K:\online-continual-learning\experiment\run.py", line 252, in single_tune
    best_params = tune_hyper(tune_data, tune_test_loaders, default_params, tune_params, )
  File "K:\online-continual-learning\experiment\tune_hyperparam.py", line 26, in tune_hyper
    agent = agents[final_params.agent](model, opt, final_params)
  File "K:\online-continual-learning\agents\exp_replay.py", line 13, in __init__
    self.buffer = Buffer(model, params)
  File "K:\online-continual-learning\utils\buffer\buffer.py", line 33, in __init__
    if self.params.buffer_tracker:
AttributeError: 'types.SimpleNamespace' object has no attribute 'buffer_tracker'
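
A hedged workaround, pending an upstream fix, is to default any attribute the tuning configs omit before the agent is constructed; buffer_tracker=False matches the default visible in the Namespace dump in the ASER issue further down this page. A minimal sketch:

from types import SimpleNamespace

# Stand-in for the namespace main_tune.py builds from the yml files (hypothetical).
final_params = SimpleNamespace(agent="ER", retrieve="MIR")

# Default any attribute the tuning configs omit before the Buffer is constructed.
# buffer_tracker=False matches the default seen in general_main.py's argparse dump.
for key, value in {"buffer_tracker": False}.items():
    if not hasattr(final_params, key):
        setattr(final_params, key, value)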

Inquiry on Online Class-IL scenario

Hi. Thanks for this wonderful work. I want to ask how the code works in the online class-incremental learning setting.

In the Class-IL setting, the number of classes to predict dynamically increases with the number of tasks. For example, in Split CIFAR-100, the model classifies 10 classes in Task 1, 20 classes in Task 2, and so on. So the dimension of the logits (the number of active neurons in the final fc layer) should be 10 and 20, respectively.

However, in models/resnet.py, the number of neurons in the final linear layer is set and fixed to num_classes once the model is created (100 for CIFAR-100). During training, the logits always seem to be calculated with shape (batch_size, num_classes), no matter which task it is. Take ER (agents/exp_replay.py) as an example:

for j in range(self.mem_iters):
    logits = self.model.forward(batch_x)
    loss = self.criterion(logits, batch_y)
    ...
    _, pred_label = torch.max(logits, 1)
The shape of the logits is (batch_size, num_classes), which means the output neurons corresponding to classes from future tasks can also be activated. But the model should only classify a sample as one of the classes observed so far. So I am confused about how the model implements class-incremental learning. Maybe I got it wrong somewhere; I would really appreciate it if anyone could help me understand this.
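
For reference, one common way to restrict a fixed-size head to the classes seen so far is to mask the logits of unseen classes before taking predictions or the loss. This is a generic sketch of that idea, not the repository's actual mechanism:

import torch

def mask_unseen(logits, seen_classes):
    # Push logits of classes not yet observed to -inf so they can never win.
    mask = torch.full_like(logits, float("-inf"))
    mask[:, seen_classes] = 0.0
    return logits + mask

logits = torch.randn(4, 100)               # (batch_size, num_classes)
seen = torch.tensor([0, 1, 2, 3])          # classes observed so far (hypothetical)
_, pred_label = mask_unseen(logits, seen).max(1)  # predictions restricted to `seen`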

Can I find a way to reproduce the experimental results in the paper?

First, thank you for your great effort in your survey and code implementation : )

I am using it for my research and I have a question: I tried to reproduce the results in your paper.

In Table 11 on the ODI experiment, the average accuracy of ER with 1k memory on the mini-imagenet-noise dataset is reported as 19.4; however, in my experiment it reaches only about 10 to 11. And it is not only this entry: other methods also perform differently in my experiments. Although I have analyzed the hyperparameters in your repo and applied them to mine, nothing changed.

May I ask for your help reproducing the exact results in Table 11?

Question about distillation loss implementation of kd_manager.py

Hi. Thanks for your wonderful work. ER, ASER, SCR and your survey have enlightened me a lot and introduced me to the trends in online class-incremental learning. Here I want to ask about the code in kd_manager.py: why do we need to choose the logits of new classes to distill?

I found that "End-to-End Incremental Learning" distills using the logits of old classes on all samples. But for others, such as BiC, I only know that they use the logits of old classes; did they distill on all samples, or only on samples from old classes?

I also want to ask about the differences and relative performance of the KD loss variants, especially for replay methods: using the logits of old classes on all samples, using the logits of old classes only on old-class samples, using all logits only on old-class samples, and using all logits on all samples.

I would really appreciate it if you could help me understand this. May joy and health be with you always.
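
For concreteness, here is a generic sketch of one of the variants discussed above, distilling only the old-class logits with soft targets at temperature T; the slicing convention and temperature are assumptions, and kd_manager.py remains the authoritative implementation:

import torch
import torch.nn.functional as F

def kd_loss_old_classes(student_logits, teacher_logits, n_old, T=2.0):
    # Distill only the first n_old (old-class) logits at temperature T.
    s = F.log_softmax(student_logits[:, :n_old] / T, dim=1)
    t = F.softmax(teacher_logits[:, :n_old] / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)

# Toy usage: 4 samples, 10 classes total, first 6 classes are "old" (hypothetical).
loss = kd_loss_old_classes(torch.randn(4, 10), torch.randn(4, 10), n_old=6)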

Implementation error of your code

When I tried to run your code, I encountered the following error. Do you know the reason?

-----------run 0-----------avg_end_acc 0.13230000000000003-----------train time 81.69362449645996
/home/slcheng/anaconda3/envs/votenet/lib/python3.7/site-packages/numpy/core/_methods.py:262: RuntimeWarning: Degrees of freedom <= 0 for slice
keepdims=keepdims, where=where)
/home/slcheng/anaconda3/envs/votenet/lib/python3.7/site-packages/numpy/core/_methods.py:253: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
----------- Total 1 run: 83.46887731552124s -----------
----------- Avg_End_Acc (0.13230000000000003, nan) Avg_End_Fgt (0.14880000000000002, nan) Avg_Acc (0.2081611111111111, nan) Avg_Bwtp (0.0, nan) Avg_Fwt (0.0, nan)-----------

A question about EWC++ implementation

Hello author,
Thanks for releasing the code for your paper.

After reading your paper and code, I wonder whether the best lambda for EWC++ is 0 in your implementation, because your CVPR config file sets lambda to 0.

If so, isn't that just fine-tuning?
Thank you :-)
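
For reference, the EWC-style objective adds a quadratic penalty lambda * sum_i F_i (theta_i - theta*_i)^2 to the task loss, so lambda = 0 does remove the regularizer and leaves plain fine-tuning. A minimal sketch of the penalty (generic form, not the repo's exact code):

import torch
from torch import nn

def ewc_penalty(model, fisher, star_params, lambda_):
    # Quadratic penalty around the old parameters, weighted by Fisher information.
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - star_params[name]) ** 2).sum()
    return lambda_ * loss

# Toy usage with a stand-in model (hypothetical shapes).
m = nn.Linear(3, 2)
star = {n: p.detach().clone() for n, p in m.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in m.named_parameters()}
print(ewc_penalty(m, fisher, star, lambda_=0.0))  # tensor(0.): plain fine-tuning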

Best hyperparameters

Hi, is it possible to have the hyperparameters used for the results reported in the paper "Online Continual Learning in Image Classification: An Empirical Survey"? I'm interested in the CORe50 results with iCaRL.
Thank you in advance.

Run error

[screenshot of the error message]

Hello, I followed the instructions in the README to run your code, and the above error occurred. What could be the reason?

Small possible bug

In the evaluate() function in agents/base.py, line 171 is:

_, pred_label = dists.min(1)

Shouldn't there be a:

pred_label = torch.tensor(self.old_labels, dtype=batch_y.dtype, device=batch_y.device)[pred_label]

Right after line 171? Otherwise the error_analysis part of this function, which uses pred_label to see which examples are assigned to old classes vs. new classes, will use a pred_label value that makes no sense in that case.

Then, to make the rest of the code consistent with my suggested addition, lines 175-176:
correct_cnt = (np.array(self.old_labels)[pred_label.tolist()] == batch_y.cpu().numpy()).sum().item() / batch_y.size(0)

Would have to be changed to:

correct_cnt = (pred_label == batch_y).sum().item() / batch_y.size(0)
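
A tiny worked example of the indexing point, with hypothetical values: dists.min(1) returns positions into old_labels, not class ids, so the mapping step matters whenever old_labels is not simply [0, 1, 2, ...]:

import torch

old_labels = [7, 2, 9]                   # class ids in the order their means are stored
dists = torch.tensor([[0.3, 0.1, 0.8]])  # distances from one sample to the 3 means
_, pred_idx = dists.min(1)               # tensor([1]): an index, not a class id
pred_label = torch.tensor(old_labels)[pred_idx]  # tensor([2]): the actual class id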

Training ImageNet-1K

Hi, thanks for the amazing work. I enjoyed reading your paper.
While running experiments I wanted to train on ImageNet-1K using some of the baseline methods you have kindly shared implementations for. However, I noticed that ImageNet-100 is loaded from a pickle file, unlike frameworks that use a dataloader to load ImageNet batches in phases. The problem is that with ImageNet-1K it is not possible to load all the data at once.
My question is: how can I use dataloaders in the Mini_ImageNet class in continuum/dataset_scripts in a way that does not disrupt the other functions in the relevant code?

class Mini_ImageNet(DatasetBase):

    def __init__(self, scenario, params):
        ...

    def download_load(self):
        ...

    def new_task(self, cur_task, **kwargs):
        ...
        elif self.scenario == 'nc':
            labels = self.task_labels[cur_task]
            x_train, y_train = load_task_with_labels(self.train_data, self.train_label, labels)
        return x_train, y_train, labels
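
One generic way to avoid loading ImageNet-1K into memory is to keep only file paths and labels per task and decode images lazily in a torch Dataset. The sketch below is independent of the repo's Mini_ImageNet class and would still need wiring into the continuum code; all names are hypothetical:

from PIL import Image
from torch.utils.data import Dataset

class LazyImageTask(Dataset):
    """Holds paths and labels for one task; decodes each image on access."""

    def __init__(self, paths, labels, transform=None):
        self.paths, self.labels, self.transform = paths, labels, transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        img = Image.open(self.paths[i]).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[i]

# new_task() could then return (paths, labels) instead of arrays, and the agent
# would iterate DataLoader(LazyImageTask(paths, labels), batch_size=10).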

Model saving and load

I was wondering, how do we save the model in this framework? Is it the same as saving a torch model?
And can we test the model after the training cycle is done?
An example would be great.
Thank you
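
For reference, the backbone the agents train is a regular torch.nn.Module, so standard PyTorch checkpointing should apply. A minimal sketch with a stand-in module (the attribute holding the backbone is an assumption; see agents/base.py):

import torch
from torch import nn

model = nn.Linear(8, 2)  # stand-in for the trained backbone (e.g. agent.model)

torch.save(model.state_dict(), "checkpoint.pth")        # save after training

restored = nn.Linear(8, 2)                              # rebuild the same architecture
restored.load_state_dict(torch.load("checkpoint.pth"))  # load the trained weights
restored.eval()                                         # switch to eval mode to test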

Question about the results on the survey paper

Hi, for the online survey paper, I noticed that the average accuracy plots for Split CIFAR-100 and Split mini-ImageNet only show results for tasks 1 to 18, but each of them has 20 tasks. If the first task is used to build the initial model, shouldn't there be results for the remaining 19 tasks? In addition, is there a specific random seed used to arrange the class order for these two datasets? For example, random seed 1993 is widely applied in offline class-incremental learning. Thanks, and I look forward to your reply.

Online Continual Hyperparameter Tuning

Hi, thanks for your nice work. I have a question about the hyperparameter tuning:

Regarding hyperparameter tuning, the paper describes it this way:

A data stream is divided into two sub-streams — $D_{CV}$ , the stream for cross-validation, and $D_{EV}$ , the stream for final training and evaluation. Multiple passes over $D_{CV}$ are allowed for tuning, but a CL algorithm can only perform a single pass over $D_{EV}$ for training.

However, when I run the code:

python main_tune.py --general config/general_1.yml --data config/data/cifar100/cifar100_nc.yml --default config/agent/mir/mir_1k.yml --tune config/agent/mir/mir_tune.yml

I noticed that the code doesn't seem to distinguish between $D_{CV}$ and $D_{EV}$, and the model is tuned on all 20 tasks.


Are the hyperparameters selected from the total data, i.e., $D_{CV} + D_{EV}$, in the code?
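
For reference, the protocol quoted above amounts to splitting the task sequence itself before any tuning happens; a minimal sketch of such a split (the task count and split point are hypothetical):

tasks = list(range(20))   # e.g. the 20 tasks of Split CIFAR-100
n_cv = 2                  # tasks reserved for cross-validation (assumed split point)

d_cv, d_ev = tasks[:n_cv], tasks[n_cv:]
# Multiple passes over d_cv are allowed for tuning; the final model then takes a
# single pass over d_ev for training and evaluation, per the quoted protocol.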

Results of the ASER methods

Hi, I ran your ASER code on CIFAR-10; however, the experimental results are far from the paper's. The result of ASER with M=1k is 43.5±1.4 in the paper, but I got 31.20±1.6. Can you give me some help? The log is as follows:

(online-learning) CUDA_VISIBLE_DEVICES=1 python general_main.py --data cifar10 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 1000 --aser_type asvm --n_smp_cls 1.5 --k 3 --num_tasks 5
/home/anaconda3/envs/online-learning/lib/python3.6/site-packages/kornia/augmentation/augmentation.py:1833: DeprecationWarning: GaussianBlur is no longer maintained and will be removed from the future versions. Please use RandomGaussianBlur instead.
category=DeprecationWarning,
Namespace(agent='ER', alpha=0.9, aser_type='asvm', batch=10, buffer_tracker=False, cl_type='nc', classifier_chill=0.01, clip=10.0, cuda=True, cumulative_delta=False, data='cifar10', epoch=1, eps_mem_batch=10, error_analysis=False, fisher_update_after=50, fix_order=False, gss_batch_size=10, gss_mem_strength=10, head='mlp', k=3, kd_trick=False, kd_trick_star=False, labels_trick=False, lambda_=100, learning_rate=0.1, log_alpha=-300, mem_epoch=70, mem_iters=1, mem_size=1000, min_delta=0.0, minlr=0.0005, n_smp_cls=1.5, ncm_trick=False, ns_factor=(0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6), ns_task=(1, 1, 2, 2, 2, 2), ns_type='noise', num_runs=15, num_runs_val=3, num_tasks=5, num_val=3, online=True, optimizer='SGD', patience=0, plot_sample=False, retrieve='ASER', review_trick=False, save_path=None, seed=0, separated_softmax=False, stm_capacity=1000, store=False, subsample=50, temp=0.07, test_batch=128, update='ASER', val_size=0.1, verbose=True, warmup=4, weight_decay=0)
Setting up data stream
Files already downloaded and verified
Files already downloaded and verified
data setup time: 1.858407735824585
Task: 0, Labels:[2, 8]
Task: 1, Labels:[4, 9]
Task: 2, Labels:[1, 6]
Task: 3, Labels:[7, 3]
Task: 4, Labels:[0, 5]
buffer has 1000 slots
-----------run 0 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.791308, running train acc: 0.350
==>>> it: 1, mem avg. loss: 1.185606, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.786752, running train acc: 0.713
==>>> it: 101, mem avg. loss: 0.593476, running mem acc: 0.768
==>>> it: 201, avg. loss: 0.661137, running train acc: 0.744
==>>> it: 201, mem avg. loss: 0.539796, running mem acc: 0.790
==>>> it: 301, avg. loss: 0.591301, running train acc: 0.763
==>>> it: 301, mem avg. loss: 0.484983, running mem acc: 0.812
==>>> it: 401, avg. loss: 0.546058, running train acc: 0.780
==>>> it: 401, mem avg. loss: 0.439967, running mem acc: 0.832
==>>> it: 501, avg. loss: 0.500293, running train acc: 0.801
==>>> it: 501, mem avg. loss: 0.400392, running mem acc: 0.848
==>>> it: 601, avg. loss: 0.471567, running train acc: 0.814
==>>> it: 601, mem avg. loss: 0.369235, running mem acc: 0.860
==>>> it: 701, avg. loss: 0.453698, running train acc: 0.822
==>>> it: 701, mem avg. loss: 0.344221, running mem acc: 0.871
==>>> it: 801, avg. loss: 0.439380, running train acc: 0.826
==>>> it: 801, mem avg. loss: 0.320799, running mem acc: 0.881
==>>> it: 901, avg. loss: 0.422853, running train acc: 0.833
==>>> it: 901, mem avg. loss: 0.300931, running mem acc: 0.888
[0.909 0. 0. 0. 0. ]
-----------run 0 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.045866, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.149474, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.746864, running train acc: 0.788
==>>> it: 101, mem avg. loss: 1.092016, running mem acc: 0.532
==>>> it: 201, avg. loss: 0.615448, running train acc: 0.811
==>>> it: 201, mem avg. loss: 1.041162, running mem acc: 0.545
==>>> it: 301, avg. loss: 0.585565, running train acc: 0.817
==>>> it: 301, mem avg. loss: 1.002848, running mem acc: 0.552
==>>> it: 401, avg. loss: 0.552771, running train acc: 0.824
==>>> it: 401, mem avg. loss: 0.965850, running mem acc: 0.560
==>>> it: 501, avg. loss: 0.524904, running train acc: 0.830
==>>> it: 501, mem avg. loss: 0.927491, running mem acc: 0.576
==>>> it: 601, avg. loss: 0.515847, running train acc: 0.831
==>>> it: 601, mem avg. loss: 0.884921, running mem acc: 0.593
==>>> it: 701, avg. loss: 0.502114, running train acc: 0.834
==>>> it: 701, mem avg. loss: 0.837027, running mem acc: 0.617
==>>> it: 801, avg. loss: 0.486156, running train acc: 0.837
==>>> it: 801, mem avg. loss: 0.811804, running mem acc: 0.627
==>>> it: 901, avg. loss: 0.470198, running train acc: 0.840
==>>> it: 901, mem avg. loss: 0.782918, running mem acc: 0.638
[0.194 0.9245 0. 0. 0. ]
-----------run 0 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.129180, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.466675, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.788531, running train acc: 0.808
==>>> it: 101, mem avg. loss: 0.875932, running mem acc: 0.603
==>>> it: 201, avg. loss: 0.609030, running train acc: 0.842
==>>> it: 201, mem avg. loss: 0.837126, running mem acc: 0.600
==>>> it: 301, avg. loss: 0.551524, running train acc: 0.846
==>>> it: 301, mem avg. loss: 0.775877, running mem acc: 0.637
==>>> it: 401, avg. loss: 0.516197, running train acc: 0.846
==>>> it: 401, mem avg. loss: 0.728419, running mem acc: 0.661
==>>> it: 501, avg. loss: 0.487740, running train acc: 0.851
==>>> it: 501, mem avg. loss: 0.681168, running mem acc: 0.686
==>>> it: 601, avg. loss: 0.462485, running train acc: 0.855
==>>> it: 601, mem avg. loss: 0.647602, running mem acc: 0.705
==>>> it: 701, avg. loss: 0.445158, running train acc: 0.857
==>>> it: 701, mem avg. loss: 0.621094, running mem acc: 0.721
==>>> it: 801, avg. loss: 0.435222, running train acc: 0.859
==>>> it: 801, mem avg. loss: 0.588846, running mem acc: 0.738
==>>> it: 901, avg. loss: 0.423762, running train acc: 0.861
==>>> it: 901, mem avg. loss: 0.565514, running mem acc: 0.749
[0.0665 0.0715 0.957 0. 0. ]
-----------run 0 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.825265, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.074840, running mem acc: 0.786
==>>> it: 101, avg. loss: 1.137216, running train acc: 0.608
==>>> it: 101, mem avg. loss: 0.528905, running mem acc: 0.789
==>>> it: 201, avg. loss: 0.945100, running train acc: 0.660
==>>> it: 201, mem avg. loss: 0.488608, running mem acc: 0.817
==>>> it: 301, avg. loss: 0.857927, running train acc: 0.687
==>>> it: 301, mem avg. loss: 0.467130, running mem acc: 0.826
==>>> it: 401, avg. loss: 0.809573, running train acc: 0.697
==>>> it: 401, mem avg. loss: 0.441827, running mem acc: 0.839
==>>> it: 501, avg. loss: 0.755506, running train acc: 0.714
==>>> it: 501, mem avg. loss: 0.412954, running mem acc: 0.852
==>>> it: 601, avg. loss: 0.735609, running train acc: 0.720
==>>> it: 601, mem avg. loss: 0.391411, running mem acc: 0.863
==>>> it: 701, avg. loss: 0.716189, running train acc: 0.727
==>>> it: 701, mem avg. loss: 0.373333, running mem acc: 0.871
==>>> it: 801, avg. loss: 0.697035, running train acc: 0.732
==>>> it: 801, mem avg. loss: 0.359690, running mem acc: 0.878
==>>> it: 901, avg. loss: 0.679202, running train acc: 0.738
==>>> it: 901, mem avg. loss: 0.348886, running mem acc: 0.882
[0.2045 0.1025 0.4885 0.849 0. ]
-----------run 0 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.682149, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.908929, running mem acc: 0.889
==>>> it: 101, avg. loss: 0.921580, running train acc: 0.760
==>>> it: 101, mem avg. loss: 0.467058, running mem acc: 0.820
==>>> it: 201, avg. loss: 0.700234, running train acc: 0.800
==>>> it: 201, mem avg. loss: 0.402738, running mem acc: 0.841
==>>> it: 301, avg. loss: 0.611748, running train acc: 0.814
==>>> it: 301, mem avg. loss: 0.353224, running mem acc: 0.861
==>>> it: 401, avg. loss: 0.549502, running train acc: 0.829
==>>> it: 401, mem avg. loss: 0.319054, running mem acc: 0.881
==>>> it: 501, avg. loss: 0.503226, running train acc: 0.842
==>>> it: 501, mem avg. loss: 0.288226, running mem acc: 0.895
==>>> it: 601, avg. loss: 0.467602, running train acc: 0.851
==>>> it: 601, mem avg. loss: 0.264372, running mem acc: 0.905
==>>> it: 701, avg. loss: 0.444083, running train acc: 0.857
==>>> it: 701, mem avg. loss: 0.242538, running mem acc: 0.916
==>>> it: 801, avg. loss: 0.422293, running train acc: 0.862
==>>> it: 801, mem avg. loss: 0.225252, running mem acc: 0.923
==>>> it: 901, avg. loss: 0.402647, running train acc: 0.868
==>>> it: 901, mem avg. loss: 0.210954, running mem acc: 0.929
[0.006 0.071 0.341 0.1395 0.939 ]
-----------run 0-----------avg_end_acc 0.2993-----------train time 321.69294118881226
Task: 0, Labels:[9, 5]
Task: 1, Labels:[4, 0]
Task: 2, Labels:[3, 8]
Task: 3, Labels:[2, 7]
Task: 4, Labels:[1, 6]
buffer has 1000 slots
-----------run 1 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 4.316780, running train acc: 0.050
==>>> it: 1, mem avg. loss: 0.970680, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.751688, running train acc: 0.737
==>>> it: 101, mem avg. loss: 0.546757, running mem acc: 0.771
==>>> it: 201, avg. loss: 0.564942, running train acc: 0.790
==>>> it: 201, mem avg. loss: 0.477284, running mem acc: 0.803
==>>> it: 301, avg. loss: 0.485543, running train acc: 0.817
==>>> it: 301, mem avg. loss: 0.419923, running mem acc: 0.828
==>>> it: 401, avg. loss: 0.431359, running train acc: 0.836
==>>> it: 401, mem avg. loss: 0.373165, running mem acc: 0.848
==>>> it: 501, avg. loss: 0.396482, running train acc: 0.851
==>>> it: 501, mem avg. loss: 0.335158, running mem acc: 0.865
==>>> it: 601, avg. loss: 0.372252, running train acc: 0.859
==>>> it: 601, mem avg. loss: 0.305227, running mem acc: 0.877
==>>> it: 701, avg. loss: 0.359550, running train acc: 0.865
==>>> it: 701, mem avg. loss: 0.280686, running mem acc: 0.887
==>>> it: 801, avg. loss: 0.344510, running train acc: 0.871
==>>> it: 801, mem avg. loss: 0.260917, running mem acc: 0.896
==>>> it: 901, avg. loss: 0.328728, running train acc: 0.876
==>>> it: 901, mem avg. loss: 0.243219, running mem acc: 0.903
[0.944 0. 0. 0. 0. ]
-----------run 1 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.751608, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.435653, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.813682, running train acc: 0.764
==>>> it: 101, mem avg. loss: 1.125217, running mem acc: 0.493
==>>> it: 201, avg. loss: 0.706034, running train acc: 0.768
==>>> it: 201, mem avg. loss: 1.021101, running mem acc: 0.547
==>>> it: 301, avg. loss: 0.660893, running train acc: 0.775
==>>> it: 301, mem avg. loss: 0.939356, running mem acc: 0.575
==>>> it: 401, avg. loss: 0.637459, running train acc: 0.776
==>>> it: 401, mem avg. loss: 0.890138, running mem acc: 0.595
==>>> it: 501, avg. loss: 0.620821, running train acc: 0.778
==>>> it: 501, mem avg. loss: 0.855687, running mem acc: 0.613
==>>> it: 601, avg. loss: 0.597677, running train acc: 0.783
==>>> it: 601, mem avg. loss: 0.833835, running mem acc: 0.625
==>>> it: 701, avg. loss: 0.583310, running train acc: 0.786
==>>> it: 701, mem avg. loss: 0.809573, running mem acc: 0.639
==>>> it: 801, avg. loss: 0.568838, running train acc: 0.788
==>>> it: 801, mem avg. loss: 0.792180, running mem acc: 0.648
==>>> it: 901, avg. loss: 0.555578, running train acc: 0.792
==>>> it: 901, mem avg. loss: 0.782768, running mem acc: 0.651
[0.2285 0.9095 0. 0. 0. ]
-----------run 1 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.077826, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.485435, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.759535, running train acc: 0.798
==>>> it: 101, mem avg. loss: 0.917225, running mem acc: 0.584
==>>> it: 201, avg. loss: 0.610433, running train acc: 0.831
==>>> it: 201, mem avg. loss: 0.839979, running mem acc: 0.610
==>>> it: 301, avg. loss: 0.574945, running train acc: 0.832
==>>> it: 301, mem avg. loss: 0.772012, running mem acc: 0.641
==>>> it: 401, avg. loss: 0.542059, running train acc: 0.832
==>>> it: 401, mem avg. loss: 0.722409, running mem acc: 0.668
==>>> it: 501, avg. loss: 0.520942, running train acc: 0.833
==>>> it: 501, mem avg. loss: 0.660200, running mem acc: 0.701
==>>> it: 601, avg. loss: 0.498832, running train acc: 0.838
==>>> it: 601, mem avg. loss: 0.625105, running mem acc: 0.721
==>>> it: 701, avg. loss: 0.488917, running train acc: 0.837
==>>> it: 701, mem avg. loss: 0.596193, running mem acc: 0.736
==>>> it: 801, avg. loss: 0.467835, running train acc: 0.844
==>>> it: 801, mem avg. loss: 0.565106, running mem acc: 0.751
==>>> it: 901, avg. loss: 0.457223, running train acc: 0.846
==>>> it: 901, mem avg. loss: 0.545778, running mem acc: 0.761
[0.0085 0.1025 0.8905 0. 0. ]
-----------run 1 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.734640, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.105005, running mem acc: 0.786
==>>> it: 101, avg. loss: 1.101628, running train acc: 0.656
==>>> it: 101, mem avg. loss: 0.596540, running mem acc: 0.779
==>>> it: 201, avg. loss: 0.904360, running train acc: 0.685
==>>> it: 201, mem avg. loss: 0.532966, running mem acc: 0.796
==>>> it: 301, avg. loss: 0.835626, running train acc: 0.697
==>>> it: 301, mem avg. loss: 0.492777, running mem acc: 0.817
==>>> it: 401, avg. loss: 0.780896, running train acc: 0.708
==>>> it: 401, mem avg. loss: 0.448768, running mem acc: 0.836
==>>> it: 501, avg. loss: 0.735283, running train acc: 0.720
==>>> it: 501, mem avg. loss: 0.410894, running mem acc: 0.854
==>>> it: 601, avg. loss: 0.704728, running train acc: 0.728
==>>> it: 601, mem avg. loss: 0.384462, running mem acc: 0.866
==>>> it: 701, avg. loss: 0.672706, running train acc: 0.740
==>>> it: 701, mem avg. loss: 0.358335, running mem acc: 0.877
==>>> it: 801, avg. loss: 0.649617, running train acc: 0.747
==>>> it: 801, mem avg. loss: 0.346864, running mem acc: 0.884
==>>> it: 901, avg. loss: 0.633827, running train acc: 0.751
==>>> it: 901, mem avg. loss: 0.335851, running mem acc: 0.888
[0.0415 0.056 0.312 0.857 0. ]
-----------run 1 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.657494, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.871317, running mem acc: 0.889
==>>> it: 101, avg. loss: 0.857427, running train acc: 0.793
==>>> it: 101, mem avg. loss: 0.412403, running mem acc: 0.839
==>>> it: 201, avg. loss: 0.646688, running train acc: 0.819
==>>> it: 201, mem avg. loss: 0.341180, running mem acc: 0.875
==>>> it: 301, avg. loss: 0.567089, running train acc: 0.827
==>>> it: 301, mem avg. loss: 0.300124, running mem acc: 0.893
==>>> it: 401, avg. loss: 0.517622, running train acc: 0.837
==>>> it: 401, mem avg. loss: 0.270672, running mem acc: 0.905
==>>> it: 501, avg. loss: 0.487198, running train acc: 0.842
==>>> it: 501, mem avg. loss: 0.251788, running mem acc: 0.911
==>>> it: 601, avg. loss: 0.450645, running train acc: 0.853
==>>> it: 601, mem avg. loss: 0.235324, running mem acc: 0.917
==>>> it: 701, avg. loss: 0.436631, running train acc: 0.858
==>>> it: 701, mem avg. loss: 0.220086, running mem acc: 0.924
==>>> it: 801, avg. loss: 0.413854, running train acc: 0.864
==>>> it: 801, mem avg. loss: 0.209625, running mem acc: 0.928
==>>> it: 901, avg. loss: 0.393349, running train acc: 0.869
==>>> it: 901, mem avg. loss: 0.197870, running mem acc: 0.933
[0.016 0.04 0.1295 0.3875 0.9605]
-----------run 1-----------avg_end_acc 0.30670000000000003-----------train time 309.87423610687256
Task: 0, Labels:[7, 3]
Task: 1, Labels:[1, 9]
Task: 2, Labels:[0, 2]
Task: 3, Labels:[6, 4]
Task: 4, Labels:[5, 8]
buffer has 1000 slots
-----------run 2 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.070905, running train acc: 0.350
==>>> it: 1, mem avg. loss: 1.185227, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.847313, running train acc: 0.606
==>>> it: 101, mem avg. loss: 0.810086, running mem acc: 0.622
==>>> it: 201, avg. loss: 0.765655, running train acc: 0.633
==>>> it: 201, mem avg. loss: 0.795742, running mem acc: 0.626
==>>> it: 301, avg. loss: 0.728737, running train acc: 0.645
==>>> it: 301, mem avg. loss: 0.753590, running mem acc: 0.648
==>>> it: 401, avg. loss: 0.694415, running train acc: 0.664
==>>> it: 401, mem avg. loss: 0.706080, running mem acc: 0.674
==>>> it: 501, avg. loss: 0.672010, running train acc: 0.675
==>>> it: 501, mem avg. loss: 0.669378, running mem acc: 0.696
==>>> it: 601, avg. loss: 0.650232, running train acc: 0.689
==>>> it: 601, mem avg. loss: 0.636322, running mem acc: 0.715
==>>> it: 701, avg. loss: 0.633641, running train acc: 0.700
==>>> it: 701, mem avg. loss: 0.598975, running mem acc: 0.734
==>>> it: 801, avg. loss: 0.624935, running train acc: 0.704
==>>> it: 801, mem avg. loss: 0.569165, running mem acc: 0.750
==>>> it: 901, avg. loss: 0.613219, running train acc: 0.710
==>>> it: 901, mem avg. loss: 0.544771, running mem acc: 0.764
[0.7755 0. 0. 0. 0. ]
-----------run 2 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.572063, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.907939, running mem acc: 0.333
==>>> it: 101, avg. loss: 1.148647, running train acc: 0.485
==>>> it: 101, mem avg. loss: 1.544229, running mem acc: 0.369
==>>> it: 201, avg. loss: 0.997712, running train acc: 0.536
==>>> it: 201, mem avg. loss: 1.431323, running mem acc: 0.447
==>>> it: 301, avg. loss: 0.943479, running train acc: 0.560
==>>> it: 301, mem avg. loss: 1.352312, running mem acc: 0.481
==>>> it: 401, avg. loss: 0.905294, running train acc: 0.585
==>>> it: 401, mem avg. loss: 1.279100, running mem acc: 0.521
==>>> it: 501, avg. loss: 0.878678, running train acc: 0.602
==>>> it: 501, mem avg. loss: 1.224859, running mem acc: 0.538
==>>> it: 601, avg. loss: 0.853513, running train acc: 0.616
==>>> it: 601, mem avg. loss: 1.212025, running mem acc: 0.552
==>>> it: 701, avg. loss: 0.837463, running train acc: 0.626
==>>> it: 701, mem avg. loss: 1.185076, running mem acc: 0.559
==>>> it: 801, avg. loss: 0.816079, running train acc: 0.638
==>>> it: 801, mem avg. loss: 1.160962, running mem acc: 0.569
==>>> it: 901, avg. loss: 0.796469, running train acc: 0.648
==>>> it: 901, mem avg. loss: 1.137939, running mem acc: 0.577
[0.5045 0.7655 0. 0. 0. ]
-----------run 2 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.012645, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.417690, running mem acc: 0.300
==>>> it: 101, avg. loss: 1.176264, running train acc: 0.604
==>>> it: 101, mem avg. loss: 1.151371, running mem acc: 0.493
==>>> it: 201, avg. loss: 0.941784, running train acc: 0.664
==>>> it: 201, mem avg. loss: 0.956887, running mem acc: 0.590
==>>> it: 301, avg. loss: 0.859025, running train acc: 0.684
==>>> it: 301, mem avg. loss: 0.870141, running mem acc: 0.629
==>>> it: 401, avg. loss: 0.810119, running train acc: 0.697
==>>> it: 401, mem avg. loss: 0.798958, running mem acc: 0.663
==>>> it: 501, avg. loss: 0.776908, running train acc: 0.707
==>>> it: 501, mem avg. loss: 0.761098, running mem acc: 0.678
==>>> it: 601, avg. loss: 0.753101, running train acc: 0.714
==>>> it: 601, mem avg. loss: 0.724280, running mem acc: 0.696
==>>> it: 701, avg. loss: 0.737988, running train acc: 0.718
==>>> it: 701, mem avg. loss: 0.697596, running mem acc: 0.708
==>>> it: 801, avg. loss: 0.717296, running train acc: 0.723
==>>> it: 801, mem avg. loss: 0.675505, running mem acc: 0.719
==>>> it: 901, avg. loss: 0.696389, running train acc: 0.729
==>>> it: 901, mem avg. loss: 0.657782, running mem acc: 0.726
[0.0365 0.4845 0.8435 0. 0. ]
-----------run 2 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.421310, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.274117, running mem acc: 0.786
==>>> it: 101, avg. loss: 1.433325, running train acc: 0.521
==>>> it: 101, mem avg. loss: 0.781590, running mem acc: 0.731
==>>> it: 201, avg. loss: 1.133271, running train acc: 0.606
==>>> it: 201, mem avg. loss: 0.733022, running mem acc: 0.740
==>>> it: 301, avg. loss: 1.021514, running train acc: 0.634
==>>> it: 301, mem avg. loss: 0.689050, running mem acc: 0.746
==>>> it: 401, avg. loss: 0.938388, running train acc: 0.660
==>>> it: 401, mem avg. loss: 0.647321, running mem acc: 0.765
==>>> it: 501, avg. loss: 0.889315, running train acc: 0.677
==>>> it: 501, mem avg. loss: 0.615720, running mem acc: 0.780
==>>> it: 601, avg. loss: 0.857140, running train acc: 0.686
==>>> it: 601, mem avg. loss: 0.578209, running mem acc: 0.797
==>>> it: 701, avg. loss: 0.828361, running train acc: 0.694
==>>> it: 701, mem avg. loss: 0.551360, running mem acc: 0.807
==>>> it: 801, avg. loss: 0.800339, running train acc: 0.703
==>>> it: 801, mem avg. loss: 0.528395, running mem acc: 0.816
==>>> it: 901, avg. loss: 0.781175, running train acc: 0.709
==>>> it: 901, mem avg. loss: 0.508960, running mem acc: 0.824
[0.006 0.4805 0.2925 0.8345 0. ]
-----------run 2 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.677151, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.092429, running mem acc: 0.778
==>>> it: 101, avg. loss: 0.937997, running train acc: 0.775
==>>> it: 101, mem avg. loss: 0.535428, running mem acc: 0.779
==>>> it: 201, avg. loss: 0.685108, running train acc: 0.807
==>>> it: 201, mem avg. loss: 0.444572, running mem acc: 0.829
==>>> it: 301, avg. loss: 0.586641, running train acc: 0.825
==>>> it: 301, mem avg. loss: 0.376743, running mem acc: 0.858
==>>> it: 401, avg. loss: 0.532075, running train acc: 0.835
==>>> it: 401, mem avg. loss: 0.335621, running mem acc: 0.875
==>>> it: 501, avg. loss: 0.495053, running train acc: 0.843
==>>> it: 501, mem avg. loss: 0.306606, running mem acc: 0.886
==>>> it: 601, avg. loss: 0.478266, running train acc: 0.847
==>>> it: 601, mem avg. loss: 0.283838, running mem acc: 0.896
==>>> it: 701, avg. loss: 0.449836, running train acc: 0.855
==>>> it: 701, mem avg. loss: 0.266960, running mem acc: 0.903
==>>> it: 801, avg. loss: 0.437822, running train acc: 0.856
==>>> it: 801, mem avg. loss: 0.253191, running mem acc: 0.909
==>>> it: 901, avg. loss: 0.417746, running train acc: 0.862
==>>> it: 901, mem avg. loss: 0.239084, running mem acc: 0.915
[0.0015 0.252 0.0505 0.4925 0.928 ]
-----------run 2-----------avg_end_acc 0.3449-----------train time 305.08081126213074
Task: 0, Labels:[7, 0]
Task: 1, Labels:[4, 2]
Task: 2, Labels:[5, 8]
Task: 3, Labels:[6, 9]
Task: 4, Labels:[3, 1]
buffer has 1000 slots
-----------run 3 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.903329, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.883407, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.701677, running train acc: 0.725
==>>> it: 101, mem avg. loss: 0.608427, running mem acc: 0.743
==>>> it: 201, avg. loss: 0.625359, running train acc: 0.748
==>>> it: 201, mem avg. loss: 0.562049, running mem acc: 0.760
==>>> it: 301, avg. loss: 0.562711, running train acc: 0.771
==>>> it: 301, mem avg. loss: 0.511455, running mem acc: 0.786
==>>> it: 401, avg. loss: 0.517674, running train acc: 0.787
==>>> it: 401, mem avg. loss: 0.467291, running mem acc: 0.808
==>>> it: 501, avg. loss: 0.492820, running train acc: 0.797
==>>> it: 501, mem avg. loss: 0.425321, running mem acc: 0.828
==>>> it: 601, avg. loss: 0.470465, running train acc: 0.807
==>>> it: 601, mem avg. loss: 0.391249, running mem acc: 0.843
==>>> it: 701, avg. loss: 0.442817, running train acc: 0.819
==>>> it: 701, mem avg. loss: 0.363682, running mem acc: 0.855
==>>> it: 801, avg. loss: 0.429279, running train acc: 0.827
==>>> it: 801, mem avg. loss: 0.342071, running mem acc: 0.865
==>>> it: 901, avg. loss: 0.415561, running train acc: 0.833
==>>> it: 901, mem avg. loss: 0.322978, running mem acc: 0.873
[0.9195 0. 0. 0. 0. ]
-----------run 3 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.677553, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.465727, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.135399, running train acc: 0.512
==>>> it: 101, mem avg. loss: 1.428659, running mem acc: 0.333
==>>> it: 201, avg. loss: 0.976777, running train acc: 0.563
==>>> it: 201, mem avg. loss: 1.301986, running mem acc: 0.385
==>>> it: 301, avg. loss: 0.933628, running train acc: 0.570
==>>> it: 301, mem avg. loss: 1.219803, running mem acc: 0.423
==>>> it: 401, avg. loss: 0.913300, running train acc: 0.571
==>>> it: 401, mem avg. loss: 1.157895, running mem acc: 0.453
==>>> it: 501, avg. loss: 0.889673, running train acc: 0.581
==>>> it: 501, mem avg. loss: 1.120018, running mem acc: 0.477
==>>> it: 601, avg. loss: 0.861623, running train acc: 0.595
==>>> it: 601, mem avg. loss: 1.081429, running mem acc: 0.497
==>>> it: 701, avg. loss: 0.848533, running train acc: 0.601
==>>> it: 701, mem avg. loss: 1.045891, running mem acc: 0.514
==>>> it: 801, avg. loss: 0.838123, running train acc: 0.606
==>>> it: 801, mem avg. loss: 1.023079, running mem acc: 0.523
==>>> it: 901, avg. loss: 0.821550, running train acc: 0.616
==>>> it: 901, mem avg. loss: 0.998429, running mem acc: 0.534
[0.48 0.7295 0. 0. 0. ]
-----------run 3 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.364228, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.401534, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.853940, running train acc: 0.779
==>>> it: 101, mem avg. loss: 0.929874, running mem acc: 0.582
==>>> it: 201, avg. loss: 0.686205, running train acc: 0.801
==>>> it: 201, mem avg. loss: 0.818338, running mem acc: 0.644
==>>> it: 301, avg. loss: 0.603114, running train acc: 0.815
==>>> it: 301, mem avg. loss: 0.751472, running mem acc: 0.673
==>>> it: 401, avg. loss: 0.574300, running train acc: 0.815
==>>> it: 401, mem avg. loss: 0.689173, running mem acc: 0.701
==>>> it: 501, avg. loss: 0.538063, running train acc: 0.824
==>>> it: 501, mem avg. loss: 0.638966, running mem acc: 0.724
==>>> it: 601, avg. loss: 0.504818, running train acc: 0.833
==>>> it: 601, mem avg. loss: 0.592791, running mem acc: 0.743
==>>> it: 701, avg. loss: 0.484749, running train acc: 0.837
==>>> it: 701, mem avg. loss: 0.561373, running mem acc: 0.757
==>>> it: 801, avg. loss: 0.466934, running train acc: 0.841
==>>> it: 801, mem avg. loss: 0.540579, running mem acc: 0.764
==>>> it: 901, avg. loss: 0.448764, running train acc: 0.846
==>>> it: 901, mem avg. loss: 0.521101, running mem acc: 0.774
[0.0475 0.1355 0.943 0. 0. ]
-----------run 3 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.997341, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.346816, running mem acc: 0.786
==>>> it: 101, avg. loss: 0.912237, running train acc: 0.763
==>>> it: 101, mem avg. loss: 0.502056, running mem acc: 0.790
==>>> it: 201, avg. loss: 0.675130, running train acc: 0.803
==>>> it: 201, mem avg. loss: 0.427352, running mem acc: 0.828
==>>> it: 301, avg. loss: 0.581894, running train acc: 0.820
==>>> it: 301, mem avg. loss: 0.381851, running mem acc: 0.850
==>>> it: 401, avg. loss: 0.527088, running train acc: 0.832
==>>> it: 401, mem avg. loss: 0.352256, running mem acc: 0.866
==>>> it: 501, avg. loss: 0.489860, running train acc: 0.837
==>>> it: 501, mem avg. loss: 0.332950, running mem acc: 0.875
==>>> it: 601, avg. loss: 0.475969, running train acc: 0.838
==>>> it: 601, mem avg. loss: 0.325335, running mem acc: 0.878
==>>> it: 701, avg. loss: 0.453801, running train acc: 0.844
==>>> it: 701, mem avg. loss: 0.308680, running mem acc: 0.885
==>>> it: 801, avg. loss: 0.431951, running train acc: 0.851
==>>> it: 801, mem avg. loss: 0.301075, running mem acc: 0.889
==>>> it: 901, avg. loss: 0.426752, running train acc: 0.851
==>>> it: 901, mem avg. loss: 0.298158, running mem acc: 0.890
[0.0595 0.0485 0.362 0.9605 0. ]
-----------run 3 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 12.078923, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.959943, running mem acc: 0.833
==>>> it: 101, avg. loss: 0.900623, running train acc: 0.783
==>>> it: 101, mem avg. loss: 0.456886, running mem acc: 0.813
==>>> it: 201, avg. loss: 0.679125, running train acc: 0.810
==>>> it: 201, mem avg. loss: 0.383015, running mem acc: 0.840
==>>> it: 301, avg. loss: 0.576092, running train acc: 0.829
==>>> it: 301, mem avg. loss: 0.328125, running mem acc: 0.868
==>>> it: 401, avg. loss: 0.532713, running train acc: 0.840
==>>> it: 401, mem avg. loss: 0.295174, running mem acc: 0.888
==>>> it: 501, avg. loss: 0.498604, running train acc: 0.845
==>>> it: 501, mem avg. loss: 0.274485, running mem acc: 0.899
==>>> it: 601, avg. loss: 0.474590, running train acc: 0.849
==>>> it: 601, mem avg. loss: 0.254157, running mem acc: 0.908
==>>> it: 701, avg. loss: 0.457026, running train acc: 0.855
==>>> it: 701, mem avg. loss: 0.234994, running mem acc: 0.917
==>>> it: 801, avg. loss: 0.440620, running train acc: 0.859
==>>> it: 801, mem avg. loss: 0.221406, running mem acc: 0.923
==>>> it: 901, avg. loss: 0.417277, running train acc: 0.867
==>>> it: 901, mem avg. loss: 0.211049, running mem acc: 0.927
[0.0235 0.0655 0.288 0.2645 0.947 ]
-----------run 3-----------avg_end_acc 0.3177-----------train time 305.6890342235565
Task: 0, Labels:[5, 9]
Task: 1, Labels:[8, 6]
Task: 2, Labels:[3, 2]
Task: 3, Labels:[4, 7]
Task: 4, Labels:[1, 0]
buffer has 1000 slots
-----------run 4 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.777743, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.754060, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.746674, running train acc: 0.731
==>>> it: 101, mem avg. loss: 0.605000, running mem acc: 0.773
==>>> it: 201, avg. loss: 0.567443, running train acc: 0.793
==>>> it: 201, mem avg. loss: 0.525892, running mem acc: 0.804
==>>> it: 301, avg. loss: 0.493565, running train acc: 0.815
==>>> it: 301, mem avg. loss: 0.471567, running mem acc: 0.827
==>>> it: 401, avg. loss: 0.437600, running train acc: 0.835
==>>> it: 401, mem avg. loss: 0.423649, running mem acc: 0.845
==>>> it: 501, avg. loss: 0.403540, running train acc: 0.847
==>>> it: 501, mem avg. loss: 0.381000, running mem acc: 0.861
==>>> it: 601, avg. loss: 0.379669, running train acc: 0.856
==>>> it: 601, mem avg. loss: 0.345303, running mem acc: 0.874
==>>> it: 701, avg. loss: 0.363752, running train acc: 0.863
==>>> it: 701, mem avg. loss: 0.318172, running mem acc: 0.884
==>>> it: 801, avg. loss: 0.349904, running train acc: 0.868
==>>> it: 801, mem avg. loss: 0.294788, running mem acc: 0.893
==>>> it: 901, avg. loss: 0.341671, running train acc: 0.871
==>>> it: 901, mem avg. loss: 0.276015, running mem acc: 0.900
[0.925 0. 0. 0. 0. ]
-----------run 4 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.421353, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.221934, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.688177, running train acc: 0.840
==>>> it: 101, mem avg. loss: 1.155652, running mem acc: 0.493
==>>> it: 201, avg. loss: 0.574970, running train acc: 0.844
==>>> it: 201, mem avg. loss: 1.008047, running mem acc: 0.540
==>>> it: 301, avg. loss: 0.506490, running train acc: 0.852
==>>> it: 301, mem avg. loss: 0.926756, running mem acc: 0.582
==>>> it: 401, avg. loss: 0.486034, running train acc: 0.853
==>>> it: 401, mem avg. loss: 0.863702, running mem acc: 0.608
==>>> it: 501, avg. loss: 0.467183, running train acc: 0.854
==>>> it: 501, mem avg. loss: 0.836747, running mem acc: 0.624
==>>> it: 601, avg. loss: 0.450437, running train acc: 0.854
==>>> it: 601, mem avg. loss: 0.811492, running mem acc: 0.642
==>>> it: 701, avg. loss: 0.432798, running train acc: 0.856
==>>> it: 701, mem avg. loss: 0.783123, running mem acc: 0.654
==>>> it: 801, avg. loss: 0.420515, running train acc: 0.859
==>>> it: 801, mem avg. loss: 0.758535, running mem acc: 0.664
==>>> it: 901, avg. loss: 0.414652, running train acc: 0.859
==>>> it: 901, mem avg. loss: 0.745697, running mem acc: 0.671
[0.268 0.9615 0. 0. 0. ]
-----------run 4 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.944627, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.301447, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.220227, running train acc: 0.535
==>>> it: 101, mem avg. loss: 1.013674, running mem acc: 0.605
==>>> it: 201, avg. loss: 1.032587, running train acc: 0.590
==>>> it: 201, mem avg. loss: 0.891787, running mem acc: 0.640
==>>> it: 301, avg. loss: 0.972519, running train acc: 0.600
==>>> it: 301, mem avg. loss: 0.818375, running mem acc: 0.669
==>>> it: 401, avg. loss: 0.938812, running train acc: 0.608
==>>> it: 401, mem avg. loss: 0.778434, running mem acc: 0.685
==>>> it: 501, avg. loss: 0.905085, running train acc: 0.618
==>>> it: 501, mem avg. loss: 0.748541, running mem acc: 0.697
==>>> it: 601, avg. loss: 0.879142, running train acc: 0.627
==>>> it: 601, mem avg. loss: 0.718913, running mem acc: 0.712
==>>> it: 701, avg. loss: 0.859412, running train acc: 0.635
==>>> it: 701, mem avg. loss: 0.702958, running mem acc: 0.717
==>>> it: 801, avg. loss: 0.848633, running train acc: 0.637
==>>> it: 801, mem avg. loss: 0.689267, running mem acc: 0.722
==>>> it: 901, avg. loss: 0.837219, running train acc: 0.642
==>>> it: 901, mem avg. loss: 0.672004, running mem acc: 0.731
[0.113 0.4425 0.7245 0. 0. ]
-----------run 4 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.941148, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.148840, running mem acc: 0.786
==>>> it: 101, avg. loss: 1.247467, running train acc: 0.568
==>>> it: 101, mem avg. loss: 0.754778, running mem acc: 0.686
==>>> it: 201, avg. loss: 0.997021, running train acc: 0.636
==>>> it: 201, mem avg. loss: 0.676340, running mem acc: 0.716
==>>> it: 301, avg. loss: 0.928773, running train acc: 0.644
==>>> it: 301, mem avg. loss: 0.626019, running mem acc: 0.745
==>>> it: 401, avg. loss: 0.879854, running train acc: 0.657
==>>> it: 401, mem avg. loss: 0.582417, running mem acc: 0.767
==>>> it: 501, avg. loss: 0.837393, running train acc: 0.669
==>>> it: 501, mem avg. loss: 0.541420, running mem acc: 0.786
==>>> it: 601, avg. loss: 0.807289, running train acc: 0.680
==>>> it: 601, mem avg. loss: 0.504113, running mem acc: 0.804
==>>> it: 701, avg. loss: 0.779042, running train acc: 0.689
==>>> it: 701, mem avg. loss: 0.474278, running mem acc: 0.818
==>>> it: 801, avg. loss: 0.753585, running train acc: 0.698
==>>> it: 801, mem avg. loss: 0.451595, running mem acc: 0.830
==>>> it: 901, avg. loss: 0.739728, running train acc: 0.702
==>>> it: 901, mem avg. loss: 0.431869, running mem acc: 0.841
[0.074 0.285 0.1205 0.827 0. ]
-----------run 4 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.782269, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.930228, running mem acc: 0.889
==>>> it: 101, avg. loss: 1.123894, running train acc: 0.705
==>>> it: 101, mem avg. loss: 0.447797, running mem acc: 0.850
==>>> it: 201, avg. loss: 0.905806, running train acc: 0.741
==>>> it: 201, mem avg. loss: 0.386683, running mem acc: 0.868
==>>> it: 301, avg. loss: 0.847748, running train acc: 0.747
==>>> it: 301, mem avg. loss: 0.369850, running mem acc: 0.870
==>>> it: 401, avg. loss: 0.802905, running train acc: 0.758
==>>> it: 401, mem avg. loss: 0.360859, running mem acc: 0.872
==>>> it: 501, avg. loss: 0.764899, running train acc: 0.768
==>>> it: 501, mem avg. loss: 0.354839, running mem acc: 0.874
==>>> it: 601, avg. loss: 0.734795, running train acc: 0.774
==>>> it: 601, mem avg. loss: 0.340771, running mem acc: 0.879
==>>> it: 701, avg. loss: 0.699791, running train acc: 0.782
==>>> it: 701, mem avg. loss: 0.339118, running mem acc: 0.880
==>>> it: 801, avg. loss: 0.691455, running train acc: 0.785
==>>> it: 801, mem avg. loss: 0.330267, running mem acc: 0.882
==>>> it: 901, avg. loss: 0.674053, running train acc: 0.790
==>>> it: 901, mem avg. loss: 0.323321, running mem acc: 0.884
[0.0245 0.1145 0.249 0.5415 0.897 ]
-----------run 4-----------avg_end_acc 0.3653-----------train time 310.55622363090515
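For reference, the avg_end_acc on each run-summary line works out to the mean of the final bracketed per-task accuracy row; a minimal check with numpy against run 4's numbers (the same arithmetic reproduces every other run's summary below):

import numpy as np

# final per-task accuracy row logged after run 4, training batch 4
end_acc = np.array([0.0245, 0.1145, 0.249, 0.5415, 0.897])

# avg_end_acc is the mean test accuracy over all 5 tasks at the end of the run
print(round(float(end_acc.mean()), 4))  # 0.3653, matching the run-4 summary line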
Task: 0, Labels:[4, 7]
Task: 1, Labels:[0, 3]
Task: 2, Labels:[9, 6]
Task: 3, Labels:[5, 8]
Task: 4, Labels:[2, 1]
buffer has 1000 slots
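The Task/Labels lines show that each run draws a fresh random pairing of the 10 CIFAR-10 classes into 5 two-class tasks. A sketch of such a split (illustrative only; the repository's own data preparation may shuffle differently):

import numpy as np

n_classes, classes_per_task = 10, 2
rng = np.random.default_rng()
perm = rng.permutation(n_classes)           # fresh class order for this run
tasks = perm.reshape(-1, classes_per_task)  # 5 tasks x 2 classes each
for t, labels in enumerate(tasks):
    print(f"Task: {t}, Labels:{labels.tolist()}")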
-----------run 5 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.203128, running train acc: 0.250
==>>> it: 1, mem avg. loss: 1.226544, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.847969, running train acc: 0.653
==>>> it: 101, mem avg. loss: 0.761936, running mem acc: 0.660
==>>> it: 201, avg. loss: 0.778456, running train acc: 0.656
==>>> it: 201, mem avg. loss: 0.714917, running mem acc: 0.682
==>>> it: 301, avg. loss: 0.722117, running train acc: 0.674
==>>> it: 301, mem avg. loss: 0.665055, running mem acc: 0.709
==>>> it: 401, avg. loss: 0.685605, running train acc: 0.686
==>>> it: 401, mem avg. loss: 0.612335, running mem acc: 0.741
==>>> it: 501, avg. loss: 0.656273, running train acc: 0.697
==>>> it: 501, mem avg. loss: 0.572167, running mem acc: 0.761
==>>> it: 601, avg. loss: 0.636776, running train acc: 0.705
==>>> it: 601, mem avg. loss: 0.543510, running mem acc: 0.775
==>>> it: 701, avg. loss: 0.625411, running train acc: 0.708
==>>> it: 701, mem avg. loss: 0.518143, running mem acc: 0.789
==>>> it: 801, avg. loss: 0.616475, running train acc: 0.711
==>>> it: 801, mem avg. loss: 0.494394, running mem acc: 0.801
==>>> it: 901, avg. loss: 0.609605, running train acc: 0.711
==>>> it: 901, mem avg. loss: 0.474522, running mem acc: 0.813
[0.8 0. 0. 0. 0. ]
-----------run 5 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 7.639872, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.216530, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.015672, running train acc: 0.688
==>>> it: 101, mem avg. loss: 1.440525, running mem acc: 0.414
==>>> it: 201, avg. loss: 0.805103, running train acc: 0.740
==>>> it: 201, mem avg. loss: 1.313555, running mem acc: 0.452
==>>> it: 301, avg. loss: 0.736078, running train acc: 0.750
==>>> it: 301, mem avg. loss: 1.176746, running mem acc: 0.507
==>>> it: 401, avg. loss: 0.688719, running train acc: 0.760
==>>> it: 401, mem avg. loss: 1.105425, running mem acc: 0.541
==>>> it: 501, avg. loss: 0.649090, running train acc: 0.770
==>>> it: 501, mem avg. loss: 1.042493, running mem acc: 0.563
==>>> it: 601, avg. loss: 0.623792, running train acc: 0.775
==>>> it: 601, mem avg. loss: 0.993255, running mem acc: 0.586
==>>> it: 701, avg. loss: 0.597346, running train acc: 0.781
==>>> it: 701, mem avg. loss: 0.956468, running mem acc: 0.598
==>>> it: 801, avg. loss: 0.580278, running train acc: 0.785
==>>> it: 801, mem avg. loss: 0.929077, running mem acc: 0.608
==>>> it: 901, avg. loss: 0.567413, running train acc: 0.789
==>>> it: 901, mem avg. loss: 0.902617, running mem acc: 0.620
[0.2005 0.889 0. 0. 0. ]
-----------run 5 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.559956, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.659124, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.936934, running train acc: 0.733
==>>> it: 101, mem avg. loss: 0.850382, running mem acc: 0.597
==>>> it: 201, avg. loss: 0.761072, running train acc: 0.755
==>>> it: 201, mem avg. loss: 0.794516, running mem acc: 0.649
==>>> it: 301, avg. loss: 0.684577, running train acc: 0.770
==>>> it: 301, mem avg. loss: 0.766669, running mem acc: 0.667
==>>> it: 401, avg. loss: 0.633118, running train acc: 0.782
==>>> it: 401, mem avg. loss: 0.729318, running mem acc: 0.681
==>>> it: 501, avg. loss: 0.612004, running train acc: 0.787
==>>> it: 501, mem avg. loss: 0.699315, running mem acc: 0.696
==>>> it: 601, avg. loss: 0.584117, running train acc: 0.795
==>>> it: 601, mem avg. loss: 0.672377, running mem acc: 0.705
==>>> it: 701, avg. loss: 0.553675, running train acc: 0.805
==>>> it: 701, mem avg. loss: 0.640687, running mem acc: 0.718
==>>> it: 801, avg. loss: 0.532905, running train acc: 0.812
==>>> it: 801, mem avg. loss: 0.619314, running mem acc: 0.727
==>>> it: 901, avg. loss: 0.511871, running train acc: 0.819
==>>> it: 901, mem avg. loss: 0.600754, running mem acc: 0.736
[0.0425 0.256 0.943 0. 0. ]
-----------run 5 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.600192, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.129818, running mem acc: 0.786
==>>> it: 101, avg. loss: 0.884794, running train acc: 0.801
==>>> it: 101, mem avg. loss: 0.646558, running mem acc: 0.690
==>>> it: 201, avg. loss: 0.700264, running train acc: 0.809
==>>> it: 201, mem avg. loss: 0.557719, running mem acc: 0.747
==>>> it: 301, avg. loss: 0.625269, running train acc: 0.817
==>>> it: 301, mem avg. loss: 0.501007, running mem acc: 0.779
==>>> it: 401, avg. loss: 0.574004, running train acc: 0.828
==>>> it: 401, mem avg. loss: 0.460044, running mem acc: 0.802
==>>> it: 501, avg. loss: 0.547868, running train acc: 0.829
==>>> it: 501, mem avg. loss: 0.432287, running mem acc: 0.818
==>>> it: 601, avg. loss: 0.512304, running train acc: 0.837
==>>> it: 601, mem avg. loss: 0.414263, running mem acc: 0.828
==>>> it: 701, avg. loss: 0.486727, running train acc: 0.844
==>>> it: 701, mem avg. loss: 0.392409, running mem acc: 0.836
==>>> it: 801, avg. loss: 0.467569, running train acc: 0.849
==>>> it: 801, mem avg. loss: 0.378439, running mem acc: 0.843
==>>> it: 901, avg. loss: 0.460277, running train acc: 0.849
==>>> it: 901, mem avg. loss: 0.366563, running mem acc: 0.848
[0.0495 0.0345 0.3185 0.9315 0. ]
-----------run 5 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.133467, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.958635, running mem acc: 0.833
==>>> it: 101, avg. loss: 0.930826, running train acc: 0.767
==>>> it: 101, mem avg. loss: 0.443166, running mem acc: 0.821
==>>> it: 201, avg. loss: 0.691515, running train acc: 0.815
==>>> it: 201, mem avg. loss: 0.361354, running mem acc: 0.859
==>>> it: 301, avg. loss: 0.592681, running train acc: 0.826
==>>> it: 301, mem avg. loss: 0.323485, running mem acc: 0.876
==>>> it: 401, avg. loss: 0.554252, running train acc: 0.831
==>>> it: 401, mem avg. loss: 0.295559, running mem acc: 0.890
==>>> it: 501, avg. loss: 0.520340, running train acc: 0.837
==>>> it: 501, mem avg. loss: 0.273887, running mem acc: 0.901
==>>> it: 601, avg. loss: 0.491622, running train acc: 0.843
==>>> it: 601, mem avg. loss: 0.260308, running mem acc: 0.907
==>>> it: 701, avg. loss: 0.462035, running train acc: 0.851
==>>> it: 701, mem avg. loss: 0.245367, running mem acc: 0.914
==>>> it: 801, avg. loss: 0.443197, running train acc: 0.855
==>>> it: 801, mem avg. loss: 0.231996, running mem acc: 0.920
==>>> it: 901, avg. loss: 0.418386, running train acc: 0.862
==>>> it: 901, mem avg. loss: 0.219705, running mem acc: 0.924
[0.0045 0.041 0.114 0.3135 0.9575]
-----------run 5-----------avg_end_acc 0.28609999999999997-----------train time 306.94938015937805
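The ==>>> lines report running averages accumulated since the start of the current training batch: "avg. loss" / "running train acc" over the incoming stream, and "mem avg. loss" / "running mem acc" over the batches replayed from memory, printed every 100 iterations (seemingly with batch size 10, so a 10000-sample task gives ~1000 iterations, hence the last print at it 901). A minimal running-average tracker of the kind that would produce these lines (names are illustrative, not the repository's):

class AverageMeter:
    # keeps a running average of a scalar such as loss or accuracy
    def __init__(self):
        self.sum, self.count = 0.0, 0

    def update(self, value, n=1):
        self.sum += value * n
        self.count += n

    def avg(self):
        return self.sum / max(self.count, 1)

loss_meter = AverageMeter()
for i, batch_loss in enumerate([0.9, 0.7, 0.5]):
    loss_meter.update(batch_loss)
    if i % 100 == 0:  # prints at it 1, 101, 201, ... as in the log
        print(f"==>>> it: {i + 1}, avg. loss: {loss_meter.avg():.6f}")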
Task: 0, Labels:[7, 1]
Task: 1, Labels:[4, 3]
Task: 2, Labels:[9, 2]
Task: 3, Labels:[5, 0]
Task: 4, Labels:[6, 8]
buffer has 1000 slots
-----------run 6 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.523540, running train acc: 0.300
==>>> it: 1, mem avg. loss: 0.818723, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.688711, running train acc: 0.731
==>>> it: 101, mem avg. loss: 0.625936, running mem acc: 0.773
==>>> it: 201, avg. loss: 0.591178, running train acc: 0.773
==>>> it: 201, mem avg. loss: 0.565080, running mem acc: 0.793
==>>> it: 301, avg. loss: 0.523118, running train acc: 0.799
==>>> it: 301, mem avg. loss: 0.501009, running mem acc: 0.816
==>>> it: 401, avg. loss: 0.482001, running train acc: 0.817
==>>> it: 401, mem avg. loss: 0.443960, running mem acc: 0.837
==>>> it: 501, avg. loss: 0.445793, running train acc: 0.831
==>>> it: 501, mem avg. loss: 0.399745, running mem acc: 0.853
==>>> it: 601, avg. loss: 0.420557, running train acc: 0.841
==>>> it: 601, mem avg. loss: 0.363564, running mem acc: 0.868
==>>> it: 701, avg. loss: 0.385157, running train acc: 0.854
==>>> it: 701, mem avg. loss: 0.331385, running mem acc: 0.880
==>>> it: 801, avg. loss: 0.363283, running train acc: 0.863
==>>> it: 801, mem avg. loss: 0.305856, running mem acc: 0.889
==>>> it: 901, avg. loss: 0.344663, running train acc: 0.871
==>>> it: 901, mem avg. loss: 0.283353, running mem acc: 0.897
[0.955 0. 0. 0. 0. ]
-----------run 6 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.172008, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.620807, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.163981, running train acc: 0.575
==>>> it: 101, mem avg. loss: 1.173667, running mem acc: 0.461
==>>> it: 201, avg. loss: 0.982161, running train acc: 0.617
==>>> it: 201, mem avg. loss: 1.053122, running mem acc: 0.526
==>>> it: 301, avg. loss: 0.904663, running train acc: 0.639
==>>> it: 301, mem avg. loss: 0.972489, running mem acc: 0.574
==>>> it: 401, avg. loss: 0.854734, running train acc: 0.653
==>>> it: 401, mem avg. loss: 0.907992, running mem acc: 0.606
==>>> it: 501, avg. loss: 0.821116, running train acc: 0.662
==>>> it: 501, mem avg. loss: 0.870802, running mem acc: 0.619
==>>> it: 601, avg. loss: 0.792172, running train acc: 0.674
==>>> it: 601, mem avg. loss: 0.839119, running mem acc: 0.635
==>>> it: 701, avg. loss: 0.763468, running train acc: 0.684
==>>> it: 701, mem avg. loss: 0.808271, running mem acc: 0.648
==>>> it: 801, avg. loss: 0.739121, running train acc: 0.693
==>>> it: 801, mem avg. loss: 0.788713, running mem acc: 0.659
==>>> it: 901, avg. loss: 0.723750, running train acc: 0.699
==>>> it: 901, mem avg. loss: 0.773992, running mem acc: 0.666
[0.3265 0.8275 0. 0. 0. ]
-----------run 6 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.380671, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.697662, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.786249, running train acc: 0.798
==>>> it: 101, mem avg. loss: 0.910452, running mem acc: 0.572
==>>> it: 201, avg. loss: 0.624987, running train acc: 0.817
==>>> it: 201, mem avg. loss: 0.780155, running mem acc: 0.643
==>>> it: 301, avg. loss: 0.550864, running train acc: 0.824
==>>> it: 301, mem avg. loss: 0.683071, running mem acc: 0.692
==>>> it: 401, avg. loss: 0.521842, running train acc: 0.828
==>>> it: 401, mem avg. loss: 0.629304, running mem acc: 0.717
==>>> it: 501, avg. loss: 0.487238, running train acc: 0.837
==>>> it: 501, mem avg. loss: 0.596092, running mem acc: 0.733
==>>> it: 601, avg. loss: 0.464447, running train acc: 0.843
==>>> it: 601, mem avg. loss: 0.554462, running mem acc: 0.752
==>>> it: 701, avg. loss: 0.441809, running train acc: 0.848
==>>> it: 701, mem avg. loss: 0.527731, running mem acc: 0.765
==>>> it: 801, avg. loss: 0.428194, running train acc: 0.852
==>>> it: 801, mem avg. loss: 0.501100, running mem acc: 0.779
==>>> it: 901, avg. loss: 0.419565, running train acc: 0.854
==>>> it: 901, mem avg. loss: 0.486859, running mem acc: 0.789
[0.068 0.1335 0.941 0. 0. ]
-----------run 6 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.423432, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.674228, running mem acc: 0.846
==>>> it: 101, avg. loss: 0.847151, running train acc: 0.779
==>>> it: 101, mem avg. loss: 0.549508, running mem acc: 0.800
==>>> it: 201, avg. loss: 0.652962, running train acc: 0.812
==>>> it: 201, mem avg. loss: 0.455598, running mem acc: 0.833
==>>> it: 301, avg. loss: 0.579488, running train acc: 0.821
==>>> it: 301, mem avg. loss: 0.418757, running mem acc: 0.846
==>>> it: 401, avg. loss: 0.521745, running train acc: 0.833
==>>> it: 401, mem avg. loss: 0.374765, running mem acc: 0.866
==>>> it: 501, avg. loss: 0.485937, running train acc: 0.841
==>>> it: 501, mem avg. loss: 0.342482, running mem acc: 0.879
==>>> it: 601, avg. loss: 0.462944, running train acc: 0.845
==>>> it: 601, mem avg. loss: 0.323622, running mem acc: 0.887
==>>> it: 701, avg. loss: 0.452683, running train acc: 0.845
==>>> it: 701, mem avg. loss: 0.300290, running mem acc: 0.897
==>>> it: 801, avg. loss: 0.433543, running train acc: 0.852
==>>> it: 801, mem avg. loss: 0.283672, running mem acc: 0.905
==>>> it: 901, avg. loss: 0.416371, running train acc: 0.857
==>>> it: 901, mem avg. loss: 0.267490, running mem acc: 0.911
[0.0295 0.0335 0.3655 0.945 0. ]
-----------run 6 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.284447, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.943774, running mem acc: 0.889
==>>> it: 101, avg. loss: 0.847721, running train acc: 0.793
==>>> it: 101, mem avg. loss: 0.462617, running mem acc: 0.803
==>>> it: 201, avg. loss: 0.649122, running train acc: 0.817
==>>> it: 201, mem avg. loss: 0.386294, running mem acc: 0.841
==>>> it: 301, avg. loss: 0.567485, running train acc: 0.829
==>>> it: 301, mem avg. loss: 0.335063, running mem acc: 0.862
==>>> it: 401, avg. loss: 0.517177, running train acc: 0.836
==>>> it: 401, mem avg. loss: 0.295141, running mem acc: 0.882
==>>> it: 501, avg. loss: 0.482536, running train acc: 0.845
==>>> it: 501, mem avg. loss: 0.267125, running mem acc: 0.897
==>>> it: 601, avg. loss: 0.460079, running train acc: 0.849
==>>> it: 601, mem avg. loss: 0.246241, running mem acc: 0.906
==>>> it: 701, avg. loss: 0.439992, running train acc: 0.855
==>>> it: 701, mem avg. loss: 0.227583, running mem acc: 0.915
==>>> it: 801, avg. loss: 0.418932, running train acc: 0.861
==>>> it: 801, mem avg. loss: 0.212699, running mem acc: 0.922
==>>> it: 901, avg. loss: 0.402512, running train acc: 0.866
==>>> it: 901, mem avg. loss: 0.203339, running mem acc: 0.927
[0.0405 0.0115 0.315 0.289 0.9535]
-----------run 6-----------avg_end_acc 0.32189999999999996-----------train time 309.109317779541
Task: 0, Labels:[7, 9]
Task: 1, Labels:[1, 4]
Task: 2, Labels:[0, 2]
Task: 3, Labels:[5, 6]
Task: 4, Labels:[3, 8]
buffer has 1000 slots
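"buffer has 1000 slots" is the size of the episodic memory used for replay. Under a random update policy, reservoir sampling is the standard way to keep the buffer a uniform sample of the stream; a self-contained sketch, not necessarily the repository's exact implementation:

import random

class ReservoirBuffer:
    def __init__(self, size):
        self.size = size    # e.g. 1000 slots, as reported in the log
        self.data = []
        self.n_seen = 0     # stream samples observed so far

    def update(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.size:
            self.data.append((x, y))           # fill empty slots first
        else:
            j = random.randrange(self.n_seen)  # uniform over all seen samples
            if j < self.size:
                self.data[j] = (x, y)          # replace a random slot

Every stream sample then sits in memory with probability size / n_seen, which is what keeps earlier tasks represented for replay.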
-----------run 7 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.432209, running train acc: 0.400
==>>> it: 1, mem avg. loss: 0.502799, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.806568, running train acc: 0.711
==>>> it: 101, mem avg. loss: 0.775603, running mem acc: 0.730
==>>> it: 201, avg. loss: 0.628490, running train acc: 0.768
==>>> it: 201, mem avg. loss: 0.686142, running mem acc: 0.761
==>>> it: 301, avg. loss: 0.549630, running train acc: 0.791
==>>> it: 301, mem avg. loss: 0.613751, running mem acc: 0.787
==>>> it: 401, avg. loss: 0.513314, running train acc: 0.802
==>>> it: 401, mem avg. loss: 0.551114, running mem acc: 0.809
==>>> it: 501, avg. loss: 0.492519, running train acc: 0.807
==>>> it: 501, mem avg. loss: 0.501593, running mem acc: 0.827
==>>> it: 601, avg. loss: 0.463498, running train acc: 0.818
==>>> it: 601, mem avg. loss: 0.459700, running mem acc: 0.843
==>>> it: 701, avg. loss: 0.445457, running train acc: 0.824
==>>> it: 701, mem avg. loss: 0.424516, running mem acc: 0.856
==>>> it: 801, avg. loss: 0.432733, running train acc: 0.829
==>>> it: 801, mem avg. loss: 0.397194, running mem acc: 0.866
==>>> it: 901, avg. loss: 0.416974, running train acc: 0.834
==>>> it: 901, mem avg. loss: 0.371714, running mem acc: 0.875
[0.887 0. 0. 0. 0. ]
-----------run 7 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.866856, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.668276, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.620646, running train acc: 0.867
==>>> it: 101, mem avg. loss: 1.159232, running mem acc: 0.502
==>>> it: 201, avg. loss: 0.505703, running train acc: 0.868
==>>> it: 201, mem avg. loss: 1.026982, running mem acc: 0.541
==>>> it: 301, avg. loss: 0.485911, running train acc: 0.861
==>>> it: 301, mem avg. loss: 0.957046, running mem acc: 0.574
==>>> it: 401, avg. loss: 0.465960, running train acc: 0.860
==>>> it: 401, mem avg. loss: 0.914308, running mem acc: 0.598
==>>> it: 501, avg. loss: 0.449142, running train acc: 0.857
==>>> it: 501, mem avg. loss: 0.867910, running mem acc: 0.623
==>>> it: 601, avg. loss: 0.426102, running train acc: 0.863
==>>> it: 601, mem avg. loss: 0.825880, running mem acc: 0.639
==>>> it: 701, avg. loss: 0.410471, running train acc: 0.865
==>>> it: 701, mem avg. loss: 0.799230, running mem acc: 0.648
==>>> it: 801, avg. loss: 0.395236, running train acc: 0.867
==>>> it: 801, mem avg. loss: 0.776138, running mem acc: 0.657
==>>> it: 901, avg. loss: 0.380330, running train acc: 0.870
==>>> it: 901, mem avg. loss: 0.763611, running mem acc: 0.659
[0.0905 0.9665 0. 0. 0. ]
-----------run 7 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.003123, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.728218, running mem acc: 0.500
==>>> it: 101, avg. loss: 0.984202, running train acc: 0.714
==>>> it: 101, mem avg. loss: 0.825758, running mem acc: 0.661
==>>> it: 201, avg. loss: 0.801169, running train acc: 0.741
==>>> it: 201, mem avg. loss: 0.758190, running mem acc: 0.683
==>>> it: 301, avg. loss: 0.752485, running train acc: 0.739
==>>> it: 301, mem avg. loss: 0.726268, running mem acc: 0.695
==>>> it: 401, avg. loss: 0.738174, running train acc: 0.738
==>>> it: 401, mem avg. loss: 0.701266, running mem acc: 0.708
==>>> it: 501, avg. loss: 0.717411, running train acc: 0.740
==>>> it: 501, mem avg. loss: 0.666258, running mem acc: 0.728
==>>> it: 601, avg. loss: 0.698323, running train acc: 0.744
==>>> it: 601, mem avg. loss: 0.645752, running mem acc: 0.736
==>>> it: 701, avg. loss: 0.680908, running train acc: 0.750
==>>> it: 701, mem avg. loss: 0.623501, running mem acc: 0.747
==>>> it: 801, avg. loss: 0.673067, running train acc: 0.750
==>>> it: 801, mem avg. loss: 0.606041, running mem acc: 0.753
==>>> it: 901, avg. loss: 0.656307, running train acc: 0.755
==>>> it: 901, mem avg. loss: 0.589658, running mem acc: 0.760
[0.1065 0.337 0.8755 0. 0. ]
-----------run 7 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.248326, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.139964, running mem acc: 0.714
==>>> it: 101, avg. loss: 1.107239, running train acc: 0.639
==>>> it: 101, mem avg. loss: 0.628489, running mem acc: 0.767
==>>> it: 201, avg. loss: 0.888310, running train acc: 0.688
==>>> it: 201, mem avg. loss: 0.580807, running mem acc: 0.777
==>>> it: 301, avg. loss: 0.799422, running train acc: 0.715
==>>> it: 301, mem avg. loss: 0.537419, running mem acc: 0.791
==>>> it: 401, avg. loss: 0.746363, running train acc: 0.732
==>>> it: 401, mem avg. loss: 0.499567, running mem acc: 0.807
==>>> it: 501, avg. loss: 0.703436, running train acc: 0.745
==>>> it: 501, mem avg. loss: 0.460955, running mem acc: 0.826
==>>> it: 601, avg. loss: 0.680475, running train acc: 0.750
==>>> it: 601, mem avg. loss: 0.429636, running mem acc: 0.840
==>>> it: 701, avg. loss: 0.655954, running train acc: 0.759
==>>> it: 701, mem avg. loss: 0.404988, running mem acc: 0.850
==>>> it: 801, avg. loss: 0.632381, running train acc: 0.768
==>>> it: 801, mem avg. loss: 0.381648, running mem acc: 0.859
==>>> it: 901, avg. loss: 0.613075, running train acc: 0.775
==>>> it: 901, mem avg. loss: 0.368835, running mem acc: 0.864
[0.0975 0.31 0.2625 0.8925 0. ]
-----------run 7 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.846688, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.921604, running mem acc: 0.833
==>>> it: 101, avg. loss: 0.856391, running train acc: 0.796
==>>> it: 101, mem avg. loss: 0.480577, running mem acc: 0.788
==>>> it: 201, avg. loss: 0.680274, running train acc: 0.815
==>>> it: 201, mem avg. loss: 0.412324, running mem acc: 0.815
==>>> it: 301, avg. loss: 0.606197, running train acc: 0.824
==>>> it: 301, mem avg. loss: 0.369209, running mem acc: 0.840
==>>> it: 401, avg. loss: 0.552931, running train acc: 0.833
==>>> it: 401, mem avg. loss: 0.328025, running mem acc: 0.862
==>>> it: 501, avg. loss: 0.526849, running train acc: 0.835
==>>> it: 501, mem avg. loss: 0.293405, running mem acc: 0.880
==>>> it: 601, avg. loss: 0.500980, running train acc: 0.840
==>>> it: 601, mem avg. loss: 0.269533, running mem acc: 0.892
==>>> it: 701, avg. loss: 0.476191, running train acc: 0.847
==>>> it: 701, mem avg. loss: 0.248572, running mem acc: 0.903
==>>> it: 801, avg. loss: 0.453379, running train acc: 0.853
==>>> it: 801, mem avg. loss: 0.230873, running mem acc: 0.912
==>>> it: 901, avg. loss: 0.436978, running train acc: 0.858
==>>> it: 901, mem avg. loss: 0.217873, running mem acc: 0.918
[0.0555 0.1005 0.0265 0.244 0.9315]
-----------run 7-----------avg_end_acc 0.2716-----------train time 306.39585041999817
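Stacking a run's bracketed rows gives the usual task-accuracy matrix: entry (i, j) is test accuracy on task j after training task i, with zeros for tasks not yet seen. The sharp drop down each column is catastrophic forgetting; one common summary is best-past minus final accuracy per task, sketched here on run 7's logged numbers:

import numpy as np

# rows logged after each of run 7's five training batches
acc = np.array([
    [0.887,  0.,     0.,     0.,     0.    ],
    [0.0905, 0.9665, 0.,     0.,     0.    ],
    [0.1065, 0.337,  0.8755, 0.,     0.    ],
    [0.0975, 0.31,   0.2625, 0.8925, 0.    ],
    [0.0555, 0.1005, 0.0265, 0.244,  0.9315],
])
forgetting = acc[:-1].max(axis=0)[:-1] - acc[-1, :-1]
print(forgetting.round(4))  # [0.8315 0.866  0.849  0.6485] -> heavy forgetting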
Task: 0, Labels:[7, 1]
Task: 1, Labels:[9, 4]
Task: 2, Labels:[8, 0]
Task: 3, Labels:[3, 2]
Task: 4, Labels:[6, 5]
buffer has 1000 slots
-----------run 8 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.185376, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.973085, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.723221, running train acc: 0.735
==>>> it: 101, mem avg. loss: 0.566542, running mem acc: 0.786
==>>> it: 201, avg. loss: 0.593580, running train acc: 0.777
==>>> it: 201, mem avg. loss: 0.508974, running mem acc: 0.806
==>>> it: 301, avg. loss: 0.514818, running train acc: 0.806
==>>> it: 301, mem avg. loss: 0.452917, running mem acc: 0.829
==>>> it: 401, avg. loss: 0.458474, running train acc: 0.827
==>>> it: 401, mem avg. loss: 0.403201, running mem acc: 0.848
==>>> it: 501, avg. loss: 0.430380, running train acc: 0.837
==>>> it: 501, mem avg. loss: 0.366419, running mem acc: 0.862
==>>> it: 601, avg. loss: 0.397709, running train acc: 0.850
==>>> it: 601, mem avg. loss: 0.337331, running mem acc: 0.874
==>>> it: 701, avg. loss: 0.383175, running train acc: 0.856
==>>> it: 701, mem avg. loss: 0.309314, running mem acc: 0.885
==>>> it: 801, avg. loss: 0.366685, running train acc: 0.862
==>>> it: 801, mem avg. loss: 0.287664, running mem acc: 0.894
==>>> it: 901, avg. loss: 0.349552, running train acc: 0.868
==>>> it: 901, mem avg. loss: 0.268029, running mem acc: 0.902
[0.9575 0. 0. 0. 0. ]
-----------run 8 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.831578, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.865294, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.717396, running train acc: 0.799
==>>> it: 101, mem avg. loss: 1.025523, running mem acc: 0.534
==>>> it: 201, avg. loss: 0.580532, running train acc: 0.828
==>>> it: 201, mem avg. loss: 0.967457, running mem acc: 0.546
==>>> it: 301, avg. loss: 0.538485, running train acc: 0.831
==>>> it: 301, mem avg. loss: 0.902653, running mem acc: 0.575
==>>> it: 401, avg. loss: 0.521209, running train acc: 0.829
==>>> it: 401, mem avg. loss: 0.852896, running mem acc: 0.597
==>>> it: 501, avg. loss: 0.496423, running train acc: 0.833
==>>> it: 501, mem avg. loss: 0.803995, running mem acc: 0.620
==>>> it: 601, avg. loss: 0.476398, running train acc: 0.834
==>>> it: 601, mem avg. loss: 0.778463, running mem acc: 0.634
==>>> it: 701, avg. loss: 0.458051, running train acc: 0.839
==>>> it: 701, mem avg. loss: 0.748021, running mem acc: 0.649
==>>> it: 801, avg. loss: 0.449109, running train acc: 0.841
==>>> it: 801, mem avg. loss: 0.721439, running mem acc: 0.662
==>>> it: 901, avg. loss: 0.432462, running train acc: 0.845
==>>> it: 901, mem avg. loss: 0.704087, running mem acc: 0.672
[0.1425 0.924 0. 0. 0. ]
-----------run 8 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.724642, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.841709, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.100936, running train acc: 0.586
==>>> it: 101, mem avg. loss: 0.845550, running mem acc: 0.657
==>>> it: 201, avg. loss: 0.989865, running train acc: 0.616
==>>> it: 201, mem avg. loss: 0.824728, running mem acc: 0.663
==>>> it: 301, avg. loss: 0.931488, running train acc: 0.632
==>>> it: 301, mem avg. loss: 0.794369, running mem acc: 0.675
==>>> it: 401, avg. loss: 0.877034, running train acc: 0.655
==>>> it: 401, mem avg. loss: 0.760737, running mem acc: 0.690
==>>> it: 501, avg. loss: 0.833871, running train acc: 0.674
==>>> it: 501, mem avg. loss: 0.726705, running mem acc: 0.706
==>>> it: 601, avg. loss: 0.808033, running train acc: 0.682
==>>> it: 601, mem avg. loss: 0.715010, running mem acc: 0.710
==>>> it: 701, avg. loss: 0.778459, running train acc: 0.697
==>>> it: 701, mem avg. loss: 0.699993, running mem acc: 0.715
==>>> it: 801, avg. loss: 0.753854, running train acc: 0.707
==>>> it: 801, mem avg. loss: 0.685571, running mem acc: 0.723
==>>> it: 901, avg. loss: 0.731245, running train acc: 0.716
==>>> it: 901, mem avg. loss: 0.674251, running mem acc: 0.726
[0.306 0.547 0.8595 0. 0. ]
-----------run 8 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.856245, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.110275, running mem acc: 0.857
==>>> it: 101, avg. loss: 1.214646, running train acc: 0.535
==>>> it: 101, mem avg. loss: 0.667750, running mem acc: 0.764
==>>> it: 201, avg. loss: 1.010580, running train acc: 0.592
==>>> it: 201, mem avg. loss: 0.573138, running mem acc: 0.797
==>>> it: 301, avg. loss: 0.926618, running train acc: 0.623
==>>> it: 301, mem avg. loss: 0.514130, running mem acc: 0.820
==>>> it: 401, avg. loss: 0.878962, running train acc: 0.638
==>>> it: 401, mem avg. loss: 0.485587, running mem acc: 0.834
==>>> it: 501, avg. loss: 0.841685, running train acc: 0.654
==>>> it: 501, mem avg. loss: 0.447876, running mem acc: 0.848
==>>> it: 601, avg. loss: 0.815672, running train acc: 0.663
==>>> it: 601, mem avg. loss: 0.428996, running mem acc: 0.856
==>>> it: 701, avg. loss: 0.792757, running train acc: 0.674
==>>> it: 701, mem avg. loss: 0.412280, running mem acc: 0.862
==>>> it: 801, avg. loss: 0.769619, running train acc: 0.685
==>>> it: 801, mem avg. loss: 0.394599, running mem acc: 0.870
==>>> it: 901, avg. loss: 0.753396, running train acc: 0.691
==>>> it: 901, mem avg. loss: 0.382451, running mem acc: 0.874
[0.0895 0.1115 0.3855 0.819 0. ]
-----------run 8 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.420528, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.890247, running mem acc: 0.889
==>>> it: 101, avg. loss: 1.177510, running train acc: 0.658
==>>> it: 101, mem avg. loss: 0.546550, running mem acc: 0.795
==>>> it: 201, avg. loss: 0.902865, running train acc: 0.718
==>>> it: 201, mem avg. loss: 0.477786, running mem acc: 0.816
==>>> it: 301, avg. loss: 0.790782, running train acc: 0.743
==>>> it: 301, mem avg. loss: 0.422779, running mem acc: 0.840
==>>> it: 401, avg. loss: 0.723794, running train acc: 0.759
==>>> it: 401, mem avg. loss: 0.384889, running mem acc: 0.860
==>>> it: 501, avg. loss: 0.669197, running train acc: 0.774
==>>> it: 501, mem avg. loss: 0.345020, running mem acc: 0.878
==>>> it: 601, avg. loss: 0.626676, running train acc: 0.785
==>>> it: 601, mem avg. loss: 0.312398, running mem acc: 0.891
==>>> it: 701, avg. loss: 0.596344, running train acc: 0.794
==>>> it: 701, mem avg. loss: 0.293603, running mem acc: 0.900
==>>> it: 801, avg. loss: 0.580158, running train acc: 0.799
==>>> it: 801, mem avg. loss: 0.275828, running mem acc: 0.907
==>>> it: 901, avg. loss: 0.557437, running train acc: 0.806
==>>> it: 901, mem avg. loss: 0.258709, running mem acc: 0.914
[0.083 0.1835 0.5465 0.0735 0.9215]
-----------run 8-----------avg_end_acc 0.3616-----------train time 308.8277337551117
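The very large "avg. loss" at it: 1 of every new training batch (typically 8-12) is simply the cross-entropy of a classifier that puts near-zero probability on classes it has never seen; the loss value maps directly to that probability:

import math

# run 8, batch 4, it: 1 logs avg. loss 10.420528, implying the model put
# roughly e^(-10.42) of its probability mass on the true (brand-new) class
print(math.exp(-10.420528))  # ~3.0e-05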
Task: 0, Labels:[0, 5]
Task: 1, Labels:[9, 1]
Task: 2, Labels:[6, 2]
Task: 3, Labels:[8, 3]
Task: 4, Labels:[4, 7]
buffer has 1000 slots
-----------run 9 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.578648, running train acc: 0.200
==>>> it: 1, mem avg. loss: 0.793894, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.678692, running train acc: 0.752
==>>> it: 101, mem avg. loss: 0.584907, running mem acc: 0.761
==>>> it: 201, avg. loss: 0.574671, running train acc: 0.781
==>>> it: 201, mem avg. loss: 0.516954, running mem acc: 0.790
==>>> it: 301, avg. loss: 0.511278, running train acc: 0.802
==>>> it: 301, mem avg. loss: 0.462472, running mem acc: 0.812
==>>> it: 401, avg. loss: 0.478187, running train acc: 0.815
==>>> it: 401, mem avg. loss: 0.421764, running mem acc: 0.829
==>>> it: 501, avg. loss: 0.444526, running train acc: 0.827
==>>> it: 501, mem avg. loss: 0.379436, running mem acc: 0.848
==>>> it: 601, avg. loss: 0.421804, running train acc: 0.837
==>>> it: 601, mem avg. loss: 0.345904, running mem acc: 0.862
==>>> it: 701, avg. loss: 0.404279, running train acc: 0.842
==>>> it: 701, mem avg. loss: 0.320548, running mem acc: 0.873
==>>> it: 801, avg. loss: 0.387482, running train acc: 0.850
==>>> it: 801, mem avg. loss: 0.296985, running mem acc: 0.883
==>>> it: 901, avg. loss: 0.376455, running train acc: 0.853
==>>> it: 901, mem avg. loss: 0.277092, running mem acc: 0.892
[0.8665 0. 0. 0. 0. ]
-----------run 9 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.795288, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.787182, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.134111, running train acc: 0.527
==>>> it: 101, mem avg. loss: 1.185038, running mem acc: 0.490
==>>> it: 201, avg. loss: 0.981864, running train acc: 0.573
==>>> it: 201, mem avg. loss: 1.083994, running mem acc: 0.545
==>>> it: 301, avg. loss: 0.905464, running train acc: 0.606
==>>> it: 301, mem avg. loss: 1.003841, running mem acc: 0.575
==>>> it: 401, avg. loss: 0.856190, running train acc: 0.630
==>>> it: 401, mem avg. loss: 0.948666, running mem acc: 0.595
==>>> it: 501, avg. loss: 0.811648, running train acc: 0.651
==>>> it: 501, mem avg. loss: 0.912344, running mem acc: 0.611
==>>> it: 601, avg. loss: 0.785653, running train acc: 0.665
==>>> it: 601, mem avg. loss: 0.878737, running mem acc: 0.631
==>>> it: 701, avg. loss: 0.757948, running train acc: 0.678
==>>> it: 701, mem avg. loss: 0.856800, running mem acc: 0.640
==>>> it: 801, avg. loss: 0.743466, running train acc: 0.685
==>>> it: 801, mem avg. loss: 0.835606, running mem acc: 0.651
==>>> it: 901, avg. loss: 0.724212, running train acc: 0.694
==>>> it: 901, mem avg. loss: 0.831147, running mem acc: 0.648
[0.3985 0.8615 0. 0. 0. ]
-----------run 9 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.231029, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.469865, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.133250, running train acc: 0.610
==>>> it: 101, mem avg. loss: 0.934501, running mem acc: 0.579
==>>> it: 201, avg. loss: 0.944353, running train acc: 0.650
==>>> it: 201, mem avg. loss: 0.813978, running mem acc: 0.655
==>>> it: 301, avg. loss: 0.877225, running train acc: 0.667
==>>> it: 301, mem avg. loss: 0.748309, running mem acc: 0.692
==>>> it: 401, avg. loss: 0.846221, running train acc: 0.674
==>>> it: 401, mem avg. loss: 0.718077, running mem acc: 0.715
==>>> it: 501, avg. loss: 0.813084, running train acc: 0.686
==>>> it: 501, mem avg. loss: 0.693702, running mem acc: 0.727
==>>> it: 601, avg. loss: 0.773570, running train acc: 0.700
==>>> it: 601, mem avg. loss: 0.676231, running mem acc: 0.738
==>>> it: 701, avg. loss: 0.749369, running train acc: 0.709
==>>> it: 701, mem avg. loss: 0.651723, running mem acc: 0.750
==>>> it: 801, avg. loss: 0.730761, running train acc: 0.715
==>>> it: 801, mem avg. loss: 0.629751, running mem acc: 0.759
==>>> it: 901, avg. loss: 0.715244, running train acc: 0.723
==>>> it: 901, mem avg. loss: 0.619569, running mem acc: 0.762
[0.143 0.499 0.783 0. 0. ]
-----------run 9 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.804684, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.859754, running mem acc: 0.571
==>>> it: 101, avg. loss: 0.841302, running train acc: 0.787
==>>> it: 101, mem avg. loss: 0.617437, running mem acc: 0.742
==>>> it: 201, avg. loss: 0.637405, running train acc: 0.820
==>>> it: 201, mem avg. loss: 0.487660, running mem acc: 0.804
==>>> it: 301, avg. loss: 0.579326, running train acc: 0.823
==>>> it: 301, mem avg. loss: 0.437252, running mem acc: 0.829
==>>> it: 401, avg. loss: 0.546411, running train acc: 0.829
==>>> it: 401, mem avg. loss: 0.396760, running mem acc: 0.848
==>>> it: 501, avg. loss: 0.506200, running train acc: 0.839
==>>> it: 501, mem avg. loss: 0.369651, running mem acc: 0.860
==>>> it: 601, avg. loss: 0.477688, running train acc: 0.846
==>>> it: 601, mem avg. loss: 0.348175, running mem acc: 0.870
==>>> it: 701, avg. loss: 0.464924, running train acc: 0.849
==>>> it: 701, mem avg. loss: 0.336637, running mem acc: 0.876
==>>> it: 801, avg. loss: 0.455144, running train acc: 0.851
==>>> it: 801, mem avg. loss: 0.317373, running mem acc: 0.885
==>>> it: 901, avg. loss: 0.438855, running train acc: 0.855
==>>> it: 901, mem avg. loss: 0.307148, running mem acc: 0.889
[0.01 0.2835 0.3235 0.9265 0. ]
-----------run 9 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.302546, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.454511, running mem acc: 0.941
==>>> it: 101, avg. loss: 1.228580, running train acc: 0.598
==>>> it: 101, mem avg. loss: 0.451256, running mem acc: 0.839
==>>> it: 201, avg. loss: 0.954106, running train acc: 0.670
==>>> it: 201, mem avg. loss: 0.389676, running mem acc: 0.862
==>>> it: 301, avg. loss: 0.834585, running train acc: 0.706
==>>> it: 301, mem avg. loss: 0.350143, running mem acc: 0.875
==>>> it: 401, avg. loss: 0.792571, running train acc: 0.713
==>>> it: 401, mem avg. loss: 0.320012, running mem acc: 0.890
==>>> it: 501, avg. loss: 0.751643, running train acc: 0.727
==>>> it: 501, mem avg. loss: 0.299788, running mem acc: 0.897
==>>> it: 601, avg. loss: 0.721062, running train acc: 0.736
==>>> it: 601, mem avg. loss: 0.283865, running mem acc: 0.904
==>>> it: 701, avg. loss: 0.693177, running train acc: 0.745
==>>> it: 701, mem avg. loss: 0.270289, running mem acc: 0.911
==>>> it: 801, avg. loss: 0.671987, running train acc: 0.751
==>>> it: 801, mem avg. loss: 0.255584, running mem acc: 0.916
==>>> it: 901, avg. loss: 0.656896, running train acc: 0.756
==>>> it: 901, mem avg. loss: 0.243938, running mem acc: 0.921
[0.0155 0.4105 0.0445 0.3925 0.844 ]
-----------run 9-----------avg_end_acc 0.3414-----------train time 308.33262848854065
Task: 0, Labels:[0, 8]
Task: 1, Labels:[2, 9]
Task: 2, Labels:[5, 6]
Task: 3, Labels:[3, 1]
Task: 4, Labels:[7, 4]
buffer has 1000 slots
-----------run 10 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.480733, running train acc: 0.300
==>>> it: 1, mem avg. loss: 0.699621, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.937143, running train acc: 0.552
==>>> it: 101, mem avg. loss: 0.846772, running mem acc: 0.606
==>>> it: 201, avg. loss: 0.835025, running train acc: 0.577
==>>> it: 201, mem avg. loss: 0.817317, running mem acc: 0.611
==>>> it: 301, avg. loss: 0.768048, running train acc: 0.605
==>>> it: 301, mem avg. loss: 0.779253, running mem acc: 0.633
==>>> it: 401, avg. loss: 0.736491, running train acc: 0.623
==>>> it: 401, mem avg. loss: 0.740048, running mem acc: 0.653
==>>> it: 501, avg. loss: 0.713155, running train acc: 0.635
==>>> it: 501, mem avg. loss: 0.707498, running mem acc: 0.674
==>>> it: 601, avg. loss: 0.696362, running train acc: 0.644
==>>> it: 601, mem avg. loss: 0.675924, running mem acc: 0.689
==>>> it: 701, avg. loss: 0.679482, running train acc: 0.653
==>>> it: 701, mem avg. loss: 0.649191, running mem acc: 0.701
==>>> it: 801, avg. loss: 0.667359, running train acc: 0.661
==>>> it: 801, mem avg. loss: 0.621584, running mem acc: 0.717
==>>> it: 901, avg. loss: 0.658537, running train acc: 0.667
==>>> it: 901, mem avg. loss: 0.598678, running mem acc: 0.732
[0.7175 0. 0. 0. 0. ]
-----------run 10 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 7.212744, running train acc: 0.000
==>>> it: 1, mem avg. loss: 3.051097, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.872109, running train acc: 0.732
==>>> it: 101, mem avg. loss: 1.373157, running mem acc: 0.456
==>>> it: 201, avg. loss: 0.731366, running train acc: 0.758
==>>> it: 201, mem avg. loss: 1.237339, running mem acc: 0.507
==>>> it: 301, avg. loss: 0.681080, running train acc: 0.765
==>>> it: 301, mem avg. loss: 1.161784, running mem acc: 0.543
==>>> it: 401, avg. loss: 0.644363, running train acc: 0.772
==>>> it: 401, mem avg. loss: 1.103935, running mem acc: 0.572
==>>> it: 501, avg. loss: 0.618954, running train acc: 0.775
==>>> it: 501, mem avg. loss: 1.085199, running mem acc: 0.584
==>>> it: 601, avg. loss: 0.596515, running train acc: 0.780
==>>> it: 601, mem avg. loss: 1.053903, running mem acc: 0.597
==>>> it: 701, avg. loss: 0.576029, running train acc: 0.785
==>>> it: 701, mem avg. loss: 1.029675, running mem acc: 0.604
==>>> it: 801, avg. loss: 0.563999, running train acc: 0.788
==>>> it: 801, mem avg. loss: 0.994030, running mem acc: 0.613
==>>> it: 901, avg. loss: 0.549429, running train acc: 0.792
==>>> it: 901, mem avg. loss: 0.978237, running mem acc: 0.619
[0.1585 0.8975 0. 0. 0. ]
-----------run 10 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.774612, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.783087, running mem acc: 0.600
==>>> it: 101, avg. loss: 1.311800, running train acc: 0.549
==>>> it: 101, mem avg. loss: 1.052957, running mem acc: 0.579
==>>> it: 201, avg. loss: 1.054549, running train acc: 0.619
==>>> it: 201, mem avg. loss: 0.961764, running mem acc: 0.627
==>>> it: 301, avg. loss: 0.971586, running train acc: 0.642
==>>> it: 301, mem avg. loss: 0.911642, running mem acc: 0.649
==>>> it: 401, avg. loss: 0.929327, running train acc: 0.651
==>>> it: 401, mem avg. loss: 0.895224, running mem acc: 0.657
==>>> it: 501, avg. loss: 0.894500, running train acc: 0.659
==>>> it: 501, mem avg. loss: 0.863130, running mem acc: 0.668
==>>> it: 601, avg. loss: 0.854971, running train acc: 0.671
==>>> it: 601, mem avg. loss: 0.823267, running mem acc: 0.681
==>>> it: 701, avg. loss: 0.823345, running train acc: 0.683
==>>> it: 701, mem avg. loss: 0.787832, running mem acc: 0.694
==>>> it: 801, avg. loss: 0.808383, running train acc: 0.689
==>>> it: 801, mem avg. loss: 0.764949, running mem acc: 0.705
==>>> it: 901, avg. loss: 0.786281, running train acc: 0.696
==>>> it: 901, mem avg. loss: 0.734392, running mem acc: 0.718
[0.245 0.2605 0.8285 0. 0. ]
-----------run 10 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.464813, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.627914, running mem acc: 0.643
==>>> it: 101, avg. loss: 0.932325, running train acc: 0.776
==>>> it: 101, mem avg. loss: 0.766942, running mem acc: 0.665
==>>> it: 201, avg. loss: 0.704261, running train acc: 0.810
==>>> it: 201, mem avg. loss: 0.633885, running mem acc: 0.714
==>>> it: 301, avg. loss: 0.618198, running train acc: 0.825
==>>> it: 301, mem avg. loss: 0.552695, running mem acc: 0.751
==>>> it: 401, avg. loss: 0.555090, running train acc: 0.835
==>>> it: 401, mem avg. loss: 0.499570, running mem acc: 0.780
==>>> it: 501, avg. loss: 0.517506, running train acc: 0.840
==>>> it: 501, mem avg. loss: 0.455970, running mem acc: 0.805
==>>> it: 601, avg. loss: 0.488042, running train acc: 0.846
==>>> it: 601, mem avg. loss: 0.418027, running mem acc: 0.827
==>>> it: 701, avg. loss: 0.468424, running train acc: 0.848
==>>> it: 701, mem avg. loss: 0.385566, running mem acc: 0.842
==>>> it: 801, avg. loss: 0.450186, running train acc: 0.851
==>>> it: 801, mem avg. loss: 0.363532, running mem acc: 0.855
==>>> it: 901, avg. loss: 0.437701, running train acc: 0.854
==>>> it: 901, mem avg. loss: 0.340555, running mem acc: 0.866
[0.1365 0.03 0.312 0.902 0. ]
-----------run 10 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.200311, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.043538, running mem acc: 0.833
==>>> it: 101, avg. loss: 1.284101, running train acc: 0.613
==>>> it: 101, mem avg. loss: 0.520788, running mem acc: 0.806
==>>> it: 201, avg. loss: 1.010886, running train acc: 0.655
==>>> it: 201, mem avg. loss: 0.426296, running mem acc: 0.845
==>>> it: 301, avg. loss: 0.901834, running train acc: 0.670
==>>> it: 301, mem avg. loss: 0.374281, running mem acc: 0.868
==>>> it: 401, avg. loss: 0.853710, running train acc: 0.679
==>>> it: 401, mem avg. loss: 0.341801, running mem acc: 0.882
==>>> it: 501, avg. loss: 0.816078, running train acc: 0.690
==>>> it: 501, mem avg. loss: 0.318260, running mem acc: 0.894
==>>> it: 601, avg. loss: 0.781968, running train acc: 0.699
==>>> it: 601, mem avg. loss: 0.296417, running mem acc: 0.902
==>>> it: 701, avg. loss: 0.754162, running train acc: 0.709
==>>> it: 701, mem avg. loss: 0.280220, running mem acc: 0.909
==>>> it: 801, avg. loss: 0.725706, running train acc: 0.718
==>>> it: 801, mem avg. loss: 0.265815, running mem acc: 0.915
==>>> it: 901, avg. loss: 0.712011, running train acc: 0.720
==>>> it: 901, mem avg. loss: 0.254353, running mem acc: 0.919
[0.131 0.0295 0.1155 0.3975 0.8355]
-----------run 10-----------avg_end_acc 0.30179999999999996-----------train time 301.6666007041931
Task: 0, Labels:[9, 8]
Task: 1, Labels:[7, 3]
Task: 2, Labels:[4, 2]
Task: 3, Labels:[6, 1]
Task: 4, Labels:[0, 5]
buffer has 1000 slots
-----------run 11 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.866596, running train acc: 0.300
==>>> it: 1, mem avg. loss: 0.894854, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.734558, running train acc: 0.699
==>>> it: 101, mem avg. loss: 0.695418, running mem acc: 0.709
==>>> it: 201, avg. loss: 0.626420, running train acc: 0.737
==>>> it: 201, mem avg. loss: 0.618876, running mem acc: 0.741
==>>> it: 301, avg. loss: 0.579040, running train acc: 0.758
==>>> it: 301, mem avg. loss: 0.552310, running mem acc: 0.770
==>>> it: 401, avg. loss: 0.524722, running train acc: 0.783
==>>> it: 401, mem avg. loss: 0.499878, running mem acc: 0.792
==>>> it: 501, avg. loss: 0.503481, running train acc: 0.794
==>>> it: 501, mem avg. loss: 0.458065, running mem acc: 0.812
==>>> it: 601, avg. loss: 0.480478, running train acc: 0.805
==>>> it: 601, mem avg. loss: 0.424175, running mem acc: 0.827
==>>> it: 701, avg. loss: 0.464682, running train acc: 0.812
==>>> it: 701, mem avg. loss: 0.392485, running mem acc: 0.841
==>>> it: 801, avg. loss: 0.443784, running train acc: 0.820
==>>> it: 801, mem avg. loss: 0.362825, running mem acc: 0.854
==>>> it: 901, avg. loss: 0.431304, running train acc: 0.827
==>>> it: 901, mem avg. loss: 0.339628, running mem acc: 0.865
[0.91 0. 0. 0. 0. ]
-----------run 11 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.426595, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.715674, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.062058, running train acc: 0.599
==>>> it: 101, mem avg. loss: 1.260317, running mem acc: 0.461
==>>> it: 201, avg. loss: 0.896191, running train acc: 0.646
==>>> it: 201, mem avg. loss: 1.220848, running mem acc: 0.479
==>>> it: 301, avg. loss: 0.834901, running train acc: 0.661
==>>> it: 301, mem avg. loss: 1.180614, running mem acc: 0.516
==>>> it: 401, avg. loss: 0.802750, running train acc: 0.670
==>>> it: 401, mem avg. loss: 1.145715, running mem acc: 0.537
==>>> it: 501, avg. loss: 0.779240, running train acc: 0.676
==>>> it: 501, mem avg. loss: 1.132704, running mem acc: 0.551
==>>> it: 601, avg. loss: 0.767576, running train acc: 0.679
==>>> it: 601, mem avg. loss: 1.127593, running mem acc: 0.559
==>>> it: 701, avg. loss: 0.751053, running train acc: 0.687
==>>> it: 701, mem avg. loss: 1.133227, running mem acc: 0.562
==>>> it: 801, avg. loss: 0.736815, running train acc: 0.692
==>>> it: 801, mem avg. loss: 1.125414, running mem acc: 0.567
==>>> it: 901, avg. loss: 0.727017, running train acc: 0.698
==>>> it: 901, mem avg. loss: 1.120913, running mem acc: 0.569
[0.276 0.7775 0. 0. 0. ]
-----------run 11 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.630002, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.778949, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.285081, running train acc: 0.525
==>>> it: 101, mem avg. loss: 1.175036, running mem acc: 0.498
==>>> it: 201, avg. loss: 1.077748, running train acc: 0.566
==>>> it: 201, mem avg. loss: 1.051109, running mem acc: 0.543
==>>> it: 301, avg. loss: 1.007846, running train acc: 0.588
==>>> it: 301, mem avg. loss: 0.969975, running mem acc: 0.587
==>>> it: 401, avg. loss: 0.956610, running train acc: 0.608
==>>> it: 401, mem avg. loss: 0.915370, running mem acc: 0.615
==>>> it: 501, avg. loss: 0.915851, running train acc: 0.621
==>>> it: 501, mem avg. loss: 0.862781, running mem acc: 0.641
==>>> it: 601, avg. loss: 0.894699, running train acc: 0.628
==>>> it: 601, mem avg. loss: 0.820995, running mem acc: 0.663
==>>> it: 701, avg. loss: 0.872155, running train acc: 0.635
==>>> it: 701, mem avg. loss: 0.786634, running mem acc: 0.677
==>>> it: 801, avg. loss: 0.853289, running train acc: 0.641
==>>> it: 801, mem avg. loss: 0.758655, running mem acc: 0.688
==>>> it: 901, avg. loss: 0.841169, running train acc: 0.646
==>>> it: 901, mem avg. loss: 0.743427, running mem acc: 0.692
[0.4015 0.067 0.677 0. 0. ]
-----------run 11 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.213817, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.138823, running mem acc: 0.714
==>>> it: 101, avg. loss: 0.973278, running train acc: 0.751
==>>> it: 101, mem avg. loss: 0.613684, running mem acc: 0.749
==>>> it: 201, avg. loss: 0.678797, running train acc: 0.807
==>>> it: 201, mem avg. loss: 0.531081, running mem acc: 0.794
==>>> it: 301, avg. loss: 0.597314, running train acc: 0.818
==>>> it: 301, mem avg. loss: 0.496109, running mem acc: 0.805
==>>> it: 401, avg. loss: 0.535759, running train acc: 0.831
==>>> it: 401, mem avg. loss: 0.464941, running mem acc: 0.820
==>>> it: 501, avg. loss: 0.495699, running train acc: 0.840
==>>> it: 501, mem avg. loss: 0.437359, running mem acc: 0.832
==>>> it: 601, avg. loss: 0.473802, running train acc: 0.847
==>>> it: 601, mem avg. loss: 0.421169, running mem acc: 0.835
==>>> it: 701, avg. loss: 0.448676, running train acc: 0.852
==>>> it: 701, mem avg. loss: 0.403502, running mem acc: 0.843
==>>> it: 801, avg. loss: 0.446927, running train acc: 0.852
==>>> it: 801, mem avg. loss: 0.385660, running mem acc: 0.852
==>>> it: 901, avg. loss: 0.427225, running train acc: 0.856
==>>> it: 901, mem avg. loss: 0.374806, running mem acc: 0.857
[0.136 0.129 0.2655 0.9265 0. ]
-----------run 11 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.729347, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.452760, running mem acc: 0.722
==>>> it: 101, avg. loss: 0.963393, running train acc: 0.755
==>>> it: 101, mem avg. loss: 0.421143, running mem acc: 0.834
==>>> it: 201, avg. loss: 0.715769, running train acc: 0.796
==>>> it: 201, mem avg. loss: 0.353416, running mem acc: 0.868
==>>> it: 301, avg. loss: 0.623355, running train acc: 0.811
==>>> it: 301, mem avg. loss: 0.310526, running mem acc: 0.887
==>>> it: 401, avg. loss: 0.562926, running train acc: 0.823
==>>> it: 401, mem avg. loss: 0.284529, running mem acc: 0.900
==>>> it: 501, avg. loss: 0.529499, running train acc: 0.829
==>>> it: 501, mem avg. loss: 0.267595, running mem acc: 0.907
==>>> it: 601, avg. loss: 0.512076, running train acc: 0.833
==>>> it: 601, mem avg. loss: 0.252187, running mem acc: 0.914
==>>> it: 701, avg. loss: 0.486194, running train acc: 0.841
==>>> it: 701, mem avg. loss: 0.235652, running mem acc: 0.921
==>>> it: 801, avg. loss: 0.469967, running train acc: 0.845
==>>> it: 801, mem avg. loss: 0.227146, running mem acc: 0.926
==>>> it: 901, avg. loss: 0.455899, running train acc: 0.848
==>>> it: 901, mem avg. loss: 0.216403, running mem acc: 0.931
[0.034 0.0285 0.0755 0.4775 0.9285]
-----------run 11-----------avg_end_acc 0.3088-----------train time 307.037987947464
Task: 0, Labels:[5, 9]
Task: 1, Labels:[0, 2]
Task: 2, Labels:[8, 6]
Task: 3, Labels:[7, 3]
Task: 4, Labels:[4, 1]
buffer has 1000 slots
-----------run 12 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.348944, running train acc: 0.650
==>>> it: 1, mem avg. loss: 0.680394, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.692459, running train acc: 0.747
==>>> it: 101, mem avg. loss: 0.672140, running mem acc: 0.760
==>>> it: 201, avg. loss: 0.560507, running train acc: 0.790
==>>> it: 201, mem avg. loss: 0.582059, running mem acc: 0.795
==>>> it: 301, avg. loss: 0.511323, running train acc: 0.806
==>>> it: 301, mem avg. loss: 0.520654, running mem acc: 0.817
==>>> it: 401, avg. loss: 0.456204, running train acc: 0.824
==>>> it: 401, mem avg. loss: 0.460980, running mem acc: 0.839
==>>> it: 501, avg. loss: 0.423669, running train acc: 0.837
==>>> it: 501, mem avg. loss: 0.414346, running mem acc: 0.856
==>>> it: 601, avg. loss: 0.397101, running train acc: 0.846
==>>> it: 601, mem avg. loss: 0.376335, running mem acc: 0.870
==>>> it: 701, avg. loss: 0.374609, running train acc: 0.855
==>>> it: 701, mem avg. loss: 0.346399, running mem acc: 0.880
==>>> it: 801, avg. loss: 0.357977, running train acc: 0.863
==>>> it: 801, mem avg. loss: 0.318596, running mem acc: 0.890
==>>> it: 901, avg. loss: 0.343169, running train acc: 0.869
==>>> it: 901, mem avg. loss: 0.297110, running mem acc: 0.898
[0.936 0. 0. 0. 0. ]
-----------run 12 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.752298, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.469175, running mem acc: 0.667
==>>> it: 101, avg. loss: 0.851072, running train acc: 0.747
==>>> it: 101, mem avg. loss: 1.186595, running mem acc: 0.456
==>>> it: 201, avg. loss: 0.754981, running train acc: 0.750
==>>> it: 201, mem avg. loss: 1.115742, running mem acc: 0.491
==>>> it: 301, avg. loss: 0.699725, running train acc: 0.757
==>>> it: 301, mem avg. loss: 1.040910, running mem acc: 0.519
==>>> it: 401, avg. loss: 0.672386, running train acc: 0.756
==>>> it: 401, mem avg. loss: 0.967558, running mem acc: 0.552
==>>> it: 501, avg. loss: 0.658416, running train acc: 0.755
==>>> it: 501, mem avg. loss: 0.924400, running mem acc: 0.577
==>>> it: 601, avg. loss: 0.642973, running train acc: 0.757
==>>> it: 601, mem avg. loss: 0.890923, running mem acc: 0.595
==>>> it: 701, avg. loss: 0.631731, running train acc: 0.759
==>>> it: 701, mem avg. loss: 0.863383, running mem acc: 0.607
==>>> it: 801, avg. loss: 0.623078, running train acc: 0.760
==>>> it: 801, mem avg. loss: 0.848197, running mem acc: 0.614
==>>> it: 901, avg. loss: 0.615199, running train acc: 0.762
==>>> it: 901, mem avg. loss: 0.833699, running mem acc: 0.620
[0.277 0.851 0. 0. 0. ]
-----------run 12 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.216966, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.512809, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.743840, running train acc: 0.823
==>>> it: 101, mem avg. loss: 0.870339, running mem acc: 0.575
==>>> it: 201, avg. loss: 0.613060, running train acc: 0.832
==>>> it: 201, mem avg. loss: 0.811924, running mem acc: 0.606
==>>> it: 301, avg. loss: 0.557740, running train acc: 0.837
==>>> it: 301, mem avg. loss: 0.757779, running mem acc: 0.649
==>>> it: 401, avg. loss: 0.521260, running train acc: 0.843
==>>> it: 401, mem avg. loss: 0.706331, running mem acc: 0.680
==>>> it: 501, avg. loss: 0.498949, running train acc: 0.845
==>>> it: 501, mem avg. loss: 0.667426, running mem acc: 0.703
==>>> it: 601, avg. loss: 0.478181, running train acc: 0.848
==>>> it: 601, mem avg. loss: 0.629405, running mem acc: 0.724
==>>> it: 701, avg. loss: 0.465459, running train acc: 0.851
==>>> it: 701, mem avg. loss: 0.597629, running mem acc: 0.740
==>>> it: 801, avg. loss: 0.443517, running train acc: 0.855
==>>> it: 801, mem avg. loss: 0.579391, running mem acc: 0.749
==>>> it: 901, avg. loss: 0.427518, running train acc: 0.859
==>>> it: 901, mem avg. loss: 0.556941, running mem acc: 0.760
[0.115 0.2775 0.903 0. 0. ]
-----------run 12 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.523415, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.585291, running mem acc: 0.714
==>>> it: 101, avg. loss: 1.239930, running train acc: 0.547
==>>> it: 101, mem avg. loss: 0.591014, running mem acc: 0.775
==>>> it: 201, avg. loss: 1.016001, running train acc: 0.613
==>>> it: 201, mem avg. loss: 0.538756, running mem acc: 0.801
==>>> it: 301, avg. loss: 0.918445, running train acc: 0.647
==>>> it: 301, mem avg. loss: 0.497963, running mem acc: 0.822
==>>> it: 401, avg. loss: 0.850408, running train acc: 0.669
==>>> it: 401, mem avg. loss: 0.456798, running mem acc: 0.838
==>>> it: 501, avg. loss: 0.811895, running train acc: 0.683
==>>> it: 501, mem avg. loss: 0.422732, running mem acc: 0.852
==>>> it: 601, avg. loss: 0.773666, running train acc: 0.698
==>>> it: 601, mem avg. loss: 0.401390, running mem acc: 0.861
==>>> it: 701, avg. loss: 0.750184, running train acc: 0.707
==>>> it: 701, mem avg. loss: 0.383017, running mem acc: 0.870
==>>> it: 801, avg. loss: 0.731283, running train acc: 0.715
==>>> it: 801, mem avg. loss: 0.371895, running mem acc: 0.875
==>>> it: 901, avg. loss: 0.715851, running train acc: 0.721
==>>> it: 901, mem avg. loss: 0.359367, running mem acc: 0.881
[0.023 0.0695 0.425 0.828 0. ]
-----------run 12 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.962521, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.062025, running mem acc: 0.778
==>>> it: 101, avg. loss: 0.940808, running train acc: 0.770
==>>> it: 101, mem avg. loss: 0.423408, running mem acc: 0.844
==>>> it: 201, avg. loss: 0.681072, running train acc: 0.807
==>>> it: 201, mem avg. loss: 0.347360, running mem acc: 0.876
==>>> it: 301, avg. loss: 0.568798, running train acc: 0.828
==>>> it: 301, mem avg. loss: 0.309661, running mem acc: 0.892
==>>> it: 401, avg. loss: 0.527867, running train acc: 0.833
==>>> it: 401, mem avg. loss: 0.285106, running mem acc: 0.900
==>>> it: 501, avg. loss: 0.490870, running train acc: 0.843
==>>> it: 501, mem avg. loss: 0.265160, running mem acc: 0.909
==>>> it: 601, avg. loss: 0.465758, running train acc: 0.851
==>>> it: 601, mem avg. loss: 0.248678, running mem acc: 0.914
==>>> it: 701, avg. loss: 0.440768, running train acc: 0.858
==>>> it: 701, mem avg. loss: 0.232571, running mem acc: 0.921
==>>> it: 801, avg. loss: 0.426547, running train acc: 0.861
==>>> it: 801, mem avg. loss: 0.224288, running mem acc: 0.924
==>>> it: 901, avg. loss: 0.407295, running train acc: 0.867
==>>> it: 901, mem avg. loss: 0.214035, running mem acc: 0.928
[0.0105 0.05 0.1705 0.1985 0.9775]
-----------run 12-----------avg_end_acc 0.2814-----------train time 308.65648102760315
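Aggregating the per-run avg_end_acc values logged so far gives a sense of the variance across random task orderings; a numpy-only sketch (the script's own final summary, if any, may aggregate differently, e.g. with a t-interval):

import numpy as np

# avg_end_acc of runs 4-12 as logged above
accs = np.array([0.3653, 0.2861, 0.3219, 0.2716, 0.3616,
                 0.3414, 0.3018, 0.3088, 0.2814])
mean = accs.mean()
sem = accs.std(ddof=1) / np.sqrt(len(accs))
print(f"{mean:.4f} +/- {1.96 * sem:.4f}")  # ~0.3155 +/- 0.0225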
Task: 0, Labels:[3, 8]
Task: 1, Labels:[7, 6]
Task: 2, Labels:[0, 9]
Task: 3, Labels:[1, 5]
Task: 4, Labels:[4, 2]
buffer has 1000 slots
-----------run 13 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.750861, running train acc: 0.350
==>>> it: 1, mem avg. loss: 0.431890, running mem acc: 1.000
==>>> it: 101, avg. loss: 0.739449, running train acc: 0.747
==>>> it: 101, mem avg. loss: 0.609943, running mem acc: 0.790
==>>> it: 201, avg. loss: 0.610789, running train acc: 0.778
==>>> it: 201, mem avg. loss: 0.542144, running mem acc: 0.814
==>>> it: 301, avg. loss: 0.537955, running train acc: 0.798
==>>> it: 301, mem avg. loss: 0.484849, running mem acc: 0.833
==>>> it: 401, avg. loss: 0.486830, running train acc: 0.816
==>>> it: 401, mem avg. loss: 0.430631, running mem acc: 0.852
==>>> it: 501, avg. loss: 0.451317, running train acc: 0.829
==>>> it: 501, mem avg. loss: 0.387518, running mem acc: 0.867
==>>> it: 601, avg. loss: 0.421071, running train acc: 0.843
==>>> it: 601, mem avg. loss: 0.354949, running mem acc: 0.879
==>>> it: 701, avg. loss: 0.401332, running train acc: 0.849
==>>> it: 701, mem avg. loss: 0.325628, running mem acc: 0.890
==>>> it: 801, avg. loss: 0.392026, running train acc: 0.852
==>>> it: 801, mem avg. loss: 0.304755, running mem acc: 0.898
==>>> it: 901, avg. loss: 0.375382, running train acc: 0.858
==>>> it: 901, mem avg. loss: 0.283938, running mem acc: 0.905
[0.913 0. 0. 0. 0. ]
-----------run 13 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.443404, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.034101, running mem acc: 0.500
==>>> it: 101, avg. loss: 1.158585, running train acc: 0.554
==>>> it: 101, mem avg. loss: 1.450868, running mem acc: 0.401
==>>> it: 201, avg. loss: 0.974347, running train acc: 0.613
==>>> it: 201, mem avg. loss: 1.325249, running mem acc: 0.445
==>>> it: 301, avg. loss: 0.873742, running train acc: 0.656
==>>> it: 301, mem avg. loss: 1.230686, running mem acc: 0.478
==>>> it: 401, avg. loss: 0.805389, running train acc: 0.683
==>>> it: 401, mem avg. loss: 1.123485, running mem acc: 0.522
==>>> it: 501, avg. loss: 0.765629, running train acc: 0.698
==>>> it: 501, mem avg. loss: 1.057278, running mem acc: 0.558
==>>> it: 601, avg. loss: 0.729368, running train acc: 0.711
==>>> it: 601, mem avg. loss: 1.001341, running mem acc: 0.579
==>>> it: 701, avg. loss: 0.715498, running train acc: 0.715
==>>> it: 701, mem avg. loss: 0.971789, running mem acc: 0.596
==>>> it: 801, avg. loss: 0.690623, running train acc: 0.725
==>>> it: 801, mem avg. loss: 0.935446, running mem acc: 0.610
==>>> it: 901, avg. loss: 0.677303, running train acc: 0.731
==>>> it: 901, mem avg. loss: 0.917640, running mem acc: 0.616
[0.327 0.8835 0. 0. 0. ]
-----------run 13 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.090726, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.041477, running mem acc: 0.400
==>>> it: 101, avg. loss: 1.182323, running train acc: 0.594
==>>> it: 101, mem avg. loss: 1.025209, running mem acc: 0.585
==>>> it: 201, avg. loss: 1.004095, running train acc: 0.634
==>>> it: 201, mem avg. loss: 0.969019, running mem acc: 0.631
==>>> it: 301, avg. loss: 0.902612, running train acc: 0.666
==>>> it: 301, mem avg. loss: 0.894778, running mem acc: 0.657
==>>> it: 401, avg. loss: 0.836921, running train acc: 0.688
==>>> it: 401, mem avg. loss: 0.827979, running mem acc: 0.684
==>>> it: 501, avg. loss: 0.794586, running train acc: 0.701
==>>> it: 501, mem avg. loss: 0.790106, running mem acc: 0.700
==>>> it: 601, avg. loss: 0.764674, running train acc: 0.712
==>>> it: 601, mem avg. loss: 0.755597, running mem acc: 0.716
==>>> it: 701, avg. loss: 0.743007, running train acc: 0.720
==>>> it: 701, mem avg. loss: 0.729755, running mem acc: 0.728
==>>> it: 801, avg. loss: 0.725383, running train acc: 0.726
==>>> it: 801, mem avg. loss: 0.709008, running mem acc: 0.735
==>>> it: 901, avg. loss: 0.700898, running train acc: 0.734
==>>> it: 901, mem avg. loss: 0.691541, running mem acc: 0.740
[0.0175 0.4575 0.87 0. 0. ]
-----------run 13 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.550854, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.415663, running mem acc: 0.714
==>>> it: 101, avg. loss: 0.826499, running train acc: 0.788
==>>> it: 101, mem avg. loss: 0.598007, running mem acc: 0.751
==>>> it: 201, avg. loss: 0.606850, running train acc: 0.829
==>>> it: 201, mem avg. loss: 0.522355, running mem acc: 0.779
==>>> it: 301, avg. loss: 0.553897, running train acc: 0.837
==>>> it: 301, mem avg. loss: 0.474574, running mem acc: 0.804
==>>> it: 401, avg. loss: 0.529469, running train acc: 0.838
==>>> it: 401, mem avg. loss: 0.436922, running mem acc: 0.820
==>>> it: 501, avg. loss: 0.500515, running train acc: 0.845
==>>> it: 501, mem avg. loss: 0.407717, running mem acc: 0.835
==>>> it: 601, avg. loss: 0.479768, running train acc: 0.849
==>>> it: 601, mem avg. loss: 0.390505, running mem acc: 0.845
==>>> it: 701, avg. loss: 0.455826, running train acc: 0.853
==>>> it: 701, mem avg. loss: 0.370436, running mem acc: 0.855
==>>> it: 801, avg. loss: 0.437787, running train acc: 0.858
==>>> it: 801, mem avg. loss: 0.358248, running mem acc: 0.860
==>>> it: 901, avg. loss: 0.421803, running train acc: 0.862
==>>> it: 901, mem avg. loss: 0.349643, running mem acc: 0.865
[0.016 0.189 0.4305 0.9475 0. ]
-----------run 13 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.688954, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.820000, running mem acc: 0.889
==>>> it: 101, avg. loss: 1.419300, running train acc: 0.478
==>>> it: 101, mem avg. loss: 0.561497, running mem acc: 0.787
==>>> it: 201, avg. loss: 1.201159, running train acc: 0.527
==>>> it: 201, mem avg. loss: 0.509615, running mem acc: 0.805
==>>> it: 301, avg. loss: 1.081072, running train acc: 0.566
==>>> it: 301, mem avg. loss: 0.466971, running mem acc: 0.826
==>>> it: 401, avg. loss: 1.021584, running train acc: 0.580
==>>> it: 401, mem avg. loss: 0.436012, running mem acc: 0.838
==>>> it: 501, avg. loss: 0.967649, running train acc: 0.600
==>>> it: 501, mem avg. loss: 0.409201, running mem acc: 0.851
==>>> it: 601, avg. loss: 0.924480, running train acc: 0.615
==>>> it: 601, mem avg. loss: 0.384809, running mem acc: 0.861
==>>> it: 701, avg. loss: 0.899140, running train acc: 0.627
==>>> it: 701, mem avg. loss: 0.364701, running mem acc: 0.871
==>>> it: 801, avg. loss: 0.873429, running train acc: 0.638
==>>> it: 801, mem avg. loss: 0.350579, running mem acc: 0.877
==>>> it: 901, avg. loss: 0.845125, running train acc: 0.648
==>>> it: 901, mem avg. loss: 0.332589, running mem acc: 0.886
[0.019 0.024 0.3035 0.369 0.7865]
-----------run 13-----------avg_end_acc 0.3004-----------train time 312.03840017318726
Task: 0, Labels:[1, 7]
Task: 1, Labels:[8, 0]
Task: 2, Labels:[2, 4]
Task: 3, Labels:[6, 5]
Task: 4, Labels:[3, 9]
buffer has 1000 slots
-----------run 14 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.512703, running train acc: 0.350
==>>> it: 1, mem avg. loss: 0.562815, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.846636, running train acc: 0.655
==>>> it: 101, mem avg. loss: 0.800087, running mem acc: 0.663
==>>> it: 201, avg. loss: 0.662081, running train acc: 0.721
==>>> it: 201, mem avg. loss: 0.714853, running mem acc: 0.700
==>>> it: 301, avg. loss: 0.566529, running train acc: 0.768
==>>> it: 301, mem avg. loss: 0.637915, running mem acc: 0.733
==>>> it: 401, avg. loss: 0.514302, running train acc: 0.792
==>>> it: 401, mem avg. loss: 0.568629, running mem acc: 0.764
==>>> it: 501, avg. loss: 0.464172, running train acc: 0.815
==>>> it: 501, mem avg. loss: 0.513042, running mem acc: 0.787
==>>> it: 601, avg. loss: 0.424738, running train acc: 0.830
==>>> it: 601, mem avg. loss: 0.469511, running mem acc: 0.805
==>>> it: 701, avg. loss: 0.398065, running train acc: 0.840
==>>> it: 701, mem avg. loss: 0.434348, running mem acc: 0.820
==>>> it: 801, avg. loss: 0.379056, running train acc: 0.848
==>>> it: 801, mem avg. loss: 0.405874, running mem acc: 0.831
==>>> it: 901, avg. loss: 0.359387, running train acc: 0.857
==>>> it: 901, mem avg. loss: 0.377015, running mem acc: 0.843
[0.9425 0. 0. 0. 0. ]
-----------run 14 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.507403, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.294909, running mem acc: 0.667
==>>> it: 101, avg. loss: 1.121767, running train acc: 0.547
==>>> it: 101, mem avg. loss: 1.200162, running mem acc: 0.441
==>>> it: 201, avg. loss: 0.968840, running train acc: 0.578
==>>> it: 201, mem avg. loss: 1.133657, running mem acc: 0.494
==>>> it: 301, avg. loss: 0.896741, running train acc: 0.608
==>>> it: 301, mem avg. loss: 1.106949, running mem acc: 0.518
==>>> it: 401, avg. loss: 0.852361, running train acc: 0.620
==>>> it: 401, mem avg. loss: 1.085169, running mem acc: 0.531
==>>> it: 501, avg. loss: 0.828993, running train acc: 0.630
==>>> it: 501, mem avg. loss: 1.081623, running mem acc: 0.530
==>>> it: 601, avg. loss: 0.805710, running train acc: 0.643
==>>> it: 601, mem avg. loss: 1.051765, running mem acc: 0.539
==>>> it: 701, avg. loss: 0.787742, running train acc: 0.654
==>>> it: 701, mem avg. loss: 1.018466, running mem acc: 0.552
==>>> it: 801, avg. loss: 0.766264, running train acc: 0.663
==>>> it: 801, mem avg. loss: 0.996622, running mem acc: 0.558
==>>> it: 901, avg. loss: 0.745786, running train acc: 0.673
==>>> it: 901, mem avg. loss: 0.983889, running mem acc: 0.560
[0.627 0.817 0. 0. 0. ]
-----------run 14 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.422308, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.814196, running mem acc: 0.600
==>>> it: 101, avg. loss: 1.253447, running train acc: 0.504
==>>> it: 101, mem avg. loss: 1.088951, running mem acc: 0.559
==>>> it: 201, avg. loss: 1.094717, running train acc: 0.544
==>>> it: 201, mem avg. loss: 1.010640, running mem acc: 0.606
==>>> it: 301, avg. loss: 1.018698, running train acc: 0.570
==>>> it: 301, mem avg. loss: 0.950608, running mem acc: 0.630
==>>> it: 401, avg. loss: 0.972673, running train acc: 0.590
==>>> it: 401, mem avg. loss: 0.894433, running mem acc: 0.648
==>>> it: 501, avg. loss: 0.930481, running train acc: 0.609
==>>> it: 501, mem avg. loss: 0.841562, running mem acc: 0.673
==>>> it: 601, avg. loss: 0.903906, running train acc: 0.621
==>>> it: 601, mem avg. loss: 0.811761, running mem acc: 0.687
==>>> it: 701, avg. loss: 0.885922, running train acc: 0.629
==>>> it: 701, mem avg. loss: 0.783466, running mem acc: 0.698
==>>> it: 801, avg. loss: 0.858678, running train acc: 0.640
==>>> it: 801, mem avg. loss: 0.760061, running mem acc: 0.706
==>>> it: 901, avg. loss: 0.849834, running train acc: 0.642
==>>> it: 901, mem avg. loss: 0.736965, running mem acc: 0.714
[0.2665 0.455 0.7575 0. 0. ]
-----------run 14 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.134953, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.219522, running mem acc: 0.714
==>>> it: 101, avg. loss: 1.327656, running train acc: 0.593
==>>> it: 101, mem avg. loss: 0.782426, running mem acc: 0.710
==>>> it: 201, avg. loss: 1.022446, running train acc: 0.657
==>>> it: 201, mem avg. loss: 0.716887, running mem acc: 0.728
==>>> it: 301, avg. loss: 0.919589, running train acc: 0.683
==>>> it: 301, mem avg. loss: 0.656223, running mem acc: 0.750
==>>> it: 401, avg. loss: 0.841774, running train acc: 0.705
==>>> it: 401, mem avg. loss: 0.616512, running mem acc: 0.768
==>>> it: 501, avg. loss: 0.787340, running train acc: 0.720
==>>> it: 501, mem avg. loss: 0.566825, running mem acc: 0.789
==>>> it: 601, avg. loss: 0.737615, running train acc: 0.735
==>>> it: 601, mem avg. loss: 0.526494, running mem acc: 0.805
==>>> it: 701, avg. loss: 0.701657, running train acc: 0.747
==>>> it: 701, mem avg. loss: 0.494321, running mem acc: 0.818
==>>> it: 801, avg. loss: 0.667550, running train acc: 0.758
==>>> it: 801, mem avg. loss: 0.468675, running mem acc: 0.828
==>>> it: 901, avg. loss: 0.641319, running train acc: 0.767
==>>> it: 901, mem avg. loss: 0.452259, running mem acc: 0.837
[0.1055 0.493 0.1855 0.8795 0. ]
-----------run 14 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.816348, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.964861, running mem acc: 0.889
==>>> it: 101, avg. loss: 0.940208, running train acc: 0.737
==>>> it: 101, mem avg. loss: 0.422950, running mem acc: 0.847
==>>> it: 201, avg. loss: 0.676781, running train acc: 0.794
==>>> it: 201, mem avg. loss: 0.352098, running mem acc: 0.874
==>>> it: 301, avg. loss: 0.586468, running train acc: 0.812
==>>> it: 301, mem avg. loss: 0.308675, running mem acc: 0.892
==>>> it: 401, avg. loss: 0.541168, running train acc: 0.823
==>>> it: 401, mem avg. loss: 0.281516, running mem acc: 0.903
==>>> it: 501, avg. loss: 0.500649, running train acc: 0.834
==>>> it: 501, mem avg. loss: 0.259859, running mem acc: 0.911
==>>> it: 601, avg. loss: 0.470740, running train acc: 0.842
==>>> it: 601, mem avg. loss: 0.238673, running mem acc: 0.920
==>>> it: 701, avg. loss: 0.444696, running train acc: 0.850
==>>> it: 701, mem avg. loss: 0.224022, running mem acc: 0.926
==>>> it: 801, avg. loss: 0.420843, running train acc: 0.857
==>>> it: 801, mem avg. loss: 0.209291, running mem acc: 0.933
==>>> it: 901, avg. loss: 0.408142, running train acc: 0.861
==>>> it: 901, mem avg. loss: 0.196891, running mem acc: 0.938
[0.017 0.0755 0.034 0.339 0.8965]
-----------run 14-----------avg_end_acc 0.27240000000000003-----------train time 306.1260817050934
----------- Total 15 run: 4629.893672704697s -----------
----------- Avg_End_Acc (0.3120866666666667, 0.016629206602027557) Avg_End_Fgt (0.57018, 0.02691643774999124) Avg_Acc (0.5159273333333334, 0.015166143225367162) Avg_Bwtp (-0.6800799999999999, 0.0388831489521253) Avg_Fwt (0.0, 0.0)----------

About the results on SCR

Hi, I ran this command

python general_main.py --data cifar100 --cl_type nc --agent SCR --retrieve random --update random --mem_size 1000 --head mlp --temp 0.07

and got this result

Avg_End_Acc (0.146, nan) Avg_End_Fgt (0.0911, nan) Avg_Acc (0.19771325396825395, nan) Avg_Bwtp (0.0, nan) Avg_Fwt (0.0, nan)

I also used --temp 0.1 as in the original paper, and got

Avg_End_Acc (0.14909999999999998, nan) Avg_End_Fgt (0.09540000000000001, nan) Avg_Acc (0.22003507936507938, nan) Avg_Bwtp (0.0, nan) Avg_Fwt (0.0, nan)

Both are lower than the performance reported in the paper (26.6 ± 0.5).
Are there any other tricks you used to improve the performance, or did I use the wrong hyperparameters?
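
For context, SCR's training objective is the supervised contrastive (SupCon) loss of Khosla et al. (2020), and --temp is its temperature. Below is a minimal PyTorch sketch of that loss; it is illustrative only, not the repo's actual module:

import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temp=0.07):
    # Supervised contrastive (SupCon) loss, Khosla et al. 2020.
    # temp is the temperature that SCR's --temp flag controls.
    z = F.normalize(features, dim=1)                    # unit-norm embeddings
    logits = z @ z.T / temp                             # scaled cosine similarities
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # numeric stability
    n = labels.shape[0]
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()  # same-class pairs, no self
    exp_logits = logits.exp() * (~eye).float()          # drop self-similarity from denominator
    log_prob = logits - exp_logits.sum(1, keepdim=True).log()
    pos_cnt = pos.sum(1).clamp(min=1)                   # lone anchors contribute zero loss
    return -((log_prob * pos).sum(1) / pos_cnt).mean()

Lower temperatures sharpen the pairwise similarity distribution, so results can be sensitive to this flag. Note also that, as described in the SCR paper, classification at test time uses a nearest-class-mean classifier over memory embeddings, so the evaluation head can matter as much as the temperature when reproducing the reported numbers.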

Code details

Thanks for your nice work. I have a small question:

In experiments/run.py there are two similar functions: multiple_run_tune(default_params, tune_params, save_path) and multiple_run_tune_separate(default_params, tune_params, save_path).
As I understand it, both first tune the model hyperparameters on $D_{cv}$ and then run online continual learning on $D_{ev}$.
Could you tell me what the difference between these two functions is, and which one you used to obtain the results in Table 7 and Table 9 of the paper?

Thanks.
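
For orientation, the protocol the question describes, tuning on $D_{cv}$ and then running online continual learning once on $D_{ev}$, looks roughly like the sketch below. The helper names (param_grid, run_cl) are hypothetical stand-ins, not the repo's API:

import itertools

def param_grid(tune_params):
    # Yield every combination from a dict mapping name -> list of candidates.
    keys = list(tune_params)
    for values in itertools.product(*(tune_params[k] for k in keys)):
        yield dict(zip(keys, values))

def tune_then_evaluate(default_params, tune_params, run_cl, d_cv, d_ev):
    # Hypothetical: run_cl(config, stream) performs one online CL pass and
    # returns the average end accuracy on that stream.
    best_cfg, best_acc = None, float('-inf')
    for overrides in param_grid(tune_params):
        cfg = {**default_params, **overrides}
        acc = run_cl(cfg, d_cv)        # tune on the validation stream D_cv
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return run_cl(best_cfg, d_ev)      # single final run on the evaluation stream D_ev

The difference between the two repo functions is presumably in how this loop is organized (for example, one shared hyperparameter search versus a separate search per run), which is exactly what the question asks the authors to confirm.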

Question about the results in the ASER paper

Hi, I ran the code on CIFAR100 with a 2k memory buffer, but I cannot reach the performance reported in the ASER paper. Could you give me some help? My experiment log appears after the sketch below.
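
For reference, the scoring primitive behind --update ASER / --retrieve ASER is the closed-form KNN Shapley value of Jia et al. (2019). A minimal NumPy sketch (illustrative, not the repo's implementation):

import numpy as np

def knn_shapley(X_train, y_train, x_eval, y_eval, K=3):
    # Closed-form Shapley value of each training point for one evaluation
    # point under a KNN utility (Jia et al., 2019).
    N = len(X_train)
    order = np.argsort(((X_train - x_eval) ** 2).sum(axis=1))  # nearest first
    match = (y_train[order] == y_eval).astype(float)           # 1 if labels agree
    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N                                # farthest point
    for i in range(N - 2, -1, -1):                             # recurse inward
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
    sv = np.empty(N)
    sv[order] = s                                              # undo the distance sort
    return sv

Roughly speaking, ASER scores a memory candidate by combining its Shapley value computed with the current input batch as evaluation points and its value computed with memory samples as evaluation points; --k is the K above, and the asvm type refers to the mean-based adversarial-SV variant. This is only a sketch of the idea, not the paper's full procedure.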

(online-learning) abc@GPU-20221:/data2/abc/PycharmProjects/online-continual-learning-main$ CUDA_VISIBLE_DEVICES=1 python general_main.py --data cifar100 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 2000 --aser_type asvm --n_smp_cls 1.5 --k 3 --num_task 10
Namespace(agent='ER', alpha=0.9, aser_type='asvm', batch=10, cl_type='nc', classifier_chill=0.01, clip=10.0, cuda=True, cumulative_delta=False, data='cifar100', epoch=1, eps_mem_batch=10, error_analysis=False, fisher_update_after=50, fix_order=False, gss_batch_size=10, gss_mem_strength=10, k=3, kd_trick=False, kd_trick_star=False, labels_trick=False, lambda_=100, learning_rate=0.1, log_alpha=-300, mem_epoch=70, mem_iters=1, mem_size=2000, min_delta=0.0, minlr=0.0005, n_smp_cls=1.5, nmc_trick=False, ns_factor=(0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6), ns_task=(1, 1, 2, 2, 2, 2), ns_type='noise', num_runs=15, num_runs_val=3, num_tasks=10, num_val=3, optimizer='SGD', patience=0, plot_sample=False, retrieve='ASER', review_trick=False, seed=0, separated_softmax=False, stm_capacity=1000, subsample=50, test_batch=128, update='ASER', val_size=0.1, verbose=True, weight_decay=0)
Setting up data stream
Files already downloaded and verified
Files already downloaded and verified
data setup time: 3.9977123737335205
Task: 0, Labels:[26, 86, 2, 55, 75, 93, 16, 73, 54, 95]
Task: 1, Labels:[53, 92, 78, 13, 7, 30, 22, 24, 33, 8]
Task: 2, Labels:[43, 62, 3, 71, 45, 48, 6, 99, 82, 76]
Task: 3, Labels:[60, 80, 90, 68, 51, 27, 18, 56, 63, 74]
Task: 4, Labels:[1, 61, 42, 41, 4, 15, 17, 40, 38, 5]
Task: 5, Labels:[91, 59, 0, 34, 28, 50, 11, 35, 23, 52]
Task: 6, Labels:[10, 31, 66, 57, 79, 85, 32, 84, 14, 89]
Task: 7, Labels:[19, 29, 49, 97, 98, 69, 20, 94, 72, 77]
Task: 8, Labels:[25, 37, 81, 46, 39, 65, 58, 12, 88, 70]
Task: 9, Labels:[87, 36, 21, 83, 9, 96, 67, 64, 47, 44]
buffer has 2000 slots
-----------run 0 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.366265, running train acc: 0.050
==>>> it: 1, mem avg. loss: 3.453408, running mem acc: 0.200
==>>> it: 101, avg. loss: 2.514554, running train acc: 0.195
==>>> it: 101, mem avg. loss: 2.400514, running mem acc: 0.215
==>>> it: 201, avg. loss: 2.281995, running train acc: 0.223
==>>> it: 201, mem avg. loss: 2.187224, running mem acc: 0.253
==>>> it: 301, avg. loss: 2.140274, running train acc: 0.262
==>>> it: 301, mem avg. loss: 2.015976, running mem acc: 0.299
==>>> it: 401, avg. loss: 2.039123, running train acc: 0.295
==>>> it: 401, mem avg. loss: 1.912398, running mem acc: 0.337
[0.476 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.223926, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.334138, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.866541, running train acc: 0.217
==>>> it: 101, mem avg. loss: 2.321005, running mem acc: 0.332
==>>> it: 201, avg. loss: 2.437139, running train acc: 0.298
==>>> it: 201, mem avg. loss: 2.147823, running mem acc: 0.369
==>>> it: 301, avg. loss: 2.253343, running train acc: 0.331
==>>> it: 301, mem avg. loss: 1.940424, running mem acc: 0.429
==>>> it: 401, avg. loss: 2.129498, running train acc: 0.354
==>>> it: 401, mem avg. loss: 1.778431, running mem acc: 0.474
[0.207 0.427 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.128556, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.828860, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.801638, running train acc: 0.272
==>>> it: 101, mem avg. loss: 1.474055, running mem acc: 0.603
==>>> it: 201, avg. loss: 2.367040, running train acc: 0.331
==>>> it: 201, mem avg. loss: 1.303505, running mem acc: 0.653
==>>> it: 301, avg. loss: 2.118351, running train acc: 0.380
==>>> it: 301, mem avg. loss: 1.159649, running mem acc: 0.687
==>>> it: 401, avg. loss: 1.966332, running train acc: 0.412
==>>> it: 401, mem avg. loss: 1.098953, running mem acc: 0.700
[0.079 0.113 0.565 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.798961, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.499136, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.786529, running train acc: 0.243
==>>> it: 101, mem avg. loss: 1.221853, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.369076, running train acc: 0.303
==>>> it: 201, mem avg. loss: 1.155729, running mem acc: 0.677
==>>> it: 301, avg. loss: 2.178312, running train acc: 0.342
==>>> it: 301, mem avg. loss: 1.091359, running mem acc: 0.693
==>>> it: 401, avg. loss: 2.085022, running train acc: 0.360
==>>> it: 401, mem avg. loss: 1.013904, running mem acc: 0.718
[0.079 0.109 0.333 0.467 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.315978, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.365603, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.727537, running train acc: 0.293
==>>> it: 101, mem avg. loss: 1.230186, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.354434, running train acc: 0.345
==>>> it: 201, mem avg. loss: 1.134729, running mem acc: 0.691
==>>> it: 301, avg. loss: 2.175920, running train acc: 0.373
==>>> it: 301, mem avg. loss: 1.034543, running mem acc: 0.721
==>>> it: 401, avg. loss: 2.080158, running train acc: 0.396
==>>> it: 401, mem avg. loss: 0.939699, running mem acc: 0.750
[0.057 0.082 0.218 0.217 0.536 0. 0. 0. 0. 0. ]
-----------run 0 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.290586, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.617102, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.571284, running train acc: 0.316
==>>> it: 101, mem avg. loss: 1.096890, running mem acc: 0.725
==>>> it: 201, avg. loss: 2.157435, running train acc: 0.374
==>>> it: 201, mem avg. loss: 0.991963, running mem acc: 0.735
==>>> it: 301, avg. loss: 1.994095, running train acc: 0.399
==>>> it: 301, mem avg. loss: 0.872221, running mem acc: 0.764
==>>> it: 401, avg. loss: 1.863082, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.779988, running mem acc: 0.786
[0.041 0.079 0.132 0.164 0.207 0.544 0. 0. 0. 0. ]
-----------run 0 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.317054, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.155062, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.697141, running train acc: 0.269
==>>> it: 101, mem avg. loss: 1.024942, running mem acc: 0.739
==>>> it: 201, avg. loss: 2.293812, running train acc: 0.341
==>>> it: 201, mem avg. loss: 0.841099, running mem acc: 0.790
==>>> it: 301, avg. loss: 2.146386, running train acc: 0.366
==>>> it: 301, mem avg. loss: 0.758618, running mem acc: 0.803
==>>> it: 401, avg. loss: 2.050880, running train acc: 0.386
==>>> it: 401, mem avg. loss: 0.708850, running mem acc: 0.813
[0.03 0.018 0.191 0.151 0.145 0.238 0.492 0. 0. 0. ]
-----------run 0 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.596883, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.763713, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.489205, running train acc: 0.364
==>>> it: 101, mem avg. loss: 1.103100, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.035138, running train acc: 0.429
==>>> it: 201, mem avg. loss: 0.894908, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.873820, running train acc: 0.459
==>>> it: 301, mem avg. loss: 0.779344, running mem acc: 0.791
==>>> it: 401, avg. loss: 1.769029, running train acc: 0.482
==>>> it: 401, mem avg. loss: 0.692773, running mem acc: 0.812
[0.044 0.051 0.202 0.155 0.157 0.207 0.124 0.584 0. 0. ]
-----------run 0 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.519897, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.652946, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.576428, running train acc: 0.323
==>>> it: 101, mem avg. loss: 0.889941, running mem acc: 0.788
==>>> it: 201, avg. loss: 2.174902, running train acc: 0.380
==>>> it: 201, mem avg. loss: 0.717063, running mem acc: 0.821
==>>> it: 301, avg. loss: 2.005695, running train acc: 0.413
==>>> it: 301, mem avg. loss: 0.620423, running mem acc: 0.845
==>>> it: 401, avg. loss: 1.910732, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.574640, running mem acc: 0.855
[0.042 0.065 0.154 0.146 0.111 0.153 0.065 0.199 0.531 0. ]
-----------run 0 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.985172, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.364957, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.401293, running train acc: 0.378
==>>> it: 101, mem avg. loss: 0.856421, running mem acc: 0.783
==>>> it: 201, avg. loss: 1.966463, running train acc: 0.440
==>>> it: 201, mem avg. loss: 0.697410, running mem acc: 0.815
==>>> it: 301, avg. loss: 1.813318, running train acc: 0.469
==>>> it: 301, mem avg. loss: 0.616911, running mem acc: 0.837
==>>> it: 401, avg. loss: 1.694467, running train acc: 0.501
==>>> it: 401, mem avg. loss: 0.561811, running mem acc: 0.851
[0.032 0.029 0.163 0.168 0.079 0.058 0.079 0.171 0.11 0.615]
-----------run 0-----------avg_end_acc 0.1504-----------train time 2568.949486732483
Task: 0, Labels:[86, 42, 56, 60, 98, 53, 37, 30, 25, 88]
Task: 1, Labels:[14, 89, 67, 63, 72, 29, 24, 19, 2, 27]
Task: 2, Labels:[6, 1, 54, 3, 10, 9, 13, 52, 79, 35]
Task: 3, Labels:[57, 81, 70, 99, 15, 33, 41, 28, 62, 96]
Task: 4, Labels:[50, 32, 74, 69, 93, 22, 92, 20, 49, 94]
Task: 5, Labels:[40, 21, 55, 4, 77, 82, 51, 84, 44, 78]
Task: 6, Labels:[31, 47, 17, 16, 7, 43, 5, 75, 59, 87]
Task: 7, Labels:[8, 90, 64, 0, 85, 97, 61, 73, 23, 83]
Task: 8, Labels:[68, 76, 18, 26, 39, 11, 71, 45, 91, 34]
Task: 9, Labels:[80, 38, 58, 66, 65, 36, 48, 95, 12, 46]
buffer has 2000 slots
-----------run 1 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.808821, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.941404, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.296983, running train acc: 0.275
==>>> it: 101, mem avg. loss: 2.115360, running mem acc: 0.337
==>>> it: 201, avg. loss: 2.032232, running train acc: 0.334
==>>> it: 201, mem avg. loss: 1.877936, running mem acc: 0.377
==>>> it: 301, avg. loss: 1.886132, running train acc: 0.367
==>>> it: 301, mem avg. loss: 1.657498, running mem acc: 0.440
==>>> it: 401, avg. loss: 1.800445, running train acc: 0.394
==>>> it: 401, mem avg. loss: 1.520751, running mem acc: 0.479
[0.547 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.904166, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.290099, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.935921, running train acc: 0.153
==>>> it: 101, mem avg. loss: 1.868124, running mem acc: 0.464
==>>> it: 201, avg. loss: 2.576270, running train acc: 0.221
==>>> it: 201, mem avg. loss: 1.706802, running mem acc: 0.502
==>>> it: 301, avg. loss: 2.415551, running train acc: 0.254
==>>> it: 301, mem avg. loss: 1.590665, running mem acc: 0.529
==>>> it: 401, avg. loss: 2.307406, running train acc: 0.277
==>>> it: 401, mem avg. loss: 1.494794, running mem acc: 0.556
[0.292 0.402 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.988120, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.893860, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.747594, running train acc: 0.275
==>>> it: 101, mem avg. loss: 1.576998, running mem acc: 0.598
==>>> it: 201, avg. loss: 2.305754, running train acc: 0.337
==>>> it: 201, mem avg. loss: 1.406595, running mem acc: 0.624
==>>> it: 301, avg. loss: 2.087591, running train acc: 0.382
==>>> it: 301, mem avg. loss: 1.305067, running mem acc: 0.644
==>>> it: 401, avg. loss: 1.977011, running train acc: 0.409
==>>> it: 401, mem avg. loss: 1.217746, running mem acc: 0.665
[0.177 0.053 0.56 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.944031, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.231176, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.743852, running train acc: 0.279
==>>> it: 101, mem avg. loss: 1.247328, running mem acc: 0.674
==>>> it: 201, avg. loss: 2.297974, running train acc: 0.348
==>>> it: 201, mem avg. loss: 1.100986, running mem acc: 0.701
==>>> it: 301, avg. loss: 2.126419, running train acc: 0.376
==>>> it: 301, mem avg. loss: 1.042671, running mem acc: 0.715
==>>> it: 401, avg. loss: 2.013182, running train acc: 0.393
==>>> it: 401, mem avg. loss: 0.983884, running mem acc: 0.730
[0.157 0.048 0.226 0.493 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.162704, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.110199, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.575309, running train acc: 0.315
==>>> it: 101, mem avg. loss: 1.166070, running mem acc: 0.698
==>>> it: 201, avg. loss: 2.179827, running train acc: 0.380
==>>> it: 201, mem avg. loss: 1.032305, running mem acc: 0.716
==>>> it: 301, avg. loss: 2.005400, running train acc: 0.414
==>>> it: 301, mem avg. loss: 0.920114, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.882150, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.835061, running mem acc: 0.761
[0.096 0.039 0.14 0.238 0.532 0. 0. 0. 0. 0. ]
-----------run 1 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.208174, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.684143, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.826087, running train acc: 0.217
==>>> it: 101, mem avg. loss: 1.157389, running mem acc: 0.696
==>>> it: 201, avg. loss: 2.442925, running train acc: 0.280
==>>> it: 201, mem avg. loss: 1.004365, running mem acc: 0.730
==>>> it: 301, avg. loss: 2.257702, running train acc: 0.317
==>>> it: 301, mem avg. loss: 0.908919, running mem acc: 0.752
==>>> it: 401, avg. loss: 2.174214, running train acc: 0.332
==>>> it: 401, mem avg. loss: 0.841674, running mem acc: 0.768
[0.109 0.019 0.101 0.242 0.233 0.439 0. 0. 0. 0. ]
-----------run 1 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.298544, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.311026, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.551284, running train acc: 0.325
==>>> it: 101, mem avg. loss: 1.089027, running mem acc: 0.709
==>>> it: 201, avg. loss: 2.086483, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.950434, running mem acc: 0.736
==>>> it: 301, avg. loss: 1.887336, running train acc: 0.452
==>>> it: 301, mem avg. loss: 0.850815, running mem acc: 0.769
==>>> it: 401, avg. loss: 1.768425, running train acc: 0.476
==>>> it: 401, mem avg. loss: 0.760746, running mem acc: 0.794
[0.068 0.015 0.081 0.198 0.226 0.123 0.578 0. 0. 0. ]
-----------run 1 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.049094, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.315270, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.429609, running train acc: 0.357
==>>> it: 101, mem avg. loss: 0.977726, running mem acc: 0.749
==>>> it: 201, avg. loss: 2.046649, running train acc: 0.411
==>>> it: 201, mem avg. loss: 0.830165, running mem acc: 0.780
==>>> it: 301, avg. loss: 1.851019, running train acc: 0.453
==>>> it: 301, mem avg. loss: 0.732780, running mem acc: 0.802
==>>> it: 401, avg. loss: 1.728394, running train acc: 0.478
==>>> it: 401, mem avg. loss: 0.653093, running mem acc: 0.824
[0.05 0.011 0.097 0.164 0.172 0.102 0.251 0.636 0. 0. ]
-----------run 1 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.555409, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.844635, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.561639, running train acc: 0.332
==>>> it: 101, mem avg. loss: 0.929011, running mem acc: 0.765
==>>> it: 201, avg. loss: 2.119777, running train acc: 0.405
==>>> it: 201, mem avg. loss: 0.812653, running mem acc: 0.782
==>>> it: 301, avg. loss: 1.901092, running train acc: 0.447
==>>> it: 301, mem avg. loss: 0.720588, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.790047, running train acc: 0.471
==>>> it: 401, mem avg. loss: 0.659008, running mem acc: 0.821
[0.053 0.012 0.113 0.171 0.151 0.107 0.205 0.202 0.615 0. ]
-----------run 1 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.460724, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.234395, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.556615, running train acc: 0.310
==>>> it: 101, mem avg. loss: 0.950068, running mem acc: 0.758
==>>> it: 201, avg. loss: 2.157833, running train acc: 0.372
==>>> it: 201, mem avg. loss: 0.781524, running mem acc: 0.798
==>>> it: 301, avg. loss: 1.994952, running train acc: 0.404
==>>> it: 301, mem avg. loss: 0.695320, running mem acc: 0.817
==>>> it: 401, avg. loss: 1.914375, running train acc: 0.423
==>>> it: 401, mem avg. loss: 0.636332, running mem acc: 0.834
[0.057 0.009 0.087 0.183 0.136 0.118 0.16 0.161 0.166 0.526]
-----------run 1-----------avg_end_acc 0.1603-----------train time 2500.0991492271423
Task: 0, Labels:[95, 72, 6, 39, 62, 24, 56, 36, 75, 61]
Task: 1, Labels:[42, 53, 26, 70, 88, 17, 98, 13, 47, 5]
Task: 2, Labels:[87, 85, 59, 7, 8, 16, 83, 11, 1, 69]
Task: 3, Labels:[33, 37, 94, 28, 73, 2, 22, 49, 64, 90]
Task: 4, Labels:[21, 44, 48, 30, 34, 65, 15, 29, 67, 78]
Task: 5, Labels:[93, 31, 12, 81, 57, 68, 89, 86, 25, 9]
Task: 6, Labels:[84, 52, 80, 20, 63, 38, 50, 99, 74, 79]
Task: 7, Labels:[51, 45, 96, 60, 35, 41, 71, 14, 4, 54]
Task: 8, Labels:[0, 82, 91, 66, 23, 40, 10, 76, 55, 58]
Task: 9, Labels:[27, 32, 77, 43, 18, 92, 97, 19, 3, 46]
buffer has 2000 slots
-----------run 2 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.470682, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.411853, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.549353, running train acc: 0.207
==>>> it: 101, mem avg. loss: 2.306140, running mem acc: 0.245
==>>> it: 201, avg. loss: 2.231801, running train acc: 0.262
==>>> it: 201, mem avg. loss: 2.051633, running mem acc: 0.304
==>>> it: 301, avg. loss: 2.042731, running train acc: 0.322
==>>> it: 301, mem avg. loss: 1.848049, running mem acc: 0.364
==>>> it: 401, avg. loss: 1.921602, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.719325, running mem acc: 0.401
[0.517 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.191833, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.252552, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.789692, running train acc: 0.208
==>>> it: 101, mem avg. loss: 1.714980, running mem acc: 0.522
==>>> it: 201, avg. loss: 2.381965, running train acc: 0.275
==>>> it: 201, mem avg. loss: 1.523606, running mem acc: 0.553
==>>> it: 301, avg. loss: 2.159425, running train acc: 0.337
==>>> it: 301, mem avg. loss: 1.356072, running mem acc: 0.599
==>>> it: 401, avg. loss: 2.034362, running train acc: 0.362
==>>> it: 401, mem avg. loss: 1.206179, running mem acc: 0.640
[0.306 0.515 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.325661, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.754713, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.766322, running train acc: 0.249
==>>> it: 101, mem avg. loss: 1.391459, running mem acc: 0.620
==>>> it: 201, avg. loss: 2.354863, running train acc: 0.309
==>>> it: 201, mem avg. loss: 1.212666, running mem acc: 0.650
==>>> it: 301, avg. loss: 2.157635, running train acc: 0.354
==>>> it: 301, mem avg. loss: 1.115524, running mem acc: 0.673
==>>> it: 401, avg. loss: 2.065960, running train acc: 0.372
==>>> it: 401, mem avg. loss: 1.011921, running mem acc: 0.703
[0.262 0.165 0.51 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.009512, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.490777, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.678599, running train acc: 0.264
==>>> it: 101, mem avg. loss: 1.269854, running mem acc: 0.636
==>>> it: 201, avg. loss: 2.286281, running train acc: 0.324
==>>> it: 201, mem avg. loss: 1.150229, running mem acc: 0.665
==>>> it: 301, avg. loss: 2.112021, running train acc: 0.361
==>>> it: 301, mem avg. loss: 1.078822, running mem acc: 0.684
==>>> it: 401, avg. loss: 1.988484, running train acc: 0.392
==>>> it: 401, mem avg. loss: 1.001867, running mem acc: 0.703
[0.234 0.213 0.198 0.526 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.937668, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.499459, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.814881, running train acc: 0.225
==>>> it: 101, mem avg. loss: 1.251760, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.445513, running train acc: 0.279
==>>> it: 201, mem avg. loss: 1.103261, running mem acc: 0.697
==>>> it: 301, avg. loss: 2.249603, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.030050, running mem acc: 0.718
==>>> it: 401, avg. loss: 2.146565, running train acc: 0.347
==>>> it: 401, mem avg. loss: 0.945477, running mem acc: 0.743
[0.212 0.242 0.091 0.206 0.449 0. 0. 0. 0. 0. ]
-----------run 2 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.141557, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.166369, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.672371, running train acc: 0.304
==>>> it: 101, mem avg. loss: 1.196003, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.243378, running train acc: 0.375
==>>> it: 201, mem avg. loss: 1.073220, running mem acc: 0.708
==>>> it: 301, avg. loss: 2.050653, running train acc: 0.408
==>>> it: 301, mem avg. loss: 0.960671, running mem acc: 0.734
==>>> it: 401, avg. loss: 1.923413, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.878884, running mem acc: 0.756
[0.166 0.16 0.095 0.146 0.167 0.567 0. 0. 0. 0. ]
-----------run 2 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.476360, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.158807, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.743589, running train acc: 0.262
==>>> it: 101, mem avg. loss: 1.095355, running mem acc: 0.735
==>>> it: 201, avg. loss: 2.323596, running train acc: 0.311
==>>> it: 201, mem avg. loss: 0.939340, running mem acc: 0.758
==>>> it: 301, avg. loss: 2.192877, running train acc: 0.334
==>>> it: 301, mem avg. loss: 0.876565, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.053821, running train acc: 0.364
==>>> it: 401, mem avg. loss: 0.803992, running mem acc: 0.787
[0.142 0.205 0.086 0.132 0.116 0.245 0.458 0. 0. 0. ]
-----------run 2 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.927103, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.158655, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.554221, running train acc: 0.344
==>>> it: 101, mem avg. loss: 1.098241, running mem acc: 0.703
==>>> it: 201, avg. loss: 2.108451, running train acc: 0.404
==>>> it: 201, mem avg. loss: 0.954647, running mem acc: 0.739
==>>> it: 301, avg. loss: 1.925533, running train acc: 0.436
==>>> it: 301, mem avg. loss: 0.845131, running mem acc: 0.768
==>>> it: 401, avg. loss: 1.809468, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.764796, running mem acc: 0.790
[0.159 0.136 0.073 0.091 0.098 0.182 0.172 0.572 0. 0. ]
-----------run 2 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.628189, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.174980, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.488896, running train acc: 0.338
==>>> it: 101, mem avg. loss: 1.001205, running mem acc: 0.744
==>>> it: 201, avg. loss: 2.010815, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.847334, running mem acc: 0.773
==>>> it: 301, avg. loss: 1.820574, running train acc: 0.478
==>>> it: 301, mem avg. loss: 0.745095, running mem acc: 0.799
==>>> it: 401, avg. loss: 1.692188, running train acc: 0.510
==>>> it: 401, mem avg. loss: 0.670650, running mem acc: 0.818
[0.126 0.165 0.07 0.103 0.074 0.153 0.137 0.24 0.638 0. ]
-----------run 2 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.108333, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.707170, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.686840, running train acc: 0.268
==>>> it: 101, mem avg. loss: 0.934979, running mem acc: 0.764
==>>> it: 201, avg. loss: 2.303757, running train acc: 0.338
==>>> it: 201, mem avg. loss: 0.791730, running mem acc: 0.793
==>>> it: 301, avg. loss: 2.155229, running train acc: 0.361
==>>> it: 301, mem avg. loss: 0.708373, running mem acc: 0.814
==>>> it: 401, avg. loss: 2.063445, running train acc: 0.379
==>>> it: 401, mem avg. loss: 0.657133, running mem acc: 0.828
[0.137 0.094 0.067 0.084 0.088 0.119 0.15 0.203 0.217 0.524]
-----------run 2-----------avg_end_acc 0.1683-----------train time 2479.9554154872894
Task: 0, Labels:[44, 5, 59, 13, 83, 34, 56, 63, 75, 45]
Task: 1, Labels:[69, 94, 77, 80, 23, 62, 10, 97, 42, 84]
Task: 2, Labels:[37, 64, 20, 21, 65, 98, 76, 85, 88, 12]
Task: 3, Labels:[33, 92, 38, 22, 50, 96, 16, 28, 89, 4]
Task: 4, Labels:[72, 27, 48, 55, 90, 47, 49, 31, 67, 17]
Task: 5, Labels:[32, 99, 11, 91, 1, 6, 41, 93, 15, 86]
Task: 6, Labels:[61, 82, 51, 68, 40, 8, 57, 30, 81, 35]
Task: 7, Labels:[9, 95, 79, 39, 58, 78, 43, 73, 70, 18]
Task: 8, Labels:[46, 52, 54, 29, 26, 3, 74, 24, 14, 71]
Task: 9, Labels:[60, 19, 36, 2, 66, 25, 87, 53, 0, 7]
buffer has 2000 slots
-----------run 3 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.261781, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.832437, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.584979, running train acc: 0.197
==>>> it: 101, mem avg. loss: 2.441796, running mem acc: 0.221
==>>> it: 201, avg. loss: 2.278637, running train acc: 0.243
==>>> it: 201, mem avg. loss: 2.138299, running mem acc: 0.281
==>>> it: 301, avg. loss: 2.102452, running train acc: 0.287
==>>> it: 301, mem avg. loss: 1.939245, running mem acc: 0.334
==>>> it: 401, avg. loss: 2.005750, running train acc: 0.317
==>>> it: 401, mem avg. loss: 1.809237, running mem acc: 0.375
[0.498 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.578908, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.547303, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.901179, running train acc: 0.181
==>>> it: 101, mem avg. loss: 2.233723, running mem acc: 0.375
==>>> it: 201, avg. loss: 2.516784, running train acc: 0.253
==>>> it: 201, mem avg. loss: 2.129915, running mem acc: 0.383
==>>> it: 301, avg. loss: 2.355105, running train acc: 0.276
==>>> it: 301, mem avg. loss: 1.976324, running mem acc: 0.423
==>>> it: 401, avg. loss: 2.221411, running train acc: 0.307
==>>> it: 401, mem avg. loss: 1.810834, running mem acc: 0.471
[0.161 0.411 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.633906, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.257336, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.626944, running train acc: 0.267
==>>> it: 101, mem avg. loss: 1.318865, running mem acc: 0.645
==>>> it: 201, avg. loss: 2.234762, running train acc: 0.339
==>>> it: 201, mem avg. loss: 1.196154, running mem acc: 0.669
==>>> it: 301, avg. loss: 2.046574, running train acc: 0.376
==>>> it: 301, mem avg. loss: 1.108157, running mem acc: 0.692
==>>> it: 401, avg. loss: 1.920790, running train acc: 0.403
==>>> it: 401, mem avg. loss: 1.036049, running mem acc: 0.712
[0.131 0.191 0.518 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.100137, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.721326, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.770089, running train acc: 0.248
==>>> it: 101, mem avg. loss: 1.258925, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.381001, running train acc: 0.302
==>>> it: 201, mem avg. loss: 1.232124, running mem acc: 0.659
==>>> it: 301, avg. loss: 2.214494, running train acc: 0.325
==>>> it: 301, mem avg. loss: 1.178918, running mem acc: 0.674
==>>> it: 401, avg. loss: 2.127861, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.101001, running mem acc: 0.694
[0.055 0.089 0.182 0.465 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.997226, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.507348, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.644032, running train acc: 0.298
==>>> it: 101, mem avg. loss: 1.280746, running mem acc: 0.662
==>>> it: 201, avg. loss: 2.243546, running train acc: 0.357
==>>> it: 201, mem avg. loss: 1.182486, running mem acc: 0.674
==>>> it: 301, avg. loss: 2.105380, running train acc: 0.380
==>>> it: 301, mem avg. loss: 1.078236, running mem acc: 0.699
==>>> it: 401, avg. loss: 1.994536, running train acc: 0.405
==>>> it: 401, mem avg. loss: 0.984265, running mem acc: 0.727
[0.055 0.135 0.139 0.169 0.5 0. 0. 0. 0. 0. ]
-----------run 3 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.871581, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.176172, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.786092, running train acc: 0.236
==>>> it: 101, mem avg. loss: 1.168959, running mem acc: 0.701
==>>> it: 201, avg. loss: 2.397523, running train acc: 0.312
==>>> it: 201, mem avg. loss: 1.000313, running mem acc: 0.739
==>>> it: 301, avg. loss: 2.234745, running train acc: 0.345
==>>> it: 301, mem avg. loss: 0.872306, running mem acc: 0.770
==>>> it: 401, avg. loss: 2.122992, running train acc: 0.370
==>>> it: 401, mem avg. loss: 0.790151, running mem acc: 0.792
[0.031 0.131 0.113 0.12 0.205 0.524 0. 0. 0. 0. ]
-----------run 3 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.295253, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.346270, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.414082, running train acc: 0.381
==>>> it: 101, mem avg. loss: 1.044271, running mem acc: 0.724
==>>> it: 201, avg. loss: 1.941657, running train acc: 0.466
==>>> it: 201, mem avg. loss: 0.908932, running mem acc: 0.757
==>>> it: 301, avg. loss: 1.763143, running train acc: 0.504
==>>> it: 301, mem avg. loss: 0.810459, running mem acc: 0.779
==>>> it: 401, avg. loss: 1.641878, running train acc: 0.531
==>>> it: 401, mem avg. loss: 0.721429, running mem acc: 0.800
[0.046 0.065 0.063 0.107 0.157 0.122 0.667 0. 0. 0. ]
-----------run 3 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.655418, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.675616, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.540270, running train acc: 0.331
==>>> it: 101, mem avg. loss: 0.964820, running mem acc: 0.759
==>>> it: 201, avg. loss: 2.146472, running train acc: 0.387
==>>> it: 201, mem avg. loss: 0.837115, running mem acc: 0.781
==>>> it: 301, avg. loss: 1.965311, running train acc: 0.426
==>>> it: 301, mem avg. loss: 0.726884, running mem acc: 0.809
==>>> it: 401, avg. loss: 1.842273, running train acc: 0.452
==>>> it: 401, mem avg. loss: 0.667068, running mem acc: 0.822
[0.032 0.056 0.098 0.127 0.136 0.144 0.236 0.551 0. 0. ]
-----------run 3 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.633739, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.289289, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.348962, running train acc: 0.410
==>>> it: 101, mem avg. loss: 0.949333, running mem acc: 0.766
==>>> it: 201, avg. loss: 1.981276, running train acc: 0.457
==>>> it: 201, mem avg. loss: 0.775644, running mem acc: 0.801
==>>> it: 301, avg. loss: 1.793520, running train acc: 0.488
==>>> it: 301, mem avg. loss: 0.694071, running mem acc: 0.821
==>>> it: 401, avg. loss: 1.712958, running train acc: 0.501
==>>> it: 401, mem avg. loss: 0.622364, running mem acc: 0.839
[0.028 0.041 0.056 0.109 0.092 0.106 0.213 0.169 0.594 0. ]
-----------run 3 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.743945, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.639668, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.397058, running train acc: 0.380
==>>> it: 101, mem avg. loss: 0.836878, running mem acc: 0.780
==>>> it: 201, avg. loss: 1.899061, running train acc: 0.465
==>>> it: 201, mem avg. loss: 0.684619, running mem acc: 0.814
==>>> it: 301, avg. loss: 1.713826, running train acc: 0.505
==>>> it: 301, mem avg. loss: 0.614928, running mem acc: 0.825
==>>> it: 401, avg. loss: 1.607956, running train acc: 0.531
==>>> it: 401, mem avg. loss: 0.564728, running mem acc: 0.837
[0.041 0.039 0.041 0.088 0.114 0.098 0.165 0.125 0.189 0.615]
-----------run 3-----------avg_end_acc 0.15150000000000002-----------train time 2442.0826992988586
Task: 0, Labels:[14, 16, 10, 42, 34, 47, 61, 80, 71, 26]
Task: 1, Labels:[89, 33, 44, 12, 91, 9, 22, 83, 18, 45]
Task: 2, Labels:[5, 36, 24, 46, 98, 35, 87, 3, 48, 28]
Task: 3, Labels:[29, 8, 57, 0, 23, 41, 4, 60, 62, 69]
Task: 4, Labels:[81, 40, 52, 55, 38, 6, 53, 85, 74, 11]
Task: 5, Labels:[93, 30, 65, 56, 13, 82, 96, 37, 32, 27]
Task: 6, Labels:[88, 2, 77, 75, 21, 64, 19, 95, 1, 63]
Task: 7, Labels:[67, 68, 50, 51, 84, 59, 58, 7, 78, 31]
Task: 8, Labels:[72, 97, 54, 15, 49, 99, 86, 79, 94, 92]
Task: 9, Labels:[25, 73, 66, 76, 17, 70, 90, 43, 20, 39]
buffer has 2000 slots
-----------run 4 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.061858, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.598956, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.511449, running train acc: 0.209
==>>> it: 101, mem avg. loss: 2.397471, running mem acc: 0.246
==>>> it: 201, avg. loss: 2.280364, running train acc: 0.245
==>>> it: 201, mem avg. loss: 2.148914, running mem acc: 0.279
==>>> it: 301, avg. loss: 2.163460, running train acc: 0.270
==>>> it: 301, mem avg. loss: 2.025820, running mem acc: 0.311
==>>> it: 401, avg. loss: 2.069307, running train acc: 0.295
==>>> it: 401, mem avg. loss: 1.965708, running mem acc: 0.327
[0.425 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.782812, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.683428, running mem acc: 0.450
==>>> it: 101, avg. loss: 2.861774, running train acc: 0.209
==>>> it: 101, mem avg. loss: 2.311387, running mem acc: 0.339
==>>> it: 201, avg. loss: 2.483971, running train acc: 0.278
==>>> it: 201, mem avg. loss: 2.073559, running mem acc: 0.383
==>>> it: 301, avg. loss: 2.314230, running train acc: 0.314
==>>> it: 301, mem avg. loss: 1.924911, running mem acc: 0.424
==>>> it: 401, avg. loss: 2.185459, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.788918, running mem acc: 0.461
[0.061 0.455 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.930666, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.089005, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.977102, running train acc: 0.182
==>>> it: 101, mem avg. loss: 1.486402, running mem acc: 0.637
==>>> it: 201, avg. loss: 2.546127, running train acc: 0.246
==>>> it: 201, mem avg. loss: 1.376242, running mem acc: 0.647
==>>> it: 301, avg. loss: 2.345372, running train acc: 0.280
==>>> it: 301, mem avg. loss: 1.271719, running mem acc: 0.668
==>>> it: 401, avg. loss: 2.221823, running train acc: 0.306
==>>> it: 401, mem avg. loss: 1.191879, running mem acc: 0.688
[0.048 0.143 0.418 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.041882, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.231821, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.627415, running train acc: 0.318
==>>> it: 101, mem avg. loss: 1.229593, running mem acc: 0.703
==>>> it: 201, avg. loss: 2.188096, running train acc: 0.382
==>>> it: 201, mem avg. loss: 1.157265, running mem acc: 0.705
==>>> it: 301, avg. loss: 1.980324, running train acc: 0.424
==>>> it: 301, mem avg. loss: 1.045634, running mem acc: 0.730
==>>> it: 401, avg. loss: 1.845734, running train acc: 0.451
==>>> it: 401, mem avg. loss: 0.948088, running mem acc: 0.755
[0.075 0.09 0.212 0.572 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.720258, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.300091, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.592635, running train acc: 0.322
==>>> it: 101, mem avg. loss: 1.106748, running mem acc: 0.715
==>>> it: 201, avg. loss: 2.193321, running train acc: 0.363
==>>> it: 201, mem avg. loss: 0.989534, running mem acc: 0.740
==>>> it: 301, avg. loss: 2.022125, running train acc: 0.390
==>>> it: 301, mem avg. loss: 0.891040, running mem acc: 0.759
==>>> it: 401, avg. loss: 1.912615, running train acc: 0.413
==>>> it: 401, mem avg. loss: 0.816686, running mem acc: 0.783
[0.037 0.048 0.085 0.368 0.557 0. 0. 0. 0. 0. ]
-----------run 4 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.221529, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.492802, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.601299, running train acc: 0.314
==>>> it: 101, mem avg. loss: 1.084352, running mem acc: 0.711
==>>> it: 201, avg. loss: 2.221762, running train acc: 0.358
==>>> it: 201, mem avg. loss: 0.899692, running mem acc: 0.757
==>>> it: 301, avg. loss: 2.051465, running train acc: 0.386
==>>> it: 301, mem avg. loss: 0.799671, running mem acc: 0.783
==>>> it: 401, avg. loss: 1.962413, running train acc: 0.405
==>>> it: 401, mem avg. loss: 0.730385, running mem acc: 0.800
[0.026 0.06 0.156 0.359 0.109 0.506 0. 0. 0. 0. ]
-----------run 4 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.047173, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.934935, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.644008, running train acc: 0.280
==>>> it: 101, mem avg. loss: 1.144862, running mem acc: 0.693
==>>> it: 201, avg. loss: 2.207292, running train acc: 0.365
==>>> it: 201, mem avg. loss: 0.947099, running mem acc: 0.750
==>>> it: 301, avg. loss: 2.040977, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.847885, running mem acc: 0.773
==>>> it: 401, avg. loss: 1.946916, running train acc: 0.413
==>>> it: 401, mem avg. loss: 0.755749, running mem acc: 0.799
[0.024 0.048 0.087 0.241 0.165 0.132 0.558 0. 0. 0. ]
-----------run 4 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.870583, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.645980, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.636354, running train acc: 0.326
==>>> it: 101, mem avg. loss: 1.100048, running mem acc: 0.714
==>>> it: 201, avg. loss: 2.210996, running train acc: 0.387
==>>> it: 201, mem avg. loss: 0.888319, running mem acc: 0.767
==>>> it: 301, avg. loss: 2.013457, running train acc: 0.420
==>>> it: 301, mem avg. loss: 0.785999, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.911632, running train acc: 0.445
==>>> it: 401, mem avg. loss: 0.715387, running mem acc: 0.805
[0.044 0.047 0.071 0.224 0.132 0.129 0.132 0.546 0. 0. ]
-----------run 4 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.703582, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.689163, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.587725, running train acc: 0.313
==>>> it: 101, mem avg. loss: 1.000830, running mem acc: 0.738
==>>> it: 201, avg. loss: 2.139680, running train acc: 0.390
==>>> it: 201, mem avg. loss: 0.865142, running mem acc: 0.767
==>>> it: 301, avg. loss: 1.960235, running train acc: 0.428
==>>> it: 301, mem avg. loss: 0.775243, running mem acc: 0.783
==>>> it: 401, avg. loss: 1.876596, running train acc: 0.444
==>>> it: 401, mem avg. loss: 0.700705, running mem acc: 0.805
[0.03 0.04 0.079 0.239 0.084 0.123 0.094 0.163 0.527 0. ]
-----------run 4 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.971141, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.875949, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.409849, running train acc: 0.380
==>>> it: 101, mem avg. loss: 0.913430, running mem acc: 0.762
==>>> it: 201, avg. loss: 1.993176, running train acc: 0.445
==>>> it: 201, mem avg. loss: 0.770595, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.827065, running train acc: 0.474
==>>> it: 301, mem avg. loss: 0.683498, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.717748, running train acc: 0.497
==>>> it: 401, mem avg. loss: 0.633218, running mem acc: 0.816
[0.027 0.027 0.086 0.246 0.145 0.114 0.065 0.101 0.078 0.626]
-----------run 4-----------avg_end_acc 0.1515-----------train time 2447.1072702407837
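
For reference, the `avg_end_acc` on each run-summary line is simply the mean of the final bracketed accuracy row, i.e. the accuracy on all ten tasks after the last task has been learned. A minimal check in Python (the row below is the last one printed for run 4):

```python
import numpy as np

# Final per-task accuracies printed after run 4, training batch 9.
end_acc = np.array([0.027, 0.027, 0.086, 0.246, 0.145,
                    0.114, 0.065, 0.101, 0.078, 0.626])

print(end_acc.mean())  # 0.1515 -- matches "avg_end_acc 0.1515" above
```
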
Task: 0, Labels:[59, 27, 99, 11, 53, 51, 9, 97, 67, 8]
Task: 1, Labels:[84, 0, 6, 20, 44, 46, 91, 68, 70, 90]
Task: 2, Labels:[96, 15, 14, 85, 75, 42, 30, 81, 92, 64]
Task: 3, Labels:[55, 45, 71, 76, 36, 47, 21, 17, 24, 82]
Task: 4, Labels:[7, 69, 79, 3, 18, 25, 32, 38, 33, 63]
Task: 5, Labels:[77, 88, 52, 60, 93, 5, 66, 57, 16, 89]
Task: 6, Labels:[98, 10, 78, 35, 22, 12, 4, 43, 40, 39]
Task: 7, Labels:[37, 72, 49, 48, 54, 80, 1, 41, 2, 19]
Task: 8, Labels:[29, 74, 83, 58, 62, 26, 73, 61, 65, 86]
Task: 9, Labels:[31, 13, 56, 95, 34, 28, 50, 23, 94, 87]
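
The `Task: k, Labels:[...]` lines show that every run draws a fresh random partition of the 100 classes (Split CIFAR-100 style) into 10 disjoint tasks of 10 classes each. A minimal sketch of how such a split can be produced — the function name and seeding here are illustrative, not the repository's exact code:

```python
import numpy as np

def random_task_split(n_classes=100, n_tasks=10, seed=0):
    """Shuffle all class labels and cut them into equal-sized tasks."""
    rng = np.random.default_rng(seed)
    labels = rng.permutation(n_classes)
    return np.split(labels, n_tasks)

for k, task_labels in enumerate(random_task_split()):
    print(f"Task: {k}, Labels:{task_labels.tolist()}")
```
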
buffer has 2000 slots
-----------run 5 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.489303, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.463734, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.524730, running train acc: 0.205
==>>> it: 101, mem avg. loss: 2.312677, running mem acc: 0.259
==>>> it: 201, avg. loss: 2.208632, running train acc: 0.274
==>>> it: 201, mem avg. loss: 2.072790, running mem acc: 0.301
==>>> it: 301, avg. loss: 2.087608, running train acc: 0.298
==>>> it: 301, mem avg. loss: 1.867941, running mem acc: 0.359
==>>> it: 401, avg. loss: 1.994107, running train acc: 0.328
==>>> it: 401, mem avg. loss: 1.762580, running mem acc: 0.396
[0.466 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
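
Each bracketed row is the per-task test accuracy measured right after the current task finishes: entry j is the accuracy on task j, and tasks not yet trained on contribute 0. Stacking the ten rows of a run gives the usual lower-triangular accuracy matrix, from which end accuracy and forgetting can be read off. A small hypothetical helper illustrating that reading (not code from this repository):

```python
import numpy as np

def summarize(acc_matrix):
    """acc_matrix[i, j] = accuracy on task j measured after learning task i."""
    A = np.asarray(acc_matrix)          # shape (10, 10), lower-triangular
    avg_end_acc = A[-1].mean()
    # Forgetting: average drop from each earlier task's best accuracy
    # to its accuracy after the final task (the last task is excluded).
    forgetting = (A[:-1].max(axis=0) - A[-1])[:-1].mean()
    return avg_end_acc, forgetting
```
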
-----------run 5 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.493091, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.470254, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.786852, running train acc: 0.221
==>>> it: 101, mem avg. loss: 1.996050, running mem acc: 0.437
==>>> it: 201, avg. loss: 2.396383, running train acc: 0.281
==>>> it: 201, mem avg. loss: 1.824047, running mem acc: 0.463
==>>> it: 301, avg. loss: 2.237637, running train acc: 0.309
==>>> it: 301, mem avg. loss: 1.676074, running mem acc: 0.510
==>>> it: 401, avg. loss: 2.115021, running train acc: 0.338
==>>> it: 401, mem avg. loss: 1.528308, running mem acc: 0.550
[0.266 0.469 0. 0. 0. 0. 0. 0. 0. 0. ]
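
The `avg. loss` of roughly 10–12 at `it: 1` of every batch after the first (versus about 4.5 at the very start of a run) is expected behaviour rather than a bug: with 100-way softmax cross-entropy, a uniform prediction costs ln(100) ≈ 4.6, while a model that confidently predicts old classes on brand-new ones pays far more. A quick sanity check:

```python
import math

# Uniform prediction over 100 classes: cross-entropy = ln(100).
print(math.log(100))    # ~4.605, close to the it:1 loss at the start of a run

# A confident wrong prediction, e.g. p(correct class) = 1e-5:
print(-math.log(1e-5))  # ~11.5, the it:1 loss seen at each new task
```
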
-----------run 5 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.348887, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.783934, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.805101, running train acc: 0.232
==>>> it: 101, mem avg. loss: 1.406823, running mem acc: 0.617
==>>> it: 201, avg. loss: 2.372143, running train acc: 0.300
==>>> it: 201, mem avg. loss: 1.247796, running mem acc: 0.640
==>>> it: 301, avg. loss: 2.195835, running train acc: 0.338
==>>> it: 301, mem avg. loss: 1.141824, running mem acc: 0.663
==>>> it: 401, avg. loss: 2.063770, running train acc: 0.367
==>>> it: 401, mem avg. loss: 1.049696, running mem acc: 0.686
[0.25 0.292 0.462 0. 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.591803, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.438095, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.451628, running train acc: 0.369
==>>> it: 101, mem avg. loss: 1.195000, running mem acc: 0.674
==>>> it: 201, avg. loss: 1.949464, running train acc: 0.458
==>>> it: 201, mem avg. loss: 1.145484, running mem acc: 0.667
==>>> it: 301, avg. loss: 1.804470, running train acc: 0.489
==>>> it: 301, mem avg. loss: 1.060942, running mem acc: 0.685
==>>> it: 401, avg. loss: 1.683415, running train acc: 0.514
==>>> it: 401, mem avg. loss: 0.957913, running mem acc: 0.718
[0.188 0.216 0.102 0.652 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.093219, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.485474, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.862644, running train acc: 0.223
==>>> it: 101, mem avg. loss: 1.108386, running mem acc: 0.717
==>>> it: 201, avg. loss: 2.441749, running train acc: 0.289
==>>> it: 201, mem avg. loss: 0.988327, running mem acc: 0.737
==>>> it: 301, avg. loss: 2.287093, running train acc: 0.314
==>>> it: 301, mem avg. loss: 0.911507, running mem acc: 0.753
==>>> it: 401, avg. loss: 2.180297, running train acc: 0.334
==>>> it: 401, mem avg. loss: 0.857030, running mem acc: 0.768
[0.163 0.165 0.116 0.272 0.471 0. 0. 0. 0. 0. ]
-----------run 5 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.643694, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.493902, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.552498, running train acc: 0.341
==>>> it: 101, mem avg. loss: 1.209915, running mem acc: 0.666
==>>> it: 201, avg. loss: 2.123344, running train acc: 0.400
==>>> it: 201, mem avg. loss: 1.060327, running mem acc: 0.705
==>>> it: 301, avg. loss: 1.927784, running train acc: 0.443
==>>> it: 301, mem avg. loss: 0.975367, running mem acc: 0.718
==>>> it: 401, avg. loss: 1.816467, running train acc: 0.468
==>>> it: 401, mem avg. loss: 0.889102, running mem acc: 0.744
[0.164 0.13 0.072 0.222 0.11 0.569 0. 0. 0. 0. ]
-----------run 5 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.313493, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.826723, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.842690, running train acc: 0.219
==>>> it: 101, mem avg. loss: 1.038724, running mem acc: 0.747
==>>> it: 201, avg. loss: 2.436411, running train acc: 0.287
==>>> it: 201, mem avg. loss: 0.927999, running mem acc: 0.759
==>>> it: 301, avg. loss: 2.258984, running train acc: 0.316
==>>> it: 301, mem avg. loss: 0.820383, running mem acc: 0.787
==>>> it: 401, avg. loss: 2.163124, running train acc: 0.336
==>>> it: 401, mem avg. loss: 0.748139, running mem acc: 0.808
[0.166 0.169 0.061 0.245 0.106 0.213 0.462 0. 0. 0. ]
-----------run 5 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.765607, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.227935, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.470374, running train acc: 0.359
==>>> it: 101, mem avg. loss: 1.090107, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.109177, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.932586, running mem acc: 0.741
==>>> it: 301, avg. loss: 1.948294, running train acc: 0.436
==>>> it: 301, mem avg. loss: 0.830496, running mem acc: 0.771
==>>> it: 401, avg. loss: 1.834971, running train acc: 0.465
==>>> it: 401, mem avg. loss: 0.778851, running mem acc: 0.784
[0.17 0.146 0.044 0.162 0.074 0.187 0.063 0.588 0. 0. ]
-----------run 5 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.198342, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.145915, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.478800, running train acc: 0.371
==>>> it: 101, mem avg. loss: 1.071105, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.077831, running train acc: 0.430
==>>> it: 201, mem avg. loss: 0.896071, running mem acc: 0.764
==>>> it: 301, avg. loss: 1.886414, running train acc: 0.465
==>>> it: 301, mem avg. loss: 0.791619, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.781400, running train acc: 0.486
==>>> it: 401, mem avg. loss: 0.714693, running mem acc: 0.807
[0.115 0.077 0.055 0.195 0.051 0.151 0.064 0.144 0.554 0. ]
-----------run 5 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.938799, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.337434, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.480568, running train acc: 0.368
==>>> it: 101, mem avg. loss: 0.929006, running mem acc: 0.755
==>>> it: 201, avg. loss: 1.908960, running train acc: 0.483
==>>> it: 201, mem avg. loss: 0.762375, running mem acc: 0.791
==>>> it: 301, avg. loss: 1.728318, running train acc: 0.513
==>>> it: 301, mem avg. loss: 0.686818, running mem acc: 0.813
==>>> it: 401, avg. loss: 1.604723, running train acc: 0.541
==>>> it: 401, mem avg. loss: 0.621501, running mem acc: 0.830
[0.109 0.135 0.034 0.204 0.068 0.138 0.062 0.118 0.166 0.665]
-----------run 5-----------avg_end_acc 0.1699-----------train time 2466.8818497657776
Task: 0, Labels:[77, 32, 34, 85, 28, 68, 40, 52, 18, 4]
Task: 1, Labels:[15, 81, 60, 11, 7, 50, 64, 45, 17, 44]
Task: 2, Labels:[78, 91, 88, 54, 16, 75, 83, 24, 39, 62]
Task: 3, Labels:[74, 31, 99, 1, 0, 33, 53, 69, 93, 92]
Task: 4, Labels:[19, 80, 10, 59, 71, 14, 57, 97, 43, 49]
Task: 5, Labels:[23, 20, 48, 27, 2, 29, 76, 41, 58, 55]
Task: 6, Labels:[9, 5, 89, 61, 94, 56, 42, 51, 25, 70]
Task: 7, Labels:[47, 6, 90, 95, 46, 87, 84, 82, 67, 86]
Task: 8, Labels:[38, 73, 13, 98, 65, 35, 72, 26, 8, 63]
Task: 9, Labels:[12, 66, 36, 22, 79, 21, 30, 3, 96, 37]
buffer has 2000 slots
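
`buffer has 2000 slots` refers to the fixed-size episodic memory each run starts with. Assuming a random, reservoir-style update (a common choice for online replay; the log does not confirm the exact rule), each incoming example is kept with probability capacity / n_seen once the buffer is full. A minimal sketch of that idea, not the repository's implementation:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory updated with reservoir sampling (illustrative)."""
    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def update(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example
```
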
-----------run 6 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.424465, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.827387, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.441040, running train acc: 0.252
==>>> it: 101, mem avg. loss: 2.246048, running mem acc: 0.269
==>>> it: 201, avg. loss: 2.175049, running train acc: 0.290
==>>> it: 201, mem avg. loss: 2.068972, running mem acc: 0.307
==>>> it: 301, avg. loss: 2.028873, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.882205, running mem acc: 0.360
==>>> it: 401, avg. loss: 1.940446, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.760353, running mem acc: 0.393
[0.488 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.124155, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.471427, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.830100, running train acc: 0.155
==>>> it: 101, mem avg. loss: 2.007412, running mem acc: 0.433
==>>> it: 201, avg. loss: 2.484368, running train acc: 0.227
==>>> it: 201, mem avg. loss: 1.798803, running mem acc: 0.474
==>>> it: 301, avg. loss: 2.304645, running train acc: 0.271
==>>> it: 301, mem avg. loss: 1.676674, running mem acc: 0.501
==>>> it: 401, avg. loss: 2.201093, running train acc: 0.299
==>>> it: 401, mem avg. loss: 1.572525, running mem acc: 0.531
[0.198 0.407 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.695428, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.710797, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.686658, running train acc: 0.285
==>>> it: 101, mem avg. loss: 1.355388, running mem acc: 0.636
==>>> it: 201, avg. loss: 2.268869, running train acc: 0.348
==>>> it: 201, mem avg. loss: 1.233098, running mem acc: 0.665
==>>> it: 301, avg. loss: 2.092359, running train acc: 0.378
==>>> it: 301, mem avg. loss: 1.167383, running mem acc: 0.680
==>>> it: 401, avg. loss: 1.988331, running train acc: 0.399
==>>> it: 401, mem avg. loss: 1.100888, running mem acc: 0.698
[0.166 0.142 0.49 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.888567, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.786435, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.790213, running train acc: 0.254
==>>> it: 101, mem avg. loss: 1.371788, running mem acc: 0.641
==>>> it: 201, avg. loss: 2.370656, running train acc: 0.321
==>>> it: 201, mem avg. loss: 1.268843, running mem acc: 0.656
==>>> it: 301, avg. loss: 2.187636, running train acc: 0.361
==>>> it: 301, mem avg. loss: 1.167690, running mem acc: 0.679
==>>> it: 401, avg. loss: 2.039294, running train acc: 0.393
==>>> it: 401, mem avg. loss: 1.066892, running mem acc: 0.705
[0.148 0.108 0.181 0.532 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.945261, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.347808, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.678255, running train acc: 0.268
==>>> it: 101, mem avg. loss: 1.116896, running mem acc: 0.718
==>>> it: 201, avg. loss: 2.232262, running train acc: 0.352
==>>> it: 201, mem avg. loss: 0.991887, running mem acc: 0.736
==>>> it: 301, avg. loss: 2.045683, running train acc: 0.383
==>>> it: 301, mem avg. loss: 0.939464, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.958282, running train acc: 0.402
==>>> it: 401, mem avg. loss: 0.860477, running mem acc: 0.760
[0.083 0.091 0.212 0.271 0.481 0. 0. 0. 0. 0. ]
-----------run 6 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.344313, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.649494, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.669256, running train acc: 0.307
==>>> it: 101, mem avg. loss: 1.123230, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.215271, running train acc: 0.377
==>>> it: 201, mem avg. loss: 1.015452, running mem acc: 0.726
==>>> it: 301, avg. loss: 2.011953, running train acc: 0.418
==>>> it: 301, mem avg. loss: 0.915072, running mem acc: 0.748
==>>> it: 401, avg. loss: 1.897490, running train acc: 0.442
==>>> it: 401, mem avg. loss: 0.825849, running mem acc: 0.770
[0.06 0.093 0.126 0.253 0.141 0.593 0. 0. 0. 0. ]
-----------run 6 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.469841, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.759230, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.543913, running train acc: 0.329
==>>> it: 101, mem avg. loss: 0.996563, running mem acc: 0.736
==>>> it: 201, avg. loss: 2.092412, running train acc: 0.397
==>>> it: 201, mem avg. loss: 0.893401, running mem acc: 0.752
==>>> it: 301, avg. loss: 1.909276, running train acc: 0.438
==>>> it: 301, mem avg. loss: 0.798484, running mem acc: 0.778
==>>> it: 401, avg. loss: 1.809289, running train acc: 0.458
==>>> it: 401, mem avg. loss: 0.726981, running mem acc: 0.799
[0.067 0.072 0.121 0.141 0.143 0.173 0.557 0. 0. 0. ]
-----------run 6 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.641023, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.470937, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.621981, running train acc: 0.304
==>>> it: 101, mem avg. loss: 1.011013, running mem acc: 0.731
==>>> it: 201, avg. loss: 2.132377, running train acc: 0.395
==>>> it: 201, mem avg. loss: 0.839622, running mem acc: 0.773
==>>> it: 301, avg. loss: 1.940537, running train acc: 0.429
==>>> it: 301, mem avg. loss: 0.741315, running mem acc: 0.799
==>>> it: 401, avg. loss: 1.806512, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.675164, running mem acc: 0.813
[0.051 0.09 0.121 0.144 0.1 0.132 0.18 0.593 0. 0. ]
-----------run 6 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.695618, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.457331, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.685711, running train acc: 0.271
==>>> it: 101, mem avg. loss: 0.963907, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.336026, running train acc: 0.316
==>>> it: 201, mem avg. loss: 0.827039, running mem acc: 0.775
==>>> it: 301, avg. loss: 2.156183, running train acc: 0.347
==>>> it: 301, mem avg. loss: 0.742315, running mem acc: 0.798
==>>> it: 401, avg. loss: 2.044847, running train acc: 0.370
==>>> it: 401, mem avg. loss: 0.677813, running mem acc: 0.813
[0.04 0.087 0.167 0.137 0.097 0.118 0.147 0.195 0.483 0. ]
-----------run 6 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.950813, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.183434, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.512239, running train acc: 0.342
==>>> it: 101, mem avg. loss: 1.060104, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.097214, running train acc: 0.411
==>>> it: 201, mem avg. loss: 0.877097, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.951872, running train acc: 0.441
==>>> it: 301, mem avg. loss: 0.763038, running mem acc: 0.794
==>>> it: 401, avg. loss: 1.847733, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.692187, running mem acc: 0.812
[0.03 0.089 0.103 0.158 0.098 0.143 0.163 0.123 0.076 0.567]
-----------run 6-----------avg_end_acc 0.155-----------train time 2632.0857589244843
Task: 0, Labels:[12, 25, 94, 43, 18, 3, 11, 84, 72, 26]
Task: 1, Labels:[41, 63, 52, 21, 60, 66, 82, 50, 7, 91]
Task: 2, Labels:[71, 76, 88, 40, 99, 85, 53, 16, 10, 90]
Task: 3, Labels:[14, 54, 13, 81, 38, 29, 23, 67, 93, 57]
Task: 4, Labels:[17, 75, 89, 69, 98, 34, 65, 68, 35, 0]
Task: 5, Labels:[30, 44, 24, 9, 49, 8, 80, 64, 33, 73]
Task: 6, Labels:[20, 19, 46, 32, 45, 48, 58, 2, 97, 92]
Task: 7, Labels:[5, 22, 56, 51, 86, 42, 4, 28, 95, 15]
Task: 8, Labels:[61, 27, 77, 87, 31, 74, 55, 79, 70, 36]
Task: 9, Labels:[1, 6, 39, 96, 37, 83, 59, 47, 62, 78]
buffer has 2000 slots
-----------run 7 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.459373, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.835314, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.636453, running train acc: 0.167
==>>> it: 101, mem avg. loss: 2.443826, running mem acc: 0.221
==>>> it: 201, avg. loss: 2.372414, running train acc: 0.221
==>>> it: 201, mem avg. loss: 2.221057, running mem acc: 0.249
==>>> it: 301, avg. loss: 2.238662, running train acc: 0.249
==>>> it: 301, mem avg. loss: 2.076012, running mem acc: 0.290
==>>> it: 401, avg. loss: 2.149004, running train acc: 0.275
==>>> it: 401, mem avg. loss: 2.008679, running mem acc: 0.310
[0.426 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 8.656170, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.082384, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.808567, running train acc: 0.248
==>>> it: 101, mem avg. loss: 2.524130, running mem acc: 0.279
==>>> it: 201, avg. loss: 2.372859, running train acc: 0.328
==>>> it: 201, mem avg. loss: 2.356813, running mem acc: 0.303
==>>> it: 301, avg. loss: 2.179746, running train acc: 0.368
==>>> it: 301, mem avg. loss: 2.182154, running mem acc: 0.351
==>>> it: 401, avg. loss: 2.042269, running train acc: 0.405
==>>> it: 401, mem avg. loss: 1.971062, running mem acc: 0.414
[0.128 0.596 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.871606, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.848329, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.755503, running train acc: 0.275
==>>> it: 101, mem avg. loss: 1.130463, running mem acc: 0.708
==>>> it: 201, avg. loss: 2.320084, running train acc: 0.331
==>>> it: 201, mem avg. loss: 1.052767, running mem acc: 0.714
==>>> it: 301, avg. loss: 2.151854, running train acc: 0.358
==>>> it: 301, mem avg. loss: 0.974725, running mem acc: 0.729
==>>> it: 401, avg. loss: 2.010622, running train acc: 0.391
==>>> it: 401, mem avg. loss: 0.903236, running mem acc: 0.748
[0.053 0.387 0.509 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.828763, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.402969, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.766981, running train acc: 0.265
==>>> it: 101, mem avg. loss: 1.091205, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.357346, running train acc: 0.325
==>>> it: 201, mem avg. loss: 1.010492, running mem acc: 0.732
==>>> it: 301, avg. loss: 2.156860, running train acc: 0.357
==>>> it: 301, mem avg. loss: 0.940323, running mem acc: 0.741
==>>> it: 401, avg. loss: 2.036664, running train acc: 0.377
==>>> it: 401, mem avg. loss: 0.867772, running mem acc: 0.757
[0.039 0.322 0.239 0.483 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.768903, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.273910, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.519102, running train acc: 0.313
==>>> it: 101, mem avg. loss: 1.072551, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.076965, running train acc: 0.390
==>>> it: 201, mem avg. loss: 1.002093, running mem acc: 0.712
==>>> it: 301, avg. loss: 1.867073, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.902792, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.753686, running train acc: 0.465
==>>> it: 401, mem avg. loss: 0.813836, running mem acc: 0.770
[0.029 0.275 0.175 0.108 0.609 0. 0. 0. 0. 0. ]
-----------run 7 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.825694, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.990649, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.589592, running train acc: 0.307
==>>> it: 101, mem avg. loss: 1.009668, running mem acc: 0.750
==>>> it: 201, avg. loss: 2.157351, running train acc: 0.372
==>>> it: 201, mem avg. loss: 0.852519, running mem acc: 0.776
==>>> it: 301, avg. loss: 1.974858, running train acc: 0.407
==>>> it: 301, mem avg. loss: 0.758699, running mem acc: 0.795
==>>> it: 401, avg. loss: 1.855935, running train acc: 0.434
==>>> it: 401, mem avg. loss: 0.676005, running mem acc: 0.817
[0.032 0.265 0.168 0.096 0.269 0.534 0. 0. 0. 0. ]
-----------run 7 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.497041, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.687015, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.755907, running train acc: 0.259
==>>> it: 101, mem avg. loss: 1.089326, running mem acc: 0.716
==>>> it: 201, avg. loss: 2.300543, running train acc: 0.336
==>>> it: 201, mem avg. loss: 0.897815, running mem acc: 0.764
==>>> it: 301, avg. loss: 2.109253, running train acc: 0.382
==>>> it: 301, mem avg. loss: 0.781966, running mem acc: 0.794
==>>> it: 401, avg. loss: 1.981141, running train acc: 0.411
==>>> it: 401, mem avg. loss: 0.707089, running mem acc: 0.815
[0.032 0.27 0.16 0.059 0.196 0.223 0.543 0. 0. 0. ]
-----------run 7 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.202760, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.291011, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.720132, running train acc: 0.277
==>>> it: 101, mem avg. loss: 0.982876, running mem acc: 0.758
==>>> it: 201, avg. loss: 2.276543, running train acc: 0.359
==>>> it: 201, mem avg. loss: 0.842555, running mem acc: 0.783
==>>> it: 301, avg. loss: 2.081342, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.757955, running mem acc: 0.803
==>>> it: 401, avg. loss: 1.940704, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.687171, running mem acc: 0.821
[0.014 0.275 0.144 0.05 0.193 0.136 0.19 0.548 0. 0. ]
-----------run 7 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.547227, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.666669, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.569132, running train acc: 0.343
==>>> it: 101, mem avg. loss: 0.998925, running mem acc: 0.745
==>>> it: 201, avg. loss: 2.173751, running train acc: 0.398
==>>> it: 201, mem avg. loss: 0.847100, running mem acc: 0.774
==>>> it: 301, avg. loss: 1.995588, running train acc: 0.421
==>>> it: 301, mem avg. loss: 0.744859, running mem acc: 0.801
==>>> it: 401, avg. loss: 1.884755, running train acc: 0.443
==>>> it: 401, mem avg. loss: 0.677147, running mem acc: 0.822
[0.014 0.228 0.077 0.058 0.198 0.132 0.175 0.108 0.542 0. ]
-----------run 7 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.805775, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.646891, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.582336, running train acc: 0.323
==>>> it: 101, mem avg. loss: 1.000322, running mem acc: 0.742
==>>> it: 201, avg. loss: 2.110834, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.790720, running mem acc: 0.794
==>>> it: 301, avg. loss: 1.947144, running train acc: 0.425
==>>> it: 301, mem avg. loss: 0.700833, running mem acc: 0.814
==>>> it: 401, avg. loss: 1.826645, running train acc: 0.448
==>>> it: 401, mem avg. loss: 0.646136, running mem acc: 0.828
[0.023 0.182 0.084 0.074 0.155 0.122 0.149 0.081 0.14 0.55 ]
-----------run 7-----------avg_end_acc 0.156-----------train time 2527.3951234817505
Task: 0, Labels:[37, 28, 66, 70, 49, 24, 39, 80, 86, 12]
Task: 1, Labels:[85, 34, 52, 82, 91, 48, 2, 23, 17, 58]
Task: 2, Labels:[18, 44, 0, 65, 92, 95, 25, 33, 36, 41]
Task: 3, Labels:[67, 78, 29, 81, 13, 54, 15, 21, 99, 77]
Task: 4, Labels:[83, 32, 87, 43, 68, 69, 10, 71, 60, 89]
Task: 5, Labels:[57, 96, 27, 50, 90, 72, 53, 4, 40, 19]
Task: 6, Labels:[38, 31, 55, 8, 61, 73, 16, 22, 79, 7]
Task: 7, Labels:[42, 26, 76, 35, 63, 3, 93, 64, 88, 62]
Task: 8, Labels:[1, 56, 5, 30, 9, 45, 51, 98, 11, 20]
Task: 9, Labels:[6, 59, 84, 97, 14, 94, 47, 74, 75, 46]
buffer has 2000 slots
-----------run 8 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.483299, running train acc: 0.050
==>>> it: 1, mem avg. loss: 3.162857, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.519204, running train acc: 0.224
==>>> it: 101, mem avg. loss: 2.351665, running mem acc: 0.237
==>>> it: 201, avg. loss: 2.194489, running train acc: 0.269
==>>> it: 201, mem avg. loss: 2.104952, running mem acc: 0.285
==>>> it: 301, avg. loss: 2.028299, running train acc: 0.320
==>>> it: 301, mem avg. loss: 1.946598, running mem acc: 0.331
==>>> it: 401, avg. loss: 1.926344, running train acc: 0.348
==>>> it: 401, mem avg. loss: 1.810042, running mem acc: 0.372
[0.495 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.746885, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.124197, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.676889, running train acc: 0.260
==>>> it: 101, mem avg. loss: 2.018450, running mem acc: 0.405
==>>> it: 201, avg. loss: 2.271834, running train acc: 0.339
==>>> it: 201, mem avg. loss: 1.836719, running mem acc: 0.444
==>>> it: 301, avg. loss: 2.050705, running train acc: 0.394
==>>> it: 301, mem avg. loss: 1.739310, running mem acc: 0.466
==>>> it: 401, avg. loss: 1.919698, running train acc: 0.424
==>>> it: 401, mem avg. loss: 1.618645, running mem acc: 0.500
[0.309 0.587 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.087857, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.624774, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.795800, running train acc: 0.251
==>>> it: 101, mem avg. loss: 1.337885, running mem acc: 0.650
==>>> it: 201, avg. loss: 2.406708, running train acc: 0.293
==>>> it: 201, mem avg. loss: 1.240008, running mem acc: 0.663
==>>> it: 301, avg. loss: 2.229786, running train acc: 0.327
==>>> it: 301, mem avg. loss: 1.141159, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.105185, running train acc: 0.352
==>>> it: 401, mem avg. loss: 1.024746, running mem acc: 0.718
[0.137 0.344 0.519 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.177635, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.589542, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.818896, running train acc: 0.240
==>>> it: 101, mem avg. loss: 1.142737, running mem acc: 0.689
==>>> it: 201, avg. loss: 2.424950, running train acc: 0.303
==>>> it: 201, mem avg. loss: 1.110081, running mem acc: 0.686
==>>> it: 301, avg. loss: 2.261410, running train acc: 0.331
==>>> it: 301, mem avg. loss: 1.106698, running mem acc: 0.685
==>>> it: 401, avg. loss: 2.162844, running train acc: 0.346
==>>> it: 401, mem avg. loss: 1.060699, running mem acc: 0.699
[0.117 0.278 0.21 0.497 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.103175, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.657426, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.629313, running train acc: 0.299
==>>> it: 101, mem avg. loss: 1.183270, running mem acc: 0.672
==>>> it: 201, avg. loss: 2.152890, running train acc: 0.379
==>>> it: 201, mem avg. loss: 1.129015, running mem acc: 0.686
==>>> it: 301, avg. loss: 1.965346, running train acc: 0.418
==>>> it: 301, mem avg. loss: 1.049710, running mem acc: 0.708
==>>> it: 401, avg. loss: 1.823467, running train acc: 0.451
==>>> it: 401, mem avg. loss: 0.954739, running mem acc: 0.734
[0.084 0.253 0.223 0.108 0.581 0. 0. 0. 0. 0. ]
-----------run 8 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.033952, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.268862, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.708922, running train acc: 0.274
==>>> it: 101, mem avg. loss: 1.070676, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.321157, running train acc: 0.328
==>>> it: 201, mem avg. loss: 0.968164, running mem acc: 0.744
==>>> it: 301, avg. loss: 2.141550, running train acc: 0.363
==>>> it: 301, mem avg. loss: 0.855893, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.027804, running train acc: 0.383
==>>> it: 401, mem avg. loss: 0.790531, running mem acc: 0.788
[0.045 0.194 0.105 0.089 0.214 0.479 0. 0. 0. 0. ]
-----------run 8 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.469357, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.513997, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.777180, running train acc: 0.247
==>>> it: 101, mem avg. loss: 1.124619, running mem acc: 0.715
==>>> it: 201, avg. loss: 2.334727, running train acc: 0.317
==>>> it: 201, mem avg. loss: 0.996187, running mem acc: 0.740
==>>> it: 301, avg. loss: 2.152750, running train acc: 0.349
==>>> it: 301, mem avg. loss: 0.872538, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.039560, running train acc: 0.373
==>>> it: 401, mem avg. loss: 0.779483, running mem acc: 0.796
[0.039 0.242 0.094 0.079 0.205 0.124 0.541 0. 0. 0. ]
-----------run 8 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.360435, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.196974, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.718455, running train acc: 0.285
==>>> it: 101, mem avg. loss: 1.078434, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.277845, running train acc: 0.354
==>>> it: 201, mem avg. loss: 0.886643, running mem acc: 0.765
==>>> it: 301, avg. loss: 2.105392, running train acc: 0.382
==>>> it: 301, mem avg. loss: 0.801410, running mem acc: 0.787
==>>> it: 401, avg. loss: 1.976321, running train acc: 0.415
==>>> it: 401, mem avg. loss: 0.719056, running mem acc: 0.810
[0.055 0.204 0.085 0.093 0.201 0.103 0.114 0.477 0. 0. ]
-----------run 8 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.354440, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.749486, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.531024, running train acc: 0.358
==>>> it: 101, mem avg. loss: 1.045954, running mem acc: 0.725
==>>> it: 201, avg. loss: 2.095567, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.872953, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.946133, running train acc: 0.433
==>>> it: 301, mem avg. loss: 0.782512, running mem acc: 0.786
==>>> it: 401, avg. loss: 1.847266, running train acc: 0.457
==>>> it: 401, mem avg. loss: 0.698494, running mem acc: 0.811
[0.049 0.171 0.08 0.064 0.201 0.089 0.058 0.138 0.562 0. ]
-----------run 8 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.762014, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.181871, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.502903, running train acc: 0.339
==>>> it: 101, mem avg. loss: 0.936425, running mem acc: 0.744
==>>> it: 201, avg. loss: 2.034085, running train acc: 0.413
==>>> it: 201, mem avg. loss: 0.768617, running mem acc: 0.784
==>>> it: 301, avg. loss: 1.865874, running train acc: 0.447
==>>> it: 301, mem avg. loss: 0.696831, running mem acc: 0.807
==>>> it: 401, avg. loss: 1.765241, running train acc: 0.467
==>>> it: 401, mem avg. loss: 0.623293, running mem acc: 0.827
[0.053 0.14 0.104 0.044 0.195 0.081 0.08 0.08 0.158 0.585]
-----------run 8-----------avg_end_acc 0.152-----------train time 2543.4716382026672
Task: 0, Labels:[21, 4, 44, 77, 48, 75, 90, 40, 81, 16]
Task: 1, Labels:[8, 22, 42, 41, 35, 62, 7, 98, 6, 24]
Task: 2, Labels:[27, 80, 71, 96, 47, 33, 92, 31, 61, 91]
Task: 3, Labels:[55, 52, 79, 58, 43, 65, 0, 94, 46, 26]
Task: 4, Labels:[38, 53, 73, 74, 45, 9, 25, 82, 57, 56]
Task: 5, Labels:[68, 99, 60, 29, 83, 5, 95, 64, 12, 63]
Task: 6, Labels:[70, 18, 59, 51, 69, 39, 67, 97, 11, 13]
Task: 7, Labels:[89, 86, 10, 36, 30, 28, 15, 19, 23, 87]
Task: 8, Labels:[85, 72, 76, 20, 88, 93, 66, 34, 84, 32]
Task: 9, Labels:[3, 78, 17, 37, 54, 49, 50, 14, 1, 2]
buffer has 2000 slots
-----------run 9 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.120544, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.617083, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.613127, running train acc: 0.179
==>>> it: 101, mem avg. loss: 2.371817, running mem acc: 0.231
==>>> it: 201, avg. loss: 2.359849, running train acc: 0.231
==>>> it: 201, mem avg. loss: 2.172538, running mem acc: 0.271
==>>> it: 301, avg. loss: 2.191753, running train acc: 0.274
==>>> it: 301, mem avg. loss: 2.013949, running mem acc: 0.317
==>>> it: 401, avg. loss: 2.093562, running train acc: 0.296
==>>> it: 401, mem avg. loss: 1.916984, running mem acc: 0.344
[0.441 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.098277, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.378008, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.935359, running train acc: 0.172
==>>> it: 101, mem avg. loss: 2.261704, running mem acc: 0.351
==>>> it: 201, avg. loss: 2.525982, running train acc: 0.244
==>>> it: 201, mem avg. loss: 2.048927, running mem acc: 0.393
==>>> it: 301, avg. loss: 2.340217, running train acc: 0.290
==>>> it: 301, mem avg. loss: 1.847029, running mem acc: 0.455
==>>> it: 401, avg. loss: 2.228837, running train acc: 0.313
==>>> it: 401, mem avg. loss: 1.692535, running mem acc: 0.497
[0.135 0.442 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.423477, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.247947, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.640900, running train acc: 0.291
==>>> it: 101, mem avg. loss: 1.400566, running mem acc: 0.616
==>>> it: 201, avg. loss: 2.180233, running train acc: 0.359
==>>> it: 201, mem avg. loss: 1.241131, running mem acc: 0.649
==>>> it: 301, avg. loss: 1.982572, running train acc: 0.406
==>>> it: 301, mem avg. loss: 1.170756, running mem acc: 0.668
==>>> it: 401, avg. loss: 1.869420, running train acc: 0.430
==>>> it: 401, mem avg. loss: 1.099925, running mem acc: 0.687
[0.082 0.187 0.569 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.426653, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.279176, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.612793, running train acc: 0.328
==>>> it: 101, mem avg. loss: 1.240951, running mem acc: 0.670
==>>> it: 201, avg. loss: 2.224056, running train acc: 0.376
==>>> it: 201, mem avg. loss: 1.180653, running mem acc: 0.672
==>>> it: 301, avg. loss: 2.058625, running train acc: 0.405
==>>> it: 301, mem avg. loss: 1.066736, running mem acc: 0.696
==>>> it: 401, avg. loss: 1.935622, running train acc: 0.432
==>>> it: 401, mem avg. loss: 0.968292, running mem acc: 0.723
[0.072 0.114 0.203 0.605 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.142316, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.576928, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.625875, running train acc: 0.316
==>>> it: 101, mem avg. loss: 1.087335, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.202543, running train acc: 0.377
==>>> it: 201, mem avg. loss: 0.943995, running mem acc: 0.743
==>>> it: 301, avg. loss: 2.009743, running train acc: 0.422
==>>> it: 301, mem avg. loss: 0.828864, running mem acc: 0.773
==>>> it: 401, avg. loss: 1.866760, running train acc: 0.450
==>>> it: 401, mem avg. loss: 0.731779, running mem acc: 0.801
[0.048 0.142 0.156 0.276 0.607 0. 0. 0. 0. 0. ]
-----------run 9 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.929319, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.760754, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.464731, running train acc: 0.348
==>>> it: 101, mem avg. loss: 0.861999, running mem acc: 0.771
==>>> it: 201, avg. loss: 1.996821, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.775646, running mem acc: 0.785
==>>> it: 301, avg. loss: 1.811338, running train acc: 0.469
==>>> it: 301, mem avg. loss: 0.699306, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.693592, running train acc: 0.498
==>>> it: 401, mem avg. loss: 0.646784, running mem acc: 0.820
[0.037 0.096 0.128 0.275 0.234 0.592 0. 0. 0. 0. ]
-----------run 9 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.435322, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.905392, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.514953, running train acc: 0.339
==>>> it: 101, mem avg. loss: 0.931387, running mem acc: 0.761
==>>> it: 201, avg. loss: 2.083657, running train acc: 0.398
==>>> it: 201, mem avg. loss: 0.801827, running mem acc: 0.788
==>>> it: 301, avg. loss: 1.906918, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.709797, running mem acc: 0.806
==>>> it: 401, avg. loss: 1.780945, running train acc: 0.472
==>>> it: 401, mem avg. loss: 0.635117, running mem acc: 0.825
[0.046 0.095 0.123 0.239 0.209 0.225 0.597 0. 0. 0. ]
-----------run 9 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.596126, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.767127, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.357536, running train acc: 0.386
==>>> it: 101, mem avg. loss: 0.923270, running mem acc: 0.765
==>>> it: 201, avg. loss: 1.944545, running train acc: 0.453
==>>> it: 201, mem avg. loss: 0.764019, running mem acc: 0.796
==>>> it: 301, avg. loss: 1.758554, running train acc: 0.489
==>>> it: 301, mem avg. loss: 0.674325, running mem acc: 0.819
==>>> it: 401, avg. loss: 1.665297, running train acc: 0.507
==>>> it: 401, mem avg. loss: 0.608821, running mem acc: 0.835
[0.057 0.081 0.132 0.24 0.181 0.163 0.198 0.623 0. 0. ]
-----------run 9 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.618112, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.294658, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.586261, running train acc: 0.325
==>>> it: 101, mem avg. loss: 0.867026, running mem acc: 0.778
==>>> it: 201, avg. loss: 2.172870, running train acc: 0.367
==>>> it: 201, mem avg. loss: 0.739413, running mem acc: 0.803
==>>> it: 301, avg. loss: 2.006553, running train acc: 0.401
==>>> it: 301, mem avg. loss: 0.647727, running mem acc: 0.831
==>>> it: 401, avg. loss: 1.904231, running train acc: 0.420
==>>> it: 401, mem avg. loss: 0.591324, running mem acc: 0.847
[0.032 0.075 0.12 0.2 0.178 0.164 0.144 0.201 0.552 0. ]
-----------run 9 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.737688, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.264111, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.545728, running train acc: 0.321
==>>> it: 101, mem avg. loss: 0.891467, running mem acc: 0.766
==>>> it: 201, avg. loss: 2.128180, running train acc: 0.384
==>>> it: 201, mem avg. loss: 0.738494, running mem acc: 0.806
==>>> it: 301, avg. loss: 1.949013, running train acc: 0.419
==>>> it: 301, mem avg. loss: 0.652600, running mem acc: 0.828
==>>> it: 401, avg. loss: 1.836256, running train acc: 0.437
==>>> it: 401, mem avg. loss: 0.594830, running mem acc: 0.842
[0.028 0.065 0.133 0.2 0.187 0.133 0.089 0.147 0.156 0.542]
-----------run 9-----------avg_end_acc 0.168-----------train time 2598.2259385585785
Task: 0, Labels:[57, 17, 3, 47, 0, 94, 66, 56, 44, 7]
Task: 1, Labels:[38, 10, 23, 18, 14, 86, 67, 87, 52, 5]
Task: 2, Labels:[83, 98, 76, 96, 49, 20, 58, 21, 22, 40]
Task: 3, Labels:[36, 33, 41, 92, 88, 9, 95, 11, 28, 62]
Task: 4, Labels:[25, 91, 2, 46, 89, 8, 78, 72, 79, 26]
Task: 5, Labels:[99, 37, 15, 48, 90, 24, 59, 80, 93, 65]
Task: 6, Labels:[53, 6, 27, 51, 60, 73, 34, 64, 35, 81]
Task: 7, Labels:[12, 77, 32, 74, 61, 43, 54, 13, 50, 68]
Task: 8, Labels:[97, 19, 1, 85, 63, 84, 75, 30, 42, 71]
Task: 9, Labels:[39, 29, 45, 31, 55, 16, 4, 82, 69, 70]
buffer has 2000 slots
-----------run 10 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.517881, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.687700, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.535598, running train acc: 0.210
==>>> it: 101, mem avg. loss: 2.307323, running mem acc: 0.248
==>>> it: 201, avg. loss: 2.226912, running train acc: 0.267
==>>> it: 201, mem avg. loss: 2.076577, running mem acc: 0.295
==>>> it: 301, avg. loss: 2.039200, running train acc: 0.316
==>>> it: 301, mem avg. loss: 1.866356, running mem acc: 0.355
==>>> it: 401, avg. loss: 1.913767, running train acc: 0.353
==>>> it: 401, mem avg. loss: 1.736595, running mem acc: 0.395
[0.541 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.186578, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.068785, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.792997, running train acc: 0.226
==>>> it: 101, mem avg. loss: 2.060229, running mem acc: 0.413
==>>> it: 201, avg. loss: 2.409643, running train acc: 0.289
==>>> it: 201, mem avg. loss: 1.809448, running mem acc: 0.462
==>>> it: 301, avg. loss: 2.233301, running train acc: 0.324
==>>> it: 301, mem avg. loss: 1.631063, running mem acc: 0.509
==>>> it: 401, avg. loss: 2.090543, running train acc: 0.356
==>>> it: 401, mem avg. loss: 1.455732, running mem acc: 0.561
[0.208 0.473 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.591722, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.173227, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.624443, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.225916, running mem acc: 0.676
==>>> it: 201, avg. loss: 2.130590, running train acc: 0.388
==>>> it: 201, mem avg. loss: 1.104724, running mem acc: 0.698
==>>> it: 301, avg. loss: 1.918651, running train acc: 0.432
==>>> it: 301, mem avg. loss: 1.027460, running mem acc: 0.718
==>>> it: 401, avg. loss: 1.799968, running train acc: 0.457
==>>> it: 401, mem avg. loss: 0.945020, running mem acc: 0.738
[0.094 0.193 0.59 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.920829, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.336894, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.617306, running train acc: 0.289
==>>> it: 101, mem avg. loss: 1.105894, running mem acc: 0.717
==>>> it: 201, avg. loss: 2.187614, running train acc: 0.351
==>>> it: 201, mem avg. loss: 0.953943, running mem acc: 0.741
==>>> it: 301, avg. loss: 2.007888, running train acc: 0.386
==>>> it: 301, mem avg. loss: 0.885073, running mem acc: 0.756
==>>> it: 401, avg. loss: 1.919051, running train acc: 0.406
==>>> it: 401, mem avg. loss: 0.811270, running mem acc: 0.774
[0.066 0.131 0.37 0.516 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.489842, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.540555, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.854046, running train acc: 0.234
==>>> it: 101, mem avg. loss: 1.208285, running mem acc: 0.681
==>>> it: 201, avg. loss: 2.528900, running train acc: 0.270
==>>> it: 201, mem avg. loss: 1.134745, running mem acc: 0.691
==>>> it: 301, avg. loss: 2.356390, running train acc: 0.298
==>>> it: 301, mem avg. loss: 1.053090, running mem acc: 0.712
==>>> it: 401, avg. loss: 2.259578, running train acc: 0.315
==>>> it: 401, mem avg. loss: 0.980200, running mem acc: 0.729
[0.027 0.128 0.241 0.19 0.459 0. 0. 0. 0. 0. ]
-----------run 10 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.058362, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.813412, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.737207, running train acc: 0.278
==>>> it: 101, mem avg. loss: 1.296762, running mem acc: 0.648
==>>> it: 201, avg. loss: 2.309689, running train acc: 0.347
==>>> it: 201, mem avg. loss: 1.208238, running mem acc: 0.659
==>>> it: 301, avg. loss: 2.141962, running train acc: 0.372
==>>> it: 301, mem avg. loss: 1.106809, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.040643, running train acc: 0.389
==>>> it: 401, mem avg. loss: 1.003296, running mem acc: 0.711
[0.027 0.113 0.18 0.184 0.121 0.531 0. 0. 0. 0. ]
-----------run 10 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.126886, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.351078, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.473592, running train acc: 0.357
==>>> it: 101, mem avg. loss: 1.224211, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.061169, running train acc: 0.427
==>>> it: 201, mem avg. loss: 1.084968, running mem acc: 0.696
==>>> it: 301, avg. loss: 1.892117, running train acc: 0.454
==>>> it: 301, mem avg. loss: 0.957290, running mem acc: 0.729
==>>> it: 401, avg. loss: 1.791067, running train acc: 0.473
==>>> it: 401, mem avg. loss: 0.845054, running mem acc: 0.762
[0.019 0.088 0.133 0.144 0.078 0.159 0.593 0. 0. 0. ]
-----------run 10 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.421838, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.863021, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.608776, running train acc: 0.303
==>>> it: 101, mem avg. loss: 1.060748, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.157215, running train acc: 0.376
==>>> it: 201, mem avg. loss: 0.911113, running mem acc: 0.765
==>>> it: 301, avg. loss: 1.960685, running train acc: 0.415
==>>> it: 301, mem avg. loss: 0.802939, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.859882, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.727116, running mem acc: 0.807
[0.031 0.095 0.146 0.108 0.061 0.081 0.169 0.516 0. 0. ]
-----------run 10 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.758717, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.320936, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.485607, running train acc: 0.335
==>>> it: 101, mem avg. loss: 0.953603, running mem acc: 0.756
==>>> it: 201, avg. loss: 2.028043, running train acc: 0.418
==>>> it: 201, mem avg. loss: 0.792594, running mem acc: 0.789
==>>> it: 301, avg. loss: 1.834099, running train acc: 0.458
==>>> it: 301, mem avg. loss: 0.716500, running mem acc: 0.807
==>>> it: 401, avg. loss: 1.719138, running train acc: 0.484
==>>> it: 401, mem avg. loss: 0.660174, running mem acc: 0.821
[0.032 0.078 0.143 0.071 0.072 0.115 0.175 0.184 0.63 0. ]
-----------run 10 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.149339, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.194741, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.620654, running train acc: 0.317
==>>> it: 101, mem avg. loss: 0.933220, running mem acc: 0.762
==>>> it: 201, avg. loss: 2.158245, running train acc: 0.400
==>>> it: 201, mem avg. loss: 0.813508, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.985122, running train acc: 0.426
==>>> it: 301, mem avg. loss: 0.709405, running mem acc: 0.812
==>>> it: 401, avg. loss: 1.841146, running train acc: 0.458
==>>> it: 401, mem avg. loss: 0.646138, running mem acc: 0.831
[0.019 0.017 0.09 0.081 0.065 0.084 0.157 0.181 0.201 0.55 ]
-----------run 10-----------avg_end_acc 0.1445-----------train time 2629.6844050884247
Task: 0, Labels:[85, 6, 88, 31, 84, 91, 75, 49, 69, 76]
Task: 1, Labels:[3, 99, 24, 78, 32, 71, 81, 0, 63, 44]
Task: 2, Labels:[8, 13, 51, 61, 89, 20, 82, 16, 64, 55]
Task: 3, Labels:[30, 93, 95, 25, 57, 58, 83, 41, 15, 79]
Task: 4, Labels:[33, 72, 18, 35, 14, 23, 22, 87, 80, 26]
Task: 5, Labels:[17, 27, 56, 43, 54, 29, 97, 65, 50, 46]
Task: 6, Labels:[96, 10, 90, 12, 21, 19, 36, 67, 4, 34]
Task: 7, Labels:[40, 9, 53, 38, 37, 45, 62, 94, 86, 60]
Task: 8, Labels:[59, 98, 66, 68, 2, 7, 1, 77, 11, 92]
Task: 9, Labels:[52, 42, 48, 73, 74, 39, 5, 47, 28, 70]
buffer has 2000 slots
-----------run 11 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.189835, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.168808, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.541171, running train acc: 0.200
==>>> it: 101, mem avg. loss: 2.264231, running mem acc: 0.250
==>>> it: 201, avg. loss: 2.203083, running train acc: 0.269
==>>> it: 201, mem avg. loss: 2.034104, running mem acc: 0.297
==>>> it: 301, avg. loss: 2.010828, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.830894, running mem acc: 0.363
==>>> it: 401, avg. loss: 1.912500, running train acc: 0.353
==>>> it: 401, mem avg. loss: 1.690996, running mem acc: 0.407
[0.559 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.024690, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.321640, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.726428, running train acc: 0.232
==>>> it: 101, mem avg. loss: 1.743590, running mem acc: 0.507
==>>> it: 201, avg. loss: 2.359242, running train acc: 0.293
==>>> it: 201, mem avg. loss: 1.536989, running mem acc: 0.543
==>>> it: 301, avg. loss: 2.178935, running train acc: 0.330
==>>> it: 301, mem avg. loss: 1.378923, running mem acc: 0.593
==>>> it: 401, avg. loss: 2.063812, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.266082, running mem acc: 0.624
[0.233 0.429 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.537317, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.489530, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.780185, running train acc: 0.259
==>>> it: 101, mem avg. loss: 1.223219, running mem acc: 0.679
==>>> it: 201, avg. loss: 2.340605, running train acc: 0.335
==>>> it: 201, mem avg. loss: 1.151194, running mem acc: 0.694
==>>> it: 301, avg. loss: 2.171662, running train acc: 0.369
==>>> it: 301, mem avg. loss: 1.096604, running mem acc: 0.703
==>>> it: 401, avg. loss: 2.054652, running train acc: 0.393
==>>> it: 401, mem avg. loss: 1.041396, running mem acc: 0.716
[0.153 0.211 0.497 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.506497, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.359183, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.771453, running train acc: 0.252
==>>> it: 101, mem avg. loss: 1.276556, running mem acc: 0.666
==>>> it: 201, avg. loss: 2.334628, running train acc: 0.310
==>>> it: 201, mem avg. loss: 1.224476, running mem acc: 0.672
==>>> it: 301, avg. loss: 2.153257, running train acc: 0.341
==>>> it: 301, mem avg. loss: 1.135418, running mem acc: 0.692
==>>> it: 401, avg. loss: 2.029568, running train acc: 0.374
==>>> it: 401, mem avg. loss: 1.015071, running mem acc: 0.724
[0.118 0.149 0.198 0.488 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.613126, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.493856, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.748457, running train acc: 0.265
==>>> it: 101, mem avg. loss: 1.113203, running mem acc: 0.736
==>>> it: 201, avg. loss: 2.386869, running train acc: 0.315
==>>> it: 201, mem avg. loss: 1.039944, running mem acc: 0.731
==>>> it: 301, avg. loss: 2.208878, running train acc: 0.352
==>>> it: 301, mem avg. loss: 0.946079, running mem acc: 0.750
==>>> it: 401, avg. loss: 2.098850, running train acc: 0.371
==>>> it: 401, mem avg. loss: 0.868129, running mem acc: 0.771
[0.131 0.073 0.178 0.229 0.465 0. 0. 0. 0. 0. ]
-----------run 11 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.867691, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.240804, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.717609, running train acc: 0.308
==>>> it: 101, mem avg. loss: 1.159337, running mem acc: 0.691
==>>> it: 201, avg. loss: 2.306640, running train acc: 0.358
==>>> it: 201, mem avg. loss: 0.999113, running mem acc: 0.729
==>>> it: 301, avg. loss: 2.145154, running train acc: 0.383
==>>> it: 301, mem avg. loss: 0.912478, running mem acc: 0.752
==>>> it: 401, avg. loss: 2.025131, running train acc: 0.410
==>>> it: 401, mem avg. loss: 0.830844, running mem acc: 0.773
[0.103 0.165 0.184 0.15 0.107 0.55 0. 0. 0. 0. ]
-----------run 11 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.392546, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.694815, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.582074, running train acc: 0.321
==>>> it: 101, mem avg. loss: 1.081163, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.213374, running train acc: 0.361
==>>> it: 201, mem avg. loss: 0.957886, running mem acc: 0.728
==>>> it: 301, avg. loss: 2.027303, running train acc: 0.406
==>>> it: 301, mem avg. loss: 0.852747, running mem acc: 0.763
==>>> it: 401, avg. loss: 1.927963, running train acc: 0.424
==>>> it: 401, mem avg. loss: 0.775535, running mem acc: 0.788
[0.065 0.11 0.161 0.148 0.103 0.198 0.508 0. 0. 0. ]
-----------run 11 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.169339, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.504658, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.409145, running train acc: 0.389
==>>> it: 101, mem avg. loss: 0.973405, running mem acc: 0.759
==>>> it: 201, avg. loss: 1.931373, running train acc: 0.466
==>>> it: 201, mem avg. loss: 0.797592, running mem acc: 0.794
==>>> it: 301, avg. loss: 1.741714, running train acc: 0.503
==>>> it: 301, mem avg. loss: 0.706490, running mem acc: 0.812
==>>> it: 401, avg. loss: 1.609237, running train acc: 0.535
==>>> it: 401, mem avg. loss: 0.643387, running mem acc: 0.826
[0.064 0.079 0.131 0.112 0.08 0.144 0.181 0.639 0. 0. ]
-----------run 11 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.843805, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.639262, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.537906, running train acc: 0.325
==>>> it: 101, mem avg. loss: 0.861095, running mem acc: 0.778
==>>> it: 201, avg. loss: 2.086805, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.709975, running mem acc: 0.810
==>>> it: 301, avg. loss: 1.940486, running train acc: 0.424
==>>> it: 301, mem avg. loss: 0.627988, running mem acc: 0.832
==>>> it: 401, avg. loss: 1.832742, running train acc: 0.446
==>>> it: 401, mem avg. loss: 0.583471, running mem acc: 0.842
[0.054 0.075 0.111 0.103 0.084 0.068 0.093 0.269 0.534 0. ]
-----------run 11 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.101140, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.135612, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.375093, running train acc: 0.392
==>>> it: 101, mem avg. loss: 0.915956, running mem acc: 0.749
==>>> it: 201, avg. loss: 1.868183, running train acc: 0.481
==>>> it: 201, mem avg. loss: 0.742477, running mem acc: 0.791
==>>> it: 301, avg. loss: 1.669394, running train acc: 0.524
==>>> it: 301, mem avg. loss: 0.642697, running mem acc: 0.820
==>>> it: 401, avg. loss: 1.578260, running train acc: 0.540
==>>> it: 401, mem avg. loss: 0.574483, running mem acc: 0.839
[0.083 0.093 0.141 0.097 0.075 0.092 0.067 0.27 0.103 0.634]
-----------run 11-----------avg_end_acc 0.16549999999999998-----------train time 2605.7747802734375
Task: 0, Labels:[90, 74, 9, 39, 27, 58, 0, 37, 32, 77]
Task: 1, Labels:[94, 65, 84, 52, 71, 30, 21, 97, 8, 40]
Task: 2, Labels:[7, 73, 49, 6, 22, 87, 70, 3, 62, 4]
Task: 3, Labels:[43, 61, 91, 50, 66, 44, 5, 1, 95, 75]
Task: 4, Labels:[85, 13, 63, 56, 15, 67, 14, 36, 28, 29]
Task: 5, Labels:[89, 99, 53, 18, 64, 72, 69, 41, 82, 54]
Task: 6, Labels:[46, 23, 47, 59, 25, 83, 35, 76, 33, 34]
Task: 7, Labels:[57, 16, 51, 12, 93, 68, 24, 2, 31, 10]
Task: 8, Labels:[20, 38, 88, 11, 96, 78, 60, 45, 92, 17]
Task: 9, Labels:[48, 55, 86, 81, 79, 42, 98, 26, 19, 80]
buffer has 2000 slots
-----------run 12 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.257377, running train acc: 0.150
==>>> it: 1, mem avg. loss: 2.783452, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.502423, running train acc: 0.181
==>>> it: 101, mem avg. loss: 2.316297, running mem acc: 0.235
==>>> it: 201, avg. loss: 2.262949, running train acc: 0.232
==>>> it: 201, mem avg. loss: 2.118018, running mem acc: 0.266
==>>> it: 301, avg. loss: 2.116876, running train acc: 0.266
==>>> it: 301, mem avg. loss: 1.973954, running mem acc: 0.310
==>>> it: 401, avg. loss: 2.026005, running train acc: 0.296
==>>> it: 401, mem avg. loss: 1.867314, running mem acc: 0.343
[0.435 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.249852, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.524688, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.890226, running train acc: 0.195
==>>> it: 101, mem avg. loss: 2.273713, running mem acc: 0.372
==>>> it: 201, avg. loss: 2.403394, running train acc: 0.301
==>>> it: 201, mem avg. loss: 2.070942, running mem acc: 0.400
==>>> it: 301, avg. loss: 2.189349, running train acc: 0.355
==>>> it: 301, mem avg. loss: 1.883119, running mem acc: 0.441
==>>> it: 401, avg. loss: 2.043502, running train acc: 0.387
==>>> it: 401, mem avg. loss: 1.756742, running mem acc: 0.471
[0.092 0.575 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.655917, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.681748, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.780860, running train acc: 0.248
==>>> it: 101, mem avg. loss: 1.379903, running mem acc: 0.621
==>>> it: 201, avg. loss: 2.323989, running train acc: 0.316
==>>> it: 201, mem avg. loss: 1.223047, running mem acc: 0.653
==>>> it: 301, avg. loss: 2.114620, running train acc: 0.354
==>>> it: 301, mem avg. loss: 1.105980, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.012818, running train acc: 0.368
==>>> it: 401, mem avg. loss: 1.037005, running mem acc: 0.703
[0.108 0.373 0.427 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.188715, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.916224, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.789864, running train acc: 0.253
==>>> it: 101, mem avg. loss: 1.270348, running mem acc: 0.674
==>>> it: 201, avg. loss: 2.354775, running train acc: 0.328
==>>> it: 201, mem avg. loss: 1.214207, running mem acc: 0.667
==>>> it: 301, avg. loss: 2.218764, running train acc: 0.349
==>>> it: 301, mem avg. loss: 1.159115, running mem acc: 0.673
==>>> it: 401, avg. loss: 2.084206, running train acc: 0.375
==>>> it: 401, mem avg. loss: 1.086726, running mem acc: 0.694
[0.051 0.201 0.186 0.462 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.321850, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.785401, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.781317, running train acc: 0.235
==>>> it: 101, mem avg. loss: 1.238444, running mem acc: 0.679
==>>> it: 201, avg. loss: 2.332697, running train acc: 0.324
==>>> it: 201, mem avg. loss: 1.149954, running mem acc: 0.688
==>>> it: 301, avg. loss: 2.138244, running train acc: 0.362
==>>> it: 301, mem avg. loss: 1.035216, running mem acc: 0.712
==>>> it: 401, avg. loss: 2.037223, running train acc: 0.388
==>>> it: 401, mem avg. loss: 0.967758, running mem acc: 0.730
[0.027 0.181 0.156 0.206 0.528 0. 0. 0. 0. 0. ]
-----------run 12 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.232672, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.989838, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.614954, running train acc: 0.331
==>>> it: 101, mem avg. loss: 1.117936, running mem acc: 0.716
==>>> it: 201, avg. loss: 2.137420, running train acc: 0.417
==>>> it: 201, mem avg. loss: 1.003794, running mem acc: 0.733
==>>> it: 301, avg. loss: 1.956120, running train acc: 0.449
==>>> it: 301, mem avg. loss: 0.911681, running mem acc: 0.754
==>>> it: 401, avg. loss: 1.848569, running train acc: 0.469
==>>> it: 401, mem avg. loss: 0.843964, running mem acc: 0.773
[0.04 0.173 0.123 0.194 0.142 0.563 0. 0. 0. 0. ]
-----------run 12 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.095715, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.014509, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.518998, running train acc: 0.329
==>>> it: 101, mem avg. loss: 1.027070, running mem acc: 0.724
==>>> it: 201, avg. loss: 2.110429, running train acc: 0.404
==>>> it: 201, mem avg. loss: 0.887483, running mem acc: 0.755
==>>> it: 301, avg. loss: 1.918013, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.797369, running mem acc: 0.776
==>>> it: 401, avg. loss: 1.807105, running train acc: 0.461
==>>> it: 401, mem avg. loss: 0.741815, running mem acc: 0.789
[0.034 0.126 0.095 0.157 0.113 0.239 0.581 0. 0. 0. ]
-----------run 12 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.068079, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.698236, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.507452, running train acc: 0.343
==>>> it: 101, mem avg. loss: 0.964361, running mem acc: 0.729
==>>> it: 201, avg. loss: 2.038898, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.844381, running mem acc: 0.760
==>>> it: 301, avg. loss: 1.864232, running train acc: 0.468
==>>> it: 301, mem avg. loss: 0.757455, running mem acc: 0.780
==>>> it: 401, avg. loss: 1.742462, running train acc: 0.490
==>>> it: 401, mem avg. loss: 0.679897, running mem acc: 0.803
[0.024 0.174 0.103 0.115 0.099 0.173 0.173 0.618 0. 0. ]
-----------run 12 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.783829, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.335765, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.388472, running train acc: 0.372
==>>> it: 101, mem avg. loss: 0.975477, running mem acc: 0.746
==>>> it: 201, avg. loss: 1.940569, running train acc: 0.451
==>>> it: 201, mem avg. loss: 0.792848, running mem acc: 0.789
==>>> it: 301, avg. loss: 1.772970, running train acc: 0.485
==>>> it: 301, mem avg. loss: 0.699815, running mem acc: 0.811
==>>> it: 401, avg. loss: 1.669511, running train acc: 0.502
==>>> it: 401, mem avg. loss: 0.619996, running mem acc: 0.832
[0.035 0.119 0.1 0.13 0.102 0.183 0.12 0.213 0.597 0. ]
-----------run 12 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.419349, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.647834, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.678536, running train acc: 0.274
==>>> it: 101, mem avg. loss: 0.930434, running mem acc: 0.763
==>>> it: 201, avg. loss: 2.271431, running train acc: 0.340
==>>> it: 201, mem avg. loss: 0.784307, running mem acc: 0.794
==>>> it: 301, avg. loss: 2.074034, running train acc: 0.379
==>>> it: 301, mem avg. loss: 0.713420, running mem acc: 0.806
==>>> it: 401, avg. loss: 1.992376, running train acc: 0.399
==>>> it: 401, mem avg. loss: 0.661524, running mem acc: 0.819
[0.035 0.114 0.098 0.116 0.1 0.166 0.112 0.135 0.184 0.497]
-----------run 12-----------avg_end_acc 0.1557-----------train time 2608.7608783245087
Task: 0, Labels:[81, 62, 48, 54, 92, 69, 44, 17, 7, 40]
Task: 1, Labels:[68, 96, 75, 97, 56, 27, 59, 95, 46, 86]
Task: 2, Labels:[3, 37, 74, 2, 11, 26, 98, 45, 67, 23]
Task: 3, Labels:[42, 4, 25, 77, 1, 83, 9, 14, 10, 89]
Task: 4, Labels:[52, 29, 41, 70, 85, 65, 43, 61, 72, 38]
Task: 5, Labels:[39, 82, 57, 63, 15, 5, 79, 21, 47, 58]
Task: 6, Labels:[28, 91, 24, 13, 35, 49, 88, 50, 55, 33]
Task: 7, Labels:[12, 36, 16, 90, 34, 71, 78, 22, 87, 53]
Task: 8, Labels:[80, 94, 32, 19, 66, 0, 6, 64, 30, 31]
Task: 9, Labels:[60, 93, 18, 20, 84, 51, 99, 76, 73, 8]
buffer has 2000 slots
-----------run 13 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 3.959098, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.266457, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.693590, running train acc: 0.185
==>>> it: 101, mem avg. loss: 2.463321, running mem acc: 0.245
==>>> it: 201, avg. loss: 2.328282, running train acc: 0.242
==>>> it: 201, mem avg. loss: 2.178192, running mem acc: 0.280
==>>> it: 301, avg. loss: 2.130127, running train acc: 0.288
==>>> it: 301, mem avg. loss: 1.973371, running mem acc: 0.321
==>>> it: 401, avg. loss: 2.009217, running train acc: 0.317
==>>> it: 401, mem avg. loss: 1.844590, running mem acc: 0.361
[0.446 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.383232, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.690739, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.748249, running train acc: 0.225
==>>> it: 101, mem avg. loss: 2.083563, running mem acc: 0.405
==>>> it: 201, avg. loss: 2.324604, running train acc: 0.308
==>>> it: 201, mem avg. loss: 1.861153, running mem acc: 0.453
==>>> it: 301, avg. loss: 2.164358, running train acc: 0.343
==>>> it: 301, mem avg. loss: 1.660083, running mem acc: 0.502
==>>> it: 401, avg. loss: 2.024073, running train acc: 0.374
==>>> it: 401, mem avg. loss: 1.485873, running mem acc: 0.551
[0.12 0.523 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.947900, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.242770, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.880012, running train acc: 0.190
==>>> it: 101, mem avg. loss: 1.205163, running mem acc: 0.685
==>>> it: 201, avg. loss: 2.503846, running train acc: 0.245
==>>> it: 201, mem avg. loss: 1.115638, running mem acc: 0.701
==>>> it: 301, avg. loss: 2.309515, running train acc: 0.283
==>>> it: 301, mem avg. loss: 1.046339, running mem acc: 0.710
==>>> it: 401, avg. loss: 2.215081, running train acc: 0.300
==>>> it: 401, mem avg. loss: 0.993009, running mem acc: 0.721
[0.089 0.219 0.361 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.650943, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.742707, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.893240, running train acc: 0.215
==>>> it: 101, mem avg. loss: 1.123824, running mem acc: 0.713
==>>> it: 201, avg. loss: 2.500915, running train acc: 0.257
==>>> it: 201, mem avg. loss: 1.129604, running mem acc: 0.704
==>>> it: 301, avg. loss: 2.352127, running train acc: 0.288
==>>> it: 301, mem avg. loss: 1.108346, running mem acc: 0.703
==>>> it: 401, avg. loss: 2.267806, running train acc: 0.305
==>>> it: 401, mem avg. loss: 1.068621, running mem acc: 0.710
[0.088 0.2 0.063 0.454 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.072036, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.593135, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.691800, running train acc: 0.305
==>>> it: 101, mem avg. loss: 1.317419, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.278079, running train acc: 0.364
==>>> it: 201, mem avg. loss: 1.262957, running mem acc: 0.662
==>>> it: 301, avg. loss: 2.105722, running train acc: 0.396
==>>> it: 301, mem avg. loss: 1.195606, running mem acc: 0.675
==>>> it: 401, avg. loss: 1.982915, running train acc: 0.418
==>>> it: 401, mem avg. loss: 1.096567, running mem acc: 0.700
[0.102 0.202 0.059 0.148 0.516 0. 0. 0. 0. 0. ]
-----------run 13 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.690141, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.262250, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.636736, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.208325, running mem acc: 0.681
==>>> it: 201, avg. loss: 2.190698, running train acc: 0.362
==>>> it: 201, mem avg. loss: 1.048665, running mem acc: 0.709
==>>> it: 301, avg. loss: 2.004990, running train acc: 0.411
==>>> it: 301, mem avg. loss: 0.950227, running mem acc: 0.731
==>>> it: 401, avg. loss: 1.902089, running train acc: 0.433
==>>> it: 401, mem avg. loss: 0.857150, running mem acc: 0.758
[0.08 0.157 0.028 0.071 0.217 0.6 0. 0. 0. 0. ]
-----------run 13 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.231692, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.989456, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.566033, running train acc: 0.331
==>>> it: 101, mem avg. loss: 1.100000, running mem acc: 0.723
==>>> it: 201, avg. loss: 2.164060, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.930848, running mem acc: 0.759
==>>> it: 301, avg. loss: 1.983517, running train acc: 0.429
==>>> it: 301, mem avg. loss: 0.852066, running mem acc: 0.776
==>>> it: 401, avg. loss: 1.882589, running train acc: 0.445
==>>> it: 401, mem avg. loss: 0.770192, running mem acc: 0.796
[0.077 0.113 0.035 0.078 0.219 0.249 0.557 0. 0. 0. ]
-----------run 13 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.021660, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.429346, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.598072, running train acc: 0.306
==>>> it: 101, mem avg. loss: 1.017025, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.164167, running train acc: 0.376
==>>> it: 201, mem avg. loss: 0.884785, running mem acc: 0.746
==>>> it: 301, avg. loss: 1.969117, running train acc: 0.413
==>>> it: 301, mem avg. loss: 0.791476, running mem acc: 0.774
==>>> it: 401, avg. loss: 1.858525, running train acc: 0.440
==>>> it: 401, mem avg. loss: 0.713248, running mem acc: 0.797
[0.056 0.122 0.021 0.058 0.192 0.176 0.168 0.545 0. 0. ]
-----------run 13 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.307083, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.590237, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.552793, running train acc: 0.325
==>>> it: 101, mem avg. loss: 1.029522, running mem acc: 0.727
==>>> it: 201, avg. loss: 2.162954, running train acc: 0.374
==>>> it: 201, mem avg. loss: 0.857149, running mem acc: 0.767
==>>> it: 301, avg. loss: 2.005573, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.758802, running mem acc: 0.793
==>>> it: 401, avg. loss: 1.899629, running train acc: 0.419
==>>> it: 401, mem avg. loss: 0.684734, running mem acc: 0.815
[0.067 0.045 0.017 0.051 0.174 0.152 0.159 0.169 0.508 0. ]
-----------run 13 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.360004, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.396177, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.556236, running train acc: 0.322
==>>> it: 101, mem avg. loss: 1.036353, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.103134, running train acc: 0.400
==>>> it: 201, mem avg. loss: 0.859243, running mem acc: 0.763
==>>> it: 301, avg. loss: 1.893402, running train acc: 0.445
==>>> it: 301, mem avg. loss: 0.759684, running mem acc: 0.796
==>>> it: 401, avg. loss: 1.786111, running train acc: 0.472
==>>> it: 401, mem avg. loss: 0.685444, running mem acc: 0.816
[0.053 0.057 0.033 0.041 0.183 0.132 0.102 0.107 0.174 0.549]
-----------run 13-----------avg_end_acc 0.1431-----------train time 2603.259247303009
Task: 0, Labels:[25, 97, 73, 51, 12, 90, 16, 84, 19, 48]
Task: 1, Labels:[77, 30, 81, 60, 11, 95, 39, 2, 64, 62]
Task: 2, Labels:[69, 42, 35, 71, 80, 78, 24, 98, 44, 32]
Task: 3, Labels:[85, 13, 18, 74, 34, 14, 57, 9, 86, 87]
Task: 4, Labels:[43, 27, 1, 66, 88, 82, 68, 33, 5, 22]
Task: 5, Labels:[26, 50, 21, 41, 93, 3, 23, 91, 70, 8]
Task: 6, Labels:[0, 94, 54, 61, 59, 92, 89, 49, 79, 58]
Task: 7, Labels:[38, 96, 20, 4, 99, 53, 40, 10, 46, 83]
Task: 8, Labels:[36, 75, 67, 7, 28, 63, 56, 6, 17, 47]
Task: 9, Labels:[15, 72, 45, 55, 37, 65, 76, 52, 29, 31]
buffer has 2000 slots
-----------run 14 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.358658, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.967481, running mem acc: 0.200
==>>> it: 101, avg. loss: 2.582929, running train acc: 0.204
==>>> it: 101, mem avg. loss: 2.458889, running mem acc: 0.224
==>>> it: 201, avg. loss: 2.329128, running train acc: 0.239
==>>> it: 201, mem avg. loss: 2.239299, running mem acc: 0.244
==>>> it: 301, avg. loss: 2.181188, running train acc: 0.271
==>>> it: 301, mem avg. loss: 2.051827, running mem acc: 0.303
==>>> it: 401, avg. loss: 2.096934, running train acc: 0.291
==>>> it: 401, mem avg. loss: 1.947911, running mem acc: 0.331
[0.423 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.024535, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.048299, running mem acc: 0.450
==>>> it: 101, avg. loss: 2.826031, running train acc: 0.204
==>>> it: 101, mem avg. loss: 2.353848, running mem acc: 0.327
==>>> it: 201, avg. loss: 2.437938, running train acc: 0.278
==>>> it: 201, mem avg. loss: 2.189060, running mem acc: 0.341
==>>> it: 301, avg. loss: 2.248351, running train acc: 0.308
==>>> it: 301, mem avg. loss: 1.996692, running mem acc: 0.390
==>>> it: 401, avg. loss: 2.120198, running train acc: 0.336
==>>> it: 401, mem avg. loss: 1.804357, running mem acc: 0.448
[0.145 0.451 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.638925, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.187967, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.867677, running train acc: 0.204
==>>> it: 101, mem avg. loss: 1.403636, running mem acc: 0.604
==>>> it: 201, avg. loss: 2.459344, running train acc: 0.278
==>>> it: 201, mem avg. loss: 1.313771, running mem acc: 0.622
==>>> it: 301, avg. loss: 2.260878, running train acc: 0.315
==>>> it: 301, mem avg. loss: 1.247918, running mem acc: 0.637
==>>> it: 401, avg. loss: 2.155340, running train acc: 0.332
==>>> it: 401, mem avg. loss: 1.185807, running mem acc: 0.651
[0.061 0.214 0.453 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.759886, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.049620, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.811434, running train acc: 0.250
==>>> it: 101, mem avg. loss: 1.392834, running mem acc: 0.629
==>>> it: 201, avg. loss: 2.435460, running train acc: 0.305
==>>> it: 201, mem avg. loss: 1.341295, running mem acc: 0.629
==>>> it: 301, avg. loss: 2.286328, running train acc: 0.336
==>>> it: 301, mem avg. loss: 1.290752, running mem acc: 0.638
==>>> it: 401, avg. loss: 2.180328, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.215492, running mem acc: 0.659
[0.067 0.2 0.239 0.469 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.949551, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.437599, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.667984, running train acc: 0.292
==>>> it: 101, mem avg. loss: 1.302220, running mem acc: 0.655
==>>> it: 201, avg. loss: 2.226800, running train acc: 0.364
==>>> it: 201, mem avg. loss: 1.151142, running mem acc: 0.684
==>>> it: 301, avg. loss: 2.017009, running train acc: 0.405
==>>> it: 301, mem avg. loss: 1.051614, running mem acc: 0.712
==>>> it: 401, avg. loss: 1.891178, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.966127, running mem acc: 0.732
[0.044 0.198 0.191 0.167 0.564 0. 0. 0. 0. 0. ]
-----------run 14 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.930771, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.756248, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.698502, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.182181, running mem acc: 0.705
==>>> it: 201, avg. loss: 2.262118, running train acc: 0.368
==>>> it: 201, mem avg. loss: 1.038262, running mem acc: 0.727
==>>> it: 301, avg. loss: 2.108587, running train acc: 0.388
==>>> it: 301, mem avg. loss: 0.931342, running mem acc: 0.750
==>>> it: 401, avg. loss: 1.990190, running train acc: 0.414
==>>> it: 401, mem avg. loss: 0.868667, running mem acc: 0.764
[0.057 0.096 0.195 0.115 0.235 0.553 0. 0. 0. 0. ]
-----------run 14 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.563370, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.740415, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.408910, running train acc: 0.371
==>>> it: 101, mem avg. loss: 1.145265, running mem acc: 0.688
==>>> it: 201, avg. loss: 1.910146, running train acc: 0.470
==>>> it: 201, mem avg. loss: 0.990742, running mem acc: 0.717
==>>> it: 301, avg. loss: 1.719402, running train acc: 0.509
==>>> it: 301, mem avg. loss: 0.845708, running mem acc: 0.755
==>>> it: 401, avg. loss: 1.598108, running train acc: 0.536
==>>> it: 401, mem avg. loss: 0.766205, running mem acc: 0.776
[0.04 0.148 0.198 0.087 0.179 0.161 0.644 0. 0. 0. ]
-----------run 14 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.139684, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.585182, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.508467, running train acc: 0.323
==>>> it: 101, mem avg. loss: 0.913936, running mem acc: 0.759
==>>> it: 201, avg. loss: 2.100855, running train acc: 0.391
==>>> it: 201, mem avg. loss: 0.813265, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.914420, running train acc: 0.432
==>>> it: 301, mem avg. loss: 0.730208, running mem acc: 0.801
==>>> it: 401, avg. loss: 1.809624, running train acc: 0.453
==>>> it: 401, mem avg. loss: 0.655646, running mem acc: 0.821
[0.056 0.146 0.15 0.073 0.177 0.138 0.293 0.539 0. 0. ]
-----------run 14 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.630907, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.190936, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.388817, running train acc: 0.379
==>>> it: 101, mem avg. loss: 0.854647, running mem acc: 0.782
==>>> it: 201, avg. loss: 1.940800, running train acc: 0.453
==>>> it: 201, mem avg. loss: 0.743290, running mem acc: 0.808
==>>> it: 301, avg. loss: 1.719147, running train acc: 0.499
==>>> it: 301, mem avg. loss: 0.667775, running mem acc: 0.822
==>>> it: 401, avg. loss: 1.599522, running train acc: 0.530
==>>> it: 401, mem avg. loss: 0.605869, running mem acc: 0.836
[0.047 0.099 0.125 0.041 0.161 0.139 0.198 0.145 0.639 0. ]
-----------run 14 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.956848, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.186578, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.700618, running train acc: 0.280
==>>> it: 101, mem avg. loss: 0.866148, running mem acc: 0.784
==>>> it: 201, avg. loss: 2.265596, running train acc: 0.351
==>>> it: 201, mem avg. loss: 0.752482, running mem acc: 0.802
==>>> it: 301, avg. loss: 2.060721, running train acc: 0.391
==>>> it: 301, mem avg. loss: 0.658484, running mem acc: 0.825
==>>> it: 401, avg. loss: 1.955261, running train acc: 0.411
==>>> it: 401, mem avg. loss: 0.606845, running mem acc: 0.841
[0.033 0.127 0.122 0.044 0.137 0.119 0.182 0.119 0.196 0.492]
-----------run 14-----------avg_end_acc 0.1571-----------train time 2599.750324487686
----------- Total 15 run: 38257.48938179016s -----------
----------- Avg_End_Acc (0.15658666666666665, 0.0046353895248637655) Avg_End_Fgt (0.37292666666666663, 0.005743844949978044) Avg_Acc (0.24635227248677247, 0.00827137818566395) Avg_Bwtp (0.0, 0.0) Avg_Fwt (0.0, 0.0)-----------

Reservoir update potential issue

As I was reading the code, I noticed that in utils/buffer/reservoir_update.py, when the IF statement on line 23 fails (but the one on line 13 does not), some examples are added to the last empty slots of the buffer, while the rest of the input batch may or may not be added depending on the sampling. However, I believe the RETURN statements on lines 44 and 61 are supposed to return the indices of all examples that were just added, yet they only return those added in the "sampling" part of the code. Shouldn't the examples placed in the last positions of the buffer (i.e., those whose indices can be computed the same way as the variable on line 24) also be returned?
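For concreteness, here is a minimal sketch of a reservoir update that returns the indices of all newly placed examples, including those that fill the remaining empty slots. The function name, tensor layout, and counter handling here are assumptions for illustration, not the repository's actual implementation:

import torch

def reservoir_update(buffer_x, buffer_y, n_seen, x, y):
    # Hypothetical reservoir update that collects the indices of ALL
    # examples placed in the buffer: both those that fill the last
    # empty slots and those placed by random sampling.
    mem_size = buffer_x.size(0)
    filled_idx = []
    for i in range(x.size(0)):
        if n_seen < mem_size:
            # Buffer not full yet: place the example in the next free slot.
            buffer_x[n_seen] = x[i]
            buffer_y[n_seen] = y[i]
            filled_idx.append(n_seen)
        else:
            # Buffer full: keep the example with probability
            # mem_size / (n_seen + 1), overwriting a uniformly random slot.
            j = torch.randint(0, n_seen + 1, (1,)).item()
            if j < mem_size:
                buffer_x[j] = x[i]
                buffer_y[j] = y[i]
                filled_idx.append(j)
        n_seen += 1
    return n_seen, filled_idx

Under this reading, the caller would receive filled_idx covering both code paths, which is what the question above suggests the RETURN statements should do.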

runtime warning


Hello, I followed the instructions in the README to run your code, and the error above occurred. What could be the reason?

Parameter update of Experience Replay

Hi. I have read your survey and have been working with your code; both are beautiful work. I have a question about the parameter update of Experience Replay, which uses a batch of current input data (batch_x) and a batch of memory samples (batch_mem).

Quote from the paper (ER part on page 16): "ER simply trains the model with the incoming and memory mini-batches together using the cross-entropy loss". My first reading of this is that the model's parameters are updated with a combined batch (batch_x + batch_mem), computing the loss and backpropagating in a single step. In the code, however, the update is realized with two backward steps, one computed from batch_x and one from batch_mem.
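To make the two schemes concrete, here is a hypothetical sketch of both updates. It is a simplified reading: the function names and the exact placement of the optimizer step are assumptions, not the repository's code.

import torch
import torch.nn.functional as F

def combined_update(model, opt, batch_x, batch_y, mem_x, mem_y):
    # One loss over the concatenated batch, one backward pass, one step.
    x = torch.cat([batch_x, mem_x])
    y = torch.cat([batch_y, mem_y])
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

def separate_update(model, opt, batch_x, batch_y, mem_x, mem_y):
    # Two backward passes, one on the incoming batch and one on the
    # memory batch; gradients are accumulated before a single step.
    opt.zero_grad()
    F.cross_entropy(model(batch_x), batch_y).backward()
    F.cross_entropy(model(mem_x), mem_y).backward()
    opt.step()

Note that the two are not equivalent: the separate scheme weights the two mini-batches equally regardless of their sizes (a sum of two per-batch means), while the combined scheme averages over all samples, and the batch-norm statistics also differ because the forward passes are separate.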

I think the former may be better, especially when using multiple mem_iters. Is there any special consideration behind updating the model with batch_x and batch_mem separately? Which method did the original Experience Replay work use?

Looking forward to your reply. Thanks.

Question about distillation loss implementation of ICARL

Hi,

I found that in your ICARL implementation there is no temperature hyper-parameter "T" used to soften the output distribution for distillation, as is done in other work such as LwF and BiC, where T=2 is commonly suggested. Could you provide some detail on how you implemented the distillation loss term? Thanks!
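For reference, a common form of the temperature-softened distillation term, as used in LwF-style methods; this is a generic sketch rather than this repository's implementation:

import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    # Soft cross-entropy between the old model's softened outputs and the
    # new model's softened outputs; the T**2 factor keeps gradient
    # magnitudes comparable across temperatures (Hinton et al., 2015).
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return -(p_old * log_p_new).sum(dim=1).mean() * (T ** 2)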
