Comments (9)

RaptorMai commented on June 3, 2024

Hi,

Thank you so much for your interest in our project.
Actually, there was a small typo in the implementation; I have pushed a fix.
Thanks again for pointing out the issue.

YananGu commented on June 3, 2024

Thanks for your help! I also have a question about cifar10: what value should the hyperparameter "n_smp_cls" take on cifar10? For cifar100 it is 1.5, and it seems inappropriate to reuse 1.5 on cifar10. If it is convenient, could you also provide a command for training on cifar10? Thank you very much!

RaptorMai commented on June 3, 2024

For CIFAR10, n_smp_cls=9, num_k=3.

YananGu commented on June 3, 2024

Hi, I am confused: do you mean that n_smp_cls=9 is for cifar100, not for cifar10?

RaptorMai commented on June 3, 2024

My bad, I have updated my reply. You should use n_smp_cls=9, num_k=3 for cifar10.
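
For completeness, a full cifar10 command along these lines should work (the flags other than n_smp_cls and k mirror the run posted later in this thread; mem_size and num_task are per-experiment choices rather than fixed values):

python general_main.py --data cifar10 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 200 --aser_type asvm --n_smp_cls 9 --k 3 --num_task 5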

YananGu commented on June 3, 2024

Got it, thanks!

YananGu commented on June 3, 2024

Sorry to disturb you again. I ran the code on cifar10 with "python general_main.py --data cifar10 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 200 --aser_type asvm --n_smp_cls 9 --k 3 --num_task 5", but I cannot reach the performance reported for ASER. The performance of other methods such as MIR, GSS, and ER is also lower than reported in the paper. It is worth mentioning that on cifar100 I did reproduce the performance reported in the paper.
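
For reference, the avg_end_acc printed at the end of each run below appears to be the mean of that run's final per-task accuracy vector (the last bracketed line before the summary); a minimal check against run 0, assuming that interpretation:

import numpy as np

# Final per-task test accuracies printed for run 0, just before its summary line
end_acc = np.array([0.0065, 0.0055, 0.0575, 0.0385, 0.9205])
print(end_acc.mean())  # 0.2057 -- matches "avg_end_acc 0.2057"
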
My experiment log on cifar10 is as follows.

(online-learning) user@gpu-20228:~/online/online-continual-learning-main$ python general_main.py --data cifar10 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 200 --aser_type asvm --n_smp_cls 9 --k 3 --num_task 5
Namespace(agent='ER', alpha=0.9, aser_type='asvm', batch=10, cl_type='nc', classifier_chill=0.01, clip=10.0, cuda=True, cumulative_delta=False, data='cifar10', epoch=1, eps_mem_batch=10, error_analysis=False, fisher_update_after=50, fix_order=False, gss_batch_size=10, gss_mem_strength=10, k=3, kd_trick=False, kd_trick_star=False, labels_trick=False, lambda_=100, learning_rate=0.1, log_alpha=-300, mem_epoch=70, mem_iters=1, mem_size=200, min_delta=0.0, minlr=0.0005, n_smp_cls=9.0, nmc_trick=False, ns_factor=(0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6), ns_task=(1, 1, 2, 2, 2, 2), ns_type='noise', num_runs=15, num_runs_val=3, num_tasks=5, num_val=3, optimizer='SGD', patience=0, plot_sample=False, retrieve='ASER', review_trick=False, seed=0, separated_softmax=False, stm_capacity=1000, subsample=50, test_batch=128, update='ASER', val_size=0.1, verbose=True, weight_decay=0)
Setting up data stream
Files already downloaded and verified
Files already downloaded and verified
data setup time: 2.0761542320251465
Task: 0, Labels:[2, 8]
Task: 1, Labels:[4, 9]
Task: 2, Labels:[1, 6]
Task: 3, Labels:[7, 3]
Task: 4, Labels:[0, 5]
buffer has 200 slots
-----------run 0 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.103998, running train acc: 0.150
==>>> it: 1, mem avg. loss: 0.481654, running mem acc: 0.900
==>>> it: 101, avg. loss: 0.547180, running train acc: 0.799
==>>> it: 101, mem avg. loss: 0.630051, running mem acc: 0.761
==>>> it: 201, avg. loss: 0.503702, running train acc: 0.815
==>>> it: 201, mem avg. loss: 0.546003, running mem acc: 0.786
==>>> it: 301, avg. loss: 0.462222, running train acc: 0.830
==>>> it: 301, mem avg. loss: 0.497363, running mem acc: 0.800
==>>> it: 401, avg. loss: 0.433031, running train acc: 0.841
==>>> it: 401, mem avg. loss: 0.477170, running mem acc: 0.808
==>>> it: 501, avg. loss: 0.421850, running train acc: 0.843
==>>> it: 501, mem avg. loss: 0.470170, running mem acc: 0.811
==>>> it: 601, avg. loss: 0.411663, running train acc: 0.848
==>>> it: 601, mem avg. loss: 0.454274, running mem acc: 0.816
==>>> it: 701, avg. loss: 0.400061, running train acc: 0.854
==>>> it: 701, mem avg. loss: 0.429815, running mem acc: 0.826
==>>> it: 801, avg. loss: 0.392237, running train acc: 0.857
==>>> it: 801, mem avg. loss: 0.415428, running mem acc: 0.834
==>>> it: 901, avg. loss: 0.387698, running train acc: 0.860
==>>> it: 901, mem avg. loss: 0.409943, running mem acc: 0.834
[0.9015 0. 0. 0. 0. ]
-----------run 0 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.254006, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.033962, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.584611, running train acc: 0.496
==>>> it: 101, mem avg. loss: 1.069966, running mem acc: 0.561
==>>> it: 201, avg. loss: 1.451314, running train acc: 0.560
==>>> it: 201, mem avg. loss: 1.179563, running mem acc: 0.534
==>>> it: 301, avg. loss: 1.438917, running train acc: 0.578
==>>> it: 301, mem avg. loss: 1.303055, running mem acc: 0.529
==>>> it: 401, avg. loss: 1.441481, running train acc: 0.594
==>>> it: 401, mem avg. loss: 1.361716, running mem acc: 0.526
==>>> it: 501, avg. loss: 1.400436, running train acc: 0.608
==>>> it: 501, mem avg. loss: 1.444916, running mem acc: 0.511
==>>> it: 601, avg. loss: 1.351038, running train acc: 0.625
==>>> it: 601, mem avg. loss: 1.487159, running mem acc: 0.503
==>>> it: 701, avg. loss: 1.321809, running train acc: 0.634
==>>> it: 701, mem avg. loss: 1.513744, running mem acc: 0.499
==>>> it: 801, avg. loss: 1.294017, running train acc: 0.645
==>>> it: 801, mem avg. loss: 1.548219, running mem acc: 0.497
==>>> it: 901, avg. loss: 1.294306, running train acc: 0.649
==>>> it: 901, mem avg. loss: 1.554349, running mem acc: 0.495
[0.0875 0.936 0. 0. 0. ]
-----------run 0 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.972395, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.303956, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.215210, running train acc: 0.556
==>>> it: 101, mem avg. loss: 1.462585, running mem acc: 0.477
==>>> it: 201, avg. loss: 1.051999, running train acc: 0.624
==>>> it: 201, mem avg. loss: 1.658221, running mem acc: 0.480
==>>> it: 301, avg. loss: 0.960982, running train acc: 0.656
==>>> it: 301, mem avg. loss: 1.743931, running mem acc: 0.482
==>>> it: 401, avg. loss: 0.901165, running train acc: 0.683
==>>> it: 401, mem avg. loss: 1.740881, running mem acc: 0.504
==>>> it: 501, avg. loss: 0.843433, running train acc: 0.705
==>>> it: 501, mem avg. loss: 1.747646, running mem acc: 0.508
==>>> it: 601, avg. loss: 0.827923, running train acc: 0.714
==>>> it: 601, mem avg. loss: 1.801579, running mem acc: 0.504
==>>> it: 701, avg. loss: 0.796928, running train acc: 0.726
==>>> it: 701, mem avg. loss: 1.778031, running mem acc: 0.512
==>>> it: 801, avg. loss: 0.778125, running train acc: 0.735
==>>> it: 801, mem avg. loss: 1.773922, running mem acc: 0.517
==>>> it: 901, avg. loss: 0.762766, running train acc: 0.742
==>>> it: 901, mem avg. loss: 1.780658, running mem acc: 0.518
[0.073 0.0435 0.961 0. 0. ]
-----------run 0 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.195302, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.289266, running mem acc: 0.500
==>>> it: 101, avg. loss: 1.339623, running train acc: 0.492
==>>> it: 101, mem avg. loss: 1.117245, running mem acc: 0.588
==>>> it: 201, avg. loss: 1.042714, running train acc: 0.602
==>>> it: 201, mem avg. loss: 1.086941, running mem acc: 0.618
==>>> it: 301, avg. loss: 0.947336, running train acc: 0.630
==>>> it: 301, mem avg. loss: 1.080081, running mem acc: 0.637
==>>> it: 401, avg. loss: 0.875232, running train acc: 0.656
==>>> it: 401, mem avg. loss: 1.160231, running mem acc: 0.633
==>>> it: 501, avg. loss: 0.831104, running train acc: 0.674
==>>> it: 501, mem avg. loss: 1.182915, running mem acc: 0.630
==>>> it: 601, avg. loss: 0.798324, running train acc: 0.685
==>>> it: 601, mem avg. loss: 1.163236, running mem acc: 0.640
==>>> it: 701, avg. loss: 0.768920, running train acc: 0.697
==>>> it: 701, mem avg. loss: 1.199998, running mem acc: 0.630
==>>> it: 801, avg. loss: 0.748054, running train acc: 0.704
==>>> it: 801, mem avg. loss: 1.236799, running mem acc: 0.624
==>>> it: 901, avg. loss: 0.734918, running train acc: 0.710
==>>> it: 901, mem avg. loss: 1.281958, running mem acc: 0.615
[0.201 0.024 0.213 0.853 0. ]
-----------run 0 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.286205, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.203197, running mem acc: 0.650
==>>> it: 101, avg. loss: 1.025613, running train acc: 0.661
==>>> it: 101, mem avg. loss: 1.052566, running mem acc: 0.624
==>>> it: 201, avg. loss: 0.769134, running train acc: 0.735
==>>> it: 201, mem avg. loss: 1.107627, running mem acc: 0.621
==>>> it: 301, avg. loss: 0.651043, running train acc: 0.774
==>>> it: 301, mem avg. loss: 1.083226, running mem acc: 0.635
==>>> it: 401, avg. loss: 0.579522, running train acc: 0.797
==>>> it: 401, mem avg. loss: 1.089635, running mem acc: 0.640
==>>> it: 501, avg. loss: 0.546942, running train acc: 0.806
==>>> it: 501, mem avg. loss: 1.079575, running mem acc: 0.652
==>>> it: 601, avg. loss: 0.510559, running train acc: 0.817
==>>> it: 601, mem avg. loss: 1.080131, running mem acc: 0.657
==>>> it: 701, avg. loss: 0.484567, running train acc: 0.828
==>>> it: 701, mem avg. loss: 1.070176, running mem acc: 0.662
==>>> it: 801, avg. loss: 0.471664, running train acc: 0.835
==>>> it: 801, mem avg. loss: 1.073243, running mem acc: 0.662
==>>> it: 901, avg. loss: 0.462835, running train acc: 0.837
==>>> it: 901, mem avg. loss: 1.072045, running mem acc: 0.663
[0.0065 0.0055 0.0575 0.0385 0.9205]
-----------run 0-----------avg_end_acc 0.2057-----------train time 359.7401502132416
Task: 0, Labels:[2, 5]
Task: 1, Labels:[6, 1]
Task: 2, Labels:[7, 0]
Task: 3, Labels:[3, 8]
Task: 4, Labels:[9, 4]
buffer has 200 slots
-----------run 1 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.547592, running train acc: 0.350
==>>> it: 1, mem avg. loss: 0.819113, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.927501, running train acc: 0.620
==>>> it: 101, mem avg. loss: 0.622244, running mem acc: 0.705
==>>> it: 201, avg. loss: 0.825103, running train acc: 0.665
==>>> it: 201, mem avg. loss: 0.544085, running mem acc: 0.748
==>>> it: 301, avg. loss: 0.773088, running train acc: 0.681
==>>> it: 301, mem avg. loss: 0.502800, running mem acc: 0.768
==>>> it: 401, avg. loss: 0.749233, running train acc: 0.692
==>>> it: 401, mem avg. loss: 0.495313, running mem acc: 0.773
==>>> it: 501, avg. loss: 0.715479, running train acc: 0.705
==>>> it: 501, mem avg. loss: 0.472219, running mem acc: 0.785
==>>> it: 601, avg. loss: 0.692767, running train acc: 0.716
==>>> it: 601, mem avg. loss: 0.457847, running mem acc: 0.791
==>>> it: 701, avg. loss: 0.672020, running train acc: 0.726
==>>> it: 701, mem avg. loss: 0.462109, running mem acc: 0.792
==>>> it: 801, avg. loss: 0.656804, running train acc: 0.732
==>>> it: 801, mem avg. loss: 0.450491, running mem acc: 0.799
==>>> it: 901, avg. loss: 0.643379, running train acc: 0.737
==>>> it: 901, mem avg. loss: 0.440225, running mem acc: 0.806
[0.791 0. 0. 0. 0. ]
-----------run 1 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.928461, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.038443, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.595826, running train acc: 0.429
==>>> it: 101, mem avg. loss: 0.980230, running mem acc: 0.603
==>>> it: 201, avg. loss: 1.515725, running train acc: 0.511
==>>> it: 201, mem avg. loss: 1.167012, running mem acc: 0.580
==>>> it: 301, avg. loss: 1.421692, running train acc: 0.560
==>>> it: 301, mem avg. loss: 1.247424, running mem acc: 0.569
==>>> it: 401, avg. loss: 1.382044, running train acc: 0.596
==>>> it: 401, mem avg. loss: 1.315885, running mem acc: 0.556
==>>> it: 501, avg. loss: 1.386719, running train acc: 0.607
==>>> it: 501, mem avg. loss: 1.327387, running mem acc: 0.557
==>>> it: 601, avg. loss: 1.433002, running train acc: 0.610
==>>> it: 601, mem avg. loss: 1.338796, running mem acc: 0.558
==>>> it: 701, avg. loss: 1.422777, running train acc: 0.619
==>>> it: 701, mem avg. loss: 1.329164, running mem acc: 0.566
==>>> it: 801, avg. loss: 1.397495, running train acc: 0.631
==>>> it: 801, mem avg. loss: 1.320004, running mem acc: 0.566
==>>> it: 901, avg. loss: 1.426931, running train acc: 0.635
==>>> it: 901, mem avg. loss: 1.320986, running mem acc: 0.569
[0.118 0.9585 0. 0. 0. ]
-----------run 1 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.641842, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.644038, running mem acc: 0.750
==>>> it: 101, avg. loss: 1.248967, running train acc: 0.573
==>>> it: 101, mem avg. loss: 1.576881, running mem acc: 0.451
==>>> it: 201, avg. loss: 1.049485, running train acc: 0.649
==>>> it: 201, mem avg. loss: 1.673498, running mem acc: 0.474
==>>> it: 301, avg. loss: 1.003246, running train acc: 0.670
==>>> it: 301, mem avg. loss: 1.605797, running mem acc: 0.506
==>>> it: 401, avg. loss: 0.970591, running train acc: 0.686
==>>> it: 401, mem avg. loss: 1.595379, running mem acc: 0.520
==>>> it: 501, avg. loss: 0.949932, running train acc: 0.699
==>>> it: 501, mem avg. loss: 1.636166, running mem acc: 0.513
==>>> it: 601, avg. loss: 0.935501, running train acc: 0.707
==>>> it: 601, mem avg. loss: 1.664925, running mem acc: 0.509
==>>> it: 701, avg. loss: 0.904217, running train acc: 0.717
==>>> it: 701, mem avg. loss: 1.643778, running mem acc: 0.513
==>>> it: 801, avg. loss: 0.903892, running train acc: 0.722
==>>> it: 801, mem avg. loss: 1.705957, running mem acc: 0.505
==>>> it: 901, avg. loss: 0.912207, running train acc: 0.722
==>>> it: 901, mem avg. loss: 1.689624, running mem acc: 0.511
[0.062 0.222 0.923 0. 0. ]
-----------run 1 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.100552, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.809042, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.176503, running train acc: 0.586
==>>> it: 101, mem avg. loss: 1.115445, running mem acc: 0.573
==>>> it: 201, avg. loss: 0.903372, running train acc: 0.689
==>>> it: 201, mem avg. loss: 1.184635, running mem acc: 0.578
==>>> it: 301, avg. loss: 0.814796, running train acc: 0.717
==>>> it: 301, mem avg. loss: 1.232070, running mem acc: 0.591
==>>> it: 401, avg. loss: 0.753372, running train acc: 0.735
==>>> it: 401, mem avg. loss: 1.237938, running mem acc: 0.596
==>>> it: 501, avg. loss: 0.730460, running train acc: 0.743
==>>> it: 501, mem avg. loss: 1.225833, running mem acc: 0.602
==>>> it: 601, avg. loss: 0.700754, running train acc: 0.755
==>>> it: 601, mem avg. loss: 1.260163, running mem acc: 0.599
==>>> it: 701, avg. loss: 0.668975, running train acc: 0.767
==>>> it: 701, mem avg. loss: 1.266929, running mem acc: 0.597
==>>> it: 801, avg. loss: 0.638782, running train acc: 0.778
==>>> it: 801, mem avg. loss: 1.269994, running mem acc: 0.595
==>>> it: 901, avg. loss: 0.618768, running train acc: 0.786
==>>> it: 901, mem avg. loss: 1.281296, running mem acc: 0.593
[0.003 0.088 0.1385 0.898 0. ]
-----------run 1 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.119282, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.104572, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.916637, running train acc: 0.710
==>>> it: 101, mem avg. loss: 0.906527, running mem acc: 0.675
==>>> it: 201, avg. loss: 0.685522, running train acc: 0.770
==>>> it: 201, mem avg. loss: 0.953240, running mem acc: 0.679
==>>> it: 301, avg. loss: 0.599628, running train acc: 0.790
==>>> it: 301, mem avg. loss: 1.042624, running mem acc: 0.672
==>>> it: 401, avg. loss: 0.571288, running train acc: 0.802
==>>> it: 401, mem avg. loss: 1.099396, running mem acc: 0.655
==>>> it: 501, avg. loss: 0.528266, running train acc: 0.816
==>>> it: 501, mem avg. loss: 1.120456, running mem acc: 0.658
==>>> it: 601, avg. loss: 0.507938, running train acc: 0.823
==>>> it: 601, mem avg. loss: 1.178216, running mem acc: 0.648
==>>> it: 701, avg. loss: 0.479084, running train acc: 0.831
==>>> it: 701, mem avg. loss: 1.175897, running mem acc: 0.648
==>>> it: 801, avg. loss: 0.455231, running train acc: 0.840
==>>> it: 801, mem avg. loss: 1.192652, running mem acc: 0.642
==>>> it: 901, avg. loss: 0.433300, running train acc: 0.847
==>>> it: 901, mem avg. loss: 1.205229, running mem acc: 0.641
[0.0135 0.0125 0.085 0.074 0.953 ]
-----------run 1-----------avg_end_acc 0.22759999999999997-----------train time 358.8827509880066
Task: 0, Labels:[7, 3]
Task: 1, Labels:[9, 1]
Task: 2, Labels:[8, 4]
Task: 3, Labels:[5, 6]
Task: 4, Labels:[0, 2]
buffer has 200 slots
-----------run 2 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.830795, running train acc: 0.300
==>>> it: 1, mem avg. loss: 0.693073, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.973233, running train acc: 0.578
==>>> it: 101, mem avg. loss: 0.704828, running mem acc: 0.674
==>>> it: 201, avg. loss: 0.828202, running train acc: 0.639
==>>> it: 201, mem avg. loss: 0.614845, running mem acc: 0.717
==>>> it: 301, avg. loss: 0.764841, running train acc: 0.669
==>>> it: 301, mem avg. loss: 0.550493, running mem acc: 0.749
==>>> it: 401, avg. loss: 0.722202, running train acc: 0.692
==>>> it: 401, mem avg. loss: 0.515839, running mem acc: 0.765
==>>> it: 501, avg. loss: 0.687150, running train acc: 0.707
==>>> it: 501, mem avg. loss: 0.489951, running mem acc: 0.778
==>>> it: 601, avg. loss: 0.661459, running train acc: 0.722
==>>> it: 601, mem avg. loss: 0.468036, running mem acc: 0.788
==>>> it: 701, avg. loss: 0.651921, running train acc: 0.729
==>>> it: 701, mem avg. loss: 0.448312, running mem acc: 0.800
==>>> it: 801, avg. loss: 0.631976, running train acc: 0.737
==>>> it: 801, mem avg. loss: 0.432503, running mem acc: 0.808
==>>> it: 901, avg. loss: 0.616292, running train acc: 0.745
==>>> it: 901, mem avg. loss: 0.421437, running mem acc: 0.814
[0.8375 0. 0. 0. 0. ]
-----------run 2 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.042130, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.154605, running mem acc: 0.950
==>>> it: 101, avg. loss: 1.977878, running train acc: 0.364
==>>> it: 101, mem avg. loss: 1.502540, running mem acc: 0.536
==>>> it: 201, avg. loss: 1.859057, running train acc: 0.418
==>>> it: 201, mem avg. loss: 1.771965, running mem acc: 0.503
==>>> it: 301, avg. loss: 1.840611, running train acc: 0.441
==>>> it: 301, mem avg. loss: 1.909391, running mem acc: 0.486
==>>> it: 401, avg. loss: 1.768059, running train acc: 0.471
==>>> it: 401, mem avg. loss: 2.001215, running mem acc: 0.487
==>>> it: 501, avg. loss: 1.703275, running train acc: 0.496
==>>> it: 501, mem avg. loss: 2.037848, running mem acc: 0.482
==>>> it: 601, avg. loss: 1.642701, running train acc: 0.515
==>>> it: 601, mem avg. loss: 2.106936, running mem acc: 0.468
==>>> it: 701, avg. loss: 1.611619, running train acc: 0.528
==>>> it: 701, mem avg. loss: 2.113528, running mem acc: 0.470
==>>> it: 801, avg. loss: 1.584971, running train acc: 0.543
==>>> it: 801, mem avg. loss: 2.154865, running mem acc: 0.467
==>>> it: 901, avg. loss: 1.557653, running train acc: 0.553
==>>> it: 901, mem avg. loss: 2.202518, running mem acc: 0.465
[0.3555 0.8225 0. 0. 0. ]
-----------run 2 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.264886, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.662170, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.288040, running train acc: 0.589
==>>> it: 101, mem avg. loss: 1.147391, running mem acc: 0.563
==>>> it: 201, avg. loss: 1.132960, running train acc: 0.637
==>>> it: 201, mem avg. loss: 1.169589, running mem acc: 0.573
==>>> it: 301, avg. loss: 1.059477, running train acc: 0.663
==>>> it: 301, mem avg. loss: 1.138503, running mem acc: 0.596
==>>> it: 401, avg. loss: 0.996776, running train acc: 0.682
==>>> it: 401, mem avg. loss: 1.135886, running mem acc: 0.609
==>>> it: 501, avg. loss: 0.979989, running train acc: 0.690
==>>> it: 501, mem avg. loss: 1.151726, running mem acc: 0.613
==>>> it: 601, avg. loss: 0.950254, running train acc: 0.704
==>>> it: 601, mem avg. loss: 1.135023, running mem acc: 0.617
==>>> it: 701, avg. loss: 0.941803, running train acc: 0.713
==>>> it: 701, mem avg. loss: 1.159697, running mem acc: 0.615
==>>> it: 801, avg. loss: 0.959624, running train acc: 0.715
==>>> it: 801, mem avg. loss: 1.190748, running mem acc: 0.608
==>>> it: 901, avg. loss: 0.954831, running train acc: 0.723
==>>> it: 901, mem avg. loss: 1.216257, running mem acc: 0.606
[0.0335 0.185 0.953 0. 0. ]
-----------run 2 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.875390, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.714287, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.295891, running train acc: 0.537
==>>> it: 101, mem avg. loss: 1.053761, running mem acc: 0.650
==>>> it: 201, avg. loss: 1.026351, running train acc: 0.619
==>>> it: 201, mem avg. loss: 1.034634, running mem acc: 0.658
==>>> it: 301, avg. loss: 0.920582, running train acc: 0.662
==>>> it: 301, mem avg. loss: 1.061766, running mem acc: 0.657
==>>> it: 401, avg. loss: 0.864622, running train acc: 0.683
==>>> it: 401, mem avg. loss: 1.138489, running mem acc: 0.638
==>>> it: 501, avg. loss: 0.831251, running train acc: 0.696
==>>> it: 501, mem avg. loss: 1.182220, running mem acc: 0.628
==>>> it: 601, avg. loss: 0.808119, running train acc: 0.706
==>>> it: 601, mem avg. loss: 1.203013, running mem acc: 0.622
==>>> it: 701, avg. loss: 0.785256, running train acc: 0.713
==>>> it: 701, mem avg. loss: 1.208066, running mem acc: 0.619
==>>> it: 801, avg. loss: 0.762937, running train acc: 0.722
==>>> it: 801, mem avg. loss: 1.198189, running mem acc: 0.620
==>>> it: 901, avg. loss: 0.755998, running train acc: 0.726
==>>> it: 901, mem avg. loss: 1.197184, running mem acc: 0.623
[0.007 0.2045 0.3165 0.8725 0. ]
-----------run 2 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.132127, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.129415, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.201187, running train acc: 0.584
==>>> it: 101, mem avg. loss: 1.143965, running mem acc: 0.610
==>>> it: 201, avg. loss: 0.956593, running train acc: 0.659
==>>> it: 201, mem avg. loss: 1.175777, running mem acc: 0.609
==>>> it: 301, avg. loss: 0.893027, running train acc: 0.669
==>>> it: 301, mem avg. loss: 1.147648, running mem acc: 0.626
==>>> it: 401, avg. loss: 0.855874, running train acc: 0.682
==>>> it: 401, mem avg. loss: 1.186681, running mem acc: 0.617
==>>> it: 501, avg. loss: 0.833885, running train acc: 0.695
==>>> it: 501, mem avg. loss: 1.245756, running mem acc: 0.606
==>>> it: 601, avg. loss: 0.807589, running train acc: 0.703
==>>> it: 601, mem avg. loss: 1.264589, running mem acc: 0.601
==>>> it: 701, avg. loss: 0.784931, running train acc: 0.713
==>>> it: 701, mem avg. loss: 1.248049, running mem acc: 0.606
==>>> it: 801, avg. loss: 0.766062, running train acc: 0.719
==>>> it: 801, mem avg. loss: 1.278820, running mem acc: 0.596
==>>> it: 901, avg. loss: 0.751439, running train acc: 0.725
==>>> it: 901, mem avg. loss: 1.329250, running mem acc: 0.587
[0.014 0.093 0.0375 0.079 0.872 ]
-----------run 2-----------avg_end_acc 0.2191-----------train time 361.83493661880493
Task: 0, Labels:[9, 0]
Task: 1, Labels:[7, 5]
Task: 2, Labels:[3, 2]
Task: 3, Labels:[6, 8]
Task: 4, Labels:[4, 1]
buffer has 200 slots
-----------run 3 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.616393, running train acc: 0.250
==>>> it: 1, mem avg. loss: 1.003314, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.818333, running train acc: 0.670
==>>> it: 101, mem avg. loss: 0.678705, running mem acc: 0.728
==>>> it: 201, avg. loss: 0.715619, running train acc: 0.708
==>>> it: 201, mem avg. loss: 0.596262, running mem acc: 0.752
==>>> it: 301, avg. loss: 0.649327, running train acc: 0.729
==>>> it: 301, mem avg. loss: 0.528387, running mem acc: 0.776
==>>> it: 401, avg. loss: 0.616625, running train acc: 0.743
==>>> it: 401, mem avg. loss: 0.481026, running mem acc: 0.795
==>>> it: 501, avg. loss: 0.589549, running train acc: 0.759
==>>> it: 501, mem avg. loss: 0.444141, running mem acc: 0.811
==>>> it: 601, avg. loss: 0.569192, running train acc: 0.768
==>>> it: 601, mem avg. loss: 0.420398, running mem acc: 0.823
==>>> it: 701, avg. loss: 0.548496, running train acc: 0.778
==>>> it: 701, mem avg. loss: 0.399149, running mem acc: 0.832
==>>> it: 801, avg. loss: 0.538423, running train acc: 0.784
==>>> it: 801, mem avg. loss: 0.386821, running mem acc: 0.837
==>>> it: 901, avg. loss: 0.522230, running train acc: 0.792
==>>> it: 901, mem avg. loss: 0.375308, running mem acc: 0.842
[0.863 0. 0. 0. 0. ]
-----------run 3 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.966468, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.036071, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.847055, running train acc: 0.417
==>>> it: 101, mem avg. loss: 0.999914, running mem acc: 0.652
==>>> it: 201, avg. loss: 1.815710, running train acc: 0.436
==>>> it: 201, mem avg. loss: 1.111604, running mem acc: 0.632
==>>> it: 301, avg. loss: 1.749467, running train acc: 0.467
==>>> it: 301, mem avg. loss: 1.154275, running mem acc: 0.621
==>>> it: 401, avg. loss: 1.719877, running train acc: 0.490
==>>> it: 401, mem avg. loss: 1.207809, running mem acc: 0.610
==>>> it: 501, avg. loss: 1.704327, running train acc: 0.503
==>>> it: 501, mem avg. loss: 1.224183, running mem acc: 0.605
==>>> it: 601, avg. loss: 1.702550, running train acc: 0.510
==>>> it: 601, mem avg. loss: 1.216591, running mem acc: 0.607
==>>> it: 701, avg. loss: 1.689686, running train acc: 0.523
==>>> it: 701, mem avg. loss: 1.207842, running mem acc: 0.610
==>>> it: 801, avg. loss: 1.674172, running train acc: 0.530
==>>> it: 801, mem avg. loss: 1.211530, running mem acc: 0.609
==>>> it: 901, avg. loss: 1.679660, running train acc: 0.536
==>>> it: 901, mem avg. loss: 1.203197, running mem acc: 0.614
[0.2325 0.7555 0. 0. 0. ]
-----------run 3 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.895998, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.463672, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.561145, running train acc: 0.415
==>>> it: 101, mem avg. loss: 1.224155, running mem acc: 0.542
==>>> it: 201, avg. loss: 1.441890, running train acc: 0.460
==>>> it: 201, mem avg. loss: 1.297389, running mem acc: 0.536
==>>> it: 301, avg. loss: 1.379063, running train acc: 0.482
==>>> it: 301, mem avg. loss: 1.282690, running mem acc: 0.550
==>>> it: 401, avg. loss: 1.325884, running train acc: 0.507
==>>> it: 401, mem avg. loss: 1.254275, running mem acc: 0.563
==>>> it: 501, avg. loss: 1.296667, running train acc: 0.524
==>>> it: 501, mem avg. loss: 1.273533, running mem acc: 0.563
==>>> it: 601, avg. loss: 1.256915, running train acc: 0.536
==>>> it: 601, mem avg. loss: 1.231287, running mem acc: 0.571
==>>> it: 701, avg. loss: 1.233280, running train acc: 0.549
==>>> it: 701, mem avg. loss: 1.198977, running mem acc: 0.581
==>>> it: 801, avg. loss: 1.230494, running train acc: 0.554
==>>> it: 801, mem avg. loss: 1.198175, running mem acc: 0.585
==>>> it: 901, avg. loss: 1.225145, running train acc: 0.560
==>>> it: 901, mem avg. loss: 1.199823, running mem acc: 0.584
[0.1605 0.0905 0.7495 0. 0. ]
-----------run 3 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.917183, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.205596, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.048564, running train acc: 0.660
==>>> it: 101, mem avg. loss: 1.253421, running mem acc: 0.560
==>>> it: 201, avg. loss: 0.839414, running train acc: 0.724
==>>> it: 201, mem avg. loss: 1.278255, running mem acc: 0.571
==>>> it: 301, avg. loss: 0.787808, running train acc: 0.744
==>>> it: 301, mem avg. loss: 1.349451, running mem acc: 0.568
==>>> it: 401, avg. loss: 0.731352, running train acc: 0.761
==>>> it: 401, mem avg. loss: 1.341570, running mem acc: 0.576
==>>> it: 501, avg. loss: 0.721176, running train acc: 0.764
==>>> it: 501, mem avg. loss: 1.343657, running mem acc: 0.582
==>>> it: 601, avg. loss: 0.681728, running train acc: 0.777
==>>> it: 601, mem avg. loss: 1.349315, running mem acc: 0.580
==>>> it: 701, avg. loss: 0.659434, running train acc: 0.785
==>>> it: 701, mem avg. loss: 1.358018, running mem acc: 0.582
==>>> it: 801, avg. loss: 0.638762, running train acc: 0.793
==>>> it: 801, mem avg. loss: 1.363391, running mem acc: 0.582
==>>> it: 901, avg. loss: 0.614060, running train acc: 0.801
==>>> it: 901, mem avg. loss: 1.359676, running mem acc: 0.583
[0.026 0.1285 0.0615 0.9635 0. ]
-----------run 3 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.292490, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.318352, running mem acc: 0.550
==>>> it: 101, avg. loss: 0.861588, running train acc: 0.753
==>>> it: 101, mem avg. loss: 0.955220, running mem acc: 0.641
==>>> it: 201, avg. loss: 0.639851, running train acc: 0.805
==>>> it: 201, mem avg. loss: 1.081663, running mem acc: 0.635
==>>> it: 301, avg. loss: 0.553528, running train acc: 0.829
==>>> it: 301, mem avg. loss: 1.025432, running mem acc: 0.667
==>>> it: 401, avg. loss: 0.520615, running train acc: 0.835
==>>> it: 401, mem avg. loss: 1.064242, running mem acc: 0.669
==>>> it: 501, avg. loss: 0.488462, running train acc: 0.843
==>>> it: 501, mem avg. loss: 1.115369, running mem acc: 0.658
==>>> it: 601, avg. loss: 0.453744, running train acc: 0.853
==>>> it: 601, mem avg. loss: 1.143580, running mem acc: 0.654
==>>> it: 701, avg. loss: 0.429078, running train acc: 0.861
==>>> it: 701, mem avg. loss: 1.191207, running mem acc: 0.645
==>>> it: 801, avg. loss: 0.418600, running train acc: 0.865
==>>> it: 801, mem avg. loss: 1.224465, running mem acc: 0.634
==>>> it: 901, avg. loss: 0.404278, running train acc: 0.870
==>>> it: 901, mem avg. loss: 1.243558, running mem acc: 0.631
[0.0195 0.0875 0.0065 0.199 0.968 ]
-----------run 3-----------avg_end_acc 0.2561-----------train time 358.91018629074097
Task: 0, Labels:[1, 6]
Task: 1, Labels:[3, 5]
Task: 2, Labels:[0, 9]
Task: 3, Labels:[2, 8]
Task: 4, Labels:[7, 4]
buffer has 200 slots
-----------run 4 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.944538, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.963893, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.575127, running train acc: 0.806
==>>> it: 101, mem avg. loss: 0.692649, running mem acc: 0.764
==>>> it: 201, avg. loss: 0.494559, running train acc: 0.839
==>>> it: 201, mem avg. loss: 0.593839, running mem acc: 0.797
==>>> it: 301, avg. loss: 0.441519, running train acc: 0.856
==>>> it: 301, mem avg. loss: 0.532984, running mem acc: 0.812
==>>> it: 401, avg. loss: 0.397281, running train acc: 0.870
==>>> it: 401, mem avg. loss: 0.492999, running mem acc: 0.822
==>>> it: 501, avg. loss: 0.365051, running train acc: 0.879
==>>> it: 501, mem avg. loss: 0.449466, running mem acc: 0.834
==>>> it: 601, avg. loss: 0.345992, running train acc: 0.885
==>>> it: 601, mem avg. loss: 0.433455, running mem acc: 0.838
==>>> it: 701, avg. loss: 0.327070, running train acc: 0.891
==>>> it: 701, mem avg. loss: 0.417830, running mem acc: 0.843
==>>> it: 801, avg. loss: 0.318430, running train acc: 0.895
==>>> it: 801, mem avg. loss: 0.404719, running mem acc: 0.848
==>>> it: 901, avg. loss: 0.313842, running train acc: 0.897
==>>> it: 901, mem avg. loss: 0.391949, running mem acc: 0.853
[0.944 0. 0. 0. 0. ]
-----------run 4 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 11.141585, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.133669, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.981893, running train acc: 0.316
==>>> it: 101, mem avg. loss: 1.132128, running mem acc: 0.591
==>>> it: 201, avg. loss: 1.793167, running train acc: 0.338
==>>> it: 201, mem avg. loss: 1.329128, running mem acc: 0.537
==>>> it: 301, avg. loss: 1.708968, running train acc: 0.361
==>>> it: 301, mem avg. loss: 1.440932, running mem acc: 0.532
==>>> it: 401, avg. loss: 1.639482, running train acc: 0.381
==>>> it: 401, mem avg. loss: 1.489372, running mem acc: 0.529
==>>> it: 501, avg. loss: 1.561463, running train acc: 0.399
==>>> it: 501, mem avg. loss: 1.530419, running mem acc: 0.529
==>>> it: 601, avg. loss: 1.508692, running train acc: 0.409
==>>> it: 601, mem avg. loss: 1.552461, running mem acc: 0.537
==>>> it: 701, avg. loss: 1.462283, running train acc: 0.419
==>>> it: 701, mem avg. loss: 1.556903, running mem acc: 0.543
==>>> it: 801, avg. loss: 1.422261, running train acc: 0.434
==>>> it: 801, mem avg. loss: 1.596865, running mem acc: 0.542
==>>> it: 901, avg. loss: 1.373560, running train acc: 0.448
==>>> it: 901, mem avg. loss: 1.581930, running mem acc: 0.549
[0.332 0.5875 0. 0. 0. ]
-----------run 4 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.460232, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.652971, running mem acc: 0.500
==>>> it: 101, avg. loss: 1.488140, running train acc: 0.510
==>>> it: 101, mem avg. loss: 1.351693, running mem acc: 0.526
==>>> it: 201, avg. loss: 1.279975, running train acc: 0.574
==>>> it: 201, mem avg. loss: 1.475291, running mem acc: 0.526
==>>> it: 301, avg. loss: 1.237328, running train acc: 0.600
==>>> it: 301, mem avg. loss: 1.581478, running mem acc: 0.523
==>>> it: 401, avg. loss: 1.208664, running train acc: 0.612
==>>> it: 401, mem avg. loss: 1.618472, running mem acc: 0.525
==>>> it: 501, avg. loss: 1.185114, running train acc: 0.624
==>>> it: 501, mem avg. loss: 1.686954, running mem acc: 0.518
==>>> it: 601, avg. loss: 1.128607, running train acc: 0.643
==>>> it: 601, mem avg. loss: 1.697352, running mem acc: 0.519
==>>> it: 701, avg. loss: 1.114078, running train acc: 0.653
==>>> it: 701, mem avg. loss: 1.691454, running mem acc: 0.522
==>>> it: 801, avg. loss: 1.090853, running train acc: 0.662
==>>> it: 801, mem avg. loss: 1.708985, running mem acc: 0.521
==>>> it: 901, avg. loss: 1.116445, running train acc: 0.665
==>>> it: 901, mem avg. loss: 1.736914, running mem acc: 0.521
[0.1115 0.2045 0.884 0. 0. ]
-----------run 4 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.879036, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.359957, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.217059, running train acc: 0.572
==>>> it: 101, mem avg. loss: 1.186522, running mem acc: 0.581
==>>> it: 201, avg. loss: 1.018763, running train acc: 0.640
==>>> it: 201, mem avg. loss: 1.213826, running mem acc: 0.595
==>>> it: 301, avg. loss: 0.934800, running train acc: 0.672
==>>> it: 301, mem avg. loss: 1.211018, running mem acc: 0.597
==>>> it: 401, avg. loss: 0.890802, running train acc: 0.685
==>>> it: 401, mem avg. loss: 1.249100, running mem acc: 0.594
==>>> it: 501, avg. loss: 0.834658, running train acc: 0.706
==>>> it: 501, mem avg. loss: 1.242247, running mem acc: 0.602
==>>> it: 601, avg. loss: 0.801443, running train acc: 0.720
==>>> it: 601, mem avg. loss: 1.214317, running mem acc: 0.612
==>>> it: 701, avg. loss: 0.800314, running train acc: 0.724
==>>> it: 701, mem avg. loss: 1.245760, running mem acc: 0.606
==>>> it: 801, avg. loss: 0.784725, running train acc: 0.730
==>>> it: 801, mem avg. loss: 1.259908, running mem acc: 0.604
==>>> it: 901, avg. loss: 0.781507, running train acc: 0.733
==>>> it: 901, mem avg. loss: 1.253265, running mem acc: 0.607
[0.041 0.0815 0.0965 0.872 0. ]
-----------run 4 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.943014, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.081432, running mem acc: 0.950
==>>> it: 101, avg. loss: 1.337805, running train acc: 0.502
==>>> it: 101, mem avg. loss: 1.071442, running mem acc: 0.614
==>>> it: 201, avg. loss: 1.056043, running train acc: 0.587
==>>> it: 201, mem avg. loss: 1.051421, running mem acc: 0.631
==>>> it: 301, avg. loss: 0.966068, running train acc: 0.618
==>>> it: 301, mem avg. loss: 1.099461, running mem acc: 0.624
==>>> it: 401, avg. loss: 0.885617, running train acc: 0.647
==>>> it: 401, mem avg. loss: 1.130276, running mem acc: 0.620
==>>> it: 501, avg. loss: 0.852807, running train acc: 0.661
==>>> it: 501, mem avg. loss: 1.198584, running mem acc: 0.605
==>>> it: 601, avg. loss: 0.827000, running train acc: 0.673
==>>> it: 601, mem avg. loss: 1.269797, running mem acc: 0.591
==>>> it: 701, avg. loss: 0.807084, running train acc: 0.678
==>>> it: 701, mem avg. loss: 1.281898, running mem acc: 0.590
==>>> it: 801, avg. loss: 0.787623, running train acc: 0.685
==>>> it: 801, mem avg. loss: 1.304745, running mem acc: 0.587
==>>> it: 901, avg. loss: 0.770198, running train acc: 0.694
==>>> it: 901, mem avg. loss: 1.309789, running mem acc: 0.591
[0.016 0.03 0.199 0.18 0.838]
-----------run 4-----------avg_end_acc 0.2526-----------train time 357.56273317337036
Task: 0, Labels:[4, 3]
Task: 1, Labels:[8, 5]
Task: 2, Labels:[0, 2]
Task: 3, Labels:[9, 6]
Task: 4, Labels:[7, 1]
buffer has 200 slots
-----------run 5 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.578283, running train acc: 0.250
==>>> it: 1, mem avg. loss: 1.249877, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.922471, running train acc: 0.628
==>>> it: 101, mem avg. loss: 0.644823, running mem acc: 0.718
==>>> it: 201, avg. loss: 0.790587, running train acc: 0.663
==>>> it: 201, mem avg. loss: 0.596845, running mem acc: 0.737
==>>> it: 301, avg. loss: 0.763355, running train acc: 0.677
==>>> it: 301, mem avg. loss: 0.552495, running mem acc: 0.752
==>>> it: 401, avg. loss: 0.740845, running train acc: 0.687
==>>> it: 401, mem avg. loss: 0.526987, running mem acc: 0.760
==>>> it: 501, avg. loss: 0.714505, running train acc: 0.695
==>>> it: 501, mem avg. loss: 0.519044, running mem acc: 0.765
==>>> it: 601, avg. loss: 0.690666, running train acc: 0.708
==>>> it: 601, mem avg. loss: 0.499666, running mem acc: 0.775
==>>> it: 701, avg. loss: 0.671928, running train acc: 0.716
==>>> it: 701, mem avg. loss: 0.496474, running mem acc: 0.777
==>>> it: 801, avg. loss: 0.666068, running train acc: 0.721
==>>> it: 801, mem avg. loss: 0.484888, running mem acc: 0.784
==>>> it: 901, avg. loss: 0.650634, running train acc: 0.727
==>>> it: 901, mem avg. loss: 0.473663, running mem acc: 0.789
[0.7815 0. 0. 0. 0. ]
-----------run 5 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.554150, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.190670, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.670862, running train acc: 0.465
==>>> it: 101, mem avg. loss: 1.298996, running mem acc: 0.520
==>>> it: 201, avg. loss: 1.498901, running train acc: 0.532
==>>> it: 201, mem avg. loss: 1.500602, running mem acc: 0.490
==>>> it: 301, avg. loss: 1.458031, running train acc: 0.561
==>>> it: 301, mem avg. loss: 1.666458, running mem acc: 0.476
==>>> it: 401, avg. loss: 1.463576, running train acc: 0.575
==>>> it: 401, mem avg. loss: 1.706202, running mem acc: 0.482
==>>> it: 501, avg. loss: 1.399455, running train acc: 0.600
==>>> it: 501, mem avg. loss: 1.728629, running mem acc: 0.480
==>>> it: 601, avg. loss: 1.348404, running train acc: 0.617
==>>> it: 601, mem avg. loss: 1.736694, running mem acc: 0.479
==>>> it: 701, avg. loss: 1.334469, running train acc: 0.628
==>>> it: 701, mem avg. loss: 1.725126, running mem acc: 0.483
==>>> it: 801, avg. loss: 1.337693, running train acc: 0.635
==>>> it: 801, mem avg. loss: 1.724921, running mem acc: 0.488
==>>> it: 901, avg. loss: 1.349309, running train acc: 0.640
==>>> it: 901, mem avg. loss: 1.707368, running mem acc: 0.493
[0.093 0.912 0. 0. 0. ]
-----------run 5 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.784210, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.166661, running mem acc: 0.600
==>>> it: 101, avg. loss: 1.428597, running train acc: 0.502
==>>> it: 101, mem avg. loss: 1.076206, running mem acc: 0.547
==>>> it: 201, avg. loss: 1.201467, running train acc: 0.563
==>>> it: 201, mem avg. loss: 1.092634, running mem acc: 0.577
==>>> it: 301, avg. loss: 1.068791, running train acc: 0.605
==>>> it: 301, mem avg. loss: 1.140011, running mem acc: 0.587
==>>> it: 401, avg. loss: 1.016684, running train acc: 0.621
==>>> it: 401, mem avg. loss: 1.201630, running mem acc: 0.582
==>>> it: 501, avg. loss: 0.990004, running train acc: 0.633
==>>> it: 501, mem avg. loss: 1.223274, running mem acc: 0.582
==>>> it: 601, avg. loss: 0.968943, running train acc: 0.645
==>>> it: 601, mem avg. loss: 1.219539, running mem acc: 0.590
==>>> it: 701, avg. loss: 0.931725, running train acc: 0.657
==>>> it: 701, mem avg. loss: 1.217522, running mem acc: 0.590
==>>> it: 801, avg. loss: 0.905948, running train acc: 0.666
==>>> it: 801, mem avg. loss: 1.223998, running mem acc: 0.589
==>>> it: 901, avg. loss: 0.877856, running train acc: 0.676
==>>> it: 901, mem avg. loss: 1.246437, running mem acc: 0.582
[0.025 0.029 0.877 0. 0. ]
-----------run 5 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.466000, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.600794, running mem acc: 0.600
==>>> it: 101, avg. loss: 1.017673, running train acc: 0.666
==>>> it: 101, mem avg. loss: 1.011015, running mem acc: 0.647
==>>> it: 201, avg. loss: 0.803784, running train acc: 0.727
==>>> it: 201, mem avg. loss: 1.090327, running mem acc: 0.645
==>>> it: 301, avg. loss: 0.689737, running train acc: 0.763
==>>> it: 301, mem avg. loss: 1.158691, running mem acc: 0.630
==>>> it: 401, avg. loss: 0.622602, running train acc: 0.780
==>>> it: 401, mem avg. loss: 1.197662, running mem acc: 0.627
==>>> it: 501, avg. loss: 0.596045, running train acc: 0.787
==>>> it: 501, mem avg. loss: 1.227741, running mem acc: 0.623
==>>> it: 601, avg. loss: 0.563013, running train acc: 0.799
==>>> it: 601, mem avg. loss: 1.230588, running mem acc: 0.623
==>>> it: 701, avg. loss: 0.545375, running train acc: 0.805
==>>> it: 701, mem avg. loss: 1.252217, running mem acc: 0.624
==>>> it: 801, avg. loss: 0.529123, running train acc: 0.809
==>>> it: 801, mem avg. loss: 1.196073, running mem acc: 0.640
==>>> it: 901, avg. loss: 0.515032, running train acc: 0.815
==>>> it: 901, mem avg. loss: 1.206743, running mem acc: 0.641
[0.0075 0.0335 0.125 0.9375 0. ]
-----------run 5 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.987063, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.162752, running mem acc: 0.900
==>>> it: 101, avg. loss: 0.894613, running train acc: 0.714
==>>> it: 101, mem avg. loss: 0.841552, running mem acc: 0.700
==>>> it: 201, avg. loss: 0.667196, running train acc: 0.772
==>>> it: 201, mem avg. loss: 0.999749, running mem acc: 0.664
==>>> it: 301, avg. loss: 0.561921, running train acc: 0.806
==>>> it: 301, mem avg. loss: 0.991697, running mem acc: 0.671
==>>> it: 401, avg. loss: 0.504406, running train acc: 0.826
==>>> it: 401, mem avg. loss: 1.049513, running mem acc: 0.661
==>>> it: 501, avg. loss: 0.481323, running train acc: 0.834
==>>> it: 501, mem avg. loss: 1.077605, running mem acc: 0.665
==>>> it: 601, avg. loss: 0.453708, running train acc: 0.842
==>>> it: 601, mem avg. loss: 1.088178, running mem acc: 0.661
==>>> it: 701, avg. loss: 0.436707, running train acc: 0.849
==>>> it: 701, mem avg. loss: 1.085347, running mem acc: 0.665
==>>> it: 801, avg. loss: 0.417894, running train acc: 0.856
==>>> it: 801, mem avg. loss: 1.098141, running mem acc: 0.663
==>>> it: 901, avg. loss: 0.400547, running train acc: 0.862
==>>> it: 901, mem avg. loss: 1.115344, running mem acc: 0.659
[0.0275 0.057 0.152 0.0925 0.957 ]
-----------run 5-----------avg_end_acc 0.2572-----------train time 357.31354308128357
Task: 0, Labels:[0, 1]
Task: 1, Labels:[2, 3]
Task: 2, Labels:[7, 6]
Task: 3, Labels:[9, 4]
Task: 4, Labels:[5, 8]
buffer has 200 slots
-----------run 6 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.833361, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.862043, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.755709, running train acc: 0.695
==>>> it: 101, mem avg. loss: 0.626251, running mem acc: 0.741
==>>> it: 201, avg. loss: 0.649792, running train acc: 0.747
==>>> it: 201, mem avg. loss: 0.496452, running mem acc: 0.792
==>>> it: 301, avg. loss: 0.606560, running train acc: 0.768
==>>> it: 301, mem avg. loss: 0.445102, running mem acc: 0.815
==>>> it: 401, avg. loss: 0.566411, running train acc: 0.786
==>>> it: 401, mem avg. loss: 0.423035, running mem acc: 0.827
==>>> it: 501, avg. loss: 0.538640, running train acc: 0.795
==>>> it: 501, mem avg. loss: 0.411376, running mem acc: 0.831
==>>> it: 601, avg. loss: 0.510376, running train acc: 0.808
==>>> it: 601, mem avg. loss: 0.403339, running mem acc: 0.833
==>>> it: 701, avg. loss: 0.488390, running train acc: 0.818
==>>> it: 701, mem avg. loss: 0.396586, running mem acc: 0.836
==>>> it: 801, avg. loss: 0.470849, running train acc: 0.825
==>>> it: 801, mem avg. loss: 0.385421, running mem acc: 0.839
==>>> it: 901, avg. loss: 0.456784, running train acc: 0.830
==>>> it: 901, mem avg. loss: 0.372945, running mem acc: 0.844
[0.872 0. 0. 0. 0. ]
-----------run 6 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.591821, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.055370, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.028344, running train acc: 0.372
==>>> it: 101, mem avg. loss: 1.097080, running mem acc: 0.602
==>>> it: 201, avg. loss: 1.870882, running train acc: 0.427
==>>> it: 201, mem avg. loss: 1.129302, running mem acc: 0.589
==>>> it: 301, avg. loss: 1.893059, running train acc: 0.452
==>>> it: 301, mem avg. loss: 1.119634, running mem acc: 0.604
==>>> it: 401, avg. loss: 1.890594, running train acc: 0.473
==>>> it: 401, mem avg. loss: 1.151619, running mem acc: 0.601
==>>> it: 501, avg. loss: 1.912213, running train acc: 0.482
==>>> it: 501, mem avg. loss: 1.142734, running mem acc: 0.608
==>>> it: 601, avg. loss: 1.914623, running train acc: 0.490
==>>> it: 601, mem avg. loss: 1.164729, running mem acc: 0.606
==>>> it: 701, avg. loss: 1.887907, running train acc: 0.500
==>>> it: 701, mem avg. loss: 1.160332, running mem acc: 0.603
==>>> it: 801, avg. loss: 1.871865, running train acc: 0.507
==>>> it: 801, mem avg. loss: 1.165566, running mem acc: 0.600
==>>> it: 901, avg. loss: 1.846386, running train acc: 0.515
==>>> it: 901, mem avg. loss: 1.179854, running mem acc: 0.597
[0.378 0.788 0. 0. 0. ]
-----------run 6 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.262117, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.889229, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.347322, running train acc: 0.524
==>>> it: 101, mem avg. loss: 1.283343, running mem acc: 0.566
==>>> it: 201, avg. loss: 1.129916, running train acc: 0.596
==>>> it: 201, mem avg. loss: 1.342824, running mem acc: 0.568
==>>> it: 301, avg. loss: 1.001738, running train acc: 0.642
==>>> it: 301, mem avg. loss: 1.361707, running mem acc: 0.572
==>>> it: 401, avg. loss: 0.925588, running train acc: 0.674
==>>> it: 401, mem avg. loss: 1.388867, running mem acc: 0.565
==>>> it: 501, avg. loss: 0.870522, running train acc: 0.697
==>>> it: 501, mem avg. loss: 1.392383, running mem acc: 0.571
==>>> it: 601, avg. loss: 0.827690, running train acc: 0.712
==>>> it: 601, mem avg. loss: 1.414581, running mem acc: 0.568
==>>> it: 701, avg. loss: 0.815222, running train acc: 0.723
==>>> it: 701, mem avg. loss: 1.420911, running mem acc: 0.572
==>>> it: 801, avg. loss: 0.795404, running train acc: 0.732
==>>> it: 801, mem avg. loss: 1.411279, running mem acc: 0.578
==>>> it: 901, avg. loss: 0.777688, running train acc: 0.738
==>>> it: 901, mem avg. loss: 1.422839, running mem acc: 0.575
[0.3385 0.036 0.935 0. 0. ]
-----------run 6 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.055633, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.426358, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.043919, running train acc: 0.652
==>>> it: 101, mem avg. loss: 0.992352, running mem acc: 0.638
==>>> it: 201, avg. loss: 0.792547, running train acc: 0.732
==>>> it: 201, mem avg. loss: 1.063485, running mem acc: 0.621
==>>> it: 301, avg. loss: 0.682095, running train acc: 0.766
==>>> it: 301, mem avg. loss: 1.056097, running mem acc: 0.630
==>>> it: 401, avg. loss: 0.638914, running train acc: 0.783
==>>> it: 401, mem avg. loss: 1.086145, running mem acc: 0.632
==>>> it: 501, avg. loss: 0.600733, running train acc: 0.795
==>>> it: 501, mem avg. loss: 1.137632, running mem acc: 0.623
==>>> it: 601, avg. loss: 0.577368, running train acc: 0.802
==>>> it: 601, mem avg. loss: 1.182558, running mem acc: 0.613
==>>> it: 701, avg. loss: 0.560853, running train acc: 0.809
==>>> it: 701, mem avg. loss: 1.192970, running mem acc: 0.613
==>>> it: 801, avg. loss: 0.562260, running train acc: 0.810
==>>> it: 801, mem avg. loss: 1.207200, running mem acc: 0.613
==>>> it: 901, avg. loss: 0.542746, running train acc: 0.816
==>>> it: 901, mem avg. loss: 1.217554, running mem acc: 0.611
[0.025 0.013 0.084 0.9425 0. ]
-----------run 6 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.143974, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.590939, running mem acc: 0.650
==>>> it: 101, avg. loss: 0.961911, running train acc: 0.714
==>>> it: 101, mem avg. loss: 0.891121, running mem acc: 0.628
==>>> it: 201, avg. loss: 0.725017, running train acc: 0.769
==>>> it: 201, mem avg. loss: 0.947870, running mem acc: 0.650
==>>> it: 301, avg. loss: 0.618451, running train acc: 0.798
==>>> it: 301, mem avg. loss: 1.010216, running mem acc: 0.640
==>>> it: 401, avg. loss: 0.552053, running train acc: 0.817
==>>> it: 401, mem avg. loss: 1.057063, running mem acc: 0.639
==>>> it: 501, avg. loss: 0.511465, running train acc: 0.829
==>>> it: 501, mem avg. loss: 1.113112, running mem acc: 0.632
==>>> it: 601, avg. loss: 0.475035, running train acc: 0.840
==>>> it: 601, mem avg. loss: 1.121968, running mem acc: 0.631
==>>> it: 701, avg. loss: 0.447532, running train acc: 0.849
==>>> it: 701, mem avg. loss: 1.120948, running mem acc: 0.629
==>>> it: 801, avg. loss: 0.441976, running train acc: 0.852
==>>> it: 801, mem avg. loss: 1.113229, running mem acc: 0.634
==>>> it: 901, avg. loss: 0.428114, running train acc: 0.857
==>>> it: 901, mem avg. loss: 1.154305, running mem acc: 0.629
[0.034 0.0295 0.064 0.1035 0.956 ]
-----------run 6-----------avg_end_acc 0.23739999999999997-----------train time 357.7628221511841
Task: 0, Labels:[2, 0]
Task: 1, Labels:[3, 6]
Task: 2, Labels:[5, 8]
Task: 3, Labels:[9, 1]
Task: 4, Labels:[7, 4]
buffer has 200 slots
-----------run 7 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.253926, running train acc: 0.500
==>>> it: 1, mem avg. loss: 0.262878, running mem acc: 0.900
==>>> it: 101, avg. loss: 0.726062, running train acc: 0.726
==>>> it: 101, mem avg. loss: 0.737730, running mem acc: 0.720
==>>> it: 201, avg. loss: 0.652920, running train acc: 0.749
==>>> it: 201, mem avg. loss: 0.608825, running mem acc: 0.745
==>>> it: 301, avg. loss: 0.605694, running train acc: 0.767
==>>> it: 301, mem avg. loss: 0.564952, running mem acc: 0.765
==>>> it: 401, avg. loss: 0.593721, running train acc: 0.775
==>>> it: 401, mem avg. loss: 0.544199, running mem acc: 0.769
==>>> it: 501, avg. loss: 0.576390, running train acc: 0.783
==>>> it: 501, mem avg. loss: 0.521323, running mem acc: 0.778
==>>> it: 601, avg. loss: 0.563522, running train acc: 0.787
==>>> it: 601, mem avg. loss: 0.513423, running mem acc: 0.782
==>>> it: 701, avg. loss: 0.552204, running train acc: 0.789
==>>> it: 701, mem avg. loss: 0.503087, running mem acc: 0.784
==>>> it: 801, avg. loss: 0.539867, running train acc: 0.795
==>>> it: 801, mem avg. loss: 0.487942, running mem acc: 0.790
==>>> it: 901, avg. loss: 0.525288, running train acc: 0.800
==>>> it: 901, mem avg. loss: 0.473509, running mem acc: 0.796
[0.8595 0. 0. 0. 0. ]
-----------run 7 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.671967, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.125587, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.833798, running train acc: 0.324
==>>> it: 101, mem avg. loss: 1.152744, running mem acc: 0.607
==>>> it: 201, avg. loss: 1.673347, running train acc: 0.404
==>>> it: 201, mem avg. loss: 1.289025, running mem acc: 0.587
==>>> it: 301, avg. loss: 1.591084, running train acc: 0.431
==>>> it: 301, mem avg. loss: 1.332375, running mem acc: 0.575
==>>> it: 401, avg. loss: 1.516904, running train acc: 0.463
==>>> it: 401, mem avg. loss: 1.339461, running mem acc: 0.573
==>>> it: 501, avg. loss: 1.465757, running train acc: 0.486
==>>> it: 501, mem avg. loss: 1.345392, running mem acc: 0.571
==>>> it: 601, avg. loss: 1.430510, running train acc: 0.507
==>>> it: 601, mem avg. loss: 1.342511, running mem acc: 0.574
==>>> it: 701, avg. loss: 1.425673, running train acc: 0.520
==>>> it: 701, mem avg. loss: 1.341519, running mem acc: 0.576
==>>> it: 801, avg. loss: 1.411362, running train acc: 0.531
==>>> it: 801, mem avg. loss: 1.384993, running mem acc: 0.568
==>>> it: 901, avg. loss: 1.407815, running train acc: 0.535
==>>> it: 901, mem avg. loss: 1.402491, running mem acc: 0.565
[0.295 0.7855 0. 0. 0. ]
-----------run 7 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.484653, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.395092, running mem acc: 0.750
==>>> it: 101, avg. loss: 1.151952, running train acc: 0.620
==>>> it: 101, mem avg. loss: 1.343050, running mem acc: 0.523
==>>> it: 201, avg. loss: 0.963532, running train acc: 0.668
==>>> it: 201, mem avg. loss: 1.551713, running mem acc: 0.502
==>>> it: 301, avg. loss: 0.890669, running train acc: 0.695
==>>> it: 301, mem avg. loss: 1.708809, running mem acc: 0.491
==>>> it: 401, avg. loss: 0.839828, running train acc: 0.714
==>>> it: 401, mem avg. loss: 1.731056, running mem acc: 0.496
==>>> it: 501, avg. loss: 0.795073, running train acc: 0.729
==>>> it: 501, mem avg. loss: 1.747547, running mem acc: 0.491
==>>> it: 601, avg. loss: 0.745792, running train acc: 0.745
==>>> it: 601, mem avg. loss: 1.736603, running mem acc: 0.499
==>>> it: 701, avg. loss: 0.727091, running train acc: 0.755
==>>> it: 701, mem avg. loss: 1.747363, running mem acc: 0.500
==>>> it: 801, avg. loss: 0.718962, running train acc: 0.760
==>>> it: 801, mem avg. loss: 1.780436, running mem acc: 0.496
==>>> it: 901, avg. loss: 0.717674, running train acc: 0.765
==>>> it: 901, mem avg. loss: 1.768286, running mem acc: 0.504
[0.0465 0.0605 0.942 0. 0. ]
-----------run 7 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.920018, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.154637, running mem acc: 0.550
==>>> it: 101, avg. loss: 1.432304, running train acc: 0.479
==>>> it: 101, mem avg. loss: 1.046757, running mem acc: 0.628
==>>> it: 201, avg. loss: 1.253209, running train acc: 0.536
==>>> it: 201, mem avg. loss: 1.164172, running mem acc: 0.610
==>>> it: 301, avg. loss: 1.167772, running train acc: 0.558
==>>> it: 301, mem avg. loss: 1.344925, running mem acc: 0.571
==>>> it: 401, avg. loss: 1.120557, running train acc: 0.577
==>>> it: 401, mem avg. loss: 1.409995, running mem acc: 0.566
==>>> it: 501, avg. loss: 1.065089, running train acc: 0.599
==>>> it: 501, mem avg. loss: 1.448513, running mem acc: 0.561
==>>> it: 601, avg. loss: 1.025511, running train acc: 0.615
==>>> it: 601, mem avg. loss: 1.534615, running mem acc: 0.548
==>>> it: 701, avg. loss: 1.014445, running train acc: 0.625
==>>> it: 701, mem avg. loss: 1.580698, running mem acc: 0.538
==>>> it: 801, avg. loss: 1.001876, running train acc: 0.633
==>>> it: 801, mem avg. loss: 1.606874, running mem acc: 0.536
==>>> it: 901, avg. loss: 0.983896, running train acc: 0.642
==>>> it: 901, mem avg. loss: 1.629255, running mem acc: 0.533
[0.104 0.1745 0.24 0.8515 0. ]
-----------run 7 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.258887, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.610927, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.096490, running train acc: 0.584
==>>> it: 101, mem avg. loss: 0.947562, running mem acc: 0.675
==>>> it: 201, avg. loss: 0.951852, running train acc: 0.630
==>>> it: 201, mem avg. loss: 1.100524, running mem acc: 0.633
==>>> it: 301, avg. loss: 0.852861, running train acc: 0.666
==>>> it: 301, mem avg. loss: 1.096936, running mem acc: 0.631
==>>> it: 401, avg. loss: 0.808590, running train acc: 0.681
==>>> it: 401, mem avg. loss: 1.191747, running mem acc: 0.610
==>>> it: 501, avg. loss: 0.769706, running train acc: 0.694
==>>> it: 501, mem avg. loss: 1.196592, running mem acc: 0.607
==>>> it: 601, avg. loss: 0.747662, running train acc: 0.702
==>>> it: 601, mem avg. loss: 1.210925, running mem acc: 0.601
==>>> it: 701, avg. loss: 0.730981, running train acc: 0.709
==>>> it: 701, mem avg. loss: 1.230744, running mem acc: 0.596
==>>> it: 801, avg. loss: 0.738974, running train acc: 0.710
==>>> it: 801, mem avg. loss: 1.250149, running mem acc: 0.597
==>>> it: 901, avg. loss: 0.735950, running train acc: 0.713
==>>> it: 901, mem avg. loss: 1.305289, running mem acc: 0.587
[0.031 0.0365 0.082 0.2165 0.8545]
-----------run 7-----------avg_end_acc 0.24409999999999998-----------train time 354.48273825645447
Task: 0, Labels:[2, 4]
Task: 1, Labels:[1, 9]
Task: 2, Labels:[6, 8]
Task: 3, Labels:[3, 0]
Task: 4, Labels:[5, 7]
buffer has 200 slots
-----------run 8 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.971492, running train acc: 0.400
==>>> it: 1, mem avg. loss: 1.072496, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.003407, running train acc: 0.542
==>>> it: 101, mem avg. loss: 0.789860, running mem acc: 0.643
==>>> it: 201, avg. loss: 0.937843, running train acc: 0.558
==>>> it: 201, mem avg. loss: 0.683468, running mem acc: 0.689
==>>> it: 301, avg. loss: 0.904287, running train acc: 0.571
==>>> it: 301, mem avg. loss: 0.635474, running mem acc: 0.716
==>>> it: 401, avg. loss: 0.892103, running train acc: 0.582
==>>> it: 401, mem avg. loss: 0.602407, running mem acc: 0.728
==>>> it: 501, avg. loss: 0.875251, running train acc: 0.588
==>>> it: 501, mem avg. loss: 0.576786, running mem acc: 0.739
==>>> it: 601, avg. loss: 0.863835, running train acc: 0.592
==>>> it: 601, mem avg. loss: 0.555901, running mem acc: 0.749
==>>> it: 701, avg. loss: 0.853504, running train acc: 0.597
==>>> it: 701, mem avg. loss: 0.539524, running mem acc: 0.754
==>>> it: 801, avg. loss: 0.840490, running train acc: 0.605
==>>> it: 801, mem avg. loss: 0.532035, running mem acc: 0.758
==>>> it: 901, avg. loss: 0.829896, running train acc: 0.611
==>>> it: 901, mem avg. loss: 0.524969, running mem acc: 0.760
[0.6865 0. 0. 0. 0. ]
-----------run 8 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.640658, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.125490, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.998535, running train acc: 0.355
==>>> it: 101, mem avg. loss: 1.459550, running mem acc: 0.539
==>>> it: 201, avg. loss: 1.846025, running train acc: 0.406
==>>> it: 201, mem avg. loss: 1.670816, running mem acc: 0.506
==>>> it: 301, avg. loss: 1.845912, running train acc: 0.428
==>>> it: 301, mem avg. loss: 1.727157, running mem acc: 0.502
==>>> it: 401, avg. loss: 1.848553, running train acc: 0.451
==>>> it: 401, mem avg. loss: 1.851222, running mem acc: 0.492
==>>> it: 501, avg. loss: 1.855619, running train acc: 0.471
==>>> it: 501, mem avg. loss: 1.895980, running mem acc: 0.494
==>>> it: 601, avg. loss: 1.808859, running train acc: 0.482
==>>> it: 601, mem avg. loss: 1.864798, running mem acc: 0.499
==>>> it: 701, avg. loss: 1.788004, running train acc: 0.493
==>>> it: 701, mem avg. loss: 1.880975, running mem acc: 0.496
==>>> it: 801, avg. loss: 1.750737, running train acc: 0.503
==>>> it: 801, mem avg. loss: 1.893232, running mem acc: 0.494
==>>> it: 901, avg. loss: 1.708517, running train acc: 0.516
==>>> it: 901, mem avg. loss: 1.929694, running mem acc: 0.491
[0.371 0.8055 0. 0. 0. ]
-----------run 8 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.869048, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.379610, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.226900, running train acc: 0.596
==>>> it: 101, mem avg. loss: 1.044557, running mem acc: 0.575
==>>> it: 201, avg. loss: 1.021688, running train acc: 0.653
==>>> it: 201, mem avg. loss: 1.063132, running mem acc: 0.592
==>>> it: 301, avg. loss: 0.948596, running train acc: 0.682
==>>> it: 301, mem avg. loss: 1.124988, running mem acc: 0.586
==>>> it: 401, avg. loss: 0.876770, running train acc: 0.703
==>>> it: 401, mem avg. loss: 1.102588, running mem acc: 0.594
==>>> it: 501, avg. loss: 0.854721, running train acc: 0.714
==>>> it: 501, mem avg. loss: 1.125861, running mem acc: 0.597
==>>> it: 601, avg. loss: 0.841682, running train acc: 0.720
==>>> it: 601, mem avg. loss: 1.144799, running mem acc: 0.598
==>>> it: 701, avg. loss: 0.818848, running train acc: 0.731
==>>> it: 701, mem avg. loss: 1.166258, running mem acc: 0.596
==>>> it: 801, avg. loss: 0.805328, running train acc: 0.738
==>>> it: 801, mem avg. loss: 1.175235, running mem acc: 0.595
==>>> it: 901, avg. loss: 0.788312, running train acc: 0.746
==>>> it: 901, mem avg. loss: 1.189387, running mem acc: 0.595
[0.0545 0.131 0.9245 0. 0. ]
-----------run 8 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.364166, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.895846, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.117741, running train acc: 0.626
==>>> it: 101, mem avg. loss: 1.140015, running mem acc: 0.603
==>>> it: 201, avg. loss: 0.841639, running train acc: 0.704
==>>> it: 201, mem avg. loss: 1.129366, running mem acc: 0.608
==>>> it: 301, avg. loss: 0.762097, running train acc: 0.723
==>>> it: 301, mem avg. loss: 1.202546, running mem acc: 0.601
==>>> it: 401, avg. loss: 0.715205, running train acc: 0.740
==>>> it: 401, mem avg. loss: 1.259776, running mem acc: 0.599
==>>> it: 501, avg. loss: 0.691452, running train acc: 0.748
==>>> it: 501, mem avg. loss: 1.279262, running mem acc: 0.598
==>>> it: 601, avg. loss: 0.677272, running train acc: 0.756
==>>> it: 601, mem avg. loss: 1.314479, running mem acc: 0.596
==>>> it: 701, avg. loss: 0.670566, running train acc: 0.761
==>>> it: 701, mem avg. loss: 1.318946, running mem acc: 0.598
==>>> it: 801, avg. loss: 0.661245, running train acc: 0.767
==>>> it: 801, mem avg. loss: 1.333615, running mem acc: 0.598
==>>> it: 901, avg. loss: 0.654927, running train acc: 0.770
==>>> it: 901, mem avg. loss: 1.364058, running mem acc: 0.593
[0.0195 0.0555 0.037 0.9185 0. ]
-----------run 8 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.104951, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.383237, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.254064, running train acc: 0.544
==>>> it: 101, mem avg. loss: 0.925790, running mem acc: 0.660
==>>> it: 201, avg. loss: 0.995922, running train acc: 0.626
==>>> it: 201, mem avg. loss: 1.019793, running mem acc: 0.652
==>>> it: 301, avg. loss: 0.882282, running train acc: 0.666
==>>> it: 301, mem avg. loss: 1.061561, running mem acc: 0.648
==>>> it: 401, avg. loss: 0.838588, running train acc: 0.681
==>>> it: 401, mem avg. loss: 1.082002, running mem acc: 0.651
==>>> it: 501, avg. loss: 0.811248, running train acc: 0.686
==>>> it: 501, mem avg. loss: 1.071984, running mem acc: 0.657
==>>> it: 601, avg. loss: 0.768018, running train acc: 0.703
==>>> it: 601, mem avg. loss: 1.089626, running mem acc: 0.657
==>>> it: 701, avg. loss: 0.739496, running train acc: 0.711
==>>> it: 701, mem avg. loss: 1.120645, running mem acc: 0.649
==>>> it: 801, avg. loss: 0.719601, running train acc: 0.718
==>>> it: 801, mem avg. loss: 1.116099, running mem acc: 0.646
==>>> it: 901, avg. loss: 0.700080, running train acc: 0.726
==>>> it: 901, mem avg. loss: 1.117305, running mem acc: 0.648
[0.013 0.0645 0.083 0.155 0.8345]
-----------run 8-----------avg_end_acc 0.22999999999999998-----------train time 356.3757390975952
Task: 0, Labels:[8, 5]
Task: 1, Labels:[3, 0]
Task: 2, Labels:[2, 9]
Task: 3, Labels:[6, 7]
Task: 4, Labels:[4, 1]
buffer has 200 slots
-----------run 9 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.842968, running train acc: 0.400
==>>> it: 1, mem avg. loss: 1.131518, running mem acc: 0.700
==>>> it: 101, avg. loss: 0.556412, running train acc: 0.804
==>>> it: 101, mem avg. loss: 0.661581, running mem acc: 0.757
==>>> it: 201, avg. loss: 0.443284, running train acc: 0.845
==>>> it: 201, mem avg. loss: 0.544123, running mem acc: 0.795
==>>> it: 301, avg. loss: 0.389447, running train acc: 0.864
==>>> it: 301, mem avg. loss: 0.519375, running mem acc: 0.804
==>>> it: 401, avg. loss: 0.354715, running train acc: 0.877
==>>> it: 401, mem avg. loss: 0.488643, running mem acc: 0.815
==>>> it: 501, avg. loss: 0.325475, running train acc: 0.886
==>>> it: 501, mem avg. loss: 0.462648, running mem acc: 0.827
==>>> it: 601, avg. loss: 0.309258, running train acc: 0.890
==>>> it: 601, mem avg. loss: 0.441764, running mem acc: 0.835
==>>> it: 701, avg. loss: 0.302434, running train acc: 0.894
==>>> it: 701, mem avg. loss: 0.418398, running mem acc: 0.844
==>>> it: 801, avg. loss: 0.292218, running train acc: 0.898
==>>> it: 801, mem avg. loss: 0.406166, running mem acc: 0.848
==>>> it: 901, avg. loss: 0.287177, running train acc: 0.901
==>>> it: 901, mem avg. loss: 0.396439, running mem acc: 0.852
[0.9515 0. 0. 0. 0. ]
-----------run 9 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.181549, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.024473, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.471610, running train acc: 0.524
==>>> it: 101, mem avg. loss: 1.430310, running mem acc: 0.496
==>>> it: 201, avg. loss: 1.382991, running train acc: 0.563
==>>> it: 201, mem avg. loss: 1.585641, running mem acc: 0.487
==>>> it: 301, avg. loss: 1.330016, running train acc: 0.586
==>>> it: 301, mem avg. loss: 1.759282, running mem acc: 0.472
==>>> it: 401, avg. loss: 1.385414, running train acc: 0.590
==>>> it: 401, mem avg. loss: 1.831997, running mem acc: 0.475
==>>> it: 501, avg. loss: 1.428205, running train acc: 0.593
==>>> it: 501, mem avg. loss: 1.798466, running mem acc: 0.490
==>>> it: 601, avg. loss: 1.453426, running train acc: 0.599
==>>> it: 601, mem avg. loss: 1.811237, running mem acc: 0.492
==>>> it: 701, avg. loss: 1.452045, running train acc: 0.606
==>>> it: 701, mem avg. loss: 1.809529, running mem acc: 0.496
==>>> it: 801, avg. loss: 1.452844, running train acc: 0.613
==>>> it: 801, mem avg. loss: 1.795872, running mem acc: 0.500
==>>> it: 901, avg. loss: 1.484972, running train acc: 0.611
==>>> it: 901, mem avg. loss: 1.765529, running mem acc: 0.508
[0.0175 0.926 0. 0. 0. ]
-----------run 9 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.463183, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.360976, running mem acc: 0.650
==>>> it: 101, avg. loss: 1.285945, running train acc: 0.579
==>>> it: 101, mem avg. loss: 1.086202, running mem acc: 0.575
==>>> it: 201, avg. loss: 1.062238, running train acc: 0.653
==>>> it: 201, mem avg. loss: 1.127905, running mem acc: 0.584
==>>> it: 301, avg. loss: 0.955364, running train acc: 0.687
==>>> it: 301, mem avg. loss: 1.180175, running mem acc: 0.590
==>>> it: 401, avg. loss: 0.883162, running train acc: 0.704
==>>> it: 401, mem avg. loss: 1.221584, running mem acc: 0.585
==>>> it: 501, avg. loss: 0.856146, running train acc: 0.714
==>>> it: 501, mem avg. loss: 1.208999, running mem acc: 0.590
==>>> it: 601, avg. loss: 0.841073, running train acc: 0.719
==>>> it: 601, mem avg. loss: 1.240104, running mem acc: 0.588
==>>> it: 701, avg. loss: 0.835502, running train acc: 0.727
==>>> it: 701, mem avg. loss: 1.233476, running mem acc: 0.590
==>>> it: 801, avg. loss: 0.856684, running train acc: 0.726
==>>> it: 801, mem avg. loss: 1.237129, running mem acc: 0.593
==>>> it: 901, avg. loss: 0.860966, running train acc: 0.729
==>>> it: 901, mem avg. loss: 1.286882, running mem acc: 0.587
[0.0415 0.0285 0.9455 0. 0. ]
-----------run 9 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.400607, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.415224, running mem acc: 0.600
==>>> it: 101, avg. loss: 1.061653, running train acc: 0.630
==>>> it: 101, mem avg. loss: 0.830434, running mem acc: 0.688
==>>> it: 201, avg. loss: 0.845992, running train acc: 0.706
==>>> it: 201, mem avg. loss: 0.932125, running mem acc: 0.685
==>>> it: 301, avg. loss: 0.743845, running train acc: 0.740
==>>> it: 301, mem avg. loss: 0.970450, running mem acc: 0.674
==>>> it: 401, avg. loss: 0.714553, running train acc: 0.757
==>>> it: 401, mem avg. loss: 1.044505, running mem acc: 0.665
==>>> it: 501, avg. loss: 0.669086, running train acc: 0.773
==>>> it: 501, mem avg. loss: 1.063820, running mem acc: 0.660
==>>> it: 601, avg. loss: 0.653266, running train acc: 0.777
==>>> it: 601, mem avg. loss: 1.093866, running mem acc: 0.658
==>>> it: 701, avg. loss: 0.624353, running train acc: 0.787
==>>> it: 701, mem avg. loss: 1.078608, running mem acc: 0.664
==>>> it: 801, avg. loss: 0.597735, running train acc: 0.796
==>>> it: 801, mem avg. loss: 1.093733, running mem acc: 0.662
==>>> it: 901, avg. loss: 0.585669, running train acc: 0.802
==>>> it: 901, mem avg. loss: 1.109949, running mem acc: 0.657
[0.0385 0.071 0.186 0.9505 0. ]
-----------run 9 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.323338, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.034167, running mem acc: 1.000
==>>> it: 101, avg. loss: 0.907406, running train acc: 0.697
==>>> it: 101, mem avg. loss: 1.047464, running mem acc: 0.609
==>>> it: 201, avg. loss: 0.678379, running train acc: 0.763
==>>> it: 201, mem avg. loss: 1.127182, running mem acc: 0.610
==>>> it: 301, avg. loss: 0.589772, running train acc: 0.795
==>>> it: 301, mem avg. loss: 1.179885, running mem acc: 0.609
==>>> it: 401, avg. loss: 0.561710, running train acc: 0.807
==>>> it: 401, mem avg. loss: 1.227743, running mem acc: 0.603
==>>> it: 501, avg. loss: 0.540920, running train acc: 0.815
==>>> it: 501, mem avg. loss: 1.282048, running mem acc: 0.596
==>>> it: 601, avg. loss: 0.535182, running train acc: 0.818
==>>> it: 601, mem avg. loss: 1.295246, running mem acc: 0.603
==>>> it: 701, avg. loss: 0.520143, running train acc: 0.824
==>>> it: 701, mem avg. loss: 1.298665, running mem acc: 0.608
==>>> it: 801, avg. loss: 0.505921, running train acc: 0.829
==>>> it: 801, mem avg. loss: 1.300904, running mem acc: 0.610
==>>> it: 901, avg. loss: 0.490784, running train acc: 0.834
==>>> it: 901, mem avg. loss: 1.325279, running mem acc: 0.605
[0.0105 0.0045 0.0615 0.0985 0.969 ]
-----------run 9-----------avg_end_acc 0.22879999999999998-----------train time 351.76325249671936
Task: 0, Labels:[6, 8]
Task: 1, Labels:[5, 1]
Task: 2, Labels:[2, 4]
Task: 3, Labels:[9, 7]
Task: 4, Labels:[0, 3]
buffer has 200 slots
-----------run 10 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.964619, running train acc: 0.200
==>>> it: 1, mem avg. loss: 0.633902, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.569551, running train acc: 0.803
==>>> it: 101, mem avg. loss: 0.758052, running mem acc: 0.736
==>>> it: 201, avg. loss: 0.463436, running train acc: 0.842
==>>> it: 201, mem avg. loss: 0.627231, running mem acc: 0.777
==>>> it: 301, avg. loss: 0.403481, running train acc: 0.863
==>>> it: 301, mem avg. loss: 0.603257, running mem acc: 0.791
==>>> it: 401, avg. loss: 0.391208, running train acc: 0.870
==>>> it: 401, mem avg. loss: 0.579609, running mem acc: 0.800
==>>> it: 501, avg. loss: 0.377673, running train acc: 0.875
==>>> it: 501, mem avg. loss: 0.570509, running mem acc: 0.805
==>>> it: 601, avg. loss: 0.359968, running train acc: 0.881
==>>> it: 601, mem avg. loss: 0.560699, running mem acc: 0.808
==>>> it: 701, avg. loss: 0.334558, running train acc: 0.889
==>>> it: 701, mem avg. loss: 0.539870, running mem acc: 0.816
==>>> it: 801, avg. loss: 0.321128, running train acc: 0.892
==>>> it: 801, mem avg. loss: 0.518969, running mem acc: 0.822
==>>> it: 901, avg. loss: 0.313590, running train acc: 0.895
==>>> it: 901, mem avg. loss: 0.506427, running mem acc: 0.825
[0.949 0. 0. 0. 0. ]
-----------run 10 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.542141, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.112130, running mem acc: 0.950
==>>> it: 101, avg. loss: 1.634136, running train acc: 0.496
==>>> it: 101, mem avg. loss: 1.089673, running mem acc: 0.581
==>>> it: 201, avg. loss: 1.476262, running train acc: 0.539
==>>> it: 201, mem avg. loss: 1.266163, running mem acc: 0.533
==>>> it: 301, avg. loss: 1.438348, running train acc: 0.555
==>>> it: 301, mem avg. loss: 1.383494, running mem acc: 0.517
==>>> it: 401, avg. loss: 1.402216, running train acc: 0.576
==>>> it: 401, mem avg. loss: 1.423515, running mem acc: 0.512
==>>> it: 501, avg. loss: 1.419565, running train acc: 0.587
==>>> it: 501, mem avg. loss: 1.486798, running mem acc: 0.506
==>>> it: 601, avg. loss: 1.399537, running train acc: 0.602
==>>> it: 601, mem avg. loss: 1.499924, running mem acc: 0.503
==>>> it: 701, avg. loss: 1.406327, running train acc: 0.608
==>>> it: 701, mem avg. loss: 1.483819, running mem acc: 0.510
==>>> it: 801, avg. loss: 1.417939, running train acc: 0.615
==>>> it: 801, mem avg. loss: 1.490868, running mem acc: 0.512
==>>> it: 901, avg. loss: 1.408598, running train acc: 0.625
==>>> it: 901, mem avg. loss: 1.537434, running mem acc: 0.509
[0.285 0.934 0. 0. 0. ]
-----------run 10 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.624074, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.348743, running mem acc: 0.650
==>>> it: 101, avg. loss: 1.592387, running train acc: 0.341
==>>> it: 101, mem avg. loss: 1.361258, running mem acc: 0.510
==>>> it: 201, avg. loss: 1.426194, running train acc: 0.383
==>>> it: 201, mem avg. loss: 1.409031, running mem acc: 0.507
==>>> it: 301, avg. loss: 1.323234, running train acc: 0.418
==>>> it: 301, mem avg. loss: 1.349841, running mem acc: 0.540
==>>> it: 401, avg. loss: 1.291882, running train acc: 0.435
==>>> it: 401, mem avg. loss: 1.394734, running mem acc: 0.543
==>>> it: 501, avg. loss: 1.230719, running train acc: 0.461
==>>> it: 501, mem avg. loss: 1.389584, running mem acc: 0.552
==>>> it: 601, avg. loss: 1.205972, running train acc: 0.477
==>>> it: 601, mem avg. loss: 1.360976, running mem acc: 0.564
==>>> it: 701, avg. loss: 1.163111, running train acc: 0.496
==>>> it: 701, mem avg. loss: 1.331566, running mem acc: 0.573
==>>> it: 801, avg. loss: 1.136324, running train acc: 0.513
==>>> it: 801, mem avg. loss: 1.350244, running mem acc: 0.573
==>>> it: 901, avg. loss: 1.112218, running train acc: 0.526
==>>> it: 901, mem avg. loss: 1.335858, running mem acc: 0.579
[0.1 0.2405 0.7215 0. 0. ]
-----------run 10 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.227316, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.230024, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.106009, running train acc: 0.594
==>>> it: 101, mem avg. loss: 1.160731, running mem acc: 0.581
==>>> it: 201, avg. loss: 0.881787, running train acc: 0.672
==>>> it: 201, mem avg. loss: 1.214181, running mem acc: 0.577
==>>> it: 301, avg. loss: 0.781103, running train acc: 0.708
==>>> it: 301, mem avg. loss: 1.307234, running mem acc: 0.567
==>>> it: 401, avg. loss: 0.738705, running train acc: 0.725
==>>> it: 401, mem avg. loss: 1.417592, running mem acc: 0.554
==>>> it: 501, avg. loss: 0.717494, running train acc: 0.734
==>>> it: 501, mem avg. loss: 1.454643, running mem acc: 0.557
==>>> it: 601, avg. loss: 0.684712, running train acc: 0.748
==>>> it: 601, mem avg. loss: 1.502194, running mem acc: 0.550
==>>> it: 701, avg. loss: 0.657722, running train acc: 0.759
==>>> it: 701, mem avg. loss: 1.502611, running mem acc: 0.550
==>>> it: 801, avg. loss: 0.641722, running train acc: 0.766
==>>> it: 801, mem avg. loss: 1.491824, running mem acc: 0.557
==>>> it: 901, avg. loss: 0.631792, running train acc: 0.770
==>>> it: 901, mem avg. loss: 1.499430, running mem acc: 0.556
[0.06 0.031 0.1795 0.9035 0. ]
-----------run 10 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.872224, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.487156, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.985785, running train acc: 0.695
==>>> it: 101, mem avg. loss: 0.845555, running mem acc: 0.675
==>>> it: 201, avg. loss: 0.788328, running train acc: 0.741
==>>> it: 201, mem avg. loss: 1.006548, running mem acc: 0.651
==>>> it: 301, avg. loss: 0.693055, running train acc: 0.768
==>>> it: 301, mem avg. loss: 0.990421, running mem acc: 0.667
==>>> it: 401, avg. loss: 0.627377, running train acc: 0.784
==>>> it: 401, mem avg. loss: 1.038227, running mem acc: 0.664
==>>> it: 501, avg. loss: 0.603819, running train acc: 0.790
==>>> it: 501, mem avg. loss: 1.071894, running mem acc: 0.668
==>>> it: 601, avg. loss: 0.578010, running train acc: 0.799
==>>> it: 601, mem avg. loss: 1.111116, running mem acc: 0.658
==>>> it: 701, avg. loss: 0.549019, running train acc: 0.809
==>>> it: 701, mem avg. loss: 1.106638, running mem acc: 0.659
==>>> it: 801, avg. loss: 0.539998, running train acc: 0.811
==>>> it: 801, mem avg. loss: 1.140508, running mem acc: 0.653
==>>> it: 901, avg. loss: 0.525956, running train acc: 0.817
==>>> it: 901, mem avg. loss: 1.164174, running mem acc: 0.651
[0.006 0.016 0.064 0.1635 0.913 ]
-----------run 10-----------avg_end_acc 0.2325-----------train time 358.60441541671753
Task: 0, Labels:[7, 2]
Task: 1, Labels:[5, 9]
Task: 2, Labels:[3, 1]
Task: 3, Labels:[6, 0]
Task: 4, Labels:[8, 4]
buffer has 200 slots
-----------run 11 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.634367, running train acc: 0.250
==>>> it: 1, mem avg. loss: 0.788564, running mem acc: 0.600
==>>> it: 101, avg. loss: 0.883966, running train acc: 0.629
==>>> it: 101, mem avg. loss: 0.698197, running mem acc: 0.695
==>>> it: 201, avg. loss: 0.769385, running train acc: 0.671
==>>> it: 201, mem avg. loss: 0.565793, running mem acc: 0.748
==>>> it: 301, avg. loss: 0.718458, running train acc: 0.697
==>>> it: 301, mem avg. loss: 0.517152, running mem acc: 0.769
==>>> it: 401, avg. loss: 0.676678, running train acc: 0.720
==>>> it: 401, mem avg. loss: 0.477971, running mem acc: 0.789
==>>> it: 501, avg. loss: 0.649954, running train acc: 0.735
==>>> it: 501, mem avg. loss: 0.462954, running mem acc: 0.798
==>>> it: 601, avg. loss: 0.616626, running train acc: 0.750
==>>> it: 601, mem avg. loss: 0.448258, running mem acc: 0.806
==>>> it: 701, avg. loss: 0.585877, running train acc: 0.764
==>>> it: 701, mem avg. loss: 0.431959, running mem acc: 0.812
==>>> it: 801, avg. loss: 0.570696, running train acc: 0.770
==>>> it: 801, mem avg. loss: 0.420499, running mem acc: 0.818
==>>> it: 901, avg. loss: 0.559472, running train acc: 0.776
==>>> it: 901, mem avg. loss: 0.417183, running mem acc: 0.820
[0.844 0. 0. 0. 0. ]
-----------run 11 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.732858, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.149319, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.868239, running train acc: 0.436
==>>> it: 101, mem avg. loss: 1.260214, running mem acc: 0.546
==>>> it: 201, avg. loss: 1.663856, running train acc: 0.513
==>>> it: 201, mem avg. loss: 1.367483, running mem acc: 0.535
==>>> it: 301, avg. loss: 1.606956, running train acc: 0.550
==>>> it: 301, mem avg. loss: 1.425993, running mem acc: 0.539
==>>> it: 401, avg. loss: 1.587434, running train acc: 0.570
==>>> it: 401, mem avg. loss: 1.427580, running mem acc: 0.547
==>>> it: 501, avg. loss: 1.562295, running train acc: 0.588
==>>> it: 501, mem avg. loss: 1.396664, running mem acc: 0.556
==>>> it: 601, avg. loss: 1.572996, running train acc: 0.599
==>>> it: 601, mem avg. loss: 1.375718, running mem acc: 0.561
==>>> it: 701, avg. loss: 1.617373, running train acc: 0.603
==>>> it: 701, mem avg. loss: 1.387471, running mem acc: 0.563
==>>> it: 801, avg. loss: 1.630451, running train acc: 0.609
==>>> it: 801, mem avg. loss: 1.406090, running mem acc: 0.567
==>>> it: 901, avg. loss: 1.652611, running train acc: 0.613
==>>> it: 901, mem avg. loss: 1.412448, running mem acc: 0.569
[0.051 0.948 0. 0. 0. ]
-----------run 11 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.075732, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.056574, running mem acc: 1.000
==>>> it: 101, avg. loss: 1.200332, running train acc: 0.589
==>>> it: 101, mem avg. loss: 1.419644, running mem acc: 0.510
==>>> it: 201, avg. loss: 0.900834, running train acc: 0.690
==>>> it: 201, mem avg. loss: 1.452471, running mem acc: 0.509
==>>> it: 301, avg. loss: 0.792861, running train acc: 0.729
==>>> it: 301, mem avg. loss: 1.483005, running mem acc: 0.525
==>>> it: 401, avg. loss: 0.758255, running train acc: 0.743
==>>> it: 401, mem avg. loss: 1.446166, running mem acc: 0.546
==>>> it: 501, avg. loss: 0.728275, running train acc: 0.754
==>>> it: 501, mem avg. loss: 1.449351, running mem acc: 0.557
==>>> it: 601, avg. loss: 0.688917, running train acc: 0.766
==>>> it: 601, mem avg. loss: 1.456176, running mem acc: 0.560
==>>> it: 701, avg. loss: 0.677167, running train acc: 0.771
==>>> it: 701, mem avg. loss: 1.470191, running mem acc: 0.561
==>>> it: 801, avg. loss: 0.661188, running train acc: 0.776
==>>> it: 801, mem avg. loss: 1.444812, running mem acc: 0.566
==>>> it: 901, avg. loss: 0.640713, running train acc: 0.783
==>>> it: 901, mem avg. loss: 1.415090, running mem acc: 0.572
[0.036 0.02 0.952 0. 0. ]
-----------run 11 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.347911, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.343016, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.133858, running train acc: 0.635
==>>> it: 101, mem avg. loss: 1.184931, running mem acc: 0.609
==>>> it: 201, avg. loss: 0.851264, running train acc: 0.714
==>>> it: 201, mem avg. loss: 1.218354, running mem acc: 0.600
==>>> it: 301, avg. loss: 0.757188, running train acc: 0.743
==>>> it: 301, mem avg. loss: 1.261394, running mem acc: 0.595
==>>> it: 401, avg. loss: 0.681236, running train acc: 0.769
==>>> it: 401, mem avg. loss: 1.240945, running mem acc: 0.613
==>>> it: 501, avg. loss: 0.657840, running train acc: 0.775
==>>> it: 501, mem avg. loss: 1.282916, running mem acc: 0.610
==>>> it: 601, avg. loss: 0.634225, running train acc: 0.787
==>>> it: 601, mem avg. loss: 1.285007, running mem acc: 0.616
==>>> it: 701, avg. loss: 0.590608, running train acc: 0.800
==>>> it: 701, mem avg. loss: 1.313342, running mem acc: 0.610
==>>> it: 801, avg. loss: 0.565548, running train acc: 0.808
==>>> it: 801, mem avg. loss: 1.311907, running mem acc: 0.611
==>>> it: 901, avg. loss: 0.547157, running train acc: 0.814
==>>> it: 901, mem avg. loss: 1.327218, running mem acc: 0.607
[0.03 0.061 0.073 0.959 0. ]
-----------run 11 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.093782, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.085141, running mem acc: 0.650
==>>> it: 101, avg. loss: 0.981959, running train acc: 0.695
==>>> it: 101, mem avg. loss: 1.287211, running mem acc: 0.600
==>>> it: 201, avg. loss: 0.746335, running train acc: 0.753
==>>> it: 201, mem avg. loss: 1.262462, running mem acc: 0.604
==>>> it: 301, avg. loss: 0.660733, running train acc: 0.776
==>>> it: 301, mem avg. loss: 1.289982, running mem acc: 0.611
==>>> it: 401, avg. loss: 0.589288, running train acc: 0.800
==>>> it: 401, mem avg. loss: 1.283478, running mem acc: 0.613
==>>> it: 501, avg. loss: 0.542818, running train acc: 0.816
==>>> it: 501, mem avg. loss: 1.349436, running mem acc: 0.592
==>>> it: 601, avg. loss: 0.510581, running train acc: 0.826
==>>> it: 601, mem avg. loss: 1.423337, running mem acc: 0.578
==>>> it: 701, avg. loss: 0.486769, running train acc: 0.833
==>>> it: 701, mem avg. loss: 1.436769, running mem acc: 0.575
==>>> it: 801, avg. loss: 0.468700, running train acc: 0.839
==>>> it: 801, mem avg. loss: 1.427190, running mem acc: 0.580
==>>> it: 901, avg. loss: 0.456845, running train acc: 0.844
==>>> it: 901, mem avg. loss: 1.389046, running mem acc: 0.590
[0.0235 0.017 0.0205 0.074 0.8935]
-----------run 11-----------avg_end_acc 0.2057-----------train time 359.34954738616943
Task: 0, Labels:[0, 9]
Task: 1, Labels:[6, 8]
Task: 2, Labels:[4, 1]
Task: 3, Labels:[5, 7]
Task: 4, Labels:[2, 3]
buffer has 200 slots
-----------run 12 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.718163, running train acc: 0.300
==>>> it: 1, mem avg. loss: 0.591034, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.843719, running train acc: 0.702
==>>> it: 101, mem avg. loss: 0.695816, running mem acc: 0.709
==>>> it: 201, avg. loss: 0.699118, running train acc: 0.737
==>>> it: 201, mem avg. loss: 0.547421, running mem acc: 0.770
==>>> it: 301, avg. loss: 0.637162, running train acc: 0.752
==>>> it: 301, mem avg. loss: 0.488480, running mem acc: 0.795
==>>> it: 401, avg. loss: 0.589690, running train acc: 0.771
==>>> it: 401, mem avg. loss: 0.461834, running mem acc: 0.809
==>>> it: 501, avg. loss: 0.561671, running train acc: 0.784
==>>> it: 501, mem avg. loss: 0.441033, running mem acc: 0.817
==>>> it: 601, avg. loss: 0.548769, running train acc: 0.791
==>>> it: 601, mem avg. loss: 0.423736, running mem acc: 0.828
==>>> it: 701, avg. loss: 0.540185, running train acc: 0.793
==>>> it: 701, mem avg. loss: 0.409093, running mem acc: 0.832
==>>> it: 801, avg. loss: 0.528331, running train acc: 0.800
==>>> it: 801, mem avg. loss: 0.396985, running mem acc: 0.836
==>>> it: 901, avg. loss: 0.518779, running train acc: 0.804
==>>> it: 901, mem avg. loss: 0.382138, running mem acc: 0.842
[0.846 0. 0. 0. 0. ]
-----------run 12 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.671188, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.102946, running mem acc: 0.950
==>>> it: 101, avg. loss: 1.534376, running train acc: 0.521
==>>> it: 101, mem avg. loss: 1.013115, running mem acc: 0.587
==>>> it: 201, avg. loss: 1.469837, running train acc: 0.574
==>>> it: 201, mem avg. loss: 1.137695, running mem acc: 0.575
==>>> it: 301, avg. loss: 1.472774, running train acc: 0.602
==>>> it: 301, mem avg. loss: 1.187251, running mem acc: 0.577
==>>> it: 401, avg. loss: 1.429122, running train acc: 0.626
==>>> it: 401, mem avg. loss: 1.199661, running mem acc: 0.569
==>>> it: 501, avg. loss: 1.460481, running train acc: 0.633
==>>> it: 501, mem avg. loss: 1.197105, running mem acc: 0.574
==>>> it: 601, avg. loss: 1.461908, running train acc: 0.640
==>>> it: 601, mem avg. loss: 1.215386, running mem acc: 0.576
==>>> it: 701, avg. loss: 1.474346, running train acc: 0.644
==>>> it: 701, mem avg. loss: 1.260712, running mem acc: 0.570
==>>> it: 801, avg. loss: 1.487364, running train acc: 0.646
==>>> it: 801, mem avg. loss: 1.285599, running mem acc: 0.566
==>>> it: 901, avg. loss: 1.513733, running train acc: 0.646
==>>> it: 901, mem avg. loss: 1.285095, running mem acc: 0.566
[0.084 0.964 0. 0. 0. ]
-----------run 12 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.386350, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.796488, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.184053, running train acc: 0.611
==>>> it: 101, mem avg. loss: 1.190895, running mem acc: 0.552
==>>> it: 201, avg. loss: 0.940010, running train acc: 0.684
==>>> it: 201, mem avg. loss: 1.374536, running mem acc: 0.528
==>>> it: 301, avg. loss: 0.884326, running train acc: 0.705
==>>> it: 301, mem avg. loss: 1.450029, running mem acc: 0.531
==>>> it: 401, avg. loss: 0.838504, running train acc: 0.721
==>>> it: 401, mem avg. loss: 1.507351, running mem acc: 0.525
==>>> it: 501, avg. loss: 0.794028, running train acc: 0.740
==>>> it: 501, mem avg. loss: 1.532764, running mem acc: 0.525
==>>> it: 601, avg. loss: 0.779128, running train acc: 0.748
==>>> it: 601, mem avg. loss: 1.527451, running mem acc: 0.529
==>>> it: 701, avg. loss: 0.773477, running train acc: 0.752
==>>> it: 701, mem avg. loss: 1.532549, running mem acc: 0.528
==>>> it: 801, avg. loss: 0.770775, running train acc: 0.755
==>>> it: 801, mem avg. loss: 1.516704, running mem acc: 0.533
==>>> it: 901, avg. loss: 0.767962, running train acc: 0.756
==>>> it: 901, mem avg. loss: 1.500694, running mem acc: 0.542
[0.0715 0.1165 0.943 0. 0. ]
-----------run 12 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.260730, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.972202, running mem acc: 0.650
==>>> it: 101, avg. loss: 1.406278, running train acc: 0.496
==>>> it: 101, mem avg. loss: 1.175551, running mem acc: 0.591
==>>> it: 201, avg. loss: 1.134967, running train acc: 0.573
==>>> it: 201, mem avg. loss: 1.225733, running mem acc: 0.591
==>>> it: 301, avg. loss: 1.047659, running train acc: 0.603
==>>> it: 301, mem avg. loss: 1.271801, running mem acc: 0.594
==>>> it: 401, avg. loss: 0.987809, running train acc: 0.625
==>>> it: 401, mem avg. loss: 1.317990, running mem acc: 0.589
==>>> it: 501, avg. loss: 0.958204, running train acc: 0.638
==>>> it: 501, mem avg. loss: 1.353367, running mem acc: 0.587
==>>> it: 601, avg. loss: 0.927363, running train acc: 0.650
==>>> it: 601, mem avg. loss: 1.361680, running mem acc: 0.587
==>>> it: 701, avg. loss: 0.893504, running train acc: 0.661
==>>> it: 701, mem avg. loss: 1.346073, running mem acc: 0.591
==>>> it: 801, avg. loss: 0.867834, running train acc: 0.672
==>>> it: 801, mem avg. loss: 1.339618, running mem acc: 0.590
==>>> it: 901, avg. loss: 0.862486, running train acc: 0.677
==>>> it: 901, mem avg. loss: 1.360422, running mem acc: 0.587
[0.063 0.177 0.204 0.833 0. ]
-----------run 12 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.220604, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.299554, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.226357, running train acc: 0.524
==>>> it: 101, mem avg. loss: 0.921056, running mem acc: 0.688
==>>> it: 201, avg. loss: 1.023460, running train acc: 0.589
==>>> it: 201, mem avg. loss: 1.033062, running mem acc: 0.657
==>>> it: 301, avg. loss: 0.926138, running train acc: 0.624
==>>> it: 301, mem avg. loss: 1.063162, running mem acc: 0.654
==>>> it: 401, avg. loss: 0.867517, running train acc: 0.648
==>>> it: 401, mem avg. loss: 1.120561, running mem acc: 0.641
==>>> it: 501, avg. loss: 0.824150, running train acc: 0.664
==>>> it: 501, mem avg. loss: 1.138012, running mem acc: 0.636
==>>> it: 601, avg. loss: 0.789811, running train acc: 0.677
==>>> it: 601, mem avg. loss: 1.151197, running mem acc: 0.630
==>>> it: 701, avg. loss: 0.759823, running train acc: 0.688
==>>> it: 701, mem avg. loss: 1.147362, running mem acc: 0.630
==>>> it: 801, avg. loss: 0.744000, running train acc: 0.695
==>>> it: 801, mem avg. loss: 1.167487, running mem acc: 0.631
==>>> it: 901, avg. loss: 0.725395, running train acc: 0.702
==>>> it: 901, mem avg. loss: 1.161487, running mem acc: 0.635
[0.039 0.0925 0.1995 0.0635 0.796 ]
-----------run 12-----------avg_end_acc 0.23810000000000003-----------train time 355.9063673019409
Task: 0, Labels:[2, 4]
Task: 1, Labels:[5, 3]
Task: 2, Labels:[7, 9]
Task: 3, Labels:[1, 0]
Task: 4, Labels:[8, 6]
buffer has 200 slots
-----------run 13 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 1.605964, running train acc: 0.350
==>>> it: 1, mem avg. loss: 0.470038, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.023316, running train acc: 0.541
==>>> it: 101, mem avg. loss: 0.800925, running mem acc: 0.645
==>>> it: 201, avg. loss: 0.928463, running train acc: 0.579
==>>> it: 201, mem avg. loss: 0.652172, running mem acc: 0.713
==>>> it: 301, avg. loss: 0.899375, running train acc: 0.592
==>>> it: 301, mem avg. loss: 0.614246, running mem acc: 0.723
==>>> it: 401, avg. loss: 0.881908, running train acc: 0.598
==>>> it: 401, mem avg. loss: 0.586051, running mem acc: 0.732
==>>> it: 501, avg. loss: 0.867278, running train acc: 0.607
==>>> it: 501, mem avg. loss: 0.557181, running mem acc: 0.743
==>>> it: 601, avg. loss: 0.849194, running train acc: 0.612
==>>> it: 601, mem avg. loss: 0.543856, running mem acc: 0.748
==>>> it: 701, avg. loss: 0.832245, running train acc: 0.621
==>>> it: 701, mem avg. loss: 0.526697, running mem acc: 0.756
==>>> it: 801, avg. loss: 0.818336, running train acc: 0.628
==>>> it: 801, mem avg. loss: 0.515327, running mem acc: 0.762
==>>> it: 901, avg. loss: 0.808178, running train acc: 0.633
==>>> it: 901, mem avg. loss: 0.503886, running mem acc: 0.767
[0.708 0. 0. 0. 0. ]
-----------run 13 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.802822, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.207961, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.126050, running train acc: 0.296
==>>> it: 101, mem avg. loss: 1.540918, running mem acc: 0.514
==>>> it: 201, avg. loss: 2.001147, running train acc: 0.318
==>>> it: 201, mem avg. loss: 1.753057, running mem acc: 0.507
==>>> it: 301, avg. loss: 1.897873, running train acc: 0.335
==>>> it: 301, mem avg. loss: 1.852063, running mem acc: 0.506
==>>> it: 401, avg. loss: 1.866093, running train acc: 0.351
==>>> it: 401, mem avg. loss: 1.979853, running mem acc: 0.503
==>>> it: 501, avg. loss: 1.826138, running train acc: 0.366
==>>> it: 501, mem avg. loss: 2.008567, running mem acc: 0.507
==>>> it: 601, avg. loss: 1.786154, running train acc: 0.382
==>>> it: 601, mem avg. loss: 2.064100, running mem acc: 0.503
==>>> it: 701, avg. loss: 1.745653, running train acc: 0.396
==>>> it: 701, mem avg. loss: 2.053239, running mem acc: 0.512
==>>> it: 801, avg. loss: 1.725044, running train acc: 0.407
==>>> it: 801, mem avg. loss: 2.085665, running mem acc: 0.512
==>>> it: 901, avg. loss: 1.698092, running train acc: 0.415
==>>> it: 901, mem avg. loss: 2.089774, running mem acc: 0.514
[0.0855 0.6635 0. 0. 0. ]
-----------run 13 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.094963, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.402217, running mem acc: 0.750
==>>> it: 101, avg. loss: 1.728195, running train acc: 0.388
==>>> it: 101, mem avg. loss: 1.725669, running mem acc: 0.474
==>>> it: 201, avg. loss: 1.519148, running train acc: 0.476
==>>> it: 201, mem avg. loss: 1.860654, running mem acc: 0.458
==>>> it: 301, avg. loss: 1.395724, running train acc: 0.525
==>>> it: 301, mem avg. loss: 1.881791, running mem acc: 0.463
==>>> it: 401, avg. loss: 1.312124, running train acc: 0.562
==>>> it: 401, mem avg. loss: 1.859944, running mem acc: 0.471
==>>> it: 501, avg. loss: 1.267220, running train acc: 0.585
==>>> it: 501, mem avg. loss: 1.949038, running mem acc: 0.463
==>>> it: 601, avg. loss: 1.214039, running train acc: 0.608
==>>> it: 601, mem avg. loss: 1.928434, running mem acc: 0.469
==>>> it: 701, avg. loss: 1.209811, running train acc: 0.620
==>>> it: 701, mem avg. loss: 1.954459, running mem acc: 0.472
==>>> it: 801, avg. loss: 1.174084, running train acc: 0.636
==>>> it: 801, mem avg. loss: 1.966736, running mem acc: 0.471
==>>> it: 901, avg. loss: 1.148971, running train acc: 0.646
==>>> it: 901, mem avg. loss: 1.965914, running mem acc: 0.471
[0.0995 0.053 0.9215 0. 0. ]
-----------run 13 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.426067, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.283163, running mem acc: 0.550
==>>> it: 101, avg. loss: 1.273503, running train acc: 0.556
==>>> it: 101, mem avg. loss: 1.216225, running mem acc: 0.598
==>>> it: 201, avg. loss: 1.055884, running train acc: 0.624
==>>> it: 201, mem avg. loss: 1.346750, running mem acc: 0.569
==>>> it: 301, avg. loss: 0.958056, running train acc: 0.667
==>>> it: 301, mem avg. loss: 1.429318, running mem acc: 0.554
==>>> it: 401, avg. loss: 0.902509, running train acc: 0.694
==>>> it: 401, mem avg. loss: 1.467311, running mem acc: 0.549
==>>> it: 501, avg. loss: 0.867040, running train acc: 0.704
==>>> it: 501, mem avg. loss: 1.529831, running mem acc: 0.543
==>>> it: 601, avg. loss: 0.823714, running train acc: 0.719
==>>> it: 601, mem avg. loss: 1.550125, running mem acc: 0.541
==>>> it: 701, avg. loss: 0.795800, running train acc: 0.732
==>>> it: 701, mem avg. loss: 1.573262, running mem acc: 0.536
==>>> it: 801, avg. loss: 0.781243, running train acc: 0.739
==>>> it: 801, mem avg. loss: 1.589653, running mem acc: 0.531
==>>> it: 901, avg. loss: 0.761411, running train acc: 0.747
==>>> it: 901, mem avg. loss: 1.591606, running mem acc: 0.526
[0.1655 0.0515 0.184 0.948 0. ]
-----------run 13 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.738444, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.951395, running mem acc: 0.650
==>>> it: 101, avg. loss: 0.950299, running train acc: 0.710
==>>> it: 101, mem avg. loss: 1.141976, running mem acc: 0.602
==>>> it: 201, avg. loss: 0.699778, running train acc: 0.772
==>>> it: 201, mem avg. loss: 1.079843, running mem acc: 0.622
==>>> it: 301, avg. loss: 0.615345, running train acc: 0.798
==>>> it: 301, mem avg. loss: 1.086415, running mem acc: 0.630
==>>> it: 401, avg. loss: 0.535348, running train acc: 0.823
==>>> it: 401, mem avg. loss: 1.138584, running mem acc: 0.621
==>>> it: 501, avg. loss: 0.509085, running train acc: 0.829
==>>> it: 501, mem avg. loss: 1.178642, running mem acc: 0.621
==>>> it: 601, avg. loss: 0.490169, running train acc: 0.835
==>>> it: 601, mem avg. loss: 1.191217, running mem acc: 0.622
==>>> it: 701, avg. loss: 0.473661, running train acc: 0.841
==>>> it: 701, mem avg. loss: 1.217431, running mem acc: 0.620
==>>> it: 801, avg. loss: 0.454391, running train acc: 0.847
==>>> it: 801, mem avg. loss: 1.246198, running mem acc: 0.615
==>>> it: 901, avg. loss: 0.438546, running train acc: 0.853
==>>> it: 901, mem avg. loss: 1.279826, running mem acc: 0.609
[0.019 0.0545 0.13 0.016 0.9485]
-----------run 13-----------avg_end_acc 0.23360000000000003-----------train time 355.909298658371
Task: 0, Labels:[5, 1]
Task: 1, Labels:[9, 8]
Task: 2, Labels:[6, 0]
Task: 3, Labels:[3, 2]
Task: 4, Labels:[7, 4]
buffer has 200 slots
-----------run 14 training batch 0-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 2.535123, running train acc: 0.450
==>>> it: 1, mem avg. loss: 0.974660, running mem acc: 0.800
==>>> it: 101, avg. loss: 0.616051, running train acc: 0.774
==>>> it: 101, mem avg. loss: 0.703601, running mem acc: 0.731
==>>> it: 201, avg. loss: 0.464682, running train acc: 0.829
==>>> it: 201, mem avg. loss: 0.536179, running mem acc: 0.791
==>>> it: 301, avg. loss: 0.409765, running train acc: 0.849
==>>> it: 301, mem avg. loss: 0.477468, running mem acc: 0.815
==>>> it: 401, avg. loss: 0.378538, running train acc: 0.865
==>>> it: 401, mem avg. loss: 0.456810, running mem acc: 0.826
==>>> it: 501, avg. loss: 0.350980, running train acc: 0.875
==>>> it: 501, mem avg. loss: 0.433002, running mem acc: 0.834
==>>> it: 601, avg. loss: 0.331925, running train acc: 0.882
==>>> it: 601, mem avg. loss: 0.418033, running mem acc: 0.840
==>>> it: 701, avg. loss: 0.312436, running train acc: 0.889
==>>> it: 701, mem avg. loss: 0.403851, running mem acc: 0.846
==>>> it: 801, avg. loss: 0.304566, running train acc: 0.893
==>>> it: 801, mem avg. loss: 0.389616, running mem acc: 0.851
==>>> it: 901, avg. loss: 0.296934, running train acc: 0.896
==>>> it: 901, mem avg. loss: 0.381823, running mem acc: 0.856
[0.955 0. 0. 0. 0. ]
-----------run 14 training batch 1-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.862129, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.319085, running mem acc: 0.900
==>>> it: 101, avg. loss: 1.953883, running train acc: 0.444
==>>> it: 101, mem avg. loss: 1.120724, running mem acc: 0.589
==>>> it: 201, avg. loss: 1.737157, running train acc: 0.498
==>>> it: 201, mem avg. loss: 1.248272, running mem acc: 0.578
==>>> it: 301, avg. loss: 1.641455, running train acc: 0.532
==>>> it: 301, mem avg. loss: 1.297246, running mem acc: 0.557
==>>> it: 401, avg. loss: 1.647166, running train acc: 0.551
==>>> it: 401, mem avg. loss: 1.402236, running mem acc: 0.548
==>>> it: 501, avg. loss: 1.621026, running train acc: 0.569
==>>> it: 501, mem avg. loss: 1.447042, running mem acc: 0.545
==>>> it: 601, avg. loss: 1.604343, running train acc: 0.582
==>>> it: 601, mem avg. loss: 1.493490, running mem acc: 0.543
==>>> it: 701, avg. loss: 1.583406, running train acc: 0.594
==>>> it: 701, mem avg. loss: 1.542538, running mem acc: 0.535
==>>> it: 801, avg. loss: 1.579960, running train acc: 0.600
==>>> it: 801, mem avg. loss: 1.549354, running mem acc: 0.537
==>>> it: 901, avg. loss: 1.562062, running train acc: 0.608
==>>> it: 901, mem avg. loss: 1.562158, running mem acc: 0.538
[0.2975 0.9005 0. 0. 0. ]
-----------run 14 training batch 2-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 10.188853, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.408725, running mem acc: 0.850
==>>> it: 101, avg. loss: 1.327490, running train acc: 0.546
==>>> it: 101, mem avg. loss: 1.064179, running mem acc: 0.599
==>>> it: 201, avg. loss: 1.108351, running train acc: 0.628
==>>> it: 201, mem avg. loss: 1.156029, running mem acc: 0.593
==>>> it: 301, avg. loss: 1.016189, running train acc: 0.662
==>>> it: 301, mem avg. loss: 1.188458, running mem acc: 0.592
==>>> it: 401, avg. loss: 0.959065, running train acc: 0.680
==>>> it: 401, mem avg. loss: 1.186450, running mem acc: 0.599
==>>> it: 501, avg. loss: 0.916438, running train acc: 0.696
==>>> it: 501, mem avg. loss: 1.193804, running mem acc: 0.602
==>>> it: 601, avg. loss: 0.882019, running train acc: 0.710
==>>> it: 601, mem avg. loss: 1.201014, running mem acc: 0.598
==>>> it: 701, avg. loss: 0.851802, running train acc: 0.723
==>>> it: 701, mem avg. loss: 1.219290, running mem acc: 0.594
==>>> it: 801, avg. loss: 0.828411, running train acc: 0.732
==>>> it: 801, mem avg. loss: 1.210299, running mem acc: 0.594
==>>> it: 901, avg. loss: 0.811306, running train acc: 0.739
==>>> it: 901, mem avg. loss: 1.199807, running mem acc: 0.597
[0.094 0.1795 0.8775 0. 0. ]
-----------run 14 training batch 3-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 9.537708, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.624893, running mem acc: 0.700
==>>> it: 101, avg. loss: 1.371354, running train acc: 0.464
==>>> it: 101, mem avg. loss: 1.174273, running mem acc: 0.579
==>>> it: 201, avg. loss: 1.128175, running train acc: 0.539
==>>> it: 201, mem avg. loss: 1.234973, running mem acc: 0.568
==>>> it: 301, avg. loss: 1.045085, running train acc: 0.567
==>>> it: 301, mem avg. loss: 1.253333, running mem acc: 0.571
==>>> it: 401, avg. loss: 0.993227, running train acc: 0.589
==>>> it: 401, mem avg. loss: 1.257866, running mem acc: 0.576
==>>> it: 501, avg. loss: 0.952184, running train acc: 0.604
==>>> it: 501, mem avg. loss: 1.243152, running mem acc: 0.585
==>>> it: 601, avg. loss: 0.943023, running train acc: 0.611
==>>> it: 601, mem avg. loss: 1.293733, running mem acc: 0.579
==>>> it: 701, avg. loss: 0.919529, running train acc: 0.624
==>>> it: 701, mem avg. loss: 1.304667, running mem acc: 0.580
==>>> it: 801, avg. loss: 0.902215, running train acc: 0.634
==>>> it: 801, mem avg. loss: 1.286311, running mem acc: 0.585
==>>> it: 901, avg. loss: 0.892034, running train acc: 0.641
==>>> it: 901, mem avg. loss: 1.297551, running mem acc: 0.583
[0.0575 0.0755 0.119 0.7845 0. ]
-----------run 14 training batch 4-------------
size: (10000, 32, 32, 3), (10000,)
==>>> it: 1, avg. loss: 8.640839, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.568195, running mem acc: 0.800
==>>> it: 101, avg. loss: 1.210621, running train acc: 0.543
==>>> it: 101, mem avg. loss: 1.125526, running mem acc: 0.576
==>>> it: 201, avg. loss: 0.990392, running train acc: 0.618
==>>> it: 201, mem avg. loss: 1.211097, running mem acc: 0.588
==>>> it: 301, avg. loss: 0.878117, running train acc: 0.655
==>>> it: 301, mem avg. loss: 1.200514, running mem acc: 0.595
==>>> it: 401, avg. loss: 0.802194, running train acc: 0.684
==>>> it: 401, mem avg. loss: 1.190065, running mem acc: 0.602
==>>> it: 501, avg. loss: 0.782097, running train acc: 0.693
==>>> it: 501, mem avg. loss: 1.197412, running mem acc: 0.602
==>>> it: 601, avg. loss: 0.742337, running train acc: 0.709
==>>> it: 601, mem avg. loss: 1.181473, running mem acc: 0.609
==>>> it: 701, avg. loss: 0.720857, running train acc: 0.718
==>>> it: 701, mem avg. loss: 1.190435, running mem acc: 0.611
==>>> it: 801, avg. loss: 0.706158, running train acc: 0.724
==>>> it: 801, mem avg. loss: 1.209461, running mem acc: 0.608
==>>> it: 901, avg. loss: 0.685339, running train acc: 0.732
==>>> it: 901, mem avg. loss: 1.221538, running mem acc: 0.606
[0.085 0.0825 0.105 0.0405 0.849 ]
-----------run 14-----------avg_end_acc 0.2324-----------train time 362.97244119644165
----------- Total 15 run: 5369.449762582779s -----------
----------- Avg_End_Acc (0.23339333333333337, 0.008643034586845445) Avg_End_Fgt (0.6465666666666666, 0.024354991887866728) Avg_Acc (0.45521255555555556, 0.013628742950858771) Avg_Bwtp (0.0, 0.0) Avg_Fwt (0.0, 0.0)-----------
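For reference, the bracketed vector printed after each task is the test accuracy on every task seen so far, and the per-run avg_end_acc is simply the mean of the final vector. Avg_End_Fgt is the average forgetting, commonly defined as the best accuracy a task ever reached minus its final accuracy. A minimal sketch of these two summary metrics, assuming the per-task vectors are stacked into a matrix (hypothetical helper, not the repo's own code):

    import numpy as np

    def end_metrics(acc):
        # acc[i, j] = accuracy on task j after training on task i,
        # i.e. row i is the bracketed vector printed after task i.
        end = acc[-1]                 # final accuracy per task
        avg_end_acc = end.mean()      # matches the per-run avg_end_acc line
        # forgetting: best accuracy ever reached on a task minus its final
        # accuracy, averaged over all tasks except the last one
        fgt = np.mean([acc[:-1, j].max() - end[j]
                       for j in range(acc.shape[1] - 1)])
        return avg_end_acc, fgt

For run 14 above, the final row is [0.085, 0.0825, 0.105, 0.0405, 0.849], whose mean is 0.2324, matching the avg_end_acc printed in the log.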

from online-continual-learning.

RaptorMai avatar RaptorMai commented on June 3, 2024

Hi, for the ASER and SCR paper, we fix the task order for simplicity. To duplicate the result, please add --fix_order True.
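For concreteness, the CIFAR-10 command from the question, with the flag appended, would be:

    python general_main.py --data cifar10 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 200 --aser_type asvm --n_smp_cls 9 --k 3 --num_task 5 --fix_order True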

from online-continual-learning.

czjghost avatar czjghost commented on June 3, 2024

> Hi, for the ASER and SCR paper, we fix the task order for simplicity. To duplicate the result, please add --fix_order True.

Excuse me, I copied many parts of your SCR implementation and implemented the buffer method myself to reproduce the SCR result, and I found that "fix_order=False" does not affect the result. Is adding --fix_order True important for reproducing the result?
@RaptorMai

Oh... sorry, I see now that other methods may be affected if fix_order is False...
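The per-run task headers in the log above make the difference visible: with fix_order left at False, every run draws a fresh class-to-task split (one run starts with Labels:[2, 4], the next with Labels:[8, 5], and so on), so any method whose performance depends on which classes share a task can vary across runs. A rough sketch of the two regimes, with hypothetical names rather than the repo's actual code:

    import random

    CIFAR10_CLASSES = list(range(10))

    def make_task_splits(fix_order, num_tasks=5, run_seed=0):
        classes = CIFAR10_CLASSES[:]
        if not fix_order:
            # each run reshuffles which classes land in which task
            random.Random(run_seed).shuffle(classes)
        per_task = len(classes) // num_tasks
        return [classes[i * per_task:(i + 1) * per_task]
                for i in range(num_tasks)]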

from online-continual-learning.
