Hi, I ran the code on CIFAR-100 with a memory of 2k, but I cannot reach the performance reported in the ASER paper. Could you give me some help? My experiment log is as follows.
(online-learning) abc@GPU-20221:/data2/abc/PycharmProjects/online-continual-learning-main$ CUDA_VISIBLE_DEVICES=1 python general_main.py --data cifar100 --cl_type nc --agent ER --update ASER --retrieve ASER --mem_size 2000 --aser_type asvm --n_smp_cls 1.5 --k 3 --num_task 10
Namespace(agent='ER', alpha=0.9, aser_type='asvm', batch=10, cl_type='nc', classifier_chill=0.01, clip=10.0, cuda=True, cumulative_delta=False, data='cifar100', epoch=1, eps_mem_batch=10, error_analysis=False, fisher_update_after=50, fix_order=False, gss_batch_size=10, gss_mem_strength=10, k=3, kd_trick=False, kd_trick_star=False, labels_trick=False, lambda_=100, learning_rate=0.1, log_alpha=-300, mem_epoch=70, mem_iters=1, mem_size=2000, min_delta=0.0, minlr=0.0005, n_smp_cls=1.5, nmc_trick=False, ns_factor=(0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6), ns_task=(1, 1, 2, 2, 2, 2), ns_type='noise', num_runs=15, num_runs_val=3, num_tasks=10, num_val=3, optimizer='SGD', patience=0, plot_sample=False, retrieve='ASER', review_trick=False, seed=0, separated_softmax=False, stm_capacity=1000, subsample=50, test_batch=128, update='ASER', val_size=0.1, verbose=True, weight_decay=0)
Setting up data stream
Files already downloaded and verified
Files already downloaded and verified
data setup time: 3.9977123737335205
Task: 0, Labels:[26, 86, 2, 55, 75, 93, 16, 73, 54, 95]
Task: 1, Labels:[53, 92, 78, 13, 7, 30, 22, 24, 33, 8]
Task: 2, Labels:[43, 62, 3, 71, 45, 48, 6, 99, 82, 76]
Task: 3, Labels:[60, 80, 90, 68, 51, 27, 18, 56, 63, 74]
Task: 4, Labels:[1, 61, 42, 41, 4, 15, 17, 40, 38, 5]
Task: 5, Labels:[91, 59, 0, 34, 28, 50, 11, 35, 23, 52]
Task: 6, Labels:[10, 31, 66, 57, 79, 85, 32, 84, 14, 89]
Task: 7, Labels:[19, 29, 49, 97, 98, 69, 20, 94, 72, 77]
Task: 8, Labels:[25, 37, 81, 46, 39, 65, 58, 12, 88, 70]
Task: 9, Labels:[87, 36, 21, 83, 9, 96, 67, 64, 47, 44]
buffer has 2000 slots
-----------run 0 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.366265, running train acc: 0.050
==>>> it: 1, mem avg. loss: 3.453408, running mem acc: 0.200
==>>> it: 101, avg. loss: 2.514554, running train acc: 0.195
==>>> it: 101, mem avg. loss: 2.400514, running mem acc: 0.215
==>>> it: 201, avg. loss: 2.281995, running train acc: 0.223
==>>> it: 201, mem avg. loss: 2.187224, running mem acc: 0.253
==>>> it: 301, avg. loss: 2.140274, running train acc: 0.262
==>>> it: 301, mem avg. loss: 2.015976, running mem acc: 0.299
==>>> it: 401, avg. loss: 2.039123, running train acc: 0.295
==>>> it: 401, mem avg. loss: 1.912398, running mem acc: 0.337
[0.476 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.223926, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.334138, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.866541, running train acc: 0.217
==>>> it: 101, mem avg. loss: 2.321005, running mem acc: 0.332
==>>> it: 201, avg. loss: 2.437139, running train acc: 0.298
==>>> it: 201, mem avg. loss: 2.147823, running mem acc: 0.369
==>>> it: 301, avg. loss: 2.253343, running train acc: 0.331
==>>> it: 301, mem avg. loss: 1.940424, running mem acc: 0.429
==>>> it: 401, avg. loss: 2.129498, running train acc: 0.354
==>>> it: 401, mem avg. loss: 1.778431, running mem acc: 0.474
[0.207 0.427 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.128556, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.828860, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.801638, running train acc: 0.272
==>>> it: 101, mem avg. loss: 1.474055, running mem acc: 0.603
==>>> it: 201, avg. loss: 2.367040, running train acc: 0.331
==>>> it: 201, mem avg. loss: 1.303505, running mem acc: 0.653
==>>> it: 301, avg. loss: 2.118351, running train acc: 0.380
==>>> it: 301, mem avg. loss: 1.159649, running mem acc: 0.687
==>>> it: 401, avg. loss: 1.966332, running train acc: 0.412
==>>> it: 401, mem avg. loss: 1.098953, running mem acc: 0.700
[0.079 0.113 0.565 0. 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.798961, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.499136, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.786529, running train acc: 0.243
==>>> it: 101, mem avg. loss: 1.221853, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.369076, running train acc: 0.303
==>>> it: 201, mem avg. loss: 1.155729, running mem acc: 0.677
==>>> it: 301, avg. loss: 2.178312, running train acc: 0.342
==>>> it: 301, mem avg. loss: 1.091359, running mem acc: 0.693
==>>> it: 401, avg. loss: 2.085022, running train acc: 0.360
==>>> it: 401, mem avg. loss: 1.013904, running mem acc: 0.718
[0.079 0.109 0.333 0.467 0. 0. 0. 0. 0. 0. ]
-----------run 0 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.315978, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.365603, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.727537, running train acc: 0.293
==>>> it: 101, mem avg. loss: 1.230186, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.354434, running train acc: 0.345
==>>> it: 201, mem avg. loss: 1.134729, running mem acc: 0.691
==>>> it: 301, avg. loss: 2.175920, running train acc: 0.373
==>>> it: 301, mem avg. loss: 1.034543, running mem acc: 0.721
==>>> it: 401, avg. loss: 2.080158, running train acc: 0.396
==>>> it: 401, mem avg. loss: 0.939699, running mem acc: 0.750
[0.057 0.082 0.218 0.217 0.536 0. 0. 0. 0. 0. ]
-----------run 0 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.290586, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.617102, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.571284, running train acc: 0.316
==>>> it: 101, mem avg. loss: 1.096890, running mem acc: 0.725
==>>> it: 201, avg. loss: 2.157435, running train acc: 0.374
==>>> it: 201, mem avg. loss: 0.991963, running mem acc: 0.735
==>>> it: 301, avg. loss: 1.994095, running train acc: 0.399
==>>> it: 301, mem avg. loss: 0.872221, running mem acc: 0.764
==>>> it: 401, avg. loss: 1.863082, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.779988, running mem acc: 0.786
[0.041 0.079 0.132 0.164 0.207 0.544 0. 0. 0. 0. ]
-----------run 0 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.317054, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.155062, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.697141, running train acc: 0.269
==>>> it: 101, mem avg. loss: 1.024942, running mem acc: 0.739
==>>> it: 201, avg. loss: 2.293812, running train acc: 0.341
==>>> it: 201, mem avg. loss: 0.841099, running mem acc: 0.790
==>>> it: 301, avg. loss: 2.146386, running train acc: 0.366
==>>> it: 301, mem avg. loss: 0.758618, running mem acc: 0.803
==>>> it: 401, avg. loss: 2.050880, running train acc: 0.386
==>>> it: 401, mem avg. loss: 0.708850, running mem acc: 0.813
[0.03 0.018 0.191 0.151 0.145 0.238 0.492 0. 0. 0. ]
-----------run 0 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.596883, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.763713, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.489205, running train acc: 0.364
==>>> it: 101, mem avg. loss: 1.103100, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.035138, running train acc: 0.429
==>>> it: 201, mem avg. loss: 0.894908, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.873820, running train acc: 0.459
==>>> it: 301, mem avg. loss: 0.779344, running mem acc: 0.791
==>>> it: 401, avg. loss: 1.769029, running train acc: 0.482
==>>> it: 401, mem avg. loss: 0.692773, running mem acc: 0.812
[0.044 0.051 0.202 0.155 0.157 0.207 0.124 0.584 0. 0. ]
-----------run 0 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.519897, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.652946, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.576428, running train acc: 0.323
==>>> it: 101, mem avg. loss: 0.889941, running mem acc: 0.788
==>>> it: 201, avg. loss: 2.174902, running train acc: 0.380
==>>> it: 201, mem avg. loss: 0.717063, running mem acc: 0.821
==>>> it: 301, avg. loss: 2.005695, running train acc: 0.413
==>>> it: 301, mem avg. loss: 0.620423, running mem acc: 0.845
==>>> it: 401, avg. loss: 1.910732, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.574640, running mem acc: 0.855
[0.042 0.065 0.154 0.146 0.111 0.153 0.065 0.199 0.531 0. ]
-----------run 0 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.985172, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.364957, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.401293, running train acc: 0.378
==>>> it: 101, mem avg. loss: 0.856421, running mem acc: 0.783
==>>> it: 201, avg. loss: 1.966463, running train acc: 0.440
==>>> it: 201, mem avg. loss: 0.697410, running mem acc: 0.815
==>>> it: 301, avg. loss: 1.813318, running train acc: 0.469
==>>> it: 301, mem avg. loss: 0.616911, running mem acc: 0.837
==>>> it: 401, avg. loss: 1.694467, running train acc: 0.501
==>>> it: 401, mem avg. loss: 0.561811, running mem acc: 0.851
[0.032 0.029 0.163 0.168 0.079 0.058 0.079 0.171 0.11 0.615]
-----------run 0-----------avg_end_acc 0.1504-----------train time 2568.949486732483
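For reference, the `avg_end_acc` printed at the end of each run appears to be the plain mean of the final per-task accuracy vector (the last bracketed line of that run) — this is my reading of the log, not the repository's actual code. A minimal sketch, using the run-0 vector copied from the line above:

```python
# Final per-task test accuracies after training on all 10 tasks (run 0, from the log).
final_task_accs = [0.032, 0.029, 0.163, 0.168, 0.079,
                   0.058, 0.079, 0.171, 0.11, 0.615]

# avg_end_acc = mean over the 10 tasks.
avg_end_acc = sum(final_task_accs) / len(final_task_accs)
print(round(avg_end_acc, 4))  # 0.1504, matching the logged value for run 0
```

The same check works for the other runs (e.g. run 1's vector averages to 0.1603), so the per-task vectors and the reported `avg_end_acc` are at least internally consistent; the gap to the paper's numbers is not a reporting artifact.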
Task: 0, Labels:[86, 42, 56, 60, 98, 53, 37, 30, 25, 88]
Task: 1, Labels:[14, 89, 67, 63, 72, 29, 24, 19, 2, 27]
Task: 2, Labels:[6, 1, 54, 3, 10, 9, 13, 52, 79, 35]
Task: 3, Labels:[57, 81, 70, 99, 15, 33, 41, 28, 62, 96]
Task: 4, Labels:[50, 32, 74, 69, 93, 22, 92, 20, 49, 94]
Task: 5, Labels:[40, 21, 55, 4, 77, 82, 51, 84, 44, 78]
Task: 6, Labels:[31, 47, 17, 16, 7, 43, 5, 75, 59, 87]
Task: 7, Labels:[8, 90, 64, 0, 85, 97, 61, 73, 23, 83]
Task: 8, Labels:[68, 76, 18, 26, 39, 11, 71, 45, 91, 34]
Task: 9, Labels:[80, 38, 58, 66, 65, 36, 48, 95, 12, 46]
buffer has 2000 slots
-----------run 1 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.808821, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.941404, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.296983, running train acc: 0.275
==>>> it: 101, mem avg. loss: 2.115360, running mem acc: 0.337
==>>> it: 201, avg. loss: 2.032232, running train acc: 0.334
==>>> it: 201, mem avg. loss: 1.877936, running mem acc: 0.377
==>>> it: 301, avg. loss: 1.886132, running train acc: 0.367
==>>> it: 301, mem avg. loss: 1.657498, running mem acc: 0.440
==>>> it: 401, avg. loss: 1.800445, running train acc: 0.394
==>>> it: 401, mem avg. loss: 1.520751, running mem acc: 0.479
[0.547 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.904166, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.290099, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.935921, running train acc: 0.153
==>>> it: 101, mem avg. loss: 1.868124, running mem acc: 0.464
==>>> it: 201, avg. loss: 2.576270, running train acc: 0.221
==>>> it: 201, mem avg. loss: 1.706802, running mem acc: 0.502
==>>> it: 301, avg. loss: 2.415551, running train acc: 0.254
==>>> it: 301, mem avg. loss: 1.590665, running mem acc: 0.529
==>>> it: 401, avg. loss: 2.307406, running train acc: 0.277
==>>> it: 401, mem avg. loss: 1.494794, running mem acc: 0.556
[0.292 0.402 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.988120, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.893860, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.747594, running train acc: 0.275
==>>> it: 101, mem avg. loss: 1.576998, running mem acc: 0.598
==>>> it: 201, avg. loss: 2.305754, running train acc: 0.337
==>>> it: 201, mem avg. loss: 1.406595, running mem acc: 0.624
==>>> it: 301, avg. loss: 2.087591, running train acc: 0.382
==>>> it: 301, mem avg. loss: 1.305067, running mem acc: 0.644
==>>> it: 401, avg. loss: 1.977011, running train acc: 0.409
==>>> it: 401, mem avg. loss: 1.217746, running mem acc: 0.665
[0.177 0.053 0.56 0. 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.944031, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.231176, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.743852, running train acc: 0.279
==>>> it: 101, mem avg. loss: 1.247328, running mem acc: 0.674
==>>> it: 201, avg. loss: 2.297974, running train acc: 0.348
==>>> it: 201, mem avg. loss: 1.100986, running mem acc: 0.701
==>>> it: 301, avg. loss: 2.126419, running train acc: 0.376
==>>> it: 301, mem avg. loss: 1.042671, running mem acc: 0.715
==>>> it: 401, avg. loss: 2.013182, running train acc: 0.393
==>>> it: 401, mem avg. loss: 0.983884, running mem acc: 0.730
[0.157 0.048 0.226 0.493 0. 0. 0. 0. 0. 0. ]
-----------run 1 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.162704, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.110199, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.575309, running train acc: 0.315
==>>> it: 101, mem avg. loss: 1.166070, running mem acc: 0.698
==>>> it: 201, avg. loss: 2.179827, running train acc: 0.380
==>>> it: 201, mem avg. loss: 1.032305, running mem acc: 0.716
==>>> it: 301, avg. loss: 2.005400, running train acc: 0.414
==>>> it: 301, mem avg. loss: 0.920114, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.882150, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.835061, running mem acc: 0.761
[0.096 0.039 0.14 0.238 0.532 0. 0. 0. 0. 0. ]
-----------run 1 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.208174, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.684143, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.826087, running train acc: 0.217
==>>> it: 101, mem avg. loss: 1.157389, running mem acc: 0.696
==>>> it: 201, avg. loss: 2.442925, running train acc: 0.280
==>>> it: 201, mem avg. loss: 1.004365, running mem acc: 0.730
==>>> it: 301, avg. loss: 2.257702, running train acc: 0.317
==>>> it: 301, mem avg. loss: 0.908919, running mem acc: 0.752
==>>> it: 401, avg. loss: 2.174214, running train acc: 0.332
==>>> it: 401, mem avg. loss: 0.841674, running mem acc: 0.768
[0.109 0.019 0.101 0.242 0.233 0.439 0. 0. 0. 0. ]
-----------run 1 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.298544, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.311026, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.551284, running train acc: 0.325
==>>> it: 101, mem avg. loss: 1.089027, running mem acc: 0.709
==>>> it: 201, avg. loss: 2.086483, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.950434, running mem acc: 0.736
==>>> it: 301, avg. loss: 1.887336, running train acc: 0.452
==>>> it: 301, mem avg. loss: 0.850815, running mem acc: 0.769
==>>> it: 401, avg. loss: 1.768425, running train acc: 0.476
==>>> it: 401, mem avg. loss: 0.760746, running mem acc: 0.794
[0.068 0.015 0.081 0.198 0.226 0.123 0.578 0. 0. 0. ]
-----------run 1 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.049094, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.315270, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.429609, running train acc: 0.357
==>>> it: 101, mem avg. loss: 0.977726, running mem acc: 0.749
==>>> it: 201, avg. loss: 2.046649, running train acc: 0.411
==>>> it: 201, mem avg. loss: 0.830165, running mem acc: 0.780
==>>> it: 301, avg. loss: 1.851019, running train acc: 0.453
==>>> it: 301, mem avg. loss: 0.732780, running mem acc: 0.802
==>>> it: 401, avg. loss: 1.728394, running train acc: 0.478
==>>> it: 401, mem avg. loss: 0.653093, running mem acc: 0.824
[0.05 0.011 0.097 0.164 0.172 0.102 0.251 0.636 0. 0. ]
-----------run 1 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.555409, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.844635, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.561639, running train acc: 0.332
==>>> it: 101, mem avg. loss: 0.929011, running mem acc: 0.765
==>>> it: 201, avg. loss: 2.119777, running train acc: 0.405
==>>> it: 201, mem avg. loss: 0.812653, running mem acc: 0.782
==>>> it: 301, avg. loss: 1.901092, running train acc: 0.447
==>>> it: 301, mem avg. loss: 0.720588, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.790047, running train acc: 0.471
==>>> it: 401, mem avg. loss: 0.659008, running mem acc: 0.821
[0.053 0.012 0.113 0.171 0.151 0.107 0.205 0.202 0.615 0. ]
-----------run 1 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.460724, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.234395, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.556615, running train acc: 0.310
==>>> it: 101, mem avg. loss: 0.950068, running mem acc: 0.758
==>>> it: 201, avg. loss: 2.157833, running train acc: 0.372
==>>> it: 201, mem avg. loss: 0.781524, running mem acc: 0.798
==>>> it: 301, avg. loss: 1.994952, running train acc: 0.404
==>>> it: 301, mem avg. loss: 0.695320, running mem acc: 0.817
==>>> it: 401, avg. loss: 1.914375, running train acc: 0.423
==>>> it: 401, mem avg. loss: 0.636332, running mem acc: 0.834
[0.057 0.009 0.087 0.183 0.136 0.118 0.16 0.161 0.166 0.526]
-----------run 1-----------avg_end_acc 0.1603-----------train time 2500.0991492271423
Task: 0, Labels:[95, 72, 6, 39, 62, 24, 56, 36, 75, 61]
Task: 1, Labels:[42, 53, 26, 70, 88, 17, 98, 13, 47, 5]
Task: 2, Labels:[87, 85, 59, 7, 8, 16, 83, 11, 1, 69]
Task: 3, Labels:[33, 37, 94, 28, 73, 2, 22, 49, 64, 90]
Task: 4, Labels:[21, 44, 48, 30, 34, 65, 15, 29, 67, 78]
Task: 5, Labels:[93, 31, 12, 81, 57, 68, 89, 86, 25, 9]
Task: 6, Labels:[84, 52, 80, 20, 63, 38, 50, 99, 74, 79]
Task: 7, Labels:[51, 45, 96, 60, 35, 41, 71, 14, 4, 54]
Task: 8, Labels:[0, 82, 91, 66, 23, 40, 10, 76, 55, 58]
Task: 9, Labels:[27, 32, 77, 43, 18, 92, 97, 19, 3, 46]
buffer has 2000 slots
-----------run 2 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.470682, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.411853, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.549353, running train acc: 0.207
==>>> it: 101, mem avg. loss: 2.306140, running mem acc: 0.245
==>>> it: 201, avg. loss: 2.231801, running train acc: 0.262
==>>> it: 201, mem avg. loss: 2.051633, running mem acc: 0.304
==>>> it: 301, avg. loss: 2.042731, running train acc: 0.322
==>>> it: 301, mem avg. loss: 1.848049, running mem acc: 0.364
==>>> it: 401, avg. loss: 1.921602, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.719325, running mem acc: 0.401
[0.517 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.191833, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.252552, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.789692, running train acc: 0.208
==>>> it: 101, mem avg. loss: 1.714980, running mem acc: 0.522
==>>> it: 201, avg. loss: 2.381965, running train acc: 0.275
==>>> it: 201, mem avg. loss: 1.523606, running mem acc: 0.553
==>>> it: 301, avg. loss: 2.159425, running train acc: 0.337
==>>> it: 301, mem avg. loss: 1.356072, running mem acc: 0.599
==>>> it: 401, avg. loss: 2.034362, running train acc: 0.362
==>>> it: 401, mem avg. loss: 1.206179, running mem acc: 0.640
[0.306 0.515 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.325661, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.754713, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.766322, running train acc: 0.249
==>>> it: 101, mem avg. loss: 1.391459, running mem acc: 0.620
==>>> it: 201, avg. loss: 2.354863, running train acc: 0.309
==>>> it: 201, mem avg. loss: 1.212666, running mem acc: 0.650
==>>> it: 301, avg. loss: 2.157635, running train acc: 0.354
==>>> it: 301, mem avg. loss: 1.115524, running mem acc: 0.673
==>>> it: 401, avg. loss: 2.065960, running train acc: 0.372
==>>> it: 401, mem avg. loss: 1.011921, running mem acc: 0.703
[0.262 0.165 0.51 0. 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.009512, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.490777, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.678599, running train acc: 0.264
==>>> it: 101, mem avg. loss: 1.269854, running mem acc: 0.636
==>>> it: 201, avg. loss: 2.286281, running train acc: 0.324
==>>> it: 201, mem avg. loss: 1.150229, running mem acc: 0.665
==>>> it: 301, avg. loss: 2.112021, running train acc: 0.361
==>>> it: 301, mem avg. loss: 1.078822, running mem acc: 0.684
==>>> it: 401, avg. loss: 1.988484, running train acc: 0.392
==>>> it: 401, mem avg. loss: 1.001867, running mem acc: 0.703
[0.234 0.213 0.198 0.526 0. 0. 0. 0. 0. 0. ]
-----------run 2 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.937668, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.499459, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.814881, running train acc: 0.225
==>>> it: 101, mem avg. loss: 1.251760, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.445513, running train acc: 0.279
==>>> it: 201, mem avg. loss: 1.103261, running mem acc: 0.697
==>>> it: 301, avg. loss: 2.249603, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.030050, running mem acc: 0.718
==>>> it: 401, avg. loss: 2.146565, running train acc: 0.347
==>>> it: 401, mem avg. loss: 0.945477, running mem acc: 0.743
[0.212 0.242 0.091 0.206 0.449 0. 0. 0. 0. 0. ]
-----------run 2 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.141557, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.166369, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.672371, running train acc: 0.304
==>>> it: 101, mem avg. loss: 1.196003, running mem acc: 0.675
==>>> it: 201, avg. loss: 2.243378, running train acc: 0.375
==>>> it: 201, mem avg. loss: 1.073220, running mem acc: 0.708
==>>> it: 301, avg. loss: 2.050653, running train acc: 0.408
==>>> it: 301, mem avg. loss: 0.960671, running mem acc: 0.734
==>>> it: 401, avg. loss: 1.923413, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.878884, running mem acc: 0.756
[0.166 0.16 0.095 0.146 0.167 0.567 0. 0. 0. 0. ]
-----------run 2 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.476360, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.158807, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.743589, running train acc: 0.262
==>>> it: 101, mem avg. loss: 1.095355, running mem acc: 0.735
==>>> it: 201, avg. loss: 2.323596, running train acc: 0.311
==>>> it: 201, mem avg. loss: 0.939340, running mem acc: 0.758
==>>> it: 301, avg. loss: 2.192877, running train acc: 0.334
==>>> it: 301, mem avg. loss: 0.876565, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.053821, running train acc: 0.364
==>>> it: 401, mem avg. loss: 0.803992, running mem acc: 0.787
[0.142 0.205 0.086 0.132 0.116 0.245 0.458 0. 0. 0. ]
-----------run 2 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.927103, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.158655, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.554221, running train acc: 0.344
==>>> it: 101, mem avg. loss: 1.098241, running mem acc: 0.703
==>>> it: 201, avg. loss: 2.108451, running train acc: 0.404
==>>> it: 201, mem avg. loss: 0.954647, running mem acc: 0.739
==>>> it: 301, avg. loss: 1.925533, running train acc: 0.436
==>>> it: 301, mem avg. loss: 0.845131, running mem acc: 0.768
==>>> it: 401, avg. loss: 1.809468, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.764796, running mem acc: 0.790
[0.159 0.136 0.073 0.091 0.098 0.182 0.172 0.572 0. 0. ]
-----------run 2 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.628189, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.174980, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.488896, running train acc: 0.338
==>>> it: 101, mem avg. loss: 1.001205, running mem acc: 0.744
==>>> it: 201, avg. loss: 2.010815, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.847334, running mem acc: 0.773
==>>> it: 301, avg. loss: 1.820574, running train acc: 0.478
==>>> it: 301, mem avg. loss: 0.745095, running mem acc: 0.799
==>>> it: 401, avg. loss: 1.692188, running train acc: 0.510
==>>> it: 401, mem avg. loss: 0.670650, running mem acc: 0.818
[0.126 0.165 0.07 0.103 0.074 0.153 0.137 0.24 0.638 0. ]
-----------run 2 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.108333, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.707170, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.686840, running train acc: 0.268
==>>> it: 101, mem avg. loss: 0.934979, running mem acc: 0.764
==>>> it: 201, avg. loss: 2.303757, running train acc: 0.338
==>>> it: 201, mem avg. loss: 0.791730, running mem acc: 0.793
==>>> it: 301, avg. loss: 2.155229, running train acc: 0.361
==>>> it: 301, mem avg. loss: 0.708373, running mem acc: 0.814
==>>> it: 401, avg. loss: 2.063445, running train acc: 0.379
==>>> it: 401, mem avg. loss: 0.657133, running mem acc: 0.828
[0.137 0.094 0.067 0.084 0.088 0.119 0.15 0.203 0.217 0.524]
-----------run 2-----------avg_end_acc 0.1683-----------train time 2479.9554154872894
Task: 0, Labels:[44, 5, 59, 13, 83, 34, 56, 63, 75, 45]
Task: 1, Labels:[69, 94, 77, 80, 23, 62, 10, 97, 42, 84]
Task: 2, Labels:[37, 64, 20, 21, 65, 98, 76, 85, 88, 12]
Task: 3, Labels:[33, 92, 38, 22, 50, 96, 16, 28, 89, 4]
Task: 4, Labels:[72, 27, 48, 55, 90, 47, 49, 31, 67, 17]
Task: 5, Labels:[32, 99, 11, 91, 1, 6, 41, 93, 15, 86]
Task: 6, Labels:[61, 82, 51, 68, 40, 8, 57, 30, 81, 35]
Task: 7, Labels:[9, 95, 79, 39, 58, 78, 43, 73, 70, 18]
Task: 8, Labels:[46, 52, 54, 29, 26, 3, 74, 24, 14, 71]
Task: 9, Labels:[60, 19, 36, 2, 66, 25, 87, 53, 0, 7]
buffer has 2000 slots
-----------run 3 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.261781, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.832437, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.584979, running train acc: 0.197
==>>> it: 101, mem avg. loss: 2.441796, running mem acc: 0.221
==>>> it: 201, avg. loss: 2.278637, running train acc: 0.243
==>>> it: 201, mem avg. loss: 2.138299, running mem acc: 0.281
==>>> it: 301, avg. loss: 2.102452, running train acc: 0.287
==>>> it: 301, mem avg. loss: 1.939245, running mem acc: 0.334
==>>> it: 401, avg. loss: 2.005750, running train acc: 0.317
==>>> it: 401, mem avg. loss: 1.809237, running mem acc: 0.375
[0.498 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.578908, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.547303, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.901179, running train acc: 0.181
==>>> it: 101, mem avg. loss: 2.233723, running mem acc: 0.375
==>>> it: 201, avg. loss: 2.516784, running train acc: 0.253
==>>> it: 201, mem avg. loss: 2.129915, running mem acc: 0.383
==>>> it: 301, avg. loss: 2.355105, running train acc: 0.276
==>>> it: 301, mem avg. loss: 1.976324, running mem acc: 0.423
==>>> it: 401, avg. loss: 2.221411, running train acc: 0.307
==>>> it: 401, mem avg. loss: 1.810834, running mem acc: 0.471
[0.161 0.411 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.633906, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.257336, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.626944, running train acc: 0.267
==>>> it: 101, mem avg. loss: 1.318865, running mem acc: 0.645
==>>> it: 201, avg. loss: 2.234762, running train acc: 0.339
==>>> it: 201, mem avg. loss: 1.196154, running mem acc: 0.669
==>>> it: 301, avg. loss: 2.046574, running train acc: 0.376
==>>> it: 301, mem avg. loss: 1.108157, running mem acc: 0.692
==>>> it: 401, avg. loss: 1.920790, running train acc: 0.403
==>>> it: 401, mem avg. loss: 1.036049, running mem acc: 0.712
[0.131 0.191 0.518 0. 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.100137, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.721326, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.770089, running train acc: 0.248
==>>> it: 101, mem avg. loss: 1.258925, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.381001, running train acc: 0.302
==>>> it: 201, mem avg. loss: 1.232124, running mem acc: 0.659
==>>> it: 301, avg. loss: 2.214494, running train acc: 0.325
==>>> it: 301, mem avg. loss: 1.178918, running mem acc: 0.674
==>>> it: 401, avg. loss: 2.127861, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.101001, running mem acc: 0.694
[0.055 0.089 0.182 0.465 0. 0. 0. 0. 0. 0. ]
-----------run 3 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.997226, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.507348, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.644032, running train acc: 0.298
==>>> it: 101, mem avg. loss: 1.280746, running mem acc: 0.662
==>>> it: 201, avg. loss: 2.243546, running train acc: 0.357
==>>> it: 201, mem avg. loss: 1.182486, running mem acc: 0.674
==>>> it: 301, avg. loss: 2.105380, running train acc: 0.380
==>>> it: 301, mem avg. loss: 1.078236, running mem acc: 0.699
==>>> it: 401, avg. loss: 1.994536, running train acc: 0.405
==>>> it: 401, mem avg. loss: 0.984265, running mem acc: 0.727
[0.055 0.135 0.139 0.169 0.5 0. 0. 0. 0. 0. ]
-----------run 3 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.871581, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.176172, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.786092, running train acc: 0.236
==>>> it: 101, mem avg. loss: 1.168959, running mem acc: 0.701
==>>> it: 201, avg. loss: 2.397523, running train acc: 0.312
==>>> it: 201, mem avg. loss: 1.000313, running mem acc: 0.739
==>>> it: 301, avg. loss: 2.234745, running train acc: 0.345
==>>> it: 301, mem avg. loss: 0.872306, running mem acc: 0.770
==>>> it: 401, avg. loss: 2.122992, running train acc: 0.370
==>>> it: 401, mem avg. loss: 0.790151, running mem acc: 0.792
[0.031 0.131 0.113 0.12 0.205 0.524 0. 0. 0. 0. ]
-----------run 3 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.295253, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.346270, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.414082, running train acc: 0.381
==>>> it: 101, mem avg. loss: 1.044271, running mem acc: 0.724
==>>> it: 201, avg. loss: 1.941657, running train acc: 0.466
==>>> it: 201, mem avg. loss: 0.908932, running mem acc: 0.757
==>>> it: 301, avg. loss: 1.763143, running train acc: 0.504
==>>> it: 301, mem avg. loss: 0.810459, running mem acc: 0.779
==>>> it: 401, avg. loss: 1.641878, running train acc: 0.531
==>>> it: 401, mem avg. loss: 0.721429, running mem acc: 0.800
[0.046 0.065 0.063 0.107 0.157 0.122 0.667 0. 0. 0. ]
-----------run 3 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.655418, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.675616, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.540270, running train acc: 0.331
==>>> it: 101, mem avg. loss: 0.964820, running mem acc: 0.759
==>>> it: 201, avg. loss: 2.146472, running train acc: 0.387
==>>> it: 201, mem avg. loss: 0.837115, running mem acc: 0.781
==>>> it: 301, avg. loss: 1.965311, running train acc: 0.426
==>>> it: 301, mem avg. loss: 0.726884, running mem acc: 0.809
==>>> it: 401, avg. loss: 1.842273, running train acc: 0.452
==>>> it: 401, mem avg. loss: 0.667068, running mem acc: 0.822
[0.032 0.056 0.098 0.127 0.136 0.144 0.236 0.551 0. 0. ]
-----------run 3 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.633739, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.289289, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.348962, running train acc: 0.410
==>>> it: 101, mem avg. loss: 0.949333, running mem acc: 0.766
==>>> it: 201, avg. loss: 1.981276, running train acc: 0.457
==>>> it: 201, mem avg. loss: 0.775644, running mem acc: 0.801
==>>> it: 301, avg. loss: 1.793520, running train acc: 0.488
==>>> it: 301, mem avg. loss: 0.694071, running mem acc: 0.821
==>>> it: 401, avg. loss: 1.712958, running train acc: 0.501
==>>> it: 401, mem avg. loss: 0.622364, running mem acc: 0.839
[0.028 0.041 0.056 0.109 0.092 0.106 0.213 0.169 0.594 0. ]
-----------run 3 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.743945, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.639668, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.397058, running train acc: 0.380
==>>> it: 101, mem avg. loss: 0.836878, running mem acc: 0.780
==>>> it: 201, avg. loss: 1.899061, running train acc: 0.465
==>>> it: 201, mem avg. loss: 0.684619, running mem acc: 0.814
==>>> it: 301, avg. loss: 1.713826, running train acc: 0.505
==>>> it: 301, mem avg. loss: 0.614928, running mem acc: 0.825
==>>> it: 401, avg. loss: 1.607956, running train acc: 0.531
==>>> it: 401, mem avg. loss: 0.564728, running mem acc: 0.837
[0.041 0.039 0.041 0.088 0.114 0.098 0.165 0.125 0.189 0.615]
-----------run 3-----------avg_end_acc 0.15150000000000002-----------train time 2442.0826992988586
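(Side note, not part of the log: the `avg_end_acc` printed at the end of each run appears to be the plain mean of the final per-task accuracy row printed just above it. A quick sanity check against run 3's numbers — the values below are copied from the log, and this is only an illustrative check, not the repo's actual evaluation code:)

```python
# Final per-task accuracies after run 3's last training batch,
# copied from the log line above (tasks 0..9).
final_acc = [0.041, 0.039, 0.041, 0.088, 0.114, 0.098,
             0.165, 0.125, 0.189, 0.615]

# avg_end_acc looks like the unweighted mean over the 10 tasks.
avg_end_acc = sum(final_acc) / len(final_acc)
print(avg_end_acc)  # ~0.1515, matching "avg_end_acc 0.15150000000000002"
```

(The same holds for the other runs, e.g. run 5's final row averages to 0.1699.)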
Task: 0, Labels:[14, 16, 10, 42, 34, 47, 61, 80, 71, 26]
Task: 1, Labels:[89, 33, 44, 12, 91, 9, 22, 83, 18, 45]
Task: 2, Labels:[5, 36, 24, 46, 98, 35, 87, 3, 48, 28]
Task: 3, Labels:[29, 8, 57, 0, 23, 41, 4, 60, 62, 69]
Task: 4, Labels:[81, 40, 52, 55, 38, 6, 53, 85, 74, 11]
Task: 5, Labels:[93, 30, 65, 56, 13, 82, 96, 37, 32, 27]
Task: 6, Labels:[88, 2, 77, 75, 21, 64, 19, 95, 1, 63]
Task: 7, Labels:[67, 68, 50, 51, 84, 59, 58, 7, 78, 31]
Task: 8, Labels:[72, 97, 54, 15, 49, 99, 86, 79, 94, 92]
Task: 9, Labels:[25, 73, 66, 76, 17, 70, 90, 43, 20, 39]
buffer has 2000 slots
-----------run 4 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.061858, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.598956, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.511449, running train acc: 0.209
==>>> it: 101, mem avg. loss: 2.397471, running mem acc: 0.246
==>>> it: 201, avg. loss: 2.280364, running train acc: 0.245
==>>> it: 201, mem avg. loss: 2.148914, running mem acc: 0.279
==>>> it: 301, avg. loss: 2.163460, running train acc: 0.270
==>>> it: 301, mem avg. loss: 2.025820, running mem acc: 0.311
==>>> it: 401, avg. loss: 2.069307, running train acc: 0.295
==>>> it: 401, mem avg. loss: 1.965708, running mem acc: 0.327
[0.425 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.782812, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.683428, running mem acc: 0.450
==>>> it: 101, avg. loss: 2.861774, running train acc: 0.209
==>>> it: 101, mem avg. loss: 2.311387, running mem acc: 0.339
==>>> it: 201, avg. loss: 2.483971, running train acc: 0.278
==>>> it: 201, mem avg. loss: 2.073559, running mem acc: 0.383
==>>> it: 301, avg. loss: 2.314230, running train acc: 0.314
==>>> it: 301, mem avg. loss: 1.924911, running mem acc: 0.424
==>>> it: 401, avg. loss: 2.185459, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.788918, running mem acc: 0.461
[0.061 0.455 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.930666, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.089005, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.977102, running train acc: 0.182
==>>> it: 101, mem avg. loss: 1.486402, running mem acc: 0.637
==>>> it: 201, avg. loss: 2.546127, running train acc: 0.246
==>>> it: 201, mem avg. loss: 1.376242, running mem acc: 0.647
==>>> it: 301, avg. loss: 2.345372, running train acc: 0.280
==>>> it: 301, mem avg. loss: 1.271719, running mem acc: 0.668
==>>> it: 401, avg. loss: 2.221823, running train acc: 0.306
==>>> it: 401, mem avg. loss: 1.191879, running mem acc: 0.688
[0.048 0.143 0.418 0. 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.041882, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.231821, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.627415, running train acc: 0.318
==>>> it: 101, mem avg. loss: 1.229593, running mem acc: 0.703
==>>> it: 201, avg. loss: 2.188096, running train acc: 0.382
==>>> it: 201, mem avg. loss: 1.157265, running mem acc: 0.705
==>>> it: 301, avg. loss: 1.980324, running train acc: 0.424
==>>> it: 301, mem avg. loss: 1.045634, running mem acc: 0.730
==>>> it: 401, avg. loss: 1.845734, running train acc: 0.451
==>>> it: 401, mem avg. loss: 0.948088, running mem acc: 0.755
[0.075 0.09 0.212 0.572 0. 0. 0. 0. 0. 0. ]
-----------run 4 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.720258, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.300091, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.592635, running train acc: 0.322
==>>> it: 101, mem avg. loss: 1.106748, running mem acc: 0.715
==>>> it: 201, avg. loss: 2.193321, running train acc: 0.363
==>>> it: 201, mem avg. loss: 0.989534, running mem acc: 0.740
==>>> it: 301, avg. loss: 2.022125, running train acc: 0.390
==>>> it: 301, mem avg. loss: 0.891040, running mem acc: 0.759
==>>> it: 401, avg. loss: 1.912615, running train acc: 0.413
==>>> it: 401, mem avg. loss: 0.816686, running mem acc: 0.783
[0.037 0.048 0.085 0.368 0.557 0. 0. 0. 0. 0. ]
-----------run 4 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.221529, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.492802, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.601299, running train acc: 0.314
==>>> it: 101, mem avg. loss: 1.084352, running mem acc: 0.711
==>>> it: 201, avg. loss: 2.221762, running train acc: 0.358
==>>> it: 201, mem avg. loss: 0.899692, running mem acc: 0.757
==>>> it: 301, avg. loss: 2.051465, running train acc: 0.386
==>>> it: 301, mem avg. loss: 0.799671, running mem acc: 0.783
==>>> it: 401, avg. loss: 1.962413, running train acc: 0.405
==>>> it: 401, mem avg. loss: 0.730385, running mem acc: 0.800
[0.026 0.06 0.156 0.359 0.109 0.506 0. 0. 0. 0. ]
-----------run 4 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.047173, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.934935, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.644008, running train acc: 0.280
==>>> it: 101, mem avg. loss: 1.144862, running mem acc: 0.693
==>>> it: 201, avg. loss: 2.207292, running train acc: 0.365
==>>> it: 201, mem avg. loss: 0.947099, running mem acc: 0.750
==>>> it: 301, avg. loss: 2.040977, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.847885, running mem acc: 0.773
==>>> it: 401, avg. loss: 1.946916, running train acc: 0.413
==>>> it: 401, mem avg. loss: 0.755749, running mem acc: 0.799
[0.024 0.048 0.087 0.241 0.165 0.132 0.558 0. 0. 0. ]
-----------run 4 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.870583, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.645980, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.636354, running train acc: 0.326
==>>> it: 101, mem avg. loss: 1.100048, running mem acc: 0.714
==>>> it: 201, avg. loss: 2.210996, running train acc: 0.387
==>>> it: 201, mem avg. loss: 0.888319, running mem acc: 0.767
==>>> it: 301, avg. loss: 2.013457, running train acc: 0.420
==>>> it: 301, mem avg. loss: 0.785999, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.911632, running train acc: 0.445
==>>> it: 401, mem avg. loss: 0.715387, running mem acc: 0.805
[0.044 0.047 0.071 0.224 0.132 0.129 0.132 0.546 0. 0. ]
-----------run 4 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.703582, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.689163, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.587725, running train acc: 0.313
==>>> it: 101, mem avg. loss: 1.000830, running mem acc: 0.738
==>>> it: 201, avg. loss: 2.139680, running train acc: 0.390
==>>> it: 201, mem avg. loss: 0.865142, running mem acc: 0.767
==>>> it: 301, avg. loss: 1.960235, running train acc: 0.428
==>>> it: 301, mem avg. loss: 0.775243, running mem acc: 0.783
==>>> it: 401, avg. loss: 1.876596, running train acc: 0.444
==>>> it: 401, mem avg. loss: 0.700705, running mem acc: 0.805
[0.03 0.04 0.079 0.239 0.084 0.123 0.094 0.163 0.527 0. ]
-----------run 4 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.971141, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.875949, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.409849, running train acc: 0.380
==>>> it: 101, mem avg. loss: 0.913430, running mem acc: 0.762
==>>> it: 201, avg. loss: 1.993176, running train acc: 0.445
==>>> it: 201, mem avg. loss: 0.770595, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.827065, running train acc: 0.474
==>>> it: 301, mem avg. loss: 0.683498, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.717748, running train acc: 0.497
==>>> it: 401, mem avg. loss: 0.633218, running mem acc: 0.816
[0.027 0.027 0.086 0.246 0.145 0.114 0.065 0.101 0.078 0.626]
-----------run 4-----------avg_end_acc 0.1515-----------train time 2447.1072702407837
Task: 0, Labels:[59, 27, 99, 11, 53, 51, 9, 97, 67, 8]
Task: 1, Labels:[84, 0, 6, 20, 44, 46, 91, 68, 70, 90]
Task: 2, Labels:[96, 15, 14, 85, 75, 42, 30, 81, 92, 64]
Task: 3, Labels:[55, 45, 71, 76, 36, 47, 21, 17, 24, 82]
Task: 4, Labels:[7, 69, 79, 3, 18, 25, 32, 38, 33, 63]
Task: 5, Labels:[77, 88, 52, 60, 93, 5, 66, 57, 16, 89]
Task: 6, Labels:[98, 10, 78, 35, 22, 12, 4, 43, 40, 39]
Task: 7, Labels:[37, 72, 49, 48, 54, 80, 1, 41, 2, 19]
Task: 8, Labels:[29, 74, 83, 58, 62, 26, 73, 61, 65, 86]
Task: 9, Labels:[31, 13, 56, 95, 34, 28, 50, 23, 94, 87]
buffer has 2000 slots
-----------run 5 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.489303, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.463734, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.524730, running train acc: 0.205
==>>> it: 101, mem avg. loss: 2.312677, running mem acc: 0.259
==>>> it: 201, avg. loss: 2.208632, running train acc: 0.274
==>>> it: 201, mem avg. loss: 2.072790, running mem acc: 0.301
==>>> it: 301, avg. loss: 2.087608, running train acc: 0.298
==>>> it: 301, mem avg. loss: 1.867941, running mem acc: 0.359
==>>> it: 401, avg. loss: 1.994107, running train acc: 0.328
==>>> it: 401, mem avg. loss: 1.762580, running mem acc: 0.396
[0.466 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.493091, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.470254, running mem acc: 0.600
==>>> it: 101, avg. loss: 2.786852, running train acc: 0.221
==>>> it: 101, mem avg. loss: 1.996050, running mem acc: 0.437
==>>> it: 201, avg. loss: 2.396383, running train acc: 0.281
==>>> it: 201, mem avg. loss: 1.824047, running mem acc: 0.463
==>>> it: 301, avg. loss: 2.237637, running train acc: 0.309
==>>> it: 301, mem avg. loss: 1.676074, running mem acc: 0.510
==>>> it: 401, avg. loss: 2.115021, running train acc: 0.338
==>>> it: 401, mem avg. loss: 1.528308, running mem acc: 0.550
[0.266 0.469 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.348887, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.783934, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.805101, running train acc: 0.232
==>>> it: 101, mem avg. loss: 1.406823, running mem acc: 0.617
==>>> it: 201, avg. loss: 2.372143, running train acc: 0.300
==>>> it: 201, mem avg. loss: 1.247796, running mem acc: 0.640
==>>> it: 301, avg. loss: 2.195835, running train acc: 0.338
==>>> it: 301, mem avg. loss: 1.141824, running mem acc: 0.663
==>>> it: 401, avg. loss: 2.063770, running train acc: 0.367
==>>> it: 401, mem avg. loss: 1.049696, running mem acc: 0.686
[0.25 0.292 0.462 0. 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.591803, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.438095, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.451628, running train acc: 0.369
==>>> it: 101, mem avg. loss: 1.195000, running mem acc: 0.674
==>>> it: 201, avg. loss: 1.949464, running train acc: 0.458
==>>> it: 201, mem avg. loss: 1.145484, running mem acc: 0.667
==>>> it: 301, avg. loss: 1.804470, running train acc: 0.489
==>>> it: 301, mem avg. loss: 1.060942, running mem acc: 0.685
==>>> it: 401, avg. loss: 1.683415, running train acc: 0.514
==>>> it: 401, mem avg. loss: 0.957913, running mem acc: 0.718
[0.188 0.216 0.102 0.652 0. 0. 0. 0. 0. 0. ]
-----------run 5 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.093219, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.485474, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.862644, running train acc: 0.223
==>>> it: 101, mem avg. loss: 1.108386, running mem acc: 0.717
==>>> it: 201, avg. loss: 2.441749, running train acc: 0.289
==>>> it: 201, mem avg. loss: 0.988327, running mem acc: 0.737
==>>> it: 301, avg. loss: 2.287093, running train acc: 0.314
==>>> it: 301, mem avg. loss: 0.911507, running mem acc: 0.753
==>>> it: 401, avg. loss: 2.180297, running train acc: 0.334
==>>> it: 401, mem avg. loss: 0.857030, running mem acc: 0.768
[0.163 0.165 0.116 0.272 0.471 0. 0. 0. 0. 0. ]
-----------run 5 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.643694, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.493902, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.552498, running train acc: 0.341
==>>> it: 101, mem avg. loss: 1.209915, running mem acc: 0.666
==>>> it: 201, avg. loss: 2.123344, running train acc: 0.400
==>>> it: 201, mem avg. loss: 1.060327, running mem acc: 0.705
==>>> it: 301, avg. loss: 1.927784, running train acc: 0.443
==>>> it: 301, mem avg. loss: 0.975367, running mem acc: 0.718
==>>> it: 401, avg. loss: 1.816467, running train acc: 0.468
==>>> it: 401, mem avg. loss: 0.889102, running mem acc: 0.744
[0.164 0.13 0.072 0.222 0.11 0.569 0. 0. 0. 0. ]
-----------run 5 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.313493, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.826723, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.842690, running train acc: 0.219
==>>> it: 101, mem avg. loss: 1.038724, running mem acc: 0.747
==>>> it: 201, avg. loss: 2.436411, running train acc: 0.287
==>>> it: 201, mem avg. loss: 0.927999, running mem acc: 0.759
==>>> it: 301, avg. loss: 2.258984, running train acc: 0.316
==>>> it: 301, mem avg. loss: 0.820383, running mem acc: 0.787
==>>> it: 401, avg. loss: 2.163124, running train acc: 0.336
==>>> it: 401, mem avg. loss: 0.748139, running mem acc: 0.808
[0.166 0.169 0.061 0.245 0.106 0.213 0.462 0. 0. 0. ]
-----------run 5 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.765607, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.227935, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.470374, running train acc: 0.359
==>>> it: 101, mem avg. loss: 1.090107, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.109177, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.932586, running mem acc: 0.741
==>>> it: 301, avg. loss: 1.948294, running train acc: 0.436
==>>> it: 301, mem avg. loss: 0.830496, running mem acc: 0.771
==>>> it: 401, avg. loss: 1.834971, running train acc: 0.465
==>>> it: 401, mem avg. loss: 0.778851, running mem acc: 0.784
[0.17 0.146 0.044 0.162 0.074 0.187 0.063 0.588 0. 0. ]
-----------run 5 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.198342, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.145915, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.478800, running train acc: 0.371
==>>> it: 101, mem avg. loss: 1.071105, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.077831, running train acc: 0.430
==>>> it: 201, mem avg. loss: 0.896071, running mem acc: 0.764
==>>> it: 301, avg. loss: 1.886414, running train acc: 0.465
==>>> it: 301, mem avg. loss: 0.791619, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.781400, running train acc: 0.486
==>>> it: 401, mem avg. loss: 0.714693, running mem acc: 0.807
[0.115 0.077 0.055 0.195 0.051 0.151 0.064 0.144 0.554 0. ]
-----------run 5 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.938799, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.337434, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.480568, running train acc: 0.368
==>>> it: 101, mem avg. loss: 0.929006, running mem acc: 0.755
==>>> it: 201, avg. loss: 1.908960, running train acc: 0.483
==>>> it: 201, mem avg. loss: 0.762375, running mem acc: 0.791
==>>> it: 301, avg. loss: 1.728318, running train acc: 0.513
==>>> it: 301, mem avg. loss: 0.686818, running mem acc: 0.813
==>>> it: 401, avg. loss: 1.604723, running train acc: 0.541
==>>> it: 401, mem avg. loss: 0.621501, running mem acc: 0.830
[0.109 0.135 0.034 0.204 0.068 0.138 0.062 0.118 0.166 0.665]
-----------run 5-----------avg_end_acc 0.1699-----------train time 2466.8818497657776
Task: 0, Labels:[77, 32, 34, 85, 28, 68, 40, 52, 18, 4]
Task: 1, Labels:[15, 81, 60, 11, 7, 50, 64, 45, 17, 44]
Task: 2, Labels:[78, 91, 88, 54, 16, 75, 83, 24, 39, 62]
Task: 3, Labels:[74, 31, 99, 1, 0, 33, 53, 69, 93, 92]
Task: 4, Labels:[19, 80, 10, 59, 71, 14, 57, 97, 43, 49]
Task: 5, Labels:[23, 20, 48, 27, 2, 29, 76, 41, 58, 55]
Task: 6, Labels:[9, 5, 89, 61, 94, 56, 42, 51, 25, 70]
Task: 7, Labels:[47, 6, 90, 95, 46, 87, 84, 82, 67, 86]
Task: 8, Labels:[38, 73, 13, 98, 65, 35, 72, 26, 8, 63]
Task: 9, Labels:[12, 66, 36, 22, 79, 21, 30, 3, 96, 37]
buffer has 2000 slots
-----------run 6 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.424465, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.827387, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.441040, running train acc: 0.252
==>>> it: 101, mem avg. loss: 2.246048, running mem acc: 0.269
==>>> it: 201, avg. loss: 2.175049, running train acc: 0.290
==>>> it: 201, mem avg. loss: 2.068972, running mem acc: 0.307
==>>> it: 301, avg. loss: 2.028873, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.882205, running mem acc: 0.360
==>>> it: 401, avg. loss: 1.940446, running train acc: 0.344
==>>> it: 401, mem avg. loss: 1.760353, running mem acc: 0.393
[0.488 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.124155, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.471427, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.830100, running train acc: 0.155
==>>> it: 101, mem avg. loss: 2.007412, running mem acc: 0.433
==>>> it: 201, avg. loss: 2.484368, running train acc: 0.227
==>>> it: 201, mem avg. loss: 1.798803, running mem acc: 0.474
==>>> it: 301, avg. loss: 2.304645, running train acc: 0.271
==>>> it: 301, mem avg. loss: 1.676674, running mem acc: 0.501
==>>> it: 401, avg. loss: 2.201093, running train acc: 0.299
==>>> it: 401, mem avg. loss: 1.572525, running mem acc: 0.531
[0.198 0.407 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.695428, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.710797, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.686658, running train acc: 0.285
==>>> it: 101, mem avg. loss: 1.355388, running mem acc: 0.636
==>>> it: 201, avg. loss: 2.268869, running train acc: 0.348
==>>> it: 201, mem avg. loss: 1.233098, running mem acc: 0.665
==>>> it: 301, avg. loss: 2.092359, running train acc: 0.378
==>>> it: 301, mem avg. loss: 1.167383, running mem acc: 0.680
==>>> it: 401, avg. loss: 1.988331, running train acc: 0.399
==>>> it: 401, mem avg. loss: 1.100888, running mem acc: 0.698
[0.166 0.142 0.49 0. 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.888567, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.786435, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.790213, running train acc: 0.254
==>>> it: 101, mem avg. loss: 1.371788, running mem acc: 0.641
==>>> it: 201, avg. loss: 2.370656, running train acc: 0.321
==>>> it: 201, mem avg. loss: 1.268843, running mem acc: 0.656
==>>> it: 301, avg. loss: 2.187636, running train acc: 0.361
==>>> it: 301, mem avg. loss: 1.167690, running mem acc: 0.679
==>>> it: 401, avg. loss: 2.039294, running train acc: 0.393
==>>> it: 401, mem avg. loss: 1.066892, running mem acc: 0.705
[0.148 0.108 0.181 0.532 0. 0. 0. 0. 0. 0. ]
-----------run 6 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.945261, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.347808, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.678255, running train acc: 0.268
==>>> it: 101, mem avg. loss: 1.116896, running mem acc: 0.718
==>>> it: 201, avg. loss: 2.232262, running train acc: 0.352
==>>> it: 201, mem avg. loss: 0.991887, running mem acc: 0.736
==>>> it: 301, avg. loss: 2.045683, running train acc: 0.383
==>>> it: 301, mem avg. loss: 0.939464, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.958282, running train acc: 0.402
==>>> it: 401, mem avg. loss: 0.860477, running mem acc: 0.760
[0.083 0.091 0.212 0.271 0.481 0. 0. 0. 0. 0. ]
-----------run 6 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.344313, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.649494, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.669256, running train acc: 0.307
==>>> it: 101, mem avg. loss: 1.123230, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.215271, running train acc: 0.377
==>>> it: 201, mem avg. loss: 1.015452, running mem acc: 0.726
==>>> it: 301, avg. loss: 2.011953, running train acc: 0.418
==>>> it: 301, mem avg. loss: 0.915072, running mem acc: 0.748
==>>> it: 401, avg. loss: 1.897490, running train acc: 0.442
==>>> it: 401, mem avg. loss: 0.825849, running mem acc: 0.770
[0.06 0.093 0.126 0.253 0.141 0.593 0. 0. 0. 0. ]
-----------run 6 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.469841, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.759230, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.543913, running train acc: 0.329
==>>> it: 101, mem avg. loss: 0.996563, running mem acc: 0.736
==>>> it: 201, avg. loss: 2.092412, running train acc: 0.397
==>>> it: 201, mem avg. loss: 0.893401, running mem acc: 0.752
==>>> it: 301, avg. loss: 1.909276, running train acc: 0.438
==>>> it: 301, mem avg. loss: 0.798484, running mem acc: 0.778
==>>> it: 401, avg. loss: 1.809289, running train acc: 0.458
==>>> it: 401, mem avg. loss: 0.726981, running mem acc: 0.799
[0.067 0.072 0.121 0.141 0.143 0.173 0.557 0. 0. 0. ]
-----------run 6 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.641023, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.470937, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.621981, running train acc: 0.304
==>>> it: 101, mem avg. loss: 1.011013, running mem acc: 0.731
==>>> it: 201, avg. loss: 2.132377, running train acc: 0.395
==>>> it: 201, mem avg. loss: 0.839622, running mem acc: 0.773
==>>> it: 301, avg. loss: 1.940537, running train acc: 0.429
==>>> it: 301, mem avg. loss: 0.741315, running mem acc: 0.799
==>>> it: 401, avg. loss: 1.806512, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.675164, running mem acc: 0.813
[0.051 0.09 0.121 0.144 0.1 0.132 0.18 0.593 0. 0. ]
-----------run 6 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.695618, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.457331, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.685711, running train acc: 0.271
==>>> it: 101, mem avg. loss: 0.963907, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.336026, running train acc: 0.316
==>>> it: 201, mem avg. loss: 0.827039, running mem acc: 0.775
==>>> it: 301, avg. loss: 2.156183, running train acc: 0.347
==>>> it: 301, mem avg. loss: 0.742315, running mem acc: 0.798
==>>> it: 401, avg. loss: 2.044847, running train acc: 0.370
==>>> it: 401, mem avg. loss: 0.677813, running mem acc: 0.813
[0.04 0.087 0.167 0.137 0.097 0.118 0.147 0.195 0.483 0. ]
-----------run 6 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.950813, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.183434, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.512239, running train acc: 0.342
==>>> it: 101, mem avg. loss: 1.060104, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.097214, running train acc: 0.411
==>>> it: 201, mem avg. loss: 0.877097, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.951872, running train acc: 0.441
==>>> it: 301, mem avg. loss: 0.763038, running mem acc: 0.794
==>>> it: 401, avg. loss: 1.847733, running train acc: 0.460
==>>> it: 401, mem avg. loss: 0.692187, running mem acc: 0.812
[0.03 0.089 0.103 0.158 0.098 0.143 0.163 0.123 0.076 0.567]
-----------run 6-----------avg_end_acc 0.15499999999999997-----------train time 2632.0857589244843
Task: 0, Labels:[12, 25, 94, 43, 18, 3, 11, 84, 72, 26]
Task: 1, Labels:[41, 63, 52, 21, 60, 66, 82, 50, 7, 91]
Task: 2, Labels:[71, 76, 88, 40, 99, 85, 53, 16, 10, 90]
Task: 3, Labels:[14, 54, 13, 81, 38, 29, 23, 67, 93, 57]
Task: 4, Labels:[17, 75, 89, 69, 98, 34, 65, 68, 35, 0]
Task: 5, Labels:[30, 44, 24, 9, 49, 8, 80, 64, 33, 73]
Task: 6, Labels:[20, 19, 46, 32, 45, 48, 58, 2, 97, 92]
Task: 7, Labels:[5, 22, 56, 51, 86, 42, 4, 28, 95, 15]
Task: 8, Labels:[61, 27, 77, 87, 31, 74, 55, 79, 70, 36]
Task: 9, Labels:[1, 6, 39, 96, 37, 83, 59, 47, 62, 78]
buffer has 2000 slots
-----------run 7 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.459373, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.835314, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.636453, running train acc: 0.167
==>>> it: 101, mem avg. loss: 2.443826, running mem acc: 0.221
==>>> it: 201, avg. loss: 2.372414, running train acc: 0.221
==>>> it: 201, mem avg. loss: 2.221057, running mem acc: 0.249
==>>> it: 301, avg. loss: 2.238662, running train acc: 0.249
==>>> it: 301, mem avg. loss: 2.076012, running mem acc: 0.290
==>>> it: 401, avg. loss: 2.149004, running train acc: 0.275
==>>> it: 401, mem avg. loss: 2.008679, running mem acc: 0.310
[0.426 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 8.656170, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.082384, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.808567, running train acc: 0.248
==>>> it: 101, mem avg. loss: 2.524130, running mem acc: 0.279
==>>> it: 201, avg. loss: 2.372859, running train acc: 0.328
==>>> it: 201, mem avg. loss: 2.356813, running mem acc: 0.303
==>>> it: 301, avg. loss: 2.179746, running train acc: 0.368
==>>> it: 301, mem avg. loss: 2.182154, running mem acc: 0.351
==>>> it: 401, avg. loss: 2.042269, running train acc: 0.405
==>>> it: 401, mem avg. loss: 1.971062, running mem acc: 0.414
[0.128 0.596 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.871606, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.848329, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.755503, running train acc: 0.275
==>>> it: 101, mem avg. loss: 1.130463, running mem acc: 0.708
==>>> it: 201, avg. loss: 2.320084, running train acc: 0.331
==>>> it: 201, mem avg. loss: 1.052767, running mem acc: 0.714
==>>> it: 301, avg. loss: 2.151854, running train acc: 0.358
==>>> it: 301, mem avg. loss: 0.974725, running mem acc: 0.729
==>>> it: 401, avg. loss: 2.010622, running train acc: 0.391
==>>> it: 401, mem avg. loss: 0.903236, running mem acc: 0.748
[0.053 0.387 0.509 0. 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.828763, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.402969, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.766981, running train acc: 0.265
==>>> it: 101, mem avg. loss: 1.091205, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.357346, running train acc: 0.325
==>>> it: 201, mem avg. loss: 1.010492, running mem acc: 0.732
==>>> it: 301, avg. loss: 2.156860, running train acc: 0.357
==>>> it: 301, mem avg. loss: 0.940323, running mem acc: 0.741
==>>> it: 401, avg. loss: 2.036664, running train acc: 0.377
==>>> it: 401, mem avg. loss: 0.867772, running mem acc: 0.757
[0.039 0.322 0.239 0.483 0. 0. 0. 0. 0. 0. ]
-----------run 7 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.768903, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.273910, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.519102, running train acc: 0.313
==>>> it: 101, mem avg. loss: 1.072551, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.076965, running train acc: 0.390
==>>> it: 201, mem avg. loss: 1.002093, running mem acc: 0.712
==>>> it: 301, avg. loss: 1.867073, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.902792, running mem acc: 0.741
==>>> it: 401, avg. loss: 1.753686, running train acc: 0.465
==>>> it: 401, mem avg. loss: 0.813836, running mem acc: 0.770
[0.029 0.275 0.175 0.108 0.609 0. 0. 0. 0. 0. ]
-----------run 7 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.825694, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.990649, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.589592, running train acc: 0.307
==>>> it: 101, mem avg. loss: 1.009668, running mem acc: 0.750
==>>> it: 201, avg. loss: 2.157351, running train acc: 0.372
==>>> it: 201, mem avg. loss: 0.852519, running mem acc: 0.776
==>>> it: 301, avg. loss: 1.974858, running train acc: 0.407
==>>> it: 301, mem avg. loss: 0.758699, running mem acc: 0.795
==>>> it: 401, avg. loss: 1.855935, running train acc: 0.434
==>>> it: 401, mem avg. loss: 0.676005, running mem acc: 0.817
[0.032 0.265 0.168 0.096 0.269 0.534 0. 0. 0. 0. ]
-----------run 7 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.497041, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.687015, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.755907, running train acc: 0.259
==>>> it: 101, mem avg. loss: 1.089326, running mem acc: 0.716
==>>> it: 201, avg. loss: 2.300543, running train acc: 0.336
==>>> it: 201, mem avg. loss: 0.897815, running mem acc: 0.764
==>>> it: 301, avg. loss: 2.109253, running train acc: 0.382
==>>> it: 301, mem avg. loss: 0.781966, running mem acc: 0.794
==>>> it: 401, avg. loss: 1.981141, running train acc: 0.411
==>>> it: 401, mem avg. loss: 0.707089, running mem acc: 0.815
[0.032 0.27 0.16 0.059 0.196 0.223 0.543 0. 0. 0. ]
-----------run 7 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.202760, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.291011, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.720132, running train acc: 0.277
==>>> it: 101, mem avg. loss: 0.982876, running mem acc: 0.758
==>>> it: 201, avg. loss: 2.276543, running train acc: 0.359
==>>> it: 201, mem avg. loss: 0.842555, running mem acc: 0.783
==>>> it: 301, avg. loss: 2.081342, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.757955, running mem acc: 0.803
==>>> it: 401, avg. loss: 1.940704, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.687171, running mem acc: 0.821
[0.014 0.275 0.144 0.05 0.193 0.136 0.19 0.548 0. 0. ]
-----------run 7 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.547227, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.666669, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.569132, running train acc: 0.343
==>>> it: 101, mem avg. loss: 0.998925, running mem acc: 0.745
==>>> it: 201, avg. loss: 2.173751, running train acc: 0.398
==>>> it: 201, mem avg. loss: 0.847100, running mem acc: 0.774
==>>> it: 301, avg. loss: 1.995588, running train acc: 0.421
==>>> it: 301, mem avg. loss: 0.744859, running mem acc: 0.801
==>>> it: 401, avg. loss: 1.884755, running train acc: 0.443
==>>> it: 401, mem avg. loss: 0.677147, running mem acc: 0.822
[0.014 0.228 0.077 0.058 0.198 0.132 0.175 0.108 0.542 0. ]
-----------run 7 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.805775, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.646891, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.582336, running train acc: 0.323
==>>> it: 101, mem avg. loss: 1.000322, running mem acc: 0.742
==>>> it: 201, avg. loss: 2.110834, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.790720, running mem acc: 0.794
==>>> it: 301, avg. loss: 1.947144, running train acc: 0.425
==>>> it: 301, mem avg. loss: 0.700833, running mem acc: 0.814
==>>> it: 401, avg. loss: 1.826645, running train acc: 0.448
==>>> it: 401, mem avg. loss: 0.646136, running mem acc: 0.828
[0.023 0.182 0.084 0.074 0.155 0.122 0.149 0.081 0.14 0.55 ]
-----------run 7-----------avg_end_acc 0.156-----------train time 2527.3951234817505
Task: 0, Labels:[37, 28, 66, 70, 49, 24, 39, 80, 86, 12]
Task: 1, Labels:[85, 34, 52, 82, 91, 48, 2, 23, 17, 58]
Task: 2, Labels:[18, 44, 0, 65, 92, 95, 25, 33, 36, 41]
Task: 3, Labels:[67, 78, 29, 81, 13, 54, 15, 21, 99, 77]
Task: 4, Labels:[83, 32, 87, 43, 68, 69, 10, 71, 60, 89]
Task: 5, Labels:[57, 96, 27, 50, 90, 72, 53, 4, 40, 19]
Task: 6, Labels:[38, 31, 55, 8, 61, 73, 16, 22, 79, 7]
Task: 7, Labels:[42, 26, 76, 35, 63, 3, 93, 64, 88, 62]
Task: 8, Labels:[1, 56, 5, 30, 9, 45, 51, 98, 11, 20]
Task: 9, Labels:[6, 59, 84, 97, 14, 94, 47, 74, 75, 46]
buffer has 2000 slots
-----------run 8 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.483299, running train acc: 0.050
==>>> it: 1, mem avg. loss: 3.162857, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.519204, running train acc: 0.224
==>>> it: 101, mem avg. loss: 2.351665, running mem acc: 0.237
==>>> it: 201, avg. loss: 2.194489, running train acc: 0.269
==>>> it: 201, mem avg. loss: 2.104952, running mem acc: 0.285
==>>> it: 301, avg. loss: 2.028299, running train acc: 0.320
==>>> it: 301, mem avg. loss: 1.946598, running mem acc: 0.331
==>>> it: 401, avg. loss: 1.926344, running train acc: 0.348
==>>> it: 401, mem avg. loss: 1.810042, running mem acc: 0.372
[0.495 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.746885, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.124197, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.676889, running train acc: 0.260
==>>> it: 101, mem avg. loss: 2.018450, running mem acc: 0.405
==>>> it: 201, avg. loss: 2.271834, running train acc: 0.339
==>>> it: 201, mem avg. loss: 1.836719, running mem acc: 0.444
==>>> it: 301, avg. loss: 2.050705, running train acc: 0.394
==>>> it: 301, mem avg. loss: 1.739310, running mem acc: 0.466
==>>> it: 401, avg. loss: 1.919698, running train acc: 0.424
==>>> it: 401, mem avg. loss: 1.618645, running mem acc: 0.500
[0.309 0.587 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.087857, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.624774, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.795800, running train acc: 0.251
==>>> it: 101, mem avg. loss: 1.337885, running mem acc: 0.650
==>>> it: 201, avg. loss: 2.406708, running train acc: 0.293
==>>> it: 201, mem avg. loss: 1.240008, running mem acc: 0.663
==>>> it: 301, avg. loss: 2.229786, running train acc: 0.327
==>>> it: 301, mem avg. loss: 1.141159, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.105185, running train acc: 0.352
==>>> it: 401, mem avg. loss: 1.024746, running mem acc: 0.718
[0.137 0.344 0.519 0. 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.177635, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.589542, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.818896, running train acc: 0.240
==>>> it: 101, mem avg. loss: 1.142737, running mem acc: 0.689
==>>> it: 201, avg. loss: 2.424950, running train acc: 0.303
==>>> it: 201, mem avg. loss: 1.110081, running mem acc: 0.686
==>>> it: 301, avg. loss: 2.261410, running train acc: 0.331
==>>> it: 301, mem avg. loss: 1.106698, running mem acc: 0.685
==>>> it: 401, avg. loss: 2.162844, running train acc: 0.346
==>>> it: 401, mem avg. loss: 1.060699, running mem acc: 0.699
[0.117 0.278 0.21 0.497 0. 0. 0. 0. 0. 0. ]
-----------run 8 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.103175, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.657426, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.629313, running train acc: 0.299
==>>> it: 101, mem avg. loss: 1.183270, running mem acc: 0.672
==>>> it: 201, avg. loss: 2.152890, running train acc: 0.379
==>>> it: 201, mem avg. loss: 1.129015, running mem acc: 0.686
==>>> it: 301, avg. loss: 1.965346, running train acc: 0.418
==>>> it: 301, mem avg. loss: 1.049710, running mem acc: 0.708
==>>> it: 401, avg. loss: 1.823467, running train acc: 0.451
==>>> it: 401, mem avg. loss: 0.954739, running mem acc: 0.734
[0.084 0.253 0.223 0.108 0.581 0. 0. 0. 0. 0. ]
-----------run 8 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.033952, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.268862, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.708922, running train acc: 0.274
==>>> it: 101, mem avg. loss: 1.070676, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.321157, running train acc: 0.328
==>>> it: 201, mem avg. loss: 0.968164, running mem acc: 0.744
==>>> it: 301, avg. loss: 2.141550, running train acc: 0.363
==>>> it: 301, mem avg. loss: 0.855893, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.027804, running train acc: 0.383
==>>> it: 401, mem avg. loss: 0.790531, running mem acc: 0.788
[0.045 0.194 0.105 0.089 0.214 0.479 0. 0. 0. 0. ]
-----------run 8 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.469357, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.513997, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.777180, running train acc: 0.247
==>>> it: 101, mem avg. loss: 1.124619, running mem acc: 0.715
==>>> it: 201, avg. loss: 2.334727, running train acc: 0.317
==>>> it: 201, mem avg. loss: 0.996187, running mem acc: 0.740
==>>> it: 301, avg. loss: 2.152750, running train acc: 0.349
==>>> it: 301, mem avg. loss: 0.872538, running mem acc: 0.772
==>>> it: 401, avg. loss: 2.039560, running train acc: 0.373
==>>> it: 401, mem avg. loss: 0.779483, running mem acc: 0.796
[0.039 0.242 0.094 0.079 0.205 0.124 0.541 0. 0. 0. ]
-----------run 8 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.360435, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.196974, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.718455, running train acc: 0.285
==>>> it: 101, mem avg. loss: 1.078434, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.277845, running train acc: 0.354
==>>> it: 201, mem avg. loss: 0.886643, running mem acc: 0.765
==>>> it: 301, avg. loss: 2.105392, running train acc: 0.382
==>>> it: 301, mem avg. loss: 0.801410, running mem acc: 0.787
==>>> it: 401, avg. loss: 1.976321, running train acc: 0.415
==>>> it: 401, mem avg. loss: 0.719056, running mem acc: 0.810
[0.055 0.204 0.085 0.093 0.201 0.103 0.114 0.477 0. 0. ]
-----------run 8 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.354440, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.749486, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.531024, running train acc: 0.358
==>>> it: 101, mem avg. loss: 1.045954, running mem acc: 0.725
==>>> it: 201, avg. loss: 2.095567, running train acc: 0.407
==>>> it: 201, mem avg. loss: 0.872953, running mem acc: 0.761
==>>> it: 301, avg. loss: 1.946133, running train acc: 0.433
==>>> it: 301, mem avg. loss: 0.782512, running mem acc: 0.786
==>>> it: 401, avg. loss: 1.847266, running train acc: 0.457
==>>> it: 401, mem avg. loss: 0.698494, running mem acc: 0.811
[0.049 0.171 0.08 0.064 0.201 0.089 0.058 0.138 0.562 0. ]
-----------run 8 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.762014, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.181871, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.502903, running train acc: 0.339
==>>> it: 101, mem avg. loss: 0.936425, running mem acc: 0.744
==>>> it: 201, avg. loss: 2.034085, running train acc: 0.413
==>>> it: 201, mem avg. loss: 0.768617, running mem acc: 0.784
==>>> it: 301, avg. loss: 1.865874, running train acc: 0.447
==>>> it: 301, mem avg. loss: 0.696831, running mem acc: 0.807
==>>> it: 401, avg. loss: 1.765241, running train acc: 0.467
==>>> it: 401, mem avg. loss: 0.623293, running mem acc: 0.827
[0.053 0.14 0.104 0.044 0.195 0.081 0.08 0.08 0.158 0.585]
-----------run 8-----------avg_end_acc 0.152-----------train time 2543.4716382026672
Task: 0, Labels:[21, 4, 44, 77, 48, 75, 90, 40, 81, 16]
Task: 1, Labels:[8, 22, 42, 41, 35, 62, 7, 98, 6, 24]
Task: 2, Labels:[27, 80, 71, 96, 47, 33, 92, 31, 61, 91]
Task: 3, Labels:[55, 52, 79, 58, 43, 65, 0, 94, 46, 26]
Task: 4, Labels:[38, 53, 73, 74, 45, 9, 25, 82, 57, 56]
Task: 5, Labels:[68, 99, 60, 29, 83, 5, 95, 64, 12, 63]
Task: 6, Labels:[70, 18, 59, 51, 69, 39, 67, 97, 11, 13]
Task: 7, Labels:[89, 86, 10, 36, 30, 28, 15, 19, 23, 87]
Task: 8, Labels:[85, 72, 76, 20, 88, 93, 66, 34, 84, 32]
Task: 9, Labels:[3, 78, 17, 37, 54, 49, 50, 14, 1, 2]
buffer has 2000 slots
-----------run 9 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.120544, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.617083, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.613127, running train acc: 0.179
==>>> it: 101, mem avg. loss: 2.371817, running mem acc: 0.231
==>>> it: 201, avg. loss: 2.359849, running train acc: 0.231
==>>> it: 201, mem avg. loss: 2.172538, running mem acc: 0.271
==>>> it: 301, avg. loss: 2.191753, running train acc: 0.274
==>>> it: 301, mem avg. loss: 2.013949, running mem acc: 0.317
==>>> it: 401, avg. loss: 2.093562, running train acc: 0.296
==>>> it: 401, mem avg. loss: 1.916984, running mem acc: 0.344
[0.441 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.098277, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.378008, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.935359, running train acc: 0.172
==>>> it: 101, mem avg. loss: 2.261704, running mem acc: 0.351
==>>> it: 201, avg. loss: 2.525982, running train acc: 0.244
==>>> it: 201, mem avg. loss: 2.048927, running mem acc: 0.393
==>>> it: 301, avg. loss: 2.340217, running train acc: 0.290
==>>> it: 301, mem avg. loss: 1.847029, running mem acc: 0.455
==>>> it: 401, avg. loss: 2.228837, running train acc: 0.313
==>>> it: 401, mem avg. loss: 1.692535, running mem acc: 0.497
[0.135 0.442 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.423477, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.247947, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.640900, running train acc: 0.291
==>>> it: 101, mem avg. loss: 1.400566, running mem acc: 0.616
==>>> it: 201, avg. loss: 2.180233, running train acc: 0.359
==>>> it: 201, mem avg. loss: 1.241131, running mem acc: 0.649
==>>> it: 301, avg. loss: 1.982572, running train acc: 0.406
==>>> it: 301, mem avg. loss: 1.170756, running mem acc: 0.668
==>>> it: 401, avg. loss: 1.869420, running train acc: 0.430
==>>> it: 401, mem avg. loss: 1.099925, running mem acc: 0.687
[0.082 0.187 0.569 0. 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.426653, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.279176, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.612793, running train acc: 0.328
==>>> it: 101, mem avg. loss: 1.240951, running mem acc: 0.670
==>>> it: 201, avg. loss: 2.224056, running train acc: 0.376
==>>> it: 201, mem avg. loss: 1.180653, running mem acc: 0.672
==>>> it: 301, avg. loss: 2.058625, running train acc: 0.405
==>>> it: 301, mem avg. loss: 1.066736, running mem acc: 0.696
==>>> it: 401, avg. loss: 1.935622, running train acc: 0.432
==>>> it: 401, mem avg. loss: 0.968292, running mem acc: 0.723
[0.072 0.114 0.203 0.605 0. 0. 0. 0. 0. 0. ]
-----------run 9 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.142316, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.576928, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.625875, running train acc: 0.316
==>>> it: 101, mem avg. loss: 1.087335, running mem acc: 0.707
==>>> it: 201, avg. loss: 2.202543, running train acc: 0.377
==>>> it: 201, mem avg. loss: 0.943995, running mem acc: 0.743
==>>> it: 301, avg. loss: 2.009743, running train acc: 0.422
==>>> it: 301, mem avg. loss: 0.828864, running mem acc: 0.773
==>>> it: 401, avg. loss: 1.866760, running train acc: 0.450
==>>> it: 401, mem avg. loss: 0.731779, running mem acc: 0.801
[0.048 0.142 0.156 0.276 0.607 0. 0. 0. 0. 0. ]
-----------run 9 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.929319, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.760754, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.464731, running train acc: 0.348
==>>> it: 101, mem avg. loss: 0.861999, running mem acc: 0.771
==>>> it: 201, avg. loss: 1.996821, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.775646, running mem acc: 0.785
==>>> it: 301, avg. loss: 1.811338, running train acc: 0.469
==>>> it: 301, mem avg. loss: 0.699306, running mem acc: 0.805
==>>> it: 401, avg. loss: 1.693592, running train acc: 0.498
==>>> it: 401, mem avg. loss: 0.646784, running mem acc: 0.820
[0.037 0.096 0.128 0.275 0.234 0.592 0. 0. 0. 0. ]
-----------run 9 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.435322, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.905392, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.514953, running train acc: 0.339
==>>> it: 101, mem avg. loss: 0.931387, running mem acc: 0.761
==>>> it: 201, avg. loss: 2.083657, running train acc: 0.398
==>>> it: 201, mem avg. loss: 0.801827, running mem acc: 0.788
==>>> it: 301, avg. loss: 1.906918, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.709797, running mem acc: 0.806
==>>> it: 401, avg. loss: 1.780945, running train acc: 0.472
==>>> it: 401, mem avg. loss: 0.635117, running mem acc: 0.825
[0.046 0.095 0.123 0.239 0.209 0.225 0.597 0. 0. 0. ]
-----------run 9 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.596126, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.767127, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.357536, running train acc: 0.386
==>>> it: 101, mem avg. loss: 0.923270, running mem acc: 0.765
==>>> it: 201, avg. loss: 1.944545, running train acc: 0.453
==>>> it: 201, mem avg. loss: 0.764019, running mem acc: 0.796
==>>> it: 301, avg. loss: 1.758554, running train acc: 0.489
==>>> it: 301, mem avg. loss: 0.674325, running mem acc: 0.819
==>>> it: 401, avg. loss: 1.665297, running train acc: 0.507
==>>> it: 401, mem avg. loss: 0.608821, running mem acc: 0.835
[0.057 0.081 0.132 0.24 0.181 0.163 0.198 0.623 0. 0. ]
-----------run 9 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.618112, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.294658, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.586261, running train acc: 0.325
==>>> it: 101, mem avg. loss: 0.867026, running mem acc: 0.778
==>>> it: 201, avg. loss: 2.172870, running train acc: 0.367
==>>> it: 201, mem avg. loss: 0.739413, running mem acc: 0.803
==>>> it: 301, avg. loss: 2.006553, running train acc: 0.401
==>>> it: 301, mem avg. loss: 0.647727, running mem acc: 0.831
==>>> it: 401, avg. loss: 1.904231, running train acc: 0.420
==>>> it: 401, mem avg. loss: 0.591324, running mem acc: 0.847
[0.032 0.075 0.12 0.2 0.178 0.164 0.144 0.201 0.552 0. ]
-----------run 9 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.737688, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.264111, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.545728, running train acc: 0.321
==>>> it: 101, mem avg. loss: 0.891467, running mem acc: 0.766
==>>> it: 201, avg. loss: 2.128180, running train acc: 0.384
==>>> it: 201, mem avg. loss: 0.738494, running mem acc: 0.806
==>>> it: 301, avg. loss: 1.949013, running train acc: 0.419
==>>> it: 301, mem avg. loss: 0.652600, running mem acc: 0.828
==>>> it: 401, avg. loss: 1.836256, running train acc: 0.437
==>>> it: 401, mem avg. loss: 0.594830, running mem acc: 0.842
[0.028 0.065 0.133 0.2 0.187 0.133 0.089 0.147 0.156 0.542]
-----------run 9-----------avg_end_acc 0.168-----------train time 2598.2259385585785
Task: 0, Labels:[57, 17, 3, 47, 0, 94, 66, 56, 44, 7]
Task: 1, Labels:[38, 10, 23, 18, 14, 86, 67, 87, 52, 5]
Task: 2, Labels:[83, 98, 76, 96, 49, 20, 58, 21, 22, 40]
Task: 3, Labels:[36, 33, 41, 92, 88, 9, 95, 11, 28, 62]
Task: 4, Labels:[25, 91, 2, 46, 89, 8, 78, 72, 79, 26]
Task: 5, Labels:[99, 37, 15, 48, 90, 24, 59, 80, 93, 65]
Task: 6, Labels:[53, 6, 27, 51, 60, 73, 34, 64, 35, 81]
Task: 7, Labels:[12, 77, 32, 74, 61, 43, 54, 13, 50, 68]
Task: 8, Labels:[97, 19, 1, 85, 63, 84, 75, 30, 42, 71]
Task: 9, Labels:[39, 29, 45, 31, 55, 16, 4, 82, 69, 70]
buffer has 2000 slots
-----------run 10 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.517881, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.687700, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.535598, running train acc: 0.210
==>>> it: 101, mem avg. loss: 2.307323, running mem acc: 0.248
==>>> it: 201, avg. loss: 2.226912, running train acc: 0.267
==>>> it: 201, mem avg. loss: 2.076577, running mem acc: 0.295
==>>> it: 301, avg. loss: 2.039200, running train acc: 0.316
==>>> it: 301, mem avg. loss: 1.866356, running mem acc: 0.355
==>>> it: 401, avg. loss: 1.913767, running train acc: 0.353
==>>> it: 401, mem avg. loss: 1.736595, running mem acc: 0.395
[0.541 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.186578, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.068785, running mem acc: 0.500
==>>> it: 101, avg. loss: 2.792997, running train acc: 0.226
==>>> it: 101, mem avg. loss: 2.060229, running mem acc: 0.413
==>>> it: 201, avg. loss: 2.409643, running train acc: 0.289
==>>> it: 201, mem avg. loss: 1.809448, running mem acc: 0.462
==>>> it: 301, avg. loss: 2.233301, running train acc: 0.324
==>>> it: 301, mem avg. loss: 1.631063, running mem acc: 0.509
==>>> it: 401, avg. loss: 2.090543, running train acc: 0.356
==>>> it: 401, mem avg. loss: 1.455732, running mem acc: 0.561
[0.208 0.473 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.591722, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.173227, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.624443, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.225916, running mem acc: 0.676
==>>> it: 201, avg. loss: 2.130590, running train acc: 0.388
==>>> it: 201, mem avg. loss: 1.104724, running mem acc: 0.698
==>>> it: 301, avg. loss: 1.918651, running train acc: 0.432
==>>> it: 301, mem avg. loss: 1.027460, running mem acc: 0.718
==>>> it: 401, avg. loss: 1.799968, running train acc: 0.457
==>>> it: 401, mem avg. loss: 0.945020, running mem acc: 0.738
[0.094 0.193 0.59 0. 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.920829, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.336894, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.617306, running train acc: 0.289
==>>> it: 101, mem avg. loss: 1.105894, running mem acc: 0.717
==>>> it: 201, avg. loss: 2.187614, running train acc: 0.351
==>>> it: 201, mem avg. loss: 0.953943, running mem acc: 0.741
==>>> it: 301, avg. loss: 2.007888, running train acc: 0.386
==>>> it: 301, mem avg. loss: 0.885073, running mem acc: 0.756
==>>> it: 401, avg. loss: 1.919051, running train acc: 0.406
==>>> it: 401, mem avg. loss: 0.811270, running mem acc: 0.774
[0.066 0.131 0.37 0.516 0. 0. 0. 0. 0. 0. ]
-----------run 10 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.489842, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.540555, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.854046, running train acc: 0.234
==>>> it: 101, mem avg. loss: 1.208285, running mem acc: 0.681
==>>> it: 201, avg. loss: 2.528900, running train acc: 0.270
==>>> it: 201, mem avg. loss: 1.134745, running mem acc: 0.691
==>>> it: 301, avg. loss: 2.356390, running train acc: 0.298
==>>> it: 301, mem avg. loss: 1.053090, running mem acc: 0.712
==>>> it: 401, avg. loss: 2.259578, running train acc: 0.315
==>>> it: 401, mem avg. loss: 0.980200, running mem acc: 0.729
[0.027 0.128 0.241 0.19 0.459 0. 0. 0. 0. 0. ]
-----------run 10 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.058362, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.813412, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.737207, running train acc: 0.278
==>>> it: 101, mem avg. loss: 1.296762, running mem acc: 0.648
==>>> it: 201, avg. loss: 2.309689, running train acc: 0.347
==>>> it: 201, mem avg. loss: 1.208238, running mem acc: 0.659
==>>> it: 301, avg. loss: 2.141962, running train acc: 0.372
==>>> it: 301, mem avg. loss: 1.106809, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.040643, running train acc: 0.389
==>>> it: 401, mem avg. loss: 1.003296, running mem acc: 0.711
[0.027 0.113 0.18 0.184 0.121 0.531 0. 0. 0. 0. ]
-----------run 10 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.126886, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.351078, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.473592, running train acc: 0.357
==>>> it: 101, mem avg. loss: 1.224211, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.061169, running train acc: 0.427
==>>> it: 201, mem avg. loss: 1.084968, running mem acc: 0.696
==>>> it: 301, avg. loss: 1.892117, running train acc: 0.454
==>>> it: 301, mem avg. loss: 0.957290, running mem acc: 0.729
==>>> it: 401, avg. loss: 1.791067, running train acc: 0.473
==>>> it: 401, mem avg. loss: 0.845054, running mem acc: 0.762
[0.019 0.088 0.133 0.144 0.078 0.159 0.593 0. 0. 0. ]
-----------run 10 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.421838, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.863021, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.608776, running train acc: 0.303
==>>> it: 101, mem avg. loss: 1.060748, running mem acc: 0.733
==>>> it: 201, avg. loss: 2.157215, running train acc: 0.376
==>>> it: 201, mem avg. loss: 0.911113, running mem acc: 0.765
==>>> it: 301, avg. loss: 1.960685, running train acc: 0.415
==>>> it: 301, mem avg. loss: 0.802939, running mem acc: 0.789
==>>> it: 401, avg. loss: 1.859882, running train acc: 0.438
==>>> it: 401, mem avg. loss: 0.727116, running mem acc: 0.807
[0.031 0.095 0.146 0.108 0.061 0.081 0.169 0.516 0. 0. ]
-----------run 10 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.758717, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.320936, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.485607, running train acc: 0.335
==>>> it: 101, mem avg. loss: 0.953603, running mem acc: 0.756
==>>> it: 201, avg. loss: 2.028043, running train acc: 0.418
==>>> it: 201, mem avg. loss: 0.792594, running mem acc: 0.789
==>>> it: 301, avg. loss: 1.834099, running train acc: 0.458
==>>> it: 301, mem avg. loss: 0.716500, running mem acc: 0.807
==>>> it: 401, avg. loss: 1.719138, running train acc: 0.484
==>>> it: 401, mem avg. loss: 0.660174, running mem acc: 0.821
[0.032 0.078 0.143 0.071 0.072 0.115 0.175 0.184 0.63 0. ]
-----------run 10 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.149339, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.194741, running mem acc: 1.000
==>>> it: 101, avg. loss: 2.620654, running train acc: 0.317
==>>> it: 101, mem avg. loss: 0.933220, running mem acc: 0.762
==>>> it: 201, avg. loss: 2.158245, running train acc: 0.400
==>>> it: 201, mem avg. loss: 0.813508, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.985122, running train acc: 0.426
==>>> it: 301, mem avg. loss: 0.709405, running mem acc: 0.812
==>>> it: 401, avg. loss: 1.841146, running train acc: 0.458
==>>> it: 401, mem avg. loss: 0.646138, running mem acc: 0.831
[0.019 0.017 0.09 0.081 0.065 0.084 0.157 0.181 0.201 0.55 ]
-----------run 10-----------avg_end_acc 0.14450000000000002-----------train time 2629.6844050884247
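(Side note on reading the log: the `avg_end_acc` printed after each run appears to be simply the mean of the final per-task accuracy vector over all 10 tasks — this checks out for runs 7 through 10 above. A minimal sketch using the run-10 numbers; the variable names here are illustrative, not from the repo:)

```python
# Final per-task accuracy vector printed after run 10's last task (copied from the log above).
final_acc = [0.019, 0.017, 0.09, 0.081, 0.065, 0.084, 0.157, 0.181, 0.201, 0.55]

# avg_end_acc appears to be the plain mean over all 10 tasks.
avg_end_acc = sum(final_acc) / len(final_acc)
print(avg_end_acc)  # ~0.1445, matching "run 10 ... avg_end_acc 0.14450000000000002"
```

So the ~0.15 end accuracy reported per run is an average over all tasks, dominated by the low accuracies on early tasks (heavy forgetting), not the ~0.5 accuracy on the most recent task.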
Task: 0, Labels:[85, 6, 88, 31, 84, 91, 75, 49, 69, 76]
Task: 1, Labels:[3, 99, 24, 78, 32, 71, 81, 0, 63, 44]
Task: 2, Labels:[8, 13, 51, 61, 89, 20, 82, 16, 64, 55]
Task: 3, Labels:[30, 93, 95, 25, 57, 58, 83, 41, 15, 79]
Task: 4, Labels:[33, 72, 18, 35, 14, 23, 22, 87, 80, 26]
Task: 5, Labels:[17, 27, 56, 43, 54, 29, 97, 65, 50, 46]
Task: 6, Labels:[96, 10, 90, 12, 21, 19, 36, 67, 4, 34]
Task: 7, Labels:[40, 9, 53, 38, 37, 45, 62, 94, 86, 60]
Task: 8, Labels:[59, 98, 66, 68, 2, 7, 1, 77, 11, 92]
Task: 9, Labels:[52, 42, 48, 73, 74, 39, 5, 47, 28, 70]
buffer has 2000 slots
-----------run 11 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.189835, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.168808, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.541171, running train acc: 0.200
==>>> it: 101, mem avg. loss: 2.264231, running mem acc: 0.250
==>>> it: 201, avg. loss: 2.203083, running train acc: 0.269
==>>> it: 201, mem avg. loss: 2.034104, running mem acc: 0.297
==>>> it: 301, avg. loss: 2.010828, running train acc: 0.319
==>>> it: 301, mem avg. loss: 1.830894, running mem acc: 0.363
==>>> it: 401, avg. loss: 1.912500, running train acc: 0.353
==>>> it: 401, mem avg. loss: 1.690996, running mem acc: 0.407
[0.559 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.024690, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.321640, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.726428, running train acc: 0.232
==>>> it: 101, mem avg. loss: 1.743590, running mem acc: 0.507
==>>> it: 201, avg. loss: 2.359242, running train acc: 0.293
==>>> it: 201, mem avg. loss: 1.536989, running mem acc: 0.543
==>>> it: 301, avg. loss: 2.178935, running train acc: 0.330
==>>> it: 301, mem avg. loss: 1.378923, running mem acc: 0.593
==>>> it: 401, avg. loss: 2.063812, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.266082, running mem acc: 0.624
[0.233 0.429 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.537317, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.489530, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.780185, running train acc: 0.259
==>>> it: 101, mem avg. loss: 1.223219, running mem acc: 0.679
==>>> it: 201, avg. loss: 2.340605, running train acc: 0.335
==>>> it: 201, mem avg. loss: 1.151194, running mem acc: 0.694
==>>> it: 301, avg. loss: 2.171662, running train acc: 0.369
==>>> it: 301, mem avg. loss: 1.096604, running mem acc: 0.703
==>>> it: 401, avg. loss: 2.054652, running train acc: 0.393
==>>> it: 401, mem avg. loss: 1.041396, running mem acc: 0.716
[0.153 0.211 0.497 0. 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.506497, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.359183, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.771453, running train acc: 0.252
==>>> it: 101, mem avg. loss: 1.276556, running mem acc: 0.666
==>>> it: 201, avg. loss: 2.334628, running train acc: 0.310
==>>> it: 201, mem avg. loss: 1.224476, running mem acc: 0.672
==>>> it: 301, avg. loss: 2.153257, running train acc: 0.341
==>>> it: 301, mem avg. loss: 1.135418, running mem acc: 0.692
==>>> it: 401, avg. loss: 2.029568, running train acc: 0.374
==>>> it: 401, mem avg. loss: 1.015071, running mem acc: 0.724
[0.118 0.149 0.198 0.488 0. 0. 0. 0. 0. 0. ]
-----------run 11 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.613126, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.493856, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.748457, running train acc: 0.265
==>>> it: 101, mem avg. loss: 1.113203, running mem acc: 0.736
==>>> it: 201, avg. loss: 2.386869, running train acc: 0.315
==>>> it: 201, mem avg. loss: 1.039944, running mem acc: 0.731
==>>> it: 301, avg. loss: 2.208878, running train acc: 0.352
==>>> it: 301, mem avg. loss: 0.946079, running mem acc: 0.750
==>>> it: 401, avg. loss: 2.098850, running train acc: 0.371
==>>> it: 401, mem avg. loss: 0.868129, running mem acc: 0.771
[0.131 0.073 0.178 0.229 0.465 0. 0. 0. 0. 0. ]
-----------run 11 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.867691, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.240804, running mem acc: 0.800
==>>> it: 101, avg. loss: 2.717609, running train acc: 0.308
==>>> it: 101, mem avg. loss: 1.159337, running mem acc: 0.691
==>>> it: 201, avg. loss: 2.306640, running train acc: 0.358
==>>> it: 201, mem avg. loss: 0.999113, running mem acc: 0.729
==>>> it: 301, avg. loss: 2.145154, running train acc: 0.383
==>>> it: 301, mem avg. loss: 0.912478, running mem acc: 0.752
==>>> it: 401, avg. loss: 2.025131, running train acc: 0.410
==>>> it: 401, mem avg. loss: 0.830844, running mem acc: 0.773
[0.103 0.165 0.184 0.15 0.107 0.55 0. 0. 0. 0. ]
-----------run 11 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.392546, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.694815, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.582074, running train acc: 0.321
==>>> it: 101, mem avg. loss: 1.081163, running mem acc: 0.710
==>>> it: 201, avg. loss: 2.213374, running train acc: 0.361
==>>> it: 201, mem avg. loss: 0.957886, running mem acc: 0.728
==>>> it: 301, avg. loss: 2.027303, running train acc: 0.406
==>>> it: 301, mem avg. loss: 0.852747, running mem acc: 0.763
==>>> it: 401, avg. loss: 1.927963, running train acc: 0.424
==>>> it: 401, mem avg. loss: 0.775535, running mem acc: 0.788
[0.065 0.11 0.161 0.148 0.103 0.198 0.508 0. 0. 0. ]
-----------run 11 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.169339, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.504658, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.409145, running train acc: 0.389
==>>> it: 101, mem avg. loss: 0.973405, running mem acc: 0.759
==>>> it: 201, avg. loss: 1.931373, running train acc: 0.466
==>>> it: 201, mem avg. loss: 0.797592, running mem acc: 0.794
==>>> it: 301, avg. loss: 1.741714, running train acc: 0.503
==>>> it: 301, mem avg. loss: 0.706490, running mem acc: 0.812
==>>> it: 401, avg. loss: 1.609237, running train acc: 0.535
==>>> it: 401, mem avg. loss: 0.643387, running mem acc: 0.826
[0.064 0.079 0.131 0.112 0.08 0.144 0.181 0.639 0. 0. ]
-----------run 11 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.843805, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.639262, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.537906, running train acc: 0.325
==>>> it: 101, mem avg. loss: 0.861095, running mem acc: 0.778
==>>> it: 201, avg. loss: 2.086805, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.709975, running mem acc: 0.810
==>>> it: 301, avg. loss: 1.940486, running train acc: 0.424
==>>> it: 301, mem avg. loss: 0.627988, running mem acc: 0.832
==>>> it: 401, avg. loss: 1.832742, running train acc: 0.446
==>>> it: 401, mem avg. loss: 0.583471, running mem acc: 0.842
[0.054 0.075 0.111 0.103 0.084 0.068 0.093 0.269 0.534 0. ]
-----------run 11 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.101140, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.135612, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.375093, running train acc: 0.392
==>>> it: 101, mem avg. loss: 0.915956, running mem acc: 0.749
==>>> it: 201, avg. loss: 1.868183, running train acc: 0.481
==>>> it: 201, mem avg. loss: 0.742477, running mem acc: 0.791
==>>> it: 301, avg. loss: 1.669394, running train acc: 0.524
==>>> it: 301, mem avg. loss: 0.642697, running mem acc: 0.820
==>>> it: 401, avg. loss: 1.578260, running train acc: 0.540
==>>> it: 401, mem avg. loss: 0.574483, running mem acc: 0.839
[0.083 0.093 0.141 0.097 0.075 0.092 0.067 0.27 0.103 0.634]
-----------run 11-----------avg_end_acc 0.16549999999999998-----------train time 2605.7747802734375
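As a side note on reading these lines: the per-run `avg_end_acc` appears to be just the mean of the final per-task accuracy vector printed after the last task. For run 11 above, averaging the ten entries of the last array reproduces the reported 0.16549999999999998. A minimal sketch with plain NumPy (the variable names are mine, not the repo's):

```python
import numpy as np

# Final per-task test accuracies after the last task of run 11,
# copied from the log line just above.
end_acc = np.array([0.083, 0.093, 0.141, 0.097, 0.075,
                    0.092, 0.067, 0.27, 0.103, 0.634])

# avg_end_acc is the mean over the 10 tasks.
avg_end_acc = end_acc.mean()
print(avg_end_acc)  # ~0.1655, matching "run 11 ... avg_end_acc 0.16549999999999998"
```

The same check works for runs 12 to 14 in this excerpt (0.1557, 0.1431, 0.1571), so the low score is not a reporting artifact; the model really ends at roughly 15 to 16 percent average end accuracy.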
Task: 0, Labels:[90, 74, 9, 39, 27, 58, 0, 37, 32, 77]
Task: 1, Labels:[94, 65, 84, 52, 71, 30, 21, 97, 8, 40]
Task: 2, Labels:[7, 73, 49, 6, 22, 87, 70, 3, 62, 4]
Task: 3, Labels:[43, 61, 91, 50, 66, 44, 5, 1, 95, 75]
Task: 4, Labels:[85, 13, 63, 56, 15, 67, 14, 36, 28, 29]
Task: 5, Labels:[89, 99, 53, 18, 64, 72, 69, 41, 82, 54]
Task: 6, Labels:[46, 23, 47, 59, 25, 83, 35, 76, 33, 34]
Task: 7, Labels:[57, 16, 51, 12, 93, 68, 24, 2, 31, 10]
Task: 8, Labels:[20, 38, 88, 11, 96, 78, 60, 45, 92, 17]
Task: 9, Labels:[48, 55, 86, 81, 79, 42, 98, 26, 19, 80]
buffer has 2000 slots
-----------run 12 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.257377, running train acc: 0.150
==>>> it: 1, mem avg. loss: 2.783452, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.502423, running train acc: 0.181
==>>> it: 101, mem avg. loss: 2.316297, running mem acc: 0.235
==>>> it: 201, avg. loss: 2.262949, running train acc: 0.232
==>>> it: 201, mem avg. loss: 2.118018, running mem acc: 0.266
==>>> it: 301, avg. loss: 2.116876, running train acc: 0.266
==>>> it: 301, mem avg. loss: 1.973954, running mem acc: 0.310
==>>> it: 401, avg. loss: 2.026005, running train acc: 0.296
==>>> it: 401, mem avg. loss: 1.867314, running mem acc: 0.343
[0.435 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.249852, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.524688, running mem acc: 0.400
==>>> it: 101, avg. loss: 2.890226, running train acc: 0.195
==>>> it: 101, mem avg. loss: 2.273713, running mem acc: 0.372
==>>> it: 201, avg. loss: 2.403394, running train acc: 0.301
==>>> it: 201, mem avg. loss: 2.070942, running mem acc: 0.400
==>>> it: 301, avg. loss: 2.189349, running train acc: 0.355
==>>> it: 301, mem avg. loss: 1.883119, running mem acc: 0.441
==>>> it: 401, avg. loss: 2.043502, running train acc: 0.387
==>>> it: 401, mem avg. loss: 1.756742, running mem acc: 0.471
[0.092 0.575 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.655917, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.681748, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.780860, running train acc: 0.248
==>>> it: 101, mem avg. loss: 1.379903, running mem acc: 0.621
==>>> it: 201, avg. loss: 2.323989, running train acc: 0.316
==>>> it: 201, mem avg. loss: 1.223047, running mem acc: 0.653
==>>> it: 301, avg. loss: 2.114620, running train acc: 0.354
==>>> it: 301, mem avg. loss: 1.105980, running mem acc: 0.684
==>>> it: 401, avg. loss: 2.012818, running train acc: 0.368
==>>> it: 401, mem avg. loss: 1.037005, running mem acc: 0.703
[0.108 0.373 0.427 0. 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.188715, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.916224, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.789864, running train acc: 0.253
==>>> it: 101, mem avg. loss: 1.270348, running mem acc: 0.674
==>>> it: 201, avg. loss: 2.354775, running train acc: 0.328
==>>> it: 201, mem avg. loss: 1.214207, running mem acc: 0.667
==>>> it: 301, avg. loss: 2.218764, running train acc: 0.349
==>>> it: 301, mem avg. loss: 1.159115, running mem acc: 0.673
==>>> it: 401, avg. loss: 2.084206, running train acc: 0.375
==>>> it: 401, mem avg. loss: 1.086726, running mem acc: 0.694
[0.051 0.201 0.186 0.462 0. 0. 0. 0. 0. 0. ]
-----------run 12 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.321850, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.785401, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.781317, running train acc: 0.235
==>>> it: 101, mem avg. loss: 1.238444, running mem acc: 0.679
==>>> it: 201, avg. loss: 2.332697, running train acc: 0.324
==>>> it: 201, mem avg. loss: 1.149954, running mem acc: 0.688
==>>> it: 301, avg. loss: 2.138244, running train acc: 0.362
==>>> it: 301, mem avg. loss: 1.035216, running mem acc: 0.712
==>>> it: 401, avg. loss: 2.037223, running train acc: 0.388
==>>> it: 401, mem avg. loss: 0.967758, running mem acc: 0.730
[0.027 0.181 0.156 0.206 0.528 0. 0. 0. 0. 0. ]
-----------run 12 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.232672, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.989838, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.614954, running train acc: 0.331
==>>> it: 101, mem avg. loss: 1.117936, running mem acc: 0.716
==>>> it: 201, avg. loss: 2.137420, running train acc: 0.417
==>>> it: 201, mem avg. loss: 1.003794, running mem acc: 0.733
==>>> it: 301, avg. loss: 1.956120, running train acc: 0.449
==>>> it: 301, mem avg. loss: 0.911681, running mem acc: 0.754
==>>> it: 401, avg. loss: 1.848569, running train acc: 0.469
==>>> it: 401, mem avg. loss: 0.843964, running mem acc: 0.773
[0.04 0.173 0.123 0.194 0.142 0.563 0. 0. 0. 0. ]
-----------run 12 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.095715, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.014509, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.518998, running train acc: 0.329
==>>> it: 101, mem avg. loss: 1.027070, running mem acc: 0.724
==>>> it: 201, avg. loss: 2.110429, running train acc: 0.404
==>>> it: 201, mem avg. loss: 0.887483, running mem acc: 0.755
==>>> it: 301, avg. loss: 1.918013, running train acc: 0.440
==>>> it: 301, mem avg. loss: 0.797369, running mem acc: 0.776
==>>> it: 401, avg. loss: 1.807105, running train acc: 0.461
==>>> it: 401, mem avg. loss: 0.741815, running mem acc: 0.789
[0.034 0.126 0.095 0.157 0.113 0.239 0.581 0. 0. 0. ]
-----------run 12 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.068079, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.698236, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.507452, running train acc: 0.343
==>>> it: 101, mem avg. loss: 0.964361, running mem acc: 0.729
==>>> it: 201, avg. loss: 2.038898, running train acc: 0.437
==>>> it: 201, mem avg. loss: 0.844381, running mem acc: 0.760
==>>> it: 301, avg. loss: 1.864232, running train acc: 0.468
==>>> it: 301, mem avg. loss: 0.757455, running mem acc: 0.780
==>>> it: 401, avg. loss: 1.742462, running train acc: 0.490
==>>> it: 401, mem avg. loss: 0.679897, running mem acc: 0.803
[0.024 0.174 0.103 0.115 0.099 0.173 0.173 0.618 0. 0. ]
-----------run 12 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.783829, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.335765, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.388472, running train acc: 0.372
==>>> it: 101, mem avg. loss: 0.975477, running mem acc: 0.746
==>>> it: 201, avg. loss: 1.940569, running train acc: 0.451
==>>> it: 201, mem avg. loss: 0.792848, running mem acc: 0.789
==>>> it: 301, avg. loss: 1.772970, running train acc: 0.485
==>>> it: 301, mem avg. loss: 0.699815, running mem acc: 0.811
==>>> it: 401, avg. loss: 1.669511, running train acc: 0.502
==>>> it: 401, mem avg. loss: 0.619996, running mem acc: 0.832
[0.035 0.119 0.1 0.13 0.102 0.183 0.12 0.213 0.597 0. ]
-----------run 12 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.419349, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.647834, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.678536, running train acc: 0.274
==>>> it: 101, mem avg. loss: 0.930434, running mem acc: 0.763
==>>> it: 201, avg. loss: 2.271431, running train acc: 0.340
==>>> it: 201, mem avg. loss: 0.784307, running mem acc: 0.794
==>>> it: 301, avg. loss: 2.074034, running train acc: 0.379
==>>> it: 301, mem avg. loss: 0.713420, running mem acc: 0.806
==>>> it: 401, avg. loss: 1.992376, running train acc: 0.399
==>>> it: 401, mem avg. loss: 0.661524, running mem acc: 0.819
[0.035 0.114 0.098 0.116 0.1 0.166 0.112 0.135 0.184 0.497]
-----------run 12-----------avg_end_acc 0.1557-----------train time 2608.7608783245087
Task: 0, Labels:[81, 62, 48, 54, 92, 69, 44, 17, 7, 40]
Task: 1, Labels:[68, 96, 75, 97, 56, 27, 59, 95, 46, 86]
Task: 2, Labels:[3, 37, 74, 2, 11, 26, 98, 45, 67, 23]
Task: 3, Labels:[42, 4, 25, 77, 1, 83, 9, 14, 10, 89]
Task: 4, Labels:[52, 29, 41, 70, 85, 65, 43, 61, 72, 38]
Task: 5, Labels:[39, 82, 57, 63, 15, 5, 79, 21, 47, 58]
Task: 6, Labels:[28, 91, 24, 13, 35, 49, 88, 50, 55, 33]
Task: 7, Labels:[12, 36, 16, 90, 34, 71, 78, 22, 87, 53]
Task: 8, Labels:[80, 94, 32, 19, 66, 0, 6, 64, 30, 31]
Task: 9, Labels:[60, 93, 18, 20, 84, 51, 99, 76, 73, 8]
buffer has 2000 slots
-----------run 13 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 3.959098, running train acc: 0.050
==>>> it: 1, mem avg. loss: 2.266457, running mem acc: 0.300
==>>> it: 101, avg. loss: 2.693590, running train acc: 0.185
==>>> it: 101, mem avg. loss: 2.463321, running mem acc: 0.245
==>>> it: 201, avg. loss: 2.328282, running train acc: 0.242
==>>> it: 201, mem avg. loss: 2.178192, running mem acc: 0.280
==>>> it: 301, avg. loss: 2.130127, running train acc: 0.288
==>>> it: 301, mem avg. loss: 1.973371, running mem acc: 0.321
==>>> it: 401, avg. loss: 2.009217, running train acc: 0.317
==>>> it: 401, mem avg. loss: 1.844590, running mem acc: 0.361
[0.446 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.383232, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.690739, running mem acc: 0.550
==>>> it: 101, avg. loss: 2.748249, running train acc: 0.225
==>>> it: 101, mem avg. loss: 2.083563, running mem acc: 0.405
==>>> it: 201, avg. loss: 2.324604, running train acc: 0.308
==>>> it: 201, mem avg. loss: 1.861153, running mem acc: 0.453
==>>> it: 301, avg. loss: 2.164358, running train acc: 0.343
==>>> it: 301, mem avg. loss: 1.660083, running mem acc: 0.502
==>>> it: 401, avg. loss: 2.024073, running train acc: 0.374
==>>> it: 401, mem avg. loss: 1.485873, running mem acc: 0.551
[0.12 0.523 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.947900, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.242770, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.880012, running train acc: 0.190
==>>> it: 101, mem avg. loss: 1.205163, running mem acc: 0.685
==>>> it: 201, avg. loss: 2.503846, running train acc: 0.245
==>>> it: 201, mem avg. loss: 1.115638, running mem acc: 0.701
==>>> it: 301, avg. loss: 2.309515, running train acc: 0.283
==>>> it: 301, mem avg. loss: 1.046339, running mem acc: 0.710
==>>> it: 401, avg. loss: 2.215081, running train acc: 0.300
==>>> it: 401, mem avg. loss: 0.993009, running mem acc: 0.721
[0.089 0.219 0.361 0. 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.650943, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.742707, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.893240, running train acc: 0.215
==>>> it: 101, mem avg. loss: 1.123824, running mem acc: 0.713
==>>> it: 201, avg. loss: 2.500915, running train acc: 0.257
==>>> it: 201, mem avg. loss: 1.129604, running mem acc: 0.704
==>>> it: 301, avg. loss: 2.352127, running train acc: 0.288
==>>> it: 301, mem avg. loss: 1.108346, running mem acc: 0.703
==>>> it: 401, avg. loss: 2.267806, running train acc: 0.305
==>>> it: 401, mem avg. loss: 1.068621, running mem acc: 0.710
[0.088 0.2 0.063 0.454 0. 0. 0. 0. 0. 0. ]
-----------run 13 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.072036, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.593135, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.691800, running train acc: 0.305
==>>> it: 101, mem avg. loss: 1.317419, running mem acc: 0.664
==>>> it: 201, avg. loss: 2.278079, running train acc: 0.364
==>>> it: 201, mem avg. loss: 1.262957, running mem acc: 0.662
==>>> it: 301, avg. loss: 2.105722, running train acc: 0.396
==>>> it: 301, mem avg. loss: 1.195606, running mem acc: 0.675
==>>> it: 401, avg. loss: 1.982915, running train acc: 0.418
==>>> it: 401, mem avg. loss: 1.096567, running mem acc: 0.700
[0.102 0.202 0.059 0.148 0.516 0. 0. 0. 0. 0. ]
-----------run 13 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.690141, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.262250, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.636736, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.208325, running mem acc: 0.681
==>>> it: 201, avg. loss: 2.190698, running train acc: 0.362
==>>> it: 201, mem avg. loss: 1.048665, running mem acc: 0.709
==>>> it: 301, avg. loss: 2.004990, running train acc: 0.411
==>>> it: 301, mem avg. loss: 0.950227, running mem acc: 0.731
==>>> it: 401, avg. loss: 1.902089, running train acc: 0.433
==>>> it: 401, mem avg. loss: 0.857150, running mem acc: 0.758
[0.08 0.157 0.028 0.071 0.217 0.6 0. 0. 0. 0. ]
-----------run 13 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.231692, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.989456, running mem acc: 0.750
==>>> it: 101, avg. loss: 2.566033, running train acc: 0.331
==>>> it: 101, mem avg. loss: 1.100000, running mem acc: 0.723
==>>> it: 201, avg. loss: 2.164060, running train acc: 0.394
==>>> it: 201, mem avg. loss: 0.930848, running mem acc: 0.759
==>>> it: 301, avg. loss: 1.983517, running train acc: 0.429
==>>> it: 301, mem avg. loss: 0.852066, running mem acc: 0.776
==>>> it: 401, avg. loss: 1.882589, running train acc: 0.445
==>>> it: 401, mem avg. loss: 0.770192, running mem acc: 0.796
[0.077 0.113 0.035 0.078 0.219 0.249 0.557 0. 0. 0. ]
-----------run 13 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.021660, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.429346, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.598072, running train acc: 0.306
==>>> it: 101, mem avg. loss: 1.017025, running mem acc: 0.720
==>>> it: 201, avg. loss: 2.164167, running train acc: 0.376
==>>> it: 201, mem avg. loss: 0.884785, running mem acc: 0.746
==>>> it: 301, avg. loss: 1.969117, running train acc: 0.413
==>>> it: 301, mem avg. loss: 0.791476, running mem acc: 0.774
==>>> it: 401, avg. loss: 1.858525, running train acc: 0.440
==>>> it: 401, mem avg. loss: 0.713248, running mem acc: 0.797
[0.056 0.122 0.021 0.058 0.192 0.176 0.168 0.545 0. 0. ]
-----------run 13 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.307083, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.590237, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.552793, running train acc: 0.325
==>>> it: 101, mem avg. loss: 1.029522, running mem acc: 0.727
==>>> it: 201, avg. loss: 2.162954, running train acc: 0.374
==>>> it: 201, mem avg. loss: 0.857149, running mem acc: 0.767
==>>> it: 301, avg. loss: 2.005573, running train acc: 0.396
==>>> it: 301, mem avg. loss: 0.758802, running mem acc: 0.793
==>>> it: 401, avg. loss: 1.899629, running train acc: 0.419
==>>> it: 401, mem avg. loss: 0.684734, running mem acc: 0.815
[0.067 0.045 0.017 0.051 0.174 0.152 0.159 0.169 0.508 0. ]
-----------run 13 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.360004, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.396177, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.556236, running train acc: 0.322
==>>> it: 101, mem avg. loss: 1.036353, running mem acc: 0.721
==>>> it: 201, avg. loss: 2.103134, running train acc: 0.400
==>>> it: 201, mem avg. loss: 0.859243, running mem acc: 0.763
==>>> it: 301, avg. loss: 1.893402, running train acc: 0.445
==>>> it: 301, mem avg. loss: 0.759684, running mem acc: 0.796
==>>> it: 401, avg. loss: 1.786111, running train acc: 0.472
==>>> it: 401, mem avg. loss: 0.685444, running mem acc: 0.816
[0.053 0.057 0.033 0.041 0.183 0.132 0.102 0.107 0.174 0.549]
-----------run 13-----------avg_end_acc 0.1431-----------train time 2603.259247303009
Task: 0, Labels:[25, 97, 73, 51, 12, 90, 16, 84, 19, 48]
Task: 1, Labels:[77, 30, 81, 60, 11, 95, 39, 2, 64, 62]
Task: 2, Labels:[69, 42, 35, 71, 80, 78, 24, 98, 44, 32]
Task: 3, Labels:[85, 13, 18, 74, 34, 14, 57, 9, 86, 87]
Task: 4, Labels:[43, 27, 1, 66, 88, 82, 68, 33, 5, 22]
Task: 5, Labels:[26, 50, 21, 41, 93, 3, 23, 91, 70, 8]
Task: 6, Labels:[0, 94, 54, 61, 59, 92, 89, 49, 79, 58]
Task: 7, Labels:[38, 96, 20, 4, 99, 53, 40, 10, 46, 83]
Task: 8, Labels:[36, 75, 67, 7, 28, 63, 56, 6, 17, 47]
Task: 9, Labels:[15, 72, 45, 55, 37, 65, 76, 52, 29, 31]
buffer has 2000 slots
-----------run 14 training batch 0-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 4.358658, running train acc: 0.100
==>>> it: 1, mem avg. loss: 2.967481, running mem acc: 0.200
==>>> it: 101, avg. loss: 2.582929, running train acc: 0.204
==>>> it: 101, mem avg. loss: 2.458889, running mem acc: 0.224
==>>> it: 201, avg. loss: 2.329128, running train acc: 0.239
==>>> it: 201, mem avg. loss: 2.239299, running mem acc: 0.244
==>>> it: 301, avg. loss: 2.181188, running train acc: 0.271
==>>> it: 301, mem avg. loss: 2.051827, running mem acc: 0.303
==>>> it: 401, avg. loss: 2.096934, running train acc: 0.291
==>>> it: 401, mem avg. loss: 1.947911, running mem acc: 0.331
[0.423 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 1-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 9.024535, running train acc: 0.000
==>>> it: 1, mem avg. loss: 2.048299, running mem acc: 0.450
==>>> it: 101, avg. loss: 2.826031, running train acc: 0.204
==>>> it: 101, mem avg. loss: 2.353848, running mem acc: 0.327
==>>> it: 201, avg. loss: 2.437938, running train acc: 0.278
==>>> it: 201, mem avg. loss: 2.189060, running mem acc: 0.341
==>>> it: 301, avg. loss: 2.248351, running train acc: 0.308
==>>> it: 301, mem avg. loss: 1.996692, running mem acc: 0.390
==>>> it: 401, avg. loss: 2.120198, running train acc: 0.336
==>>> it: 401, mem avg. loss: 1.804357, running mem acc: 0.448
[0.145 0.451 0. 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 2-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.638925, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.187967, running mem acc: 0.700
==>>> it: 101, avg. loss: 2.867677, running train acc: 0.204
==>>> it: 101, mem avg. loss: 1.403636, running mem acc: 0.604
==>>> it: 201, avg. loss: 2.459344, running train acc: 0.278
==>>> it: 201, mem avg. loss: 1.313771, running mem acc: 0.622
==>>> it: 301, avg. loss: 2.260878, running train acc: 0.315
==>>> it: 301, mem avg. loss: 1.247918, running mem acc: 0.637
==>>> it: 401, avg. loss: 2.155340, running train acc: 0.332
==>>> it: 401, mem avg. loss: 1.185807, running mem acc: 0.651
[0.061 0.214 0.453 0. 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 3-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.759886, running train acc: 0.000
==>>> it: 1, mem avg. loss: 1.049620, running mem acc: 0.650
==>>> it: 101, avg. loss: 2.811434, running train acc: 0.250
==>>> it: 101, mem avg. loss: 1.392834, running mem acc: 0.629
==>>> it: 201, avg. loss: 2.435460, running train acc: 0.305
==>>> it: 201, mem avg. loss: 1.341295, running mem acc: 0.629
==>>> it: 301, avg. loss: 2.286328, running train acc: 0.336
==>>> it: 301, mem avg. loss: 1.290752, running mem acc: 0.638
==>>> it: 401, avg. loss: 2.180328, running train acc: 0.357
==>>> it: 401, mem avg. loss: 1.215492, running mem acc: 0.659
[0.067 0.2 0.239 0.469 0. 0. 0. 0. 0. 0. ]
-----------run 14 training batch 4-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 10.949551, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.437599, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.667984, running train acc: 0.292
==>>> it: 101, mem avg. loss: 1.302220, running mem acc: 0.655
==>>> it: 201, avg. loss: 2.226800, running train acc: 0.364
==>>> it: 201, mem avg. loss: 1.151142, running mem acc: 0.684
==>>> it: 301, avg. loss: 2.017009, running train acc: 0.405
==>>> it: 301, mem avg. loss: 1.051614, running mem acc: 0.712
==>>> it: 401, avg. loss: 1.891178, running train acc: 0.429
==>>> it: 401, mem avg. loss: 0.966127, running mem acc: 0.732
[0.044 0.198 0.191 0.167 0.564 0. 0. 0. 0. 0. ]
-----------run 14 training batch 5-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.930771, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.756248, running mem acc: 0.900
==>>> it: 101, avg. loss: 2.698502, running train acc: 0.300
==>>> it: 101, mem avg. loss: 1.182181, running mem acc: 0.705
==>>> it: 201, avg. loss: 2.262118, running train acc: 0.368
==>>> it: 201, mem avg. loss: 1.038262, running mem acc: 0.727
==>>> it: 301, avg. loss: 2.108587, running train acc: 0.388
==>>> it: 301, mem avg. loss: 0.931342, running mem acc: 0.750
==>>> it: 401, avg. loss: 1.990190, running train acc: 0.414
==>>> it: 401, mem avg. loss: 0.868667, running mem acc: 0.764
[0.057 0.096 0.195 0.115 0.235 0.553 0. 0. 0. 0. ]
-----------run 14 training batch 6-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.563370, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.740415, running mem acc: 0.850
==>>> it: 101, avg. loss: 2.408910, running train acc: 0.371
==>>> it: 101, mem avg. loss: 1.145265, running mem acc: 0.688
==>>> it: 201, avg. loss: 1.910146, running train acc: 0.470
==>>> it: 201, mem avg. loss: 0.990742, running mem acc: 0.717
==>>> it: 301, avg. loss: 1.719402, running train acc: 0.509
==>>> it: 301, mem avg. loss: 0.845708, running mem acc: 0.755
==>>> it: 401, avg. loss: 1.598108, running train acc: 0.536
==>>> it: 401, mem avg. loss: 0.766205, running mem acc: 0.776
[0.04 0.148 0.198 0.087 0.179 0.161 0.644 0. 0. 0. ]
-----------run 14 training batch 7-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 12.139684, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.585182, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.508467, running train acc: 0.323
==>>> it: 101, mem avg. loss: 0.913936, running mem acc: 0.759
==>>> it: 201, avg. loss: 2.100855, running train acc: 0.391
==>>> it: 201, mem avg. loss: 0.813265, running mem acc: 0.783
==>>> it: 301, avg. loss: 1.914420, running train acc: 0.432
==>>> it: 301, mem avg. loss: 0.730208, running mem acc: 0.801
==>>> it: 401, avg. loss: 1.809624, running train acc: 0.453
==>>> it: 401, mem avg. loss: 0.655646, running mem acc: 0.821
[0.056 0.146 0.15 0.073 0.177 0.138 0.293 0.539 0. 0. ]
-----------run 14 training batch 8-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.630907, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.190936, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.388817, running train acc: 0.379
==>>> it: 101, mem avg. loss: 0.854647, running mem acc: 0.782
==>>> it: 201, avg. loss: 1.940800, running train acc: 0.453
==>>> it: 201, mem avg. loss: 0.743290, running mem acc: 0.808
==>>> it: 301, avg. loss: 1.719147, running train acc: 0.499
==>>> it: 301, mem avg. loss: 0.667775, running mem acc: 0.822
==>>> it: 401, avg. loss: 1.599522, running train acc: 0.530
==>>> it: 401, mem avg. loss: 0.605869, running mem acc: 0.836
[0.047 0.099 0.125 0.041 0.161 0.139 0.198 0.145 0.639 0. ]
-----------run 14 training batch 9-------------
size: (5000, 32, 32, 3), (5000,)
==>>> it: 1, avg. loss: 11.956848, running train acc: 0.000
==>>> it: 1, mem avg. loss: 0.186578, running mem acc: 0.950
==>>> it: 101, avg. loss: 2.700618, running train acc: 0.280
==>>> it: 101, mem avg. loss: 0.866148, running mem acc: 0.784
==>>> it: 201, avg. loss: 2.265596, running train acc: 0.351
==>>> it: 201, mem avg. loss: 0.752482, running mem acc: 0.802
==>>> it: 301, avg. loss: 2.060721, running train acc: 0.391
==>>> it: 301, mem avg. loss: 0.658484, running mem acc: 0.825
==>>> it: 401, avg. loss: 1.955261, running train acc: 0.411
==>>> it: 401, mem avg. loss: 0.606845, running mem acc: 0.841
[0.033 0.127 0.122 0.044 0.137 0.119 0.182 0.119 0.196 0.492]
-----------run 14-----------avg_end_acc 0.1571-----------train time 2599.750324487686
----------- Total 15 run: 38257.48938179016s -----------
----------- Avg_End_Acc (0.15658666666666665, 0.0046353895248637655) Avg_End_Fgt (0.37292666666666663, 0.005743844949978044) Avg_Acc (0.24635227248677247, 0.00827137818566395) Avg_Bwtp (0.0, 0.0) Avg_Fwt (0.0, 0.0)-----------
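For reference, the final summary pairs read as (mean over the 15 runs, spread). The Avg_End_Acc mean is the average of the 15 per-run `avg_end_acc` values; only runs 11 to 14 are visible in this excerpt, so the numbers below are illustrative and the exact spread formula the repo uses may differ from this sketch (I assume a standard-error-style half-width here):

```python
import numpy as np

# The four per-run avg_end_acc values visible in this excerpt (runs 11-14);
# the remaining 11 of the 15 runs are not shown above.
visible_runs = np.array([0.1655, 0.1557, 0.1431, 0.1571])

mean = visible_runs.mean()
# One common convention for the second reported number: standard error of the
# mean, optionally scaled by a t-quantile for a confidence interval.
sem = visible_runs.std(ddof=1) / np.sqrt(len(visible_runs))
print(mean, sem)  # mean of the visible runs is ~0.155; full-run mean is 0.15659
```

Either way, the reported 0.157 +/- 0.005 is far below the roughly 0.27 that the ASER paper reports for CIFAR-100 with a 2k memory, so the gap is real and not noise across runs; it may be worth double-checking `--n_smp_cls` (1.5 here) and the other ASER hyperparameters against the paper's settings.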