[2021-05-30 22:39:23,961-rk0-my_train_LT.py#293] init done
fatal: Not a git repository (or any parent up to mount point /media/hdc/data4)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
[2021-05-30 22:39:23,992-rk0-my_train_LT.py#308] Version Information:
commit :
log :
[2021-05-30 22:39:23,994-rk0-my_train_LT.py#309] config
{
"META_ARC": "siamrpn_r50_l234_dwxcorr_lt",
"CUDA": true,
"TRAIN": {
"THR_HIGH": 0.6,
"THR_LOW": 0.3,
"NEG_NUM": 16,
"POS_NUM": 16,
"TOTAL_NUM": 64,
"EXEMPLAR_SIZE": 127,
"SEARCH_SIZE": 255,
"BASE_SIZE": 8,
"OUTPUT_SIZE": 25,
"RESUME": "",
"LOG_DIR": "./models/siamrpn_lt/logs",
"SNAPSHOT_DIR": "./models/siamrpn_lt/snapshot",
"EPOCH": 20,
"START_EPOCH": 0,
"BATCH_SIZE": 8,
"NUM_WORKERS": 1,
"MOMENTUM": 0.9,
"WEIGHT_DECAY": 0.0001,
"CLS_WEIGHT": 1.0,
"LOC_WEIGHT": 1.2,
"MASK_WEIGHT": 1,
"PRINT_FREQ": 20,
"LOG_GRADS": false,
"GRAD_CLIP": 10.0,
"BASE_LR": 0.005,
"LR": {
"TYPE": "log",
"KWARGS": {
"start_lr": 0.005,
"end_lr": 0.0005
}
},
"LR_WARMUP": {
"WARMUP": true,
"TYPE": "step",
"EPOCH": 5,
"KWARGS": {
"start_lr": 0.001,
"end_lr": 0.005,
"step": 1
}
}
},
"DATASET": {
"TEMPLATE": {
"SHIFT": 4,
"SCALE": 0.05,
"BLUR": 0.0,
"FLIP": 0.0,
"COLOR": 1.0
},
"SEARCH": {
"SHIFT": 64,
"SCALE": 0.18,
"BLUR": 0.2,
"FLIP": 0.0,
"COLOR": 1.0
},
"NEG": 0.2,
"GRAY": 0.0,
"NAMES": [
"GOT"
],
"VID": {
"ROOT": "data/crop511",
"ANNO": "data/crop511/train/train.json",
"FRAME_RANGE": 100,
"NUM_USE": 100000
},
"GOT": {
"ROOT": "./data/crop511",
"ANNO": "./data/crop511/train/train.json",
"FRAME_RANGE": 100,
"NUM_USE": 64000
},
"YOUTUBEBB": {
"ROOT": "data/yt_bb/crop511",
"ANNO": "data/yt_bb/train.json",
"FRAME_RANGE": 3,
"NUM_USE": -1
},
"COCO": {
"ROOT": "data/coco/crop511",
"ANNO": "data/coco/train2017.json",
"FRAME_RANGE": 1,
"NUM_USE": -1
},
"DET": {
"ROOT": "data/det/crop511",
"ANNO": "data/det/train.json",
"FRAME_RANGE": 1,
"NUM_USE": -1
},
"VIDEOS_PER_EPOCH": 64000
},
"BACKBONE": {
"TYPE": "resnet50",
"KWARGS": {
"used_layers": [
2,
3,
4
]
},
"PRETRAINED": "",
"TRAIN_LAYERS": [
"layer2",
"layer3",
"layer4"
],
"LAYERS_LR": 0.1,
"TRAIN_EPOCH": 10
},
"ADJUST": {
"ADJUST": true,
"KWARGS": {
"in_channels": [
512,
1024,
2048
],
"out_channels": [
128,
256,
512
]
},
"TYPE": "AdjustAllLayer"
},
"RPN": {
"TYPE": "MultiRPN",
"KWARGS": {
"anchor_num": 5,
"in_channels": [
128,
256,
512
],
"weighted": true
}
},
"MASK": {
"MASK": false,
"TYPE": "MaskCorr",
"KWARGS": {}
},
"REFINE": {
"REFINE": false,
"TYPE": "Refine"
},
"ANCHOR": {
"STRIDE": 8,
"RATIOS": [
0.33,
0.5,
1,
2,
3
],
"SCALES": [
8
],
"ANCHOR_NUM": 5
},
"TRACK": {
"TYPE": "SiamRPNLTTracker",
"PENALTY_K": 0.05,
"WINDOW_INFLUENCE": 0.28,
"LR": 0.22,
"EXEMPLAR_SIZE": 127,
"INSTANCE_SIZE": 255,
"BASE_SIZE": 8,
"CONTEXT_AMOUNT": 0.5,
"LOST_INSTANCE_SIZE": 831,
"CONFIDENCE_LOW": 0.8,
"CONFIDENCE_HIGH": 0.998,
"MASK_THERSHOLD": 0.3,
"MASK_OUTPUT_SIZE": 127
}
}
[2021-05-30 22:39:27,745-rk0-my_train_LT.py# 75] build train dataset
[2021-05-30 22:39:27,747-rk0-dataset.py# 38] loading GOT
[2021-05-30 22:39:27,759-rk0-dataset.py# 63] GOT loaded
[2021-05-30 22:39:27,783-rk0-dataset.py# 92] GOT start-index 0 select [64000/4] path_format {}.{}.{}.bmp
[2021-05-30 22:39:27,832-rk0-dataset.py#202] shuffle done!
[2021-05-30 22:39:27,833-rk0-dataset.py#203] dataset length 1280000
[2021-05-30 22:39:27,841-rk0-my_train_LT.py# 78] build dataset done
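The reported dataset length of 1,280,000 follows from the config above (a sketch, assuming the pysot convention of flattening all epochs into a single dataset): VIDEOS_PER_EPOCH samples per epoch times EPOCH epochs, consumed in batches of BATCH_SIZE.

```python
# Reproduce the reported sizes from the config values printed above.
videos_per_epoch = 64000   # cfg.DATASET.VIDEOS_PER_EPOCH
epochs = 20                # cfg.TRAIN.EPOCH
batch_size = 8             # cfg.TRAIN.BATCH_SIZE

dataset_length = videos_per_epoch * epochs        # 1,280,000 (the "dataset length" line)
iters_per_epoch = videos_per_epoch // batch_size  # 8,000 (the [1][20/8000] denominator)
total_iters = dataset_length // batch_size        # 160,000 (the Progress denominator)
print(dataset_length, iters_per_epoch, total_iters)
```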
/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:122: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
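The warning is benign here but easy to fix: since PyTorch 1.1, `optimizer.step()` must be called before `lr_scheduler.step()`. A minimal sketch of the correct ordering (the names below are illustrative, not from the training script):

```python
import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(3):
    # ... forward/backward over the epoch's batches ...
    optimizer.step()   # update weights first (PyTorch >= 1.1 order)
    scheduler.step()   # then advance the learning-rate schedule
```

Calling them in the opposite order silently skips the first value of the schedule, which matters for a warmup schedule like the one printed below.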
[2021-05-30 22:39:27,844-rk0-my_train_LT.py#347] (WarmUPScheduler) lr spaces:
[0.001 0.00137973 0.00190365 0.00262653 0.0036239 0.005
0.00424171 0.00359843 0.0030527 0.00258974 0.00219699 0.0018638
0.00158114 0.00134135 0.00113792 0.00096535 0.00081895 0.00069475
0.00058938 0.0005 ]
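The printed lr spaces can be reproduced numerically (a sketch, assuming pysot-style log-spaced warmup and decay; the exact scheduler classes are not shown in the log): 5 warmup epochs rising from start_lr 0.001 toward 0.005, then 15 epochs decaying from 0.005 to end_lr 0.0005.

```python
import numpy as np

# Warmup: 5 epochs rising log-spaced from 0.001 toward 0.005
# (6 logspace points, the last of which belongs to the main schedule).
warmup = np.logspace(np.log10(0.001), np.log10(0.005), 6)[:5]
# Main: remaining 15 epochs decaying log-spaced from 0.005 to 0.0005.
decay = np.logspace(np.log10(0.005), np.log10(0.0005), 15)
lr_spaces = np.concatenate([warmup, decay])  # 20 values, one per epoch
print(np.round(lr_spaces, 8))
```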
[2021-05-30 22:39:27,852-rk0-my_train_LT.py#348] model prepare done
/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
[2021-05-30 22:39:34,181-rk0-my_train_LT.py#282] Epoch: [1][20/8000] lr: 0.001000
batch_time: 0.161238 (0.315044) data_time: 0.000092 (0.011209)
total_loss: 0.860676 (1.279064) cls_loss: 0.482578 (0.566196)
loc_loss: 0.315082 (0.594057)
[2021-05-30 22:39:34,182-rk0-log_helper.py#111] Progress:20/160000,[0%], Speed:0.315 s/iter,Epoch-period 42:0 (M:S),ETA 0:14:00(D:H:M)
[2021-05-30 22:39:37,413-rk0-my_train_LT.py#282] Epoch: [1][40/8000] lr: 0.001000
batch_time: 0.162611 (0.237624) data_time: 0.000102 (0.005661)
total_loss: 1.051070 (1.156314) cls_loss: 0.447658 (0.500955)
loc_loss: 0.502843 (0.546132)
[2021-05-30 22:39:37,413-rk0-log_helper.py#111] Progress:40/160000,[0%], Speed:0.238 s/iter,Epoch-period 31:40 (M:S),ETA 0:10:33(D:H:M)
Traceback (most recent call last):
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/bin/my_train_LT.py", line 357, in <module>
main()
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/bin/my_train_LT.py", line 351, in main
train(train_loader, dist_model, optimizer, lr_scheduler, tb_writer)
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/bin/my_train_LT.py", line 237, in train
outputs = model(data)
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/siamrpnpp/models/model_builder.py", line 112, in forward
cls_loss = select_cross_entropy_loss(cls, label_cls)
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/siamrpnpp/models/loss.py", line 25, in select_cross_entropy_loss
loss_pos = get_cls_loss(pred, label, pos)
File "/media/hdc/data4/wxl/SiamTrackers-master/7-SiamRPNpp/SiamRPNpp-DW/siamrpnpp/models/loss.py", line 17, in get_cls_loss
return F.nll_loss(pred, label)
File "/home/hdc/anaconda3/envs/SiamTrackers/lib/python3.7/site-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: invalid argument 2: non-empty vector or matrix expected at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:31
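The root cause: on replica 1 of `DataParallel`, the index set passed into `F.nll_loss` was empty. With BATCH_SIZE 8 split across two GPUs, each replica sees only 4 pairs, and under `NEG: 0.2` a replica can end up with no positive (or no negative) anchors at all, so `select_cross_entropy_loss` hands nll_loss an empty tensor. A defensive guard in `get_cls_loss` (a sketch, assuming the pysot-style signature visible in the traceback) avoids the crash:

```python
import torch
import torch.nn.functional as F

def get_cls_loss(pred, label, select):
    # `select` holds the indices of anchors to score. After DataParallel
    # splits the batch, a replica may receive none, and calling nll_loss
    # on an empty tensor raises the RuntimeError seen above.
    if select.numel() == 0:
        # Return a zero that still participates in the autograd graph.
        return pred.sum() * 0.0
    pred = torch.index_select(pred, 0, select)
    label = torch.index_select(label, 0, select)
    return F.nll_loss(pred, label)
```

Alternatives worth trying before patching the loss: raise TRAIN.BATCH_SIZE so each replica gets a larger slice, or run on a single GPU so the batch is never split.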