argusswift / YOLOv4-pytorch
This is a pytorch repository of YOLOv4, attentive YOLOv4 and mobilenet YOLOv4 with PASCAL VOC and COCO
Hi, where is the label-smoothing loss implemented? Thanks.
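For reference, label smoothing is usually applied when building the one-hot class targets rather than as a separate loss term; where exactly this repo does it I am not certain, but the operation itself is just the following (the function name is illustrative):

```python
import numpy as np

def smooth_labels(one_hot, num_classes, delta=0.01):
    # Move `delta` of the probability mass from the hard target
    # toward a uniform distribution over all classes.
    return one_hot * (1.0 - delta) + delta / num_classes
```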
Will this message appearing during training affect the training results?
val img size is 416
82%|████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 317/387 [00:32<00:07, 9.16it/s]
Corrupt JPEG data: 1 extraneous bytes before marker 0xdb
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 387/387 [00:40<00:00, 9.57it/s]
Many thanks for sharing this work! One question: the data-augmentation module in this project uses mixup. Have you tried the mosaic augmentation from the paper?
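For context, a minimal mixup sketch as typically used for detection (two images blended with a Beta-sampled weight, boxes from both kept with a per-box weight column); this is illustrative, not the repo's exact code:

```python
import numpy as np

def mixup(img1, boxes1, img2, boxes2, alpha=1.5):
    # Blend the two images; each box gets an extra column holding its
    # image's mixing weight, used later to scale that box's loss.
    lam = np.random.beta(alpha, alpha)
    img = lam * img1 + (1.0 - lam) * img2
    boxes1 = np.hstack([boxes1, np.full((len(boxes1), 1), lam)])
    boxes2 = np.hstack([boxes2, np.full((len(boxes2), 1), 1.0 - lam)])
    return img, np.vstack([boxes1, boxes2])
```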
In this project, did you replace the original PyTorch Darknet backbone with the MobileNet backbone?
If so, how did you do it?
classes = cfg.COCO_DATA["CLASSES"]
img_inds_file = os.path.join(
data_path, "ImageSets", "Main", file_type + ".txt"
)
What is the scale relationship between this set of anchors and the anchor sizes in the comments after them?
MODEL = {"ANCHORS":[[(1.25, 1.625), (2.0, 3.75), (4.125, 2.875)], # Anchors for small obj(12,16),(19,36),(40,28)
[(1.875, 3.8125), (3.875, 2.8125), (3.6875, 7.4375)], # Anchors for medium obj(36,75),(76,55),(72,146)
[(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]], # Anchors for big obj(142,110),(192,243),(459,401)
"STRIDES":[8, 16, 32],
"ANCHORS_PER_SCALE":3
}
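The config values appear to be expressed in feature-map (grid) units: multiplying each pair by the stride of its scale recovers pixel sizes. The active numbers match YOLOv3's classic pixel anchors (10,13),(16,30),..., while the commented pairs are YOLOv4's anchors; this interpretation is an inference from the numbers, not from the repo's docs. A quick check:

```python
anchors = [[(1.25, 1.625), (2.0, 3.75), (4.125, 2.875)],
           [(1.875, 3.8125), (3.875, 2.8125), (3.6875, 7.4375)],
           [(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]]
strides = [8, 16, 32]

def to_pixels(anchors, strides):
    # Convert grid-unit anchors back to pixel sizes: (w, h) * stride.
    return [[(w * s, h * s) for (w, h) in scale]
            for scale, s in zip(anchors, strides)]

pixel_anchors = to_pixels(anchors, strides)
```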
When training on my own dataset, why are the class names not displayed when testing my own images?
Hello, I tested with your pretrained VOC model. After downloading your code I changed the config to Mobilenet-YOLOv4, but loading the model fails: Initing PredictNet weights----->RuntimeError: Error(s) in loading state_dict for Build_Model:
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.0.1.weight: copying a param with shape torch.Size([75, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 32, 1, 1]).
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.0.1.bias: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([18]).
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.1.1.weight: copying a param with shape torch.Size([75, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 96, 1, 1]).
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.1.1.bias: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([18]).
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.2.1.weight: copying a param with shape torch.Size([75, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 1280, 1, 1]).
size mismatch for _Build_Model__yolov4.predict_net.predict_conv.2.1.bias: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([18]).
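The 75-vs-18 mismatch is only the prediction-head channel count, which is anchors_per_scale * (5 + num_classes): the released VOC checkpoint was trained with 20 classes (75 channels), while the current config apparently defines 1 class (18 channels), so the checkpoint and the class list must agree. A quick check of the arithmetic:

```python
def predict_channels(num_classes, anchors_per_scale=3):
    # Each anchor predicts 4 box coords + 1 objectness + per-class scores.
    return anchors_per_scale * (5 + num_classes)
```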
You forgot the class_id when building the index-file name:
img_inds_file = os.path.join(data_path, 'ImageSets', 'Main', class_id+'_'+file_type+'.txt')
--gpu_id 0,1
Writing it like this raises an error.
[2020-10-06 21:13:34,984]-[train.py line:164]:===== Validate =====
Traceback (most recent call last):
File "train.py", line 212, in <module>
fp_16=opt.fp_16).train()
File "train.py", line 166, in train
APs, inference_time = Evaluator(self.yolov4, showatt=False).APs_voc()
File "D:\hua\YOLOv4-pytorch\eval\evaluator.py", line 32, in APs_voc
with open(img_inds_file, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\hua\\YOLOv4-pytorch/data\\VOCtest-2007\\VOCdevkit\\VOC2007\\ImageSets\\Main\\test.txt'
Could the author add a comparison with the original author's COCO test-set metrics (mAP, speed, etc.)? That would show whether this reimplementation reaches the same level as the original.
Hello, what is the final training loss when you train on the VOC dataset? Thanks.
Thanks a lot for your work! However, when I try to visualize heatmaps after training, I got the error:
Traceback (most recent call last):
File "eval_coco.py", line 158, in <module>
heatmap=opt.heatmap).Inference()
File "eval_coco.py", line 117, in Inference
bboxes_prd = self.__evalter.get_bbox(img, v)
File "/root/CV/YOLOv4-pytorch-master/eval/evaluator.py", line 75, in get_bbox
bboxes_list.append(self.__predict(img, test_input_size, valid_scale))
File "/root/CV/YOLOv4-pytorch-master/eval/evaluator.py", line 97, in __predict
if self.showatt: _,p_d,beta = self.model(img)
ValueError: not enough values to unpack (expected 3, got 2)
I run the command like this:
python eval_coco.py --gpu_id 0 --visiual output --mode det --heatmap True
Here are my config details:
MODEL_TYPE = {"TYPE": 'YOLOv4'} #YOLO type:YOLOv4, Mobilenet-YOLOv4 or Mobilenetv3-YOLOv4
CONV_TYPE = {"TYPE": 'DO_CONV'} #conv type:DO_CONV or GENERAL
ATTENTION = {"TYPE": 'CBAM'} #6
# train
TRAIN = {
"DATA_TYPE": 'COCO', #DATA_TYPE: VOC or COCO
"TRAIN_IMG_SIZE": 416,
"AUGMENT": True,
"BATCH_SIZE": 4,
"MULTI_SCALE_TRAIN": False,
"IOU_THRESHOLD_LOSS": 0.5,
"YOLO_EPOCHS": 50,
"Mobilenet_YOLO_EPOCHS": 120,
"NUMBER_WORKERS": 0,
"MOMENTUM": 0.9,
"WEIGHT_DECAY": 0.0005,
"LR_INIT": 1e-4,
"LR_END": 1e-5,
"WARMUP_EPOCHS": 0 # or None
}
VAL = {
"TEST_IMG_SIZE": 416,
"BATCH_SIZE": 1,
"NUMBER_WORKERS": 1,
"CONF_THRESH": 0.5,
"NMS_THRESH": 0.45,
"MULTI_SCALE_VAL": True,
"FLIP_VAL": True,
"Visual": True
}
Hoping for your help, thanks!
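The unpack error at evaluator.py line 97 means the model returned only two values (no attention map beta), i.e. the attention branch is not active even though --heatmap was requested; with MODEL_TYPE 'YOLOv4' that seems expected, since only the attentive variants produce beta. A defensive unpack is one possible workaround (a sketch; the helper name is mine):

```python
def unpack_model_outputs(outputs):
    # The model returns (p, p_d) without attention, or (p, p_d, beta) with it.
    if len(outputs) == 3:
        p, p_d, beta = outputs
    else:
        p, p_d = outputs
        beta = None  # no attention map available for heatmap visualization
    return p, p_d, beta
```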
I am training my own dataset (BDD100K). As required, I converted it to VOC-format XML files and generated the TXT files with xml_to_txt.py, then set DATA_PATH in the cfg file (absolute path). Running train.py raises:
assert img is not None, 'File Not Found ' + img_path
AssertionError: File Not Found /home/amax/lf2/YOLOv4-pytorch/data/JPEGImages/943f0721-97e0dfd4.jpg
The images are placed in the data/JPEGImages folder.
Where is the problem?
YOLOv4-pytorch/utils/datasets.py
Line 69 in bcb1698
I want to see the detection results, but I can't find value_voc.py. Could you update your code?
Hi, during training each epoch finishes quickly, but the validation after each epoch is very slow, even though the validation set is only one seventh the size of the training set. My settings are as follows:
TRAIN = {
"DATA_TYPE": 'Customer', #DATA_TYPE: VOC ,COCO or Customer
"TRAIN_IMG_SIZE": 512,
"AUGMENT": True,
"BATCH_SIZE": 8,
"MULTI_SCALE_TRAIN": True,
"IOU_THRESHOLD_LOSS": 0.5,
"YOLO_EPOCHS": 50,
"Mobilenet_YOLO_EPOCHS": 120,
"NUMBER_WORKERS": 4,
"MOMENTUM": 0.9,
"WEIGHT_DECAY": 0.0005,
"LR_INIT": 1e-4,
"LR_END": 1e-6,
"WARMUP_EPOCHS": 2 # or None
}
VAL = {
"TEST_IMG_SIZE": 512,
"BATCH_SIZE": 8,
"NUMBER_WORKERS": 4,
"CONF_THRESH": 0.005,
"NMS_THRESH": 0.5,
"MULTI_SCALE_VAL": True,
"FLIP_VAL": True,
"Visual": True
}
Training and validation use the same batch size; why is validation so much slower than training?
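MULTI_SCALE_VAL and FLIP_VAL are both True in this config, so each validation image is inferred at several input sizes and also horizontally flipped, and the merged boxes then go through NMS; that alone multiplies the per-image cost several-fold compared with one training forward pass. A sketch of the multiplier (the exact scale list the evaluator uses is an assumption):

```python
def val_forward_passes(num_scales, flip):
    # One forward pass per test scale, doubled when flip testing is enabled.
    return num_scales * (2 if flip else 1)
```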
python eval_voc.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval
For me: I changed $DATA_TEST, but I don't see the heatmaps in the output directory; it just starts evaluating. Is this command correct?
When training on my custom dataset with 6 classes in total, the accuracy for several classes comes out as nan during evaluation. Has anyone run into this? Where could the problem be?
[2020-10-11 16:54:01,960]-[train.py line:168]:boerner --> mAP : nan
INFO:YOLOv4:boerner --> mAP : nan
[2020-10-11 16:54:01,960]-[train.py line:168]:linnaeus --> mAP : nan
INFO:YOLOv4:linnaeus --> mAP : nan
[2020-10-11 16:54:01,960]-[train.py line:168]:armandi --> mAP : 0.6948593548804081
INFO:YOLOv4:armandi --> mAP : 0.6948593548804081
[2020-10-11 16:54:01,960]-[train.py line:168]:coleoptera --> mAP : 0.7212601708662137
INFO:YOLOv4:coleoptera --> mAP : 0.7212601708662137
[2020-10-11 16:54:01,960]-[train.py line:168]:leconte --> mAP : nan
INFO:YOLOv4:leconte --> mAP : nan
[2020-10-11 16:54:01,961]-[train.py line:168]:acuminatus --> mAP : 0.2419924563113839
INFO:YOLOv4:acuminatus --> mAP : 0.2419924563113839
[2020-10-11 16:54:01,961]-[train.py line:171]:mAP : nan
INFO:YOLOv4:mAP : nan
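A per-class AP of nan usually means that class has no ground-truth boxes in the validation split (recall is 0/0), and averaging then propagates nan into the overall mAP. One hedged workaround is to average only over classes that actually occur (a sketch, not the repo's evaluator):

```python
import math

def mean_ap(ap_per_class):
    # Ignore classes whose AP is nan (no ground truth in the val split).
    valid = [ap for ap in ap_per_class.values() if not math.isnan(ap)]
    return sum(valid) / len(valid) if valid else float("nan")
```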
"ANCHORS":[[(1.25, 1.625), (2.0, 3.75), (4.125, 2.875)],
[(1.875, 3.8125), (3.875, 2.8125), (3.6875, 7.4375)],
[(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]]
How are these ratios computed?
The Darknet pre-trained weight link (yolov4) cannot be opened!
Hi @argusswift ,
Thank you for the wonderful code. I am not able to download the mobilenet weights, can you please share them via onedrive or google drive ?
The network works fine with the YOLOv4 weights, but it runs at 15 fps. Maybe MobileNet could improve the test speed.
Thank you!
Hello, is there a YOLOv4-tiny implementation?
[2020-10-10 00:46:31,414]-[train.py line:147]: === Epoch:[ 13/120],step:[260/377],img_size:[416],total_loss:nan|loss_ciou:nan|loss_conf:nan|loss_cls:nan|lr:0.0075
INFO:YOLOv4: === Epoch:[ 13/120],step:[260/377],img_size:[416],total_loss:nan|loss_ciou:nan|loss_conf:nan|loss_cls:nan|lr:0.0075
WARNING:root:NaN or Inf found in input tensor.
WARNING:root:NaN or Inf found in input tensor.
WARNING:root:NaN or Inf found in input tensor.
WARNING:root:NaN or Inf found in input tensor.
Hello, is this a problem with my dataset?
5000 images produced only 4952 XML files. Where could the problem be? Thanks.
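NaN losses at lr 0.0075 often point to the learning rate peaking too high after warmup, or to degenerate boxes in the annotations; clipping the global gradient norm before the optimizer step is a common mitigation. In training code that is torch.nn.utils.clip_grad_norm_; here is a NumPy sketch of the same operation (illustrative, not the repo's trainer):

```python
import numpy as np

def clip_global_norm(grads, max_norm=10.0):
    # Scale all gradients so their combined L2 norm is at most max_norm,
    # mirroring what torch.nn.utils.clip_grad_norm_ does.
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total > max_norm:
        scale = max_norm / (total + 1e-6)
        grads = [g * scale for g in grads]
    return grads, total
```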
Multi-GPU training support?
As described in the title.
Hi! Following your readme, I organized the data and ran voc.py, which generated train_annotation.txt, but the coordinates in it are not normalized. Does YOLOv4's txt file not need normalization, or do I need to normalize it myself?
Traceback (most recent call last):
File "D:/code/yolov4/YOLOv4-PyTorch/train.py", line 207, in <module>
Trainer(weight_path=opt.weight_path,
File "D:/code/yolov4/YOLOv4-PyTorch/train.py", line 113, in train
for i, (imgs, label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes) in enumerate(self.train_dataloader):
File "D:\anacoda\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
data = self._next_data()
File "D:\anacoda\lib\site-packages\torch\utils\data\dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "D:\anacoda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\anacoda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\code\yolov4\YOLOv4-PyTorch\utils\datasets.py", line 42, in __getitem__
img_mix, bboxes_mix = self.__parse_annotation(self.__annotations[item_mix])
File "D:\code\yolov4\YOLOv4-PyTorch\utils\datasets.py", line 88, in __parse_annotation
img, bboxes = dataAug.RandomCrop()(np.copy(img), np.copy(bboxes))
File "D:\code\yolov4\YOLOv4-PyTorch\utils\data_augment.py", line 28, in __call__
max_bbox = np.concatenate([np.min(bboxes[:, 0:2], axis=0), np.max(bboxes[:, 2:4], axis=0)], axis=-1)
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
Has anyone ever come across this problem?
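This IndexError is the classic symptom of an annotation with zero boxes: bboxes then has shape (0,) instead of (N, 5), so bboxes[:, 0:2] fails. A guard along these lines skips the box-preserving logic when there is nothing to preserve (the helper name is mine; the slicing mirrors data_augment.py line 28):

```python
import numpy as np

def max_bbox_or_full(bboxes, w, h):
    # RandomCrop crashes when bboxes has shape (0,); fall back to the
    # full-image bounds for annotations that contain no boxes.
    bboxes = np.asarray(bboxes, dtype=np.float64)
    if bboxes.ndim != 2 or bboxes.shape[0] == 0:
        return np.array([0.0, 0.0, float(w), float(h)])
    return np.concatenate([np.min(bboxes[:, 0:2], axis=0),
                           np.max(bboxes[:, 2:4], axis=0)], axis=-1)
```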
Hello, regarding the visualization: is the output the heatmap of the attention module, or is it something like Grad-CAM?
[2020-10-08 15:59:54,446]-[train.py line:147]: === Epoch:[ 0/300],step:[3340/9999],img_size:[416],total_loss:152.8582|loss_ciou:32.0746|loss_conf:59.5948|loss_cls:61.1885|lr:0.0000
[2020-10-08 16:00:02,392]-[train.py line:147]: === Epoch:[ 0/300],step:[3350/9999],img_size:[416],total_loss:152.6998|loss_ciou:32.0685|loss_conf:59.5150|loss_cls:61.1160|lr:0.0000
Traceback (most recent call last):
File "train.py", line 211, in <module>
fp_16=opt.fp_16).train()
File "train.py", line 113, in train
for i, (imgs, label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes) in enumerate(self.train_dataloader):
File "/home/amax/anaconda3/envs/yolov5/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/home/amax/anaconda3/envs/yolov5/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 403, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/amax/anaconda3/envs/yolov5/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/amax/anaconda3/envs/yolov5/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/amax/lf2/YOLOv4-pytorch/utils/datasets.py", line 50, in __getitem__
label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes = self.__creat_label(bboxes)
File "/home/amax/lf2/YOLOv4-pytorch/utils/datasets.py", line 173, in __creat_label
label[best_detect][yind, xind, best_anchor, 0:4] = bbox_xywh
IndexError: index 52 is out of bounds for axis 1 with size 52
What does this error mean and how can I fix it?
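"index 52 is out of bounds for axis 1 with size 52" means a box center sits exactly on the image border, so cx/stride equals the grid size. Clamping the computed cell indices into [0, grid_size - 1] is a common fix; a sketch (variable names follow datasets.py, but the helper itself is mine):

```python
import numpy as np

def grid_index(cx, cy, stride, grid_size):
    # Map a box center (in pixels) to grid cell indices, clamped into range
    # so a center on the right/bottom border cannot yield index == grid_size.
    xind = int(np.clip(cx / stride, 0, grid_size - 1))
    yind = int(np.clip(cy / stride, 0, grid_size - 1))
    return xind, yind
```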
Hi, I'd like to use the YOLOv4 variant with an attention module. I can load the YOLOv4 pretrained weights, but the attention module has no pretrained parameters. Is the following feasible: load the YOLOv4 pretrained weights, freeze those parameters, train only the attention-module parameters on my custom dataset, then unfreeze everything and train the whole model on my dataset? I ask because I don't have the GPU resources to pretrain the model on ImageNet.
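That staged schedule is feasible and commonly done: load the YOLOv4 weights, freeze them, train only the attention blocks, then unfreeze and fine-tune everything. The mechanism is just requires_grad on the matching parameters; a minimal sketch where the keyword matching and helper name are illustrative:

```python
import torch

def set_trainable(model, keyword, trainable):
    # Freeze/unfreeze parameters whose name contains `keyword`
    # (e.g. "attention" for CBAM/SEnet blocks); returns how many were touched.
    count = 0
    for name, p in model.named_parameters():
        if keyword in name:
            p.requires_grad = trainable
            count += 1
    return count
```

Remember to pass only `filter(lambda p: p.requires_grad, model.parameters())` to the optimizer while the backbone is frozen.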
Is multi-GPU training unsupported?
CUDA_VISIBLE_DEVICES=0,1,2,3 python -u train.py --weight_path weight/yolov4.weights --gpu_id 0,1,2,3
usage: train.py [-h] [--weight_path WEIGHT_PATH] [--resume] [--gpu_id GPU_ID]
[--log_path LOG_PATH] [--accumulate ACCUMULATE]
[--fp_16 FP_16]
train.py: error: argument --gpu_id: invalid int value: '0,1,2,3'
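train.py declares --gpu_id with type=int, so the comma-separated value '0,1,2,3' cannot parse. One way to allow it is to accept a string and split it (a sketch, not the repo's actual CLI code):

```python
import argparse

def parse_gpu_ids(argv):
    # Accept "--gpu_id 0,1,2,3" by parsing the flag as a string and splitting.
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpu_id", type=str, default="0")
    opt = parser.parse_args(argv)
    return [int(i) for i in opt.gpu_id.split(",")]
```

The resulting list can then be passed to torch.nn.DataParallel(model, device_ids=ids).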
When I want to keep training with last.pt, during the training, it came out:
[2020-08-17 16:24:16,601]-[train.py line:153]: === Epoch:[ 0/301],step:[ 0/355],img_size:[416],total_loss:nan|loss_giou:nan|loss_conf:nan|loss_cls:nan|lr:0.0000
WARNING:root:NaN or Inf found in input tensor.
WARNING:root:NaN or Inf found in input tensor.
WARNING:root:NaN or Inf found in input tensor.
Still, I can't run the program with gpu, I tried to change the gpu_id but it doesn't work.
Please help me. Thank you!
[2020-08-28 08:35:20,162]-[train.py line:150]: === Epoch:[ 0/120],step:[ 0/42],img_size:[416],total_loss:1939.8105|loss_giou:13.1441|loss_conf:1918.4628|loss_cls:8.2036|lr:0.0000
[2020-08-28 08:35:21,411]-[train.py line:150]: === Epoch:[ 0/120],step:[ 10/42],img_size:[416],total_loss:1473.7147|loss_giou:15.7401|loss_conf:1448.6389|loss_cls:9.3356|lr:0.0000
[2020-08-28 08:35:22,632]-[train.py line:150]: === Epoch:[ 0/120],step:[ 20/42],img_size:[416],total_loss:837.1195|loss_giou:12.7785|loss_conf:816.7214|loss_cls:7.6196|lr:0.0000
[2020-08-28 08:35:23,850]-[train.py line:150]: === Epoch:[ 0/120],step:[ 30/42],img_size:[416],total_loss:582.2850|loss_giou:12.2135|loss_conf:562.8416|loss_cls:7.2298|lr:0.0000
[2020-08-28 08:35:25,084]-[train.py line:150]: === Epoch:[ 0/120],step:[ 40/42],img_size:[416],total_loss:451.4386|loss_giou:11.3422|loss_conf:433.4202|loss_cls:6.6761|lr:0.0000
[2020-08-28 08:35:25,324]-[train.py line:167]:===== Validate =====
val img size is 416
96%|#############################################################################################################################1 | 43/45 [00:25<00:01, 1.49it/s]
Traceback (most recent call last):
File "train.py", line 216, in <module>
fp_16=opt.fp_16).train()
File "train.py", line 169, in train
APs, inference_time = Evaluator(self.yolov4, showatt=False).APs_voc()
File "/home/mist/YOLOv4-pytorch-master/eval/evaluator.py", line 50, in APs_voc
bboxes_prd = self.get_bbox(img, multi_test, flip_test)
File "/home/mist/YOLOv4-pytorch-master/eval/evaluator.py", line 84, in get_bbox
bboxes = self.__predict(img, self.val_shape, (0, np.inf))
File "/home/mist/YOLOv4-pytorch-master/eval/evaluator.py", line 92, in __predict
org_h, org_w, _ = org_img.shape
ValueError: not enough values to unpack (expected 3, got 0)
My dataset is very small, but why does it return zero values here? I hope you can explain.
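"expected 3, got 0" at org_img.shape means org_img ended up as a 0-d array: cv2.imread returns None for an unreadable path, and np.copy(None) silently produces a scalar object array. Failing early with the offending path makes the real cause visible (the helper name is mine):

```python
import numpy as np

def check_image(img, path):
    # cv2.imread returns None for unreadable files; np.copy(None) then yields
    # a 0-d array whose .shape unpacks to zero values. Fail with the path.
    img = np.asarray(img)
    if img.ndim != 3:
        raise FileNotFoundError("cannot read image: " + path)
    return img
```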
Hi, I used my dataset to train the model, it came out an error: ValueError: Target size (torch.Size([1, 52, 52, 3, 1])) must be the same as input size (torch.Size([1, 52, 52, 3, 20])).
How should I change the input size or target size to make input and the target have the same size? Thank you!
As described in the title.
[2020-10-08 18:30:55,013]-[train.py line:103]:Train datasets number is : 6056
[2020-10-08 18:30:55,014]-[train.py line:106]: ======= start training ======
[2020-10-08 18:30:55,015]-[train.py line:112]:===Epoch:[0/120]===
[2020-10-08 18:30:57,136]-[train.py line:147]: === Epoch:[ 0/120],step:[ 0/6055],img_size:[416],total_loss:1865.3413|loss_ciou:11.1216|loss_conf:1839.0485|loss_cls:15.1712|lr:0.0000
[2020-10-08 18:31:03,770]-[train.py line:147]: === Epoch:[ 0/120],step:[ 10/6055],img_size:[416],total_loss:1901.0422|loss_ciou:24.7834|loss_conf:1856.7925|loss_cls:19.4663|lr:0.0000
[2020-10-08 18:31:10,181]-[train.py line:147]: === Epoch:[ 0/120],step:[ 20/6055],img_size:[416],total_loss:1882.1553|loss_ciou:27.2914|loss_conf:1833.5068|loss_cls:21.3571|lr:0.0000
[2020-10-08 18:31:16,431]-[train.py line:147]: === Epoch:[ 0/120],step:[ 30/6055],img_size:[416],total_loss:1846.4806|loss_ciou:31.4741|loss_conf:1790.6786|loss_cls:24.3282|lr:0.0000
[2020-10-08 18:31:22,503]-[train.py line:147]: === Epoch:[ 0/120],step:[ 40/6055],img_size:[416],total_loss:1772.0176|loss_ciou:26.2438|loss_conf:1725.1616|loss_cls:20.6121|lr:0.0000
[2020-10-08 18:31:28,393]-[train.py line:147]: === Epoch:[ 0/120],step:[ 50/6055],img_size:[416],total_loss:1694.6600|loss_ciou:29.3777|loss_conf:1642.6033|loss_cls:22.6792|lr:0.0000
[2020-10-08 18:31:34,330]-[train.py line:147]: === Epoch:[ 0/120],step:[ 60/6055],img_size:[416],total_loss:1600.6245|loss_ciou:28.4176|loss_conf:1550.2689|loss_cls:21.9383|lr:0.0000
[2020-10-08 18:31:40,415]-[train.py line:147]: === Epoch:[ 0/120],step:[ 70/6055],img_size:[416],total_loss:1503.2173|loss_ciou:27.0327|loss_conf:1455.2041|loss_cls:20.9806|lr:0.0000
[2020-10-08 18:31:46,070]-[train.py line:147]: === Epoch:[ 0/120],step:[ 80/6055],img_size:[416],total_loss:1407.1254|loss_ciou:25.8310|loss_conf:1361.0833|loss_cls:20.2110|lr:0.0000
[2020-10-08 18:31:52,333]-[train.py line:147]: === Epoch:[ 0/120],step:[ 90/6055],img_size:[416],total_loss:1321.8469|loss_ciou:27.7292|loss_conf:1272.6144|loss_cls:21.5031|lr:0.0000
[2020-10-08 18:31:58,035]-[train.py line:147]: === Epoch:[ 0/120],step:[100/6055],img_size:[416],total_loss:1236.7324|loss_ciou:26.4395|loss_conf:1189.6177|loss_cls:20.6749|lr:0.0000
Traceback (most recent call last):
File "/home/lky/code/yolov4/YOLOv4-pytorch-master/train.py", line 211, in <module>
fp_16=opt.fp_16).train()
File "/home/lky/code/yolov4/YOLOv4-pytorch-master/train.py", line 113, in train
for i, (imgs, label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes) in enumerate(self.train_dataloader):
File "/home/lky/anaconda3/envs/YOLOv4-pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/lky/anaconda3/envs/YOLOv4-pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/lky/anaconda3/envs/YOLOv4-pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/lky/anaconda3/envs/YOLOv4-pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/lky/code/yolov4/YOLOv4-pytorch-master/utils/datasets.py", line 44, in __getitem__
img_mix, bboxes_mix = self.__parse_annotation(self.__annotations[item_mix])
File "/home/lky/code/yolov4/YOLOv4-pytorch-master/utils/datasets.py", line 89, in __parse_annotation
img, bboxes = dataAug.RandomCrop()(np.copy(img), np.copy(bboxes))
File "/home/lky/code/yolov4/YOLOv4-pytorch-master/utils/data_augment.py", line 28, in __call__
max_bbox = np.concatenate([np.min(bboxes[:, 0:2], axis=0), np.max(bboxes[:, 2:4], axis=0)], axis=-1)
IndexError: too many indices for array
Process finished with exit code 1
The first few steps train fine, then it suddenly crashes. Is this caused by training images with no targets? How can I solve it?
Thank you for sharing your code. Can you share the method for calculating the anchors of self-dataset?
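The conventional recipe (from the YOLOv2 paper onward) is k-means over the (w, h) of all ground-truth boxes, using 1 - IoU as the distance; dividing the resulting pixel anchors by each scale's stride gives values in the config's units. A compact sketch:

```python
import numpy as np

def kmeans_anchors(wh, k, iters=50, seed=0):
    # k-means over ground-truth box (w, h), assigning each box to the
    # anchor of highest IoU (i.e. smallest 1 - IoU distance).
    wh = np.asarray(wh, dtype=np.float64)
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, None].prod(-1) + anchors[None, :].prod(-1) - inter
        assign = np.argmax(inter / union, axis=1)  # highest IoU = closest
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]
```

Feeding the 9 resulting pixel anchors back into the config would mean dividing each (w, h) by its scale's stride (8, 16, or 32).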