
efficientdet.pytorch's Introduction

EfficientDet: Scalable and Efficient Object Detection, in PyTorch

A PyTorch implementation of EfficientDet from the 2019 paper by Mingxing Tan, Ruoming Pang, and Quoc V. Le (Google Research, Brain Team). The official and original implementation: coming soon.

Fun with Demo:

python demo.py --weight ./checkpoint_VOC_efficientdet-d1_97.pth --threshold 0.6 --iou_threshold 0.5 --cam --score

Recent Update

  • [06/01/2020] Support both DistributedDataParallel and DataParallel, update augmentation, add eval_voc
  • [17/12/2019] Add fast normalized fusion, augmentation with aspect ratio, change RetinaHead, fix support for EfficientDet-D0 through D7
  • [07/12/2019] Support EfficientDet-D0 through EfficientDet-D7; support configurable gradient accumulation steps and AdamW.

Benchmarking

We benchmark our code thoroughly on two datasets, PASCAL VOC and COCO, using the EfficientNet family of backbones across the EfficientDet-D0 to D7 architectures. Below are the results:

1). PASCAL VOC 2007 (Train/Test: 07trainval/07test, scale=600, ROI Align)

| model | mAP |
| --- | --- |
| [EfficientDet-D0 (with weights)](https://drive.google.com/file/d/1r7MAyBfG5OK_9F_cU8yActUWxTHOuOpL/view?usp=sharing) | 62.16 |

Installation

  • Install PyTorch by selecting your environment on the website and running the appropriate command.
  • Clone this repository and install package prerequisites below.
  • Then download the dataset by following the instructions below.
  • Note: For training, we currently support VOC and COCO, and aim to add ImageNet support soon.

Prerequisites

  • Python 3.6+
  • PyTorch 1.3+
  • Torchvision 0.4.0+ (a recent version is required because torchvision now provides a built-in NMS operator; see the short example after this list)
  • requirements.txt
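
Since post-processing relies on torchvision's built-in NMS operator, here is a minimal sketch of how torchvision.ops.nms is typically called; the boxes and scores below are made-up placeholders, not output from this repo.

import torch
from torchvision.ops import nms

# Hypothetical detections: boxes in (x1, y1, x2, y2) format plus a confidence score per box.
boxes = torch.tensor([[10., 10., 100., 100.],
                      [12., 12., 102., 102.],
                      [200., 200., 300., 300.]])
scores = torch.tensor([0.9, 0.8, 0.7])

# Indices of the boxes kept after suppressing overlaps above the IoU threshold.
keep = nms(boxes, scores, iou_threshold=0.5)
print(keep)  # tensor([0, 2]) for this toy input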

Datasets

To make things easy, we provide bash scripts to handle the dataset downloads and setup for you. We also provide simple dataset loaders that inherit torch.utils.data.Dataset, making them fully compatible with the torchvision.datasets API.
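
Because the loaders subclass torch.utils.data.Dataset, they can be dropped straight into a standard DataLoader. The sketch below uses a toy stand-in dataset rather than this repo's actual loader classes, just to illustrate the pattern.

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDetectionDataset(Dataset):
    """Illustrative stand-in for the repo's VOC/COCO loaders."""
    def __init__(self, num_samples=8):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.rand(3, 512, 512)                    # dummy image tensor
        target = torch.tensor([[10., 10., 50., 50., 0.]])  # one [x1, y1, x2, y2, label] row
        return image, target

loader = DataLoader(ToyDetectionDataset(), batch_size=4, shuffle=True)
images, targets = next(iter(loader))
print(images.shape, targets.shape)  # torch.Size([4, 3, 512, 512]) torch.Size([4, 1, 5])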

VOC Dataset

PASCAL VOC: Visual Object Classes

Download VOC2007 + VOC2012 trainval & test
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh datasets/scripts/VOC2007.sh
sh datasets/scripts/VOC2012.sh

COCO

Microsoft COCO: Common Objects in Context

Download COCO 2017
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh datasets/scripts/COCO2017.sh

Training EfficientDet

  • To train EfficientDet using the training script, simply pass the parameters listed in train.py as flags or change them manually.
python train.py --network efficientdet-d0  # Example
  • With the VOC dataset:
# DataParallel
python train.py --dataset VOC --dataset_root /root/data/VOCdevkit/ --network efficientdet-d0 --batch_size 32
# DistributedDataParallel with backend nccl
python train.py --dataset VOC --dataset_root /root/data/VOCdevkit/ --network efficientdet-d0 --batch_size 32 --multiprocessing-distributed
  • With the COCO dataset:
# DataParallel
python train.py --dataset COCO --dataset_root ~/data/coco/ --network efficientdet-d0 --batch_size 32
# DistributedDataParallel with backend nccl
python train.py --dataset COCO --dataset_root ~/data/coco/ --network efficientdet-d0 --batch_size 32 --multiprocessing-distributed
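
The changelog above mentions support for gradient accumulation steps and AdamW. As a rough illustration of what accumulation does, the loss is scaled and gradients are accumulated over several mini-batches before each optimizer step; this is a generic sketch with a placeholder model and data, not the repo's train.py, and the actual flag name may differ.

import torch

model = torch.nn.Linear(10, 2)                              # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()
accumulation_steps = 4                                      # assumed value of the accumulation setting

optimizer.zero_grad()
for step in range(16):                                      # stand-in for iterating a DataLoader
    inputs = torch.randn(8, 10)
    labels = torch.randint(0, 2, (8,))
    loss = criterion(model(inputs), labels) / accumulation_steps
    loss.backward()                                         # gradients accumulate across iterations
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                                    # effective batch size = 8 * accumulation_steps
        optimizer.zero_grad()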

Evaluation

To evaluate a trained network:

  • With VOC Dataset:
    python eval_voc.py --dataset_root ~/data/VOCdevkit --weight ./checkpoint_VOC_efficientdet-d0_261.pth
  • With the COCO dataset: coming soon.

Demo

python demo.py --threshold 0.5 --iou_threshold 0.5 --score --weight checkpoint_VOC_efficientdet-d1_34.pth --file_name demo.png

Output:

Webcam Demo

You can use a webcam in a real-time demo by running:

python demo.py --threshold 0.5 --iou_threshold 0.5 --cam --score --weight checkpoint_VOC_efficientdet-d1_34.pth

Performance

TODO

We have accumulated the following to-do list, which we hope to complete in the near future:

  • Still to come:
    • EfficientDet-[D0-7]
    • GPU-Parallel
    • NMS
    • Soft-NMS
    • Pretrained model
    • Demo
    • Model zoo
    • TorchScript
    • Mobile
    • C++ Onnx

Authors

Note: Unfortunately, this is just a hobby of ours and not a full-time job, so we'll do our best to keep things up to date, but no guarantees. That being said, thanks to everyone for your continued help and feedback as it is really appreciated. We will try to address everything as soon as possible.

References

Citation

@article{efficientdetpytoan,
    Author = {Toan Dao Minh},
    Title = {A Pytorch Implementation of EfficientDet Object Detection},
    Journal = {github.com/toandaominh1997/EfficientDet.Pytorch},
    Year = {2019}
}

efficientdet.pytorch's People

Contributors

cclauss, jackerz312, mohamedalirashad, tbfly, toandaominh1997


efficientdet.pytorch's Issues

[CUDA/CPU ERROR] when I trained on my data, I found this error:

RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/weiqiang/cancer/code/models/efficientdet.py", line 130, in forward
P1, P2, P3, P4, P5, P6, P7 = self.efficientnet(inputs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/efficientnet_pytorch/model.py", line 204, in forward
P1, P2, P3, P4, P5, P6, P7 = self.extract_features(inputs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/efficientnet_pytorch/model.py", line 190, in extract_features
x = MBConvBlock(block_args, self._global_params)(x, drop_connect_rate = drop_connect_rate)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/efficientnet_pytorch/model.py", line 78, in forward
x = self._swish(self._bn1(self._depthwise_conv(x)))
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/weiqiang/anaconda3/lib/python3.7/site-packages/efficientnet_pytorch/utils.py", line 144, in forward
x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'weight'

How to convert to onnx model?

@toandaominh1997 Hi, thanks for your great work. I have tried to convert efficientdet-b0.pth to an ONNX model, and my code is as follows:

def to_onnx(checkpoint_name,save_onnx_name='efficientdet.onnx', size_image=(512, 512)):
    from torch.autograd import Variable
    import torch.onnx as onnx

    checkpoint = torch.load(checkpoint_name)#,map_location=lambda storage, loc: storage)
    num_class = checkpoint['num_class']
    network = checkpoint['network']
    model=EfficientDet(num_classes=num_class,
                     network=network,
                     W_bifpn=EFFICIENTDET[network]['W_bifpn'],
                     D_bifpn=EFFICIENTDET[network]['D_bifpn'],
                     D_class=EFFICIENTDET[network]['D_class'],
                     is_training=False
                     )

    state_dict = checkpoint['state_dict']
    model.load_state_dict(state_dict)
    model.train(False)

    dummy_input = Variable(torch.randn(1, 3, 512, 512))
    output = onnx.export(model, dummy_input, save_onnx_name, verbose=True, export_params=True)
to_onnx('checkpoint_VOC_efficientdet-d0_206.pth')

I can get the efficientdet.onnx file when I run the above code. Moreover, I used onnx.checker.check_model to check the ONNX model and there was no error.

But when I use the same image to run demo.py and onnx_test.py, demo.py produces results (it can predict objects) while onnx_test.py does not. My onnx_test.py is as follows:

import onnxruntime as rt
import cv2
import numpy as np

def run(image_name, onnx_path):
    img=cv2.imread(image_name)
    img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (512,512))
    img = img.reshape(1,3,512,512)
    img= img.astype(np.float32)

    sess = rt.InferenceSession(onnx_path)
    input_name = sess.get_inputs()[0].name
    output_name1 = sess.get_outputs()[0].name
    output_name2 = sess.get_outputs()[1].name
    output_name3 = sess.get_outputs()[2].name
    print(output_name1, output_name2, output_name3)
    output_name=[output_name1,output_name2, output_name3]
    pred_onnx = sess.run(output_name, {input_name: img})
    print(pred_onnx)

When I run onnx_test.py, the output is:

2019-12-31 13:32:44.800826576 [W:onnxruntime:, graph.cc:2412 CleanUnusedInitializers] Removing initializer 'efficientnet._blocks.8._bn1.weight'. It is not used by any node and should be removed from the model.
2019-12-31 13:32:44.800834976 [W:onnxruntime:, graph.cc:2412 CleanUnusedInitializers] Removing initializer 'efficientnet._blocks.8._bn2.bias'. It is not used by any node and should be removed from the model.
2019-12-31 13:32:44.800843492 [W:onnxruntime:, graph.cc:2412 CleanUnusedInitializers] Removing initializer 'efficientnet._blocks.8._bn2.num_batches_tracked'. It is not used by any node and should be removed from the model.
2019-12-31 13:32:44.800852497 [W:onnxruntime:, graph.cc:2412 CleanUnusedInitializers] Removing initializer 'efficientnet._blocks.5._depthwise_conv.weight'. It is not used by any node and should be removed from the model.
2019-12-31 13:32:44.800861084 [W:onnxruntime:, graph.cc:2412 CleanUnusedInitializers] Removing initializer 'efficientnet._blocks.8._bn2.running_mean'. It is not used by any node and should be removed from the model.
459 460 461
[array([], dtype=float32), array([], dtype=float32), array([], shape=(0, 4), dtype=float32)]
  • Q1: Is there a problem with my conversion code?
  • Q2: "'efficientnet._blocks.8._bn1.weight'. It is not used by any node and should be removed from the model." What does this warning mean?
  • Q3: The size of checkpoint_xxx-b0_xxx.pth is about 38 MB. Can I reduce the model file size?

Please take the time to answer the above three questions, thank you very much!
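
One thing worth checking for Q1 above: the onnx_test.py snippet reshapes an HWC image directly to (1, 3, 512, 512), which scrambles the pixel layout instead of transposing the channel axis, and it skips whatever normalization the training pipeline applies. A sketch of more conventional preprocessing is below; the mean/std values are the usual ImageNet statistics and are an assumption, so they may not match this repo's exact transforms.

import cv2
import numpy as np

def preprocess(image_path, size=512):
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size))
    img = img.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet mean
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # assumed ImageNet std
    img = (img - mean) / std
    # HWC -> CHW, then add the batch dimension: (1, 3, size, size)
    return np.transpose(img, (2, 0, 1))[np.newaxis, ...]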

a RuntimeError when run demo.py

Hello, thanks for your code. My torchvision version is 0.4.2 and my CUDA version is 10.1. When I run demo.py I get an error:
RuntimeError: CUDA error: no kernel image is available for execution on the device (nms_cuda at /tmp/pip-req-build-9d9zypi6/torchvision/csrc/cuda/nms_cuda.cu:127) frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0x6d (0x7fabd233fe7d in /home/qxmz/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/torch/lib/libc10.so) .
It happens at the NMS call in the code:
anchors_nms_idx = nms(transformed_anchors[0, :, :], scores[0, :, 0], iou_threshold = self.iou_threshold)
Can you tell me how to solve it? Forgive my poor English. Thank you very much.
In addition, there were no problems with training.

detect result

Hello, I tested a model (weights/checkpoint_VOC_efficientdet-d1_37.pth) and the result looks like this; it seems a few objects were not detected.

train error

OS:ubuntu18.04
RTX 1080ti * 1

I train efficientdet-b0 using:
python3 train.py --dataset VOC --dataset_root /home/**/data/VOCdevkit/ --network effcientdet-d0 --batch_size 8
The error shows:
Traceback (most recent call last):
File "train.py", line 93, in
transform=get_augumentation(phase='train', width=EFFICIENTDET[args.network]['input_size'], height=EFFICIENTDET[args.network]['input_size']))
KeyError: 'effcientdet-d0'

Could you tell me how to fix this error?

RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'weight'

I met this problem, but I don't know how to solve it. Can you help me? Thank you very much!

File "/home/semtp/notebooks/EfficientDet_FCOS_792a/EfficientDet_FCOS/fcos_core/modeling/backbone/utils.py", line 139, in forward
x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'weight'

Readme training part error

Hi,
You need to update https://github.com/toandaominh1997/EfficientDet.Pytorch#training-efficientdet

Change :

  • To train EfficientDet using the train script simply specify the parameters listed in train.py as a flag or manually change them.

python train.py --model_name effcientdet-d0 # Example

  • With VOC Dataset:

python train.py --dataset_root /root/data/VOCdevkit/ --n effcientdet-d0 # Example

to :

  • To train EfficientDet using the train script simply specify the parameters listed in train.py as a flag or manually change them.

python train.py --model_name efficientdet-d0 # Example

  • With VOC Dataset:

python train.py --dataset_root /root/data/VOCdevkit/ --network efficientdet-d0 # Example

Thank you for this good repository

Some problems in calculating mAP in eval.py

Hi, thank you for sharing!
When I use this project to train on the VOC dataset, I can use demo.py to test pictures, but when using eval.py to calculate mAP, I get a very low mAP (No boxes to NMS, just 0.03). Is there something wrong with eval.py?

run demo.py, image nothing happened

Hi, thanks very much for your code!
I trained a model 'checkpoint_VOC_efficientdet-d2_17.pth' on my own dataset, but when I run demo.py with this model, no boxes appear in the image. Nothing happens except that the image color changes.

Why is data augmentation needed when running demo.py? The output image changes color when I input an image.

Trained models

  1. Are there any trained models available for testing?
  2. Can you reproduce the results reported in the EfficientDet paper?
    Thanks very much.

What to do when an image has no objects labeled?

hi!
thanks for your great work.
target = np.array(target)
bbox = target[:, :4]
labels = target[:, 4]

When nothing is labeled, the target shape is (0,).
What should be done in this case?
thank you again
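
One common way to handle this, assuming each target row is [x1, y1, x2, y2, label], is to fall back to an empty (0, 5) array when there are no annotations so that the slicing above still works; this is a general sketch rather than how this repo necessarily handles it.

import numpy as np

target = np.array([])                             # what the loader sees when nothing is labeled
if target.size == 0:
    target = np.zeros((0, 5), dtype=np.float32)   # keep 2 dimensions so the slices below work
bbox = target[:, :4]                              # shape (0, 4): no boxes
labels = target[:, 4]                             # shape (0,): no labels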

EfficientNet finetune

I have a problem: in your EfficientNet, you apply pooling 6 times, but in the original EfficientNet it is only 4 times. If you use the EfficientNet pretrained model, the number of pooling steps differs, so I don't think it makes sense. Looking forward to your answer.
blocks_args=BlockDecoder.decode([ 'r1_k3_s11_e1_i32_o16_se0.25', 'r2_k3_s22_e6_i16_o24_se0.25', 'r2_k5_s22_e6_i24_o40_se0.25', 'r3_k3_s22_e6_i40_o80_se0.25', 'r3_k5_s22_e6_i80_o112_se0.25', 'r4_k5_s22_e6_i112_o192_se0.25', 'r1_k3_s22_e6_i192_o320_se0.25', ]),

when I load pretrained model EfficientDet-D1(with Weight)

RuntimeError: Error(s) in loading state_dict for EfficientDet:
size mismatch for bbox_head.retina_cls.weight: copying a param with shape torch.Size([180, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([9, 256, 3, 3]).
size mismatch for bbox_head.retina_cls.bias: copying a param with shape torch.Size([180]) from checkpoint, the shape in current model is torch.Size([9]).

Performance of model efficient D0

Following the paper, I saw that EfficientDet-D0 has roughly 28x fewer FLOPs than YOLOv3, so I would expect it to be roughly 28x faster. Is this right?
I measured the performance on a GTX-2080:
YOLOv3 (AlexeyAB/darknet): ~0.027-0.029 s/frame -> ~30 frames/s
Your repo, EfficientDet-D0: ~0.027 s/frame -> ~30 frames/s
The speeds are the same, so is there a problem here?
Can you explain it to me?
By the way, thanks for your hard work 👍

an error in eval.py

The get_augmentation function has no phase named 'valid', so eval.py throws a RuntimeError.

BiFPN

It seems there are no learnable weights for the different feature maps in BiFPN as in the paper; can you add them?

class BiFPN(nn.Module):
    def __init__(self, num_channels):
        super(BiFPN, self).__init__()
        self.num_channels = num_channels

    def forward(self, inputs):
        num_channels = self.num_channels
        P3_in, P4_in, P5_in, P6_in, P7_in = inputs

        P7_up = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P7_in)
        scale = int(P6_in.size(3) / P7_up.size(3))
        P6_up = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P6_in + self.Resize(scale_factor=scale)(P7_up))
        scale = int(P5_in.size(3) / P6_up.size(3))
        P5_up = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P5_in + self.Resize(scale_factor=scale)(P6_up))
        scale = int(P4_in.size(3) / P5_up.size(3))
        P4_up = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P4_in + self.Resize(scale_factor=scale)(P5_up))
        scale = int(P3_in.size(3) / P4_up.size(3))
        P3_out = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P3_in + self.Resize(scale_factor=scale)(P4_up))

        kernel_size = int(P3_out.size(3) / P4_up.size(3))
        P4_out = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P4_in + P4_up + nn.MaxPool2d(kernel_size=kernel_size)(P3_out))
        kernel_size = int(P4_out.size(3) / P5_up.size(3))
        P5_out = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P5_in + P5_up + nn.MaxPool2d(kernel_size=kernel_size)(P4_out))
        kernel_size = int(P5_out.size(3) / P6_up.size(3))
        P6_out = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P6_in + P6_up + nn.MaxPool2d(kernel_size=kernel_size)(P5_out))
        kernel_size = int(P6_out.size(3) / P7_up.size(3))
        P7_out = self.Conv(in_channels=num_channels, out_channels=num_channels, kernel_size=1, stride=1, padding=0, groups=num_channels)(P7_in + P7_up + nn.MaxPool2d(kernel_size=kernel_size)(P6_out))
        return P3_out, P4_out, P5_out, P6_out, P7_out

    @staticmethod
    def Conv(in_channels, out_channels, kernel_size, stride, padding, groups=1):
        features = nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups),
            nn.BatchNorm2d(num_features=out_channels),
            nn.ReLU()
        )
        return features

    @staticmethod
    def Resize(scale_factor=2, mode='nearest'):
        upsample = nn.Upsample(scale_factor=scale_factor, mode=mode)
        return upsample
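
For reference, the paper's fast normalized fusion gives each incoming feature map a learnable non-negative weight: out = sum_i(w_i * x_i) / (sum_j w_j + eps), with the weights kept non-negative via ReLU. A minimal sketch of such a fusion module is below; the class name and usage are illustrative, not this repo's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FastNormalizedFusion(nn.Module):
    """Weighted fusion of several feature maps of identical shape."""
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, inputs):
        w = F.relu(self.weights)            # keep the per-input weights non-negative
        w = w / (w.sum() + self.eps)        # fast normalization (no softmax)
        return sum(w[i] * x for i, x in enumerate(inputs))

# Usage: fuse two feature maps of the same spatial size and channel count.
fuse = FastNormalizedFusion(num_inputs=2)
fused = fuse([torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)])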

Bug: Multi GPU

Currently there is a bug with data parallel in this repo. Please do not use multiple GPUs for training.

num_epoch

Hi, thanks for sharing!
In train.py, the parameter "num_epoch" defaults to 500. I thought the total number of iterations would be 500*1000, but it seems not. Could you explain this parameter?

voc0712.pth file

Thanks for publishing the code. I ran into a problem: when I run demo.py, I can't find the "./weights/voc0712.pth" weight file. Could you please push it? Otherwise I can't visualize the model's performance. Thanks!

albumentations version???

Can you tell me your albumentations version?
I got AttributeError: module 'albumentations.augmentations.transforms' has no attribute 'RandomResizedCrop'
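
If you hit this, it usually means an older albumentations release that does not yet include RandomResizedCrop; the installed version can be checked against the one pinned in requirements.txt and upgraded if needed (a quick check, assuming a pip-installed albumentations):

import albumentations
print(albumentations.__version__)   # compare with the version listed in requirements.txt
# If it is too old: pip install -U albumentations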

No bboxes to NMS after training my dataset

Hi, I tried to use your code to train on my own dataset. The procedure runs normally and the loss drops to 2.1 after 100 epochs of training. But when I load the trained weights for inference, it outputs

No bboxes to NMS

I also printed the maximum prediction score, and it is very close to 0.
Maybe the model has collapsed to a minimum where it predicts nothing regardless of the input. I tried different optimizers and learning rates, but the results are the same.
Any advice, please?

The mAP of EfficientDet-D0

Hello, @toandaominh1997 , thanks for your great work.
I saw your update with the EfficientDet-D0 weights and results. In the benchmarking table, the title is

PASCAL VOC 2007 (Train/Test: 07trainval/07test, scale=600, ROI Align)
The mAP is 31.6. Do you mean EfficientDet-D0's mAP on the VOC07 test set is 31.6? I see other implementations report mAP ≈ 80 on VOC07 test. Is something wrong, or is the mAP of 31.6 on the COCO dataset?

model_name or network

python train.py --dataset_root /root/data/VOCdevkit/ --model_name effcientdet-d0 # Example

It should be --network, not --model_name.

infer time

Can anybody give the inference time for b0 through b7?
Thanks.

Training on custom dataset (9 classes)

Hi,

I am facing an issue in the BiFPN module when starting training from the pretrained D1 weights.

File "train.py", line 238, in
train()
File "train.py", line 183, in train
classification, regression, anchors = model(images)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/deval/Projects/EfficientDet/EfficientDet.Pytorch/models/efficientdet.py", line 57, in forward
x = self.extract_feat(inputs)
File "/home/deval/Projects/EfficientDet/EfficientDet.Pytorch/models/efficientdet.py", line 90, in extract_feat
x = self.neck(x[-5:])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/deval/Projects/EfficientDet/EfficientDet.Pytorch/models/bifpn.py", line 105, in forward
laterals = bifpn_module(laterals)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/deval/Projects/EfficientDet/EfficientDet.Pytorch/models/bifpn.py", line 186, in forward
pathtd[i - 1] = (w1[0, i-1]*pathtd[i - 1] + w1[1, i-1]*F.interpolate(pathtd[i], scale_factor=3, mode='nearest'))/(w1[0, i-1] + w1[1, i-1] + self.eps)
RuntimeError: The size of tensor a (10) must match the size of tensor b (15) at non-singleton dimension 3

Input Size : (640,640)
Num Classes : 9

The data is loaded in the same format as COCO.

Can you please help? What am I doing wrong?

what the ***k!

My num_class is 3 (1 background + 2 classes). eval.py errors out and demo.py does not work! Which parameter is wrong?

Loading checkpoint: ./weights/Final_efficientdet-d4.pth ...
Loaded pretrained weights for efficientnet-b4
Traceback (most recent call last):
File "eval.py", line 72, in
model.load_state_dict(checkpoint['state_dict'])
File "/tensorflow-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EfficientDet:
Unexpected key(s) in state_dict: "neck.stack_bifpn_convs.3.w1", "neck.stack_bifpn_convs.3.w2", "neck.stack_bifpn_convs.3.bifpn_convs.0.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.0.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.1.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.1.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.2.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.2.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.3.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.3.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.4.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.4.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.5.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.5.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.6.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.6.0.conv.bias", "neck.stack_bifpn_convs.3.bifpn_convs.7.0.conv.weight", "neck.stack_bifpn_convs.3.bifpn_convs.7.0.conv.bias", "neck.stack_bifpn_convs.4.w1", "neck.stack_bifpn_convs.4.w2", "neck.stack_bifpn_convs.4.bifpn_convs.0.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.0.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.1.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.1.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.2.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.2.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.3.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.3.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.4.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.4.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.5.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.5.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.6.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.6.0.conv.bias", "neck.stack_bifpn_convs.4.bifpn_convs.7.0.conv.weight", "neck.stack_bifpn_convs.4.bifpn_convs.7.0.conv.bias", "neck.stack_bifpn_convs.5.w1", "neck.stack_bifpn_convs.5.w2", "neck.stack_bifpn_convs.5.bifpn_convs.0.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.0.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.1.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.1.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.2.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.2.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.3.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.3.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.4.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.4.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.5.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.5.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.6.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.6.0.conv.bias", "neck.stack_bifpn_convs.5.bifpn_convs.7.0.conv.weight", "neck.stack_bifpn_convs.5.bifpn_convs.7.0.conv.bias".
size mismatch for neck.lateral_convs.0.conv.weight: copying a param with shape torch.Size([224, 56, 1, 1]) from checkpoint, the shape in current model is torch.Size([88, 56, 1, 1]).
size mismatch for neck.lateral_convs.0.conv.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for neck.lateral_convs.1.conv.weight: copying a param with shape torch.Size([224, 112, 1, 1]) from checkpoint, the shape in current model is torch.Size([88, 112, 1, 1]).
size mismatch for neck.lateral_convs.1.conv.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for neck.lateral_convs.2.conv.weight: copying a param with shape torch.Size([224, 160, 1, 1]) from checkpoint, the shape in current model is torch.Size([88, 160, 1, 1]).
size mismatch for neck.lateral_convs.2.conv.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for neck.lateral_convs.3.conv.weight: copying a param with shape torch.Size([224, 272, 1, 1]) from checkpoint, the shape in current model is torch.Size([88, 272, 1, 1]).
size mismatch for neck.lateral_convs.3.conv.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for neck.lateral_convs.4.conv.weight: copying a param with shape torch.Size([224, 448, 1, 1]) from checkpoint, the shape in current model is torch.Size([88, 448, 1, 1]).
size mismatch for neck.lateral_convs.4.conv.bias: copying a param with shape torch.Size([224]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for neck.stack_bifpn_convs.0.bifpn_convs.0.0.conv.weight: copying a param with shape torch.Size([224, 224, 3, 3]) from checkpoint, the shape in current model is torch.Size([88, 88, 3, 3]).
...

Demo error:size mismatch for BIFPN.lateral_convs.0.conv.weight: copying a param with shape torch.Size([88, 40, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 40, 1, 1]).

Load pretrained Model
0    BlockArgs(kernel_size=3, num_repeat=2, input_filters=32, output_filters=16, expand_ratio=1, id_skip=True, stride=[1], se_ratio=0.25)
0    BlockArgs(kernel_size=3, num_repeat=2, input_filters=16, output_filters=16, expand_ratio=1, id_skip=True, stride=1, se_ratio=0.25)
1    BlockArgs(kernel_size=3, num_repeat=3, input_filters=16, output_filters=24, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
1    BlockArgs(kernel_size=3, num_repeat=3, input_filters=24, output_filters=24, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
1    BlockArgs(kernel_size=3, num_repeat=3, input_filters=24, output_filters=24, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
2    BlockArgs(kernel_size=5, num_repeat=3, input_filters=24, output_filters=40, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
2    BlockArgs(kernel_size=5, num_repeat=3, input_filters=40, output_filters=40, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
2    BlockArgs(kernel_size=5, num_repeat=3, input_filters=40, output_filters=40, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3    BlockArgs(kernel_size=3, num_repeat=4, input_filters=40, output_filters=80, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
3    BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3    BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3    BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4    BlockArgs(kernel_size=5, num_repeat=4, input_filters=80, output_filters=112, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
4    BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4    BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4    BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5    BlockArgs(kernel_size=5, num_repeat=5, input_filters=112, output_filters=192, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
5    BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5    BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5    BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5    BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
6    BlockArgs(kernel_size=3, num_repeat=2, input_filters=192, output_filters=320, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
6    BlockArgs(kernel_size=3, num_repeat=2, input_filters=320, output_filters=320, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
Loaded pretrained weights for efficientnet-b1
BIFPN in_channels: [40, 80, 112, 192, 320]
Traceback (most recent call last):
  File "demo.py", line 169, in <module>
    detect = Detect(weights = args.weight)
  File "demo.py", line 63, in __init__
    self.model.load_state_dict(state_dict)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EfficientDet:
	Unexpected key(s) in state_dict: "BIFPN.stack_bifpn_convs.1.w1", "BIFPN.stack_bifpn_convs.1.w2", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.1.conv.bias", "BIFPN.stack_bifpn_convs.2.w1", "BIFPN.stack_bifpn_convs.2.w2", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.1.conv.bias", 
"BIFPN.stack_bifpn_convs.2.bifpn_convs.7.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.1.conv.bias". 
	size mismatch for BIFPN.lateral_convs.0.conv.weight: copying a param with shape torch.Size([88, 40, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 40, 1, 1]).
	size mismatch for BIFPN.lateral_convs.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.lateral_convs.1.conv.weight: copying a param with shape torch.Size([88, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 80, 1, 1]).
	size mismatch for BIFPN.lateral_convs.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.lateral_convs.2.conv.weight: copying a param with shape torch.Size([88, 112, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 112, 1, 1]).
	size mismatch for BIFPN.lateral_convs.2.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.lateral_convs.3.conv.weight: copying a param with shape torch.Size([88, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 192, 1, 1]).
	size mismatch for BIFPN.lateral_convs.3.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.lateral_convs.4.conv.weight: copying a param with shape torch.Size([88, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 320, 1, 1]).
	size mismatch for BIFPN.lateral_convs.4.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
	size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for regressionModel.conv1.weight: copying a param with shape torch.Size([256, 88, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
	size mismatch for classificationModel.conv1.weight: copying a param with shape torch.Size([256, 88, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).

Rotation

Can you add rotation support to this repo?

maybe bugs in eval.py

Hi @toandaominh1997
Thanks for sharing the code. I have a few queries:

  1. In eval.py's eval_voc function, when computing mAP, the outer loop is over classes and the inner loop is over images.
    Shouldn't the line for i in range(valid_dataset.__num_class__()) be for i in range(len(valid_dataset))?

  2. When defining valid_dataset, the default image_sets=[('2007', 'trainval'), ('2012', 'trainval')] is used. Shouldn't it be [('2007', 'test')]?

Data augmentation confusion

It seems dataset.get_augumentation augments the training image, but you add a center crop after resizing to the same width and height. I'm confused by this; it seems redundant, and resizing without preserving the height/width ratio weakens the structure of the image.

