pytorch-yolov3-kitti's People

Contributors

packyan

pytorch-yolov3-kitti's Issues

train/val split used to obtain yolov3-kitti.weights

Thanks for sharing your work.
Could you please clarify the train/val split that was used to train the model corresponding to the yolov3-kitti.weights?
Is it the standard 3717 / 3769 split, or were all 7481 images used for training?

Why are the AP scores all 0 when running test.py?

  • I'm using the pretrained kitti weights.
  • I'm testing on the KITTI 2d dataset downloaded from the official site.
  • I converted the kitti labels to coco format and put them under PyTorch-YOLOv3-kitti\data\kitti\labels\test (see the conversion sketch after this list)
  • Test images are under PyTorch-YOLOv3-kitti\data\kitti\image\test
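
One likely cause, offered here only as a guess: this repo family expects YOLO-style labels, i.e. one .txt file per image containing normalized "class cx cy w h" rows, not COCO JSON. Below is a minimal conversion sketch; the class order, paths, and file layout are assumptions, not taken from the repo.

    # Sketch: convert one KITTI label file to YOLO-format rows
    # "class cx cy w h", all coordinates normalized to [0, 1].
    from PIL import Image

    KITTI_CLASSES = ["Car", "Van", "Truck", "Pedestrian",
                     "Person_sitting", "Cyclist", "Tram", "Misc"]  # assumed order

    def kitti_to_yolo(kitti_label_path, image_path, out_path):
        w_img, h_img = Image.open(image_path).size
        rows = []
        with open(kitti_label_path) as f:
            for line in f:
                parts = line.split()
                if parts[0] not in KITTI_CLASSES:   # skips 'DontCare' entries
                    continue
                cls = KITTI_CLASSES.index(parts[0])
                # KITTI 2D box: left, top, right, bottom in pixels (5th-8th columns)
                left, top, right, bottom = map(float, parts[4:8])
                cx = (left + right) / 2.0 / w_img
                cy = (top + bottom) / 2.0 / h_img
                bw = (right - left) / w_img
                bh = (bottom - top) / h_img
                rows.append("%d %.6f %.6f %.6f %.6f" % (cls, cx, cy, bw, bh))
        with open(out_path, "w") as f:
            f.write("\n".join(rows))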

Format for targets

Hi!

I just wanted to ask: what is the final format of the targets when you pass them to the network for training?

From your code I understand that you read the ground truth from a txt file, but in datasets.py (line 96) you use np.loadtxt, and the labels begin with a string (see this example).

Could you please specify the composition of the targets? I know that you limit the number of boxes per image to 50, so the targets will have shape [batch_size, 50, 5].

Are they in the form [class, center_x_ratio, center_y_ratio, box_width_ratio, box_height_ratio]?

Here "ratio" means the value in pixels divided by the image width (for center_x and box_width) or by the image height (for center_y and box_height).

Thank you!
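
For what it's worth, here is a minimal sketch of the layout described above; max_objects = 50 and zero-padding of unused rows are assumptions taken from the question, not verified against datasets.py.

    import numpy as np

    def build_targets(boxes, max_objects=50):
        """boxes: list of (class_id, cx_ratio, cy_ratio, w_ratio, h_ratio) tuples,
        each coordinate already divided by the image width or height."""
        filled = np.zeros((max_objects, 5), dtype=np.float32)
        for i, box in enumerate(boxes[:max_objects]):
            filled[i] = box
        return filled

    # Example: one box of class 0, centred in a 1242x375 image, 100x50 px in size
    targets = build_targets([(0, 621 / 1242.0, 187.5 / 375.0, 100 / 1242.0, 50 / 375.0)])
    print(targets.shape)   # (50, 5); batched, this becomes [batch_size, 50, 5]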

Training on Custom Dataset

Hi,

First of all thank you for the code and detailed steps involved.
I was trying to run the same code with another dataset (Virtual KITTI). I am able to start the training process, but I am not sure how to decide the number of epochs. Is there any way to implement early stopping in the code, or what should the process be? Could you please guide me?
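
There is no early stopping built into the training script as far as I can tell, but a generic version is easy to bolt on. In the sketch below, train_fn and eval_fn are placeholders for whatever per-epoch training step and mAP evaluation you already run, and save_weights is assumed to be the Darknet model's own checkpoint helper.

    def train_with_early_stopping(model, train_fn, eval_fn, max_epochs=200, patience=10):
        """Stop when validation mAP has not improved for `patience` consecutive epochs."""
        best_map, stale = 0.0, 0
        for epoch in range(max_epochs):
            train_fn(model)                      # one pass over the training set
            val_map = eval_fn(model)             # e.g. the AP computation from test.py
            if val_map > best_map:
                best_map, stale = val_map, 0
                model.save_weights("weights/best.weights")   # assumed helper on the model
            else:
                stale += 1
                if stale >= patience:
                    print("No mAP improvement for %d epochs; stopping at epoch %d" % (patience, epoch))
                    break
        return best_map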

plt.get_cmap uses 'Vega20b', which doesn't exist

On executing detect.py, if 'Vega20b' is used as the colormap, an error of the following form is thrown:

Vega20b' is not a valid value for name; supported values are 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral', 'nipy_spectral_r', 'ocean', 'ocean_r', 'pink', 'pink_r', 'plasma', 'plasma_r', 'prism', 'prism_r', 'rainbow', 'rainbow_r', 'seismic', 'seismic_r', 'spring', 'spring_r', 'summer', 'summer_r', 'tab10', 'tab10_r', 'tab20', 'tab20_r', 'tab20b', 'tab20b_r', 'tab20c', 'tab20c_r', 'terrain', 'terrain_r', 'twilight', 'twilight_r', 'twilight_shifted', 'twilight_shifted_r', 'viridis', 'viridis_r', 'winter', 'winter_r'

The error is resolved if I replace 'Vega20b' with 'tab20b'. Presumably this is because later versions of matplotlib no longer support 'Vega20b'.
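
That matches the matplotlib history as I understand it: 'Vega20b' ended up as a deprecated alias of 'tab20b' and was removed in matplotlib 3.0. A backwards-compatible sketch for the corresponding line in detect.py:

    import matplotlib.pyplot as plt

    # 'tab20b' is the current name; fall back to 'Vega20b' only on very old matplotlib.
    try:
        cmap = plt.get_cmap('tab20b')
    except ValueError:
        cmap = plt.get_cmap('Vega20b')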

KITTI training performance

Hi packyan,
The KITTI val performance using the pre-trained model is very good:
Average Precisions:

  • Class '0' - AP: 0.8874476379466757
  • Class '1' - AP: 0.8776504929104587
  • Class '2' - AP: 0.9094174650326788
  • Class '3' - AP: 0.7935572902584845
  • Class '4' - AP: 0.6938401233821029
  • Class '5' - AP: 0.7484522134591276
  • Class '6' - AP: 0.9765143493123802
  • Class '7' - AP: 0.6567834290009646
    mAP: 0.8179578751628592

However, when training from the ImageNet pretrained model, I am only able to get the following results:
Average Precisions:

  • Class '0' - AP: 0.5895481840925929
  • Class '1' - AP: 0.056750284687503164
  • Class '2' - AP: 0.05041402945683988
  • Class '3' - AP: 0.25446338964588194
  • Class '4' - AP: 0.0
  • Class '5' - AP: 0.042098299062717656
  • Class '6' - AP: 0.0
  • Class '7' - AP: 0.0
    mAP: 0.12415927336819195

I'm wondering what I missed or misconfigured. I use the same label-conversion pre-processing code as in the instructions and train without changing the Python arguments. Did I do anything wrong, or do any arguments need to be changed?

Thanks a lot!

Kitti.weights

Hi
What is "kitti.weights", and how do I download it?
Please help me.

Train.py and kitti labels

Hi,

So I am using my own data from http://www.cvlibs.net/datasets/kitti/raw_data.php?type=residential
Specifically, the second-to-last dataset, 2011_10_03_drive_0027 (17.6 GB). I am able to run all the necessary code until I get to train.py, where I keep getting the same error:

train.py", line 247
for i in tqdm.tqdm(range(len(all_annotations)), desc=f"Computing AP for class '{label}'"):
^
SyntaxError: invalid syntax

If it's not that error, there are others, which I will post once I hit them again. Also, my visdom window stays blue the whole time and I'm not sure how to fix that; I've looked on StackOverflow and GitHub and nothing resolves the issue.
I also do not know how to obtain labels for the dataset I am using. Can anyone help, please?
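
That particular SyntaxError is what Python older than 3.6 reports when it reaches the f-string on that line of train.py: f-strings require Python 3.6 or newer. Either run the script with a newer interpreter, or rewrite the line with str.format(). A sketch of the rewrite (the placeholder assignments exist only so the snippet runs on its own; in train.py, all_annotations and label come from the surrounding loop):

    import tqdm

    all_annotations, label = [], "Car"   # placeholders for illustration only

    # Python < 3.6 cannot parse f-strings; str.format() is an equivalent rewrite.
    for i in tqdm.tqdm(range(len(all_annotations)),
                       desc="Computing AP for class '{}'".format(label)):
        ...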

RuntimeError: CUDA out of memory

I'm getting a CUDA out-of-memory RuntimeError even after setting the batch size to 1 instead of the 16 mentioned in the repo.
Could you please point me toward a possible fix?
My GPU has 8 GB of memory.

Thank you!
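
A few memory-saving options worth trying, sketched here with the assumptions noted in the comments:

    import torch

    def detect_batch(model, imgs):
        # Run inference without building autograd graphs; during evaluation this
        # alone can cut memory use substantially.
        model.eval()
        with torch.no_grad():
            return model(imgs)

    # Other things to try:
    # - a smaller input resolution (any multiple of 32), e.g. 320 instead of 416,
    #   assuming the script exposes an --img_size argument;
    # - torch.cuda.empty_cache() between runs to release cached allocator blocks.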

pretrained kitti weight

Download pretrained weights: "please download weights/yolov3-kitti.weights"

Thank you for your code repository. I saw the trained KITTI weights mentioned in your README as something to download. Where can I download them?

Error when running python3 detect.py --image_folder /data/samples; please advise

Config:
Namespace(batch_size=1, class_path='data/kitti.names', conf_thres=0.8, config_path='config/yolov3-kitti.cfg', image_folder='/data/samples', img_size=416, n_cpu=8, nms_thres=0.4, use_cuda=True, weights_path='weights/kitti.weights')
/usr/local/lib/python3.5/dist-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
warnings.warn(warning.format(ret))
model path: weights/kitti.weights
data size : 0

Performing object detection:
Traceback (most recent call last):
File "detect.py", line 90, in
cmap = plt.get_cmap('Vega20b')
File "/usr/local/lib/python3.5/dist-packages/matplotlib/cm.py", line 182, in get_cmap
% (name, ', '.join(sorted(cmap_d))))
ValueError: Colormap Vega20b is not recognized. Possible values are: Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r, GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r, Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r, Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu, YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r, bone, bone_r, brg, brg_r, bwr, bwr_r, cividis, cividis_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, gist_stern_r, gist_yarg, gist_yarg_r, gnuplot, gnuplot2, gnuplot2_r, gnuplot_r, gray, gray_r, hot, hot_r, hsv, hsv_r, inferno, inferno_r, jet, jet_r, magma, magma_r, nipy_spectral, nipy_spectral_r, ocean, ocean_r, pink, pink_r, plasma, plasma_r, prism, prism_r, rainbow, rainbow_r, seismic, seismic_r, spring, spring_r, summer, summer_r, tab10, tab10_r, tab20, tab20_r, tab20b, tab20b_r, tab20c, tab20c_r, terrain, terrain_r, twilight, twilight_r, twilight_shifted, twilight_shifted_r, viridis, viridis_r, winter, winter_r
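
Two separate problems are visible in this log. First, "data size : 0" suggests that /data/samples was resolved as an absolute path; the repo keeps its sample images under data/samples relative to the project root, and the later issue on this page that passes image_folder='./data/samples/' reports "data size : 20". Second, this matplotlib version no longer recognizes 'Vega20b' (see the 'tab20b' fallback sketch in the earlier colormap issue). A quick path check, with the relative path as an assumption:

    import glob, os

    folder = "data/samples"   # relative path assumed by the repo layout; no leading slash
    images = sorted(glob.glob(os.path.join(folder, "*.*")))
    print(len(images), "files found in", os.path.abspath(folder))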

RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device

MY ENVIRONMENT:

Name Version Build Channel

_libgcc_mutex 0.1 conda_forge https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
_openmp_mutex 4.5 2_kmp_llvm https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
blas 2.17 openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ca-certificates 2023.01.10 h06a4308_0 anaconda
certifi 2021.10.8 pypi_0 pypi
cffi 1.10.0 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
chardet 4.0.0 pypi_0 pypi
cudatoolkit 7.5 2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cycler 0.10.0 pypi_0 pypi
freetype 2.5.5 2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
future 0.18.3 pypi_0 pypi
idna 2.10 pypi_0 pypi
imageio 2.9.0 pypi_0 pypi
jbig 2.1 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
jpeg 9b 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
jsonpatch 1.32 pypi_0 pypi
jsonpointer 2.3 pypi_0 pypi
kiwisolver 1.1.0 pypi_0 pypi
libblas 3.8.0 17_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libcblas 3.8.0 17_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libffi 3.2.1 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libgcc-ng 12.2.0 h65d4601_19 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgfortran 3.0.0 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libgfortran-ng 7.5.0 h14aa051_20 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgfortran4 7.5.0 h14aa051_20 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
liblapack 3.8.0 17_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
liblapacke 3.8.0 17_openblas https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libopenblas 0.3.10 h5a2b251_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.30 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libsodium 1.0.10 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libstdcxx-ng 12.2.0 h46fd767_19 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libtiff 4.0.6 3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
llvm-openmp 12.0.1 h4bd325d_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
matplotlib 2.1.0 pypi_0 pypi
mkl 2022.0.1 h8d4b97c_803 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
networkx 2.4 pypi_0 pypi
ninja 1.7.2 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
numpy 1.13.1 py35_nomkl_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
olefile 0.44 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
openblas 0.2.19 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
openssl 1.0.2l 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pillow 7.2.0 pypi_0 pypi
pip 20.3.4 pypi_0 pypi
pycparser 2.18 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
python 3.5.4 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pytorch 0.4.1 py35_py27__9.0.176_7.1.2_2 pytorch
pytz 2022.7.1 pypi_0 pypi
pywavelets 1.1.1 pypi_0 pypi
pyzmq 16.0.2 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
readline 6.2 2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
requests 2.25.1 pypi_0 pypi
scikit-image 0.15.0 pypi_0 pypi
setuptools 36.4.0 py35_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
six 1.10.0 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
sqlite 3.13.0 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tbb 2021.7.0 h924138e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
tk 8.5.18 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
torchfile 0.1.0 pypi_0 pypi
torchvision 0.2.0 pypi_0 pypi
tornado 4.5.2 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
urllib3 1.26.9 pypi_0 pypi
version 0.1.0 pypi_0 pypi
visdom 0.1.8.3 pypi_0 pypi
websocket-client 0.59.0 pypi_0 pypi
wheel 0.29.0 py35_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
xz 5.2.3 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
zeromq 4.1.5 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
zlib 1.2.11 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free

QUESTION:
(pytroch-yolov3) siat@siat-Precision-3640-Tower:~/work/object_detection/PyTorch-YOLOv3-kitti-master$ python detect.py
Config:
Namespace(batch_size=1, class_path='./data/kitti.names', conf_thres=0.8, config_path='./config/yolov3-kitti.cfg', image_folder='./data/samples/', img_size=416, n_cpu=4, nms_thres=0.4, use_cuda=True, weights_path='./weights/kitti.weights')
/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
warnings.warn(warning.format(ret))
model path: ./weights/kitti.weights
using cuda model
data size : 20

Performing object detection:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1535490206202/work/aten/src/THC/THCGeneral.cpp line=663 error=8 : invalid device function
Traceback (most recent call last):
File "detect.py", line 72, in
detections = model(input_imgs)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/home/siat/work/object_detection/PyTorch-YOLOv3-kitti-master/models.py", line 256, in forward
x = module(x)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/modules/activation.py", line 447, in forward
return F.leaky_relu(input, self.negative_slope, self.inplace)
File "/home/siat/Downloads/ENTER/envs/pytroch-yolov3/lib/python3.5/site-packages/torch/nn/functional.py", line 755, in leaky_relu
return torch._C._nn.leaky_relu(input, negative_slope)
RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at /opt/conda/conda-bld/pytorch_1535490206202/work/aten/src/THCUNN/generic/LeakyReLU.cu:29
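
cuda runtime error (48) generally means the installed PyTorch binary does not ship kernels compiled for this GPU's compute capability; the environment above pairs pytorch 0.4.1 with a very old cudatoolkit 7.5 package, which is a likely culprit. A small diagnostic sketch to run before reinstalling a build that matches both the driver and the GPU architecture:

    import torch

    print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0),
              "compute capability:", torch.cuda.get_device_capability(0))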

A question for the author

If I change the number of classes in this project so that it only detects the 'car' class, what modifications are needed?
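
For darknet-style YOLOv3 configs the usual recipe is (stated here as a general sketch, not as instructions taken from this repo): keep only Car in data/kitti.names, regenerate the label files so every remaining box uses class index 0, and in config/yolov3-kitti.cfg set classes=1 in each of the three [yolo] layers and filters = (classes + 5) * 3 = 18 in the [convolutional] layer immediately before each of them:

    # In each of the three detection heads of the .cfg:
    # filters in the last [convolutional] layer = (classes + 5) * 3 = 18 for one class
    [convolutional]
    size=1
    stride=1
    pad=1
    filters=18
    activation=linear

    [yolo]
    classes=1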

How to use multiple GPUs?

I didn't find multi-GPU training code in the repo, and I found that using nn.DataParallel also does not work.
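
As a starting point only (whether this works depends on how this repo's Darknet class loads weights and computes its loss), nn.DataParallel is usually applied after the weights are loaded, and the repo-specific helpers then have to be reached through .module:

    import torch
    from models import Darknet   # the repo's model definition

    model = Darknet("config/yolov3-kitti.cfg")
    model.load_weights("weights/yolov3-kitti.weights")   # the repo's custom loader
    model = torch.nn.DataParallel(model).cuda()          # replicate across all visible GPUs

    # After wrapping, custom methods live on model.module, e.g.:
    # model.module.save_weights("weights/checkpoint.weights")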
