
clocs's People

Contributors

dominicfmazza, pangsu0613


clocs's Issues

Considering 3D detection if there is no corresponding 2D detection

Hello @pangsu0613. In the paper you mention that a 3D detection is still considered even when there is no corresponding 2D detection.
But what happens when there is a 2D detection and no corresponding 3D detection?

[image from the paper]

I have taken this image from your paper: there is a 2D detection but no 3D detection, yet the output still shows a 3D box.
Can you clarify this?

A few more queries:

  1. If we feed the velodyne information as input to the 3D detector, do we need the pickle files such as kitti_infos_train for the calibration information?

  2. Car -1 -1 -10 765.27 107.09 816.20 136.49 -1 -1 -1 -1000 -1000 -1000 -10 0.0000. Can you also explain this output format from Cascade R-CNN, which appears in the pickle file for the 2D detections? (See the parsing sketch after this list.)

  3. Is there a straightforward way to use the 3D detections directly, without going through SECOND, so that we can feed the 2D and 3D detections into CLOCs and run?
    Thanks!
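
For reference, the detection line quoted in query 2 follows the standard KITTI label layout. A minimal parsing sketch (illustrative only, not code from this repo; field order as in the KITTI devkit):

def parse_kitti_detection(line):
    # 2D-only detectors fill the 3D fields with placeholders:
    # -1 / -1000 / -10 stand for "unknown"
    f = line.split()
    return {
        'type': f[0],                               # object class, e.g. Car
        'truncated': float(f[1]),                   # -1: unknown
        'occluded': float(f[2]),                    # -1: unknown
        'alpha': float(f[3]),                       # -10: unknown viewing angle
        'bbox': [float(x) for x in f[4:8]],         # left, top, right, bottom (px)
        'dimensions': [float(x) for x in f[8:11]],  # h, w, l (m); -1: unknown
        'location': [float(x) for x in f[11:14]],   # x, y, z (m); -1000: unknown
        'rotation_y': float(f[14]),                 # -10: unknown
        'score': float(f[15]),                      # 2D confidence (sigmoid score)
    }

Read this way, the example line is a Car with 2D box (765.27, 107.09, 816.20, 136.49), score 0.0000, and all 3D fields unset.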

alternative dataset

Hello, thank you for your wonderful work. I have some questions:
Q1: Have you tested CLOCs on other available indoor datasets?
Q2: If we want to use our own 3D and 2D detectors with CLOCs on a custom dataset, can we follow the pipeline described in the README, or are there other parts that need to be modified?

Source of 2D Detections provided by CLOCs

Hello @pangsu0613! To reproduce the results on SECOND, we need 3D detections from a pretrained SECOND model and 2D detections from Cascade R-CNN. You mention the following with regard to extracting the 2D detections:

"For this example, we use detections with sigmoid scores, you could download the Cascade-RCNN detections for the KITTI train and validations set from here file name:'cascade_rcnn_sigmoid_data'"

My question concerns the model that produced the 'cascade_rcnn_sigmoid_data' 2D detections: was it trained on training+validation, or only on the training set? When I looked at the Cascade R-CNN repo, it only provides models pretrained on the training and validation sets; I do not think it provides models trained only on the training split. Could you please clarify this?

AP is very low

I used your model for evaluation, but the mAP is very low: only 72.12 on hard.

TypeError: 'numpy.float64' object cannot be interpreted as an integer

Traceback (most recent call last):
File "./pytorch/train.py", line 918, in
fire.Fire()
File "/home/cute/anaconda3/envs/spconv-1.0/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/cute/anaconda3/envs/spconv-1.0/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/home/cute/anaconda3/envs/spconv-1.0/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "./pytorch/train.py", line 371, in train
raise e
File "./pytorch/train.py", line 356, in train
result = get_coco_eval_result(gt_annos, dt_annos, class_names)
File "/media/cute/New Volume/AAA/YYY/fusion/ClOCss/second/utils/eval.py", line 1017, in get_coco_eval_result
gt_annos, dt_annos, current_classes, overlap_ranges, compute_aos)
File "/media/cute/New Volume/AAA/YYY/fusion/ClOCss/second/utils/eval.py", line 828, in do_coco_style_eval
min_overlaps[:, i, j] = np.linspace(*overlap_ranges[:, i, j])
File "<array_function internals>", line 6, in linspace
File "/home/cute/anaconda3/envs/spconv-1.0/lib/python3.6/site-packages/numpy/core/function_base.py", line 113, in linspace
num = operator.index(num)
TypeError: 'numpy.float64' object cannot be interpreted as an integer

Why am I getting this, please?

Also, when I trained, it only worked with the velodyne data kept in velodyne_reduced and the velodyne folder empty; it did not work the other way around!
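
A commonly reported workaround for the TypeError above (a sketch, assuming a recent NumPy that requires the num argument of np.linspace to be an integer) is to cast the sample count explicitly in do_coco_style_eval in second/utils/eval.py, replacing the line from the traceback with:

# hedged fix: newer NumPy rejects a float for np.linspace's num argument,
# so unpack overlap_ranges manually and cast the count to int
min_overlaps[:, i, j] = np.linspace(overlap_ranges[0, i, j],       # start
                                    overlap_ranges[1, i, j],       # stop
                                    int(overlap_ranges[2, i, j]))  # num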

Hello @JessieW0806.

  1. We have tested CLOCs on nuScenes, but we cannot share it with you for now.
  2. nuScenes has 10 classes. CLOCs is a very small network (similar in size to a detection head in other detectors), so we simply use 10 separate CLOCs networks for the 10 classes. If there are too many detection candidates for a class, you can do some simple filtering (thresholding) to reduce the amount; in our experience this works well.
  3. I think it is feasible. Currently, CLOCs takes 3D bounding boxes (from a 3D detector, which could be LiDAR-based or use another sensor) and 2D bounding boxes (from a 2D detector, which could be camera-based or use another sensor). I guess millimeter-wave radar can only provide top-down (BEV) measurements, such as object center points or BEV bounding boxes. You would need to project these radar detections into the image and design a data association metric (for CLOCs we use IoU, as sketched below) to associate the projected radar detections with the other 2D detections. It depends on what detection format you have.

Originally posted by @pangsu0613 in #40 (comment)
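
For reference, a minimal sketch (not code from this repo) of the kind of 2D IoU used as the association metric mentioned above, for axis-aligned boxes given as [left, top, right, bottom] in pixels:

def iou_2d(box_a, box_b):
    # intersection rectangle (empty intersections clamp to zero area)
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0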

Tkinter install

Hi, I'm running in an HPC environment that does not support any GUI. Is it possible to run the training without tkinter?
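
A common workaround on headless machines (a hedged sketch, assuming tkinter is only pulled in through matplotlib's default backend): select a non-GUI backend before anything imports pyplot.

import matplotlib
matplotlib.use('Agg')            # non-interactive backend, no display required
import matplotlib.pyplot as plt  # safe to import after choosing the backend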

Can CLOCs run on nuScenes dataset?

  1. Have you tested the performance of CLOCs on nuScenes? Could you please share it with me? :)
  2. nuScenes has more object classes to detect than KITTI, so there would be a huge number of 3D and 2D boxes for the IoU computation (and training a separate network for every class seems impractical). In this case, how can CLOCs be transferred to nuScenes gracefully?
  3. The reason I ask is that I want to use millimeter-wave radar, which could add a new modality to CLOCs. Do you think this is feasible?
    Thanks a lot!

KITTI dataset - number of examples in testing

I found on the KITTI website that there are only 7518 testing images. However, your README states that there are 7580 testing images. I was wondering if KITTI has changed the number of testing images, or if it is a typo in the README? Thanks!

RuntimeError: CUDA out of memory.

Thank you for the excellent work. It looks very cool.
I was trying to run it, but hit an error about GPU memory.
The log with the error is:

2d detection path: /home/wang/wang/git_files/test/CLOCs/d2_detection_data/data
sparse_shape: [  41 1600 1408]
num_class is : 1
load existing model
Restoring parameters from /home/wang/wang/git_files/test/CLOCs/model_dir/adam_optimizer-2.tckpt
{'Car': 5}
[-1]
load 14357 Car database infos
load 2207 Pedestrian database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 56 Person_sitting database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
After filter database:
load 10520 Car database infos
load 2104 Pedestrian database infos
load 594 Cyclist database infos
load 826 Van database infos
load 53 Person_sitting database infos
load 321 Truck database infos
load 199 Tram database infos
load 259 Misc database infos
remain number of infos: 3712
remain number of infos: 3769
WORKER 0 seed: 1615082231
WORKER 1 seed: 1615082232
WORKER 2 seed: 1615082233
Traceback (most recent call last):
  File "./pytorch/train.py", line 926, in <module>
    fire.Fire()
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
    target=component.__name__)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "./pytorch/train.py", line 379, in train
    raise e
  File "./pytorch/train.py", line 248, in train
    all_3d_output_camera_dict, all_3d_output, top_predictions, fusion_input,tensor_index = net(example_torch,detection_2d_path)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wang/wang/git_files/test/CLOCs/second/pytorch/models/voxelnet.py", line 310, in forward
    preds_dict = self.rpn(spatial_features)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wang/wang/git_files/test/CLOCs/second/pytorch/models/rpn.py", line 314, in forward
    ups.append(self.deblocks[i](x))
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wang/wang/git_files/test/CLOCs/torchplus/nn/modules/common.py", line 89, in forward
    input = module(input)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 106, in forward
    exponential_average_factor, self.eps)
  File "/home/wang/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/functional.py", line 1923, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 3.94 GiB total capacity; 2.45 GiB already allocated; 80.38 MiB free; 2.53 GiB reserved in total by PyTorch)

My GPU is an NVIDIA GTX 1050 Ti with 4 GB of memory.
Is it because my GPU memory is not enough?
Can I work around the problem somehow without upgrading my GPU?
And what is the minimum GPU memory requirement for this project?
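
One knob that often helps on small GPUs (a hedged suggestion, not an official minimum from the authors): lower the batch size in the input reader block of the training config, e.g. in second/configs/car.fhd.config (SECOND-style text-proto; surrounding fields elided):

train_input_reader: {
  ...
  batch_size: 1  # smaller batches reduce peak GPU memory at the cost of speed
  ...
}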

Steps to use CLOCs with the same models but a different dataset

Hello,

I have read through #7 but am still confused about how I should go about training CLOCs. From what I have read, it sounds like I need to train both the 2D and 3D models on my data and have them output detections in KITTI format. Then I need to train the fusion layer. I am confused about what the KITTI format is; is it the sparse tensor representation pointed out in the paper?

Thanks for all of your help so far, you have been extremely helpful and patient.

no module named 'fusion'

Hi, when I run train.py I get another error, shown below:

Traceback (most recent call last):
  File "./pytorch/train.py", line 26, in <module>
    from second.pytorch.models import fusion
ImportError: cannot import name 'fusion'

'fusion.py' does exist in the models folder, and I did add CLOCs to the PYTHONPATH.

Best regards

More Questions about fusion of PointRCNN and Cascade R-CNN

Hello @pangsu0613! Four months ago I successfully fused the candidates of PointRCNN and Cascade R-CNN using the 2D candidates you provided. Now I want to train Cascade R-CNN from scratch and produce the 2D candidates myself. I have two questions for you.

  1. I use the mmdetection codebase to train Cascade R-CNN. After refinement by the three detection heads, I get 1000 candidates. Originally, the score threshold before NMS is 0.05, the IoU threshold in NMS is 0.5, and the score threshold after NMS is 0.3. If I simply set the pre-NMS score threshold to zero and save the candidates immediately after NMS, I can get more than 200 candidates. So I made two attempts: first, saving the top-100 candidates with the highest scores; second, setting the IoU threshold to 0.2. Both attempts led to worse fusion results than using the candidates you provided. Can you tell me the correct configuration? (See the config sketch at the end of this issue.)
  2. I find the training process of CLOCs unstable. For example, the evaluation results (AP) for epochs 1 to 5 are 80.9767, 82.8649, 83.3775, 81.5447, and 82.4789, respectively. How can the training process be made more stable?

I am looking forward to your reply. Thank you in advance!
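
For concreteness, the thresholds in question 1 correspond to fields like these in an mmdetection Cascade R-CNN config (a hedged sketch; the exact key names vary between mmdetection versions):

test_cfg = dict(
    rcnn=dict(
        score_thr=0.0,                            # score threshold before NMS
        nms=dict(type='nms', iou_threshold=0.5),  # IoU threshold used in NMS
        max_per_img=100))                         # keep the top-100 candidates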

RuntimeError: src/spconv/indice.cu 120 cuda execution failed with error 98

Error encountered while running inference:

python /home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py evaluate --config_path=/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/configs/car.fhd.config --model_dir=/home/developer/deep_learning/deepti_ubuntu20/CLOCs/CLOCs_SecCas_pretrained --measure_time=True --batch_size=1

Predict_test: False
sparse_shape: [ 41 1600 1408]
num_class is : 1
load existing model
load existing model for fusion layer
Restoring parameters from /home/developer/deep_learning/deepti_ubuntu20/CLOCs/CLOCs_SecCas_pretrained/fusion_layer-11136.tckpt
remain number of infos: 3769
Generate output labels...
Traceback (most recent call last):
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py", line 920, in
fire.Fire()
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py", line 658, in evaluate
for example in iter(eval_dataloader):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in next
data = self._next_data()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/builder/input_reader_builder.py", line 18, in getitem
return self._dataset[idx]
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/data/dataset.py", line 70, in getitem
prep_func=self._prep_func)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/data/preprocess.py", line 313, in _read_and_prep_v9
count=-1).reshape([-1, num_point_features])
FileNotFoundError: [Errno 2] No such file or directory: '/home/developer/deep_learning/Projects/KITTI_DATASET_ROOT/KITTI_DATASET_ROOT/training/velodyne_reduced/000001.bin'
developer@f0f2e49fc4a2:~/deep_learning/deepti_ubuntu20/CLOCs/second$
developer@f0f2e49fc4a2:~/deep_learning/deepti_ubuntu20/CLOCs/second$ python /home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py evaluate --config_path=/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/configs/car.fhd.config --model_dir=/home/developer/deep_learning/deepti_ubuntu20/CLOCs/CLOCs_SecCas_pretrained --measure_time=True --batch_size=1
Predict_test: False
sparse_shape: [ 41 1600 1408]
num_class is : 1
load existing model
load existing model for fusion layer
Restoring parameters from /home/developer/deep_learning/deepti_ubuntu20/CLOCs/CLOCs_SecCas_pretrained/fusion_layer-11136.tckpt
remain number of infos: 3769
Generate output labels...
/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/core/geometry.py:146: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: No implementation of function Function(<built-in function getitem>) found for signature:

getitem(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))

There are 22 candidate implementations:

  • Of which 20 did not match due to:
Overload of function 'getitem': File: <numerous>: Line N/A.
    With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    No match.
  • Of which 2 did not match due to:
    Overload in function 'GetItemBuffer.generic': File: numba/core/typing/arraydecl.py: Line 162.
    With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    Rejected as the implementation raised a specific error:
    TypeError: unsupported array index type list(int64)<iv=None> in Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>)
    raised from /opt/conda/lib/python3.6/site-packages/numba/core/typing/arraydecl.py:69

During: typing of intrinsic-call at /home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/core/geometry.py (162)

File "core/geometry.py", line 162:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

vec1 = polygon - polygon[:, [num_points_of_polygon - 1] +
list(range(num_points_of_polygon - 1)), :]
^

@numba.jit
/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/core/geometry.py:146: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'>

File "core/geometry.py", line 170:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

cross = 0.0
for i in range(num_points):
^

@numba.jit
/opt/conda/lib/python3.6/site-packages/numba/core/object_mode_passes.py:152: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
/opt/conda/lib/python3.6/site-packages/numba/core/object_mode_passes.py:162: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit https://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
Traceback (most recent call last):
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py", line 920, in
fire.Fire()
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/opt/conda/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py", line 671, in evaluate
model_cfg.lidar_input,global_set)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/train.py", line 462, in predict_kitti_to_anno
all_3d_output_camera_dict, all_3d_output, top_predictions, fusion_input,torch_index = net(example,detection_2d_path)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/models/voxelnet.py", line 304, in forward
voxel_features, coors, batch_size_dev)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/developer/deep_learning/deepti_ubuntu20/CLOCs/second/pytorch/models/middle.py", line 545, in forward
ret = self.middle_conv(ret)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/spconv/modules.py", line 123, in forward
input = module(input)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/spconv/conv.py", line 151, in forward
self.stride, self.padding, self.dilation, self.output_padding, self.subm, self.transposed, grid=input.grid)
File "/opt/conda/lib/python3.6/site-packages/spconv/ops.py", line 89, in get_indice_pairs
stride, padding, dilation, out_padding, int(subm), int(transpose))
RuntimeError: /home/developer/deep_learning/deepti_ubuntu20/spconv-8da6f967fb9a054d8870c3515b1b44eca2103634/src/spconv/indice.cu 120
cuda execution failed with error 98

Question about fusion of PointRCNN and Cascade R-CNN

Hello @pangsu0613! I know you and your labmate are working on the fusion of other 3D and 2D detectors, but I am really interested in your work and eager to fuse the candidates of PointRCNN and Cascade R-CNN now.
I have obtained the 3D bounding boxes and 3D confidence scores from PointRCNN, but I cannot understand your code because I am not familiar with SECOND. Could you please tell me how to modify your code to train the fusion network on PointRCNN and Cascade R-CNN?
I am looking forward to your early reply! Thank you in advance!

Implementation of Cascade RCNN used

Hi,
Thank you for sharing your work with us.

For my experiments, I want to modify the Camera (2D Detections) branch and LiDAR (3D Detections) branch before I run them through the CLOCs fusion network.

  1. For the LiDAR branch, I can do this easily by modifying the pretrained weights provided here before they are loaded into the network.

  2. But for the camera branch, the current codebase uses the bounding boxes provided here. There are no pretrained weights available (which I am guessing stems from the fact that the code doesn't use pretrained weights, but rather the bounding boxes directly).

So I have the option of producing these bounding boxes myself with modified weights, for which I would like to know which Cascade R-CNN implementation (including the backbone network) was used to produce the current set of bounding boxes. It would be great if you could let me know.

Fusion of multiple 3D detection candidates.

Hello @pangsu0613, can this approach be extended to the fusion of multiple 3D object detection candidates (3D and 3D) instead of 2D and 3D?
If yes, what changes would need to be made in the current implementation?

A few more queries:

  1. CLOCs works on the assumption that the bounding boxes (output detections) from LiDAR are more accurate than those from the camera, since the final output keeps the LiDAR bounding boxes. But there are many instances where LiDAR can fail, for example under high reflectance. How do we handle such cases?

  2. Have you come across any other methods where bounding box refinement also happens during fusion?

AssertionError with latest spconv version

As titled. This happens in spconv/spconv/utils/__init__.py.
The current version of this file in spconv has these defaults:

def points_to_voxel(...,
                    full_mean=False,
                    block_filtering=True,
                    ...):
    ...
    if full_mean:
        # this version forbids combining full_mean with block filtering
        assert block_filtering is False

This results in an assertion error.

If I directly change block_filtering=False, there are other issues.
Are there any suggestions for this version of spconv?
Could you please help me with this issue? Thanks!
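
One quick sanity check (an illustrative sketch, not code from the repo): confirm which spconv is importable, since CLOCs was developed against spconv 1.0 and later releases changed points_to_voxel.

import spconv

# spconv 1.0 predates the full_mean/block_filtering guard quoted above
version = getattr(spconv, '__version__', 'unknown')
print('spconv version:', version)
if version != '1.0':
    print('CLOCs targets spconv 1.0; consider pinning that version')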

RTX 3080 CUDA out of memory

When I run

python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/home/yao/clocs/model_dir/

Environment
Ubuntu 20.04
RTX 3080
Conda environment:

Name Version Build Channel

_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
blas 1.0 mkl
ca-certificates 2021.5.30 ha878542_0 conda-forge
certifi 2021.5.30 py37h89c1867_0 conda-forge
cudatoolkit 11.0.221 h6bb024c_0 anaconda
cycler 0.10.0 pypi_0 pypi
dataclasses 0.6 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
easydict 1.9 pypi_0 pypi
fire 0.4.0 pypi_0 pypi
freetype 2.10.4 h0708190_1 conda-forge
future 0.18.2 pypi_0 pypi
imageio 2.9.0 pypi_0 pypi
intel-openmp 2021.2.0 h06a4308_610
jpeg 9b h024ee3a_2
kiwisolver 1.3.1 pypi_0 pypi
kornia 0.5.4 pypi_0 pypi
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 hea4e1c9_2 conda-forge
libffi 3.3 h58526e2_2 conda-forge
libgcc-ng 9.3.0 h2828fa1_19 conda-forge
libgomp 9.3.0 h2828fa1_19 conda-forge
libpng 1.6.37 h21135ba_2 conda-forge
libstdcxx-ng 9.3.0 h6de172a_19 conda-forge
libtiff 4.2.0 h85742a9_0
libuv 1.41.0 h7f98852_0 conda-forge
libwebp-base 1.2.0 h7f98852_2 conda-forge
llvmlite 0.36.0 pypi_0 pypi
lz4-c 1.9.3 h9c3ff4c_0 conda-forge
matplotlib 3.4.2 pypi_0 pypi
mkl 2021.2.0 h06a4308_296
mkl-service 2.4.0 py37h5e8e339_0 conda-forge
mkl_fft 1.3.0 py37h42c9631_2
mkl_random 1.2.2 py37h219a48f_0 conda-forge
ncurses 6.2 h58526e2_4 conda-forge
networkx 2.5.1 pypi_0 pypi
ninja 1.10.2 h4bd325d_0 conda-forge
numba 0.53.1 pypi_0 pypi
numpy 1.21.0 pypi_0 pypi
numpy-base 1.20.2 py37hfae3a4d_0
olefile 0.46 pyh9f0ad1d_1 conda-forge
opencv-python 4.5.2.54 pypi_0 pypi
openssl 1.1.1k h7f98852_0 conda-forge
pcdet 0.3.0+65bd4cd dev_0
pillow 8.2.0 py37he98fc37_0
pip 21.1.2 pyhd8ed1ab_0 conda-forge
protobuf 3.17.3 pypi_0 pypi
pybind11 2.6.2 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
python 3.7.10 hffdb5ce_100_cpython conda-forge
python-dateutil 2.8.1 pypi_0 pypi
python_abi 3.7 1_cp37m conda-forge
pywavelets 1.1.1 pypi_0 pypi
pyyaml 5.4.1 pypi_0 pypi
readline 8.1 h46c0cb4_0 conda-forge
scikit-image 0.18.1 pypi_0 pypi
scipy 1.7.0 pypi_0 pypi
setuptools 49.6.0 py37h89c1867_3 conda-forge
shapely 1.7.1 pypi_0 pypi
six 1.16.0 pyh6c4a22f_0 conda-forge
spconv 1.0 pypi_0 pypi
sqlite 3.36.0 h9cd32fc_0 conda-forge
tensorboardx 2.2 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
tifffile 2021.6.14 pypi_0 pypi
tk 8.6.10 h21135ba_1 conda-forge
torch 1.7.0 pypi_0 pypi
torchaudio 0.9.0 pypi_0 pypi
torchvision 0.8.1+cu101 pypi_0 pypi
tqdm 4.61.1 pypi_0 pypi
typing_extensions 3.10.0.0 pyha770c72_0 conda-forge
wheel 0.36.2 pyhd3deb0d_0 conda-forge
xz 5.2.5 h516909a_1 conda-forge
zlib 1.2.11 h516909a_1010 conda-forge
zstd 1.4.9 ha95c52a_0 conda-forge

Error:
2d detection path: /home/yao/clocs/d2_detection_data
sparse_shape: [ 41 1600 1408]
num_class is : 1
load existing model
Restoring parameters from ../model_dir/voxelnet-30950.tckpt
Restoring parameters from /home/yao/clocs/model_dir/adam_optimizer-1.tckpt
{'Car': 5}
[-1]
load 14357 Car database infos
load 2207 Pedestrian database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 56 Person_sitting database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
After filter database:
load 10520 Car database infos
load 2104 Pedestrian database infos
load 594 Cyclist database infos
load 826 Van database infos
load 53 Person_sitting database infos
load 321 Truck database infos
load 199 Tram database infos
load 259 Misc database infos
remain number of infos: 3712
remain number of infos: 3769
WORKER 0 seed: 1624653928
WORKER 1 seed: 1624653929
WORKER 2 seed: 1624653930
/home/yao/clocs/second/core/geometry.py:146: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: No implementation of function Function(<built-in function getitem>) found for signature:

getitem(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))

There are 22 candidate implementations:

  • Of which 20 did not match due to:
Overload of function 'getitem': File: <numerous>: Line N/A.
    With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    No match.
  • Of which 2 did not match due to:
    Overload in function 'GetItemBuffer.generic': File: numba/core/typing/arraydecl.py: Line 162.
    With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    Rejected as the implementation raised a specific error:
    TypeError: unsupported array index type list(int64)<iv=None> in Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>)
    raised from /home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/numba/core/typing/arraydecl.py:69

During: typing of intrinsic-call at /home/yao/clocs/second/core/geometry.py (162)

File "core/geometry.py", line 162:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

vec1 = polygon - polygon[:, [num_points_of_polygon - 1] +
list(range(num_points_of_polygon - 1)), :]
^

@numba.jit
/home/yao/clocs/second/core/geometry.py:146: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'>

File "core/geometry.py", line 170:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

cross = 0.0
for i in range(num_points):
^

@numba.jit
[the two NumbaWarnings above repeat for the remaining data-loader workers]
/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/numba/core/object_mode_passes.py:152: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/numba/core/object_mode_passes.py:162: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit https://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):

# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^

state.func_ir.loc))
[the two warnings above likewise repeat for the other workers]
/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/numba/core/typed_passes.py:327: NumbaPerformanceWarning:
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.

To find out why, try turning on parallel diagnostics, see https://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.

File "utils/eval.py", line 127:
@numba.jit(nopython=True,parallel=True)
def build_stage2_training(boxes, query_boxes, criterion, scores_3d, scores_2d, dis_to_lidar_3d,overlaps,tensor_index):
^

state.func_ir.loc))
/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/numba/core/typed_passes.py:327: NumbaPerformanceWarning:
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.

To find out why, try turning on parallel diagnostics, see https://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.

File "utils/eval.py", line 231:
@numba.jit(nopython=True, parallel=True)
def d3_box_overlap_kernel(boxes, qboxes, rinc, criterion=-1):
^

state.func_ir.loc))
/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "./pytorch/train.py", line 918, in
fire.Fire()
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "./pytorch/train.py", line 371, in train
raise e
File "./pytorch/train.py", line 240, in train
all_3d_output_camera_dict, all_3d_output, top_predictions, fusion_input,tensor_index = net(example_torch,detection_2d_path)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yao/clocs/second/pytorch/models/voxelnet.py", line 310, in forward
preds_dict = self.rpn(spatial_features)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yao/clocs/second/pytorch/models/rpn.py", line 312, in forward
x = self.blocks[i](x)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yao/clocs/torchplus/nn/modules/common.py", line 89, in forward
input = module(input)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 178, in forward
self.eps,
File "/home/yao/Documents/anaconda3/envs/3d/lib/python3.7/site-packages/torch/nn/functional.py", line 2282, in batch_norm
input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 9.78 GiB total capacity; 1.54 GiB already allocated; 36.44 MiB free; 1.57 GiB reserved in total by PyTorch)

AP slightly lower than expected

Hi! I followed the instructions in README.md to run the training and evaluation code, but could not get the expected result.

The result of training:
Generate output labels...
validation_loss: 0.1410481701822625
generate label finished(6.10/s). start eval:

      Car AP@0.70, 0.70, 0.70:
      bbox AP:96.92, 93.65, 90.75
      bev  AP:96.29, 90.22, 87.14
      3d   AP:92.42, 81.08, 77.49
      aos  AP:96.77, 93.15, 90.01

      Car AP@0.70, 0.50, 0.50:
      bbox AP:96.92, 93.65, 90.75
      bev  AP:98.78, 96.04, 93.30
      3d   AP:96.75, 95.93, 93.17
      aos  AP:96.77, 93.15, 90.01
      
      Car coco AP@0.50:0.05:0.95:
      bbox AP:79.38, 73.83, 70.88
      bev  AP:74.76, 69.60, 66.95
      3d   AP:64.88, 58.86, 56.25
      aos  AP:79.24, 73.45, 70.32

The result of evaluation with the pretrained model you provided:
(pytorch1.2) pioneer2_6@amax:/data3/LZW/CLOCs/second$ python ./pytorch/train.py evaluate --config_path=./configs/car.fhd.config --model_dir=/data3/LZW/CLOCs_result/CLOCs_SecCas_pretrained --measure_time=True --batch_size=1
Predict_test: False
sparse_shape: [ 41 1600 1408]
num_class is : 1
load existing model
Restoring parameters from ../model_dir/voxelnet-30950.tckpt
load existing model for fusion layer
Restoring parameters from /data3/LZW/CLOCs_result/CLOCs_SecCas_pretrained/fusion_layer-11136.tckpt
remain number of infos: 3769
Generate output labels...
[100.0%][===================>][12.12it/s][05:09>00:00]
generate label finished(12.15/s). start eval:
validation_loss: 0.14429050794231943
avg example to torch time: 1.568 ms
avg prep time: 12.087 ms

      Car AP@0.70, 0.70, 0.70:
      bbox AP:96.90, 93.62, 90.73
      bev  AP:96.47, 90.28, 89.19
      3d   AP:92.64, 81.13, 77.59
      aos  AP:96.75, 93.14, 89.99

      Car AP@0.70, 0.50, 0.50:
      bbox AP:96.90, 93.62, 90.73
      bev  AP:96.95, 96.10, 93.36
      3d   AP:97.11, 95.91, 93.23
      aos  AP:96.75, 93.14, 89.99
      
      Car coco AP@0.50:0.05:0.95:
      bbox AP:78.65, 73.76, 71.05
      bev  AP:74.59, 69.66, 67.42
      3d   AP:65.07, 59.21, 56.49
      aos  AP:78.51, 73.40, 70.47

I have seen similar issues raised by others, but I think my problem is not the same as theirs. I used the files you provided on Google Drive. I only changed the code according to Common Errors & Solutions, plus a fix for a numpy error in the function do_coco_style_eval.

I am confused and hope you can help me solve this problem. Looking forward to your reply. Thank you very much!

Fusion of other 3D and 2D detectors

Hi, congratulations on your work! I have followed the instructions and am able to reproduce the results.
One thing that remains unclear is step 2 of "Fusion of other 2D and 3D detectors": in which file or path should the 3D detector outputs be saved?
Also, after making the changes suggested in steps 2 and 3, should I just run the same command for training?

CUDA error

I followed the instructions but still hit some errors; please help.
I got RuntimeError: cuda runtime error (77).
What are the correct versions of CUDA and cuDNN?

My environment:

$ conda list
# packages in environment at /home/ds1/anaconda3/envs/clocs:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
asn1crypto                0.22.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
blas                      1.0                         mkl    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
boost                     1.61.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
bzip2                     1.0.6                         3    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
ca-certificates           2021.1.19            h06a4308_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi                   2016.2.28                py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cffi                      1.10.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cmake                     3.6.3                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
conda                     4.5.13                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
conda-env                 2.6.0                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cryptography              1.8.1                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cudatoolkit               9.0                  h13b8566_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudnn                     7.6.5                 cuda9.0_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
curl                      7.54.1                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
expat                     2.1.0                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
fire                      0.4.0                    pypi_0    pypi
freetype                  2.5.5                         2    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
icu                       54.1                          0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
idna                      2.6                      py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
imageio                   2.9.0                    pypi_0    pypi
intel-openmp              2020.2                      254    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jbig                      2.1                           0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
jpeg                      9b                            0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
krb5                      1.13.2                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libffi                    3.2.1                         1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libgcc-ng                 9.1.0                hdf63c60_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng                    1.6.30                        1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libssh2                   1.8.0                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
libstdcxx-ng              9.1.0                hdf63c60_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff                   4.0.6                         3    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
llvmlite                  0.35.0                   pypi_0    pypi
mkl                       2020.2                      256    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service               2.3.0            py36he8ac12f_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft                   1.2.0            py36h23d657b_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random                1.1.1            py36h0573a6f_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ncurses                   5.9                          10    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
networkx                  2.5                      pypi_0    pypi
ninja                     1.7.2                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
numba                     0.52.0                   pypi_0    pypi
numpy                     1.19.2           py36h54aff64_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base                1.19.2           py36hfa32c7d_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
olefile                   0.44                     py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
openssl                   1.0.2l                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
packaging                 16.8                     py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pillow                    4.2.1                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pip                       9.0.1                    py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
protobuf                  3.14.0                   pypi_0    pypi
pybind11                  2.6.2                    pypi_0    pypi
pycosat                   0.6.3            py36h27cfd23_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pycparser                 2.18                     py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pyopenssl                 17.0.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pyparsing                 2.2.0                    py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
python                    3.6.2                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
pytorch                   1.1.0           cuda90py36h8b0c50b_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pywavelets                1.1.1                    pypi_0    pypi
readline                  6.2                           2    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
requests                  2.14.2                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
ruamel_yaml               0.11.14                  py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
scikit-image              0.17.2                   pypi_0    pypi
scipy                     1.5.4                    pypi_0    pypi
setuptools                36.4.0                   py36_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
shapely                   1.7.1                    pypi_0    pypi
six                       1.10.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
spconv                    1.0                      pypi_0    pypi
sqlite                    3.13.0                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorboardx              2.1                      pypi_0    pypi
termcolor                 1.1.0                    pypi_0    pypi
tifffile                  2020.9.3                 pypi_0    pypi
tk                        8.5.18                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
torchvision               0.3.0           cuda90py36h6edc907_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wheel                     0.29.0                   py36_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
xz                        5.2.3                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
yaml                      0.1.6                         0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
zlib                      1.2.11                        0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free

Here is the error in detail:

/home/ds1/anaconda3/envs/clocs/CLOCs/second/pytorch/train.py(107)train()
-> torch.manual_seed(3)
(Pdb) c
2d detection path: /home/ds1/anaconda3/envs/clocs/CLOCs/d2_detection_data/data
sparse_shape: [  41 1600 1408]
num_class is : 1
load existing model
{'Car': 5}
[-1]
load 14357 Car database infos
load 2207 Pedestrian database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 56 Person_sitting database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
After filter database:
load 10520 Car database infos
load 2104 Pedestrian database infos
load 594 Cyclist database infos
load 826 Van database infos
load 53 Person_sitting database infos
load 321 Truck database infos
load 199 Tram database infos
load 259 Misc database infos
remain number of infos: 3712
remain number of infos: 3769
WORKER 0 seed: 1612422291
WORKER 1 seed: 1612422292
WORKER 2 seed: 1612422293
/home/ds1/anaconda3/envs/clocs/CLOCs/second/core/geometry.py:146: NumbaWarning: 
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: No implementation of function Function(<built-in function getitem>) found for signature:
 
 >>> getitem(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))
 
There are 22 candidate implementations:
   - Of which 20 did not match due to:
   Overload of function 'getitem': File: <numerous>: Line N/A.
     With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    No match.
   - Of which 2 did not match due to:
   Overload in function 'GetItemBuffer.generic': File: numba/core/typing/arraydecl.py: Line 162.
     With argument(s): '(array(float32, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))':
    Rejected as the implementation raised a specific error:
      TypeError: unsupported array index type list(int64)<iv=None> in Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>)
  raised from /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/numba/core/typing/arraydecl.py:69

During: typing of intrinsic-call at /home/ds1/anaconda3/envs/clocs/CLOCs/second/core/geometry.py (162)

File "second/core/geometry.py", line 162:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
    <source elided>
        vec1 = polygon - polygon[:, [num_points_of_polygon - 1] +
                                 list(range(num_points_of_polygon - 1)), :]
                                 ^

  @numba.jit
[the same NumbaWarning is printed twice more for the other workers]
/home/ds1/anaconda3/envs/clocs/CLOCs/second/core/geometry.py:146: NumbaWarning: 
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'>

File "second/core/geometry.py", line 170:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
    <source elided>
    cross = 0.0
    for i in range(num_points):
    ^

  @numba.jit
/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/numba/core/object_mode_passes.py:152: NumbaWarning: Function "points_in_convex_polygon_jit" was compiled in object mode without forceobj=True, but has lifted loops.

File "second/core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
    <source elided>
    # first convert polygon to directed lines
    num_points_of_polygon = polygon.shape[1]
    ^

  state.func_ir.loc))
/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/numba/core/object_mode_passes.py:162: NumbaDeprecationWarning: 
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit https://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "second/core/geometry.py", line 157:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
    <source elided>
    # first convert polygon to directed lines
    num_points_of_polygon = polygon.shape[1]
    ^

  state.func_ir.loc))
THCudaCheck FAIL file=../torch/csrc/generic/serialization.cpp line=23 error=77 : an illegal memory access was encountered
Traceback (most recent call last):
  File "./second/pytorch/train.py", line 107, in train
    torch.manual_seed(3)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ds1/anaconda3/envs/clocs/CLOCs/second/pytorch/models/voxelnet.py", line 304, in forward
    voxel_features, coors, batch_size_dev)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ds1/anaconda3/envs/clocs/CLOCs/second/pytorch/models/middle.py", line 545, in forward
    ret = self.middle_conv(ret)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/modules.py", line 123, in forward
    input = module(input)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/conv.py", line 157, in forward
    outids.shape[0])
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/functional.py", line 83, in forward
    return ops.indice_conv(features, filters, indice_pairs, indice_pair_num, num_activate_out, False, True)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/ops.py", line 112, in indice_conv
    int(inverse), int(subm))
RuntimeError: CUDA error: an illegal memory access was encountered (copy_to_cpu at /tmp/pip-req-build-9xcrj8au/aten/src/ATen/native/cuda/Copy.cu:199)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fcf259c82da in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: (anonymous namespace)::copy_to_cpu(at::Tensor&, at::Tensor const&) + 0x39d (0x7fcf2c40939d in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #2: void (anonymous namespace)::_copy__cuda<int>(at::Tensor&, at::Tensor const&, bool) + 0x914 (0x7fcf2c4cbd64 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #3: at::native::_s_copy__cuda(at::Tensor&, at::Tensor const&, bool) + 0xdf (0x7fcf2c409aef in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #4: at::native::_s_copy_from_cuda(at::Tensor const&, at::Tensor const&, bool) + 0x42 (0x7fcf2c409c72 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #5: at::CUDAType::_s_copy_from(at::Tensor const&, at::Tensor const&, bool) const + 0x11b (0x7fcf2cd52a8b in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #6: at::native::_s_copy__cpu(at::Tensor&, at::Tensor const&, bool) + 0x7e (0x7fcf263f05ce in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #7: <unknown function> + 0x91b9df (0x7fcf267199df in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #8: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 0x673 (0x7fcf263f2343 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #9: torch::autograd::VariableType::copy_(at::Tensor&, at::Tensor const&, bool) const + 0x478 (0x7fcf29e299e8 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #10: at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) + 0xa6b (0x7fcf265a7ffb in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #11: at::TypeDefault::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const + 0x2b (0x7fcf268309fb in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #12: torch::autograd::VariableType::to(at::Tensor const&, c10::TensorOptions const&, bool, bool) const + 0x33f (0x7fcf29c1286f in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #13: at::Tensor spconv::indiceConv<float>(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long) + 0x187 (0x7fcec5bd3a37 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/libspconv.so)
frame #14: void torch::jit::detail::callOperatorWithTuple<at::Tensor (* const)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long), at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>(c10::FunctionSchema const&, at::Tensor (* const&&)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long), std::vector<c10::IValue, std::allocator<c10::IValue> >&, std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long>&, torch::Indices<0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>) + 0x2bc (0x7fcec5bd7e0c in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/libspconv.so)
frame #15: std::_Function_handler<int (std::vector<c10::IValue, std::allocator<c10::IValue> >&), torch::jit::createOperator<at::Tensor (*)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long)>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, at::Tensor (*&&)(at::Tensor, at::Tensor, at::Tensor, at::Tensor, long, long, long))::{lambda(std::vector<c10::IValue, std::allocator<c10::IValue> >&)#1}>::_M_invoke(std::_Any_data const&, std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x61 (0x7fcec5bd8081 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/spconv/libspconv.so)
frame #16: <unknown function> + 0x3991c0 (0x7fcf50c761c0 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #17: <unknown function> + 0x36f445 (0x7fcf50c4c445 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #18: <unknown function> + 0x11f43d (0x7fcf509fc43d in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #29: THPFunction_apply(_object*, _object*) + 0x747 (0x7fcf50c23347 in /home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/lib/libtorch_python.so)


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./second/pytorch/train.py", line 920, in <module>
    fire.Fire()
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/fire/core.py", line 471, in _Fire
    target=component.__name__)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "./second/pytorch/train.py", line 107, in train
    torch.manual_seed(3)
  File "/home/ds1/anaconda3/envs/clocs/CLOCs/torchplus/train/checkpoint.py", line 173, in save_models
    save(model_dir, model, name, global_step, max_to_keep, keep_latest)
  File "/home/ds1/anaconda3/envs/clocs/CLOCs/torchplus/train/checkpoint.py", line 90, in save
    torch.save(model.state_dict(), ckpt_path)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/serialization.py", line 224, in save
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/serialization.py", line 149, in _with_file_like
    return body(f)
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/serialization.py", line 224, in <lambda>
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "/home/ds1/anaconda3/envs/clocs/lib/python3.6/site-packages/torch/serialization.py", line 303, in _save
    serialized_storages[key]._write_file(f, _should_read_directly(f))
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at ../torch/csrc/generic/serialization.cpp:23

Can't open file './pytorch/train.py'

Hello,

I have set up my environment according to the README, but I cannot run the sample model. Did I do something wrong?

Any help would be appreciated.

Thanks.

Should velodyne_reduced folder be empty?

Hi, I've followed your instructions and left the velodyne_reduced folder empty, but I got this error when running train.py. Any hints would be appreciated, thank you.

Original Traceback (most recent call last):
  File "/opt/conda/envs/spconv-1.0/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/envs/spconv-1.0/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/envs/spconv-1.0/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/workspace/aixiding/CLOCs/second/pytorch/builder/input_reader_builder.py", line 18, in __getitem__
    return self._dataset[idx]
  File "/workspace/aixiding/CLOCs/second/data/dataset.py", line 70, in __getitem__
    prep_func=self._prep_func)
  File "/workspace/aixiding/CLOCs/second/data/preprocess.py", line 313, in _read_and_prep_v9
    count=-1).reshape([-1, num_point_features])
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/aixiding/CLOCs/KITTI_DATASET_ROOT/training/velodyne_reduced/002445.bin'

Best of luck
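For what it's worth, the velodyne_reduced folder is normally populated during SECOND-style data preparation (create_data.py) rather than left empty: each .bin keeps only the LiDAR points that project into the camera image. A rough sketch of that reduction step, assuming hypothetical 4x4 rect and Trv2c matrices and a 3x4 P2 from the KITTI calib files:

import numpy as np

def reduce_point_cloud(points, rect, Trv2c, P2, image_shape):
    # points: (N, 4) array of x, y, z, intensity in the LiDAR frame
    xyz1 = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])
    cam = xyz1 @ (rect @ Trv2c).T        # LiDAR frame -> rectified camera frame
    img = cam @ P2.T                     # camera frame -> image plane (homogeneous)
    uv = img[:, :2] / img[:, 2:3]
    h, w = image_shape
    keep = (img[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
           & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points[keep]   # save with .tofile(...) as velodyne_reduced/xxxxxx.bin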

Preparation for CLOCs

Hi, I am planning to do a fusion with YOLOv4 and SECOND/PointPillars. Will you be providing a tutorial/guide for extracting the bounding boxes before NMS?
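In the meantime, the general recipe for any 2D detector is to grab the per-class candidates after score computation but before the NMS call, apply a sigmoid, and dump them as KITTI-style label lines. A minimal sketch (the helper name, threshold, and placeholder fields are illustrative, not the repo's API):

import numpy as np

def export_pre_nms(boxes, logits, out_txt, score_thresh=0.05):
    # boxes: (M, 4) candidate boxes [x1, y1, x2, y2]; logits: (M,) raw class scores
    scores = 1.0 / (1.0 + np.exp(-logits))   # sigmoid scores, as CLOCs expects
    keep = scores > score_thresh             # light thresholding only, no NMS
    with open(out_txt, 'w') as f:
        for (x1, y1, x2, y2), s in zip(boxes[keep], scores[keep]):
            # unused KITTI fields are filled with placeholder values
            f.write(f"Car -1 -1 -10 {x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
                    f"-1 -1 -1 -1000 -1000 -1000 -10 {s:.4f}\n")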

Assertion Error

I am facing this assertion error; I am not sure if it is a problem with my packages.

(two screenshots attached)

Running Own 2D detector

Hello, I am running my own 2D detector, but I do not understand the KITTI fusion format in which the sigmoid data is to be saved.
Also, are the sigmoid data generated only on the test data? The mentioned folder has 7840 files.
Sorry if I missed anything; I am new to this.

Second v1.16

I have already trained SECOND v1.6. So by modifying the config file, can I do CLOCs training on SECOND v1.16?

I get an assertion error even though I installed spconv 1.0.

How to quantize

How would I go about quantizing the pretrained model for execution on an accelerator? I'm guessing I would quantize the 2D model, then the 3D model, and then finally the fusion layer.
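The repo does not ship quantization support, but as a starting point, PyTorch's eager-mode post-training static quantization can be applied to each sub-network separately, in the per-network order described above. A rough sketch for one sub-network (model and calibration_loader are hypothetical handles; conv-heavy backbones usually need this static flow or quantization-aware training, while dynamic quantization only covers Linear/RNN layers):

import torch

model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(model)      # insert observers
for batch in calibration_loader:                  # run a few calibration batches
    prepared(batch)
quantized = torch.quantization.convert(prepared)  # swap in int8 kernels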

Training just the SECOND detector

Hi,
I have made changes to the SECOND part of CLOCs and I would like to train the network with those changes. Is it possible for me to do that using

python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/dir/to/your_model_dir
If it is, can you let me know what changes I would have to make?

Thank you

More questions for nuScenes

Hello @JessieW0806.

  1. We have tested CLOCs on nuScenes, but we cannot share it with you for now.
  2. nuScenes has 10 classes. CLOCs is a very small network (similar in size to a detection head in other detectors), so we simply use 10 separate CLOCs networks for the 10 different classes. If there are too many detection candidates for each class, you can do some simple filtering (thresholding) to reduce the amount; based on our experience, this works well.
  3. I think it is feasible. Currently, CLOCs takes 3D bounding boxes (from a 3D detector, which could be LiDAR or other sensor based) and 2D bounding boxes (from a 2D detector, which could be camera or other sensor based). I guess millimeter-wave radar can only provide top-down view (BEV) measurements, such as object center points or BEV bounding boxes. You would need to make some modifications to project these radar detections into the image and design a data association metric (for CLOCs, we use IoU) to associate these projected radar detections with the other 2D detections (see the sketch after this quote). I think this depends on what detection format you have.

Originally posted by @pangsu0613 in #40 (comment)
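A minimal sketch of the association idea from point 3, assuming a hypothetical 4x4 radar-to-camera transform T and a 3x4 camera projection P (the IoU metric matches what CLOCs uses for LiDAR/camera association):

import numpy as np

def project_to_image(pts_3d, T, P):
    # project sensor-frame 3D points (N, 3) into pixel coordinates (N, 2)
    pts_h = np.hstack([pts_3d, np.ones((pts_3d.shape[0], 1))])
    cam = pts_h @ T.T                  # radar frame -> camera frame
    img = cam @ P.T                    # camera frame -> image plane (homogeneous)
    return img[:, :2] / img[:, 2:3]

def iou_2d(a, b):
    # IoU between two [x1, y1, x2, y2] boxes; the association score to threshold
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union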

what is SW/HW environment on your side?

Hi,

CLOCs_PVCas | code | 95.96 % | 96.76 % | 91.08 % | 0.1 s | 1 core @ 2.5 GHz (Python)

What is the software/hardware environment in which CLOCs achieved the 0.1 s runtime reported on the KITTI website?

Help with inference for own dataset

Hello, congrats on your nice work. I really enjoyed reading your paper and I am excited to try your code.
As I am currently working on my master's thesis in 3D object detection, I would like to try CLOCs in combination with the state-of-the-art 3D LiDAR-based detector CenterPoint. Basically, in my project I have access to camera images and LiDAR point clouds from sensors that are mounted on a bridge to detect vehicles on the highway, so the sensors are static. I would appreciate it if you could help me with some questions:

  1. Do you think CLOCs is suitable for a real-time application? With CenterPoint alone, I could theoretically achieve 30 FPS.

  2. Can you please give me some guideline steps on how to proceed with inference on my custom dataset?
    Besides first training CenterPoint on my own LiDAR frames, should I export the detection results in the format that SECOND supports, as stated in the README? I guess I should also first train a 2D detector like Cascade R-CNN on my collected image dataset. Afterwards, I could train the CLOCs fusion layer. Once I have this trained model available, how can I use it for inference on unseen data? Do you perhaps have a sample script for this?
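To make the question concrete, here is my rough understanding of the per-frame flow (every name here is a hypothetical placeholder, not an existing script):

import torch

def clocs_inference(point_cloud, image, model_3d, model_2d, fusion_layer,
                    build_fusion_tensor, nms):
    # hypothetical one-frame fused inference; all callables are placeholders
    dets_3d = model_3d(point_cloud)     # 3D candidates with raw scores, before final NMS
    dets_2d = model_2d(image)           # 2D boxes with sigmoid scores
    sparse_in = build_fusion_tensor(dets_3d, dets_2d)  # IoU / score / distance features
    fused_logits = fusion_layer(sparse_in)             # re-scored 3D candidates
    return nms(dets_3d, torch.sigmoid(fused_logits))   # final boxes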

Some code questions

Hello @pangsu0613, could you please explain in words the idea behind this algorithm for finding the overlaps between the projected 3D boxes (called just 'boxes' in the code) and the 2D boxes (called 'query_boxes' in the code)?
a)

# pang added to build the tensor for the second stage of training
import numba
import numpy as np

@numba.jit(nopython=True, parallel=True)
def build_stage2_training(boxes, query_boxes, criterion, scores_3d, scores_2d, dis_to_lidar_3d, overlaps, tensor_index):
    # boxes: projected 3D detections in the image plane, (N, 4) as [x1, y1, x2, y2]
    # query_boxes: 2D detections, (K, 4) as [x1, y1, x2, y2]
    N = boxes.shape[0]  # e.g. 70400
    K = query_boxes.shape[0]  # e.g. 30

    max_num = 900000
    ind = 0
    ind_max = ind
    for k in range(K):
        qbox_area = ((query_boxes[k, 2] - query_boxes[k, 0]) *
                     (query_boxes[k, 3] - query_boxes[k, 1]))
        for n in range(N):
            # width of the intersection of box n and query box k
            iw = (min(boxes[n, 2], query_boxes[k, 2]) -
                  max(boxes[n, 0], query_boxes[k, 0]))
            if iw > 0:
                # height of the intersection
                ih = (min(boxes[n, 3], query_boxes[k, 3]) -
                      max(boxes[n, 1], query_boxes[k, 1]))
                if ih > 0:
                    # choose the denominator: union (-1), 3D-box area (0),
                    # 2D-box area (1), or plain intersection (otherwise)
                    if criterion == -1:
                        ua = (
                            (boxes[n, 2] - boxes[n, 0]) *
                            (boxes[n, 3] - boxes[n, 1]) + qbox_area - iw * ih)
                    elif criterion == 0:
                        ua = ((boxes[n, 2] - boxes[n, 0]) *
                              (boxes[n, 3] - boxes[n, 1]))
                    elif criterion == 1:
                        ua = qbox_area
                    else:
                        ua = 1.0

                    # record one non-empty element of the sparse fusion tensor:
                    # [IoU, 3D score, 2D score, normalized distance to LiDAR]
                    overlaps[ind, 0] = iw * ih / ua
                    overlaps[ind, 1] = scores_3d[n, 0]
                    overlaps[ind, 2] = scores_2d[k, 0]
                    overlaps[ind, 3] = dis_to_lidar_3d[n, 0]
                    tensor_index[ind, 0] = k
                    tensor_index[ind, 1] = n
                    ind = ind + 1

                elif k == K - 1:
                    # last 2D box and still no overlap: keep the 3D candidate
                    # anyway, with IoU and 2D score set to the fill value -10
                    overlaps[ind, 0] = -10
                    overlaps[ind, 1] = scores_3d[n, 0]
                    overlaps[ind, 2] = -10
                    overlaps[ind, 3] = dis_to_lidar_3d[n, 0]
                    tensor_index[ind, 0] = k
                    tensor_index[ind, 1] = n
                    ind = ind + 1
            elif k == K - 1:
                overlaps[ind, 0] = -10
                overlaps[ind, 1] = scores_3d[n, 0]
                overlaps[ind, 2] = -10
                overlaps[ind, 3] = dis_to_lidar_3d[n, 0]
                tensor_index[ind, 0] = k
                tensor_index[ind, 1] = n
                ind = ind + 1
    if ind > ind_max:
        ind_max = ind
    return overlaps, tensor_index, ind
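For context, here is a toy usage sketch I put together (hypothetical inputs; the output buffers seem to be pre-allocated with max_num = 900000 rows, and only the first ind rows are meaningful):

import numpy as np

boxes = np.array([[10., 10., 50., 50.],          # projected 3D detections
                  [40., 40., 80., 80.],
                  [200., 200., 240., 240.]], dtype=np.float32)
query_boxes = np.array([[12., 12., 48., 48.],    # 2D detections
                        [205., 205., 235., 235.]], dtype=np.float32)
scores_3d = np.array([[0.9], [0.4], [0.7]], dtype=np.float32)
scores_2d = np.array([[0.8], [0.6]], dtype=np.float32)
dis_to_lidar_3d = np.array([[0.2], [0.3], [0.9]], dtype=np.float32)

overlaps = np.zeros((900000, 4), dtype=np.float32)
tensor_index = np.zeros((900000, 2), dtype=np.int64)

overlaps, tensor_index, ind = build_stage2_training(
    boxes, query_boxes, -1, scores_3d, scores_2d,
    dis_to_lidar_3d, overlaps, tensor_index)
print(overlaps[:ind])       # rows of [IoU, 3D score, 2D score, distance feature]
print(tensor_index[:ind])   # rows of [2D box index k, 3D box index n]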

b) Here, when you calculate the 'distance to the LiDAR' feature, why do you divide by 82.0?

dis_to_lidar = torch.norm(box_preds[:,:2],p=2,dim=1,keepdim=True)/82.0

c) Also, I don't understand why the output scores of the fusion network ('cls_pred') are raw logits even though the input 3D and 2D scores were sigmoid probabilities. Can you please tell me the reason?
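Regarding (c), my own reading (an assumption, not a confirmed statement of the authors' intent) is the usual pattern of training on raw logits and applying the sigmoid only at evaluation, since losses such as BCEWithLogitsLoss fold the sigmoid into the loss for numerical stability:

import torch

cls_pred = torch.tensor([2.0, -1.5, 0.3])  # hypothetical raw fusion-layer outputs (logits)
probs = torch.sigmoid(cls_pred)            # probabilities used at test time
target = torch.tensor([1.0, 0.0, 1.0])
loss = torch.nn.BCEWithLogitsLoss()(cls_pred, target)  # expects raw logits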

Bug in fusion process

Hi Pang,

Thanks for your contribution~

I notice that there is a bug in your code: voxelnet.py L498-L501.

            box_2d_detector = np.zeros((200, 4))
            box_2d_detector[0:top_predictions.shape[0],:]=top_predictions[:,:4]
            box_2d_detector = top_predictions[:,:4]
            box_2d_scores = top_predictions[:,4].reshape(-1,1)

With this implementation, the shape of "box_2d_detector" actually varies at each iteration instead of remaining 200, because the third line overwrites the zero-padded array created by the first two.
Thus, assigning values to "out_1" according to the coordinates in fusion.py L48-L50 is logically wrong.
But thanks to the max-pooling operation along the box_2d axis, I think this bug has no effect on the results.
I think we could replace the aforementioned code with:

            box_2d_detection = np.zeros((200, 5)) 
            box_2d_detection[0:top_predictions.shape[0],:]=top_predictions[:,:5]
            box_2d_detector = box_2d_detection[:,:4]
            box_2d_scores = box_2d_detection[:,4].reshape(-1,1)

Or pass the shape to fusion_layer.forward()...
