mrmoore98 / vectormapnet_code
This is the official code base of VectorMapNet (ICML 2023)
Home Page: https://tsinghua-mars-lab.github.io/vectormapnet/
License: GNU General Public License v3.0
VectorMapNet_code/configs/vectormapnet.py
Line 82 in 4bcc8b4
When reproducing vectormapnet.py, I found that the results fluctuate noticeably across multiple training runs with the same configuration: up to 0.8 mAP (evaluated by Chamfer distance) discrepancy between two runs under the very same configuration. Is this randomness normal in other people's reproductions? Would fixing random seeds in train.py help?
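For what it's worth, a minimal seeding helper (a sketch, not from the repo) could look like the following; in train.py you would additionally seed torch and cuDNN:

```python
import os
import random

import numpy as np


def set_seed(seed: int = 42) -> None:
    """Fix the Python and NumPy RNGs (a sketch; a real train.py would also
    call torch.manual_seed(seed), torch.cuda.manual_seed_all(seed), and set
    torch.backends.cudnn.deterministic = True)."""
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
```

Note that even with all seeds fixed, some CUDA kernels are non-deterministic, so a small residual spread between runs can remain.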
Hey, I have been following your work, which is excellent. When will the code be open-sourced? Looking forward to your reply.
Hi,
{0.5, 1.0, 1.5} are the predefined Chamfer distance thresholds in the paper. Are they physical distances on the BEV plane, or do they need to be converted to pixel distances for evaluation?
Also, why are the CD thresholds set to [2, 4, 6] in the HDMapNet code?
Thanks!
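For reference, a symmetric Chamfer distance between two point sets, with all coordinates in a shared frame, can be sketched as follows; whether the {0.5, 1.0, 1.5} thresholds mean meters or pixels then simply depends on the units the points are expressed in:

```python
import numpy as np


def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 2) and (M, 2) point sets."""
    # all pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # average nearest-neighbour distance in both directions
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```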
Hi! Thanks for making your code open! I tried using this code to test its performance on nuScenes v1.0-mini, but I got a bad result of only about 1% mAP after 130 epochs. My steps were as follows:
Data processing. I downloaded nuScenes v1.0-mini (3.88 GB) and the map expansion (0.38 GB), then ran python tools/data_converter/nuscenes_converter.py --data-root your/dataset/nuScenes/ --version v1.0-mini, which successfully produced two .pkl files. My datasets directory is as follows:
Training. I set samples_per_gpu to 2 and workers_per_gpu to 8, with no other changes to the configs, then ran python tools/train.py configs/vectormapnet.py. There were no errors during training, but the mAP is low.
Here is the log file: https://naniko.obs.cn-central-221.ovaijisuan.com/VectorMap/result.zip
Did I do something wrong?
In the paper, you use bbox, SME, and extreme points as three types of keypoints and predict them in the map element detector.
In the code, it seems the map element detector only uses bbox. Is my observation wrong, or are the last two types of keypoints unnecessary?
Looking forward to your reply.
Thanks for your good work.
Hello, I attempted to replicate your work, but the accuracy I obtained was significantly lower than the model accuracy that you provided. My mAP was only around 23. I am wondering if the config file that I used differs from the one that you actually used, as I noticed that the code defaults to using the gt input of bbox for the gen_net, rather than the output of det_net as reported in the paper.
Additionally, while debugging the program, I encountered instances where the batch size of some of the input queries was 0, which caused the program to crash. I have currently wrapped this in a try statement to filter out the issue. Have you encountered this problem before?
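For anyone hitting the same crash: rather than a blanket try/except, an explicit guard on the empty case might be cleaner. A sketch, where `gen_net` and `queries` stand in for the repo's actual generator network and its per-sample queries:

```python
def run_generator(gen_net, queries):
    """Skip the generator forward pass when detection produced no queries.

    `gen_net` and `queries` are hypothetical stand-ins for the repo's
    generator module and its query tensor/list for one sample.
    """
    if len(queries) == 0:
        return []  # nothing to decode for this sample
    return gen_net(queries)
```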
Hi, I managed to run your evaluation script and got the same result.
As a next step I would like to visualize the predicted map as BEV figures like those shown in the paper.
Can you tell me which scripts you use for this task?
Thank you a lot
Hi, thanks for your great work! I am wondering whether you could provide any code to verify the results on the downstream prediction task that you presented in your paper. Thanks!
Hi, the state_dict checkpoint doesn't match the current model.
I'm wondering how to obtain a matching one.
This was also pointed out as a sub-issue in #14.
$ python tools/test.py configs/vectormapnet.py /home/me/vectormapnet.pth --eval name
<frozen importlib._bootstrap>:219: RuntimeWarning: scipy._lib.messagestream.MessageStream size changed, may indicate binary incompatibility. Expected 56 from C header, got 64 from PyObject
plugin
work_dir: ./work_dirs/vectormapnet
collecting samples...
collected 6019 samples in 0.30s
2023-12-13 01:44:03,157 - mmcv - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2023-12-13 01:44:03,158 - mmcv - INFO - Use load_from_openmmlab loader
Downloading: "https://download.openmmlab.com/pretrain/third_party/resnet50_msra-5891d200.pth" to /root/.cache/torch/hub/checkpoints/resnet50_msra-5891d200.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 89.9M/89.9M [00:04<00:00, 20.0MB/s]
2023-12-13 01:44:13,163 - mmcv - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.bias
missing keys in source state_dict: layer3.0.conv2.conv_offset.weight, layer3.0.conv2.conv_offset.bias, layer3.1.conv2.conv_offset.weight, layer3.1.conv2.conv_offset.bias, layer3.2.conv2.conv_offset.weight, layer3.2.conv2.conv_offset.bias, layer3.3.conv2.conv_offset.weight, layer3.3.conv2.conv_offset.bias, layer3.4.conv2.conv_offset.weight, layer3.4.conv2.conv_offset.bias, layer3.5.conv2.conv_offset.weight, layer3.5.conv2.conv_offset.bias, layer4.0.conv2.conv_offset.weight, layer4.0.conv2.conv_offset.bias, layer4.1.conv2.conv_offset.weight, layer4.1.conv2.conv_offset.bias, layer4.2.conv2.conv_offset.weight, layer4.2.conv2.conv_offset.bias
Use load_from_local loader
The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1x1.weight
I cropped this snippet from dghead.py, which is used to perform prediction. I checked and saw that batch seems to contain the ground-truth detections (class labels and bounding boxes in batch['det']). Since VectorMapNet is supposed to be an end-to-end map learning model, I don't understand why ground truth is required to perform inference here. Did I misunderstand something?
I want to use your model to make predictions on my own custom camera data. Can you suggest how to do it?
@torch.no_grad()
def inference(self, batch: dict, context: dict, gt_condition=False, **kwargs):
    '''
    num_samples_batch: number of samples per batch (batch size)
    '''
    outs = {}

    bbox_dict = self.det_net(batch['det'], context=context)
    bbox_dict = self.det_net.post_process(bbox_dict)
I have trained the model on a single GPU with your latest configuration, but the performance is severely lower than that reported in your paper. Are there any additional settings needed for training?
Thank you for the public release of your code! I was evaluating the VectorMapNet model with the provided checkpoint and faced some problems, as in the screenshot above.
Any idea how to solve these?
Good morning, thank you for sharing your work. I would like to know how you implement the vectorization of the lane points, or which methods you used to derive yours, and whether this method could work with lanes on a slope, for example on a hill or in the mountains.
Thank you
Hi! Could you provide the R50 pretrained model used in the config? I used an R50 with ImageNet pretraining, but I got a bad result with mAP 0.34.
Hi, dear authors, I tried to replace the backbone from the IPM encoder to BEVDepth's LSS style, but the gen_loss does not seem to converge after 2 epochs. Do you think it is OK to replace parts of the network?
I want to know how to prepare my own training and test data so that I get a data structure exactly matching your data tree that can be used for training. Can you help me?
Hello, dear author, have you verified whether changing the downsample rate from 4 to 16 causes a drop in performance? For example with:
in_channels=[256, 512, 1024, 2048],
upsample_strides=[0.25, 0.5, 1, 2],
out_channels=[128, 128, 128, 128],
Hello, can you show the loss curve?
The BEVFormer code was open-sourced around May last year.
Why did the authors first use IPM to generate BEV features and then use a DETR-like head to detect map elements, instead of using BEVFormer to detect map elements directly? Was it because the results were not good, or was it simply not tried?
Were the Map Element Detector and the Polyline Generator trained simultaneously, or should the Map Element Detector be trained first? Looking forward to your kind reply.
In
def poly_geoms_to_vectors(self, polygon_geoms: list):
    for geom in polygon_geoms:
        for poly in geom:
            exteriors.append(poly.exterior)
the error
'MultiPolygon' object has no attribute 'exterior'
is raised. I checked the geoms:
[<MULTIPOLYGON (((5.806 -12.166, 6.497 -12.139, 16.382 -11.565, 19.506 -11.43...>]
[<MULTIPOLYGON (((15.928 -12.742, 16.621 -15, 12.609 -15, 8.42 -15, 8.306 -15...>]
[<MULTIPOLYGON (((-30 -6.392, -30 -4.398, -29.721 -4.36, -26.824 -3.96, -24.6...>]
[<MULTIPOLYGON (((-27.27 -8.146, -26.997 -8.141, -26.404 -8.041, -25.868 -7.7...>]
Does this problem arise because I am using the mini nuScenes data, or not?
Hoping for some reply.
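For what it's worth, in recent Shapely versions a MultiPolygon is no longer iterated directly and has no `.exterior`; its member polygons live under `.geoms`. A duck-typed sketch of the fix (the helper name is mine, not from the repo):

```python
def collect_exteriors(geom):
    """Return the exterior rings of a Polygon-like or MultiPolygon-like
    geometry. Handles both by checking for a `.geoms` attribute, which
    Shapely's MultiPolygon exposes."""
    polys = list(geom.geoms) if hasattr(geom, 'geoms') else [geom]
    return [p.exterior for p in polys]
```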
Hi, thanks for this great work.
Are a config and model for the Argoverse 2 dataset available? Do we need to train a new model on Argoverse 2, or can we use the given model?
Best regards
mmdet3d 0.17.3 requires numpy<1.20.0, but av2 0.2.1 requires numpy>=1.21.5. How can I handle this conflict?
Have you trained on single GPU?
Can you provide an instruction document explaining the general process of converting a raster map to a vector map? There is a lot of code involved, and it would be easier to understand with documentation. It would also make it more convenient to train on other datasets.
'MultiPolygon' object is not iterable
Hello VectorMapNet team,
I'm trying to train this model with camera + lidar (fusion) and facing some issues. I modified the config file to:
input_modality = dict(
    use_lidar=True,
    use_camera=True,
    use_radar=False,
    use_map=False,
    use_external=False)
and added use_lidar=True inside model/backbone in the config. After these changes, I got this error:
File "vector_map_net/plugin/models/mapers/base_mapper.py", line 91, in forward
return self.forward_train(*args, **kwargs)
File "vector_map_net/plugin/models/mapers/vectormapnet.py", line 109, in forward_train
_bev_feats = self.backbone(img, img_metas=img_metas, points=points)
File "/home/ubuntu/venv_lane_line_gt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "vector_map_net/plugin/models/backbones/ipm_backbone.py", line 230, in forward
lidar_feat = self.get_lidar_feature(points)
File "vector_map_net/plugin/models/backbones/ipm_backbone.py", line 286, in get_lidar_feature
ptensor, pmask = points
TypeError: cannot unpack non-iterable NoneType object
It looks like points is None here. Can you help fix this issue?
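For context: flipping use_lidar in input_modality only declares the modality; the dataloader still hands the backbone points=None unless the data pipeline also contains a point-loading transform. A sketch of what that might look like (the transform name and parameters are illustrative, taken from typical mmdet3d configs; check this repo's pipeline definitions for the actual ones):

```python
train_pipeline = [
    # without a point-loading step here, `points` arrives as None
    dict(type='LoadPointsFromFile',
         coord_type='LIDAR', load_dim=5, use_dim=5),
    # ... the existing image-loading and formatting transforms ...
]
```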
work_dir: ./work_dirs/vectormapnet
collecting samples...
collected 81 samples in 0.00s
2023-07-05 14:38:13,830 - mmcv - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2023-07-05 14:38:13,830 - mmcv - INFO - Use load_from_openmmlab loader
2023-07-05 14:38:13,878 - mmcv - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.bias
missing keys in source state_dict: layer3.0.conv2.conv_offset.weight, layer3.0.conv2.conv_offset.bias, layer3.1.conv2.conv_offset.weight, layer3.1.conv2.conv_offset.bias, layer3.2.conv2.conv_offset.weight, layer3.2.conv2.conv_offset.bias, layer3.3.conv2.conv_offset.weight, layer3.3.conv2.conv_offset.bias, layer3.4.conv2.conv_offset.weight, layer3.4.conv2.conv_offset.bias, layer3.5.conv2.conv_offset.weight, layer3.5.conv2.conv_offset.bias, layer4.0.conv2.conv_offset.weight, layer4.0.conv2.conv_offset.bias, layer4.1.conv2.conv_offset.weight, layer4.1.conv2.conv_offset.bias, layer4.2.conv2.conv_offset.weight, layer4.2.conv2.conv_offset.bias
Use load_from_local loader
The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1x1.weight
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 81/81, 7.6 task/s, elapsed: 11s, ETA: 0sstart evaluation!
len of the results 81
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 81/81, 11455.6 task/s, elapsed: 0s, ETA: 0s
Done!
----------threshold:0.5----------
results path: ./work_dirs/vectormapnet/results_nuscence.pkl
metric: chamfer
threshold: -0.5
update: True
fix_interval: False
class_num: ['ped_crossing', 'divider', 'contours']
Formatting ...
Data formatting done in 1.658856s!!
cls:ped_crossing done in 0.008414s!!
cls:divider done in 0.015380s!!
cls:contours done in 0.020888s!!
+--------------+-----+------+--------+-------+
| class | gts | dets | recall | ap |
+--------------+-----+------+--------+-------+
| ped_crossing | 76 | 656 | 0.605 | 0.435 |
| divider | 460 | 1149 | 0.617 | 0.425 |
| contours | 282 | 1030 | 0.245 | 0.093 |
+--------------+-----+------+--------+-------+
| mAP | | | | 0.318 |
+--------------+-----+------+--------+-------+
----------threshold:1----------
results path: ./work_dirs/vectormapnet/results_nuscence.pkl
metric: chamfer
threshold: -1
update: False
fix_interval: False
class_num: ['ped_crossing', 'divider', 'contours']
Formatting ...
Data formatting done in 1.180227s!!
cls:ped_crossing done in 0.011127s!!
cls:divider done in 0.016809s!!
cls:contours done in 0.020043s!!
+--------------+-----+------+--------+-------+
| class | gts | dets | recall | ap |
+--------------+-----+------+--------+-------+
| ped_crossing | 76 | 656 | 0.934 | 0.887 |
| divider | 460 | 1149 | 0.876 | 0.806 |
| contours | 282 | 1030 | 0.628 | 0.459 |
+--------------+-----+------+--------+-------+
| mAP | | | | 0.717 |
+--------------+-----+------+--------+-------+
----------threshold:1.5----------
results path: ./work_dirs/vectormapnet/results_nuscence.pkl
metric: chamfer
threshold: -1.5
update: False
fix_interval: False
class_num: ['ped_crossing', 'divider', 'contours']
Formatting ...
Data formatting done in 1.196037s!!
cls:ped_crossing done in 0.013920s!!
cls:divider done in 0.015027s!!
cls:contours done in 0.022493s!!
+--------------+-----+------+--------+-------+
| class | gts | dets | recall | ap |
+--------------+-----+------+--------+-------+
| ped_crossing | 76 | 656 | 0.974 | 0.956 |
| divider | 460 | 1149 | 0.946 | 0.892 |
| contours | 282 | 1030 | 0.794 | 0.669 |
+--------------+-----+------+--------+-------+
| mAP | | | | 0.839 |
+--------------+-----+------+--------+-------+
ped_crossing: 0.7592802941799164
divider: 0.7075984378655752
contours: 0.40709205220143
map: 0.6246569280823072
VectormapNet Evaluation Results:
{'mAP': 0.6246569280823072}
{'mAP': 0.6246569280823072}
Hi author, thanks for publishing the code. I noticed that there is a visualization folder, but are there any specific instructions on how to run it? data appears to have only three keys, ['img_metas', 'img', 'polys'], and the rendering does not work.
I really appreciate your help and look forward to your reply!
Hi developers!
I just wanted to try the code with the v1.0-mini dataset, but after successfully running nuscenes_converter.py, I ran into problems while running python tools/train.py configs/vectormapnet.py. The log is below:
Traceback (most recent call last):
File "D:\Miniconda3\envs\vectormapnet\lib\site-packages\mmcv\utils\registry.py", line 69, in build_from_cfg
return obj_cls(**args)
File "E:\ZJU_Research\EXPLORE\VectorMapNet\plugin\datasets\nusc_dataset.py", line 35, in __init__
super().__init__(
File "E:\ZJU_Research\EXPLORE\VectorMapNet\plugin\datasets\base_dataset.py", line 57, in __init__
self.pipeline = Compose(pipeline)
File "e:\zju_research\explore\vectormapnet\mmdetection3d\mmdet3d\datasets\pipelines\compose.py", line 31, in __init__
transform = build_from_cfg(transform, MMDET_PIPELINES)
File "D:\Miniconda3\envs\vectormapnet\lib\site-packages\mmcv\utils\registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: VectorizeLocalMap: [Errno 2] No such file or directory: './datasets/nuScenes\\maps\\expansion\\boston-seaport.json'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tools/train.py", line 261, in <module>
main()
File "tools/train.py", line 224, in main
datasets = [build_dataset(cfg.data.train)]
File "e:\zju_research\explore\vectormapnet\mmdetection3d\mmdet3d\datasets\builder.py", line 46, in build_dataset
dataset = build_from_cfg(cfg, MMDET_DATASETS, default_args)
File "D:\Miniconda3\envs\vectormapnet\lib\site-packages\mmcv\utils\registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: NuscDataset: VectorizeLocalMap: [Errno 2] No such file or directory: './datasets/nuScenes\\maps\\expansion\\boston-seaport.json'
I found that in map_transform.py you wrote:
self.MAPS = ['boston-seaport', 'singapore-hollandvillage',
             'singapore-onenorth', 'singapore-queenstown']
I am wondering whether we can use v1.0-mini to run the code, and if so, how to solve the problem. Should I manually download the map files to the corresponding locations?
Thanks!
@Mrmoore98 Since you didn't provide the visualization code, I generated results for the mini nuScenes dataset myself, but the results were not good enough in some cases. As far as I know, you output a confidence value for each item, so I was wondering how to use that confidence to filter out the bad results.
Hi, when I evaluate this network, I get this error: RuntimeError: CUDA out of memory. Tried to allocate 1.05 GiB (GPU 0; 5.80 GiB total capacity; 2.91 GiB already allocated; 686.94 MiB free; 3.72 GiB reserved in total by PyTorch). Do you have any suggestions?
I am currently trying to install MMDetection3D. Unfortunately, I encountered a few issues while building the package from source and have managed to work through most of them, but now the build fails with the following error message:
/root/anaconda3/envs/compiler_compat/ld: cannot find /root/mmdetection3d-0.17.3/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering_cuda.o: No such file or directory
I have already checked that the CUDA_HOME and LD_LIBRARY_PATH environment variables are correctly set, and I have also verified that I have the correct version of CUDA installed (11.1). I also revised torch/utils/cpp_extension.py to work with the current ninja version.
If I don't plan to use lidar, can I do without this package? It has already taken a long time to troubleshoot compile errors, and I'm stumped by this one.
Hello VectorMapNet team,
I would be very happy if you could provide the config files to enable visualization of the nuScenes inference results.
Kind regards
Hello,
I have a question about the network output.
The paper describes the output of the detection layer as keypoints, but I see a bboxes item in the network output and don't quite understand what this item refers to:
output:
{'bboxes': array([[6.78633928e-01, 6.01761818e+01, 1.93001175e+02, 6.79676743e+01],
[1.08551838e-01, 4.70667648e+01, 1.86040192e+02, 5.29992828e+01],
[6.80525422e-01, 2.62875919e+01, 1.92955597e+02, 3.81990242e+01],
[1.02617390e-01, 3.25979652e+01, 1.84140854e+02, 4.01845512e+01],
[1.05492517e-01, 5.98437958e+01, 1.91393829e+02, 6.87072525e+01],
[1.81369720e+02, 2.33432560e+01, 1.93190247e+02, 5.72784538e+01],
[1.88910370e+02, 4.01633382e-01, 1.93277222e+02, 8.93388557e+00],
[1.79586243e+02, 2.42032280e+01, 1.93305893e+02, 4.72062874e+01],
[1.91144730e+02, 2.27989311e+01, 1.93756271e+02, 2.95024147e+01],
[6.78633928e-01, 6.01761818e+01, 1.93001175e+02, 6.79676743e+01],
[1.90761978e+02, 2.59762257e-01, 1.94180695e+02, 5.82593441e+00],
[1.90761978e+02, 2.59762257e-01, 1.94180695e+02, 5.82593441e+00],
[6.80525422e-01, 2.62875919e+01, 1.92955597e+02, 3.81990242e+01],
[1.85216492e+02, 2.65239029e+01, 1.93401489e+02, 3.85059166e+01],
[1.02617390e-01, 3.25979652e+01, 1.84140854e+02, 4.01845512e+01],
[2.76825070e-01, 3.56894493e+01, 7.70324850e+00, 6.78138351e+01],
[1.85932480e+02, 3.02446365e+01, 1.94091919e+02, 5.06113396e+01],
[1.39293045e-01, 3.63400269e+01, 4.07710886e+00, 3.95115547e+01],
[1.15843095e-01, 4.77807922e+01, 6.45543623e+00, 5.14295387e+01],
[3.00876260e-01, 3.58237915e+01, 1.86568237e+02, 4.11329613e+01],
[5.80193043e-01, 6.37207260e+01, 1.93202469e+02, 8.26292801e+01],
[1.39293045e-01, 3.63400269e+01, 4.07710886e+00, 3.95115547e+01],
[6.78633928e-01, 6.01761818e+01, 1.93001175e+02, 6.79676743e+01],
[1.81008484e+02, 8.73821945e+01, 1.94158020e+02, 9.38624649e+01],
[1.81397156e+02, 6.52198868e+01, 1.93095413e+02, 9.34513397e+01],
[1.81397156e+02, 6.52198868e+01, 1.93095413e+02, 9.34513397e+01],
[1.90826172e+02, 1.05982714e+01, 1.93410660e+02, 2.62504349e+01],
[1.88526321e+02, 5.90005226e+01, 1.94155319e+02, 6.37259140e+01],
[1.79007812e+02, 2.92288876e+01, 1.92638367e+02, 3.67737999e+01],
[1.08551838e-01, 4.70667648e+01, 1.86040192e+02, 5.29992828e+01],
[2.93112993e-01, 5.80386937e-01, 4.47237730e+00, 2.74098625e+01],
[2.76825070e-01, 3.56894493e+01, 7.70324850e+00, 6.78138351e+01],
[1.81008484e+02, 8.73821945e+01, 1.94158020e+02, 9.38624649e+01],
[1.81369720e+02, 2.33432560e+01, 1.93190247e+02, 5.72784538e+01],
[1.81008484e+02, 8.73821945e+01, 1.94158020e+02, 9.38624649e+01]],
dtype=float32),
'det_gt': {'labels': array([2, 2, 0, 1, 1, 1]), 'bboxes': array([[6.45161280e-01, 2.52892107e+01, 1.92903221e+02, 3.84038806e+01],
[6.45161280e-01, 5.85715175e+01, 1.92903221e+02, 6.65090799e+01],
[1.83304691e+02, 2.39421144e+01, 1.92903221e+02, 6.25449777e+01],
[1.00000000e-01, 3.21644127e+01, 1.79604363e+02, 3.84038806e+01],
[1.00000000e-01, 5.85715175e+01, 1.85901928e+02, 6.65090799e+01],
[1.00000000e-01, 4.58673418e+01, 1.82653165e+02, 5.14953911e+01]])},
Can you please provide the config file for using the lidar? Just modifying the current config file and changing use_lidar in input_modality to true isn't working.
Thanks for sharing the great work!
I noticed that you mentioned "using fine-tuning to handle the exposure bias" at the end of Section 2, but you did not provide any additional information about how the predicted keypoints are fine-tuned. Are there any details about the implementation?
Thanks for your consideration.
Hi developers!
I just wanted to try the code with the v1.0-mini dataset, but after successfully running nuscenes_converter.py, I ran into problems while running python tools/train.py configs/vectormapnet.py. The log is below:
collecting samples...
collected 323 samples in 0.01s
Traceback (most recent call last):
File "/home/long/anaconda3/envs/vechdmap/lib/python3.8/site-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/home/long/VectorMapNet_code-mian/plugin/datasets/pipelines/map_transform.py", line 65, in __init__
self.nusc_maps[loc] = NuScenesMap(
File "/home/long/anaconda3/envs/vechdmap/lib/python3.8/site-packages/nuscenes/map_expansion/map_api.py", line 100, in __init__
raise Exception('Error: You are using an outdated map version (%s)! '
Exception: Error: You are using an outdated map version (%s)! Please go to https://www.nuscenes.org/download to download the latest map!
But I did download the map from the official website. Do you have a solution?
Thanks!
Hi, can you please share the camera + LiDAR modality checkpoint?
Hello, it's a good job and thanks for sharing.
When I prepared the data and ran the command python tools/data_converter/nuscenes_converter.py --data-root /data/public/datasets/nuscenes, I got an error like this:
from . import roiaware_pool3d_ext
ImportError: /home/engineers/maoruiwang/codes/code_map/VectorMapNet_code/mmdetection3d-0.17.3/mmdet3d/ops/roiaware_pool3d/roiaware_pool3d_ext.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPvmm
It seems that the PyTorch and CUDA versions don't match. My environment is:
>>> import torch as t
>>> t.__version__
'1.10.0+cu113'
>>> import mmdet
>>> import mmcv
>>> mmdet.__version__
'2.27.0'
>>> mmcv.__version__
'1.4.0'
Hoping for a reply, thanks.
Hello, I tried to export VectorMapNet loaded with the given checkpoint to ONNX, but I can't figure out how to pass the input to the export call:
# model is init like in tools/test.py
model = build_model(cfg.model, test_cfg=cfg.get('test_cfg'))
...
mm = MMDataParallel(model, device_ids=[0])
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=1,
    workers_per_gpu=1,
    dist=False,
    shuffle=False)

for i, data in enumerate(data_loader):
    torch.onnx.export(mm.module, args=data, f='VectorMapNet.onnx')
    break
But this fails with: RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DataContainer
I don't know exactly what the input and output tensors of the model are.
I also tried to use:
from torchviz import make_dot

for i, data in enumerate(data_loader):
    with torch.no_grad():
        yhat = mm(return_loss=False, rescale=True, **data)
    break

make_dot(yhat, params=dict(list(mm.module.named_parameters()))).render("VectorMapNet", format="png")
to just plot the model and get better insight, but this fails with TypeError: unhashable type: 'list', I think because make_dot() can't handle the post-processed yhat prediction from the mm() call.
So how could one export VectorMapNet to ONNX? I'm really new to torch.
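One direction that might help with the DataContainer error (a sketch, assuming mmcv's DataContainer, which stores its payload on a .data attribute): recursively unwrap the batch into plain tensors, lists, and dicts before handing it to torch.onnx.export, since those are the only container types the exporter accepts.

```python
def unwrap(obj):
    """Recursively strip DataContainer-style wrappers (anything exposing a
    `.data` attribute) so that only plain tensors, lists, tuples, and dicts
    remain -- the container types torch.onnx.export can handle."""
    if hasattr(obj, 'data') and not isinstance(obj, (list, tuple, dict)):
        obj = obj.data
    if isinstance(obj, (list, tuple)):
        return type(obj)(unwrap(o) for o in obj)
    if isinstance(obj, dict):
        return {k: unwrap(v) for k, v in obj.items()}
    return obj
```

Even after unwrapping, the model's forward signature (keyword arguments, dict outputs) may still need a thin wrapper nn.Module that takes and returns plain tensors for tracing to succeed.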