
opendrivelab / openlane-v2


[NeurIPS 2023 Track Datasets and Benchmarks] OpenLane-V2: The First Perception and Reasoning Benchmark for Road Driving

Home Page: https://proceedings.neurips.cc/paper_files/paper/2023/hash/3c0a4c8c236144f1b99b7e1531debe9c-Abstract-Datasets_and_Benchmarks.html

License: Apache License 2.0

Languages: Python 12.48%, Jupyter Notebook 87.52%
Topics: topology-reasoning, traffic-element-recognition, 3d-lane-detection

openlane-v2's People

Contributors

faikit, gihharwtw, hilookas, hli2020, huangmozhi9527, peggypeppa, ricardlee, sephyli



openlane-v2's Issues

#L243

iou_cost=dict(type='IoUCost', weight=0.0), # Fake cost. This is just to make it compatible with DETR head.

Do these settings (bev_range=[-50.0, -25.0, -3.0, 50.0, 25.0, 2.0], normalize=True) need to be added here, as in the baseline config?

Low reproduction performance of the baseline_large

Hi, I trained baseline_large on 4 V100s with the default config you provided (without changes). However, the reproduced performance is much lower than the val benchmark. The metrics we obtained are as follows:

  • DET_l: 7.8543
  • DET_t: 43.6408
  • TOP_ll: 0.0001
  • TOP_lt: 4.22
  • OLS: 18.21

Is this normal? I noticed the performance you mentioned in another issue is much better than ours. Are they the same model? Would you mind providing the training log of the baseline_large or the one using InternImage as backbone?

segmentation fault (core dumped)

When running the baseline config, I found that the line from ortools.graph import pywrapgraph in f_score.py triggers a segmentation fault as soon as training starts.
When I comment out from .f_score import f1 and metrics['F-Score for 3D Lane']['score'] = f1.bench_one_submit(gts=gts, preds=preds) in evaluate.py, training starts normally.

Has anyone else run into this problem, and how did you solve it?
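
For reference, a minimal sketch of the workaround described above (the exact positions of these lines in your local copy of evaluate.py may differ):

# in evaluate.py: disable the F-Score metric so the ortools import is never triggered
# from .f_score import f1
...
# metrics['F-Score for 3D Lane']['score'] = f1.bench_one_submit(gts=gts, preds=preds)

This simply skips the 3D-lane F-Score entry; the other metrics are computed as before.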

numpy==1.23.4
ortools==9.2.9972

Evaluation

Hello! Is the input of the evaluation function the control points of the Bezier curve or the 3D line? If the output of the model is the control points of the Bezier curve, do I need to convert the result into a 3D line for evaluation?
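
If conversion turns out to be necessary, here is a minimal sketch of sampling a 3D polyline from Bezier control points (the point counts below are placeholders, not the benchmark's requirement):

import numpy as np
from math import comb

def bezier_to_points(control_points, n_samples=11):
    # control_points: (K+1, 3) 3D Bezier control points; returns (n_samples, 3) sampled points
    control_points = np.asarray(control_points, dtype=np.float64)
    k = len(control_points) - 1                      # curve degree
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein basis B_i(t) = C(k, i) * t^i * (1 - t)^(k - i), shape (n_samples, K+1)
    basis = np.stack([comb(k, i) * t**i * (1.0 - t)**(k - i) for i in range(k + 1)], axis=1)
    return basis @ control_points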

Confusions about the TOP

Hi, thanks for the remarkable work!
I am confused about the TOP calculation.
[screenshot attached]

  1. In the paper, N(v) is explained as the ordered list of neighbors of vertex v ranked by confidence, but vertex v seems to be a ground-truth vertex, so I cannot understand where the confidence comes from.
  2. What does N̂′(v) mean? Could you explain why the neighbors of vertex v need to be ranked by confidence? And could you show me the related code?
    Any help is welcome!

Chamfer Distance

Hello,

In the issues, a strong baseline is mentioned, and its chamfer distance is reported as DET_l_chamfer. How is it calculated? Are the thresholds the same as the Frechet thresholds? Can we calculate it directly as below?

metrics['OpenLane-V2 Score']['DET_l_chamfer'] = _mAP_over_threshold(
    gts=gts, 
    preds=preds, 
    distance_matrixs=distance_matrixs['chamfer'], 
    distance_thresholds=THRESHOLDS_FRECHET,
    object_type='lane_centerline',
    filter=lambda _: True,
    inject=True, # save tp for eval on graph
).mean()

Thanks in advance
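
For reference, a minimal sketch of a symmetric Chamfer distance between two sampled centerlines; this is the generic definition, not necessarily the exact variant used by the benchmark:

import numpy as np

def chamfer_distance(pred_points, gt_points):
    # pred_points: (N, 3), gt_points: (M, 3) points sampled along two centerlines
    pred_points = np.asarray(pred_points, dtype=np.float64)
    gt_points = np.asarray(gt_points, dtype=np.float64)
    # pairwise Euclidean distances, shape (N, M)
    dists = np.linalg.norm(pred_points[:, None, :] - gt_points[None, :, :], axis=-1)
    # average nearest-neighbor distance in both directions
    return 0.5 * (dists.min(axis=1).mean() + dists.min(axis=0).mean())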

Prediction format issue

Hi, thank you for your work. I have an issue with the prediction formatting during evaluation for val/test in openlane_v2_dataset.py#L386.

The code sorts lane centerline predictions by their confidence and stores them in the output result dictionary. However, the topology part is not reordered accordingly. I am not entirely sure, but based on the evaluation function _mAP_topology_lclc, the order of the lclc topology seems to rely on the lc predictions.

If my observation is correct, a possible correction is:

prediction['topology_lclc'] = result['pred_topology_lclc'][sorted_index][:, sorted_index]
prediction['topology_lcte'] = result['pred_topology_lcte'][sorted_index]

Test Time

After submitting the test file to the website, there is no test result after more than an hour. The status stays 'running'; what could be the reason for this?

The rules about temporal information

I am curious whether there are rules about the usage of temporal information (information from timestamps other than the current one) in this competition. Could you officially ban the use of future frames?

submission

Great work! I encountered some errors when submitting the pkl file; I am not sure whether it is a script error or a parameter error.

My test script:

bash ./tools/dist_test.sh projects/openlanev2/configs/baseline_large.py work_dirs/baseline_large/epoch_24.pth 8 --eval bbox --eval-options dump=True dump_dir=work_dirs/baseline_large

My test data config:

    test=dict(
        type=dataset_type,
        data_root=data_root,
        meta_root=meta_root,
        collection='data_dict_subset_A_val',
        pipeline=test_pipeline,
        test_mode=True),

After submitting the result.pkl:

Traceback (most recent call last):
  File "/code/scripts/workers/submission_worker.py", line 500, in run_submission
    submission_metadata=submission_serializer.data,
  File "/tmp/tmpigcmv59_/compute/challenge_data/challenge_1925/main.py", line 58, in evaluate
    raise Exception(f'The submission file size is limited to 500 MB.')
Exception: The submission file size is limited to 500 MB.

Thanks for your help!

Baseline problem

[screenshot attached]
Hi! When training this baseline, I ran into the problem shown above. Do you have any suggestions or views on it?

How a month is defined

The submission rule says that every team can submit results 10 times per month. I am wondering how a month is defined. Is it a calendar month? For example, if I submit a result on April 30th, how many submissions remain on May 1st: 9 or 10?

Some problems about topology loss

[screenshot attached]
Hello, when we looked at the baseline model, we found that it uses 1 - gt and treats 0 as positive when calculating the topology loss. Why do you do this? Shouldn't 1 be positive?

mmdetection3d cannot start training ?

I use "python tools/train.py projects/openlanev2/configs/baseline.py" to start training, while an error is raised.
Traceback (most recent call last):
File "/home/ooxx/miniconda3/envs/openlanev2/lib/python3.8/site-packages/mmcv/utils/misc.py", line 73, in import_modules_from_strings
imported_tmp = import_module(imp)
File "/home/ooxx/miniconda3/envs/openlanev2/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1014, in _gcd_import
File "", line 991, in _find_and_load
File "", line 975, in _find_and_load_unlocked
File "", line 671, in _load_unlocked
File "", line 843, in exec_module
File "", line 219, in _call_with_frames_removed
File "/data/mmdetection3d/projects/openlanev2/baseline/init.py", line 2, in
from .datasets import *
File "/data/mmdetection3d/projects/openlanev2/baseline/datasets/init.py", line 2, in
from .openlane_v2_dataset import *
File "/data/mmdetection3d/projects/openlanev2/baseline/datasets/openlane_v2_dataset.py", line 34, in
from openlanev2.dataset import Collection
ModuleNotFoundError: No module named 'openlanev2.dataset'

What should I do to fix this?

Pose Heading may not be correct

Good night,

I am trying to accumulate the 3D lane-marking detections using the information provided by the pose in the following manner:
points_global = np.dot(current_pose, points_lidar)
where current_pose = np.array(frame.pose.transform).reshape(4, 4)

points_lidar is already in the vehicle frame, as stated in the documentation, so I just use the pose to transform from the vehicle frame to the world/global frame.
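
For concreteness, a minimal sketch of the accumulation step described above, assuming the lane points arrive as an N x 3 array and the pose is a 4 x 4 vehicle-to-global transform (so the points must be homogenized before multiplying):

import numpy as np

def vehicle_to_global(points_vehicle, pose):
    # points_vehicle: (N, 3) points in the vehicle frame
    # pose: (4, 4) homogeneous vehicle-to-global transform
    points_vehicle = np.asarray(points_vehicle, dtype=np.float64)
    ones = np.ones((points_vehicle.shape[0], 1))
    points_h = np.hstack([points_vehicle, ones])     # (N, 4) homogeneous coordinates
    return (points_h @ pose.T)[:, :3]                # back to (N, 3) in the global frame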

When I visualize the accumulated point cloud, detections of different frames accumulate at an angle, and they do not overlap correctly. Moreover, when I plot the position of the car together with the heading/yaw, it does not align with the trajectory of the car.

I am working on segment segment-6935841224766931310_2770_310_2790_310, but I have seen this in other segments. Am I doing something wrong?

[screenshots of the misaligned accumulated point cloud]

Thank you,
I look forward to hearing from you.
Javier Pastor Fernández

Segments length (number of frames)

Good evening,
I am writing to ask whether the dataset contains longer segments than the first version (V1) of the dataset (200 frames). I am trying to work out accumulation of detections and pseudo-mapping.

Thank you,
I am looking forward to hearing from you.

Javier Pastor Fernandez

CUDA out of memory

Hello,
I have 32 GB V100 GPUs, but I still can't fit batch size 1 for the large baseline. I was wondering how you train it and on which GPUs.
I didn't find any option to lower the image resolution for training; am I wrong?
Do you train with half precision?
Thank you for the clarifications

The input image resolution

Hello,

I could not figure out the input image resolutions for the image backbones from the baseline model configs.

Can you share that information?

Thank you in advance

test evaluation question

Hi, could you explain how DET_l is evaluated at test time?
It seems that the Frechet distance is used. Will a submission with 11 points per lane and a submission with 201 points per lane be evaluated against ground truth with the same number of points?
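
In case it is useful, a minimal sketch of arc-length resampling a lane polyline to a fixed number of points before submission; this is only a generic utility, not a statement of what the evaluation server does internally:

import numpy as np

def resample_lane(points, n_out=11):
    # points: (N, 3) ordered polyline; returns (n_out, 3) points evenly spaced by arc length
    points = np.asarray(points, dtype=np.float64)
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum_len = np.concatenate([[0.0], np.cumsum(seg_len)])
    target = np.linspace(0.0, cum_len[-1], n_out)
    # interpolate each coordinate over cumulative arc length
    return np.stack([np.interp(target, cum_len, points[:, i]) for i in range(3)], axis=1)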

Evaluation on Eval.ai

It seems that the evaluation has not started and the status stays 'submitted' after I submit the file. The status of another file submitted last Sunday stays 'running' as well. Is there something wrong with the website?
[screenshot attached]

Could this dataset be used for HD map construction evaluation?

Thanks for your dataset!

I see that you aim to facilitate online mapping, but the question of whether to use offline maps will not be settled in the short term.

So I'm wondering whether this dataset can be used to evaluate the accuracy of offline global HD maps. If so, it would be very helpful for research on automatic HD map construction. Thanks!

Issue with Unsorted Confidence Array in Recall Threshold Calculation

In the current implementation of the evaluation code, there seems to be an issue with the calculation of confidence_thresholds: the confidence array is not sorted when the thresholds corresponding to certain percentiles of recall are extracted.

Here's the existing code that causes the issue:

confidence = np.asarray(confidence)
sorted_idx = np.argsort(-confidence)
tp = tp[sorted_idx]

tps = np.cumsum(tp, axis=0)
eps = np.finfo(np.float32).eps
recalls = tps / np.maximum(num_gt, eps)

taken = np.percentile(recalls, np.arange(10, 101, 10))
taken_idx = {r: i for i, r in enumerate(recalls)}
confidence_thresholds = confidence[np.asarray([taken_idx[t] for t in taken])]

The recalls values are sorted according to confidence, but when calculating confidence_thresholds, the original confidence array isn't sorted, leading to potential inaccuracies.

Suggested Fix:

A potential fix for this issue could involve creating a sorted confidence array before calculating confidence_thresholds. Below is a suggested modification:

confidence = np.asarray(confidence)
sorted_idx = np.argsort(-confidence)
sorted_confidence = confidence[sorted_idx]
tp = tp[sorted_idx]

tps = np.cumsum(tp, axis=0)
eps = np.finfo(np.float32).eps
recalls = tps / np.maximum(num_gt, eps)

taken = np.percentile(recalls, np.arange(10, 101, 10))
taken_idx = {r: i for i, r in enumerate(recalls)}
confidence_thresholds = sorted_confidence[np.asarray([taken_idx[t] for t in taken])]

In this fix, the confidence_thresholds are calculated using the sorted confidence array, ensuring that the confidence thresholds corresponding to the percentiles of recall values are correctly calculated.

Request for Deadline Extension:

While working on this issue, we've found that the debugging process was quite time-consuming and required considerable effort to identify and propose a solution. Given that this was a complex issue that could not be foreseen at the beginning of the competition, and given the time we've spent on debugging and proposing a solution, we kindly request a deadline extension for the competition.

Checkpoint for baseline large

Thanks for the work. Do you have any plan to release the checkpoint for the baseline_large model?
Moreover, I see you set samples_per_gpu=1, workers_per_gpu=8. Does that mean the batch size is 1 and num_workers is 8? Am I missing something?

Road Lane

Hi, thanks for the amazing work!

I want to confirm whether the annotations contain only the centerlines of the lanes and not the lane line labels, because I did not find them in the annotation files, yet the Readme mentions "Following the OpenLane dataset, we annotate lanes in 3D space to reflect their properties in the real world."

If not, will they be provided later, or how can we obtain them?

OpenLane-V2 2.0 plugin release plan and time frame?

Hi, we are specifically interested in the 2.0 version of the dataset. I noticed that the plugins for working with the 2.0 version are not yet available (#61). Could you please share the plan and estimated time frame for releasing them? It would also be great if you could update the instructions to let users know that the 2.0 plugins are not yet available.

Thank you!

The directions of lane labels

I note that the label points of a lane are organized along the x-axis (the x-coordinates of the 11 label points are equally spaced). However, I find that the x-coordinates of lane labels can be either increasing or decreasing, which encodes the direction of the label. Even if a lane prediction is geometrically correct, the evaluation will count it as a detection failure if its direction does not match the lane label. Therefore, we are wondering how the directions of lane labels are decided, i.e., in which cases the x-coordinates are increasing and in which they are decreasing.
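
To make the point about direction concrete, here is a minimal sketch of a discrete Frechet distance; comparing a lane against a reversed copy of itself shows how a flipped direction inflates the distance (this is the textbook recurrence, not necessarily the benchmark's exact implementation):

import numpy as np

def discrete_frechet(p, q):
    # p: (N, D), q: (M, D) ordered polylines
    p, q = np.asarray(p, dtype=np.float64), np.asarray(q, dtype=np.float64)
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)   # pairwise point distances
    ca = np.zeros((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

# a straight lane with 11 equally spaced points along x
lane = np.stack([np.linspace(0, 10, 11), np.zeros(11), np.zeros(11)], axis=1)
print(discrete_frechet(lane, lane))        # 0.0
print(discrete_frechet(lane, lane[::-1]))  # 10.0: same geometry, opposite direction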

Pre-training model

Hi!
Thank you for running the competition! I would like to check whether my code is correct, but it takes a long time to train the model. Can you provide a pre-trained model? Thank you!

Performance on baseline_large

It seems I can't reproduce the 2D detection performance of the baseline_large configuration; only 1% accuracy in 2D detection is achieved (the detection loss does decrease during training). I tried to overfit a single sample (i.e., train/00000/315967376899927209). Although it reached 85% detection accuracy after training for 20 epochs, the visualization result looks strange and the detected areas have little relation to the gt boxes. Is there an inconsistency between the training and evaluation processes?

[visualization of the overfitting result]

Baseline model question

Hi, will you provide the training and test code for the baseline model, along with the baseline results?

Toponet

Hi, I noticed that there are results of TopoNet on the leaderboard. Will this result be included in the final ranking?

Question Regarding the Pose to GPS Transformation

Thanks for the awesome dataset! I was wondering if the GPS location can be exposed. I was hoping to get the GPS location from the pose estimate, but in the AV2 dataset the city location is required to compute this projection. Is there any way we can get which city a frame belongs to? Specifically, I was hoping to pull information at that location, which requires converting the city coordinates to WGS84 coordinates.

Thanks!

ModuleNotFoundError: No module named 'openlanev2.dataset'

https://github.com/OpenDriveLab/OpenLane-V2/blob/bdd46b17638535d1818b48bc9f028bbfd77762a1/plugin/models/baseline/datasets/openlane_v2_dataset.py#L34C4-L34C4

I'm trying to train a model based on the instructions, but I got an unexpected error: ModuleNotFoundError: No module named 'openlanev2.dataset'.
I've changed 'from openlanev2.dataset import Collection' to 'from openlanev2.centerline.dataset import Collection' (and the next 3 lines in the same way), and it works fine for now.
But is that right? Or is it actually due to something missing in my environment configuration?

Problem for Eval

Greetings, we ran into the following problem during testing; is it caused by the parameters?
TypeError: format_results() got an unexpected keyword argument 'dump'
The test command is: /nvme0n1/OpenLane_v2/mmdetection3d/tools/dist_test.sh /nvme0n1/OpenLane_v2/mmdetection3d/projects/openlanev2/OpenLane-V2/plugin/mmdet3d/configs/baseline.py /nvme0n1/OpenLane_v2/work_dirs/baseline/epoch_2.pth 8 --out=/nvme0n1/OpenLane_v2/work_dirs/out/out.pkl --format-only --eval-options dump=True dump_dir=/nvme0n1/OpenLane_v2/work_dirs visualization=True visualization_dir=/nvme0n1/OpenLane_v2/work_dirs/vis

test server error

After submitting, the test server returns the stderr "Exception: The submission file size is limited to 400 MB.", but the submission file is already within the limit.

Evaluation

Hi, can you explain the definition of the topology evaluation in this task, to help us understand it better?

train

Hello, can you give me some training advice? Our end-to-end training results are not very good.

SD map info

Hi, I downloaded and played with the SD map. I found that the SD map only has road-level info, not lane-level info. Worse, the SD map does not contain topology info at intersections (turn left, turn right, ...).

Do you plan to update the SD (ADAS) map in the future?

Overfitting experiments

I have a question regarding the ResNet-18 backbone.
Do you know if it works, or was it ever successfully tested? Since it's the only one that trains as-is on V100 / A5000 GPUs, I have tried to overfit it to one sample, but it doesn't really work. I have tried many things: different learning rates, different learning rate schedulers, frozen training, predicting only lc, and more. I also tried many epochs (~2000).

The same overfitting experiment works with the InternImage backbone (starting from the provided checkpoint, at 1/4 resolution). I still have to test whether the same training works from scratch.

I plan to test the ResNet-50 backbone at 1/4 resolution and update this post next, but I had hoped the overfitting experiment on the small backbone would converge as well.

ModuleNotFoundError: No module named 'mmdet.core'

Hello,

When I attempt to run the training script, I hit the error "No module named 'mmdet.core'".

I have installed mmdet according to the instructions, and can locate the folder in my Anaconda environment site-packages.
However, there is no 'core' folder.
Do you know why this may be happening? Have I missed a critical step?

Thank you for your help.

Complete Leaderboard

Hi, since submitted results can be made private, when will the complete leaderboard results be available?

Error when trying to download data

I see this error on Google Drive when trying to download the data:

"Sorry, you can't view or download this file at this time.

Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator."

Likewise, I am unable to download the data from the Baidu link.
Does anyone else see this error?
