
Comments (11)

bradyz commented on July 17, 2024

Sorry for the delay - try this notebook to visualize merged predictions from two models.


bradyz commented on July 17, 2024

Derrick is correct - all of the models in this work were trained for single classes, for a closer comparison to prior works like Lift-Splat and FIERY.


DerrickXuNu commented on July 17, 2024

I think most papers right now use separate models for dynamic objects and the static road layout; that's why you only see single-class models here. Just my personal perspective.


lzm2275965881 commented on July 17, 2024

@gongyan1 Have you found a way to combine the two visualization results? Please give me some advice, thanks!


gongyan1 commented on July 17, 2024

Thank you for your answers, but I want to ask how to reproduce the result figure shown by the author.


gongyan1 commented on July 17, 2024

@yangyangsu29 Did you solve the problem now? Thanks.


yysu-888 commented on July 17, 2024

> @yangyangsu29 Did you solve the problem now? Thanks.

Training each class (vehicle and drivable area) separately can indeed reproduce the paper's results. For the visualization, the predicted maps are overlaid together, as in the figure below; a minimal compositing sketch follows the image.

[image: road_19 (overlaid road and vehicle predictions)]
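A minimal compositing sketch, assuming you already have each model's prediction as an (H, W) boolean NumPy array (e.g. by thresholding the sigmoid output); the names, colors, and 200 x 200 resolution here are illustrative rather than taken from the repo:

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay(road_mask: np.ndarray, vehicle_mask: np.ndarray) -> np.ndarray:
    """Composite two (H, W) boolean BEV masks into a single RGB image."""
    canvas = np.full((*road_mask.shape, 3), 255, dtype=np.uint8)  # white background
    canvas[road_mask] = (200, 200, 200)                           # drivable area in light gray
    canvas[vehicle_mask] = (50, 50, 255)                          # vehicles drawn on top
    return canvas

# Example with dummy masks; replace them with the two models' thresholded outputs.
road_mask = np.zeros((200, 200), dtype=bool)
road_mask[50:150, 50:150] = True
vehicle_mask = np.zeros((200, 200), dtype=bool)
vehicle_mask[95:105, 95:105] = True

plt.imshow(overlay(road_mask, vehicle_mask))
plt.axis('off')
plt.show()
```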


gongyan1 commented on July 17, 2024

@yangyangsu29

```
python3 scripts/train.py \
  +experiment=cvt_nuscenes_vehicle \
  data.dataset_dir=/media/datasets/nuscenes \
  data.labels_dir=/media/datasets/cvt_labels_nuscenes
```

Thanks for your reply; please allow me to refine the question further. You mean that when the above command is executed, only the vehicle model is trained, and I need to change "+experiment=cvt_nuscenes_vehicle" to "+experiment=cvt_nuscenes_road" to train the drivable-area model. However, no visualization is saved after this command, so I edited the _log_image function in cross_view_transformer/callbacks/visualization_callback.py to save the visualization feature map with OpenCV. I would like to ask how to add the two feature maps together, and whether the author has implemented this in this repo. In other words, I wish you could be a little more detailed.
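For saving the rendered image from inside the callback, a minimal OpenCV sketch; here `visualization` stands for whatever RGB uint8 array the callback already renders, and `step` and the output directory are just placeholders:

```python
import os

import cv2

os.makedirs('viz', exist_ok=True)
# OpenCV expects BGR channel order, so convert from the RGB array first.
cv2.imwrite(f'viz/{step:06d}.png', cv2.cvtColor(visualization, cv2.COLOR_RGB2BGR))
```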


yysu-888 commented on July 17, 2024

> @yangyangsu29 ... I would like to ask how to add the two feature maps together, and whether the author has implemented this in this repo.

Yes, change "+experiment=cvt_nuscenes_vehicle" to "+experiment=cvt_nuscenes_road" to train the drivable area, then run ./scripts/example.ipynb (modify ckpt_path and so on) to run inference on the val dataset. For the visualization, you only need a little extra code to show both predictions in one figure, as shown above.


gongyan1 commented on July 17, 2024

@yangyangsu29
Now I have generated the result figures for the road and the vehicle as you showed earlier (https://user-images.githubusercontent.com/49515300/174055573-75356d4f-6838-456a-87de-2747d24ca09f.png). However, I don't know how to combine them. In ./scripts/example.ipynb there is only one checkpoint path; I tried loading two checkpoints at the same time and then generating the visualization (as shown in the code below), but the result is not very good. Can you explain how the visualizations are overlaid together? Could you send the modified file to my email? My email is [email protected]. Thank you for your patience!

```python
with torch.no_grad():
    for batch in loader:
        batch = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in batch.items()}
        pred = network(batch)
        pred_v = network_v(batch)

        visualization = np.vstack(viz(batch=batch, pred=pred))
        visualization_v = np.vstack(viz(batch=batch, pred=pred_v))
        visualization = visualization + visualization_v
        images.append(visualization)
```
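One likely reason the summed result looks wrong: `viz` returns rendered uint8 RGB images, and adding two uint8 NumPy arrays wraps around modulo 256 rather than blending them. A minimal alternative under that assumption is to alpha-blend the two rendered images (or to composite the thresholded masks, as in the sketch earlier in this thread):

```python
import cv2

# Alpha-blend the two rendered images instead of adding them directly;
# the 0.5/0.5 weights are arbitrary and can be tuned.
visualization = cv2.addWeighted(visualization, 0.5, visualization_v, 0.5, 0.0)
images.append(visualization)
```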


gongyan1 commented on July 17, 2024

@yangyangsu29 Can you be more detailed about "you only need a little extra code to show both predictions in one figure"? Thanks.

