Comments (11)
Sorry for the delay - try this notebook to visualize merged predictions from two models.
from cross_view_transformers.
Derrick is correct - all of the models in this work were trained on single classes, to allow closer comparison with prior works like Lift-Splat and FIERY.
I think most of the papers right now have separate models for dynamic objects and static road layout; that's why you only see a single model here. Just my personal perspective.
@gongyan1 Have you found a way to combine the two visualization results? Please give me some advice, thanks!
Thank you for your answers, but I want to ask how to reproduce the result figure shown by the author.
@yangyangsu29 Did you solve the problem now? Thanks.
Training each class (vehicle and driveable area) separately can indeed reproduce the paper's results; in the visualization, the predicted maps are overlaid together as follows:
@yangyangsu29
python3 scripts/train.py \
  +experiment=cvt_nuscenes_vehicle \
  data.dataset_dir=/media/datasets/nuscenes \
  data.labels_dir=/media/datasets/cvt_labels_nuscenes
Thanks for your reply; please allow me to refine the question further.
You mean that the above command trains only the vehicle model, and that I need to change "+experiment=cvt_nuscenes_vehicle" to "+experiment=cvt_nuscenes_road" to train on the driveable area. However, the visualization is not saved after this command runs, so I edited the _log_image function in cross_view_transformer/callbacks/visualization_callback.py to store the visualization feature map with OpenCV. I would like to ask how to add the two feature maps together, and whether the author has implemented this in the repo. In other words, I wish you could be a little more detailed.
Yes, change "+experiment=cvt_nuscenes_vehicle" to "+experiment=cvt_nuscenes_road" to train the driveable-area model, then run ./scripts/example.ipynb (modifying ckpt_path and so on) to run inference on the val dataset. For the visualization, you only need a few lines of code to show both results in one figure, as shown above.
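For the "show them in one figure" part, here is a minimal matplotlib sketch. The array names, titles, and output filename are placeholders of my own, not identifiers from this repo; in practice the two arrays would come from the notebook's `viz` helper for each model.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt

# Stand-ins for the two visualizations; in practice these come from
# np.vstack(viz(batch=batch, pred=...)) for each trained model.
vis_vehicle = np.random.rand(200, 400, 3)
vis_road = np.random.rand(200, 400, 3)

# Stack the two panels vertically in a single figure.
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 8))
ax1.imshow(vis_vehicle); ax1.set_title("vehicle"); ax1.axis("off")
ax2.imshow(vis_road); ax2.set_title("driveable area"); ax2.axis("off")
fig.savefig("combined.png", bbox_inches="tight")
```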
@yangyangsu29
Now I've generated the result figures for the roads and vehicle as you showed earlier (https://user-images.githubusercontent.com/49515300/174055573-75356d4f-6838-456a-87de-2747d24ca09f.png). However, I don't know how to combine them. In ./scripts/example.ipynb there is only one checkpoint path; I tried loading two checkpoints at the same time and then generating the visualization result (as shown in the code below), but the result is not very good. Can you explain how the visualizations are overlaid together? Could you send the modified file to my email? It is [email protected]. Thank you for your patience!
```python
with torch.no_grad():
    for batch in loader:
        batch = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in batch.items()}

        pred = network(batch)
        pred_v = network_v(batch)

        visualization = np.vstack(viz(batch=batch, pred=pred))
        visualization_v = np.vstack(viz(batch=batch, pred=pred_v))
        visualization = visualization + visualization_v

        images.append(visualization)
```
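One likely reason the summed result looks bad: adding two uint8 images element-wise wraps around on overflow (e.g. 200 + 180 becomes 124), washing out bright regions. A weighted blend in float with clipping avoids this. The `blend_visualizations` helper and the 50/50 weights below are my own sketch, not part of this repo:

```python
import numpy as np

def blend_visualizations(a: np.ndarray, b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two uint8 RGB visualizations without the wrap-around
    that plain `a + b` causes on uint8 arrays."""
    blended = alpha * a.astype(np.float32) + (1.0 - alpha) * b.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Example: two 2x2 "images" whose raw uint8 sum would wrap around.
road = np.full((2, 2, 3), 200, dtype=np.uint8)
vehicle = np.full((2, 2, 3), 180, dtype=np.uint8)
print(blend_visualizations(road, vehicle)[0, 0, 0])  # 190, not the wrapped 124
```

If OpenCV is already in use for saving, `cv2.addWeighted(road, 0.5, vehicle, 0.5, 0)` does the same blend.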
@yangyangsu29 Can you be more specific about the few lines of code needed to show both results in one figure? Thanks.
Related Issues (20)
- error training
- How long does it take to train the model?
- Geometric reasoning in cross-view attention
- How to train this for segmenting more than 2 classes?
- Question about camera extrinsics
- Question about the implementation of the 'camera-aware positional encoding' part
- The labels of dataset Argoverse.
- loss function mutation when training
- When running train.py: RuntimeError: unmatched '}' in format string
- Discrepancy in the validation IOU values for Driveable Area segmentation
- About setting1
- The label link is broken
- generating labels error: OSError: [Errno 24] Too many open files
- train error: urllib.error.URLError: Error instantiating 'cross_view_transformer.model.backbones.efficientnet.EfficientNetExtractor': <urlopen error [Errno 111] Connection refused>
- Error executing job with overrides
- no test.py
- about the model test
- Error running training (LR Scheduler)
- The following error occurs at runtime (original title in Chinese)
- visualize of attention