Comments (7)
Hi,
Thanks for your interest in our work. I have updated the eval file: I found that it was not saving the ImageNet pickle file properly. Could you rerun the code and check the new results?
Since ImageNet is significantly larger than the other datasets, as you can see from the training file, we first save the ImageNet result at epoch 300 and then fine-tune for another 100 epochs on the rest of the datasets. So you need to run the evaluation separately for imagenet and nonimagenet, and then run coco_results.py.
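The two-phase schedule above can be sketched as follows (the epoch boundaries come from this comment; the function name and constants are illustrative, not the repo's actual API):

```python
# Sketch of the two-phase training schedule described above.
# Epoch boundaries are taken from this comment; the rest is illustrative.
IMAGENET_EPOCHS = 300   # the ImageNet checkpoint is saved at epoch 300
TOTAL_EPOCHS = 400      # then 100 more epochs fine-tune the remaining datasets

def active_split(epoch):
    """Return which group of datasets is being trained at a given epoch."""
    if epoch < IMAGENET_EPOCHS:
        return "imagenet"
    return "nonimagenet"

# Evaluation therefore has to load two different checkpoints: the epoch-300
# one for imagenet and the final one for nonimagenet, before coco_results.py
# aggregates both result pickles into the final scores.
```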
From your attached performance, though, there was definitely something wrong on the training side. Could you provide the results for the entire training run, so I can take a detailed look?
Thanks.
from mtan.
Also, could you confirm whether scheduler.step() has been moved to the correct position, as required by the new PyTorch version? I have just fixed this problem as well. From your posted results, it seems that the network was not being updated properly.
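For context, since PyTorch 1.1.0 the learning-rate scheduler must be stepped after the optimizer, otherwise the first value of the schedule is skipped and the effective learning rates shift by one epoch. A minimal sketch of the required call order (toy model and schedule, not the repo's actual training loop):

```python
import torch

# Toy setup just to demonstrate the call order; not the repo's actual model.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(3):
    # ... forward pass, loss.backward(), per-batch updates would go here ...
    optimizer.step()     # update the weights first
    scheduler.step()     # since PyTorch 1.1.0: step the scheduler AFTER the optimizer
```

Calling scheduler.step() before optimizer.step() (the pre-1.1.0 convention) triggers a UserWarning in newer PyTorch and changes which learning rate each epoch actually trains with.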
Thanks for the quick response.
There is another question when I rerun the eval code. The test annotations contain only the image names, not class labels, so this code:
im_test_set[i] = torch.utils.data.DataLoader(torchvision.datasets.ImageFolder(data_path + data_name[i] + '/test', transform=data_transform(data_path, data_name[i], train=False)), batch_size=100, shuffle=False)
raises an error like this:
Traceback (most recent call last):
  File "model_wrn_eval.py", line 229, in <module>
    transform=data_transform(data_path, data_name[i], train=False)),
  File "/usr/local/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 178, in __init__
    target_transform=target_transform)
  File "/usr/local/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 79, in __init__
    "Supported extensions are: " + ",".join(extensions)))
RuntimeError: Found 0 files in subfolders of: decathlon-1.0-data/imagenet12/test
Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif
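One common workaround for this error (an assumption on my part, not necessarily the repo's official fix): torchvision's ImageFolder requires one subfolder per class, which the decathlon test split does not have, so a flat-directory dataset that pairs each image with its filename can stand in for it:

```python
import os

import torch
from PIL import Image


class FlatImageDataset(torch.utils.data.Dataset):
    """Reads images from a single directory that has no class subfolders."""

    EXTS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif")

    def __init__(self, root, transform=None):
        self.root = root
        self.transform = transform
        self.files = sorted(
            f for f in os.listdir(root) if f.lower().endswith(self.EXTS)
        )

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        path = os.path.join(self.root, self.files[idx])
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        # No labels exist for the test split, so return the filename instead;
        # predictions can then be matched back to the annotation file by name.
        return img, self.files[idx]
```

A DataLoader can wrap it the same way as the ImageFolder call above, e.g. `torch.utils.data.DataLoader(FlatImageDataset(data_path + data_name[i] + '/test', transform=...), batch_size=100, shuffle=False)`.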
Thanks very much. I will retrain the model immediately.
from mtan.
Hi Chris,
Sorry for the confusion in the code. I will take a detailed look and re-run on my own and get back to you.
Sk.
Thanks.
Hello,
I made some modifications to the code, and everything should work correctly now. I have also fully tested each component, so it is good to go.
Could you check out the updated readme and follow each step to run the code?
For 'eval' mode (which does not train on the validation dataset), you should expect the reported TEST results (which are actually validation performance) to be roughly close to the ones reported in the paper. To evaluate on the test dataset, please run the code in 'all' mode, so that it trains on both the training and validation datasets.
Let me know whether this works.
Sk.