
GLPDepth's Introduction

Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth [Paper]


Downloads

  • [Downloads] Trained ckpt files for NYU Depth V2 and KITTI
  • [Downloads] Predicted depth map png files for the NYU Depth V2 and KITTI Eigen split test sets

Google Colab

Open In Colab

Thanks for the great Colab demo from NielsRogge

Requirements

Tested on

python==3.7.7
torch==1.6.0
h5py==3.6.0
scipy==1.7.3
opencv-python==4.5.5
mmcv==1.4.3
timm==0.5.4
albumentations==1.1.0
tensorboardX==2.4.1
gdown==4.2.1

You can install the above packages with

$ pip install -r requirements.txt

Or you can pull docker image with

$ docker pull doyeon0113/glpdepth
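
To work inside the image, a typical invocation might look like this (a sketch; the GPU flag and mount point are assumptions, not documented by the repo):

$ docker run --gpus all -it -v $(pwd):/workspace doyeon0113/glpdepth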

Inference and Evaluation

Dataset

NYU Depth V2
$ cd ./datasets
$ wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
$ python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/
KITTI

Download the annotated depth maps dataset (14GB) from [link] into ./datasets/kitti/data_depth_annotated

$ cd ./datasets/kitti/data_depth_annotated/
$ unzip data_depth_annotated.zip

With the above two instructions, you can run eval_with_pngs.py/test.py for NYU Depth V2 and eval_with_pngs.py for KITTI.

To run the full experiments, please follow the [BTS] repository to obtain the complete NYU Depth V2 and KITTI datasets.

Your dataset directory should be

root
- nyu_depth_v2
  - bathroom_0001
  - bathroom_0002
  - ...
  - official_splits
- kitti
  - data_depth_annotated
  - raw_data
  - val_selection_cropped

Evaluation

  • Evaluate with png images

    for NYU Depth V2

    $ python ./code/eval_with_pngs.py --dataset nyudepthv2 --pred_path ./best_nyu_preds/ --gt_path ./datasets/nyu_depth_v2/ --max_depth_eval 10.0 
    

    for KITTI

    $ python ./code/eval_with_pngs.py --dataset kitti --split eigen_benchmark --pred_path ./best_kitti_preds/ --gt_path ./datasets/kitti/ --max_depth_eval 80.0 --garg_crop
    
  • Evaluate with model (NYU Depth V2)

    Result images will be saved in ./args.result_dir/args.exp_name (default: ./results/test)

    • To evaluate only

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <path_for_ckpt> --do_evaluate  --max_depth 10.0 --max_depth_eval 10.0
      
    • To save pngs for eval_with_pngs

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <path_for_ckpt> --save_eval_pngs  --max_depth 10.0 --max_depth_eval 10.0
      
    • To save visualized depth maps

      $ python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <path_for_ckpt> --save_visualize  --max_depth 10.0 --max_depth_eval 10.0
      

    For KITTI, change the arguments to --dataset kitti --max_depth 80.0 --max_depth_eval 80.0 and add --kitti_crop [garg_crop or eigen_crop], as in the example below.
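
    For instance, a complete KITTI evaluation command assembled from these arguments would be (illustrative; the checkpoint path is a placeholder):

    $ python ./code/test.py --dataset kitti --data_path ./datasets/ --ckpt_dir <path_for_ckpt> --do_evaluate --max_depth 80.0 --max_depth_eval 80.0 --kitti_crop garg_crop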

Inference

  • Inference with image directory
    $ python ./code/test.py --dataset imagepath --data_path <dir_to_imgs> --save_visualize
    

Train

for NYU Depth V2

$ python ./code/train.py --dataset nyudepthv2 --data_path ./datasets/ --max_depth 10.0 --max_depth_eval 10.0  

for KITTI

$ python ./code/train.py --dataset kitti --data_path ./datasets/ --max_depth 80.0 --max_depth_eval 80.0  --garg_crop

To-Do

  • Add inference
  • Add training codes
  • Add dockerHub link
  • Add colab

License

For non-commercial purposes only (research, evaluation, etc.).

Citation

@article{kim2022global,
  title={Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth},
  author={Kim, Doyeon and Ga, Woonghyun and Ahn, Pyungwhan and Joo, Donggyu and Chun, Sehwan and Kim, Junmo},
  journal={arXiv preprint arXiv:2201.07436},
  year={2022}
}

References

[1] From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation. [code]

[2] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. [code]

GLPDepth's People

Contributors

vinvino02


GLPDepth's Issues

Training does not converge

 Firstly, thanks for the nice paper and open source project!

 I am trying to reproduce the training process. I am using the NYU Depth V2 dataset from Kaggle; it should be identical, just the filenames are different. Nothing else changed; only the dataloader was refactored as 'kaggle', and it loads CSV files for training and test.

As you can see from the logs.txt (copied below), over the 25 epochs the silog is decreasing (as expected), but the RMSE jumps up dramatically around epoch 15. Maybe it was caused by the poly LR schedule around half_epoch. In any case, it seems I could not reproduce the best-trained model on the NYU Depth V2 dataset with the default settings. Is there anything I should try or modify?

Thanks a lot!
Shawn

Copy of the logs.txt:

gpu_or_cpu:gpu,
data_path:./datasets/,
dataset:kaggle,
exp_name:local_kaggle,
batch_size:12,
workers:8,
max_depth:10.0,
max_depth_eval:10.0,
min_depth_eval:0.001,
do_kb_crop:1,
kitti_crop:None,
epochs:25,
lr:0.0001,
crop_h:448,
crop_w:576,
log_dir:./logs,
val_freq:1,
save_freq:10,
save_model:False,
save_result:False,

Epoch: 001 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9725     2.6005     2.8542     3.6123     1.5674     2.5565 

====================================================================================================

Epoch: 002 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9749     2.6100     2.8581     3.7024     1.6068     2.6198 

====================================================================================================

Epoch: 003 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9734     2.6032     2.8548     3.6429     1.5810     2.5776 

====================================================================================================

Epoch: 004 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9740     2.6057     2.8561     3.6663     1.5912     2.5941 

====================================================================================================

Epoch: 005 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9737     2.6048     2.8557     3.6555     1.5865     2.5866 

====================================================================================================

Epoch: 006 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9734     2.6035     2.8551     3.6427     1.5810     2.5775 

====================================================================================================

Epoch: 007 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9734     2.6029     2.8547     3.6415     1.5805     2.5766 

====================================================================================================

Epoch: 008 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9733     2.6028     2.8547     3.6377     1.5788     2.5739 

====================================================================================================

Epoch: 009 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9734     2.6035     2.8551     3.6432     1.5812     2.5778 

====================================================================================================

Epoch: 010 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9737     2.6048     2.8557     3.6541     1.5860     2.5854 

====================================================================================================

Epoch: 011 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9734     2.6030     2.8547     3.6412     1.5804     2.5763 

====================================================================================================

Epoch: 012 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9733     2.6027     2.8545     3.6391     1.5795     2.5748 

====================================================================================================

Epoch: 013 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9735     2.6036     2.8551     3.6449     1.5820     2.5789 

====================================================================================================

Epoch: 014 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9736     2.6043     2.8555     3.6496     1.5840     2.5822 

====================================================================================================

Epoch: 015 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0606     3.5239    27.9788     7.3483     1.4549     0.6088     1.0641 

====================================================================================================

Epoch: 016 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0606     3.5245    27.9877     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 017 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 018 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 019 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 020 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 021 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 022 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 023 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 024 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

Epoch: 025 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0071     0.0228     0.0605     3.5245    27.9878     7.3496     1.4550     0.6088     1.0642 

====================================================================================================

The model and loaded state dict do not match exactly

Hi, I changed the input size to 224x224 and got the message 'the model and loaded state dict do not match exactly', but the code still works. However, I'm not sure whether the mit_b4 pre-trained model was actually loaded. Can you help me? Also, have you ever run an experiment with a 224x224 input size? If so, could you tell me the corresponding evaluation metrics? Thank you very much!

TypeError: only integer scalar arrays can be converted to a scalar index when running python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/

When I ran the following:
python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/

I got the following:

795 training images
654 test images
reading nyu_depth_v2_labeled.mat
<HDF5 dataset "sceneTypes": shape (1, 1449), type "|O">
Traceback (most recent call last):
  File "../code/utils/extract_official_train_test_set_from_mat.py", line 88, in <module>
    scenes = [u''.join(chr(c) for c in h5_file[obj_ref]) for obj_ref in h5_file['sceneTypes'][0]]
  File "../code/utils/extract_official_train_test_set_from_mat.py", line 88, in <listcomp>
    scenes = [u''.join(chr(c) for c in h5_file[obj_ref]) for obj_ref in h5_file['sceneTypes'][0]]
  File "../code/utils/extract_official_train_test_set_from_mat.py", line 88, in <genexpr>
    scenes = [u''.join(chr(c) for c in h5_file[obj_ref]) for obj_ref in h5_file['sceneTypes'][0]]
TypeError: only integer scalar arrays can be converted to a scalar index

Any clue how to resolve this error?
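
One hedged suggestion (an assumption, not a confirmed fix from this thread): with h5py 3.x and recent NumPy, iterating over the uint16 string dataset yields length-1 arrays, and chr() no longer accepts those as scalar indices. Indexing out the scalar usually resolves it:

scenes = [u''.join(chr(c[0]) for c in h5_file[obj_ref])
          for obj_ref in h5_file['sceneTypes'][0]]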

Configuration of the SUN RGB-D dataset parameters?

1. What max_depth and min_depth values should the model use to predict depth on SUN RGB-D?
2. During the evaluation phase, what max_depth_eval and min_depth_eval values should be used to mask the SUN RGB-D dataset?
3. When using the depth maps of the SUN RGB-D dataset, do you choose the original depth maps (which have missing values), or the depth maps in the depth_bfx folder where the missing values have already been filled in?
4. Thanks in advance to the author for answering these questions, because many papers and codebases do not mention this information.

Apple M1 MPS device just gives noise

Hi - nice work btw!

On my M1 MacBook Pro it works perfectly on the CPU, but I just get noise if I try to use the hardware-accelerated MPS device.

CPU: [correct depth map output]

MPS: [noisy output]

I assume this is really a PyTorch/MPS issue rather than with your code, but I wonder if you can shed any light on what the problem might be?

There are no errors indicating usage of PyTorch features unimplemented for Apple hardware, and unsurprisingly using PYTORCH_ENABLE_MPS_FALLBACK=1 has no effect.

a problem of metrics

Hi guys!
Firstly, thanks for the great paper and open source project!
I am trying to reproduce your work, and I have downloaded your model from the URL in model.py. However, after testing, we found that the metrics differ significantly from your results. So I wonder if it would be convenient for you to provide a well-trained model.
Thanks a lot!

permission error to download the pretrained weights

When calling the model with model = GLPDepth(max_depth=args.max_depth, is_train=True), I got the permission error below when downloading the pretrained weights. Can you share the pretrained weights? Thanks.


I can't download pre-trained mit_b4

print("Download pre-trained encoder weights...")
id = '1BUtU42moYrOFbsMCE-LTTkUE-mrWnfG2'
url = 'https://drive.google.com/uc?id=' + id
output = './code/models/weights/mit_b4.pth'
gdown.download(url, output, quiet=False)
I can't download mit_b4. The probable reason is that this link has been downloaded by so many people that I can no longer download it. Can you provide a new download link?

Bug in GLPDepth/code/dataset/nyudepthv2.py file

If scale_size is set, the depth map is resized from the image data rather than the depth data.

Following is the code snippet:

if self.scale_size:
    image = cv2.resize(image, (self.scale_size[0], self.scale_size[1]))
    depth = cv2.resize(image, (self.scale_size[0], self.scale_size[1]))
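
A likely fix, sketched from the report above (the second resize should operate on the depth map; the nearest-neighbor interpolation flag is an added assumption, chosen because interpolating depth linearly mixes values across object boundaries):

if self.scale_size:
    image = cv2.resize(image, (self.scale_size[0], self.scale_size[1]))
    # resize the depth map itself, not the image a second time
    depth = cv2.resize(depth, (self.scale_size[0], self.scale_size[1]),
                       interpolation=cv2.INTER_NEAREST)  # assumption: nearest keeps depth values unblended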

why do you divide depth by 1000

As I understand it, you apply ToTensor() to the depth so its values are in (0, 1), and then you divide by 1000. So does the GT end up with values ranging in (0, 0.001)?
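
For context, a hedged note (not from the original thread): the sync depth PNGs produced by the BTS preprocessing store depth in millimeters as 16-bit integers, so dividing by 1000 converts to meters; it is not applied on top of a [0, 1]-scaled tensor. A minimal sketch, assuming such a millimeter-encoded PNG:

import cv2
import torch

# read the 16-bit depth PNG without rescaling (filename is a placeholder)
depth_mm = cv2.imread('sync_depth_00000.png', cv2.IMREAD_UNCHANGED)
# millimeters -> meters; values land in roughly 0-10 m for NYU Depth V2
depth_m = torch.from_numpy(depth_mm.astype('float32')) / 1000.0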

About SiLog Loss NaN

I replaced GLPDepth's backbone with MiT-B1, and after training 10 epochs the loss increased with each batch of data and finally became NaN. Has this happened to the author? I guarantee that nothing in the model was changed other than the backbone; the dataset is NYU V2 and the input size is 640x480.

About your visualized results

Thanks for your great work!
I have colored your predicted depth maps, but the result is quite different from the one in your paper.
Is something wrong here?
yours: [depth map image]
mine: [depth map image]

Adding GLPN to HuggingFace Transformers

Hi,

Thanks for the impressive work! As the model uses SegFormer's Mix-b4 as encoder, and I already ported SegFormer to HuggingFace Transformers as seen here, it was relatively easy to port this model as well.

Here's a notebook for quick inference on an image: https://colab.research.google.com/drive/1v6fzr4XusKdXAaeGZ1gKe1kh9Ce_WQhl?usp=sharing

Both models are hosted on the hub: https://huggingface.co/models?other=glpn. If you're not familiar with HuggingFace's hub, basically each model has its own Github repo (it's based on git-LFS), so you can git add, git commit and git push to each repo separately. Each model has its own git history, like this one.

Are you interested in creating a KAIST organization on the hub, similar to other organizations like facebook, microsoft, google, such that we can host the models under that name? That way, you'll be able to do:

from transformers import GLPNForDepthEstimation

model = GLPNForDepthEstimation.from_pretrained("kaist/glpn-nyu")

Or, if you prefer, we can host the weights under your name as well.

Let me know if you're interested!

Kind regards,

Niels
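
For reference, a minimal quick-inference sketch against the ported checkpoints (the hub name vinvino02/glpn-nyu reflects where the weights were ultimately hosted; exact class names may vary with the Transformers version):

import requests
import torch
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-nyu")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")

# any RGB image works; this COCO sample is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # shape (1, H, W)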

Test on SUN RGB-D dataset.

Hello, I am a student and I would like to ask how to test on SUN RGB-D dataset and get the corresponding depth map? I hope the author can answer and give the code. Thanks, I hope to get your reply. Have a nice life.

Error: test.py

Traceback (most recent call last):
  File "./code/test.py", line 18, in <module>
    from configs.test_options import TestOptions
ModuleNotFoundError: No module named 'configs.test_options'

FPS rate

Thank you for your paper and implementation!
Could you share information about inference speed? What FPS do you get with your hardware setup?


Details on loss function

Hi,
Nice work!

What loss function do you use? I couldn't find any details in the paper (or I may have overlooked them).
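
For what it's worth, the training logs in this repo track a silog metric, which suggests the scale-invariant log loss commonly used in monocular depth estimation (Eigen et al.). A sketch of the standard formulation (the lambd default is an assumption, not confirmed from this repo's criterion code):

import torch

def silog_loss(pred, target, lambd=0.5, eps=1e-8):
    # scale-invariant log loss over valid (positive-depth) pixels
    valid = target > 0
    diff_log = torch.log(target[valid] + eps) - torch.log(pred[valid] + eps)
    return torch.sqrt((diff_log ** 2).mean() - lambd * diff_log.mean() ** 2)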

The input resolution seems to have a big impact on the model

Hi, I used an input size of 576x448 and the model achieves the results of the paper, but when I changed the input size to 224x224, the model's RMSE worsened from 0.34 to 0.50. I don't understand why; is this normal? Should I retrain the mit_b4 model? Looking forward to your reply.

Questions about hardware

Hi, I am curious about what hardware was used to train the model (apologies if I missed it somewhere within the paper).

Error with: python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/

(glp) ning@ubuntu:~/GLPDepth/datasets$ python ../code/utils/extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/
Traceback (most recent call last):
  File "../code/utils/extract_official_train_test_set_from_mat.py", line 36, in <module>
    import h5py
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/h5py/__init__.py", line 45, in <module>
    from ._conv import register_converters as _register_converters,
  File "h5py/_conv.pyx", line 17, in init h5py._conv
  File "/home/ning/GLPDepth/code/utils/logging.py", line 7, in <module>
    import torch
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/__init__.py", line 751, in <module>
    from .functional import *  # noqa: F403
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/functional.py", line 8, in <module>
    import torch.nn.functional as F
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/nn/__init__.py", line 1, in <module>
    from .modules import *  # noqa: F403
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/nn/modules/__init__.py", line 2, in <module>
    from .linear import Identity, Linear, Bilinear, LazyLinear
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 6, in <module>
    from .. import functional as F
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/nn/functional.py", line 18, in <module>
    from .._jit_internal import boolean_dispatch, _overload, BroadcastingList1, BroadcastingList2, BroadcastingList3
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/_jit_internal.py", line 24, in <module>
    import torch.distributed.rpc
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/distributed/__init__.py", line 55, in <module>
    from .distributed_c10d import *  # noqa: F403
  File "/home/ning/miniconda3/envs/glp/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 63, in <module>
    logger = logging.getLogger(__name__)
AttributeError: module 'logging' has no attribute 'getLogger'
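
A hedged reading of this traceback (an assumption, not an official answer): torch's import of the standard-library logging module is being shadowed by the repo's code/utils/logging.py, because Python prepends the script's own directory (code/utils) to sys.path. A hypothetical workaround is to run the extraction step with that directory off the path, for example by copying the script out first:

$ cp ../code/utils/extract_official_train_test_set_from_mat.py .
$ python extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/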

Top side colored

Hi, is there a reason why the top of the output always looks as if everything is very close (white and green colors)? I am using Google Colab.

About vertical cutdepth

Hi, I used vertical CutDepth in FastDepth and found that the result is not good: delta1 dropped from 0.799 to 0.78. I made sure that the dataset is correct, and I changed nothing in the original code. Do you know why?

depth prediction is not absolute depth

Hello, I have a question about the predictions made by your network. I printed the predicted depth values and found that they do not correspond to the true distances. Does the network predict relative depth? The labels in the NYUv2 dataset are absolute depth, so why does the network predict relative depth? Could you please advise me on how to modify the network to predict true depth?

Google Colab

Hi, I was wondering when you will be uploading a Google Colab file? Thanks

Runtime Error when evaluating the model for NYU Depth V2

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument max in method wrapper_clamp_Tensor)

We get this error when running the following command from the instructions:
python ./code/test.py --dataset nyudepthv2 --data_path ./datasets/ --ckpt_dir <path_for_ckpt> --do_evaluate --max_depth 10.0 --max_depth_eval 10.0

The workaround is to add a device parameter in line 294 of MIM-Depth-Estimation/models/swin_transformer_v2.py as shown below:

logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. / 0.01, device=self.logit_scale.device))).exp()

Please correct this issue.

train_list.txt

I notice that train_list.txt does not include all the images from the NYU Depth V2 dataset from BTS. For example, among images in the study_room_0005b scene:
...
/study_room_0005b/rgb_00087.jpg /study_room_0005b/sync_depth_00087.png
/study_room_0005b/rgb_00089.jpg /study_room_0005b/sync_depth_00089.png
/study_room_0005b/rgb_00090.jpg /study_room_0005b/sync_depth_00090.png
/study_room_0005b/rgb_00092.jpg /study_room_0005b/sync_depth_00092.png
/study_room_0005b/rgb_00093.jpg /study_room_0005b/sync_depth_00093.png

Images such as rgb_00091.jpg, rgb_00094.jpg, etc. are missing from the list.

It looks like images were selected when generating the final list used in the training process. What criteria were used to select images for training?

Thanks

About train process and validate result

Thanks for your impressive work! I've been trying to replicate your work, but I got a strange validation result: why did the metrics suddenly become abnormal and constant from the 11th epoch?
Logs are as follows:
gpu_or_cpu:gpu,
data_path:../dataset/,
dataset:nyudepthv2,
exp_name:test,
batch_size:12,
workers:1,
max_depth:10.0,
max_depth_eval:10.0,
min_depth_eval:0.001,
do_kb_crop:1,
kitti_crop:None,
epochs:25,
lr:0.0001,
crop_h:448,
crop_w:576,
log_dir:./logs,
val_freq:1,
save_freq:10,
save_model:False,
save_result:False,

Epoch: 001 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.8535     0.9817     0.9966     0.1276     0.0726     0.4282     0.1548     0.0534     0.1418 

====================================================================================================

Epoch: 002 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.8559     0.9824     0.9962     0.1301     0.0729     0.4214     0.1528     0.0531     0.1376 

====================================================================================================

Epoch: 003 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.8856     0.9853     0.9972     0.1135     0.0619     0.3891     0.1392     0.0474     0.1268 

====================================================================================================

Epoch: 004 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.8847     0.9834     0.9970     0.1113     0.0608     0.3920     0.1403     0.0480     0.1282 

====================================================================================================

Epoch: 005 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.8932     0.9869     0.9976     0.1072     0.0565     0.3765     0.1338     0.0456     0.1219 

====================================================================================================

Epoch: 006 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.9033     0.9871     0.9969     0.1025     0.0534     0.3661     0.1297     0.0440     0.1187 

====================================================================================================

Epoch: 007 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.9005     0.9865     0.9975     0.1050     0.0541     0.3617     0.1303     0.0444     0.1184 

====================================================================================================

Epoch: 008 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.9038     0.9868     0.9974     0.1030     0.0535     0.3565     0.1285     0.0436     0.1169 

====================================================================================================

Epoch: 009 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.9055     0.9879     0.9976     0.1032     0.0521     0.3543     0.1276     0.0435     0.1157 

====================================================================================================

Epoch: 010 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.9060     0.9870     0.9975     0.1020     0.0559     0.3672     0.1299     0.0442     0.1180 

====================================================================================================

Epoch: 011 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 012 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 013 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 014 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 015 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 016 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 017 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 018 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 019 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 020 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 021 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 022 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 023 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 024 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

Epoch: 025 - 025

    d1         d2         d3    abs_rel     sq_rel       rmse   rmse_log      log10      silog 
0.0000     0.0000     0.0000     0.9995     2.6884     2.8657     7.8043     3.3855     5.5246 

====================================================================================================

dataset

Hello, I am a new student. Could you share the download link for the dataset referenced by train_list.txt? I can't download it.
