
deepmvs's People

Contributors

e1ichan, inchangchoi, phuang17, rasmus25

deepmvs's Issues

Model test and evaluation

Hello,
Could you share an example test dataset? I trained your model on the GTA dataset, but I don't know how to test and evaluate it. I would also like to run it on my own dataset, but I don't understand how you use COLMAP, or what is in the camera.txt and point_cloud.txt files.
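For context, my current attempt at reading a sparse model follows COLMAP's documented text format (cameras.txt / images.txt); I am not sure how this maps to your camera.txt and point_cloud.txt files, so treat the layout as an assumption on my part:

def read_cameras_txt(path):
    # COLMAP text format: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
    cameras = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            tokens = line.split()
            cameras[int(tokens[0])] = {
                "model": tokens[1],
                "width": int(tokens[2]),
                "height": int(tokens[3]),
                "params": [float(t) for t in tokens[4:]],  # e.g. fx, fy, cx, cy
            }
    return cameras

def read_images_txt(path):
    # COLMAP text format: two lines per image; the first is
    # IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME, the second lists 2D points.
    images = {}
    with open(path) as f:
        lines = [l for l in f if not l.startswith("#")]
    for meta in lines[0::2]:
        tokens = meta.split()
        if not tokens:
            continue
        images[int(tokens[0])] = {
            "qvec": [float(t) for t in tokens[1:5]],
            "tvec": [float(t) for t in tokens[5:8]],
            "camera_id": int(tokens[8]),
            "name": tokens[9],
        }
    return images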

About calculating photometric errors

Dear authors,
Thanks for your work and open source of DeepMVS!
Your paper describes how the photometric and geometric errors are calculated, but I couldn't find the corresponding code.
How did you calculate them?
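For reference, my own attempt (which may well differ from the paper's protocol) computes the photometric error as the mean absolute difference between the reference image and a neighbor warped into the reference view with the estimated depth:

import numpy as np

def photometric_error(ref_img, warped_neighbor, valid_mask):
    # Mean absolute intensity difference over pixels that project
    # inside the neighbor image; the validity criterion is my own
    # assumption, not necessarily the one used in the paper.
    diff = np.abs(ref_img.astype(np.float64) - warped_neighbor.astype(np.float64))
    return diff[valid_mask].mean()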
Thanks

Depth-based pose warping not matching exactly

Hi,

I'm trying to use your MVS-Synth dataset for a view-synthesis-related problem. However, when I warp a frame to the view of the next frame, the faraway buildings show some shift, while nearby objects match perfectly. Any idea why this may be happening or how to fix it?

For example, from video 0000, I warped frame 0001 to the view of frame 0002; below are the true and warped images.

True frame 0002:
[image]

Frame 0001 warped to view of 0002:
[image]

My warping code is based on this
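Concretely, it boils down to the following sketch (assuming 4x4 world-to-camera extrinsics and a shared 3x3 intrinsic matrix K; if MVS-Synth uses different conventions, that may already be my bug):

import numpy as np

def warp_to_view(src_img, depth_tgt, K, T_src, T_tgt):
    # Back-project every target pixel with its depth, move it to the
    # source camera, and sample the source image (nearest neighbor).
    h, w = depth_tgt.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    cam = np.linalg.inv(K) @ pix * depth_tgt.reshape(1, -1)
    world = np.linalg.inv(T_tgt) @ np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = T_src @ world
    proj = K @ src[:3]
    uv = np.round(proj[:2] / proj[2:3]).astype(int)
    out = np.zeros_like(src_img)
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (src[2] > 0)
    out[v.reshape(-1)[ok], u.reshape(-1)[ok]] = src_img[uv[1, ok], uv[0, ok]]
    return out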

Results of the trained model

I've tried the provided trained model on the ETH3D dataset, and I got similar but noticeably worse results, with more errors.

I followed the instructions and ran sparse reconstruction with COLMAP.

But the results seem to contain more errors, such as the ones below.

This is a result on the facade scene of the ETH3D benchmark:
[image]

[image]

Most of the large errors are in the sky region, even though I included the MVS-Synth dataset in training.

I also trained the model myself, but I still get similar errors and I don't know why.

Is it because of COLMAP, or am I missing something?

How to reproduce results on DeMoN's testing dataset

Hi,

Thank you for sharing the code!
I want to reproduce the results on DeMoN's testing dataset, but I could only get noisier ones.

Could you give me detailed instructions?
For example,

  1. Should I run COLMAP with fixed intrinsics?
  2. When running the testing script, is there anything I should be careful about?
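For reference, regarding (1), this is how I currently try to fix the intrinsics (my reading of the COLMAP options; fx,fy,cx,cy stand for DeMoN's provided values):

colmap feature_extractor --database_path db.db --image_path images --ImageReader.camera_model PINHOLE --ImageReader.single_camera 1 --ImageReader.camera_params "fx,fy,cx,cy"
colmap exhaustive_matcher --database_path db.db
colmap mapper --database_path db.db --image_path images --output_path sparse --Mapper.ba_refine_focal_length 0 --Mapper.ba_refine_principal_point 0 --Mapper.ba_refine_extra_params 0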

Epic work.

First of all, I would like to thank you for your great work: very clean and well thought out. Good luck with your next paper.

I have also worked in multi-view stereo for quite some time now. With everyone talking about deep learning, I really want to dive into it and combine it with MVS and saliency, but I could not find where to start. Your paper is a good starting point, but I would love for you to share the process of this great project from beginning to end.

Thank you,
Best regards

How to convert depth map to real depth value?

Hi,
I've been doing free-viewpoint video research recently, and depth estimation is a key step of the FVV pipeline. I've tried to use DeepMVS for depth estimation, but I don't know how to convert the resulting estimate into real depth values. Does anyone know how to get the real depth values?
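My current guess (please correct me if wrong) is that the per-pixel label indexes a uniformly sampled disparity range, so the conversion would look like the sketch below, with the caveat that the result is only in the arbitrary scale of the COLMAP sparse reconstruction, not in metric units:

import numpy as np

def label_to_depth(label_map, num_depths=100, max_disparity=1.0):
    # Map label d in {0, ..., num_depths - 1} linearly to disparity,
    # then invert; num_depths and max_disparity are assumed values.
    disparity = label_map.astype(np.float64) * max_disparity / (num_depths - 1)
    with np.errstate(divide="ignore"):
        depth = 1.0 / disparity  # becomes inf where disparity == 0
    return depth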

About the Evaluation Metrics

Dear authors,
Thanks for your work and open source DeepMVS!
In a previous issue, someone already asked about the evaluation code.
I would also like to see it, but I couldn't find it in your latest code.
How did you implement these metrics?
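For reference, this is how I currently implement the three metrics reported in the paper, following my reading of the DeMoN definitions; whether this matches your evaluation code is exactly my question:

import numpy as np

def depth_metrics(pred, gt, valid=None):
    # L1-rel, L1-inv and scale-invariant log error, as I understand
    # them; the validity mask (gt > 0) is my own assumption.
    if valid is None:
        valid = gt > 0
    d, g = pred[valid], gt[valid]
    l1_rel = np.mean(np.abs(d - g) / g)
    l1_inv = np.mean(np.abs(1.0 / d - 1.0 / g))
    z = np.log(d) - np.log(g)
    sc_inv = np.sqrt(np.mean(z ** 2) - np.mean(z) ** 2)
    return l1_rel, l1_inv, sc_inv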
Thanks!

About the extraction of the camera extrinsics and intrinsics

Hello,

I am checking whether the method of extracting extrinsic and intrinsic camera parameters with RenderDoc generalizes to other games.
I would like to know whether RenderDoc alone can extract the extrinsics and intrinsics (or the projection matrix), or whether other tools or modifications to GTA are needed. Also, does stock RenderDoc work, or do I need to modify its source code to build a customized version?

Thank you!

Run test.py without GPU

I am not able to run:

python python/test.py --load_bin --image_path path/to/images --sparse_path path/to/sparse --output_path path/to/output/directory

[image]

parser.add_argument("--no_gpu", dest = "use_gpu", action = "store_false", default = True, help = "Disable use of GPU.")

The flag takes no value: it uses action="store_false" and there is no type=bool, so passing something like --no_gpu False does not work.

So how do I run it on the CPU? I have tried all of these variants with no success.
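For completeness, this is the invocation I expect to work, since the flag takes no value:

python python/test.py --no_gpu --load_bin --image_path path/to/images --sparse_path path/to/sparse --output_path path/to/output/directory

If it then fails while loading the trained weights on a CPU-only machine, my guess (an assumption about where it breaks) is that the checkpoint was saved from GPU tensors, which would need remapping in test.py:

import torch

# Remap GPU-saved storages to the CPU when loading the checkpoint;
# "path/to/weights" is a placeholder for the actual weights file.
state = torch.load("path/to/weights", map_location=lambda storage, loc: storage)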

Question on test evaluation

Hi, thanks for your work. It's great.
I just wonder, if I want to evaluate on the DeMoN test datasets, how can I test more efficiently? Your test instructions say I need to compute the camera parameters again, but for the DeMoN test datasets all the parameters are provided. Did you use the provided camera parameters?

download_training_datasets.py

When I run 'download_training_datasets.py', it shows an error 'ValueError: Could not load bitmap "": LibRaw : failed to open input stream (unknown format)' due to the line:

img = imageio.imread(img.tobytes(), format = "RAW-FI")

After changing it to:

img = imageio.imread(img.tobytes())

the code no longer shows any error, but I am not sure whether the change would cause problems for training.
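A possible alternative to changing the code (untested on my side): my understanding is that the "RAW-FI" reader is backed by the FreeImage library, which imageio does not ship by default, so downloading it once might make the original line work:

import imageio

# Fetch the FreeImage binaries that back imageio's "RAW-FI" format.
imageio.plugins.freeimage.download()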

CUDA version

Do I have to use CUDA 8? Would CUDA 9 also work?

The creation of the MVS-Synth dataset

Hi,

Thanks for your work and open-source MVS-Synth dataset.
I read your paper and am aware that you created this dataset from GTA5. However, how did you get the ground-truth disparity maps and the extrinsic and intrinsic camera parameters? I love this game, but I think it can only capture images or videos. I hope you can give me some advice.

Thanks a lot.

batch_size > 1 will not run

I have never tried batch_size > 1 due to GPU memory limit, but this should have worked. I will make changes so that batch_size > 1 also works.

Depth&Pose Consistency in MVS-Synth Dataset

Dear authors,

Thanks for your work and open-source MVS-Synth dataset.
However, I found that the consistency in the dataset is not good, i.e. pixels cannot be projected to the right position using the provided ground-truth depth and poses. I wrote a simple Python script to demonstrate the pixel mismatch.

A simple example:
[image]

From left to right: image 1, image 2, rendered image 2, and image 2 overlapped with rendered image 2.
Clearly (best viewed at full resolution), the person on the street is not projected to the right position. The lane marker and the wall, highlighted with red circles, are not consistent either, which means that the depth and poses are inconsistent.

You can run more samples yourself.
check_consist.py.tar.gz
Just change line 7 according to your environment; lines 8 and 9 select the left and right images.
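In case it helps to spot a convention mistake on my side, the pose handling in the script boils down to this (I assume the provided extrinsics are 4x4 world-to-camera matrices; if they are camera-to-world, the inverse swaps sides, which could produce exactly this kind of mismatch):

import numpy as np

def relative_pose(E_left, E_right):
    # Transform taking left-camera coordinates to right-camera
    # coordinates, assuming world-to-camera extrinsics.
    return E_right @ np.linalg.inv(E_left)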

Is there any misunderstanding in my code? I hope to hear from you.

Regards,
Kaixuan

Simple Test

Dear authors,
Thanks for your work and open source DeepMVS!
Is there a simple way to test the network on images with known intrinsic and extrinsic parameters, but without COLMAP?
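A workaround I am considering (untested) is to write the known parameters directly into COLMAP's text format so that test.py's loader accepts them; I am not sure whether the script also needs sparse 3D points to bound the depth range, so the empty points3D.txt may be the catch:

def write_colmap_text(out_dir, width, height, fx, fy, cx, cy, poses):
    # poses: list of (qw, qx, qy, qz, tx, ty, tz, image_name) in
    # COLMAP's convention (world-to-camera rotation and translation).
    with open(out_dir + "/cameras.txt", "w") as f:
        f.write("1 PINHOLE %d %d %f %f %f %f\n" % (width, height, fx, fy, cx, cy))
    with open(out_dir + "/images.txt", "w") as f:
        for i, (qw, qx, qy, qz, tx, ty, tz, name) in enumerate(poses):
            # The second (empty) line is where COLMAP lists the 2D points.
            f.write("%d %f %f %f %f %f %f %f 1 %s\n\n"
                    % (i + 1, qw, qx, qy, qz, tx, ty, tz, name))
    open(out_dir + "/points3D.txt", "w").close()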
Thanks

RuntimeError: inconsistent tensor size

Do you have any idea why I run into this problem: RuntimeError: inconsistent tensor size, expected tensor [100 x 3 x 128 x 128] and src [100 x 3 x 112 x 128] to have the same number of elements, but got 4915200 and 4300800 elements? I am new to this field. See below:

Reloaded modules: colmap_helpers, generate_volume_test, model
Loading the trained model...
Successfully loaded the trained model.
Loading the sparse model...
Successfully loaded the sparse model.
Creating VGG model...
Successfully created VGG model.
Start working on image 0/3.
Working on patch at row = 0/3 col = 0/4
Working on patch at row = 0/3 col = 1/4
Working on patch at row = 0/3 col = 2/4
Working on patch at row = 0/3 col = 3/4
Working on patch at row = 1/3 col = 0/4
Working on patch at row = 1/3 col = 1/4
Working on patch at row = 1/3 col = 2/4
Working on patch at row = 1/3 col = 3/4
Working on patch at row = 2/3 col = 0/4
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/.../Desktop/DeepMVS-master/python/test.py', wdir='C:/Users/..../Desktop/DeepMVS-master/python')

File "C:\Users....\deepMVS2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
execfile(filename, namespace)

File "C:\Users....\deepMVS2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/..../Desktop/DeepMVS-master/python/test.py", line 253, in
data_in_tensor[0, 0, :, 0, ...] = ref_img_tensor.expand(num_depths, -1, -1, -1)

RuntimeError: inconsistent tensor size, expected tensor [100 x 3 x 128 x 128] and src [100 x 3 x 112 x 128] to have the same number of elements, but got 4915200 and 4300800 elements respectively at c:\anaconda2\conda-bld\pytorch_1519492996300\work\torch\lib\th\generic/THTensorCopy.c:86
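A workaround I am experimenting with, based on my guess that the failure happens because the last row of patches is only 112 pixels tall while the network expects full 128x128 patches: pad every input image to a multiple of the patch size before running test.py (padding only the bottom and right edges should leave the intrinsics usable):

import numpy as np

def pad_to_multiple(img, multiple=128):
    # Pad bottom and right edges so height and width are multiples of
    # the patch size; assumes an HxWx3 image, and edge padding avoids
    # hard black borders.
    h, w = img.shape[:2]
    ph = (multiple - h % multiple) % multiple
    pw = (multiple - w % multiple) % multiple
    return np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="edge")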
