svip-lab / Indoor-SfMLearner
[ECCV'20] Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation
Thank you very much for your work. When I tried to run the training scripts, I ran into a problem with the NYUv2 dataset: the download link for the entire dataset is broken, so I downloaded the different parts of the dataset individually and unzipped them all, but the scenes do not match the training split. For example, classroom__0016 is not in the dataset I downloaded. Could you give me some advice?
Hi!
Thanks for sharing this great work! I'm trying to train the model, but it seems that the official dataset lacks some data (bedroom_0076b).
I downloaded the raw data from the official website, so I'm wondering whether the train split files contain a mistake or my dataset is not prepared completely.
Thanks a lot!
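For anyone hitting the same missing-scene problem, a small check like the following can list which scenes from the train split are absent from a local download. This is a hypothetical sketch: the split-file format (scene name as the first whitespace-separated token per line) and the one-folder-per-scene layout are assumptions, not necessarily this repo's exact conventions.

```python
import os

def missing_scenes(split_scenes, downloaded_scenes):
    """Scenes required by the training split but absent from the download."""
    return sorted(set(split_scenes) - set(downloaded_scenes))

def check_dataset(split_file, data_root):
    # Assumed format: the scene name is the first token on each split line,
    # and each downloaded scene is a folder directly under data_root.
    with open(split_file) as f:
        scenes = [line.split()[0] for line in f if line.strip()]
    return missing_scenes(scenes, os.listdir(data_root))
```

Running `check_dataset` on your split file and data root should directly print whether entries like classroom__0016 or bedroom_0076b are genuinely missing from the copy you downloaded.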
Thank you for your excellent work. Why is the multi-scale training strategy disabled in your code? The multi-scale loss works very well in Monodepth2.
Hi, I noticed that you adopted the point-selection strategy from DSO for its effectiveness and efficiency. Points in DSO are sampled from pixels with large intensity gradients. I'm wondering why this strategy is effective and efficient, and why not use a Sobel filter to find points with large intensity gradients?
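For context, the idea under discussion can be sketched roughly as follows: rank pixels by intensity-gradient magnitude and keep the strongest. This is a deliberately simplified stand-in (using `np.gradient`, not DSO's actual region-adaptive sampling or a Sobel filter):

```python
import numpy as np

def sample_high_gradient_points(img, num_points=512):
    """Simplified gradient-based point selection: pick the pixels with the
    largest intensity-gradient magnitude. Not DSO's real scheme, which also
    spreads points spatially with per-region adaptive thresholds."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    # Indices of the strongest-gradient pixels, largest first.
    flat = np.argsort(mag, axis=None)[::-1][:num_points]
    ys, xs = np.unravel_index(flat, img.shape)
    return np.stack([ys, xs], axis=1)  # (num_points, 2) pixel coordinates
```

A Sobel filter would give a very similar gradient-magnitude map; the practical difference in DSO is less about which gradient operator is used and more about how points are thresholded and spread across the image.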
Hello, and thanks for your earlier reply; I have a new question.
I want to build my own multi-frame self-supervised indoor depth estimation network, so I referred to DeepV2D and Indoor-SfMLearner.
I found that the test-set images of the two are not exactly the same.
This is the test set provided by DeepV2D: wget https://www.dropbox.com/s/numnge239p7ll7o/nyu_test.zip
In it, 000/000.png and the RGB image in 00001.h5 of nyu_test are not exactly the same.
I undistorted 000.png following the _undistort method in Indoor-SfMLearner and compared again, but they still differ: the sticker on the refrigerator is noticeably smaller in 00001.h5, while it is clearly visible in both the original and the undistorted 000.png. I can't figure out why.
The first image is the original DeepV2D image, the second is the result of undistorting it the Indoor-SfMLearner way, and the third is the image from 00001.h5.
The 00001.h5 image clearly looks rotated relative to the original DeepV2D image. Even if I only compare the region [44:471, 40:601, :],
np.allclose on the two arrays returns False, and subtracting them yields fairly large values.
I also compared the depth ground truth: wget https://www.dropbox.com/s/u5pu0j2ysed64ja/nyu_groundtruth.npy
Within [44:471, 40:601, :], the two are identical.
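The comparison described above can be reproduced with a small helper. The crop bounds come from the post; the tolerance is an arbitrary illustrative choice, not a value from either repo:

```python
import numpy as np

def compare_crop(img_a, img_b, atol=1.0):
    """Compare two images inside the NYUv2-style valid crop [44:471, 40:601].
    Returns (allclose result, maximum absolute per-pixel difference)."""
    a = img_a[44:471, 40:601]
    b = img_b[44:471, 40:601]
    close = np.allclose(a, b, atol=atol)
    # Cast to a signed type so uint8 subtraction cannot wrap around.
    max_diff = np.abs(a.astype(np.int32) - b.astype(np.int32)).max()
    return close, max_diff
```

A large `max_diff` concentrated on one side of the crop would be consistent with the rotation/rectification difference described above, whereas a uniformly small one would suggest only compression or interpolation noise.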
I'm curious about the results
Should this be input_image = input_image.resize((thisW, thisH), pil.LANCZOS) ?
Hello.
I'm working on my undergraduate thesis and would like to use the depth estimation module from your paper. Is the depth map it produces absolute depth or relative depth? Also, what is the correspondence between color and depth? Could you tell me which .py file in the networks folder covers this part?
Indoor-SfMLearner/trainer_geo.py
Line 184 in efc8cdc
Thanks for your great work!
Can I ask two questions about the dataloader?
In 'NYUDataset', we undistort images.
Indoor-SfMLearner/datasets/nyu_dataset.py
Line 308 in efc8cdc
Indoor-SfMLearner/datasets/nyu_dataset.py
Line 391 in efc8cdc
Here we use self.full_res_shape (608, 448), instead of (640, 480), to compute the normalized intrinsics. Will this have a negative influence?
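To make the question concrete, here is a sketch of how normalizing the same pinhole intrinsics by (608, 448) versus (640, 480) yields different matrices. The calibration values below are illustrative placeholders, not the repo's actual NYUv2 intrinsics:

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy), not the repo's calibration.
K = np.array([[518.9, 0.0, 325.6],
              [0.0, 519.5, 253.7],
              [0.0, 0.0, 1.0]])

def normalize_intrinsics(K, width, height):
    """Scale the intrinsics so they are expressed in unit image coordinates."""
    Kn = K.copy()
    Kn[0, :] /= width   # fx and cx scaled by image width
    Kn[1, :] /= height  # fy and cy scaled by image height
    return Kn

K_608 = normalize_intrinsics(K, 608, 448)  # self.full_res_shape
K_640 = normalize_intrinsics(K, 640, 480)  # raw NYUv2 resolution
```

Whether the difference matters depends on what resolution the images actually have when they reach the network: if they are already cropped/resized to 608x448 at that point, normalizing by that shape is the consistent choice.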
Indoor-SfMLearner/datasets/nyu_dataset.py
Line 329 in efc8cdc
It seems that the keypoints extracted by DSO are mainly distributed around object edges, so the depth variance there may be large. I'm wondering whether the same-depth assumption is still plausible; could you please share the idea behind this implementation?
Before training, the function val evaluates the initial model on the NYUv2 test set, and the result is:
abs_rel | sq_rel | rmse | rmse_log | a1 | a2 | a3
0.323 | 0.448 | 1.002 | 0.365 | 0.520 | 0.783 | 0.905
That surprised me. Am I doing something wrong? Why does the initial model already perform this well on the NYUv2 test set?
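For reference, the seven numbers in the table above are the standard Eigen-style monocular depth metrics. A minimal sketch of how they are typically computed (mirroring common Monodepth-style evaluation code, not necessarily this repo's exact implementation):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular-depth error metrics over valid pixels:
    abs_rel, sq_rel, rmse, rmse_log, and threshold accuracies a1/a2/a3."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```

Note that many evaluation scripts apply per-image median scaling of the prediction to the ground truth before computing these metrics, which can make even an untrained-on-NYUv2 model look deceptively reasonable.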
Thanks for sharing code!
How did you get this result: by directly evaluating the Monodepth2 pre-trained model, or by training Monodepth2 on the NYUv2 dataset before evaluating?
By the way, are networks.py and partialconv.py unrelated to your paper?
For some reason, I can't log in to OneDrive,
so I cannot download the pretrained model. Could you provide another link?
Thanks very much.
Indoor-SfMLearner/trainer_geo.py
Line 518 in 0d682b7
Hi,
I tried to train my model without the superpixel planar regularization, but the training seems to collapse and the output becomes all zeros. Have you faced this problem and managed to solve it?
Thanks in advance.
Hi, is there any chance someone could share the raw NYUv2 dataset? The download link on the NYUv2 website (nyu-v2) seems to be invalid now, and I suspect my copy of the raw dataset may be inconsistent.
Thanks in advance.