onpix / llnerf
[ICCV2023] Lighting up NeRF via Unsupervised Decomposition and Enhancement
Home Page: https://whyy.site/paper/llnerf
License: MIT License
Hello! I want to ask a question about pycolmap. I first ran pip install pycolmap in my environment, but when I run "bash train.sh" I get an AttributeError: module 'pycolmap' has no attribute 'SceneManager'. I notice that datasets.py says a NeRF-specific extension to this third-party library exists, but I still cannot find the file scene_manager.py. Can you provide some suggestions? Thank you very much.
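A hedged note, in case it helps others hitting this: multinerf-style codebases typically expect the SceneManager wrapper from https://github.com/rmbrualla/pycolmap rather than the PyPI package of the same name. A minimal check, with the clone destination being an assumption rather than something from this repo:

# Sketch: distinguish the PyPI pycolmap (official COLMAP bindings, which
# has no SceneManager) from the rmbrualla/pycolmap wrapper that
# multinerf-style code expects.
import pycolmap
if not hasattr(pycolmap, "SceneManager"):
    print("PyPI pycolmap is installed at:", pycolmap.__file__)
    # Replace it with the wrapper, e.g.:
    #   pip uninstall pycolmap
    #   git clone https://github.com/rmbrualla/pycolmap ./internal/pycolmap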
Hi, dear author, thanks again for your work. I am trying to run your data on my own codebase now, but I am confused by two problems; I think I may have gone wrong somewhere:
(1) There are no paired data for the comparison (the still2 scene, for example):
The normal-light images:
The low-light images:
As shown, DSC01652.png ... is not in the normal-light part.
(2) When I run the data on my own codebase (LLFF data format), I find that I cannot render NeRF on the normal-light images; the generated images look like the following. I see your code is based on RawNeRF, and when I run RawNeRF data on my codebase everything is OK.
So is this a problem with the camera poses, or where did I go wrong? Thank you in advance!
Hi, thanks for your great work!
I am trying to replicate your work in the PyTorch framework. Following the structure of your code, I get these preliminary results:
It is obvious that there is an error in the enhanced-image part.
I initially found that my understanding of the smooth loss was wrong.
In your code, you define three kinds of rays to compute the loss, taking the raw pixel together with its horizontal and vertical neighbors. So in my understanding, the ray-origin and ray-direction tensors of each batch should have shape (batch, 3, 3)?
However, in my PyTorch code I defined the ray origins and directions of each batch with shape (batch, 3), so I ignored the middle dimension; that is, I never built the three kinds of rays, which makes the smooth loss invalid.
So how should I fix it? Simply sample a grid of extra pixels to get the three kinds of rays? Then, before going into the network, the tensor would need to be reshaped to two dimensions, (batch*3, 3), to pass through the linear layers, and finally reshaped back to (batch, 3, 3) to compute the smooth loss (sketched below). Can such a method successfully produce enhanced images?
I would appreciate your help!
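In case it is useful, here is a minimal PyTorch sketch of that flatten-and-reshape approach; get_rays and mlp_render are hypothetical stand-ins, not names from this repo:

import torch

def make_neighbor_rays(pixels, get_rays):
    # pixels: (batch, 2) integer pixel coordinates. Build three coordinates
    # per pixel: the pixel itself plus its horizontal and vertical neighbors.
    offsets = torch.tensor([[0, 0], [1, 0], [0, 1]], device=pixels.device)
    coords = (pixels[:, None, :] + offsets[None, :, :]).reshape(-1, 2)
    origins, dirs = get_rays(coords)                   # each (batch*3, 3)
    return origins.reshape(-1, 3, 3), dirs.reshape(-1, 3, 3)

def smooth_loss(origins, dirs, mlp_render):
    b = origins.shape[0]
    # Flatten the neighbor axis so the network sees ordinary (N, 3) rays.
    rgb = mlp_render(origins.reshape(-1, 3), dirs.reshape(-1, 3))
    center, horiz, vert = rgb.reshape(b, 3, 3).unbind(dim=1)
    # Penalize color differences between each pixel and its two neighbors.
    return ((center - horiz) ** 2 + (center - vert) ** 2).mean()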
I can't create the environment successfully:
Collecting ipython==8.4.0
Downloading http://mirrors.aliyun.com/pypi/packages/fe/10/0a5925e6e8e4c948b195b4c776cae0d9d7bc6382008a0f7ed2d293bf1cfb/ipython-8.4.0-py3-none-any.whl (750 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 750.8/750.8 kB 7.7 MB/s eta 0:00:00
Collecting jax==0.3.17
Downloading http://mirrors.aliyun.com/pypi/packages/87/74/950b7af8176499fdc3afea6352b4734325a1c735c026eeb3918b7e422b9a/jax-0.3.17.tar.gz (1.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 29.6 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement jaxlib==0.3.15+cuda11.cudnn82 (from versions: 0.1.63, 0.1.74, 0.1.75, 0.1.76, 0.3.0, 0.3.2, 0.3.5, 0.3.7, 0.3.10, 0.3.14, 0.3.15, 0.3.20, 0.3.22, 0.3.24, 0.3.25, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15)
ERROR: No matching distribution found for jaxlib==0.3.15+cuda11.cudnn82
failed
CondaEnvException: Pip failed
By the way, you mentioned CUDA 11.8, but in environment.yaml the CUDA version is 11.3. Which one is right?
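For what it's worth, the "+cuda11.cudnn82" jaxlib wheels were never hosted on PyPI (or mirrors like Aliyun); they come from Google's jax-releases index, so pip needs an extra find-links URL. A minimal sketch of the install, with the version pins taken from the error above:

# Sketch: install the CUDA jaxlib wheel from Google's find-links page,
# which hosts the "+cuda11.cudnn82" local-version builds.
import subprocess, sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "jax==0.3.17", "jaxlib==0.3.15+cuda11.cudnn82",
    "-f", "https://storage.googleapis.com/jax-releases/jax_cuda_releases.html",
])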
Hello, could you please share any suggestions about how you obtained the ground-truth camera poses when collecting the dataset? I did not find any instructions in your paper. Did you use COLMAP to register the low-light images in your dataset?
There is a missing piece at internal/datasets.py, line 625: if the chosen mode is RAW, self.image_paths is never set in your code.
You should add, after line 625:
# Map each COLMAP image name to the corresponding file on disk, then
# record the full paths so that self.image_paths exists in RAW mode.
colmap_files = sorted(utils.listdir(colmap_image_dir))
image_files = sorted(utils.listdir(image_dir))
colmap_to_image = dict(zip(colmap_files, image_files))
image_paths = [os.path.join(image_dir, colmap_to_image[f])
               for f in image_names]
self.image_paths = np.array(image_paths)
Hello, I have solved the problem with pycolmap, but when I run "bash scripts/train.sh" I get a new error, as follows:
jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory allocating 20029400304 bytes.
Thank you so much for the readme and code; if you can give me a little hint, it would be really appreciated!
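A hedged suggestion for the out-of-memory error: JAX preallocates most of the GPU by default, so the standard XLA client options (set before JAX initializes) can reduce pressure; lowering the batch size in the config is the other usual lever.

import os
# Standard XLA GPU-client options; set one of these before importing jax.
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"   # allocate on demand
# ...or keep preallocation but cap its share of GPU memory:
# os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = ".7"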
When I run scripts/train.sh, I meet a problem with pycolmap:
from pycolmap import SceneManager
ImportError: cannot import name 'SceneManager' from 'pycolmap' (/home/gtyssg/miniconda3/envs/jax/lib/python3.9/site-packages/pycolmap.cpython-39-x86_64-linux-gnu.so)
I want to express my gratitude for making the LLNeRF dataset available. However, a question came up while working with it: I observed that for scenes evaluated in the paper, such as still2, still3, and still4, there is a difference in resolution between the normal-light and low-light images. Could you kindly explain how this resolution disparity was handled during preprocessing?
Why is the enhancement process unsupervised in the paper, yet a data-supervision term $\mathbb{E}[\eta(\hat{c}_r) - \eta(c_r)]$ is added, which pushes the enhanced color toward the color of the low-light image? How can this achieve enhancement?
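A hedged reading of that term, assuming $\eta$ is an intensity-normalizing map as in gray-world-style color losses (the exact form in the paper may differ):

$$\eta(c) = \frac{c}{\sum_i c_i}, \qquad \eta(kc) = \eta(c) \quad \text{for all } k > 0,$$

so $\mathbb{E}\big[\lVert \eta(\hat{c}_r) - \eta(c_r) \rVert\big]$ constrains only the chromaticity of the enhanced color $\hat{c}_r$ to match the low-light observation $c_r$, while the overall brightness is left unconstrained for the enhancement to raise.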
I would like to ask how long it takes to train a model on a scene like book.
Hi,
My GPU is a V100 with 32 GB memory, and I have only one GPU. I found that with your default settings the training speed is very slow: with the default setting (batch_size=1024, ...), it takes about half an hour to finish 100 steps, so to finish 100k steps I would need about twenty days (arithmetic sketched below)!
My question is: why is the speed so slow? Can I increase the batch_size to speed it up, or do other settings need to change? Can you give me some suggestions?
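For reference, the arithmetic behind that estimate:

# 0.5 hours per 100 steps, extrapolated to 100k steps.
steps_total = 100_000
total_hours = steps_total / 100 * 0.5   # = 500 hours
print(total_hours / 24)                 # ~20.8 days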
Hello, I found that when rendering with your dataset and with a dataset I made myself, the final result has a red blur. This layer of red looks fine on your dataset, but it shows up as a messy pink shade on my dataset. I would like to ask whether this red can be removed.
When I try to run "bash train.sh", I first get a FileNotFoundError: "No such file or directory: '/LLNeRF/llnerf-dataset/book/transforms.json'". Then I tried all the datasets, but none of them contains transforms.json. Could you help solve this problem? Thanks so much!