
llnerf's Introduction

Hi there 👋

  • 🔭 I’m currently in Hong Kong, China, and working on neural rendering research.
  • 😄 My recent work: lcdpnet, llnerf and NeP
  • 🏡 Visit my homepage to learn more!

llnerf's People

Contributors

onpix

llnerf's Issues

AttributeError: module 'pycolmap' has no attribute 'SceneManager'

Hello! I have a question about pycolmap. I first ran pip install pycolmap in my environment. However, when I run "bash train.sh", I get the attribute error that module 'pycolmap' has no attribute 'SceneManager'. I noticed that datasets.py says a NeRF-specific extension to the third-party library exists, but I still cannot find the file scene_manager.py. Can you provide some suggestions? Thank you very much.
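A likely cause (an assumption, not confirmed by the maintainer): pip install pycolmap fetches the official COLMAP Python bindings, while codebases in the multinerf/RawNeRF family depend on an unrelated pycolmap package from rmbrualla/pycolmap, which is the one that provides SceneManager in scene_manager.py. Swapping the package may resolve the error:

```shell
# Remove the official COLMAP bindings, which shadow the expected package.
pip uninstall -y pycolmap

# Install the multinerf-style pycolmap that exposes SceneManager
# (repository name is an assumption based on related codebases).
pip install git+https://github.com/rmbrualla/pycolmap
```

After reinstalling, `from pycolmap import SceneManager` should succeed if this was indeed the conflict.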

Pair Data Problem.

Hi, dear author, thanks again for your work. I am trying to run your data on my own codebase, but I am confused by two problems; maybe I have gone wrong somewhere:

(1) There is no paired data for the comparison (the still2 scene, for example):
The normal-light images:
[screenshot]
The low-light images:
[screenshot]

As the screenshots show, DSC01652.png ... is not in the normal-light part.

(2) When I run the data on my own codebase (LLFF data format), I found that I could not render NeRF on the normal-light images; the generated images look like the screenshot below. I see your code is based on RawNeRF, and when I run RawNeRF data on my codebase, everything is OK.

So is this a problem with the camera pose, or where did I go wrong? Thank you in advance!

[screenshot]

Comparison results in Table 1

Thanks for your nice work! It's great work. I have a question about Table 1: it reports comparison results on normal-light scenes, but the dataset I downloaded does not seem to include the normal-lit part. Where can I find the normal-light images? Thanks~

[screenshot]

About smooth loss

Hi, thanks for your great work!

I am trying to replicate your work in the PyTorch framework. Following your code, I get preliminary results:
[screenshot]

There is clearly an error in the enhanced-image part.
I initially found that my understanding of the smooth loss was wrong.
In your code, you first define three types of rays for the loss by taking the raw pixel plus its horizontal and vertical neighbours. So, in my understanding, the ray-origin or ray-direction tensor of each batch should have shape (batch, 3, 3)?

However, in my PyTorch code I defined the ray origin or ray direction of each batch as (batch, 3), so I ignored the middle dimension; that is, I did not get the three types of rays, which made the smooth loss invalid.

So how should I fix it? Simply add a grid of extra pixels to get the three types of rays? Then, to pass through the linear layers, the input would need to be reshaped to two dimensions (batch*3, 3) and finally reshaped back to (batch, 3, 3) to compute the smooth loss. Can such a method successfully obtain enhanced images?
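The reshape-then-restore pattern described above can be sketched as follows. This is a minimal illustration of the tensor logic only, written with NumPy; the "network" is a stand-in function, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_loss_sketch(batch=4):
    # Each batch element carries 3 rays: the center pixel plus its
    # horizontal and vertical neighbours -> shape (batch, 3, 3).
    rays = rng.random((batch, 3, 3), dtype=np.float32)

    # Flatten to (batch * 3, 3) so every ray passes through ordinary
    # linear layers, which expect 2-D input.
    flat = rays.reshape(-1, 3)

    # Stand-in for the network: any per-ray mapping to an RGB color.
    colors = np.tanh(flat)                 # (batch * 3, 3)

    # Reshape back to (batch, 3, 3) to pair each center pixel with
    # its two neighbours again.
    colors = colors.reshape(batch, 3, 3)
    center, horiz, vert = colors[:, 0], colors[:, 1], colors[:, 2]

    # Smoothness term: penalize color differences between neighbours.
    loss = np.mean((center - horiz) ** 2) + np.mean((center - vert) ** 2)
    return colors.shape, float(loss)
```

In PyTorch the same pattern applies: `view(-1, 3)` before the linear layers and `view(batch, 3, 3)` afterwards, so the smooth loss can index the three ray types again.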

I would appreciate your help!

About environment!

I can't create the environment successfully:

Collecting ipython==8.4.0
  Downloading http://mirrors.aliyun.com/pypi/packages/fe/10/0a5925e6e8e4c948b195b4c776cae0d9d7bc6382008a0f7ed2d293bf1cfb/ipython-8.4.0-py3-none-any.whl (750 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 750.8/750.8 kB 7.7 MB/s eta 0:00:00
Collecting jax==0.3.17
  Downloading http://mirrors.aliyun.com/pypi/packages/87/74/950b7af8176499fdc3afea6352b4734325a1c735c026eeb3918b7e422b9a/jax-0.3.17.tar.gz (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 29.6 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement jaxlib==0.3.15+cuda11.cudnn82 (from versions: 0.1.63, 0.1.74, 0.1.75, 0.1.76, 0.3.0, 0.3.2, 0.3.5, 0.3.7, 0.3.10, 0.3.14, 0.3.15, 0.3.20, 0.3.22, 0.3.24, 0.3.25, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15)
ERROR: No matching distribution found for jaxlib==0.3.15+cuda11.cudnn82

failed

CondaEnvException: Pip failed

By the way, you said the CUDA version is 11.8, but in environment.yaml it is 11.3. Which is right?

About the camera pose ground truth

Hello, could you please share any suggestions on how you obtained the ground truth of the camera poses when collecting the dataset? I did not find any instructions in your paper. Did you use COLMAP to register the low-light images in your dataset?
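For reference, a typical COLMAP command-line pipeline for registering a set of images looks like the following. This is a generic sketch, not the authors' confirmed procedure, and IMAGE_DIR / WORKSPACE are placeholder paths:

```shell
# Extract features, match them, and run sparse reconstruction.
colmap feature_extractor --database_path WORKSPACE/database.db --image_path IMAGE_DIR
colmap exhaustive_matcher --database_path WORKSPACE/database.db
mkdir -p WORKSPACE/sparse
colmap mapper --database_path WORKSPACE/database.db \
    --image_path IMAGE_DIR --output_path WORKSPACE/sparse
```

Whether the authors ran this on the low-light frames, the normal-light frames, or raw data is exactly the open question of this issue.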

About Code RAW

A function call is missing at internal/dataset.py, line 625: if the chosen mode is RAW, self.image_paths is never set in your code.

You should add, after line 625:

colmap_files = sorted(utils.listdir(colmap_image_dir))
image_files = sorted(utils.listdir(image_dir))
colmap_to_image = dict(zip(colmap_files, image_files))
image_paths = [os.path.join(image_dir, colmap_to_image[f])
               for f in image_names]
self.image_paths = np.array(image_paths)

Out of memory

Hello, I have solved the problem with pycolmap, but when I run "bash scripts/train.sh" I get a new error, as follows:
jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory allocating 20029400304 bytes.

Thank you so much for the readme and code; a little hint would be really appreciated!
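Not an official fix, but two standard JAX/XLA environment variables often help with GPU out-of-memory errors (lowering the batch size in the training config is the other common remedy):

```shell
# Disable XLA's default preallocation of most of the GPU memory...
export XLA_PYTHON_CLIENT_PREALLOCATE=false
# ...or instead cap the fraction of GPU memory JAX may claim.
export XLA_PYTHON_CLIENT_MEM_FRACTION=0.6

bash scripts/train.sh
```

If the allocation request itself (~20 GB) exceeds the card, only a smaller batch or ray chunk size will help.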

About pycolmap

When I run scripts/train.sh, I hit a problem with pycolmap:
from pycolmap import SceneManager
ImportError: cannot import name 'SceneManager' from 'pycolmap' (/home/gtyssg/miniconda3/envs/jax/lib/python3.9/site-packages/pycolmap.cpython-39-x86_64-linux-gnu.so)

Inquiry regarding dataset image resolutions

I want to express my gratitude for making the LLNeRF dataset available. However, I have a question about the dataset. I observed that for scenes such as still2, still3, and still4 evaluated in the paper, there appears to be a difference in resolution between the normal-light and low-light images. Could you kindly explain how this resolution disparity was handled during the preprocessing phase?

About data supervision

Why is the enhancement process described as unsupervised in the paper, while a data-supervision term E[η(c̃_r) − η(c_r)] is added that pulls the enhanced color toward the color of the low-light image? How can this achieve enhancement?

About code

[screenshot]
Does the code above correspond to the density network?

And if I want to remove pos_enc, can I do this?

x = lifted_means
inputs = x

About train my own datasets

Hi,
When I train on my own dataset and render it, I get an mp4 and some PNG files. However, the mp4 I got cannot be opened:
[screenshot]

So I looked at the PNGs, but they show the wrong scene:
[screenshot: rgb_enhanced_008]
while the original, normal scene is:
[screenshot]

So what is the problem?

About training time

I would like to ask how long it takes to train a model on a scene like book.

About train!

Hi
My GPU is a V100 with 32 GB memory, and I have only one GPU. With your default settings, training is very slow: for example, with the default batch_size=1024 (among other settings), it takes about half an hour to finish 100 steps, so finishing 100k steps would take twenty days!
Why is the speed so slow? Can I change the batch_size to speed it up, or do other settings need to change? Can you give me some suggestions?
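If the code follows the multinerf/RawNeRF convention it builds on (an assumption; the exact flag and binding names must be checked against scripts/train.sh), hyperparameters such as the batch size are set through gin bindings and could be overridden like this:

```shell
# Hypothetical override, assuming multinerf-style gin configuration;
# config path and parameter name are placeholders to verify locally.
python train.py \
  --gin_configs=configs/llnerf.gin \
  --gin_bindings="Config.batch_size = 4096"
```

Note that raising the batch size increases memory use and may require rescaling the learning rate, so it trades one bottleneck for another on a single 32 GB GPU.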

About the red background in rendering results

Hello, I found that when rendering with your dataset and with a dataset I made myself, the final result has a red blur. This layer of red looks fine on your dataset, but it shows as a messy pink shade on mine. Can this red be removed?

No such file or directory: '/llnerf-dataset/***/transforms.json'

When I try to run "bash train.sh", I first get the FileNotFoundError: "No such file or directory: '/LLNeRF/llnerf-dataset/book/transforms.json'". Then I tried all the datasets, but none of them contains transforms.json. Could you help solve this problem? Thanks so much!
