emlight's People

Contributors

fnzhan

emlight's Issues

Why tone-map the FoV image?

Hello!

I found that you use a TonemapHDR(..) function to tone-map the images in the '\crop' folder.
Does this mean that the images in '\crop' are just cropped from the HDR panoramas, without any tone mapping applied?
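
By "tone mapping" I mean something like the generic percentile-plus-gamma sketch below (my own rough version in Python, not necessarily what TonemapHDR(..) actually does):

    import numpy as np

    def tonemap_hdr(hdr, gamma=2.4, percentile=50, max_mapping=0.5):
        # Generic HDR-to-LDR sketch: gamma-compress, scale so the chosen
        # luminance percentile lands at max_mapping, then clip to [0, 1].
        compressed = np.power(np.maximum(hdr, 0.0), 1.0 / gamma)
        alpha = max_mapping / (np.percentile(compressed, percentile) + 1e-10)
        return np.clip(alpha * compressed, 0.0, 1.0), alpha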

evaluation metrics

Hi~ I have some questions about RMSE and si-RMSE.
I know the metrics, but I don't know which quantity they are computed on. Is it the illumination map?
I also want to ask how to calculate the Angular Error.

thanks!

Running inference only?

I have been trying to get the code to generate a rough HDRI from an input image, but have had no luck so far. I have downloaded the pretrained model but am not entirely sure where to start. Can anyone help me?

panorama warping

Hi, fnzhan:
The article mentions that, when extracting crops from a panorama, you apply the same image warping as in [1] to each image.
Since there is no open-source code for [1], did you implement this method yourself, or did you ask the authors for their source code? Or is the warping not applied when training the network?
Best wishes.
[1] Learning to Predict Indoor Illumination from a Single Image. In: SIGGRAPH Asia (2017)

something about the output of regression

Forgive me if this is a basic question.
The input to the first stage of the model is a local image crop, so I expected its output to be the local lighting.
But from the code the output seems to be global, with no position information. How is this achieved?

How to balance the size of the input image?

Different scales of input image carry different amounts of scene information.
For example, one image might contain just a single object, say a chair, while another contains the same chair plus the environment around it. The two images cover different regions, one smaller than the other, with the first fully contained in the second.
How does the model handle this, or what did you do to address it?
Thanks for any reply~

about supplementary file & dataset file structure

Hi!

  1. In the paper, you mention 'Detailed network structure of neural projector and the training settings are provided in the supplementary file.', but I cannot find the supplementary file. Could you tell me where it is?

  2. Also, I tried to train on my own dataset with your code for comparison, but I don't know the expected file structure, so I ran into some problems. Could you tell me the directory structure of the original dataset and the format of the images in each folder?

Thank you very much!

Question about the train/test split

Dear authors,

Thanks for open-sourcing your code; it is really helpful. I have a question about the train/test split of the Laval HDR dataset from reading the paper. It says that 19,556 training pairs are generated from 2,100 HDR panoramas, and 200 images are randomly selected as the test set. I am not sure whether the 200 images are selected from the 2,100 HDR panoramas or from the 19,556 generated images. If they are from the 19,556 images, how do you make sure that their corresponding HDR panoramas are not in the training set, i.e. not seen during training?

Best regards,
Hao XU

how to insert virtual objects into a real scene

Hi fnzhan,
In your qualitative evaluation, how do you insert the virtual objects into an LDR photo of a real scene?
Did you implement this yourself, or did you use an existing method?
Best wishes

How to reconstruct the environment map from the needlet coefficients?

Hello, fnzhan:
Thank you for releasing the needlets code. However, I could not find the code that generates 'SN_Matrix3.npy', which is mentioned in gt_jen_j3.py. Would you mind providing more details about extracting the needlet coefficients from an HDR panorama and reconstructing the environment map from those coefficients?

pre-trained models

Hi again!

Could you please provide the pre-trained models (to reproduce your results)?

Thanks!

dataset for training

Hi,

Congrats on the great work!

I'm trying to run the training code and I'm getting the following error:

(base) root@bd14969643f5:~/codes/Illumination-Estimation/RegressionNetwork# CUDA_VISIBLE_DEVICES=2 python train.py
  + Number of params: 9.50M
0 optim: 0.001
Traceback (most recent call last):
  File "train.py", line 68, in <module>
    for i, para in enumerate(dataloader):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/codes/Illumination-Estimation/RegressionNetwork/data.py", line 68, in __getitem__
    training_pair['depth'] = torch.from_numpy(gt['depth']).float()
KeyError: 'depth'

I generated the dataset using distribution_representation.py, but I could not find where 'depth' is added to the saved parameters.
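
In case it helps to clarify what I mean, my naive workaround would be a guard like the sketch below (my own hack; the placeholder depth of ones and its shape are pure guesses and probably don't match the intended training setup):

    # Hypothetical edit around data.py line 68: fall back to a dummy depth
    # map when the generated pickle contains no 'depth' entry.
    if 'depth' in gt:
        training_pair['depth'] = torch.from_numpy(gt['depth']).float()
    else:
        # Placeholder only; the (128, 256) shape is a guess, not the real format.
        training_pair['depth'] = torch.ones(128, 256)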

Thanks for the help!

Generator network weights

Hi! I was wondering if you could share the weights of the generator network trained on your training set?
Thank you!

why divide intensity by 500 in data.py?

Hi!
I found that the intensity term is difficult to converge during training. Is this a common phenomenon, or a problem on my side?
Also, I noticed that you divide the intensity by 500 in data.py before training. Why is this done?

training_pair['intensity'] = torch.from_numpy(np.array(gt['intensity'])).float() * alpha / 500

Best wishes!

three spheres with different materials

Hi,
I have some questions about the evaluation.

The paper mentions that the scenes used in the evaluations consist of three spheres with different materials: diffuse gray, matte silver, and mirror silver.

Were they built with Blender?

If so, could you provide the material parameters of the three spheres?

How to reconstruct environment map?

[attached image: envmap]

I tried both the training and test code on the Laval Indoor dataset, and the test code produces a result like the image shown above. It appears to be the Gaussian map mentioned in the paper, and I wonder how to reconstruct the environment map from this image.

Thank you so much for your excellent work and I look forward to any reply.

How to evaluate on the Virtual Object Relighting (VOR) dataset

Hi, @fnzhan

I downloaded the Virtual Object Relighting (VOR) dataset and I do not know how to evaluate the results. The dataset includes .blend, .blend1, and .jpg files; what are they? I know Blender a little bit. I read the related issues and know how to replace the illumination maps. Do we need to save the rendered images one by one? After obtaining the rendered images (for both the GT and the predicted illumination maps), have you released the code to compute the RMSE, si-RMSE, Angular Error, and AMT metrics? It would be nice if you could provide more details about the evaluation. Thank you!

depth information

Hi,

I saw that your code in data.py reads a ['depth'] term, but 'depth' is not among the parameters saved in the pkl by distribution_representation.py.
Is 'depth' used in training, and how can I obtain the 'depth' information?

thanks!

requirements to run the code

Hi again!

Could you please share your running environment so that I can run the code with little to no changes?
I'm facing some silly problems, like NumPy-to-torch conversion, which I suspect is because you use a different PyTorch version than mine; otherwise you would hit the same problem. One example of the errors I'm getting:

$ Illumination-Estimation/RegressionNetwork/train.py
  + Number of params: 9.50M
0 optim: 0.001
Traceback (most recent call last):
  File "Illumination-Estimation/RegressionNetwork/train.py", line 82, in <module>
    dist_emloss = GMLoss(dist_pred, dist_gt, depth_gt).sum() * 1000.0
  File "/miniconda3/envs/pt110/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 43, in forward
    scaling=self.scaling, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 72, in sinkhorn_tensorized
    self.distance = distance(batchsize=B, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 79, in __init__
    anchors = geometric_points(self.N, geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 70, in geometric_points
    points[:, 0] = radius * np.cos(theta)
TypeError: mul(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
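
To illustrate, the kind of local patch I am considering in gmloss/utils.py is sketched below (my own guess; it assumes theta is a NumPy array and radius a torch tensor, and may not be the right fix for your PyTorch version):

    # Hypothetical patch for geometric_points(): convert the NumPy result
    # to a tensor before multiplying by the torch tensor radius.
    cos_theta = torch.from_numpy(np.cos(theta)).to(dtype=radius.dtype)
    points[:, 0] = radius * cos_theta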

Thanks!

benchmark in GMLight

Hi, fnzhan.
Did you reproduce the paper "Fast Spatially-Varying Indoor Lighting Estimation" yourself to obtain the reference predictions, or did you get them in some other way?

Question about Sparse Needlets (ICCV 2021).

[attached image]

  1. As I understand it, in the paper j = 1, 2, 3. When j = 1, k = 1~12; when j = 2, k = 1~48; when j = 3, k = 1~192, right? In the above picture, what does j = k mean?

  2. Does n mean j_max (3 in this paper)?
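
(For reference, the counts I listed above are consistent with a HEALPix-style subdivision; this is my own reading, not something stated in the picture:)

    N_j = 12 * 4^(j-1)  =>  N_1 = 12, N_2 = 48, N_3 = 192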

how to render

Hi, your work is beautiful, and thanks for your code.
I would like to know what I should do to use the result in Blender.
Thanks for any reply.

regression model training lr

Dear Zhan:
I am training the regression network and have run into a problem: the intensity output of the '*.pth' model is positive, but after converting to ONNX the model output is negative. Could you help me solve this problem?

Some error when I run distribution_representation.py

Traceback (most recent call last):
  File "distribution_representation.py", line 145, in <module>
    para, map = extractor.compute(hdr)
  File "distribution_representation.py", line 94, in compute
    hdr = self.steradian * hdr
ValueError: operands could not be broadcast together with shapes (128,256,1) (1024,2048,3)

When I run distribution_representation.py, I encounter the above error. How can I solve it? Thank you!
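
Judging from the shapes in the error, the steradian map is built at 128x256 while my panoramas are 1024x2048. The workaround I would try first (my own guess, not necessarily the intended usage) is to resize each panorama before calling compute():

    import cv2

    # Hypothetical pre-processing: downsample the HDR panorama to the
    # 128x256 resolution that the steradian map appears to expect.
    # cv2.resize takes (width, height); INTER_AREA suits downsampling.
    hdr = cv2.resize(hdr, (256, 128), interpolation=cv2.INTER_AREA)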
