sky-optimization's People

Contributors

orlyliba

sky-optimization's Issues

Code

Can you provide the source code and your models?

Dataset

Are you planning to publish your accurate sky-mask dataset?

Questions about smooth upsample and weighted downsample

I am trying to reproduce the paper and have some questions:

  1. How should I perform the smooth upsample when s=16?
    For a downsampling factor of s=64, I can apply a 4x4 triangle kernel three times. But for s=16, should I apply the 4x4 kernel twice, use [4x4, 2x2, 2x2], or something else? (See the sketch below.)

  2. How can I apply the modified guided filter after model inference?
    In my opinion, the model output is already a low-resolution map. How should I apply the weighted downsample in this case, and how should I use the confidence map obtained from formulas (1) and (2) if I don't use the weighted downsample? I guess the model's output would have to be downsampled by 64x, is that right? But that increases the computational workload.
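
For question 1, here is a minimal sketch of one plausible reading of the smooth upsample: plain linear (tent-kernel) upsampling applied in stages, so that s=16 can be reached either as 4x4 or as 2x2x2x2. The function names and the staging schedule are illustrative assumptions, not the paper's definition.

```python
import numpy as np
from scipy.ndimage import convolve1d

def tent_kernel(s):
    """1D triangle (tent) kernel for s-times linear upsampling of a
    zero-inserted signal: t[k] = (s - |k|) / s for |k| < s."""
    k = np.arange(-(s - 1), s)
    return (s - np.abs(k)) / s

def smooth_upsample_once(x, s):
    """Upsample a 2D array by integer factor s: zero insertion followed by
    separable tent-kernel filtering, i.e. linear interpolation between the
    low-resolution samples. Boundary handling is kept deliberately simple."""
    h, w = x.shape
    up = np.zeros((h * s, w * s), dtype=np.float64)
    up[::s, ::s] = x
    t = tent_kernel(s)
    up = convolve1d(up, t, axis=0, mode="nearest")
    up = convolve1d(up, t, axis=1, mode="nearest")
    return up

def smooth_upsample(x, factors=(4, 4)):
    """Staged smooth upsampling: (4, 4) gives a total factor of 16,
    (4, 4, 4) gives 64. Whether the paper inserts an extra smoothing pass
    between stages is exactly what this question asks."""
    for s in factors:
        x = smooth_upsample_once(x, s)
    return x
```

With this aligned-grid scheme the result reproduces the input exactly at the original sample positions and interpolates linearly in between, so 16x comes out the same whether it is reached as 4x4 or 2x2x2x2; differences would only appear if an extra smoothing filter were applied between stages.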

Detail about the smooth upsample

Hello, we implemented the modified guided filter following the pseudocode. We do the smooth upsampling as follows:

  1. bilinear upsampling by s=4;
  2. smooth the upsampling result;
  3. bilinear upsampling by s=4;
  4. smooth the upsampling result;
  5. bilinear upsampling by s=4.

Is this the right procedure for the smooth upsampling? If it is, how should the kernel for the smoothing filter be chosen?
Another question is how to use the weighted downsample at inference time. Since the confidence map is low resolution, how do we use function (4), the modified guided filter, in which C is high resolution? (A generic sketch of a weighted downsample follows below.)
    Thank you.
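
For reference, a minimal sketch of a confidence-weighted downsample, assuming a box-shaped weighted average over s x s blocks and weights already available at the same resolution as the image being downsampled. How the low-resolution confidence map from formulas (1)-(2) is aligned with the high-resolution guide is exactly what these issues ask, so take this only as an illustration of the operation itself.

```python
import numpy as np

def weighted_downsample(x, w, s):
    """Confidence-weighted downsample of a single-channel image by an
    integer factor s: each low-resolution pixel is sum(w * x) / sum(w)
    over its s x s block (a box kernel; a smoother kernel is also possible)."""
    h, width = x.shape
    assert h % s == 0 and width % s == 0, "image size must be divisible by s"
    num = (w * x).reshape(h // s, s, width // s, s).sum(axis=(1, 3))
    den = w.reshape(h // s, s, width // s, s).sum(axis=(1, 3))
    return num / np.maximum(den, 1e-12)  # guard against all-zero weights
```

With uniform weights this reduces to ordinary box downsampling; the confidence only changes how much each pixel contributes to its block's average.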

Parameter optimization

Hello!

Could you please describe the chroma and luma tuning a bit, and the best values for them? Thank you!

Question about downsampling upsampling and image size

Hi and thank you for your great paper.

I am a bit confused by the input and output image sizes in the proposed method to refine the coarse mask.
Let's say the source images are 4000x3000 and the inferred mask is 256x256, and forget about the post-processing steps (denoising, luminosity, etc...). So the idea is to do guided upscaling of the inferred mask to the input image size to restore missing details (holes in trees, fine details).

  1. Is the modified guided filter supposed to do this upscaling alone, or does it only refine the mask at a coarse resolution? At 256x256 pixels, or at a quarter of the input image size (1024x768)?

  2. Is the 64x scale factor relative to the full-size image? And do I have to upscale the mask to 4000x3000 using (a single?) bilinear interpolation before applying the modified guided filter? (In the provided code, I can see a 256x factor, or 4x64??)

  3. In the supplementary material PDF, you say you use the modified guided filter to compute an image at a quarter of the resolution of the original image, and then use bilinear interpolation... but to me that would produce a blurred mask, not a 1:1-quality mask. Do you then apply another small guided filter? (See the sketch after this list.)

  4. Is the 64x scale factor equivalent to a box filter with a 64-pixel radius in the original guided filter implementation?

Many thanks if you can help with my questions.
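
Regarding questions 1-3, here is a generic fast-guided-filter-style flow (in the spirit of He and Sun's fast guided filter, not necessarily the paper's modified filter): the per-pixel affine coefficients are computed at a reduced resolution and bilinearly upsampled, rather than upsampling the mask itself, so the full-resolution output still snaps to the edges of the full-resolution guide. The quarter-resolution working size, the grayscale guide, radius, and eps below are illustrative assumptions.

```python
import cv2
import numpy as np

def refine_mask_fast_gf_style(image_full_bgr, mask_low, radius=8, eps=1e-4):
    """Generic fast-guided-filter-style refinement (illustrative only):
    1) resize guide and coarse mask to a working resolution,
    2) compute guided-filter affine coefficients (a, b) there,
    3) bilinearly upsample a and b (not the mask) to full resolution,
    4) apply q = a * guide + b on the full-resolution guide."""
    h, w = image_full_bgr.shape[:2]
    guide_full = cv2.cvtColor(image_full_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    guide_small = cv2.resize(guide_full, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
    mask_small = cv2.resize(mask_low.astype(np.float32), (w // 4, h // 4),
                            interpolation=cv2.INTER_LINEAR)

    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda img: cv2.boxFilter(img, -1, ksize)  # local box mean

    mu_g, mu_m = mean(guide_small), mean(mask_small)
    var_g = mean(guide_small * guide_small) - mu_g * mu_g
    cov_gm = mean(guide_small * mask_small) - mu_g * mu_m
    a = cov_gm / (var_g + eps)
    b = mu_m - a * mu_g

    a_full = cv2.resize(mean(a), (w, h), interpolation=cv2.INTER_LINEAR)
    b_full = cv2.resize(mean(b), (w, h), interpolation=cv2.INTER_LINEAR)
    return a_full * guide_full + b_full
```

The bilinear interpolation here is applied to the slowly varying coefficients a and b, which is why the result is not simply a blurred mask.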

smooth_upsample

Can you provide pseudocode or code for the smooth_upsample() algorithm, described in section A?

About Density estimation algorithm

Hi, thanks for your great work! I have almost implemented your algorithm successfully, but I have a small doubt about "DE":

  1. "DE" means the density estimation algorithm, right?
  2. What is |{sky}| in formula 8?
  3. According to formula 8, I can get a new p_i for each "undetermined" pixel. Should I then recompute the "low" and "high" statistics one more time based on formula 1 (the low score is 0.3 and the high score is 0.5 in your paper), or do I not need to change anything and just update the p_i of the "undetermined" pixels? (See the sketch below.)

thanks
best regards
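
For what it's worth, here is a brute-force sketch of one reading of formula 8, where |{sky}| is taken to be the number of confidently-sky pixels and the density is evaluated only for the undetermined pixels. This is an interpretation for illustration, not the authors' code.

```python
import numpy as np

def density_estimate(rgb, sky_mask, undetermined_mask, sigma=0.01):
    """Brute-force reading of formula 8: for each undetermined pixel,
    average a 3D Gaussian kernel (std sigma in RGB) over the confidently-sky
    pixels, i.e. divide the kernel sum by |{sky}|. O(N*M) memory and time;
    a practical version would subsample the sky pixels or bin colors."""
    sky = rgb[sky_mask].reshape(-1, 3).astype(np.float64)              # (M, 3)
    query = rgb[undetermined_mask].reshape(-1, 3).astype(np.float64)   # (N, 3)
    const = (2.0 * np.pi * sigma ** 2) ** -1.5
    d2 = ((query[:, None, :] - sky[None, :, :]) ** 2).sum(-1)          # (N, M)
    return const * np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
```

Note that the (2*pi*sigma^2)^(-3/2) constant makes this a density rather than a probability, which is what the later issue about value ranges runs into.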

Training process

A question about training your network: when you train the U-Net, what do you do with the labels? Do you use the "probabilities" produced by the guided filter, or binary labels (obtained with some threshold after the guided filter)? If you describe the training process somewhere in the paper, please point me to it. Thank you.

loss function in training phase

According to the paper, I think the ground truth and the model output are continuous rather than binary. Is that right?

If so, which loss function was used to train the model? Cross-entropy loss normally expects a binary target.

Or is it used with soft labels for the CE loss?
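
For reference, binary cross-entropy losses do accept continuous ("soft") targets in [0, 1]; for example, PyTorch's BCEWithLogitsLoss works with float targets. A toy sketch follows (the stand-in model and shapes are placeholders, not the paper's training setup).

```python
import torch
import torch.nn as nn

# Stand-in for the segmentation network; any model producing one logit per
# pixel would work the same way.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)

images = torch.rand(2, 3, 64, 64)        # toy batch of RGB images
soft_targets = torch.rand(2, 1, 64, 64)  # continuous labels in [0, 1]

logits = model(images)
loss = nn.BCEWithLogitsLoss()(logits, soft_targets)  # soft targets are allowed
loss.backward()
print(float(loss))
```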

solve_image_ldl3

Hello!

Could you please explain the A matrix and its shape a bit? Could you also explain the idea behind ldl3 and the function you use in the paper?

Thank you very much!
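
For context, here is a generic sketch of a per-pixel LDL^T solve of a symmetric 3x3 system A x = b, where the six arrays a11..a33 are the unique entries of A. The argument layout, and whether this matches the repository's solve_image_ldl3, are assumptions.

```python
import numpy as np

def solve_ldl3(a11, a12, a13, a22, a23, a33, b1, b2, b3):
    """Solve the symmetric 3x3 system A x = b via A = L D L^T with unit
    lower-triangular L and diagonal D. All arguments may be whole images,
    so this solves one small system per pixel without forming 3x3 matrices."""
    d1 = a11
    l21 = a12 / d1
    l31 = a13 / d1
    d2 = a22 - l21 * a12
    l32 = (a23 - l31 * a12) / d2
    d3 = a33 - l31 * a13 - l32 * l32 * d2
    # Forward substitution: L y = b
    y1 = b1
    y2 = b2 - l21 * y1
    y3 = b3 - l31 * y1 - l32 * y2
    # D z = y, then backward substitution: L^T x = z
    x3 = y3 / d3
    x2 = y2 / d2 - l32 * x3
    x1 = y1 / d1 - l21 * x2 - l31 * x3
    return x1, x2, x3

# Sanity check against numpy on a single symmetric positive-definite system.
A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = solve_ldl3(A[0, 0], A[0, 1], A[0, 2], A[1, 1], A[1, 2], A[2, 2], *b)
assert np.allclose(np.stack(x), np.linalg.solve(A, b))
```

In the color-guidance guided filter, A is typically the per-pixel 3x3 covariance of the RGB guide plus eps on its diagonal, and x gives the three affine coefficients; whether the paper's modified filter builds A exactly this way should be checked against the released code.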

Dataset download?

[screenshot of a download error]

Thank you so much for the link to download the dataset; it was a great job! However, I have run into the problem shown above while downloading, and I cannot resolve it at the moment. Could you please provide additional download links?

Question about Density estimation algorithm

In Section B you provide Equation 8:

    p_i = (1 / |{sky}|) * sum over j in {sky} of (2*pi*sigma^2)^(-3/2) * exp(-||c_i - c_j||^2 / (2*sigma^2))

When I try to implement this I run into a problem: the minimum and maximum of the resulting probability come out as [0, 63493.6].
Steps:

  1. If the image has pixels in the [0..1] range, the squared RGB distance ||c_i - c_j||^2 is in the range [0..3].
  2. Dividing by 2*sigma^2 with sigma=0.01 (from the paper) gives the range [0..15000].
  3. With the minus sign, the range becomes [-15000..0].
  4. After the exponential, the range is [0..1].
  5. The constant factor 1/(2*pi*sigma^2)^(3/2) is equal to 63493.6 (https://www.wolframalpha.com/input/?i=1%2F%282*pi*%280.01%29%5E2%29%5E%283%2F2%29), and multiplying the [0..1] range by 63493.6 gives the range [0..63493.6].

So, my question is: should the probability really lie within such limits? Is it possible to get this probability within [0..1]? Because you use the threshold p_c = 0.6 in Section 3.2 after the density estimation algorithm. (See the note below.)
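
A note on this: the value in Equation 8 is a kernel density estimate, i.e. a probability density, and densities are not bounded by 1, so a range like [0, 63493.6] is expected with sigma = 0.01. One way to obtain a score in [0, 1] is to drop the normalization constant, which is the same as dividing the density by its peak value (2*pi*sigma^2)^(-3/2); whether that is what the authors intend for the 0.6 threshold is an assumption here, not something stated in the paper. A minimal sketch of that rescaled variant:

```python
import numpy as np

def density_score_01(rgb, sky_mask, query_mask, sigma=0.01):
    """Equation 8 with the (2*pi*sigma^2)^(-3/2) constant dropped: each
    kernel term lies in (0, 1], so the |{sky}|-average stays in [0, 1].
    This rescaling is an assumption, not the paper's stated definition."""
    sky = rgb[sky_mask].reshape(-1, 3).astype(np.float64)
    query = rgb[query_mask].reshape(-1, 3).astype(np.float64)
    d2 = ((query[:, None, :] - sky[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
```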
