NTIRE CVPRW 2020
Website: https://google.github.io/sky-optimization/
Authors: Orly Liba, Longqi Cai, Yun-Ta Tsai, Elad Eban, Yair Movshovitz-Attias, Yael Pritch, Huizhong Chen, Jonathan T. Barron
Repository and website for Sky Optimization: Semantically aware image processing of skies in low-light photography
License: Apache License 2.0
Can you provide the source code and your trained models?
Are you planning to publish your accurate sky-mask dataset?
I am trying to reproduce the paper, and I have some questions:
How do I perform the smooth upsample when s = 16?
For a downsampling factor s = 64, I can apply a 4x4 triangle kernel three times. But for s = 16, should I apply the 4x4 kernel twice, use [4x4, 2x2, 2x2], or something else?
How can I apply the modified guided filter after model inference?
In my opinion, the model output is already a low-resolution map. How should the weighted downsample be applied in this case? And how do I use the confidence map obtained from formulas (1) and (2) if I don't use the weighted downsample? I guess the model's output would have to be downsampled by 64x, is that right? But that process increases the computational workload.
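As an illustration of the weighted downsample being asked about: a confidence-weighted block average replaces the plain mean over each s x s block with a mean weighted by per-pixel confidence, so low-confidence pixels contribute less. A minimal numpy sketch (the function name and exact weighting are assumptions, not the authors' code):

```python
import numpy as np

def weighted_downsample(x, w, s):
    """Average x over non-overlapping s x s blocks, weighting each
    pixel by its confidence w (low-confidence pixels count less)."""
    h = x.shape[0] // s * s
    v = x.shape[1] // s * s
    xb = (x[:h, :v] * w[:h, :v]).reshape(h // s, s, v // s, s)
    wb = w[:h, :v].reshape(h // s, s, v // s, s)
    return xb.sum(axis=(1, 3)) / (wb.sum(axis=(1, 3)) + 1e-12)
```

With uniform weights this reduces to an ordinary block mean; with a one-hot confidence map each block simply passes through its single trusted pixel.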
Hello, we implemented the modified guided filter following the pseudocode. We perform the smooth upsampling as follows:
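A sketch of smooth upsampling by repeated 2x upsampling with the 4-tap triangle (bilinear) kernel [1, 3, 3, 1] / 4, as discussed in the question above. This is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def upsample2x_triangle(img):
    """Upsample a 2-D array by 2x per axis: zero-stuff, then convolve
    with the 4-tap triangle (bilinear) kernel [1, 3, 3, 1] / 4."""
    kernel = np.array([1.0, 3.0, 3.0, 1.0]) / 4.0
    for axis in range(2):
        shape = list(img.shape)
        shape[axis] *= 2
        up = np.zeros(shape)
        idx = [slice(None)] * img.ndim
        idx[axis] = slice(0, None, 2)  # samples at every other position
        up[tuple(idx)] = img
        img = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, up)
    return img

def smooth_upsample(img, s):
    """Upsample by a power-of-two factor s via repeated 2x steps."""
    while s > 1:
        img = upsample2x_triangle(img)
        s //= 2
    return img
```

Under this reading, s = 16 is simply four 2x steps with the same 4-tap kernel; a single larger kernel is not needed.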
Hello!
Can you please describe the chroma and luma tuning a little, and the best values for them? Thank you!
Could you please provide the code for the dataset creation? Thank you!
Hi and thank you for your great paper.
I am a bit confused by the input and output image sizes in the proposed method for refining the coarse mask.
Let's say the source images are 4000x3000 and the inferred mask is 256x256, and let's set aside the post-processing steps (denoising, luminosity, etc.). The idea, then, is to do guided upscaling of the inferred mask to the input image size to restore missing details (holes in trees, fine structures).
Is the modified guided filter supposed to do this upscaling alone, or does it only refine the mask at coarse resolution? At 256x256 pixels, or at a quarter of the input image size (1024x768)?
Is the 64x scale factor relative to the full-size image? And should I upscale the mask to 4000x3000 using (a single?) bilinear interpolation before applying the modified guided filter? (In the provided code, I can see a 256x factor, or 4x64?)
In the supplementary material PDF, you say you use the modified guided filter to compute an image at a quarter of the resolution of the original image, and then use bilinear interpolation... but to me that will produce a blurred mask, not a 1:1-quality mask. Do you then apply another small guided filter?
Is the 64x scale factor equivalent to a box-filter radius of 64 pixels in the original guided filter implementation?
Many thanks if you can help with my questions.
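To illustrate the general mechanism being asked about (this is the standard fast-guided-filter idea of fitting locally at low resolution and applying at high resolution, not the paper's exact modified filter): the coefficients a and b of the local linear model mask ≈ a * guide + b are computed on the low-resolution pair, upsampled, and applied to the full-resolution guide, which is what restores fine detail without ever filtering at full resolution. A pure-numpy sketch, with nearest-neighbor coefficient upsampling for brevity:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-replicated, via cumsum."""
    k = 2 * r + 1
    for axis in (0, 1):
        n = x.shape[axis]
        pad_idx = np.clip(np.arange(-r, n + r), 0, n - 1)
        xp = np.take(x, pad_idx, axis=axis)
        c = np.cumsum(xp, axis=axis)
        zero = np.zeros_like(np.take(c, [0], axis=axis))
        c = np.concatenate([zero, c], axis=axis)
        hi = np.take(c, np.arange(k, n + k), axis=axis)
        lo = np.take(c, np.arange(0, n), axis=axis)
        x = (hi - lo) / k
    return x

def guided_upsample(mask_lo, guide_lo, guide_hi, s, r=2, eps=1e-6):
    """Fit mask ~= a * guide + b in local windows at low resolution,
    then apply the upsampled coefficients to the full-res guide."""
    mean_g = box_mean(guide_lo, r)
    mean_m = box_mean(mask_lo, r)
    var_g = box_mean(guide_lo * guide_lo, r) - mean_g ** 2
    cov_gm = box_mean(guide_lo * mask_lo, r) - mean_g * mean_m
    a = cov_gm / (var_g + eps)
    b = mean_m - a * mean_g
    # a and b vary slowly, so cheap upsampling of them is acceptable
    a_hi = np.kron(a, np.ones((s, s)))
    b_hi = np.kron(b, np.ones((s, s)))
    return a_hi * guide_hi + b_hi
```

Because only the smooth coefficient maps are interpolated, the output inherits the sharp edges of the full-resolution guide rather than the blur of a bilinearly upscaled mask.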
Can you provide pseudocode or code for the smooth_upsample() algorithm described in section A?
Hi, thanks for your great work! I have almost implemented your algorithm successfully, but I have a small doubt about the density estimation ("DE") step.
thanks
best regards
A question about training your network. When training the UNet, what do you do with the labels? Do you use the "probabilities" produced by the guided filter algorithm, or binary labels (obtained by thresholding the guided filter output)? If you describe the training process somewhere in the paper, please point me to it. Thank you.
According to the paper, I believe the ground truth and the model output are continuous rather than binary. Is that right?
If so, which loss function was chosen to train the model? The cross-entropy loss normally requires binary targets.
Or are soft labels used with the CE loss?
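On the cross-entropy point: the binary cross-entropy formula is well defined for any target in [0, 1], not just {0, 1}, so continuous ground truth can be used directly as "soft" labels. A minimal numpy sketch (illustrative, not the authors' training code):

```python
import numpy as np

def soft_bce(pred, target, eps=1e-7):
    """Binary cross-entropy with continuous ('soft') targets in [0, 1].
    Reduces to ordinary BCE when the targets are exactly 0 or 1."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))
```

Pointwise, this loss is minimized when the prediction equals the soft target, which is exactly the behavior wanted for continuous sky-probability labels.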
Thanks for your good research. I have almost reproduced the result in C++, and I trained the model with U-2-Net; the result is much better than with UNet. Please refer to https://github.com/xiongzhu666/Sky-Segmentation-and-Post-processing
thanks
best regards
Hello!
Can you please explain the A matrix a bit, and its shape? Also, can you explain the idea behind ldl3 and the function it serves in the paper?
Thank you very much!
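We can't speak for the authors, but in a guided filter with a 3-channel guide, A is typically the per-pixel symmetric 3x3 regularized covariance matrix of the guide (Σ + εI), and a name like "ldl3" suggests solving the per-pixel system A·a = cov(guide, mask) with an LDL^T factorization, which is cheap (no square roots, no pivoting) and well suited to running once per pixel. A sketch under that assumption:

```python
import numpy as np

def ldl3_solve(A, b):
    """Solve A x = b for a symmetric positive-definite 3x3 A via
    an LDL^T decomposition (A = L D L^T with unit lower-triangular L)."""
    d1 = A[0, 0]
    l21 = A[1, 0] / d1
    l31 = A[2, 0] / d1
    d2 = A[1, 1] - l21 * l21 * d1
    l32 = (A[2, 1] - l31 * l21 * d1) / d2
    d3 = A[2, 2] - l31 * l31 * d1 - l32 * l32 * d2
    # forward substitution: L y = b
    y1 = b[0]
    y2 = b[1] - l21 * y1
    y3 = b[2] - l31 * y1 - l32 * y2
    # diagonal scaling, then back substitution: L^T x = D^-1 y
    z1, z2, z3 = y1 / d1, y2 / d2, y3 / d3
    x3 = z3
    x2 = z2 - l32 * x3
    x1 = z1 - l21 * x2 - l31 * x3
    return np.array([x1, x2, x3])
```

The fully unrolled form vectorizes easily over all pixels, which is why a specialized 3x3 solver is preferred over a general linear-algebra call.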
Hello!
Can you provide the code for the modified guided filter algorithm?
Thank you!
In Section B you provide Equation 8, and when I try to implement it I run into a problem: computing the minimum and maximum probability, I get [0, 63493.6].
Steps:
So my question is: should the probability really fall within such limits? Is it possible to keep this probability within [0..1]? I ask because you use the threshold p_c = 0.6 in Section 3.2, after the density estimation algorithm.
Hi, regarding density estimation: I have implemented gf_guided_filter and I want to add the "DE" step, but I run into the problem below.
I get a pi value of about 63493 for the "undetermined" pixels based on Equations 8 and 9. What else am I missing?
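One plausible reading, offered as an assumption rather than a statement of the authors' intent: a value like 63493 is what an unnormalized kernel density sum over many samples looks like. Dividing each class density by its sample count, and then taking the ratio of the sky density to the sum of both class densities, yields a value bounded in [0, 1] that can be thresholded at p_c = 0.6 regardless of the kernel's normalization constant. A sketch with an illustrative Gaussian kernel and bandwidth:

```python
import numpy as np

def density(x, samples, sigma=0.05):
    """Gaussian kernel density of value x under `samples`, normalized by
    the sample count. Without this division, sums over tens of thousands
    of samples easily reach values like 6e4."""
    return np.exp(-0.5 * ((x - samples) / sigma) ** 2).sum() / len(samples)

def sky_probability(x, sky_samples, nonsky_samples, sigma=0.05):
    """Ratio of the two class densities; always in [0, 1], so it can be
    compared against a threshold such as p_c = 0.6."""
    d_sky = density(x, sky_samples, sigma)
    d_non = density(x, nonsky_samples, sigma)
    return d_sky / (d_sky + d_non + 1e-12)
```

Any constant factor shared by both densities (kernel normalization, bandwidth constant) cancels in the ratio, which is why the ratio stays bounded even when the raw sums do not.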