Reference GitHub repository for the paper "Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data". We propose a procedure to synthetically generate realistic DP data. Our synthesis approach mimics the optical image formation found on DP sensors and can be applied to virtual scenes rendered with standard computer graphics software. Leveraging these realistic synthetic DP images, we introduce a new recurrent convolutional network (RCN) architecture that improves defocus deblurring results and is suitable for both single-frame and multi-frame data captured by DP sensors.
Hello
Thank you for your contributions.
I want to build a synthetic DP dataset using your code.
Unlike in your ICCV 2021 paper, I cannot find alpha in the code, i.e., the parameter that controls the cut-off frequency (D0) of the Butterworth filter.
The code uses 'cut_off_factor' values of {2.5, 2}, whereas the paper uses alpha = {0.4, 0.6, 0.8, 1.0}.
Could you explain how to derive 'cut_off_factor' from the paper's alpha parameter?
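For reference, a standard frequency-domain Butterworth low-pass filter is fully determined by its cut-off frequency D0 and order. The sketch below illustrates how a scaling factor could map onto D0; note that the `cut_off_factor`-to-D0 mapping shown (dividing a base radius by the factor) is purely a hypothetical assumption for illustration, not the repository's actual definition, and `kernel_radius` is an invented parameter.

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=3):
    """2D Butterworth low-pass filter in the frequency domain.

    H(u, v) = 1 / (1 + (D(u, v) / D0)^(2 * order)),
    where D(u, v) is the distance from the center of the
    frequency plane and D0 is the cut-off frequency.
    """
    rows, cols = shape
    # Frequency coordinates centered on the middle pixel (odd sizes).
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)
    return 1.0 / (1.0 + (D / cutoff) ** (2 * order))

# Hypothetical mapping from a cut_off_factor to D0 (an assumption
# made for this sketch only): a larger factor yields a smaller
# cut-off frequency, i.e., a smoother (more blurred) kernel.
kernel_radius = 15  # invented base radius for illustration
for cut_off_factor in (2.5, 2.0):
    D0 = kernel_radius / cut_off_factor
    H = butterworth_lowpass((31, 31), D0)
```

Under this sketch, the filter's response is 1 at the center frequency and falls off smoothly toward the corners, with the roll-off position set by D0.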
Thank you for your earlier answer!
I have been working on estimating the camera's blur kernel, and I would like to understand the effect of this blur kernel data. Could you release the camera blur kernel parameters estimated in the paper? I want to perform blur kernel matching. Thank you very much!
Thanks for your great work; I found your insights quite enlightening. I attempted to generate the depth map, but the results I got differ considerably from yours, as shown in the attachment. I checked the code in your official repository thoroughly, but I could not find the code that generates the depth maps presented in your paper. Could you explain the details of how the depth maps were generated? Thank you very much!