
kernelgan's Introduction

Blind Super-Resolution Kernel Estimation using an Internal-GAN

"KernelGAN"

Sefi Bell-Kligler, Assaf Shocher, Michal Irani

(Official implementation)

Paper: https://arxiv.org/abs/1909.06581

Project page: http://www.wisdom.weizmann.ac.il/~vision/kernelgan/

Accepted NeurIPS 2019 (oral)

Usage:

Quick usage on your data:

To run KernelGAN on all images in <input_image_path>:

python train.py --input-dir <input_image_path>

This will produce kernel estimations in the results folder

Extra configurations:

--X4 : Estimate the X4 kernel

--SR : Perform ZSSR using the estimated kernel

--real : Real-image configuration (affects only the ZSSR)

--output-dir : Output folder for the images (default is results)
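For example, to estimate the X4 kernel and also run ZSSR on a real image, writing the outputs to a custom folder (paths are placeholders):

python train.py --input-dir <input_image_path> --X4 --SR --real --output-dir <output_folder>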

Data:

Download the DIV2KRK dataset: dropbox

Reproduction code for your own Blind-SR dataset: github

kernelgan's People

Contributors

jfun9494, sefibk


kernelgan's Issues

Random noise in the discriminator

Dear Author,
I am confused about the random noise added to the input of the discriminator. Could you please explain its function?

How do you get ground truth ?

Hi,

Thanks a lot for the code!

In the paper and on the project page, you compare the estimated kernels with the ground-truth kernels, but how do you obtain the ground truth?

Thanks!

question about function resize_tensor_w_kernel

DownScaleLoss convolves the image tensor with a bicubic kernel; g_input's shape is [1, 3, 64, 64]. With this expand-and-conv2d approach, I think the bicubic downsampling will produce a black-and-white image, which is then used later in the bicubic loss. I am confused about this.

Is the estimated kernel somehow used for further training in ZSSR?

I am new to this area. Very interesting work!

This is more of a clarification on the approach than an issue. I understand that once the kernel is estimated using the kernelGAN, it is supplied to existing SR techniques like ZSSR to get the final high-res output. However, ZSSR seems to have its own training functions (after a quick look at the code). Can you please explain how does this combination between kernel estimation and existing SR methods (that can input kernels directly) work? Example: I initially thought that since the kernel is already estimated, it could be somehow directly used to upscale the image...but ZSSR (from the original paper) seems to have 8 convolution layers with 64 channels each. Then, what role does the input kernel play in the training of ZSSR (how does it help existing ZSSR?)?

Thanks a lot.

Kernel_shift function clarification

I noticed that in the post_process function here a kernel_shift is applied to the kernel with sf=2.

Because sf=2, the center of mass of the kernel becomes shifted:

wanted_center_of_mass = np.array(kernel.shape) // 2 + 0.5 * (sf - (kernel.shape[0] % 2))

Could you clarify why we need to satisfy the second condition in the kernel_shift function (about the top-left pixel)? Why is the center of mass not placed at the true center, and why should it depend on the scale factor?
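For readers following this thread, here is a minimal sketch of the shift step being discussed, mirroring the quoted formula; the function name and the use of scipy.ndimage are illustrative, not the repository's exact implementation:

    import numpy as np
    from scipy import ndimage

    def kernel_shift_sketch(kernel, sf):
        # Where the mass of the kernel currently sits
        current_center_of_mass = np.array(ndimage.center_of_mass(kernel))
        # Target position: the geometric center, offset by 0.5 * (sf - (size % 2)) as in the quoted line
        wanted_center_of_mass = np.array(kernel.shape) // 2 + 0.5 * (sf - (kernel.shape[0] % 2))
        # Translate the kernel so its center of mass lands on the wanted position
        return ndimage.shift(kernel, wanted_center_of_mass - current_center_of_mass)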

Output Kernel Size

Dear Author:

Thank you for your excellent work.
Recently I have been running KernelGAN and trying to downscale my images with the learned kernel. I found that the resulting kernel sizes are 17x17 (for X2) and 33x33 (for X4), whereas in your paper the kernel sizes are 13x13 and 25x25 respectively.
Is there any other configuration that I missed for the training?

Thank you!


Why are there intermediate kernels for x4?

In the given DIV2KRK dataset, there are intermediate kernels for x4. I guess they are x2 kernels, and the given x4 kernels are derived from these intermediate x2 kernels. Then the derived x4 kernels are used to generate the LR images. Am I right?

Why not directly generate x4 kernels and use them to blur the HR images? What is the difference between an anisotropic Gaussian x4 kernel and an x4 kernel derived from an anisotropic Gaussian x2 kernel? Would it be fair to compare with other methods that use anisotropic Gaussian x4 kernels?

KernelGAN-estimated kernel not consistent with simulated GT kernel

Hi, the idea of KernelGAN, "estimating the degradation kernel only from LR images and then generating training pairs with the same distribution to improve the model's generalization ability", is brilliant. It is really well suited to real scenarios where the training data is mostly unpaired.

I have done a simple simulation test based on your code:
Setting: 4 clean HR images from DIV2K are blurred and downsampled with the same rotated Gaussian kernel (the GT kernel), and your code is run to estimate the kernel on the generated LR images.

Generated LR: [image]

The estimated kernel: [image]

I found that the estimated kernels are not consistent with the simulated GT kernel, and the estimated kernels look random. I hope you can give me some suggestions.

Besides, does KernelGAN support 1x or 1.5x SR in theory? I am a little confused.

Results with misalignment and artifacts

Thank you for your great work!
I am doing some super-resolution research on remote sensing images. In my field, there are no high-resolution images for supervised learning, so I would like to take an unsupervised approach in my research.
When I tested your code on my dataset, I found that most of the result images had anomalies: some of them were misaligned, others were full of artifacts or noise.
I've tried increasing the iterations to 6000, but it didn't make a difference.
I will paste two typical results in this issue.
I wonder if you have noticed this phenomenon, or have any suggestions for me?
Misalignment example: [image]
Noise example: [images]

Constraints differences with the paper

To my knowledge, torch's nn.L1Loss, which you use, performs a reduction over the axes using the mean operation.
Therefore, when it is applied, for instance, to calculate the sparsity loss, it divides the sum by a factor of 169 (the size of the kernel map).

This behaviour is not in agreement with the paper, where (for the given sparsity constraint) a summation is used instead of a mean.
Could you specify which way of computing the constraints is correct (or perhaps I'm wrong about the behaviour of L1Loss, or I've missed something)?

I've tried to reproduce your results in TensorFlow without much success; the results depend strongly on the hyperparameters, the initialization, and the schedule by which the constraints are applied.
I would love to read about your experiments and what you discovered about the stability of the method.
The paper does not provide much information in that regard.
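For anyone checking this numerically, the two reductions differ only by the kernel-size factor mentioned above; a small illustration (not taken from the repository):

    import torch
    import torch.nn as nn

    k = torch.rand(13, 13)                               # a 13x13 kernel map, 169 entries
    zero = torch.zeros_like(k)
    mean_l1 = nn.L1Loss(reduction='mean')(k, zero)       # default: the sum divided by 169
    sum_l1 = nn.L1Loss(reduction='sum')(k, zero)         # plain summation, as in the paper's formulation
    print(torch.allclose(mean_l1 * k.numel(), sum_l1))   # True: they differ only by the factor 169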

Unable to generate the .mat file of kernel

Thank you for sharing the code.

I ran into the following problem. It seems that the kernel is not estimated successfully. Could you give me some suggestions?

G:\Anaconda\python.exe D:/2020/ReferenceCode/KernelGAN-master/train.py --input-dir test_images --real --SR
Scale Factor: X2 ZSSR: True Real Image: True


STARTED KernelGAN on: "test_images\im_1.png"...
0%| | 0/3000 [00:00<?, ?it/s]G:\Anaconda\lib\site-packages\torch\nn\modules\loss.py:93: UserWarning: Using a target size (torch.Size([13])) that is different to the input size (torch.Size([1, 1, 13, 13])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.l1_loss(input, target, reduction=self.reduction)
G:\Anaconda\lib\site-packages\torch\nn\modules\loss.py:93: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.l1_loss(input, target, reduction=self.reduction)
G:\Anaconda\lib\site-packages\torch\nn\modules\loss.py:445: UserWarning: Using a target size (torch.Size([2])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
100%|███████████████████| 3000/3000 [02:04<00:00, 24.18it/s]
Traceback (most recent call last):
File "G:\Anaconda\lib\site-packages\scipy\io\matlab\mio.py", line 39, in _open_file
return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: 'D:\2020\ReferenceCode\KernelGAN-master\results\test_images\im_1lll\test_images\im_1_kernel_x2.mat'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/2020/ReferenceCode/KernelGAN-master/train.py", line 54, in
main()
File "D:/2020/ReferenceCode/KernelGAN-master/train.py", line 36, in main
train(conf)
File "D:/2020/ReferenceCode/KernelGAN-master/train.py", line 18, in train
gan.finish()
File "D:\2020\ReferenceCode\KernelGAN-master\kernelGAN.py", line 124, in finish
save_final_kernel(final_kernel, self.conf)
File "D:\2020\ReferenceCode\KernelGAN-master\util.py", line 214, in save_final_kernel
sio.savemat(os.path.join(conf.output_dir_path, '%s_kernel_x2.mat' % conf.img_name), {'Kernel': k_2})
File "G:\Anaconda\lib\site-packages\scipy\io\matlab\mio.py", line 266, in savemat
with _open_file_context(file_name, appendmat, 'wb') as file_stream:
File "G:\Anaconda\lib\contextlib.py", line 113, in enter
return next(self.gen)
File "G:\Anaconda\lib\site-packages\scipy\io\matlab\mio.py", line 19, in _open_file_context
f, opened = _open_file(file_like, appendmat, mode)
File "G:\Anaconda\lib\site-packages\scipy\io\matlab\mio.py", line 45, in _open_file
return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: 'D:\2020\ReferenceCode\KernelGAN-master\results\test_images\im_1lll\test_images\im_1_kernel_x2.mat'
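One common cause of this FileNotFoundError is that the nested results folder does not exist when scipy tries to open the .mat file for writing; below is a hedged sketch of guarding the save with os.makedirs (the function name and paths are illustrative, not the repository's code):

    import os
    import numpy as np
    import scipy.io as sio

    def save_kernel(output_dir, img_name, kernel):
        # Create the (possibly nested) output folder before scipy opens the .mat file
        os.makedirs(output_dir, exist_ok=True)
        sio.savemat(os.path.join(output_dir, '%s_kernel_x2.mat' % img_name), {'Kernel': kernel})

    save_kernel('results/test_images/im_1', 'im_1', np.random.rand(13, 13))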

about the GAN loss

Nice work! I tried it on my own data, but there is this problem:
Traceback (most recent call last):
File "train.py", line 53, in
main()
File "train.py", line 36, in main
train(conf)
File "train.py", line 16, in train
gan.train(g_in, d_in)
File "F:\lz666\KernelGAN-master\kernelGAN.py", line 67, in train
self.train_g()
File "F:\lz666\KernelGAN-master\kernelGAN.py", line 82, in train_g
loss_g = self.criterionGAN(d_last_layer=d_pred_fake, is_d_input_real=True)
File "F:\lz666\KernelGAN-master\loss.py", line 26, in forward
return self.loss(d_last_layer, label_tensor)
File "E:\anaconda\lib\site-packages\torch\nn\modules\module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "E:\anaconda\lib\site-packages\torch\nn\modules\loss.py", line 87, in forward
return F.l1_loss(input, target, reduction=self.reduction)
File "E:\anaconda\lib\site-packages\torch\nn\functional.py", line 1700, in l1_loss
reduction = _Reduction.get_enum(reduction)
File "E:\anaconda\lib\site-packages\torch\nn\functional.py", line 30, in get_enum
raise ValueError(reduction + " is not a valid value for reduction")
ValueError: mean is not a valid value for reduction

I do not know much about GANs; my data is a 192*192 PNG file.

Can't run on my MacBook Pro 2017 without an NVIDIA graphics card

When I run your code, I get the following error:

Traceback (most recent call last): File "train.py", line 52, in <module> main() File "train.py", line 34, in main conf = Config().parse(create_params(filename, args)) File "/Users/outro/Downloads/KernelGAN-master/configs.py", line 51, in parse self.set_gpu_device() File "/Users/outro/Downloads/KernelGAN-master/configs.py", line 67, in set_gpu_device torch.cuda.set_device(0) File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 292, in set_device torch._C._cuda_setDevice(device) AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

I try to use torch.cuda.set_device(-1) and after that, I get the following error:

Traceback (most recent call last): File "train.py", line 52, in <module> main() File "train.py", line 35, in main train(conf) File "train.py", line 11, in train gan = KernelGAN(conf) File "/Users/outro/Downloads/KernelGAN-master/kernelGAN.py", line 21, in __init__ self.G = networks.Generator(conf).cuda() File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in cuda return self._apply(lambda t: t.cuda(device)) File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 201, in _apply module._apply(fn) File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 223, in _apply param_applied = fn(param) File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in <lambda> return self._apply(lambda t: t.cuda(device)) File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init _check_driver() File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 94, in _check_driver raise AssertionError("Torch not compiled with CUDA enabled") AssertionError: Torch not compiled with CUDA enabled

Could you tell me how to solve this error?
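A general workaround for CUDA-only code on a machine without an NVIDIA GPU is to pick the device at runtime instead of hard-coding torch.cuda.set_device(0) and unconditional .cuda() calls; a minimal sketch (the conv layer is just a stand-in, not the repository's generator):

    import torch
    import torch.nn as nn

    # Fall back to the CPU when CUDA is unavailable
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    net = nn.Conv2d(3, 3, kernel_size=3).to(device)   # replaces an unconditional .cuda()
    x = torch.randn(1, 3, 64, 64, device=device)
    y = net(x)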

Why expected type torch.FloatTensor but got torch.cuda.FloatTensor?

Thank you for sharing.
I get an error: RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.cuda.FloatTensor
It occurs at # Calculate gradients, i.e. at total_loss_g.backward().
We have not made any changes.

The settings of parameters

Hi, thank you for this nice work, and thank you for sharing the code.

I want to compare our method with KernelGAN+ZSSR. Could you please tell me how to set the parameters in the shared code to reproduce the results in the paper?

For example, lambda_centralized and lambda_sparse are set to 1 and 5 respectively in the paper. However, they are 0 in the shared code.


lambda_sum2one = 0.5
lambda_bicubic = 5
lambda_boundaries = 0.5
lambda_centralized = 0
lambda_sparse = 0


To make a fair comparison, could you please tell me how I should change the code and parameters? By the way, are these parameters fixed across different testing images, or not?

Thanks.

Best wishes!

The code is theoretically wrong, please check

I think a depthwise conv is preferred in the function 'resize_tensor_w_kernel' of 'util.py', because we usually apply the downsampling process to R, G and B separately.

Anyway, the author shuts down the bicubic loss after hundreds of iterations, so this bug goes away.
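For reference, a minimal sketch of the per-channel (depthwise) filtering this issue suggests, assuming an image tensor of shape [1, C, H, W] and a single 2-D blur kernel; this is an illustration of the idea, not the repository's resize_tensor_w_kernel:

    import torch
    import torch.nn.functional as F

    def downscale_depthwise(img, kernel, stride=2):
        # img: [1, C, H, W]; kernel: [kh, kw] blur kernel applied to each channel separately
        c = img.shape[1]
        weight = kernel[None, None].repeat(c, 1, 1, 1)   # one copy of the kernel per channel
        pad = kernel.shape[-1] // 2
        # groups=c keeps R, G and B independent instead of mixing them
        return F.conv2d(img, weight, stride=stride, padding=pad, groups=c)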

'float' object has no attribute 'size'

When I execute the training command given in the README on a custom dataset of JPEG images, the following error arises:

 UserWarning: Using a target size (torch.Size([13])) that is different to the input size (torch.Size([1, 1, 13, 13])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.l1_loss(input, target, reduction=self.reduction)

nn/functional.py", line 2152, in l1_loss
    if not (target.size() == input.size()):
AttributeError: 'float' object has no attribute 'size'

Is this due to the images or to an implementation error?
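The traceback suggests that one of the loss targets is a plain Python float rather than a tensor; here is a small, generic illustration of that error class and its usual fix (not a claim about where the repository builds that target):

    import torch
    import torch.nn as nn

    loss_fn = nn.L1Loss()
    pred = torch.zeros(3)
    # loss_fn(pred, 0.0) would fail: a plain Python float has no .size()
    target = torch.zeros_like(pred)   # wrap the scalar target as a tensor of matching shape
    loss = loss_fn(pred, target)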

About the parameter in the code

Hi! You said: "As written in the paper, all the regularizations are inserted after the bicubic constraint is satisfied and discarded. The final coefficients are as written in the paper (and as you quoted). The parameters you snapped are from the beginning of the training, before the bicubic 'satisfaction'." How do you determine whether the bicubic constraint is satisfied? By iteration count or by some other metric? Thank you!

The time cost is not tolerable

Hello! I use the function imresize to generate LR images from 4K images, but it takes at least 100 s per image. I wonder if I made some mistake. Can you give me some advice? Thanks a lot!

Copy patches instead of view

Every time you add noise to crop_im, you change the original self.input_image:

crop_im += np.random.randn(*crop_im.shape) / 255.0

should be changed from
crop_im = self.input_image[top:top + size, left:left + size, :]
to
crop_im = self.input_image[top:top + size, left:left + size, :].copy()
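A tiny demonstration of why the .copy() matters, since NumPy basic slicing returns a view that shares memory with the original array (illustrative, not repository code):

    import numpy as np

    img = np.zeros((8, 8, 3))
    patch = img[0:4, 0:4, :]                        # a view: shares memory with img
    patch += np.random.randn(*patch.shape) / 255.0
    print(img.any())                                # True: the original image was modified in place

    img = np.zeros((8, 8, 3))
    patch = img[0:4, 0:4, :].copy()                 # an independent copy
    patch += np.random.randn(*patch.shape) / 255.0
    print(img.any())                                # False: img is untouched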

windows device - tensorflow version issues

I am trying to run the program on a Windows laptop, but I keep running into the following error when running ZSSR after the kernel estimation is done:
Loaded runtime CuDNN library: 7.0.5 but source was compiled with: 7.1.4. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.

I was wondering which system you are running your code on and whether you came across this issue.
I checked my system and can't find a 7.0.5 version of CuDNN.

real LR

How do I generate real LR images for the 800 DIV2K training images with your method? I saw that a cubic kernel is used in your imresize function, not the KernelGAN kernel.

Dimension mismatch warning for loss calculation.

Hi,

Pytorch gives the following warnings when I run train.py for kernel estimation.

(venv) samim@joe:~/Desktop/ms/KernelGAN$ python train.py --input-dir ./images/ --SR --real
Scale Factor: X2 	ZSSR: True 	Real Image: True
************************************************************
STARTED KernelGAN on: "./images/KLE_9806.png"...
  0%|                              | 0/3000 [00:00<?, ?it/s]/home/samim/Desktop/ms/resLF/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py:88: UserWarning: Using a target size (torch.Size([13])) that is different to the input size (torch.Size([1, 1, 13, 13])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.l1_loss(input, target, reduction=self.reduction)
/home/samim/Desktop/ms/resLF/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py:88: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.l1_loss(input, target, reduction=self.reduction)
/home/samim/Desktop/ms/resLF/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py:431: UserWarning: Using a target size (torch.Size([2])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
100%|███████████████████| 3000/3000 [01:27<00:00, 34.35it/s]
KernelGAN estimation complete!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running ZSSR X2...

ZSSR configuration is for a real image

It seems that there is a mismatch in the dimensions. Is it by design, or is it not supposed to happen?

I also have another question about doing SR after kernel estimation using ZSSR. Does ZSSR get trained on the input images, or do you somehow use a pretrained ZSSR?

Best regards

Estimating SR kernel for an odd scaling factor (x3 or x5)

Hi, I was able to estimate x2 and x4 SR kernels easily thanks to your implementation. And I also had a look at the derivation for estimating the x4 kernel from the x2 kernel. However, I wonder if it would be possible to estimate a similar SR kernel with an odd scaling factor such as 3. If you could give me some hints or directions on how to achieve this it would be great! Thanks in advance.

Equivalence between generator and extracted kernel

Hi,

Thanks a lot for this great work! I have a quick question regarding the paper.

If I'm understanding it correctly, the idea is that the generator of KernelGAN can always be equated to a single kernel, which can be obtained via, e.g., KernelGAN.calc_curr_k. But do you mean that this equivalence is exact? In other words, is the output of the generator always exactly equal to convolving with this single kernel?

I tried to test this, but from what I saw they do not seem to be the same. Can you please enlighten me on this? Many thanks in advance.
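One way to probe this equivalence is to compare the generator's output with a single strided convolution using the collapsed kernel. Whether the two match exactly depends on the generator being purely linear (no bias or activations), on its boundary handling, and on the subsampling offset, so treat the following only as a rough sketch with illustrative names:

    import torch
    import torch.nn.functional as F

    def check_equivalence(G, k, img, stride=2, atol=1e-4):
        # G: the generator; k: its collapsed kernel as a [kh, kw] tensor; img: [1, C, H, W]
        with torch.no_grad():
            out_g = G(img)
            weight = k[None, None].repeat(img.shape[1], 1, 1, 1)   # one kernel copy per channel
            # 'valid' convolution followed by stride-s subsampling, applied per channel
            out_k = F.conv2d(img, weight, stride=stride, groups=img.shape[1])
        return out_g.shape == out_k.shape and torch.allclose(out_g, out_k, atol=atol)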

Question about ZSSR config

I have a couple of questions regarding ZSSR configs:

  1. I noticed that in the run_zssr function, real_img and noise_scale are completely ignored, and only scale_factor and kernel are specified in these lines:

    KernelGAN/util.py, lines 225 to 228 (commit cb88293):

        if conf.X4:
            sr = ZSSR(conf.input_image_path, scale_factor=[[2, 2], [4, 4]], kernels=[k_2, analytic_kernel(k_2)]).run()
        else:
            sr = ZSSR(conf.input_image_path, scale_factor=2, kernels=[k_2]).run()

    Can you please clarify if my observation is right?

  2. In this issue, the author mentions that ZSSR may run into division problems when using Python 3. Is the ZSSR code in this repo susceptible to this kind of problem?

Using the KernelGAN to obtain Blur Kernel of Monocolor Images or Image with more than 3 channels (Hyperspectral Images)

Dear Team,
I was reading through the code of KernelGAN and I am interested in adapting it for:

  1. Monocolor (1-channel) images.
  2. Hyperspectral images (more than 3 color channels).

Could you let me know what alterations the code requires for the above purposes?

In addition, I need to output the (estimated) blurred image after the GAN network has been trained. For this, we need to pass the entire input LR image (not patch-by-patch as done during training) through the trained generator and take its output (the blurred LR image). Please suggest ways to do this.

Iteration steps

In the paper, "The GAN trains for 3,000 iterations, alternating single optimization steps of G and D, with the
ADAM optimizer (β1 = 0:5; β2 = 0:999). Learning rate is 2e−4, decaying ×0.1 every 750 iters."
why choose iteration to be 3000, what if more or fewer iterations?

I tried different iterations and get different kernel results. Please let me know your thoughts.
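For reference, the schedule quoted above corresponds, in current PyTorch, to roughly the following sketch (with a stand-in parameter, not the repository's training loop):

    import torch

    params = [torch.nn.Parameter(torch.randn(3, 3))]   # stand-in for G's (or D's) parameters
    optimizer = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=750, gamma=0.1)  # x0.1 every 750 iters

    for it in range(3000):                             # 3,000 iterations, alternating G and D steps
        # ... compute the losses and call .backward() here ...
        optimizer.step()
        scheduler.step()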

How do I read the saved Kernel

sio.savemat(os.path.join(conf.output_dir_path, '%s_kernel_x2.mat' % conf.img_name), {'Kernel': k_2})
if conf.X4:
    k_4 = analytic_kernel(k_2)
    sio.savemat(os.path.join(conf.output_dir_path, '%s_kernel_x4.mat' % conf.img_name), {'Kernel': k_4})

Above is the code that saves the kernel. I try to read it with the following code:

 k_2=kernel = scipy.io.loadmat("./model/1004141-6h-A1_kernel_x2.mat")['kernel']

However, the following error occurs. How should I read and apply the kernel correctly?

WARNING:tensorflow:From D:\sofe\anaconda\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
File "D:/docum/PycharmProjects/KernelGAN-master/ZSSRforKernelGAN/test.py", line 25, in
k_2=kernel = scipy.io.loadmat("./model/1004141-6h-A1_kernel_x2.mat")['kernel']
KeyError: 'kernel'
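The save code above stores the kernel under the key 'Kernel' (capital K), while the loading snippet asks for 'kernel', which is what triggers the KeyError; loading with the matching key should work:

    import scipy.io as sio

    # Use the same key that savemat wrote: 'Kernel', not 'kernel'
    k_2 = sio.loadmat('./model/1004141-6h-A1_kernel_x2.mat')['Kernel']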

about the pretrained weight

Thanks for your work. Could you please upload the pre-trained model weights for a quick test? Thanks a lot.

About GAN loss and generator function

Hi, thanks for the great work.

I have a couple questions about GAN setup.

  1. As I understand it, the paper says that the generator's purpose is to model the downsampling operation, i.e. applying a kernel to an image followed by subsampling, as here:
    https://github.com/sefibk/KernelGAN/blob/master/util.py#L37
    https://github.com/sefibk/KernelGAN/blob/master/imresize.py#L157
    But when I look at the generator structure, I see it done in a different order (subsampling + applying the kernel):
    https://github.com/sefibk/KernelGAN/blob/master/networks.py#L11
    So any kernel from the generator would be a dilated 25x25 kernel instead of a 13x13 kernel as the paper says. Is this intentional?

  2. The LSGAN loss function uses L1 instead of L2 (nn.MSELoss):
    https://github.com/sefibk/KernelGAN/blob/master/loss.py#L15

Could you check the data download link?

I learned a lot from your paper. Thank you.
I want to run the code, but it seems that there is a problem with the data download link.
Could you please check it?

The parameters when the image is saved

    max_val = 255 if sr.dtype == 'uint8' else 1.
    plt.imsave(os.path.join(conf.output_dir_path, 'ZSSR_%s.tif' % conf.img_name), sr, vmin=0, vmax=max_val, dpi=1)

I want to ask: what is the significance of max_val? Since Matplotlib does not support saving TIF images, I need another library, such as imageio, to save the TIF images, but it does not have these parameters. Does this affect the final result? The image I get is whiter, as if a transparent white layer were placed on top.
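If imageio is used instead of Matplotlib, the clipping and scaling that vmin/vmax would have done must be applied by hand before saving; a hedged sketch (the function name and conversion choices are illustrative):

    import numpy as np
    import imageio

    def save_tif(sr, path):
        # Clip to the valid range yourself, since imageio.imwrite has no vmin/vmax arguments
        max_val = 255 if sr.dtype == 'uint8' else 1.0
        out = np.clip(sr, 0, max_val)
        if sr.dtype != 'uint8':
            out = (out * 255).round().astype(np.uint8)   # map [0, 1] floats to 8-bit
        imageio.imwrite(path, out)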

Kernel estimation size problem

Training with the image 'DIV2KRK/lr_x2/im_78.png', the ground-truth kernel should be 'gt_k_x2/kernel_78.mat', right? The generated kernel's size seems different from the ground-truth kernel's. Is this the final result?

How to get final kernel size

Thanks for your excellent work!
I have a problem here:
when the config G_kernel_size=13, I get a final kernel size of 17x17;
when the config G_kernel_size=11, the code raises an error.
How can I get the right kernel size?

the settings for 2x downsampling

Hi. Thank you for sharing the code.
Could you tell me how to set the following parameters when generating the 2x downsampled LR testing images, so that the generated images are consistent with the dataset you used? Thanks.

scale_factor = np.array([4, 4]) # choose scale-factor
avg_sf = np.mean(scale_factor) # this is calculated so that min_var and max_var will be more intutitive
min_var = 0.175 * avg_sf # variance of the gaussian kernel will be sampled between min_var and max_var
max_var = 2.5 * avg_sf
k_size = np.array([21, 21]) # size of the kernel, should have room for the gaussian
noise_level = 0.4 # this option allows deviation from just a gaussian, by adding multiplicative noise noise
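If the variance range is meant to scale with the factor (as the code above suggests), switching to 2x amounts to changing scale_factor and letting the derived quantities follow; whether these exact ranges reproduce the published DIV2KRK x2 kernels is something only the authors can confirm:

    import numpy as np

    scale_factor = np.array([2, 2])   # 2x downsampling instead of [4, 4]
    avg_sf = np.mean(scale_factor)    # = 2, so min_var and max_var shrink accordingly
    min_var = 0.175 * avg_sf          # Gaussian variance sampled between min_var and max_var
    max_var = 2.5 * avg_sf
    k_size = np.array([21, 21])       # kernel canvas size, unchanged
    noise_level = 0.4                 # multiplicative noise added on top of the Gaussian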

The reproduced experimental results are lower than those in the paper

Thanks for the great work. I reproduced the experiments for scales x2 and x4 on the DIV2KRK dataset that you provided, but the PSNR/SSIM are 29.93/0.8548 and 26.76/0.7302 respectively, which are both lower than the results in your paper. Could you please upload all of the visual results?

Adjust the blurring level of output LR

Dear Author,

Thank you for your excellent work. Recently I have been trying to generate some LR images from my HR images. The output LR from the estimated kernel is too blurred for my case. Is it possible to generate an LR image that is less blurred? My guess is to adjust the filter size of the generator; is that correct?

Best,

TK

Comparison experiments using SRMD in the paper

Thanks so much for the brilliant idea and work!
I noticed that you compared ZSSR with SRMD using the kernel estimated by KernelGAN. I am wondering how you dealt with the kernel size, since SRMD uses 15x15 kernels. Did you just crop the generated kernel to 15x15 to fit SRMD's setting, or was some other trick adopted?
Thanks for your help and attention!

Why the warning: UserWarning: a target size that is different to the input size?

Your work is admirable.
I have a small question: when I run this program on a real image, why does it print the warning that the target size is not the same as the input size? Everything still works, though. Is there an image size restriction in this application? The warning says this will likely lead to incorrect results due to broadcasting.

D:\sofe\anaconda\python.exe D:/docum/PycharmProjects/KernelGAN-master/train.py --SR --real
WARNING:tensorflow:From D:\sofe\anaconda\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
output D:\docum\PycharmProjects\KernelGAN-master\results
Scale Factor: X2 ZSSR: True Real Image: True


STARTED KernelGAN on: "./test_images\1004141-6h-A1 original.tif"...
0%| | 0/3000 [00:00<?, ?it/s]### D:\sofe\anaconda\lib\site-packages\torch\nn\modules\loss.py:88: UserWarning: Using a target size (torch.Size([13])) that is different to the input size (torch.Size([1, 1, 13, 13])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.l1_loss(input, target, reduction=self.reduction)
D:\sofe\anaconda\lib\site-packages\torch\nn\modules\loss.py:88: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.l1_loss(input, target, reduction=self.reduction)
D:\sofe\anaconda\lib\site-packages\torch\nn\modules\loss.py:431: UserWarning: Using a target size (torch.Size([2])) that is different to the input size (torch.Size([2, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
100%|██████████████████▉| 2999/3000 [04:19<00:00, 12.19it/s]out test_images\1004141-6h-A1 original
KernelGAN estimation complete!

real LR

Hello, friend. I have an LR image and a filter. How do I get the real LR?
