
PESR

Official implementation of Perception-Enhanced Image Super-Resolution via Relativistic Generative Adversarial Networks, ECCV Workshops 2018

Citation

Please cite our project if it is helpful for your research:

@InProceedings{Vu_2018_ECCV_Workshops,
  author = {Vu, Thang and Luu, Tung M. and Yoo, Chang D.},
  title = {Perception-Enhanced Image Super-Resolution via Relativistic Generative Adversarial Networks},
  booktitle = {The European Conference on Computer Vision (ECCV) Workshops},
  month = {September},
  year = {2018}
}

PSNR vs PESR

Dependencies

  • NVIDIA GPUs (training takes about 1 day on 4 Titan Xp GPUs)
  • At least 16 GB of RAM
  • Python 3
  • PyTorch 0.4
  • tensorboardX
  • tqdm
  • imageio

Datasets, models, and results

Dataset

  • Train: DIV2K (800 2K-resolution images)
  • Valid (for visualization): DIV2K (100 val images), PIRM (100 self-val images)
  • Test: Set5, Set14, B100, Urban100, PIRM (100 self-val images), DIV2K (100 val images)
  • Download train+val+test datasets
  • Download test only dataset

Pretrained models

  • Download pretrained models, including one PSNR-optimized model and one perception-optimized model

Paper results

Quick start

  • Download the test dataset and put it into the data/origin/ directory
  • Download the pretrained models and put them into the check_point directory
  • Run python test.py --dataset <DATASET_NAME> (see the example below)
  • Results will be saved into the results/ directory
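
For example, assuming the Set5 folder from the test download sits under data/origin/, the call would be python test.py --dataset Set5.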

Training

  • Download the train+val+test dataset and put it into the data/origin directory
  • Pretrain with L1 loss: python train.py --phase pretrain --learning_rate 1e-4
  • Finetune the pretrained model with GAN: python train.py (see the example below)
  • Models will be saved into the check_point/ directory
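
Put together, a full two-stage run looks like: python train.py --phase pretrain --learning_rate 1e-4, followed by python train.py --pretrained_model check_point/my_model/pretrain/best_model.pt (the explicit checkpoint path follows the default check_point/my_model layout shown in the issue reports below).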

Visualization

  • Start TensorBoard: tensorboard --logdir check_point
  • Open YOUR_IP:6006 in your web browser.
  • The TensorBoard output when finetuning the pretrained model should look similar to:

(TensorBoard screenshots: training curves and sample images)

Comprehensive testing

  • Test the perceptual model: follow Quick start
  • Interpolate between the perceptual model and the PSNR model: python test.py --dataset <DATASET> --alpha <ALPHA>, where ALPHA is the perceptual weight (see the sketch after this list)
  • Test perceptual quality: refer to the PIRM validation code
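
For reference, below is a minimal sketch of one way such an alpha interpolation can be done, by blending the two generator checkpoints in parameter space. The file names are placeholders, and test.py may implement the blend differently (for example, by mixing output images instead):

import torch

# Placeholder checkpoint names; substitute the files from the pretrained-model download.
# Assumes both files are plain state dicts with matching floating-point parameter keys.
psnr_state = torch.load('check_point/psnr_model.pt', map_location='cpu')
perc_state = torch.load('check_point/perceptual_model.pt', map_location='cpu')

alpha = 0.8  # perceptual weight, as passed via --alpha
blended = {k: alpha * perc_state[k] + (1 - alpha) * psnr_state[k] for k in psnr_state}

# Load `blended` into the generator (model.load_state_dict(blended)) before inference,
# or save it as a new checkpoint:
torch.save(blended, 'check_point/interp_alpha_0.8.pt')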

Quantitative and Qualitative results

RED and BLUE indicate the best and second-best results, respectively.

References


Issues

Out of CUDA memory error - any parameters to set?

Hi,
I am very interested in testing your approach and comparing it with other methods on some sample photos I have. It runs for fairly small images, but the ones I really want to test are 640x360, and I get the "out of CUDA memory" error during the process. I only have 2 GB of GPU memory, so are there parameter settings that would let test.py run on images of this size? Also, is the 4x upscaling factor configurable, so that I could change it to 2x?
Thanks!
-Steve
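
A common workaround for limited GPU memory (not something provided by this repository) is to super-resolve the image tile by tile under torch.no_grad() and stitch the 4x outputs back together. A rough sketch, where model is the loaded generator and lr is a 1x3xHxW input tensor:

import torch

def tiled_sr(model, lr, tile=96, scale=4):
    # Super-resolve `lr` (1 x 3 x H x W) tile by tile to limit peak GPU memory.
    # Overlap and blending between tiles are omitted for brevity, so faint seams are possible.
    _, c, h, w = lr.shape
    out = torch.zeros(1, c, h * scale, w * scale, device=lr.device)
    with torch.no_grad():
        for top in range(0, h, tile):
            for left in range(0, w, tile):
                patch = lr[:, :, top:top + tile, left:left + tile]
                sr = model(patch)
                out[:, :, top * scale:top * scale + sr.shape[2],
                          left * scale:left * scale + sr.shape[3]] = sr
    return out

As for 2x output, the scale: 4 entry in the settings dumps below suggests the scale is fixed when the model is trained, so the released 4x models would need retraining to produce 2x results.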

could not reproduce paper results (resolved. thanks)

Thank you for sharing your outstanding work.
I followed the training steps in the README file.
After pretraining with the L1 loss on the DIV2K dataset (200 epochs), I fine-tuned the pretrained model with GAN (200 epochs).
All arguments were left at the default values of your code.

The losses from my training runs are below.

< Pretrain > 
Model check_point/my_model. Epoch [200/200]. Learning rate: 5e-05
Finish train [200/200]. Loss: 6.02
Validating...
Finish valid [200/200]. Best PSNR: 27.5275dB. Cur PSNR: 27.4693dB

< Train > 
Model check_point/my_model/train. Epoch [200/200]. Learning rate: 2.5e-05
Finish train [200/200]. L1: 0.00. VGG: 101.57. G: 10.80. TV: 5.45. Total G: 117.82. D: 0.05
Validating...
Finish valid [200/200]. PSNR: 24.6043dB

The test results of the newly trained model are much blurrier than your paper results and do not preserve edges well.
How should I train to reproduce your paper results?


PS. The following are my commands.

< Pretrain >
python train.py --phase pretrain --learning_rate 1e-4

YOUR SETTINGS
scale: 4
train_dataset: DIV2K
valid_dataset: PIRM
num_valids: 10
num_channels: 256
num_blocks: 32
res_scale: 0.1
phase: pretrain
pretrained_model:
batch_size: 16
learning_rate: 0.0001
lr_step: 120
num_epochs: 200
num_repeats: 20
patch_size: 24
check_point: check_point/my_model
snapshot_every: 10
gan_type: RSGAN
GP: False
spectral_norm: False
focal_loss: True
fl_gamma: 1
alpha_vgg: 50
alpha_gan: 1
alpha_tv: 1e-06
alpha_l1: 0

< Train >
python train.py --pretrained_model check_point/my_model/pretrain/best_model.pt

YOUR SETTINGS
scale: 4
train_dataset: DIV2K
valid_dataset: PIRM
num_valids: 10
num_channels: 256
num_blocks: 32
res_scale: 0.1
phase: train
pretrained_model: check_point/my_model/pretrain/best_model.pt
batch_size: 16
learning_rate: 5e-05
lr_step: 120
num_epochs: 200
num_repeats: 20
patch_size: 24
check_point: check_point/my_model
snapshot_every: 10
gan_type: RSGAN
GP: False
spectral_norm: False
focal_loss: True
fl_gamma: 1
alpha_vgg: 50
alpha_gan: 1
alpha_tv: 1e-06
alpha_l1: 0

train error: RuntimeError: the derivative for 'weight' is not implemented

Hi, thanks for your outstanding work.

Problem:
I encountered an error when fine-tuning with the pretrained model: RuntimeError: the derivative for 'weight' is not implemented. The details are as follows.

After pretraining with the L1 loss on the DIV2K dataset (200 epochs), I planned to fine-tune the pretrained model with GAN (200 epochs).

The loss from pretraining with the L1 loss is below.

< Pretrain >
Model check_point/my_model. Epoch [200/200]. Learning rate: 5e-05
100%|██████████████████████████████████████████████████████████████████████████████████| 1000/1000 [02:08<00:00, 7.77it/s]
Finish train [200/200]. Loss: 5.98
Validating...
100%|██████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:06<00:00, 1.62it/s]
Finish valid [200/200]. Best PSNR: 27.5345dB. Cur PSNR: 27.4683dB

The following is my pretrain command.
< Pretrain >
python train.py --phase pretrain --learning_rate 1e-4
YOUR SETTINGS
fl_gamma: 1
valid_dataset: PIRM
num_epochs: 200
gan_type: RSGAN
check_point: check_point/my_model
spectral_norm: False
batch_size: 16
alpha_vgg: 50
res_scale: 0.1
focal_loss: True
lr_step: 120
snapshot_every: 10
GP: False
scale: 4
train_dataset: DIV2K
alpha_tv: 1e-06
learning_rate: 0.0001
alpha_gan: 1
pretrained_model:
num_valids: 10
num_channels: 256
num_repeats: 20
patch_size: 24
phase: pretrain
num_blocks: 32
alpha_l1: 0

Then I used the pretrained model best_model.pt saved in check_point/my_model/pretrain to fine-tune the model with GAN. It gave the error RuntimeError: the derivative for 'weight' is not implemented.

My fine-tune command:
python train.py --pretrained_model check_point/my_model/pretrain/best_model.pt

YOUR SETTINGS
num_repeats: 20
GP: False
spectral_norm: False
snapshot_every: 10
num_epochs: 200
gan_type: RSGAN
num_channels: 256
lr_step: 120
alpha_vgg: 50
num_blocks: 32
alpha_l1: 0
phase: train
num_valids: 10
batch_size: 16
focal_loss: True
valid_dataset: PIRM
pretrained_model: check_point/my_model/pretrain/best_model.pt
scale: 4
res_scale: 0.1
alpha_gan: 1
train_dataset: DIV2K
check_point: check_point/my_model
alpha_tv: 1e-06
learning_rate: 5e-05
patch_size: 24
fl_gamma: 1

Loading dataset...
Loading model using 1 GPU(s)
Fetching pretrained model check_point/my_model/pretrain/best_model.pt
Model check_point/my_model/train. Epoch [1/200]. Learning rate: 5e-05
0%| | 0/1000 [00:02<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 323, in
main()
File "train.py", line 257, in main
G_loss = f_loss_fn(pred_fake - pred_real, target_real) #Focal loss
File "/home/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/home/PESR-master/model/focal_loss.py", line 13, in forward
return F.binary_cross_entropy_with_logits(x, t, w)
File "/home/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py", line 2077, in binary_cross_entropy_with_logits
return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
RuntimeError: the derivative for 'weight' is not implemented

Does anyone have any idea about this problem?
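
For context, PyTorch raises this error when the weight tensor passed to binary_cross_entropy_with_logits itself requires gradients, which happens if the focal weight in model/focal_loss.py is computed from the discriminator logits without detaching it (an assumption about that file, based on the traceback). A minimal sketch of the failure and the usual workaround:

import torch
import torch.nn.functional as F

logits = torch.randn(8, 1, requires_grad=True)   # stands in for pred_fake - pred_real
target = torch.ones_like(logits)

# A focal-style weight computed from the logits carries gradient history itself.
w = (1 - torch.sigmoid(logits)) ** 1             # fl_gamma = 1, as in the settings above

try:
    F.binary_cross_entropy_with_logits(logits, target, w)
except RuntimeError as e:
    print(e)                                     # the derivative for 'weight' is not implemented

# Detaching the weight keeps the focal scaling but removes it from the autograd graph:
loss = F.binary_cross_entropy_with_logits(logits, target, w.detach())
loss.backward()

If gradients through the modulating factor are actually wanted, an alternative is to compute the unweighted per-element loss (reduction='none') and multiply by the weight explicitly before averaging.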

AssertionError: size of input tensor and input format are different.

Hi, thanks for your wonderful work.

When retraining, I downloaded the datasets and put them into the data/origin directory.

Then I ran pretraining with python train.py --phase pretrain --learning_rate 1e-4.

However, it gave the following error:
Model check_point/my_model. Epoch [1/200]. Learning rate: 0.0001
100%|████████████████████████████████████████████████████████████████████████| 1000/1000 [04:41<00:00, 3.56it/s]
Finish train [1/200]. Loss: 8.88

Validating...
0%| | 0/10 [00:02<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 314, in
main()
File "train.py", line 291, in main
update_tensorboard(epoch, tb, i, lr, sr, hr)
File "/home/18PESR-master/utils.py", line 47, in update_tensorboard
tb.add_image(str(img_idx) + '_LR', inp, epoch)
File "/home/anaconda3/lib/python3.5/site-packages/tensorboardX/writer.py", line 548, in add_image
image(tag, img_tensor, dataformats=dataformats), global_step, walltime)
File "/home/anaconda3/lib/python3.5/site-packages/tensorboardX/summary.py", line 211, in image
tensor = convert_to_HWC(tensor, dataformats)
File "/home/anaconda3/lib/python3.5/site-packages/tensorboardX/utils.py", line 103, in convert_to_HWC
tensor shape: {}, input_format: {}".format(tensor.shape, input_format)

AssertionError: size of input tensor and input format are different. tensor shape: (1, 3, 155, 103), input_format: CHW

Do you have any idea about this issue?

Thanks.
Best regards.
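
For context, tensorboardX's add_image expects a CHW tensor by default, while the tensor being logged here still has a batch dimension (1, 3, 155, 103). A minimal sketch of the usual fix, dropping that leading dimension before logging:

import torch
from tensorboardX import SummaryWriter

tb = SummaryWriter('check_point/tb_demo')
inp = torch.rand(1, 3, 155, 103)            # batched tensor, as in the error message

# tb.add_image('0_LR', inp, 0)              # fails: shape (1, 3, 155, 103) vs. default 'CHW'
tb.add_image('0_LR', inp.squeeze(0), 0)     # works: leading batch dimension removed
tb.close()

In utils.py this would mean squeezing inp (and sr/hr, if they are also batched) before the add_image calls in update_tensorboard; recent tensorboardX versions also accept a dataformats argument (e.g. dataformats='NCHW') instead.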
