
DRCT: Saving Image Super-resolution away from Information Bottleneck

✨✨ [CVPR NTIRE Oral Presentation]


Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou

Advanced Computer Vision LAB, National Cheng Kung University

Overview

  • Background and Motivation

In CNN-based super-resolution (SR) methods, dense connections (introduced by RDN and the RRDB in ESRGAN, among others) are widely considered an effective way to preserve information and improve performance.

However, SwinIR-based methods such as HAT, CAT, and DAT generally rely on channel attention blocks or on novel, increasingly sophisticated shifted-window attention mechanisms to improve SR performance. These works overlook the information bottleneck: information carried by the feature flow is gradually lost as it propagates deeper into the network.

  • Main Contribution

Our work simply adds dense connections to SwinIR to improve performance, re-emphasizing the importance of dense connections in SwinIR-based SR methods. Adding dense connections within the deep-feature-extraction stage stabilizes the information flow, thereby boosting performance while keeping the design lightweight compared with SOTA methods such as HAT.
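The snippet below is a rough PyTorch sketch of this idea, not the actual DRCT implementation: a plain 3x3 convolution stands in for each Swin-style attention block, and the class name, parameter names, and channel sizes are illustrative only.

import torch
import torch.nn as nn

class DenseFeatureExtraction(nn.Module):
    # Illustrative only: dense connections across a stack of blocks, with a
    # 3x3 conv standing in for each Swin-style attention block.
    def __init__(self, channels=64, growth=32, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(num_blocks)
        )
        # 1x1 conv fuses all intermediate features back to the working width.
        self.fuse = nn.Conv2d(channels + num_blocks * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            # Every block receives the concatenation of all earlier feature maps,
            # so no intermediate information has to squeeze through a narrow path.
            feats.append(block(torch.cat(feats, dim=1)))
        # Residual fusion keeps a stable information path to the output.
        return x + self.fuse(torch.cat(feats, dim=1))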

Benchmark results on SRx4 without x2 pretraining. Multi-Adds are calculated for a 64x64 input.

| Model | Params | Multi-Adds | Forward | FLOPs | Set5 | Set14 | BSD100 | Urban100 | Manga109 | Training Log |
|---|---|---|---|---|---|---|---|---|---|---|
| HAT | 20.77M | 11.22G | 2053M | 42.18G | 33.04 | 29.23 | 28.00 | 27.97 | 32.48 | - |
| DRCT | 14.13M | 5.92G | 1857M | 7.92G | 33.11 | 29.35 | 28.18 | 28.06 | 32.59 | - |
| HAT-L | 40.84M | 76.69G | 5165M | 79.60G | 33.30 | 29.47 | 28.09 | 28.60 | 33.09 | - |
| DRCT-L | 27.58M | 9.20G | 4278M | 11.07G | 33.37 | 29.54 | 28.16 | 28.70 | 33.14 | - |
| DRCT-XL (pretrained on ImageNet) | - | - | - | - | 32.97 / 0.91 | 29.08 / 0.80 | - | - | - | log |

Real DRCT GAN SRx4. (Coming Soon)

| Model | Training Data | Checkpoint | Log |
|---|---|---|---|
| Real-DRCT-GAN_MSE_Model | DF2K + OST300 | Checkpoint | Log |
| Real-DRCT-GAN_Finetuned from MSE | DF2K + OST300 | Checkpoint | Log |

Updates

  • ✅ 2024-03-31: Released the first version of the paper on arXiv.
  • ✅ 2024-04-14: DRCT was accepted by NTIRE 2024, CVPR.
  • ✅ 2024-06-02: The pretrained DRCT-L model is released.
  • ❌ 2024-06-02: MambaDRCT is released (MODEL.PY).
    • Training DRCT + MambaIR is very slow; if you are interested, you are welcome to try to optimize or speed it up. The slowdown may be caused by the recurrent nature of MambaIR combined with the feature-map reuse in DRCT (it may also be a problem with our hardware or package versions).
    • We tried to combine DRCT with the SS2D module in MambaIR. However, the CUDA version on our GPUs cannot be updated to the latest release, which makes it difficult to install the required packages and to optimize the training speed, so we do not plan to keep fixing MambaDRCT. If you are interested, you are welcome to use this code.
  • ✅ 2024-06-09: The pretrained DRCT model is released. See the model zoo.
  • ✅ 2024-06-11: We have received a large number of requests to release pre-trained models and training records from ImageNet for several downstream applications; please refer to the following links:

[Training log on ImageNet] [Pretrained Weight (without fine-tuning on DF2K)]

  • ✅ 2024-06-12: DRCT has been selected for an oral presentation at NTIRE!
  • 💫 2024-06-14: Real_DRCT_GAN will be released in a few days. [MSE_Model] Stay tuned!
  • ✅ 2024-06-14: We have received a large number of requests to release feature maps and LAM visualizations; please refer to ./Visualization/.
  • 2024-06-24: DRCT-v2 is under development.

Environment

Installation

git clone https://github.com/ming053l/DRCT.git
conda create --name drct python=3.8 -y
conda activate drct
# CUDA 11.6
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
cd DRCT
pip install -r requirements.txt
python setup.py develop
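
As an optional sanity check (not part of the official setup; it only assumes the conda environment created above), you can confirm that PyTorch sees your GPU before testing or training:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"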

How To Run Inference on Your Own Dataset

python inference.py --input_dir [input_dir] --output_dir [output_dir] --model_path [model_path]
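
For example, with placeholder paths (the directory names and checkpoint file below are illustrative; substitute your own):

python inference.py --input_dir ./datasets/my_LR --output_dir ./results/my_SR --model_path ./experiments/pretrained_models/DRCT-L_X4.pth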

How To Test

  • Refer to ./options/test for the configuration file of the model to be tested, and prepare the testing data and pretrained model.
  • Then run the following codes (taking DRCT_SRx4_ImageNet-pretrain.pth as an example):
python drct/test.py -opt options/test/DRCT_SRx4_ImageNet-pretrain.yml

The testing results will be saved in the ./results folder.

  • Refer to ./options/test/DRCT_SRx4_ImageNet-LR.yml for inference without the ground truth image.

Note that the tile mode is also provided for limited GPU memory when testing. You can modify the specific settings of the tile mode in your custom testing option by referring to ./options/test/DRCT_tile_example.yml.
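As a rough sketch, tile settings in HAT-style option files usually look like the snippet below; the exact key names and values should be taken from ./options/test/DRCT_tile_example.yml:

tile:
  tile_size: 256   # spatial size of each tile fed to the network
  tile_pad: 32     # overlap between tiles to reduce border artifacts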

How To Train

  • Refer to ./options/train for the configuration file of the model to train.
  • For the preparation of training data, refer to this page. The ImageNet dataset can be downloaded from the official website.
  • Validation data can be downloaded from this page.
  • The training command looks like the following:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml --launcher pytorch

The training logs and weights will be saved in the ./experiments folder.
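
For single-GPU training, a hedged variant of the command above (not from the official instructions; you may need to lower the batch size in the option file) is to drop the distributed launcher:

CUDA_VISIBLE_DEVICES=0 python drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml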

Citations

If our work is helpful to your research, please kindly cite it. Thank you!

BibTeX

@misc{hsu2024drct,
  title={DRCT: Saving Image Super-resolution away from Information Bottleneck}, 
  author = {Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan},
  year={2024},
  eprint={2404.00722},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
@InProceedings{Hsu_2024_CVPR,
  author    = {Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan},
  title     = {DRCT: Saving Image Super-Resolution Away from Information Bottleneck},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {6133-6142}
}

Thanks

Part of our work has been facilitated by HAT, SwinIR, and the LAM framework, and we are grateful for their outstanding contributions.

Contact

If you have any questions, please email [email protected] to discuss with the authors.
