
Code release progress:

Todo:

  • Instructions for training code
  • Dataset preparation
  • Interactive interface

Done:

  • Basic training code
  • Environment
  • Pretrained model checkpoint
  • Sampling code

UniColor: A Unified Framework for Multi-Modal Colorization with Transformer

Project page | arXiv (LowRes version) | ACM TOG Paper | BibTex


Zhitong Huang $^{1*}$, Nanxuan Zhao $^{2*}$, Jing Liao $^{1\dagger}$

$^1$: City University of Hong Kong, Hong Kong SAR, China    $^2$: University of Bath, Bath, United Kingdom
$^*$: Both authors contributed equally to this research    $^\dagger$: Corresponding author

Abstract:

We propose the first unified framework UniColor to support colorization in multiple modalities, including both unconditional and conditional ones, such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework for incorporating various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. Particularly, we propose a novel CLIP-based method to convert the text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface showing the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing.

Two-stage Method:

Our framework consists of two stages. In the first stage, all different conditions (stroke, exemplar, and text) are converted to a common form of hint points. In the second stage, diverse results are generated automatically, either from scratch or conditioned on the hint points.
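The hint-point abstraction at the center of the pipeline can be illustrated with a toy sketch. All names below (`HintPoint`, `stroke_to_hints`, `colorize`) are hypothetical and do not reflect the repo's actual API; stage 2 is a trivial stand-in for the Chroma-VQGAN + Hybrid-Transformer model.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HintPoint:
    """A single color hint: a pixel location plus a target RGB color."""
    x: int
    y: int
    rgb: Tuple[int, int, int]

def stroke_to_hints(stroke: List[Tuple[int, int]],
                    rgb: Tuple[int, int, int]) -> List[HintPoint]:
    """Stage 1 (stroke branch): each stroked pixel becomes a hint point.
    In the paper, exemplar and text conditions are likewise converted
    into hint points (the text branch via a CLIP-based method)."""
    return [HintPoint(x, y, rgb) for x, y in stroke]

def colorize(gray: List[List[int]],
             hints: List[HintPoint]) -> List[List[Tuple[int, int, int]]]:
    """Stage 2 stand-in: paint only the hinted pixels. The real model
    propagates plausible color over the whole image instead."""
    h, w = len(gray), len(gray[0])
    out = [[(g, g, g) for g in row] for row in gray]
    for p in hints:
        if 0 <= p.y < h and 0 <= p.x < w:
            out[p.y][p.x] = p.rgb
    return out

gray = [[128] * 4 for _ in range(3)]               # tiny 4x3 grayscale "image"
hints = stroke_to_hints([(0, 0), (1, 0)], (255, 0, 0))  # short red stroke
result = colorize(gray, hints)
```

Because every modality reduces to the same hint-point form, a single stage-2 model can serve all of them, and modalities can be freely mixed by concatenating their hint points.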

Environments

To set up the Anaconda environment:

$ conda env create -f environment.yaml
$ conda activate unicolor

Pretrained Models

Download the pretrained models (including both Chroma-VQGAN and Hybrid-Transformer) from Hugging Face:

  • Model trained on ImageNet - put the file imagenet_step142124.ckpt under folder framework/checkpoints/unicolor_imagenet.
  • Model trained on MSCOCO - put the file mscoco_step259999.ckpt under folder framework/checkpoints/unicolor_mscoco.

To use exemplar-based colorization, download the pretrained models from Deep-Exemplar-based-Video-Colorization, unzip the archive, and place the files in the corresponding folders:

  • video_moredata_l1 under the sample/ImageMatch/checkpoints folder
  • vgg19_conv.pth and vgg19_gray.pth under the sample/ImageMatch/data folder
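After downloading, a quick sanity check can confirm the layout before running the notebooks. This is a hypothetical helper (not part of the repo); the directory names come from the instructions above, and it only checks that each expected folder exists and is non-empty rather than pinning exact filenames.

```python
import os

# Folders that should exist and contain files after checkpoint setup
# (paths taken from the README instructions above).
CHECKPOINT_DIRS = [
    "framework/checkpoints/unicolor_imagenet",
    "framework/checkpoints/unicolor_mscoco",
    "sample/ImageMatch/checkpoints/video_moredata_l1",
    "sample/ImageMatch/data",
]

def unprepared_dirs(repo_root: str) -> list:
    """Return the expected checkpoint folders that are missing or empty."""
    bad = []
    for d in CHECKPOINT_DIRS:
        path = os.path.join(repo_root, d)
        if not os.path.isdir(path) or not os.listdir(path):
            bad.append(d)
    return bad
```

Running `unprepared_dirs(".")` from the repository root should return an empty list once all checkpoints are in place.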

Sampling

In sample/sample.ipynb, we show how to use our framework to perform unconditional, stroke-based, exemplar-based, and text-based colorization.

Comments

Thanks to the authors who made their code and pretrained models publicly available.

BibTex

arXiv:

@misc{huang2022unicolor,
      author = {Huang, Zhitong and Zhao, Nanxuan and Liao, Jing},
      title = {UniColor: A Unified Framework for Multi-Modal Colorization with Transformer},
      year = {2022},
      eprint = {2209.11223},
      archivePrefix = {arXiv},
      primaryClass = {cs.CV}
}

ACM Transactions on Graphics:

@article{10.1145/3550454.3555471,
      author = {Huang, Zhitong and Zhao, Nanxuan and Liao, Jing},
      title = {UniColor: A Unified Framework for Multi-Modal Colorization with Transformer},
      year = {2022},
      issue_date = {December 2022},
      publisher = {Association for Computing Machinery},
      address = {New York, NY, USA},
      volume = {41},
      number = {6},
      issn = {0730-0301},
      url = {https://doi.org/10.1145/3550454.3555471},
      doi = {10.1145/3550454.3555471},
      journal = {ACM Trans. Graph.},
      month = {nov},
      articleno = {205},
      numpages = {16},
}

Contributors

  • luckyhzt
