Toward Fine-grained Facial Expression Manipulation (ECCV 2020, Paper)

Python 3.6 | PyTorch 0.4.1 | PyTorch 1.3.1

Arbitrary Facial Expression Manipulation. Our model can 1) perform continuous editing between two expressions (top); 2) learn to modify only one facial component (middle); 3) transform expressions in paintings (bottom). From left to right, the emotion intensity is set to 0, 0.5, 0.75, 1, and 1.25.

Single/Multiple AU Editing

Single/Multiple AU Editing. AU4: Brow Lowerer; AU5: Upper Lid Raiser; AU7: Lid Tightener; AU12: Lip Corner Puller; AU15: Lip Corner Depressor; AU20: Lip Stretcher. The legend below the images shows the relative AU intensity. A higher (lower) AU value strengthens (weakens) the corresponding facial action unit in the input image.

Expression Transfer

Arbitrary Facial Expression Manipulation. The top-left image with the blue box is the input; images in odd rows show the target expressions, and images in even rows are the animated results.


Resources

Here are some links to help you learn more about Action Units.

Prerequisites

  • Install PyTorch (version 0.4.1 or >= 1.3.0) and torchvision; a quick version check is sketched after the requirements list below.

  • Install the dependencies listed in requirements.txt:

    numpy
    matplotlib
    tqdm
    pickle
    opencv-python
    tensorboardX
    face_alignment
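
To verify the installation, the following minimal sketch (not part of the repository) prints the installed PyTorch and torchvision versions so you can confirm they fall in the supported range:

    # Minimal sanity check: confirm torch/torchvision are installed and
    # report their versions (expected: 0.4.1 or >= 1.3.0 for torch).
    import torch
    import torchvision

    print('torch:', torch.__version__)
    print('torchvision:', torchvision.__version__)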

Getting Started

1. Data Preparation

  • Prepare your images (EmotionNet, AffectNet, etc.)

  • Extract the Action Units with OpenFace and generate aus_dataset.pkl, which contains a list of dicts, e.g., [{'file_path': <path of image1>, 'aus': <extracted AUs of image1>}, {'file_path': <path of image2>, 'aus': <extracted AUs of image2>}]

  • Please refer to src/samples/aus_dataset.pkl

  • You may use the pickle module to save the .pkl file (see also the fuller data-preparation sketch after this list):

    import pickle

    with open('aus_dataset.pkl', 'wb') as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
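
The repository's expected aus_dataset.pkl layout is shown in src/samples/aus_dataset.pkl; the sketch below is only an illustrative way to assemble it from OpenFace output, not the authors' script. It assumes one OpenFace FeatureExtraction CSV per image, AU intensity columns named like AU01_r (some OpenFace versions prefix header names with a space), and hypothetical data/images and data/openface directories; whether 'aus' should be a list or an array is also an assumption, so compare the result against the provided sample.

    # Illustrative sketch (assumptions noted above): build aus_dataset.pkl
    # from per-image OpenFace FeatureExtraction CSV files.
    import csv
    import glob
    import os
    import pickle

    image_dir = 'data/images'            # hypothetical image directory
    openface_csv_dir = 'data/openface'   # hypothetical OpenFace output directory

    data = []
    for csv_path in glob.glob(os.path.join(openface_csv_dir, '*.csv')):
        with open(csv_path) as f:
            row = next(csv.DictReader(f))  # one detected face/image per CSV
        # Keep AU intensity columns (names ending in '_r'), stripping the stray
        # spaces that some OpenFace versions leave in the CSV header.
        aus = [float(v) for k, v in sorted(row.items())
               if k.strip().startswith('AU') and k.strip().endswith('_r')]
        name = os.path.splitext(os.path.basename(csv_path))[0]
        data.append({'file_path': os.path.join(image_dir, name + '.jpg'),
                     'aus': aus})

    with open('aus_dataset.pkl', 'wb') as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)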

2. Training

To train, please modify the parameters in launch/train.sh and run:

bash launch/train.sh

Citation

If you find this repository helpful, use this code, or adopt ideas from the paper in your research, please cite:

@inProceedings{ling2020finegrained,
  title     = {Toward Fine-grained Facial Expression Manipulation},
  author    = {Ling, Jun and Xue, Han and Song, Li and Yang, Shuhui and Xie, Rong and Gu, Xiao},
  booktitle = {Proceedings of European Conference on Computer Vision (ECCV)},
  year      = {2020}
}

Contact

Please contact [email protected] or open an issue for any questions or suggestions.

Acknowledgement
