
Self-Learning Transformations for Improving Gaze and Head Redirection

This repository is the official implementation of Self-Learning Transformations for Improving Gaze and Head Redirection, NeurIPS 2020.

Requirements

We tested our model with Python 3.8.3 on Ubuntu 16.04. The results in our paper were obtained with torch 1.3.1, but we have also tested this codebase with torch 1.7.0 and achieved similar performance. We provide the pre-trained model obtained with the updated version.

First install torch 1.7.0 and torchvision 0.8.1 following the guidance from here, and then install the other packages:

pip install -r requirements.txt

To pre-process datasets, please follow the instructions of this repository. Note that we use full-face images of size 128x128.
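As a quick sanity check on pre-processed inputs, the following sketch verifies that an image array matches the expected 128x128 full-face RGB format (the function name and array layout are assumptions for illustration, not part of the official pre-processing code):

```python
# Sanity check for pre-processed inputs (illustrative sketch; the
# official pre-processing pipeline is in the linked repository).
import numpy as np

def check_face_image(img: np.ndarray) -> bool:
    """Return True if img looks like a 128x128 full-face RGB crop."""
    return img.ndim == 3 and img.shape[:2] == (128, 128) and img.shape[2] == 3

sample = np.zeros((128, 128, 3), dtype=np.uint8)  # placeholder image
print(check_face_image(sample))  # True for a correctly sized crop
```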

Usage

All available configuration parameters are defined in core/config_default.py. In order to override the default values, one can do:

  1. Pass the parameter as a command-line argument. Please replace all _ characters with -.
  2. Create a JSON file such as config/ST-ED.json.

The order of application is:

  1. Default parameters
  2. JSON-provided parameters
  3. CLI-provided parameters
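The resolution order above can be sketched as a layered dictionary merge, where later sources win (the parameter names and values here are illustrative assumptions; the real defaults live in core/config_default.py):

```python
# Sketch of the override order: defaults < JSON < CLI.
# Values are hypothetical, for illustration only.
defaults = {"batch_size": 16, "num_labeled_samples": -1}
json_overrides = {"batch_size": 32}            # e.g. from a config/*.json file
cli_overrides = {"num_labeled_samples": 100}   # e.g. --num-labeled-samples 100

# Later dicts in the merge overwrite earlier ones.
config = {**defaults, **json_overrides, **cli_overrides}
print(config)  # {'batch_size': 32, 'num_labeled_samples': 100}
```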

Training

To train the gaze redirection model in the paper, run this command:

python train_st_ed.py config/ST-ED.json

You can check Tensorboard for training images, losses, and evaluation metrics. Generated images from the test sets are stored in the model folder.

To train in a semi-supervised setting and generate an augmented dataset, run this command (set num_labeled_samples to a desired value):

python train_st_ed.py config/semi-supervise.json

Note that for semi-supervised training, we also train the estimator with only the labeled images. We provide a script for training gaze and head pose estimators, train_facenet.py, so that you can train baseline and augmented estimators and evaluate the data-augmentation performance of our method.
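The labeled/unlabeled split implied by num_labeled_samples can be sketched as follows (the split logic and function name are assumptions for illustration, not the repository's exact implementation):

```python
# Illustrative sketch: the first num_labeled_samples indices keep their
# labels, the rest are treated as unlabeled. Not the repo's actual code.
def split_samples(indices, num_labeled_samples):
    """Split dataset indices into labeled and unlabeled subsets."""
    labeled = indices[:num_labeled_samples]
    unlabeled = indices[num_labeled_samples:]
    return labeled, unlabeled

labeled, unlabeled = split_samples(list(range(10)), 3)
print(labeled)    # [0, 1, 2]
print(unlabeled)  # [3, 4, 5, 6, 7, 8, 9]
```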

Training the redirector takes 1-2 days on a single GPU.

Evaluation

To evaluate the pre-trained full model, run:

python train_st_ed.py config/eval.json

Quantitative evaluation on all test datasets takes a few hours. To speed up the process, disable the calculation of disentanglement metrics, or evaluate on a partial dataset (this is what we do during training).

Pre-trained Models

You can download pretrained models here:

License

This codebase is dual-licensed under the GPL or MIT licenses, with exceptions for files with NVIDIA Source Code License headers, which are under the NVIDIA license.

Contributors

zhengyuf

