Source code and dataset for "Effective Eyebrow Matting with Domain Adaptation", which will appear at the Pacific Graphics 2022 conference.
Linux and Windows are both supported, but we recommend Linux for performance reasons.
- torch >= 1.11.0
- 64-bit python 3.8
- tensorboardX
- numpy
- opencv-python
- toml
- easydict
- pprint
Path | Description |
---|---|
DAM-Net-eyebrow-matting-dataset | Main directory of the dataset |
├ annotated-dataset | Manually annotated test eyebrow matting dataset containing various eyebrow images |
│ ├ image | 68 original real-world eyebrow images |
│ ├ mask | 68 manually annotated corresponding eyebrow mattes |
│ ├ trimap | Full gray trimap inputs for inference |
│ └ trimap2 | Trimap inputs for the comparison methods [Sun et al.] and [Li and Lu] |
├ real | 1,215 unlabeled real-world eyebrow images |
└ synthetic-dataset | Synthetic eyebrow matting dataset |
&nbsp;&nbsp;├ test | 200 synthetic eyebrow matting samples for inference |
&nbsp;&nbsp;│ ├ image | 200 rendered eyebrow images |
&nbsp;&nbsp;│ ├ mask | 200 corresponding eyebrow mattes |
&nbsp;&nbsp;│ ├ trimap | Full gray trimap inputs for inference |
&nbsp;&nbsp;│ └ trimap2 | Trimap inputs for the comparison methods [Sun et al.] and [Li and Lu] |
&nbsp;&nbsp;└ train | 800 synthetic eyebrow matting samples for training |
&nbsp;&nbsp;&nbsp;&nbsp;├ image | 800 rendered eyebrow images |
&nbsp;&nbsp;&nbsp;&nbsp;└ mask | 800 corresponding eyebrow mattes |
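As a quick sanity check after downloading, the file counts in the table above can be verified with a short script. This is my own helper, not part of the repository; it assumes the directory layout shown in the table and that each sample is a single file:

```python
from pathlib import Path

# Expected file counts per sub-directory, taken from the dataset table above.
EXPECTED = {
    "annotated-dataset/image": 68,
    "annotated-dataset/mask": 68,
    "real": 1215,
    "synthetic-dataset/test/image": 200,
    "synthetic-dataset/test/mask": 200,
    "synthetic-dataset/train/image": 800,
    "synthetic-dataset/train/mask": 800,
}

def check_dataset(root):
    """Return a dict mapping each sub-path to a (found, expected) count pair."""
    root = Path(root)
    return {
        sub: (len(list((root / sub).glob("*"))), n)
        for sub, n in EXPECTED.items()
    }

if __name__ == "__main__":
    for sub, (found, expected) in check_dataset("DAM-Net-eyebrow-matting-dataset").items():
        status = "ok" if found == expected else "MISMATCH"
        print(f"{sub}: {found}/{expected} {status}")
```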
We train our network in a semi-supervised manner: it learns domain-invariant mid-level alpha features from the synthetic eyebrow matting dataset and unlabeled real-world images via adversarial learning.
Path | Description |
---|---|
checkpoints | Main directory of the pretrained models. |
├ Baseline | Main directory of the Baseline model. |
│ └ best_model.pth | Baseline model trained on our synthetic matting dataset. Save to ./pretrain/Baseline/ . |
├ DAM-Net | Main directory of the DAM-Net model. |
│ └ best_model.pth | DAM-Net model trained on our synthetic matting dataset and unlabeled real-world images. Save to ./pretrain/DAM-Net/ . |
└ ResNet34_En_nomixup | Customized ResNet-34 backbone trained on ImageNet. Save to ./pretrain/ . |
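Before training or inference, it is worth confirming the weights were saved to the locations the table above expects. A small sanity-check helper of my own (not part of the repository; the exact backbone filename is an assumption):

```python
from pathlib import Path

# Checkpoint locations taken from the pretrained-models table above.
REQUIRED = [
    "pretrain/Baseline/best_model.pth",
    "pretrain/DAM-Net/best_model.pth",
    "pretrain/ResNet34_En_nomixup",  # backbone weights; exact filename may differ
]

def missing_checkpoints(root="."):
    """Return the required checkpoint paths that do not exist under root."""
    return [p for p in REQUIRED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    for p in missing_checkpoints():
        print("missing:", p)
```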
For inference, a full gray trimap of the same size as the input is required. Eyebrow images of any size can be used.
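Such a trimap can be generated to match any input image. A minimal sketch (not from the repository's code; the gray value 128 for the unknown region and the file names are my assumptions):

```python
import numpy as np

def make_full_gray_trimap(height, width, unknown_value=128):
    """Build a uniform gray trimap: every pixel marked as unknown."""
    return np.full((height, width), unknown_value, dtype=np.uint8)

if __name__ == "__main__":
    import cv2  # opencv-python, from the requirements above
    image = cv2.imread("eyebrow.png")           # placeholder input path
    trimap = make_full_gray_trimap(*image.shape[:2])
    cv2.imwrite("eyebrow_trimap.png", trimap)   # same size as the input
```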
TOML files in ./config/ are used for configuration. You can find the definitions and available options in ./utils/config.py .
Our source code is based on GCA. We trained the network on a Windows desktop PC with a single NVIDIA RTX 2080 (8GB memory), an Intel Xeon W-2123 3.60 GHz CPU, and 32GB RAM.
First, set your training and validation data paths in config/DAM_Net.toml:
```toml
[data]
train_fg = ""
train_alpha = ""
train_bg = ""
pupil_bg = ""
real_image = ""
test_merged = ""
test_alpha = ""
test_trimap = ""
```
Then, you can train the model with:

```shell
python -u eyebrow_train.py --config=config/DAM_Net.toml
```
You can run inference with:

```shell
sh ./test.sh your_test_image_path DAM-Net
```