
sadrnet's Introduction

SADRNet

Paper link: SADRNet: Self-Aligned Dual Face Regression Networks for Robust 3D Dense Face Alignment and Reconstruction


Requirements

python                 3.6.2
matplotlib             3.1.1  
Cython                 0.29.13
numba                  0.45.1
numpy                  1.16.0   
opencv-python          4.1.1
Pillow                 6.1.0                 
pyrender               0.1.33                
scikit-image           0.15.0                
scipy                  1.3.1
torch                  1.2.0                 
torchvision            0.4.0

Pretrained model

Link: https://drive.google.com/file/d/1mqdBdVzC9myTWImkevQIn-AuBrVEix18/view?usp=sharing .

Please put it under data/saved_model/SADRNv2/.

Please set ./SADRN as the working directory when running code in this repo.

Predicting

  • Put images under data/example/.

  • Run src/run/predict.py.

The network takes cropped 256×256×3 face images as input.
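The repo does not ship a face cropper for arbitrary images, so as a rough illustration of what "cropped 256×256×3" implies, here is a minimal sketch of turning a detected face bounding box into a square crop window (the function name and the 1.25 margin are assumptions, not part of the repo); the crop would then be resized to 256×256 with any image library:

```python
def square_crop_box(left, top, right, bottom, scale=1.25):
    """Expand a face bounding box to a square crop window.

    Returns (x0, y0, x1, y1) of a square region centered on the box,
    enlarged by `scale` so the whole face fits.  The resulting crop is
    then resized to 256x256 before being fed to the network.
    """
    cx = (left + right) / 2.0
    cy = (top + bottom) / 2.0
    half = max(right - left, bottom - top) * scale / 2.0
    return (int(cx - half), int(cy - half), int(cx + half), int(cy + half))
```

The margin simply keeps forehead and chin inside the crop; any face detector's box can feed this function.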

Training

  • Download 300W-LP and AFLW2000-3D at http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3ddfa/main.htm .

  • Extract them into data/packs/AFLW2000 and data/packs/300W_LP.

  • Please refer to face3d to prepare the BFM data, then move the generated files from Out/ to data/Out/.

  • Run src/run/prepare_dataset.py; it will take several hours.

  • Run src/run/train_block_data.py. Some training settings are included in config.py and src/configs.

Acknowledgements

We especially thank the contributors of the face3d codebase for providing helpful code.


sadrnet's Issues

License

Hey guys

Thanks for sharing your work and releasing the code. However, you would officially need to add a license to allow people to use your code or build upon it for academic/industrial research.

You could add the Apache-2.0 License like mmediting (https://github.com/open-mmlab/mmediting), or a BSD License like pix2pixHD (https://github.com/NVIDIA/pix2pixHD), or anything else.

Do you plan to add a License to your code?

prepare_dataset.py: What is Extra_LP? train_blocks is empty

When I run src/run/prepare_dataset.py, I encounter an error: FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'

skip  data/dataset/300W_LP_crop/HELEN_Flip/HELEN_173153923_2_12
skip  data/dataset/300W_LP_crop/HELEN_Flip/HELEN_2236814888_2_3
skip  data/dataset/300W_LP_crop/IBUG_Flip/IBUG_image_018_5
skip  data/dataset/300W_LP_crop/LFPW/LFPW_image_train_0328_6
skip  data/dataset/300W_LP_crop/LFPW_Flip/LFPW_image_train_0380_3
skip  data/dataset/300W_LP_crop/landmarks/AFW
skip  data/dataset/300W_LP_crop/landmarks/HELEN
skip  data/dataset/300W_LP_crop/landmarks/IBUG
skip  data/dataset/300W_LP_crop/landmarks/LFPW
0 data added
saving data path list
data path list saved
0 data added
saving data path list
Traceback (most recent call last):
  File "src/run/prepare_dataset.py", line 19, in <module>
    train_dataset = make_dataset(TRAIN_DIR, 'train')
  File "./src/dataset/dataloader.py", line 217, in make_dataset
    raw_dataset.add_image_data(folder, mode)
  File "./src/dataset/dataloader.py", line 116, in add_image_data
    self.save_image_data_paths(all_data, data_dir)
  File "./src/dataset/dataloader.py", line 120, in save_image_data_paths
    ft = open(f'{data_dir}/all_image_data.pkl', 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'
worker: 7 end 236/250  data/packs/AFLW2000/image00451.jpg
worker: 3 end 230/15307  data/packs/300W_LP/HELEN/HELEN_2618147986_1_3.jpg
worker: 5 end 236/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0304_13.jpg
worker: 1 end 242/15307  data/packs/300W_LP/IBUG_Flip/IBUG_image_030_3.jpg
worker: 4 end 245/250  data/packs/AFLW2000/image02453.jpg
worker: 0 end 239/15307  data/packs/300W_LP/HELEN/HELEN_248684423_1_12.jpg
worker: 6 end 235/15301  data/packs/300W_LP/HELEN_Flip/HELEN_2419679570_1_5.jpg
worker: 2 end 245/15307  data/packs/300W_LP/HELEN/HELEN_2345048760_1_10.jpg
worker: 6 end 15168/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0476_15.jpg
worker: 0 end 15281/15301  data/packs/300W_LP/HELEN_Flip/HELEN_2466594504_1_9.jpg
worker: 5 end 15270/15307  data/packs/300W_LP/HELEN/HELEN_2882149940_1_6.jpg
worker: 1 end 15287/15301  data/packs/300W_LP/HELEN_Flip/HELEN_3026147764_1_11.jpg
worker: 7 end 15272/15307  data/packs/300W_LP/HELEN/HELEN_111835766_1_16.jpg
worker: 2 end 15259/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0741_0.jpg
worker: 3 end 15270/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0051_4.jpg
worker: 4 end 15306/15307  data/packs/300W_LP/LFPW/LFPW_image_train_0382_5.jpg

My question is: what is Extra_LP? Is it a dataset? It is not mentioned in the README or the paper.
And where should I get Extra_LP/all_image_data.pkl?

Another problem: after I run src/run/prepare_dataset.py, I find that SADRNet/data/dataset/train_blocks is generated but empty. Is that correct, or did I miss something?

osmesa error

Traceback (most recent call last):
  File "src/run/predict.py", line 22, in <module>
    from src.visualize.render_mesh import render_uvm  # render_face_orthographic,
  File "/workspace/SADRNet/./src/visualize/render_mesh.py", line 16, in <module>
    r = pyrender.OffscreenRenderer(CROPPED_IMAGE_SIZE, CROPPED_IMAGE_SIZE)
  File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 134, in _create
    self._platform.init_context()
  File "/usr/local/lib/python3.8/dist-packages/pyrender/platforms/osmesa.py", line 19, in init_context
    from OpenGL.osmesa import (
ImportError: cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa' (/usr/local/lib/python3.8/dist-packages/OpenGL/osmesa/__init__.py)
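One common workaround for this class of error is to select pyrender's headless backend explicitly before the first pyrender/OpenGL import. A sketch, assuming an EGL-capable GPU driver is available (otherwise a libosmesa build new enough to export OSMesaCreateContextAttribs is needed):

```python
import os

# pyrender picks its headless backend from this variable, and it must be
# set before pyrender (or OpenGL) is first imported.  'egl' avoids OSMesa
# entirely on machines with a working GPU driver; 'osmesa' requires a
# libosmesa that exports OSMesaCreateContextAttribs.
os.environ["PYOPENGL_PLATFORM"] = "egl"

# import pyrender  # import only after the variable is set
```

Placing these lines at the very top of src/run/predict.py (before any project imports) is the safest way to guarantee the ordering.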

How to extract Point Cloud?

Thanks for sharing this project. As mentioned in the paper

we choose the face region containing 19K points

I am interested in extracting the point cloud and using CPD for skin displacement. Can you point me to where these point clouds can be extracted?

Thanks

Instruction for running this open source code

I followed README.md to load the checkpoint "net_021.pth" provided by the author, but the process was not smooth,
so I am recording what I did to fix those problems.

Info

  • Done: Not Yet

Environment

  • I followed requirements.txt to install the packages I needed, using conda.
  • Every package was installed following requirements.txt; here is something you should notice:
    [screenshot]

Torch version

  • torch is 1.2.0, and the version matters when you load "net_021.pth" as the pre-trained checkpoint.
    As another user mentioned, different versions of torch use different strategies for the convolution operation. So if you want to use the pre-trained model "net_021.pth", check your torch version.

How about running with a newer torch such as 1.13.0 or 2.0.0?

  • Here is how to fix the feature-map size mismatch from the message below.
    [screenshot]
    • I just changed self.conv2 in ResBlock4 from conv4x4 to conv3x3, and you get a well-functioning but untrained model.
    • And maybe you can train it now? (I haven't done this yet.)

Numba

  • The numba error message may be related to the version of a dependency not listed in requirements.txt: llvmlite==0.32.1
    [screenshot]

Inference

  • In the README chapter Predicting
    • the steps are currently too brief to follow, and if you use data that does not come from AFLW2000-3D or 300W-LP, you can't get what you want.

Predicting AFLW2000-3D

  • The original command is only suitable for AFLW2000-3D or 300W-LP data that has gone through pre-processing by prepare_dataset.py
    • Like this:
      [screenshot]
    • And you get this:
      [screenshot]

How to predict our own data?

  • The easiest way is:
    • Put a jpg file (e.g. name.jpg) in data/example/name
    • Add an if here:
      • "src/dataset/dataloader.py; class ImageData; func read_path; before the loadmat call at line 41"
        [screenshot]
      • And you can predict whatever you want!
        [screenshot]
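The poster's patch is only shown as a screenshot, but the idea is to guard the loadmat call so that plain images without annotations still run through prediction. A hypothetical sketch (the helper name and the `_info.mat` naming are assumptions inferred from this issue thread, not the actual repo code):

```python
import os

def info_mat_path(image_path):
    """Return the path of the matching _info.mat file, or None if absent."""
    info_path = os.path.splitext(image_path)[0] + '_info.mat'
    return info_path if os.path.exists(info_path) else None

# Inside ImageData.read_path, the guard would look roughly like:
#     info = info_mat_path(self.image_path)
#     if info is not None:
#         mat = sio.loadmat(info)   # original ground-truth branch
#     # else: skip the annotation, run prediction only
```

With a guard like this, unannotated images simply skip the evaluation path instead of crashing on a missing .mat file.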

Face-Profiling Algorithm and 'data/dataset/Extra_LP' ?

  • Undone....

Any discussion is welcome. Hope we can fight together!

where is the uv_triangles.npy

Hello, thanks for your excellent work!

I tried to run prediction, but I can't because "uv_triangles.npy" is not found.
How do you generate this file?

how to generate uv_posmap_path for example images?

Hello,

Has anyone gotten the predict task to work with plain test images rather than a dataset? I tried some test images, but the program requests a uv_posmap_path for each test image (in dataloader.py). I am not sure how these uv_posmap files are generated. Please help.

Thanks in advance for your help.

To compute RecLoss, why do you compute outer_interocular_dist? Why don't you use the bbox_size from KptNME2D directly?

To compute RecLoss, why do you compute outer_interocular_dist?

SADRNet/src/model/loss.py

Lines 353 to 359 in a5e6fac

outer_interocular_dist = y_true[uv_kpt_ind[36, 0], uv_kpt_ind[36, 1]] - y_true[
    uv_kpt_ind[45, 0], uv_kpt_ind[45, 1]]
bbox_size = np.linalg.norm(outer_interocular_dist[0:3])
dist = torch.from_numpy(dist)
# loss = np.mean(dist / bbox_size)
loss = torch.mean(dist / bbox_size)

Why don't you use the bbox_size from KptNME2D directly?

SADRNet/src/model/loss.py

Lines 255 to 260 in a5e6fac

left = torch.min(gt[:, 0, :], dim=1)[0]
right = torch.max(gt[:, 0, :], dim=1)[0]
top = torch.min(gt[:, 1, :], dim=1)[0]
bottom = torch.max(gt[:, 1, :], dim=1)[0]
bbox_size = torch.sqrt((right - left) * (bottom - top))
dist = dist / bbox_size
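For concreteness, the two normalizers generally give different numbers on the same landmarks. A toy example with three made-up 2D points (indices 36 and 45 are the outer eye corners in the 68-landmark scheme); this illustrates the question but does not answer which normalizer the authors intended:

```python
import math

# Toy 2D landmarks in pixels: outer eye corners (indices 36 and 45 in
# the 68-point scheme) and a chin point (index 8).
pts = {36: (60.0, 100.0), 45: (140.0, 100.0), 8: (100.0, 190.0)}

# RecLoss-style normalizer: outer interocular distance.
iod = math.dist(pts[36], pts[45])            # 80.0

# KptNME2D-style normalizer: sqrt of the landmark bounding-box area.
xs = [p[0] for p in pts.values()]
ys = [p[1] for p in pts.values()]
bbox = math.sqrt((max(xs) - min(xs)) * (max(ys) - min(ys)))  # sqrt(80*90)
```

Dividing the same error by iod versus bbox yields different NME values, so the choice of normalizer does change the reported loss scale.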

Request for a demo code

Hi,

Is there a demonstration script available for running on videos or webcam feeds?
I tried to modify the predict.py script but have been running into errors for a couple of days.
Thank you.

RuntimeError: CUDA error: device-side assert triggered

When I train the model from scratch, sometimes I encounter such an error:

$ python src/run/train_block_data.py
Namespace(visible_device='0')
True 4 0 GeForce GTX 1080
loading data path list
data path list loaded
2000 data added
Epoch: 0
[epoch:0,block:0/200, iter:39/38, time:41] loss: 4.39420 offset_uvm: 0.13145 face_uvm: 0.00000 kpt_uvm: 4.22837 attention_mask: 0.02870 edge: 0.00101 norm: 0.00467 
val_offset_uvm: 0.05806 val_face_uvm: 0.83816 val_kpt_uvm: 1.05301 val_attention_mask: 0.39309 125  
new best 2.3423 improved from 100000.0000
Epoch: 1
[epoch:1,block:0/200, iter:39/38, time:25] loss: 2.14687 offset_uvm: 0.13437 face_uvm: 0.00000 kpt_uvm: 1.98520 attention_mask: 0.02144 edge: 0.00104 norm: 0.00483 
val_offset_uvm: 0.05771 val_face_uvm: 0.42581 val_kpt_uvm: 0.45449 val_attention_mask: 0.24950 125  
new best 1.1875 improved from 2.3423
Epoch: 2
[epoch:2,block:0/200, iter:39/38, time:26] loss: 1.72705 offset_uvm: 0.13805 face_uvm: 0.00000 kpt_uvm: 1.56464 attention_mask: 0.01831 edge: 0.00106 norm: 0.00500 
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
  File "src/run/train_block_data.py", line 346, in <module>
    trainer.train()
  File "src/run/train_block_data.py", line 281, in train
    'eval')
  File "/home/heyuan/Environments/anaconda3/envs/SADRNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/heyuan/Research/3d_face/SADRNet/src/model/SADRNv2.py", line 128, in forward
    face_uvm = self.rebuilder(offset_uvm, kpt_uvm,attention)
  File "/home/heyuan/Environments/anaconda3/envs/SADRNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/heyuan/Research/3d_face/SADRNet/src/model/modules.py", line 374, in forward
    [attention[i, 0, kpt_ind[i, :, 1] // 8, kpt_ind[i, :, 0] // 8] for i in range(B)])
  File "/home/heyuan/Research/3d_face/SADRNet/src/model/modules.py", line 374, in <listcomp>
    [attention[i, 0, kpt_ind[i, :, 1] // 8, kpt_ind[i, :, 0] // 8] for i in range(B)])
RuntimeError: CUDA error: device-side assert triggered
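The assert fires in the line that indexes the attention map with kpt_ind // 8: if a regressed keypoint coordinate falls outside the 256-pixel crop, the cell index exceeds the attention map (presumably 256/8 = 32 cells wide). One hedged fix is to clamp the coordinates before indexing, e.g. `kpt_ind.clamp_(0, 255)` in torch; the bound arithmetic in plain Python:

```python
def clamp_index(v, cells):
    """Clamp a pixel coordinate so that v // 8 stays inside a map of
    `cells` cells (valid cell indices are 0 .. cells - 1)."""
    return max(0, min(v, cells * 8 - 1))

# With a 32-cell attention map, a stray coordinate like 260 would index
# cell 260 // 8 = 32, one past the end, triggering the device-side
# assert; clamping keeps it at cell 31.
```

Whether out-of-range keypoints indicate a data problem or an unstable early-training prediction is a separate question; clamping only prevents the crash.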

run error

When I run src/run/predict.py, I get the following error. Thanks.

folders= 1
dirs= data/example
dirs= ['image00354', 'image00357', 'image00351', 'image00355', 'image00359', 'image00350', 'image00352']
7dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
7 data added
saving data path list
data path list saved
get data 7
 11 7
Traceback (most recent call last):
  File "src/run/predict.py", line 246, in <module>
    evaluator.evaluate_example(predictor_1)
  File "src/run/predict.py", line 223, in evaluate_example
    out = predictor.model({'img': image}, 'predict')
  File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "./src/model/SADRNv2.py", line 98, in forward
    x = self.block1(x)
  File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "./src/model/modules.py", line 540, in forward
    out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3

About data augmentation for occlusion

Thanks for your great work!!!

During data augmentation, especially occlusion augmentation, I find that the whole face region of an image can be occluded because of the randomness. I want to know how you handle these ill images. Do you just delete them, or something else?

Hope for your reply! Thanks again!

Issue for running the pre-trained model with Pytorch version > 1.4

Hi,
Thanks a lot for sharing the code and the pre-trained model of your awesome paper. Really appreciated.

We tried to load and test the pre-trained model, but we got an unexpected-tensor-shape error for one of the building blocks, class ResBlock4(nn.Module).

It seems that there was an update for the Conv layers after version 1.4 (https://discuss.pytorch.org/t/did-conv2d-shapes-change-between-torch-1-4-0-and-1-6-0/93859).

I would like to ask whether you could share a new pre-trained model trained with PyTorch version > 1.4?

Thanks,
Amin

ImportError: cannot import name 'uv_triangles'

When I run the python src/run/train_block_data.py command, the following error is reported. How can I solve it?

note: UV_TRIANGLES_PATH = '../data/uv_data/uv_triangles.npy' exists.

Traceback (most recent call last):
  File "src/run/train_block_data.py", line 343, in <module>
    trainer = SADRNv2Trainer()
  File "src/run/train_block_data.py", line 319, in __init__
    super(SADRNTrainer, self).__init__()
  File "src/run/train_block_data.py", line 122, in __init__
    self.model = self.get_model()
  File "src/run/train_block_data.py", line 325, in get_model
    from src.model.SADRNv2 import get_model
  File "/media/bangyanhe/disk/SADRNet-main/src/model/SADRNv2.py", line 2, in <module>
    from src.model.loss import *
  File "/media/bangyanhe/disk/SADRNet-main/src/model/loss.py", line 7, in <module>
    from src.dataset.uv_face import face_mask_np, face_mask_fix_rate, foreface_ind, uv_kpt_ind, uv_edges, uv_triangles
ImportError: cannot import name 'uv_triangles'

0 data added

[screenshot]
I think something is wrong with the code; it can't read the images in the folder. After I changed the code, there is no corresponding _info.mat:
[screenshot]
What should the format of the testing images be?
Thank you for your response.

the cuda version

Does the code support running in a CUDA 11.3 environment? The code is built with PyTorch version 1.2.0, which only supports CUDA 10.0.

There is a mismatch of the sizes of the feature maps

 File "D:\anaconda3\envs\mypytorch\lib\site-packages\torch\nn\modules\module.py", line 1488, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\Ubuntu\pyproject\3Dface_commpare\SADRNet-main\src\model\modules.py", line 540, in forward
    out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3

In SADRNet-main/src/model/SADRNv2.py, class SADRNv2, the input of layer0 has size [1, 3, 256, 256] and the output has size [1, 16, 255, 255]:

[screenshot]

and then there is a mismatch of the sizes of the feature maps:

[screenshot]
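The off-by-one maps follow directly from the Conv2d output-size formula: an even 4×4 kernel cannot be padded symmetrically to preserve spatial size at stride 1, while a 3×3 kernel with padding 1 can, which is consistent with the conv4x4 → conv3x3 workaround reported in another issue. A quick check (assuming stride 1 and padding 1 for the layers in question):

```python
def conv_out(n, k, s=1, p=0):
    """Spatial output size of a Conv2d: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# An even kernel with integer padding shrinks the map by one pixel,
# matching the reported 256 -> 255 output of layer0:
assert conv_out(256, k=4, p=1) == 255
# An odd kernel with "same" padding preserves the size:
assert conv_out(256, k=3, p=1) == 256
```

Once one branch of a residual block shrinks by a pixel while the identity branch does not, the `out += identity` addition fails with exactly the 129-vs-128 style mismatch shown above.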

No module named 'config'

Thanks for your great work! When I run src/run/predict.py, it reports No module named 'config'. Should I set the working directory? My current working directory is already SADRN.
