mcg-nju / sadrnet
[TIP 2021] SADRNet: Self-Aligned Dual Face Regression Networks for Robust 3D Dense Face Alignment and Reconstruction
License: Apache License 2.0
Hi,
Thanks a lot for sharing the code and the pre-trained model of your awesome paper. Really appreciated.
We tried to load and test the pre-trained model, but we hit this error: unexpected tensor shape for one of the building blocks, class ResBlock4(nn.Module).
It seems that there was an update to the Conv layers after version 1.4 (https://discuss.pytorch.org/t/did-conv2d-shapes-change-between-torch-1-4-0-and-1-6-0/93859).
We would like to ask whether you could share a new pre-trained model trained with PyTorch version > 1.4?
Thanks,
Amin
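Before asking for a new checkpoint, it can help to pin down exactly which tensors disagree. A minimal sketch in pure Python: `model_shapes` and `ckpt_shapes` are hypothetical stand-ins for `model.state_dict()` and the loaded checkpoint (the parameter names below are made up for illustration, not taken from the repo):

```python
def shape_mismatches(model_shapes, ckpt_shapes):
    """Compare two {param_name: shape} dicts and report disagreements."""
    report = []
    for name, shape in model_shapes.items():
        if name not in ckpt_shapes:
            report.append(f"missing in checkpoint: {name}")
        elif ckpt_shapes[name] != shape:
            report.append(f"{name}: model {shape} vs checkpoint {ckpt_shapes[name]}")
    return report

# Hypothetical example in the spirit of the ResBlock4 error above:
model = {"block4.conv1.weight": (128, 64, 3, 3)}
ckpt = {"block4.conv1.weight": (129, 64, 3, 3)}
# shape_mismatches(model, ckpt) reports the single off-by-one channel mismatch
```

In practice you would build the two dicts with `{k: tuple(v.shape) for k, v in sd.items()}` from each state dict and eyeball the report before deciding whether a retrained checkpoint is really needed.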
Thanks for your great work! When I run src/run/predict.py, it reports No module named 'config'. Should I set the working directory? My current working directory is already SADRN.
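A common cause of this kind of error is that the script is run as a file rather than as a module, so the repo root is never on `sys.path`. One hedged workaround (a sketch, assuming the layout shown in these issues, with the script at `<repo>/src/run/predict.py` and `config` importable from the root):

```python
import os
import sys

# Assumption: this snippet sits two directory levels below the repo root
# (e.g. in src/run/predict.py). Fall back to the CWD when __file__ is absent.
here = os.path.dirname(os.path.abspath(__file__)) if "__file__" in globals() else os.getcwd()
repo_root = os.path.abspath(os.path.join(here, "..", ".."))
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)  # makes `import config` resolve from the root
```

Alternatively, running from the repo root with `python -m src.run.predict` (or `PYTHONPATH=.`) achieves the same thing without editing the script.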
Creating train_blocks takes a lot of disk space. Why do you use train_blocks in particular?
When I run src/run/predict.py, I get the output and error below. Thanks.
folders= 1
dirs= data/example
dirs= ['image00354', 'image00357', 'image00351', 'image00355', 'image00359', 'image00350', 'image00352']
7dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
dirs= []
7 data added
saving data path list
data path list saved
get data 7
11 7
Traceback (most recent call last):
File "src/run/predict.py", line 246, in <module>
evaluator.evaluate_example(predictor_1)
File "src/run/predict.py", line 223, in evaluate_example
out = predictor.model({'img': image}, 'predict')
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "./src/model/SADRNv2.py", line 98, in forward
x = self.block1(x)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "./src/model/modules.py", line 540, in forward
out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3
When I run src/run/prepare_dataset.py, I encounter an error: FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'
skip data/dataset/300W_LP_crop/HELEN_Flip/HELEN_173153923_2_12
skip data/dataset/300W_LP_crop/HELEN_Flip/HELEN_2236814888_2_3
skip data/dataset/300W_LP_crop/IBUG_Flip/IBUG_image_018_5
skip data/dataset/300W_LP_crop/LFPW/LFPW_image_train_0328_6
skip data/dataset/300W_LP_crop/LFPW_Flip/LFPW_image_train_0380_3
skip data/dataset/300W_LP_crop/landmarks/AFW
skip data/dataset/300W_LP_crop/landmarks/HELEN
skip data/dataset/300W_LP_crop/landmarks/IBUG
skip data/dataset/300W_LP_crop/landmarks/LFPW
0 data added
saving data path list
data path list saved
0 data added
saving data path list
Traceback (most recent call last):
File "src/run/prepare_dataset.py", line 19, in <module>
train_dataset = make_dataset(TRAIN_DIR, 'train')
File "./src/dataset/dataloader.py", line 217, in make_dataset
raw_dataset.add_image_data(folder, mode)
File "./src/dataset/dataloader.py", line 116, in add_image_data
self.save_image_data_paths(all_data, data_dir)
File "./src/dataset/dataloader.py", line 120, in save_image_data_paths
ft = open(f'{data_dir}/all_image_data.pkl', 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset/Extra_LP/all_image_data.pkl'
worker: 7 end 236/250 data/packs/AFLW2000/image00451.jpg
worker: 3 end 230/15307 data/packs/300W_LP/HELEN/HELEN_2618147986_1_3.jpg
worker: 5 end 236/15307 data/packs/300W_LP/LFPW/LFPW_image_train_0304_13.jpg
worker: 1 end 242/15307 data/packs/300W_LP/IBUG_Flip/IBUG_image_030_3.jpg
worker: 4 end 245/250 data/packs/AFLW2000/image02453.jpg
worker: 0 end 239/15307 data/packs/300W_LP/HELEN/HELEN_248684423_1_12.jpg
worker: 6 end 235/15301 data/packs/300W_LP/HELEN_Flip/HELEN_2419679570_1_5.jpg
worker: 2 end 245/15307 data/packs/300W_LP/HELEN/HELEN_2345048760_1_10.jpg
worker: 6 end 15168/15307 data/packs/300W_LP/LFPW/LFPW_image_train_0476_15.jpg
worker: 0 end 15281/15301 data/packs/300W_LP/HELEN_Flip/HELEN_2466594504_1_9.jpg
worker: 5 end 15270/15307 data/packs/300W_LP/HELEN/HELEN_2882149940_1_6.jpg
worker: 1 end 15287/15301 data/packs/300W_LP/HELEN_Flip/HELEN_3026147764_1_11.jpg
worker: 7 end 15272/15307 data/packs/300W_LP/HELEN/HELEN_111835766_1_16.jpg
worker: 2 end 15259/15307 data/packs/300W_LP/LFPW/LFPW_image_train_0741_0.jpg
worker: 3 end 15270/15307 data/packs/300W_LP/LFPW/LFPW_image_train_0051_4.jpg
worker: 4 end 15306/15307 data/packs/300W_LP/LFPW/LFPW_image_train_0382_5.jpg
My question is: what is Extra_LP? Is it a dataset? It is not mentioned in the README.md or the paper.
And where should I get the Extra_LP/all_image_data.pkl?
Another problem: after I run src/run/prepare_dataset.py, I find that SADRNet/data/dataset/train_blocks is generated but empty. Is that correct, or have I missed something?
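For what it's worth, `open(..., 'wb')` raises exactly this `FileNotFoundError` when the target directory itself does not exist. A hedged workaround (a sketch, assuming you have no actual Extra_LP data and only need the save step not to crash; the function name mirrors the one in the traceback, but the body here is a simplification):

```python
import os
import pickle
import tempfile

def save_image_data_paths(all_data, data_dir):
    # open(..., 'wb') fails with FileNotFoundError if data_dir is missing,
    # so create the directory (and any parents) first.
    os.makedirs(data_dir, exist_ok=True)
    with open(os.path.join(data_dir, "all_image_data.pkl"), "wb") as ft:
        pickle.dump(all_data, ft)

# Usage sketch against a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "Extra_LP")   # does not exist yet
    save_image_data_paths(["a/path.jpg"], target)
```

This only silences the crash; it does not answer where the Extra_LP data itself comes from.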
Hi,
Is there a demonstration code available for running videos or webcam feeds?
I tried to modify the predict.py script but have been running into errors for a couple of days.
Thank you.
File "D:\anaconda3\envs\mypytorch\lib\site-packages\torch\nn\modules\module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "G:\Ubuntu\pyproject\3Dface_commpare\SADRNet-main\src\model\modules.py", line 540, in forward
out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3
In SADRNet-main\src\model\SADRNv2.py, class SADRNv2, the input of layer0 has size [1, 3, 256, 256] and its output has size [1, 16, 255, 255], so the feature-map sizes no longer match further down the network.
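The 256 → 255 drop is consistent with the standard convolution output-size formula, out = floor((in + 2·pad − kernel) / stride) + 1. A quick sketch (the kernel/padding combinations below are hypothetical, chosen only to show which ones preserve size and which ones reproduce the mismatches reported in these issues):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a standard 2-D convolution along one axis."""
    return (size + 2 * pad - kernel) // stride + 1

# 'Same' padding keeps 256 -> 256 for an odd kernel:
assert conv_out(256, kernel=3, pad=1) == 256
# An even kernel with that same padding loses a pixel (256 -> 255),
# matching the [1, 16, 255, 255] output reported above:
assert conv_out(256, kernel=4, pad=1) == 255
# A stride-2 main path fed a 257-wide map rounds differently than a
# 1x1 stride-2 shortcut on a 256-wide map: 129 vs 128, as in the
# "tensor a (129) must match tensor b (128)" ResBlock error:
assert conv_out(257, kernel=3, stride=2, pad=1) == 129
assert conv_out(256, kernel=1, stride=2, pad=0) == 128
```

So checking each layer's kernel/stride/padding against this formula is a fast way to locate where the extra or missing pixel is introduced.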
Hello, thanks for publishing this great work.
I'm researching face alignment and face meshes and am wondering whether you've compared SADRNet to M3-LRN; which would you consider more accurate?
https://github.com/choyingw/M3-LRN
Also, have you considered training it with ASMNet as the backbone?
https://github.com/aliprf/ASMNet
Thanks for sharing this project. As mentioned in the paper
we choose the face region containing 19K points
I am interested in extracting the point cloud and using CPD for skin displacement. Can you point me to where in the code these point clouds can be extracted?
Thanks
POSMAP_FIX_RATE and OFFSET_FIX_RATE:
SADRNet/src/configs/config_SADRN_v2_eval.py, lines 3 to 4 in 8c07417
NME.rate:
Lines 67 to 69 in a5e6fac
Hello,
Has anyone gotten the predict task to run on standalone test images rather than a dataset? I tried to use some test images to test the program; however, it requests a uv_posmap_path for each test image (in dataloader.py). I am not sure how these uv_posmap files are generated. Please help.
Thanks in advance for your help.
When I run the python src/run/train_block_data.py
command, the following error is reported. How can I solve it?
note: UV_TRIANGLES_PATH = '../data/uv_data/uv_triangles.npy' exists.
Traceback (most recent call last):
File "src/run/train_block_data.py", line 343, in <module>
trainer = SADRNv2Trainer()
File "src/run/train_block_data.py", line 319, in __init__
super(SADRNTrainer, self).__init__()
File "src/run/train_block_data.py", line 122, in __init__
self.model = self.get_model()
File "src/run/train_block_data.py", line 325, in get_model
from src.model.SADRNv2 import get_model
File "/media/bangyanhe/disk/SADRNet-main/src/model/SADRNv2.py", line 2, in <module>
from src.model.loss import *
File "/media/bangyanhe/disk/SADRNet-main/src/model/loss.py", line 7, in <module>
from src.dataset.uv_face import face_mask_np, face_mask_fix_rate, foreface_ind, uv_kpt_ind, uv_edges, uv_triangles
ImportError: cannot import name 'uv_triangles'
Traceback (most recent call last):
File "src/run/predict.py", line 22, in <module>
from src.visualize.render_mesh import render_uvm  # render_face_orthographic,
File "/workspace/SADRNet/./src/visualize/render_mesh.py", line 16, in <module>
r = pyrender.OffscreenRenderer(CROPPED_IMAGE_SIZE, CROPPED_IMAGE_SIZE)
File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 31, in __init__
self._create()
File "/usr/local/lib/python3.8/dist-packages/pyrender/offscreen.py", line 134, in _create
self._platform.init_context()
File "/usr/local/lib/python3.8/dist-packages/pyrender/platforms/osmesa.py", line 19, in init_context
from OpenGL.osmesa import (
ImportError: cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa' (/usr/local/lib/python3.8/dist-packages/OpenGL/osmesa/__init__.py)
I did not find the data import and test code for the Florence dataset; could you please provide a copy?
Thank you very much for sharing. :)
Could you please update the paper link?
Hey guys
Thanks for sharing your work and releasing the code. However, you would officially need to add a license to allow people to use your code or build upon it for academic/industrial research.
You could add the Apache-2.0 License like mmediting (https://github.com/open-mmlab/mmediting), a BSD License like pix2pixHD (https://github.com/NVIDIA/pix2pixHD), or anything else.
Do you plan to add a license to your code?
I followed README.md to load the checkpoint "net_021.pth" provided by the author, but the process was not smooth.
So I am recording what I did to fix those problems.
Any discussion is welcome. Hope we can fight together!!!
Hello, thanks for your excellent work!
I tried to run predict, but I can't run it because "uv_triangles.npy" is not found.
How do you generate this file?
When I finish running prepare_dataset.py, there is no all_image_data.pkl file in 300W_LP_crop, but there is one in AFLW2000_crop. Then, when I run train_block_data.py, it returns an error: the train_blocks/ 131.pkl file cannot be found. Am I missing some steps?
Thanks for your great work!!!
During data augmentation, especially occlusion augmentation, I find that the whole face region in an image can end up occluded because of the randomness. I want to know how you handle these degenerate images. Do you just delete them, or something else?
Hope for your reply! Thanks again!
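One hedged way to avoid fully occluded faces is rejection sampling: redraw the occluder until it covers less than some fraction of the face box. This is a sketch, not the paper's actual augmentation; the box format, threshold, size ranges, and retry count are all assumptions:

```python
import random

def overlap_fraction(face, occ):
    """Fraction of the face box (x0, y0, x1, y1) covered by the occluder box."""
    w = max(0, min(face[2], occ[2]) - max(face[0], occ[0]))
    h = max(0, min(face[3], occ[3]) - max(face[1], occ[1]))
    face_area = (face[2] - face[0]) * (face[3] - face[1])
    return (w * h) / face_area

def sample_occluder(face, img_size=256, max_cover=0.5, tries=20, rng=random):
    """Draw random occluder boxes, rejecting any that hide too much of the face."""
    for _ in range(tries):
        x0 = rng.randrange(0, img_size // 2)
        y0 = rng.randrange(0, img_size // 2)
        occ = (x0, y0,
               x0 + rng.randrange(16, img_size // 2),
               y0 + rng.randrange(16, img_size // 2))
        if overlap_fraction(face, occ) < max_cover:
            return occ
    return None  # give up: caller can skip occlusion for this sample
```

Returning `None` after the retry budget lets the caller fall back to no occlusion instead of training on an unlabelable image.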
When I train the model from scratch, sometimes I encounter such an error:
$ python src/run/train_block_data.py
Namespace(visible_device='0')
True 4 0 GeForce GTX 1080
loading data path list
data path list loaded
2000 data added
Epoch: 0
[epoch:0,block:0/200, iter:39/38, time:41] loss: 4.39420 offset_uvm: 0.13145 face_uvm: 0.00000 kpt_uvm: 4.22837 attention_mask: 0.02870 edge: 0.00101 norm: 0.00467
val_offset_uvm: 0.05806 val_face_uvm: 0.83816 val_kpt_uvm: 1.05301 val_attention_mask: 0.39309 125
new best 2.3423 improved from 100000.0000
Epoch: 1
[epoch:1,block:0/200, iter:39/38, time:25] loss: 2.14687 offset_uvm: 0.13437 face_uvm: 0.00000 kpt_uvm: 1.98520 attention_mask: 0.02144 edge: 0.00104 norm: 0.00483
val_offset_uvm: 0.05771 val_face_uvm: 0.42581 val_kpt_uvm: 0.45449 val_attention_mask: 0.24950 125
new best 1.1875 improved from 2.3423
Epoch: 2
[epoch:2,block:0/200, iter:39/38, time:26] loss: 1.72705 offset_uvm: 0.13805 face_uvm: 0.00000 kpt_uvm: 1.56464 attention_mask: 0.01831 edge: 0.00106 norm: 0.00500
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
File "src/run/train_block_data.py", line 346, in <module>
trainer.train()
File "src/run/train_block_data.py", line 281, in train
'eval')
File "/home/heyuan/Environments/anaconda3/envs/SADRNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/heyuan/Research/3d_face/SADRNet/src/model/SADRNv2.py", line 128, in forward
face_uvm = self.rebuilder(offset_uvm, kpt_uvm,attention)
File "/home/heyuan/Environments/anaconda3/envs/SADRNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/heyuan/Research/3d_face/SADRNet/src/model/modules.py", line 374, in forward
[attention[i, 0, kpt_ind[i, :, 1] // 8, kpt_ind[i, :, 0] // 8] for i in range(B)])
File "/home/heyuan/Research/3d_face/SADRNet/src/model/modules.py", line 374, in <listcomp>
[attention[i, 0, kpt_ind[i, :, 1] // 8, kpt_ind[i, :, 0] // 8] for i in range(B)])
RuntimeError: CUDA error: device-side assert triggered
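The device-side assert points at out-of-range indexing: `kpt_ind // 8` can equal the attention map's size when a keypoint lands exactly on the image border (e.g. 256 // 8 = 32 on a 32-wide map). A hedged sketch of clamping before indexing, in plain Python; the map size and keypoint coordinates below are assumptions for illustration:

```python
def clamp_index(v, size):
    """Clamp an index into [0, size - 1] so border keypoints stay in range."""
    return max(0, min(v, size - 1))

attn_size = 32                          # assumed: 256-px image downsampled by 8
kpt = [(255, 255), (256, 12), (0, 0)]   # pixel coords; 256 would index out of range
safe = [(clamp_index(x // 8, attn_size), clamp_index(y // 8, attn_size))
        for x, y in kpt]
# 255 // 8 = 31 stays in range; 256 // 8 = 32 would trigger the assert, so it
# is clamped to 31.
```

In the PyTorch code itself, the analogous fix would be something like `(kpt_ind // 8).clamp_(0, attention.shape[-1] - 1)` before the fancy indexing; running once with CUDA_LAUNCH_BLOCKING=1 (or on CPU) also makes the failing index visible instead of the deferred CUDA assert.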
Does the code support running in a CUDA 11.3 environment? The code is built with PyTorch version 1.2.0, which only supports CUDA 10.0.