
mgcnet's Introduction

Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency (ECCV 2020)

This is the official Python implementation of MGCNet. The pre-print version is available at https://arxiv.org/abs/2007.12494.

Results

  1. Demo video and images.

  2. The full video can be seen on YouTube.

Running the code

1. Code + requirements + third-party libraries

We run the code with Python 3.7 and TensorFlow 1.13.

git clone --recursive https://github.com/jiaxiangshang/MGCNet.git
cd MGCNet
(sudo) pip install -r requirement.txt

(1) For the render loss (reconstruction loss), we use the differentiable renderer tf_mesh_renderer. Many issues come up here, so let us make this clearer: tf_mesh_renderer does not return the triangle id of each pixel after rasterization, so we implement this ourselves and add these changes as a submodule of MGCNet.

(2) To compile tf_mesh_renderer, our setting is bazel==10.1 and gcc==5.*; the compile command is

bazel build ...

A gcc/g++ version higher than 5.* will cause problems; a good solution is a virtual environment with gcc around 5.5. If your gcc/g++ version is 4.*, you can try changing the compile flags in the BUILD file: use -D_GLIBCXX_USE_CXX11_ABI=0 for gcc 4.* and -D_GLIBCXX_USE_CXX11_ABI=1 for gcc 5.*.
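For reference, the build steps look roughly like this (a sketch assuming the submodule sits under thirdParty/tf_mesh_renderer as cloned above; the --copt flag is only needed if you prefer not to edit the BUILD file):

cd thirdParty/tf_mesh_renderer
gcc --version   # should report a 5.x compiler
bazel build ...
# with gcc 4.*, the ABI flag can also be passed on the command line:
bazel build --copt=-D_GLIBCXX_USE_CXX11_ABI=0 ...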

2. Model

  1. 3DMM model + network weights

    We combine BFM09, the BFM09 expression basis, the BFM09 face region from Deng Yu, and the BFM09 UV coordinates from 3DMMasSTN into a single 3DMM model: https://drive.google.com/file/d/1RkTgcSGNs2VglHriDnyr6ZS5pbnZrUnV/view?usp=sharing. Extract this file to /MGCNet/model. (A small sketch for inspecting the merged model file follows item 2 below.)

  2. Pretrained models

    This includes the pretrained model for ResNet50 and the VGG pretrained model for FaceNet. Extract this file to /MGCNet/pretain.
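The merged 3DMM ships as an HDF5 file (the loader under src_common/geometry/gpmm reads entries such as 'mean', 'pcaBasis' and 'pcaVariance', as seen in the logs quoted later on this page). A minimal sketch for listing its contents; the file name below is only a placeholder, since the exact name inside /MGCNet/model depends on the download:

import h5py

# Hedged sketch: list the groups/datasets in the merged 3DMM file (placeholder file name).
with h5py.File('model/merged_3dmm.h5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))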

3. Data

  1. data demo: https://drive.google.com/file/d/1Du3iRO0GNncZsbK4K5sboSeCUv0-SnRV/view?usp=sharing

    Extract this file to /MGCNet/data. We cannot provide all the data, as it is too large and the license of the Multi-PIE dataset does not allow us to redistribute it.

  2. data: landmark ground truth

    The detection method is from https://github.com/1adrianb/2D-and-3D-face-alignment, and we use the SFD face detector (a usage sketch is given at the end of this section).

  3. data: skin probability

    We obtained this part of the code from Yu Deng ([email protected]); you may ask him for help. (A crude stand-in for generating a skin mask is sketched at the end of this section.)
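For item 2 above (landmark ground truth), a minimal sketch of running the 2D landmark detector with the SFD face detector via the face_alignment package from the linked repository (the LandmarksType enum name varies slightly between package versions):

import cv2
import face_alignment

# Hedged sketch: 68-point 2D landmark detection with the SFD face detector.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, face_detector='sfd')
image_bgr = cv2.imread('example.jpg')      # placeholder image path
image_rgb = image_bgr[:, :, ::-1]          # OpenCV loads BGR; the detector expects RGB
landmarks = fa.get_landmarks(image_rgb)    # list with one (68, 2) array per detected face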
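For item 3 above (skin probability), the original skin-probability code is not bundled with this repository. Purely as a crude stand-in for experimentation, and explicitly not the method used in the paper, a simple color-space threshold can produce a rough skin mask:

import cv2
import numpy as np

# Hedged placeholder: rough skin mask via an HSV threshold; not the paper's skin probability.
image_bgr = cv2.imread('example.jpg')              # placeholder image path
hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
lower = np.array([0, 30, 60], dtype=np.uint8)      # loose, heuristic skin-tone bounds
upper = np.array([25, 180, 255], dtype=np.uint8)
skin_mask = cv2.inRange(hsv, lower, upper)         # 255 where the pixel looks skin-like
cv2.imwrite('example_skin.jpg', skin_mask)         # '..._skin.jpg' naming used by the training data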

4. Testing

  1. test_image.py This is used to run inference on a single unprocessed image (the command is given in the file). This script can also render images (geometry, texture, shading, multi-pose), as shown above or in our paper (see the code), which makes visualization and comparison more convenient.

  2. Preprocessing. All the preprocessing is included in 'test_image.py'; we outline it here. (1) Face detection and face alignment are packaged in ./tools/preprocess/detect_landmark.py. (2) Face alignment uses an affine transformation to warp the unprocessed image. Testing all the images in a folder can follow the same preprocessing (see the alignment sketch below).
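For the affine-warp step above, a generic sketch of aligning a face by a similarity transform estimated from a few detected landmarks (this is not the repository's exact implementation; the function name, template and output size are illustrative):

import cv2
import numpy as np

# Hedged sketch: warp a face so that its detected landmarks match a fixed template.
def align_face(image, landmarks, template, out_size=(224, 224)):
    # landmarks, template: (K, 2) arrays of source and target landmark positions.
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(landmarks, dtype=np.float32),
        np.asarray(template, dtype=np.float32))
    return cv2.warpAffine(image, matrix, out_size)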

5. Training

  1. train_unsupervise.py

Useful tools (keep updating)

  1. Face alignment tools.
  2. 3D face rendering tools.
  3. Camera augmentation for rendering.

Citation

If you use this code, please consider citing:

@article{shang2020self,
  title={Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency},
  author={Shang, Jiaxiang and Shen, Tianwei and Li, Shiwei and Zhou, Lei and Zhen, Mingmin and Fang, Tian and Quan, Long},
  journal={arXiv preprint arXiv:2007.12494},
  year={2020}
}

Contacts

Please contact [email protected] or open an issue for any questions or suggestions.

Acknowledgements

Thanks for the help from the recent 3D face reconstruction works Deep3DFaceReconstruction, 3DMMasSTN, PRNet, RingNet, 3DDFA and the single-view depth estimation work DeepMatchVO. I would like to thank Tewari for providing the comparison results.

mgcnet's People

Contributors

jiaxiangshang

mgcnet's Issues

How can I get the skin file of an image?

I'm trying to use a custom dataset to train the model using train_unsupervise.py.

However, it seems that it needs a skin image (named as ..._skin.jpg).

How can I get those skin images from plain images?

Colab demo, please...

Dear @jiaxiangshang,
Thank you for your great work.

I don't know how to set up the environment.
Could you put together a Colab notebook so that the code can be tried there?
Is that possible?

bazel build problem

First, I used 'conda install bazel==0.11.1' to install bazel 0.11.1. The default gcc of Ubuntu 18.04.2 is 7.5.0. However, when I ran 'bazel build ...', I got these errors:
[screenshot]
It seems that bazel cannot find the gcc executable?

Then I tried 'conda install gcc==4.8.5' to install gcc 4.8.5. But after installing gcc 4.8.5, I cannot import tensorflow. Here are the errors:
[screenshot]

I really want to know how to install bazel and gcc. Maybe I should not use conda to install them? Thanks for your help!

About eval on BU-3DFE

Hi,

  1. In your paper, before evaluating on the BU-3DFE dataset, a dense correspondence map should be pre-computed first. Which method should I use to compute this map?
  2. Which data should I use, F3D.wrl or RAW.wrl?
  3. Are the input images F2D.bmp, or are they rendered from F3D.wrl?

Thanks!

About evaluating on AFLW2000-3D

hi,
Your method needs to align the input according to 5 3D landmarks before inference. When evaluating on AFLW2000-3D, which landmark detector do you use, or do you just use the 5 ground-truth landmarks?
Thanks.

Preprocess on LS3D-W

Hi! I downloaded the LS3D-W dataset and ran into some problems. Could you please give me some advice? Thanks very much!
(1) As we can see in the demo dataset, you concatenate three images of one identity to generate a training sample. Should I always concatenate three images of one identity, or can I concatenate three images of different identities?
[screenshot]

(2) When preprocessing the CelebA and 300W-LP datasets, I can get the identity information from "identity_CelebA.txt" or from the image names. However, when I preprocess the LS3D-W dataset, I have no way to get the identity information of the images. So how do you generate the training samples of three images per identity for the LS3D-W dataset, especially for the "300W-Testset-3D", "AFLW2000-3D-Reannotated" and "Menpo-3D" folders?
[screenshot]

Is the provided pretrained weight not from a model trained on the MultiPIE dataset?

Hi!

I'm doing a personal project on top of your work with the provided pretrained weights
and I really thank you for sharing your work.

I have tested test_image.py on images from the MultiPIE dataset,
and the results are sometimes good, but sometimes bad.
Here's a bad example.
[screenshot]

Results for the left and right images are generally good, but this problem is often observed for frontal images.
I was wondering whether the pretrained weights are not from a model that was trained on the MultiPIE dataset.

I am planning to fine-tune the weights with the MultiPIE dataset, but just needed to check whether this would help.

Thank you.

bazel build problem

I used bazel=0.10.0 and gcc=7.5 to build mesh_renderer, but it cannot generate rasterize_triangles_kernel.so. Thanks.
[screenshot]

Question about the initial value of the pose variables

Hi:
I have read your source code; there are three pose variables: defined_pose_main, defined_pose_left and defined_pose_right. Could you tell me how their initial values were calculated? If I want to test with my own data, how should I set the initial values? Thanks!
[screenshots: 2020-10-13 17-27-18, 2020-10-13 17-27-29]

bazel build problem

Thanks for your excellent work. I am trying to show some comparisons with your method, but I am stuck at the mesh_renderer installation step... Have you seen this problem before?
[screenshot]

Preprocess on CelebA

Hi! Good morning!
As we can see in the demo data, you selected three images from the same identity to generate the training samples.
(1) I want to know the strategy you used to select images. Do you just randomly select three images from the same identity?
(2) What's more, the CelebA dataset provides ground-truth landmarks. To construct xxxx_cam.txt, I need the landmark positions of all three images. Should I use the ground-truth landmarks from the CelebA dataset, or the predicted landmarks from the "face_alignment" detector? I think we should use the predicted landmarks, because the ECCV paper says "The images are automatically annotated by the 2D landmark detection method in [8] and the face detection method in [65]".
(3) Thanks very much for your answers!

[screenshot]

Error in build tf_mesh_renderer

I followed the instructions to build tf_mesh_renderer.
I used bazel==4.1.0 downloaded from the official GitHub.
I used gcc==5.4.0.
I used git submodule to fetch the tf_mesh_renderer code and ran bazel build ...
and got the following result:
Starting local Bazel server and connecting to it...
INFO: Analyzed 10 targets (28 packages loaded, 336 targets configured).
INFO: Found 10 targets...
INFO: Elapsed time: 9.038s, Critical Path: 4.72s
INFO: 19 processes: 12 internal, 7 linux-sandbox.
INFO: Build completed successfully, 19 total actions

However, the build produced rasterize_triangles_kernel.so under tf_mesh_renderer/bazel-bin/mesh_renderer/kernels/ instead of tf_mesh_renderer/mesh_renderer/kernels/ as expected by the code.

The error is like:
tensorflow.python.framework.errors_impl.NotFoundError: tf_mesh_renderer/bazel-bin/mesh_renderer/kernels/rasterize_triangles_kernel.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrESs

Could you please help me out here?

Question about the filtering of training samples

Hello!
I notice that you say in the paper that:

"We filter the face pose, face attribution, low-resolution images, and blurred images and obtain ∼390K face images"

I want to know more details about the filtering procedure. For example, how do you filter images according to face attributes and face pose? And how do you identify blurred images? I ask because I generated about 420K samples, but some of them are outliers, such as:
[screenshot]

Thanks very much for your reply!

How do you organize the data of multi-view datasets?

Hi, for the 300W-LP and Multi-PIE datasets, the face images of a person have different yaw angles, i.e. [0°, 90°] for 300W-LP and [-90°, 90°] for Multi-PIE.

  1. How do you select three view images to construct a data item? For example, must the angles be less than a threshold?
  2. There are several combinations of the same person at different angles. What strategy did you use to select the data for one person, and how many data items do you construct per person?

NameError: name 'rasterize_triangles' is not defined

File "/home/wjj/MGCNet/src_common/geometry/render/api_tf_mesh_render.py", line 97,uses function "rasterize_triangles".
But tf_mesh_renderer/mesh_renderer/rasterize_triangles.py does not have
"def rasterize_triangles() ", only has rasterize().
When I rename "rasterize()" to rasterize_triangles() in
tf_mesh_renderer/mesh_renderer/rasterize_triangles.py, there's an error in the last function parameter (,[-1] * vertex_attributes.shape[2].value), "TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn."
Does anyone meet this issue?

Question about tf_mesh_renderer.

Hi, thank you for your work.

I already read your README, and you said: "I find many issue happens here, so let's make this more clear.
The tf_mesh_render does not return triangle id for each pixel after rasterise, we do this by our self and add these changes as submodule to mgcnet.".

I have one question about that: did you use different code instead of tf_mesh_renderer?

If so, can you point me to where that changed code is?

Thanks.

cannot open shared object file

2021-03-26 16:52:41.274832: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/research/cbim/vast
SVG path loading unavailable!
Traceback (most recent call last):
File "/research/cbim/vast/fw147/anaconda3/lib/python3.8/site-packages/trimesh/path/exchange/svg_io.py", line 18, in
from svg.path import parse_path
ModuleNotFoundError: No module named 'svg'
/research/cbim/vast/fw147/MGCNet
Traceback (most recent call last):
File "init.py", line 4, in
from train_unsupervise import *
File "/research/cbim/vast/fw147/MGCNet/train_unsupervise.py", line 20, in
from src_tfGraph.build_graph import MGC_TRAIN
File "/research/cbim/vast/fw147/MGCNet/src_tfGraph/build_graph.py", line 18, in
from .deep_3dmm_decoder import *
File "/research/cbim/vast/fw147/MGCNet/src_tfGraph/deep_3dmm_decoder.py", line 17, in
from src_common.geometry.render.api_tf_mesh_render import mesh_depthmap_camera, mesh_renderer_camera, mesh_renderer_camera_light,
File "/research/cbim/vast/fw147/MGCNet/src_common/geometry/render/api_tf_mesh_render.py", line 14, in
from thirdParty.tf_mesh_renderer.mesh_renderer.mesh_renderer import phong_shader, tone_mapper
File "/research/cbim/vast/fw147/MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/mesh_renderer.py", line 24, in
import thirdParty.tf_mesh_renderer.mesh_renderer.rasterize_triangles
File "/research/cbim/vast/fw147/MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/rasterize_triangles.py", line 33, in
rasterize_triangles_module = tf.load_op_library(
File "/research/cbim/vast/fw147/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 57, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /research/cbim/vast/fw147/MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/kernels/rasterize_triangles_kernel.so: cannot open shared object file: No such file or directory
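For what it is worth, this NotFoundError means rasterize_triangles_kernel.so is missing from the path in the traceback, i.e. the kernel has not been built and copied there (see the tf_mesh_renderer build notes near the top of this page). A quick hedged check that a built kernel actually loads:

import tensorflow as tf

# Hedged sketch: try to load the compiled rasterizer op from the path used in the traceback.
kernel_path = 'thirdParty/tf_mesh_renderer/mesh_renderer/kernels/rasterize_triangles_kernel.so'
module = tf.load_op_library(kernel_path)
print('loaded ops:', [name for name in dir(module) if not name.startswith('_')])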

How to acquire the Multi-PIE dataset?

Hi, I am trying to find a way to acquire the Multi-PIE dataset. The home page of the Multi-PIE dataset says "Details on how to obtain the database are described here", but that link points to a platform, and I still have no way to acquire the dataset. The page is shown below.
[screenshot]

I want to know where I can acquire the Multi-PIE dataset. Thanks for your help.

the testing code does not work

Hi, I used the testing code in test_unsupervise.py, but some errors occurred. Is the testing code unfinished?
Thank you!

Test on video

Hi. The "test_image.py" is used to inference a single unprocessed image. And I have a question: How to inference on a video directly? Since I want to get more frame consistency. If I inference these frames individually, there may have some jitter, and I want to know how to address this problem?

Questions about the pose and projection matrix in the code

Hi, sorry to disturb you. I have several questions about the pose and projection matrix code.

1. Why do we need to add defined_pose_main to pred_pose_render?
[screenshot]

2. What do the variables "list_gpmm_mv" and "list_gpmm_eye" do? I found that they are calculated from the rotation matrix and translation vector and used in mesh_renderer_camera_light(), but I don't know their meaning.
[screenshot]

3. In project3d_batch(), why do the projected landmarks need to be divided by 'pt_batch_homo_2d_w'? (See the generic projection sketch below.)
[screenshot]
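For context on question 3, dividing by the homogeneous w component is the standard perspective-division step of a pinhole projection. A generic sketch, not the repository's project3d_batch implementation:

import numpy as np

# Hedged sketch: project 3D points with a 3x4 camera matrix, then apply perspective division.
def project_points(points_3d, camera_matrix_3x4):
    num_points = points_3d.shape[0]
    points_homo = np.concatenate([points_3d, np.ones((num_points, 1))], axis=1)  # (N, 4)
    projected = points_homo @ camera_matrix_3x4.T                                # (N, 3); last column is w
    return projected[:, :2] / projected[:, 2:3]                                  # divide x, y by w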

Run test_image.py error

Can you help me? Thank you~
There is an error in step 4:

  1. ./runtests.sh # -D_GLIBCXX_USE_CXX11_ABI=0 to -D_GLIBCXX_USE_CXX11_ABI=1
  2. bazel build ...
  3. copy MGCNet/tf_mesh_renderer/bazel-out/k8-fastbuild/bin/mesh_renderer/kernels/rasterize_triangles_kernel.so
    to MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/kernels
  4. python test_images.py

NOTE: tensorflow-gpu1.13.1 ubuntu18.04

ERROR:
Connected to pydev debugger (build 201.7223.92)
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/george/pycharm/face_3d/MGCNet
Finish copy
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:54: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
pt_mean = self.hdf5io_pt_model.GetValue('mean').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:58: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
pt_pcaBasis = self.hdf5io_pt_model.GetValue('pcaBasis').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:61: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
pt_pcaVariance = self.hdf5io_pt_model.GetValue('pcaVariance').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:76: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
rgb_mean = self.hdf5io_rgb_model.GetValue('mean').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:80: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
rgb_pcaBasis = self.hdf5io_rgb_model.GetValue('pcaBasis').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:83: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
rgb_pcaVariance = self.hdf5io_rgb_model.GetValue('pcaVariance').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:89: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
uv = self.hdf5io_rgb_model.GetValue('uv').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:133: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
exp_pcaBasis = self.hdf5io_exp_model.GetValue('pcaBasis').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:136: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
exp_pcaVariance = self.hdf5io_exp_model.GetValue('pcaVariance').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:145: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
mesh_tri_reference = self.hdf5io_pt_representer.GetValue('tri').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:153: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
idx_subTopo = self.hdf5io_pt_representer.GetValue('idx_sub').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:158: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
self.nplist_v_ring_f_flat_np = self.hdf5io_pt_representer.GetValue('vertex_ring_face_flat').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:160: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
self.nplist_ver_ref_face_num = self.hdf5io_pt_representer.GetValue('vertex_ring_face_num').value
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:236: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(list_v_ring_f), np.array(list_v_ring_f_index)
/home/george/pycharm/face_3d/MGCNet/src_common/geometry/gpmm/bfm09_tf_uv.py:167: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
idx_lm68_np = self.hdf5io_pt_representer.GetValue('idx_lm68').value
WARNING:tensorflow:From /home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/george/pycharm/face_3d/MGCNet/src_common/geometry/render/lighting.py:151: calling norm (from tensorflow.python.ops.linalg_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /home/george/pycharm/face_3d/MGCNet/src_common/geometry/render/api_tf_mesh_render.py:266: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
WARNING:tensorflow:From /home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Traceback (most recent call last):
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: RasterizeTriangles expects vertices to have shape (-1, 4).
[[{{node RasterizeTriangles_12}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/george/.conda/envs/cuda10/lib/python3.7/contextlib.py", line 130, in exit
self.gen.throw(type, value, traceback)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 5253, in get_controller
yield g
File "/home/george/pycharm/face_3d/MGCNet/mytest_image.py", line 131, in
pred = system.inference(sess, image_rgb_b)
File "/home/george/pycharm/face_3d/MGCNet/src_tfGraph/build_graph.py", line 1103, in inference
results = sess.run(fetches, feed_dict={'pl_input:0':inputs})
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: RasterizeTriangles expects vertices to have shape (-1, 4).
[[node RasterizeTriangles_12 (defined at :223) ]]

Caused by op 'RasterizeTriangles_12', defined at:
File "/opt/pycharm-2020.1.1/plugins/python/helpers/pydev/pydevd.py", line 2131, in
main()
File "/opt/pycharm-2020.1.1/plugins/python/helpers/pydev/pydevd.py", line 2122, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/opt/pycharm-2020.1.1/plugins/python/helpers/pydev/pydevd.py", line 1431, in run
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
File "/opt/pycharm-2020.1.1/plugins/python/helpers/pydev/pydevd.py", line 1438, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/opt/pycharm-2020.1.1/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/george/pycharm/face_3d/MGCNet/mytest_image.py", line 88, in
FLAGS, img_height=FLAGS.img_height, img_width=FLAGS.img_width, batch_size=FLAGS.batch_size
File "/home/george/pycharm/face_3d/MGCNet/src_tfGraph/build_graph.py", line 949, in build_test_graph
self.build_testVisual_graph()
File "/home/george/pycharm/face_3d/MGCNet/src_tfGraph/build_graph.py", line 1059, in build_testVisual_graph
self.gpmm_frustrum, gpmm_tar_mv, gpmm_tar_eye, fore=opt.flag_fore, tone=False
File "/home/george/pycharm/face_3d/MGCNet/src_tfGraph/deep_3dmm_decoder.py", line 267, in decoder_renderColorMesh
mtx_perspect_frustrum, list_mtx_model_view[i], list_cam_position[i], tone
File "/home/george/pycharm/face_3d/MGCNet/src_tfGraph/deep_3dmm_decoder.py", line 459, in gpmm_render_image
mtx_perspect_frustrum, cam_position, opt.img_width, opt.img_height)
File "/home/george/pycharm/face_3d/MGCNet/src_common/geometry/render/api_tf_mesh_render.py", line 99, in mesh_renderer_camera_light
image_width, image_height, [-1] * vertex_attributes.shape[2].value)
File "/home/george/pycharm/face_3d/MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/rasterize_triangles.py", line 125, in rasterize_triangles
image_height))
File "", line 223, in rasterize_triangles
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/george/.conda/envs/cuda10/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): RasterizeTriangles expects vertices to have shape (-1, 4).
[[node RasterizeTriangles_12 (defined at :223) ]]

Process finished with exit code 1

Question about the evaluation of 3D face reconstruction

Hi, Thanks for your great work and open source code!
Sorry to disturb you. I have trouble with how to preprocess the 'Florence' dataset and calculate the point-to-plane RMSE. I have made some effort but still cannot work it out. I would be very grateful if you could share this part of the code with me. Looking forward to hearing from you. Thank you!
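For reference, a generic sketch of a point-to-plane RMSE between predicted mesh vertices and a ground-truth scan, assuming the two are already rigidly aligned and cropped to the face region (this is not the authors' evaluation script; their ICP alignment and cropping steps are omitted):

import numpy as np
from scipy.spatial import cKDTree

# Hedged sketch: point-to-plane RMSE from predicted vertices to a ground-truth scan.
def point_to_plane_rmse(pred_vertices, gt_vertices, gt_normals):
    tree = cKDTree(gt_vertices)                    # nearest ground-truth point for each prediction
    _, idx = tree.query(pred_vertices)
    diff = pred_vertices - gt_vertices[idx]        # vector to the closest scan point
    dist = np.sum(diff * gt_normals[idx], axis=1)  # signed distance along the scan normal
    return np.sqrt(np.mean(dist ** 2))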

Why there is a "ssim loss" in the loss funtion?

Hi!
Good morning! I read the code of "build_graph.py" and found that there is an "ssim loss" in the loss function of MGCNet. However, I cannot find any description of the "ssim loss" in the ECCV paper.
Maybe the "ssim loss" is an additional loss?
[screenshot]
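For context, SSIM is commonly used as a structural photometric term next to an L1/pixel loss in self-supervised reconstruction. A generic TensorFlow 1.x sketch of such a term, not necessarily the exact formulation in build_graph.py:

import tensorflow as tf

# Hedged sketch: generic SSIM dissimilarity loss between rendered and target images in [0, 1].
def ssim_loss(img_pred, img_true, max_val=1.0):
    ssim = tf.image.ssim(img_pred, img_true, max_val=max_val)  # per-image SSIM in [-1, 1]
    return tf.reduce_mean((1.0 - ssim) / 2.0)                   # 0 when identical, up to 1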

Run test_image.py GPU memory error

When I run test_image.py, there is an error as follows:

[screenshot]

How much GPU memory is needed for this project?

Thank you!

GPU info:
GTX2080, GPU Memory 8G

More questions about the evaluation of 3D face reconstruction

Hi Thanks for sharing your work!
Really appreciate it!
I went through the issues in #16 but still have some questions.

When you use the MICC dataset video, which frames did you use?
I watched the videos and found that they contain frames that don't capture the whole face because they are too zoomed in.
I think the face detection in this case would fail and not be able to generate any reconstruction.
So then I wonder which frames you used.

Did you not need to specify them because you have the testing code of all other works that you compared with,
and ran them with the same frames?

Thanks!

Question about the "Average and standard deviation" of RMSE on MICC Florence

Hi!
I notice that only Table 1(b) provides the average and standard deviation of the metrics.
I want to know how to get the average and standard deviation. I can think of two ways:
(1) Train MGCNet many times, and evaluate every trained model to get the average and standard deviation of the RMSE.
(2) MICC has 53 subjects, so we can calculate the average RMSE over all 53 subjects and the standard deviation across them.

Could you tell me which way is used in your paper? Thanks very much.

[screenshot]

Questions about pose of dataset

Hi. Thanks for the great work!

I have a few questions about the dataset.

  1. Is any dataset other than 'Multi-Pie' and '300W-LP' used for training?
  2. How do you get ground-truth pose data (camera extrinsics, rotation/translation) for the '300W-LP' dataset? Aren't those in-the-wild images?
  • I have not downloaded the 'Multi-Pie' dataset, but I think that dataset might come with ground-truth pose data.

Thank you.

Best regards,
YJHong.

How fast does your model render a face?

I use some pre-trained 3DMM models to reconstruct face images, but the speed is too low. I want to know how much time your model needs to render a face image. Thanks!

Reproducing results and some questions about the weights of the different losses

Hi! Good morning!
I tried to use three datasets (CelebA, 300W-LP and the LS3D-W balanced subset) to train MGCNet with your code, but I got a much worse result compared with the pre-trained model, as shown below:
The results produced by the pre-trained model are wonderful.
[screenshot]

But the results from my reproduced model are poor.
[screenshot]

I followed the training command in "train_unsupervise.py" without changing any parameter except setting batchsize=1. The command is as follows:
[screenshot]

I think there were some problems in my reproduction:
(1) The number of steps is 20K, which is much smaller than the 400K in the paper.
(2) I did not get the Multi-PIE dataset, so the result should be worse than the pre-trained model.
(3) The epipolar_weight was set to 0.00 in "train_unsupervise.py"; I think this is a mistake, since the paper sets this weight to 0.001.
(4) I set the batch size to 1 because of limited GPU resources.
(5) Can the code support training with multiple GPUs?

Now I am trying to train again with the same hyper-parameters as in the paper.
Thanks for your answers!

KeyError: 'TEST_SRCDIR'

Traceback (most recent call last):
File "/root/MGCNet/thirdParty/tf_mesh_renderer/mesh_renderer/rasterize_triangles.py", line 26, in
os.path.join(os.environ['TEST_SRCDIR'],
File "/opt/conda/lib/python3.6/os.py", line 669, in getitem
raise KeyError(key) from None
KeyError: 'TEST_SRCDIR'
Does the 'TEST_SRCDIR' environment variable need to be set by me? How do I do that?

Training data preprocess

Hi!
I ran the training code and it told me that
"FileNotFoundError: [Errno 2] No such file or directory: './data/eccv2020_MGCNet_data/train.txt'"

I do not know how to generate "train.txt".
Could you please share the code that generates "train.txt" for the four datasets (300W-LP, Multi-PIE, CelebA, LS3D)?
Thank you very much!

A training error: Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis...

Hi, at the beginning of training, the following errors occur:

2020-10-20 16:10:17.143274: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1528. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143391: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1542. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143422: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1556. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143449: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1570. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143473: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1584. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143637: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1604. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143663: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1618. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143687: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1632. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143712: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1646. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.143735: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_1660. Error: ValidateStridedSliceOp returned partial shapes [1,?,3] and [?,3]
2020-10-20 16:10:17.161784: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2167. Error: Node strided_slice_2167 requested invalid slice index 2 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.161862: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2170. Error: Node strided_slice_2170 requested invalid slice index 3 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.161909: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2173. Error: Node strided_slice_2173 requested invalid slice index 4 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.161953: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2176. Error: Node strided_slice_2176 requested invalid slice index 5 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.161992: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2179. Error: Node strided_slice_2179 requested invalid slice index 6 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162030: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2182. Error: Node strided_slice_2182 requested invalid slice index 7 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162091: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2185. Error: Node strided_slice_2185 requested invalid slice index 8 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162142: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2188. Error: Node strided_slice_2188 requested invalid slice index 9 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162197: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2191. Error: Node strided_slice_2191 requested invalid slice index 10 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162246: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2194. Error: Node strided_slice_2194 requested invalid slice index 11 on axis 1 from tensor of shape: [5,68,2]
2020-10-20 16:10:17.162295: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_2197. Error: Node strided_slice_2197 requested invalid slice index 12 on axis 1 from tensor of shape: [5,68,2]
........

But the model can still be trained after these errors occur.

.......

2020-10-20 16:10:17.839343: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_4312. Error: ValidateStridedSliceOp returned partial shapes [?,1,2] and [?,2]
2020-10-20 16:10:17.839375: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_4315. Error: ValidateStridedSliceOp returned partial shapes [?,1,2] and [?,2]
Epoch  1:   100/  100 (time: 8.7066), Step 100:
total: [0.9219]
ga/pixel/ssim/depth/epipolar loss: [0.0000/0.0000/0.0000/0.0000/0.0000]
(weight)ga/pixel/ssim/depth/epipolar loss: [0.0000/0.0000/0.0000/0.0000/0.0000]
mm_pixel/mm_lm/mm_id/mm_reg_s/mm_reg_c loss: [0.2579/241.8536/0.9496/1.2511/0.0017]
(weight)mm_pixel/mm_lm/mm_id/mm_reg_s/mm_reg_c loss: [0.4900/0.2419/0.1899/0.0001/0.0000]

Epoch  1:   200/  200 (time: 0.5535), Step 200:
total: [1.4828]
ga/pixel/ssim/depth/epipolar loss: [0.0732/0.1618/0.0000/164.9637/32.4154]
(weight)ga/pixel/ssim/depth/epipolar loss: [0.0732/0.1618/0.0000/0.0165/0.0324]
mm_pixel/mm_lm/mm_id/mm_reg_s/mm_reg_c loss: [0.3084/659.8826/0.8153/7.1344/0.0120]
(weight)mm_pixel/mm_lm/mm_id/mm_reg_s/mm_reg_c loss: [0.5859/0.6599/0.1631/0.0007/0.0000]

Is this situation normal? If not, how can I avoid it?

Self-adjoint eigen decomposition was not successful.

When I use FaceNet for the perception loss, this error happens: from align.py, line 110, in TransformFromPointsTF: tf.self_adjoint_eig(N)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Self-adjoint eigen decomposition was not successful, the input might not be valid.
[[{node SelfAdjointEigV2}]]
[[{{node gradients/Reshape_37_grad/Shape}}]]. My tf version is 1.13.0.

When I switch to tf 1.13.1, there is another problem:
InvalidArgumentError: Got info = 3 for batch index 0, expected info = 0, debug_info = heevd

I can't find any solution for this problem. Can you help me? Thank you!
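As an aside, judging by its name TransformFromPointsTF appears to estimate a similarity transform from point correspondences; when the eigendecomposition there is numerically fragile, a standard SVD-based (Umeyama-style) formulation is a common alternative. A generic NumPy sketch, not the repository's code:

import numpy as np

# Hedged sketch: Umeyama-style similarity transform (scale, rotation, translation) mapping src to dst.
def similarity_transform(src, dst):
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / src.shape[0]           # cross-covariance of the centred point sets
    u, d, vt = np.linalg.svd(cov)
    s = np.eye(3)
    if np.linalg.det(u) * np.linalg.det(vt) < 0:   # guard against a reflection
        s[2, 2] = -1.0
    rotation = u @ s @ vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    scale = np.trace(np.diag(d) @ s) / var_src
    translation = mu_dst - scale * rotation @ mu_src
    return scale, rotation, translation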

MGCNet on Colab

I am currently trying to set up MGCNet on Colab, and when I am installing bazel, an error message pops up:

ERROR: bazel does not currently work properly from paths containing spaces (com.google.devtools.build.lib.runtime.BlazeWorkspace@38f2b815).

I do not think my path contains any spaces...
Did anyone encounter the same problem? If so, how did you solve it?
Has anyone successfully set up MGCNet on Colab?

compile tf_mesh_renderer

When I use bazel to compile tf_mesh_renderer, I ran into the problem shown below:
MGCNet-master/thirdParty/tf_mesh_renderer/mesh_renderer/kernels/BUILD:8:8: Executing genrule //mesh_renderer/kernels:rasterize_triangles_kernel failed (Exit 1): bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)
