
meingame's Introduction

MeInGame: Create a Game Character Face from a Single Portrait

This is the official PyTorch implementation of the AAAI 2021 paper: J. Lin, Y. Yuan, and Z. Zou, MeInGame: Create a Game Character Face from a Single Portrait, the Association for the Advancement of Artificial Intelligence (AAAI), 2021.

3D display of the created game characters (click to view):

Watch the video

[Updates]

  • 2021.05.11: The RGB 3D Face Dataset (Google Drive, a password protected zip file) is now available! Please download and read the LICENSE AGREEMENT carefully, and send the signed license agreement to [email protected] via email to get the PASSWORD for the zip file.

Getting Started

Requirements

Install Dependencies

pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
pip install opencv-python fvcore h5py scipy scikit-image dlib face-alignment scikit-learn tensorflow-gpu==1.14.0 gast==0.2.2
pip install "git+https://github.com/Agent-INF/pytorch3d.git@3dface"

Testing with pre-trained network

  1. Clone the repository
git clone https://github.com/FuxiCV/MeInGame
cd MeInGame
  2. Prepare the Basel Face Model following the instructions on Deep3DFaceReconstruction, and rename those files as follows:
Deep3DFaceReconstruction/BFM/BFM_model_front.mat -> ./data/models/bfm2009_face.mat
Deep3DFaceReconstruction/BFM/similarity_Lm3D_all.mat -> ./data/models/similarity_Lm3D_all.mat
Deep3DFaceReconstruction/network/FaceReconModel.pb -> ./data/models/FaceReconModel.pb
  3. Download the pre-trained model, put the .pth file into the ./checkpoints/celeba_hq_demo subfolder, and the .pkl file into the ./data/models subfolder.

  4. Run the code.

python main.py -m test -i demo
# Or, add -c to run on the CPU if you don't have a qualified GPU:
python main.py -m test -i demo -c
  5. The ./data/test subfolder contains several test images, and the ./results subfolder stores their reconstruction results. For each input test image, several output files are produced after running the demo code (a small inspection sketch follows this list):
  • "xxx_input.jpg": an RGB image after alignment, which is the input to the network
  • "xxx_neu.obj": the reconstructed 3D face in neutral expression, which can be viewed in MeshLab.
  • "xxx_uv.png": the uvmap corresponding to the obj file.

Training with CelebA-HQ dataset

Data preparation

  1. Run the following command to create the training dataset from in-the-wild images.
python create_dataset.py
# You can modify input_dir to point to your input image directory.
  2. Download our RGB 3D Face Dataset (Google Drive), unzip it, and place it into the ./data/dataset/celeba_hq_gt subfolder. Please download and read the LICENSE AGREEMENT carefully, and send the signed license agreement to [email protected] via email to get the PASSWORD for the zip file. Note: the permission for application is only open to researchers or faculty of universities or research institutes. It is prohibited for students or business entities to apply.

Training networks

After the dataset is ready, you can train the network with the following command:

python main.py -m train

Citation

Please cite the following paper if this model helps your research:

@inproceedings{lin2021meingame,
    title={MeInGame: Create a Game Character Face from a Single Portrait},
    author={Lin, Jiangke and Yuan, Yi and Zou, Zhengxia},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2021}
}

meingame's People

Contributors

fuxicv


meingame's Issues

fail to get the dataset unzip password

When I send the signed agreement to the email address in the README, my email is just rejected automatically.
Can you provide a new email address or some other way to get the password?

How to run MeinGame model with our own head mesh?

I have a head mesh of my own and would like to run the Shape Transfer module after the BFM stage. What other data would I need apart from the head mesh (landmarks, etc.), and how would I compute it?

Time

During testing it takes nearly 20-30 minutes to get the final output when using only the CPU (16 GB RAM). Is this much faster with a GPU? Roughly how long does it take to obtain the final avatar?

Segmentation fault (core dumped)

(base) root@e3ae1075ee04:~/MeInGame# python main.py -m test -i demo
[repeated numpy FutureWarning messages from tensorboard and tensorflow dtypes.py omitted]
WARNING:tensorflow:From /root/miniconda3/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:61: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
05-12 17:49:12 - x - INFO: - Namespace(adv_weight=0.01, batch_size=1, beta1=0.5, beta2=0.9, bfm_version='face', ckpt_interval=1000, con_weight=1, cpu=False, data_dir='./data/dataset/celeba_hq', data_gt_dir='./data/dataset/celeba_hq_gt', debug=False, epochs=400, face_model='230', gan_loss='nsgan', im_size=512, input='demo', l1_weight=3, learning_rate=0.0001, log_interval=10, mode='test', name='celeba_hq_demo', output=None, rename=False, restore=False, root_dir='.', sample_interval=1000, seed=1, start=0, std_weight=3, sty_weight=1, suffix='demo', sym_weight=0.1, use_cuda=True, uv_size=1024, workers=0)
Start testing...

2022-05-12 17:49:13.018253: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-05-12 17:49:13.034577: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2499925000 Hz
2022-05-12 17:49:13.038326: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560dee0ed090 executing computations on platform Host. Devices:
2022-05-12 17:49:13.038363: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2022-05-12 17:49:13.038634: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2022-05-12 17:49:13.049418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6575
pciBusID: 0000:89:00.0
2022-05-12 17:49:13.049502: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2022-05-12 17:49:13.052726: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2022-05-12 17:49:13.054944: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2022-05-12 17:49:13.055338: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2022-05-12 17:49:13.057812: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2022-05-12 17:49:13.059635: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2022-05-12 17:49:13.066102: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2022-05-12 17:49:13.068350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-05-12 17:49:15.371611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-05-12 17:49:15.371656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-05-12 17:49:15.371663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-05-12 17:49:15.374523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10082 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:89:00.0, compute capability: 6.1)
2022-05-12 17:49:15.377022: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560e374d3970 executing computations on platform CUDA. Devices:
2022-05-12 17:49:15.377039: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
/root/miniconda3/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:70: UserWarning: Faces have invalid indices
warnings.warn("Faces have invalid indices")
Segmentation fault (core dumped)

pytorch3d problem! Help!

/nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:70: UserWarning: Faces have invalid indices
warnings.warn("Faces have invalid indices")
Traceback (most recent call last):
  File "main.py", line 180, in <module>
    main()
  File "main.py", line 175, in main
    face_model=config.face_model)
  File "/nvme/jiangyan/MeInGame/uv_inpainting.py", line 363, in predict
    image, face_model)
  File "/nvme/jiangyan/MeInGame/uv_inpainting.py", line 331, in preprocess
    fragment = self.rasterizer(nsh_trans_mesh)
  File "/nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterizer.py", line 128, in forward
    cull_backfaces=raster_settings.cull_backfaces,
  File "/nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py", line 145, in rasterize_meshes
    cull_backfaces,
  File "/nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py", line 197, in forward
    cull_backfaces,
RuntimeError: Not compiled with GPU support (RasterizeMeshesNaive at /tmp/pip-req-build-9qdx_6q7/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.h:112)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f186d1dc193 in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: RasterizeMeshesNaive(at::Tensor const&, at::Tensor const&, at::Tensor const&, int, float, int, bool, bool) + 0x16b (0x7f184a2fd7ab in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #2: RasterizeMeshes(at::Tensor const&, at::Tensor const&, at::Tensor const&, int, float, int, int, int, bool, bool) + 0xed (0x7f184a2f4c3d in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #3: <unknown function> + 0x2d87d (0x7f184a30d87d in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x2d95e (0x7f184a30d95e in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x27a10 (0x7f184a307a10 in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)

frame #11: THPFunction_apply(_object*, _object*) + 0xa7f (0x7f186dc9614f in /nvme/jiangyan/anaconda3/envs/MeInGame/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #47: __libc_start_main + 0xf0 (0x7f1871fe9840 in /lib/x86_64-linux-gnu/libc.so.6)

What is the meaning of the fov_y variable?

I see fov_y = 12.5936 and facial = np.tan(fov_y / 360.0 * math.pi) initialized in the uv_creator.py file. What is the meaning of this? Does it change if we use a different head mesh?
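
For reference (this is the standard pinhole-camera relationship, not taken from the repository): fov_y looks like a vertical field of view in degrees, and tan(fov_y / 2) relates the half-height of the image plane to the camera's focal distance, so the value belongs to the rendering/projection camera rather than to any particular head mesh. A minimal sketch:

import math
import numpy as np

fov_y = 12.5936                             # vertical field of view in degrees
half_tan = np.tan(fov_y / 360.0 * math.pi)  # tan(fov_y / 2), with fov_y converted to radians

# For an image of height H rendered with this camera, the focal length in pixels would be:
H = 512
focal_px = (H / 2.0) / half_tan
print(focal_px)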

No module named 'face_alignment.models'

Please help me, tell me how to fix it.

python main.py -m test -i demo
[repeated numpy FutureWarning messages from tensorboard and tensorflow dtypes.py omitted]
Traceback (most recent call last):
  File "main.py", line 10, in <module>
    from uv_inpainting import UVInpainting
  File "/home/b/MeInGame/uv_inpainting.py", line 25, in <module>
    from lib.image_cropper import ImageCropper
  File "/home/b/MeInGame/lib/image_cropper.py", line 7, in <module>
    from lib import face_align
  File "/home/b/MeInGame/lib/face_align.py", line 9, in <module>
    from face_alignment.models import FAN, ResNetDepth
ModuleNotFoundError: No module named 'face_alignment.models'

KeyError: 'meanshape'

[repeated numpy FutureWarning messages from tensorboard and tensorflow dtypes.py omitted]
WARNING:tensorflow:From /home/gwc/anaconda3/envs/kehu1/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:61: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
05-16 18:17:59 - x - INFO: - Namespace(adv_weight=0.01, batch_size=1, beta1=0.5, beta2=0.9, bfm_version='face', ckpt_interval=1000, con_weight=1, cpu=False, data_dir='./data/dataset/celeba_hq', data_gt_dir='./data/dataset/celeba_hq_gt', debug=False, epochs=400, face_model='230', gan_loss='nsgan', im_size=512, input='demo', l1_weight=3, learning_rate=0.0001, log_interval=10, mode='test', name='celeba_hq_demo', output=None, rename=False, restore=False, root_dir='.', sample_interval=1000, seed=1, start=0, std_weight=3, sty_weight=1, suffix='demo', sym_weight=0.1, use_cuda=True, uv_size=1024, workers=0)
Start testing...

2021-05-16 18:17:59.049588: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-05-16 18:17:59.053103: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3696000000 Hz
2021-05-16 18:17:59.053327: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x555ffe617880 executing computations on platform Host. Devices:
2021-05-16 18:17:59.053352: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2021-05-16 18:17:59.053726: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2021-05-16 18:17:59.053823: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-16 18:17:59.054057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1660 major: 7 minor: 5 memoryClockRate(GHz): 1.815
pciBusID: 0000:01:00.0
2021-05-16 18:17:59.054081: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2021-05-16 18:17:59.054132: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory
2021-05-16 18:17:59.054170: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory
2021-05-16 18:17:59.054204: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory
2021-05-16 18:17:59.054237: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory
2021-05-16 18:17:59.054269: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory
2021-05-16 18:17:59.056445: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2021-05-16 18:17:59.056457: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...
2021-05-16 18:17:59.103599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-05-16 18:17:59.103617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2021-05-16 18:17:59.103621: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2021-05-16 18:17:59.104733: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-05-16 18:17:59.104963: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x555ffefc5e90 executing computations on platform CUDA. Devices:
2021-05-16 18:17:59.104972: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1660, Compute Capability 7.5
/home/gwc/anaconda3/envs/kehu1/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:70: UserWarning: Faces have invalid indices
warnings.warn("Faces have invalid indices")
Traceback (most recent call last):
  File "/home/MeInGame/main.py", line 180, in <module>
    main()
  File "/home/MeInGame/main.py", line 146, in main
    model = UVInpainting(config, device, sess, graph)
  File "/home/MeInGame/uv_inpainting.py", line 83, in __init__
    self.init_test()
  File "/home/MeInGame/uv_inpainting.py", line 268, in init_test
    self.reconstructor = Deep3DFace(self.sess, self.graph)
  File "/home/MeInGame/lib/deep3d.py", line 23, in __init__
    self.bfm = BFM_model('.', 'data/models/bfm2009_{}.mat'.format(bfm_version))
  File "/home/MeInGame/lib/deep3d.py", line 121, in __init__
    self.load_BFM09()
  File "/home/MeInGame/lib/deep3d.py", line 130, in load_BFM09
    self.shapeMU = model['meanshape'].astype(np.float32)  # mean face shape
KeyError: 'meanshape'

demo result incorrect

Hi, thanks for your great work.
I installed all the dependencies and run the test code.
I got the following results:
[attached images: 000_input, 000_uv, and two rendered result screenshots]

It looks like the face detail was not rendered. Any idea what's going wrong? Thanks.

Basel Face Model dependency

Hello,

I'd love to create something great with this, and make it user-facing, so people could enjoy it.
To cover the development costs of something like that, I'd likely have a brand sponsor the costs, which would make this a commercial application, I believe.

I saw you have a dependency on the BFM and so I got in touch with them to check the costs to use their face model, and I was quoted EUR 10’000 per year, or a one-time payment of EUR 40'000 for a perpetual license.

Unfortunately those costs make the commercial use of your library prohibitive, so I was wondering if during your research you had considered a cheaper/free alternative to BFM?

Sorry for the odd question, but I love what you built and would love to expand on it.

Best regards

invalid dataset link of 'RGB 3D face dataset'

Thanks for your contribution! Excellent work!!

It seems that the link to your 'RGB 3D face dataset' in the README.md is invalid. Could you update the dataset link?

Thanks ~!

BFM_model_front.mat

The BFM_model_front.mat file is very difficult to obtain.
Could you provide it directly?
Thank you very much!

Face style

Could I change the face style of the game face? How?

train error

    for items in train_loader:
  File "/home/admin/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/home/admin/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 719, in __init__
    w.start()
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/admin/miniconda3/envs/3dface/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects

weird Texture

Hi @FuxiCV, thanks for sharing your great project. I am getting weird textures when I run your repo. Would you mind telling me where I am making a mistake?
Input image: [attached: 000_input]
Texture produced: [attached: 000_uv]

Question about shape transfer function

Hi guys,

While I was going through the paper, I found the definition of RBF interpolation mentioned in 3.3 shape transfer part is kind of confusing for me.

I know that x' are the 68 face landmarks on the game mesh, but what about the input x? The paper says 'input x is set to the original position of a game mesh vertex'. Does that mean x represents vertices of the game mesh? But if so, how can f(x) be the transfer that maps the original 3DMM mesh into the game mesh? I'm quite confused about this part.

Thanks in advance for anyone who helps me.
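
For what it's worth, here is one possible reading of Sec. 3.3 as a minimal RBF sketch (using scipy with placeholder data; this is an illustration, not the authors' implementation): the 68 game-mesh landmarks act as the interpolation centers, their offsets toward the matching 3DMM landmarks are the known values, and the fitted function is then evaluated at every original game-mesh vertex position x to deform the whole mesh.

import numpy as np
from scipy.interpolate import Rbf

# Placeholder data standing in for the real meshes.
game_lm = np.random.rand(68, 3)                   # landmarks on the game mesh (x')
bfm_lm = game_lm + 0.05 * np.random.randn(68, 3)  # matching landmarks on the fitted 3DMM face
game_verts = np.random.rand(5000, 3)              # all original game-mesh vertex positions (x)

offsets = bfm_lm - game_lm                        # known displacement at each landmark
deformed = game_verts.copy()
for c in range(3):                                # one scalar RBF per coordinate
    f = Rbf(game_lm[:, 0], game_lm[:, 1], game_lm[:, 2], offsets[:, c], function='thin_plate')
    deformed[:, c] += f(game_verts[:, 0], game_verts[:, 1], game_verts[:, 2])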

Fail to get password of RGB 3D Face Dataset.

Thank you for your sharing.
But when I applied for the password of the dataset, my email was returned.
The bounce reason is an SMTP error; maybe the e-mail address ([email protected]) is full or has been deleted.
Is there any other way to apply for the password?

How to run your demo code on multiple GPUs

When I try to run main.py on two or more GPUs I get: RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1. How do I solve this, given that my single GPU has low memory?
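
If the goal is just to get past the error, a common workaround (not specific to this repository) is to expose only a single physical GPU to the process, so every module and input lands on cuda:0; a minimal sketch:

# sketch: pin the process to one physical GPU before torch is imported
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # or "1", "2", ... to pick another card

import torch
print(torch.cuda.device_count())           # should report 1; that GPU is now cuda:0

Setting the same environment variable in the shell before launching python has the same effect. Splitting one model across several small GPUs generally requires model-parallel changes to the code rather than DataParallel, which replicates the whole model on each device.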

UserWarning: Faces have invalid indices

2021-03-11 14:36:42.069441: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
/home/b/anaconda3/envs/fit/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:70: UserWarning: Faces have invalid indices
warnings.warn("Faces have invalid indices")
Traceback (most recent call last):
  File "main.py", line 180, in <module>
    main()
  File "main.py", line 146, in main
    model = UVInpainting(config, device, sess, graph)
  File "/home/b/MeInGame/uv_inpainting.py", line 83, in __init__
    self.init_test()
  File "/home/b/MeInGame/uv_inpainting.py", line 268, in init_test
    self.reconstructor = Deep3DFace(self.sess, self.graph)
  File "/home/b/MeInGame/lib/deep3d.py", line 23, in __init__
    self.bfm = BFM_model('.', 'data/models/bfm2009_{}.mat'.format(bfm_version))
  File "/home/b/MeInGame/lib/deep3d.py", line 121, in __init__
    self.load_BFM09()
  File "/home/b/MeInGame/lib/deep3d.py", line 130, in load_BFM09
    self.shapeMU = model['meanshape'].astype(np.float32)  # mean face shape
KeyError: 'meanshape'

Please help me, how can I fix this problem?

Licence

This work looks very promising.

What's the licence of the code/model/dataset it's been trained on?

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:2

Thanks for sharing your great work

I got an error when running the code

python main.py -m test -i demo

I'd appreciate your advice

Traceback (most recent call last):
  File "main.py", line 180, in <module>
    main()
  File "main.py", line 175, in main
    face_model=config.face_model)
  File "/home/ubuntu/hongiee/MeIn/uv_inpainting.py", line 363, in predict
    image, face_model)
  File "/home/ubuntu/hongiee/MeIn/uv_inpainting.py", line 295, in preprocess
    segments = self.segmenter.segment_torch(images)
  File "/home/ubuntu/hongiee/MeIn/lib/face_segment.py", line 92, in segment_torch
    segments = self.model(images)
  File "/home/ubuntu/anaconda3/envs/MeIn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/MeIn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
    "them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:2

What is the meaning of get_rule_mask()?

Thanks.

MeInGame/lib/dataset.py

Lines 264 to 273 in a699098

def get_rule_mask(image):
  # Rule-based RGB thresholds: keep pixels with R > 95, G > 40, B > 20, a max-min
  # spread above 15, and a dominant red channel.
  R = image[..., 0]
  G = image[..., 1]
  B = image[..., 2]
  mask = (R > 95) & (G > 40) & (B > 20) & ((
      np.max(image, axis=-1) - np.min(image, axis=-1)) > 15) & (R > G) & (R > B)
  # Stricter variant, commented out in the source:
  # mask = (R > 95) & (G > 40) & (B > 20) & (
  #     (np.max(image, axis=-1) - np.min(image, axis=-1)) >
  #     15) & ((R - G) > 20) & ((R - B) > 20)
  return mask
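
The thresholds above resemble a common rule-based RGB skin-color heuristic, so the function appears to return a rough skin mask. A minimal usage sketch (the import path and image file are assumptions, not verified against the repository):

import cv2
from lib.dataset import get_rule_mask   # assumed import path for the function shown above

# OpenCV loads BGR, so convert to RGB before applying the RGB thresholds.
image = cv2.cvtColor(cv2.imread("data/test/000.jpg"), cv2.COLOR_BGR2RGB)
mask = get_rule_mask(image)              # boolean (H, W) array
print("skin-like pixels:", int(mask.sum()), "of", mask.size)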

libtorch_cpu.so: cannot open shared object file: No such file or directory

How can I fix this problem? I am using the CPU.

Command I use:

python main.py -m test -i demo -c

Error detail

    from .graph_conv import GraphConv
  File "/home/hd/anaconda3/envs/deep3d/lib/python3.6/site-packages/pytorch3d/ops/graph_conv.py", line 6, in <module>
    from pytorch3d import _C
ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory

Environment info

Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2

Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch3d==0.2.0
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] Could not collect

Where can I find supplementary materials?

I want to learn more about your shape transfer module; the original paper mentions that additional details are given in the supplementary material, but I am not able to find it. Do you have a link for that?

How are the landmark points chosen?

Hello, how are the landmark coordinates chosen during shape transfer? Which software was used to look up the vertex indices, Blender or Maya?
tgt_ref = meshio.Mesh(os.path.join(tgt_dir, 'deformed.obj'))
self.lm_idx = np.loadtxt(os.path.join(tgt_dir, 'landmarks.txt'), dtype=np.int32)
