barisgecer / ganfit

Project Page of 'GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction' [CVPR2019]

Home Page: http://openaccess.thecvf.com/content_CVPR_2019/html/Gecer_GANFIT_Generative_Adversarial_Network_Fitting_for_High_Fidelity_3D_Face_CVPR_2019_paper.html

License: GNU General Public License v3.0

Language: Python (100.00%)
Topics: 3d-face-reconstruction, generative-adversarial-network, differentiable-rendering, face-recognition, cvpr2019, identity-features, texture, 3dface

ganfit's Issues

How long is the training time?

This is really nice work!
I have a question: how long does the model take to train on the NVIDIA GTX 1080 Ti GPU mentioned in the paper?
Thank you!

How to get the ground truth and template landmarks

Thanks for sharing the evaluation code. I am trying to evaluate some other ground-truth datasets such as FaceWarehouse. I am wondering how to manually annotate the template mesh: is there any tool available for that? It would also be helpful if you could provide a sample of how you generated the landmark files for the ground truths, as the ground-truth dataset does not come with any landmarks.

About the dataset

Hi @barisgecer, thanks for your brilliant work.

In the paper, you use the Large Scale Face Model as the 3DMM shape model, together with the 4DFAB database.
I want to reproduce the paper, but I cannot get access to these two datasets. Could you please advise me on how to obtain them?

I am looking forward to your reply!

Best regards.

ValueError: The glob ganfit_plus/subject_01/Model/frontal1/obj/*.obj yields no assets

python3 micc_evaluation.py --template template.obj --template_lms landmark_ids.pkl ganfit_plus ganfit_reocnstruction

/home/nicolast0604/GANFit/env/lib/python3.6/site-packages/menpo/image/base.py:25: UserWarning: Falling back to scipy interpolation for affine warps
warn("Falling back to scipy interpolation for affine warps")
ID: 1
Traceback (most recent call last):
  File "micc_evaluation.py", line 100, in <module>
    distances = benchmarker.benchmark(args.reconstruction_path)
  File "micc_evaluation.py", line 83, in benchmark
    distances[setting_id, scan-1, id-1] = self.calculate_error(fitting_path, gt_path, id, setting, scan, False)
  File "micc_evaluation.py", line 56, in calculate_error
    org = m3io.import_meshes(self.registration_path + gt_path + 'obj/*.obj')[0]
  File "/home/nicolast0604/GANFit/env/lib/python3.6/site-packages/menpo3d/io/input/base.py", line 185, in import_meshes
    verbose=verbose,
  File "/home/nicolast0604/GANFit/env/lib/python3.6/site-packages/menpo/io/input/base.py", line 880, in _import_glob_lazy_list
    raise ValueError("The glob {} yields no assets".format(pattern))
ValueError: The glob ganfit_plus/subject_01/Model/frontal1/obj/*.obj yields no assets
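For anyone hitting this, a quick sanity check of the folder nesting can help. The sketch below is my own illustration, not part of the repo: it rebuilds the glob pattern that the traceback shows micc_evaluation.py constructing and reports whether any meshes sit at that exact depth.

```python
import glob
import os

# Reproduce the glob from the traceback:
#   <path>/subject_01/Model/frontal1/obj/*.obj
# If this matches nothing, the .obj files are not at the expected depth
# (or a folder name differs in spelling or case).
reconstruction_path = "ganfit_plus"   # root folder passed on the command line
pattern = os.path.join(
    reconstruction_path, "subject_01", "Model", "frontal1", "obj", "*.obj"
)
matches = glob.glob(pattern)
if matches:
    print(f"Found {len(matches)} mesh(es), e.g. {matches[0]}")
else:
    print(f"No meshes match {pattern!r}; check the folder nesting "
          "and that the files really end in .obj")
```

Running this from the directory where you launch micc_evaluation.py shows immediately whether the expected hierarchy is in place.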

Input of the Texture GAN

Hi @barisgecer, thanks for sharing this interesting work! The ideas are novel and the results are very impressive!

In the paper, you train a Texture GAN with texture parameters (pt) as input, generating a UV texture map for each identity, with the ground truth provided by the texture dataset.

After reading the paper, I wonder how the mentioned texture parameters pt are obtained. Are they estimated by a 3DMM model? And are they the per-vertex color values or something else?

As a beginner in 3D face topic, I would appreciate it if you could provide more details about it, which would be very helpful to me!

I am looking forward to your reply!

Best regards.
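As a toy illustration of the fitting idea (not GANFIT's actual pipeline): in fitting-based methods the latent texture parameters are typically recovered by gradient descent through a fixed generator, rather than predicted directly. The sketch below uses a random linear map as a stand-in for the trained PGGAN generator; all names and sizes here are made up for illustration.

```python
import numpy as np

# Toy stand-in for the texture generator: a fixed linear map from a
# latent code p_t to a flattened "UV texture".  In GANFIT the generator
# is a trained PGGAN; here it is a random matrix purely for illustration.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 8))        # generator weights (fixed)
p_true = rng.standard_normal(8)         # latent that produced the target
target = G @ p_true                     # "observed" UV texture

# Estimate p_t by gradient descent on the L2 reconstruction loss,
# mirroring how fitting-based methods recover latent parameters from
# an image instead of using a separate encoder.
p_t = np.zeros(8)
lr = 0.005
for _ in range(2000):
    residual = G @ p_t - target
    grad = G.T @ residual               # d/dp 0.5 * ||G p - target||^2
    p_t -= lr * grad
```

In the real setting the "generator" is the texture network and the loss is computed on the rendered image, but the optimization pattern is the same.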

Texture coordinate

Hi, I want to ask how you define the texture coordinates that put the 3D vertices in correspondence with the UV texture map. Do you know of any learning materials on this topic?
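For background on the mechanism being asked about: in the Wavefront OBJ format, each `vt` line stores a (u, v) coordinate in [0, 1], and face entries `f v/vt ...` pair a vertex index with a texture-coordinate index. The two-triangle OBJ text below is a hypothetical example, not taken from the GANFIT template.

```python
# Minimal OBJ parsing sketch showing how 3D vertices are tied to UV
# texture coordinates.  The mesh below is made up for illustration.
obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1 2/2 3/3
"""

vertices, texcoords, faces = [], [], []
for line in obj_text.splitlines():
    parts = line.split()
    if not parts:
        continue
    if parts[0] == "v":
        vertices.append(tuple(float(x) for x in parts[1:4]))
    elif parts[0] == "vt":
        texcoords.append(tuple(float(x) for x in parts[1:3]))
    elif parts[0] == "f":
        # each corner: vertex_index/texcoord_index (1-based in OBJ)
        faces.append([tuple(int(i) - 1 for i in c.split("/"))
                      for c in parts[1:]])

# Each face corner gives the UV at which the texture image is sampled.
for v_idx, vt_idx in faces[0]:
    print(vertices[v_idx], "->", texcoords[vt_idx])
```

The UV parameterization itself (which (u, v) each vertex gets) is a design choice baked into the template mesh.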

latent parameters

Good job!
I have a question: how do you get the latent parameters from an image? Are they obtained from deep networks?
Thanks for your reply.

How to evaluate florence dataset for my model

Thank you for providing the code for evaluation.

I am sorry for the naive question, but what I really can't understand is which pictures I have to predict meshes for using my model.
If we generate meshes for Indoor-Cooperative, PTZ-Indoor, and PTZ-Outdoor and compare them with the registered ground-truth meshes, which are in a frontal pose with a neutral expression, the predicted meshes will contain some pose and expression along with the shape. How are they comparable with each other? How is frontal1 or frontal2 comparable with Indoor-Cooperative, PTZ-Indoor, or PTZ-Outdoor?
I hope I made my point clear.
Also, what should the file hierarchy of the predicted meshes be?

The uv format of GANFIT

Hi, thanks for your work and all the implementation details you shared in the GitHub issue :)

I find that many follow-up works are built on top of GANFIT, including AvatarMe, AvatarMe++, and FitMe. Do the 3DMMs used by these methods share the same topology and UV parameterization as GANFIT? I see the template.obj on the GANFIT project page; is this the template mesh used by GANFIT and its follow-ups (AvatarMe, AvatarMe++, and FitMe)? And is this file the UV parameterization of template.obj?

Thanks in advance.

unable to reproduce the result

I am struggling to reproduce the results proposed in the paper.
In practice, I found that the GAN does not work.

I really cannot understand the purpose of this GitHub repository, as it offers nothing beneficial to the community.

About How to iteratively update the input vector Pt of PGGAN

Thanks for your great work. I have a question about the input parameter Pt of PGGAN. Do we first need to roughly update Pt by minimizing the per-pixel Manhattan distance between the UV texture of the input 2D image and the output UV texture of PGGAN, and then adjust Pt through the model-fitting loss? In short, do we need a two-step adjustment of Pt? I would appreciate your reply.
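To make the two-step schedule in the question concrete, here is a toy sketch of that pattern: first fit the latent with a per-pixel L1 (Manhattan) texture loss alone, then continue with a combined loss. The linear "generator" and the L2 prior standing in for the model-fitting losses are illustrative assumptions, not GANFIT's actual networks or losses.

```python
import numpy as np

# Stand-in generator: latent (8-d) -> flattened "UV texture" (64-d).
rng = np.random.default_rng(1)
G = rng.standard_normal((64, 8))
target_uv = G @ rng.standard_normal(8)   # "UV texture of the input image"

p_t = np.zeros(8)

# Step 1: rough initialisation from the L1 texture loss alone
# (subgradient of ||G p - t||_1 is G^T sign(G p - t)).
for _ in range(1500):
    p_t -= 0.005 * G.T @ np.sign(G @ p_t - target_uv)

# Step 2: refine with texture loss + a stand-in fitting term (here a
# simple L2 prior pulling p_t toward the origin, weighted by lam).
lam = 0.01
for _ in range(500):
    grad = G.T @ np.sign(G @ p_t - target_uv) + lam * p_t
    p_t -= 0.002 * grad

final_l1 = float(np.mean(np.abs(G @ p_t - target_uv)))
```

Whether GANFIT itself uses such a two-stage schedule is exactly the open question here; the sketch only shows what the mechanics would look like.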

A problem about paper

Hello! I've read your paper and tried to reproduce your texture generation network with another dataset (the NJU dataset). After training, the generation network is able to produce reasonable results. However, when I try to integrate the network into the optimization pipeline, I find it very hard to control the quality of the generated texture, since there is no direct constraint on the texture or the latent code. Have you encountered the same problem before? And do you have any ideas on how to keep the generated texture in good quality?

About the Texture GAN

Hi @barisgecer, when I am reproducing the texture GAN, I encountered some problems.

The texture GAN's input is a 512-dimensional latent vector, and ArcFace's output is also 512-dimensional. Does this mean that the output of ArcFace is the input of the GAN? And is this output the p_t parameter?

What's more, once the texture GAN has been trained, the output should be a [512, 512, 3] UV map, but in Figure 2 of the paper the output's height and width are not equal. So what is the output of the texture GAN? And how do the output UV maps correspond to the 3D vertices so that the colors can be reprojected?

Looking forward to your reply. Thank you so much!
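Regarding the last sub-question, the general mechanism (independent of GANFIT's specifics) is that each vertex carries a (u, v) texture coordinate and its colour is read from the generated UV image at that location. The sketch below uses a tiny made-up 4x4 "UV map" and nearest-neighbour lookup purely for illustration; the v-flip convention is an assumption that varies between tools.

```python
import numpy as np

# Tiny illustrative UV map (H x W x 3) and one (u, v) pair per vertex.
uv_map = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
texcoords = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.5, 0.5]])

h, w = uv_map.shape[:2]
# Nearest-neighbour lookup: u indexes columns, v indexes rows, using the
# common convention that v = 0 is the bottom row of the image.
cols = np.clip(np.round(texcoords[:, 0] * (w - 1)).astype(int), 0, w - 1)
rows = np.clip(np.round((1.0 - texcoords[:, 1]) * (h - 1)).astype(int), 0, h - 1)
vertex_colors = uv_map[rows, cols]   # one RGB triple per vertex
```

A renderer does the same lookup per pixel (usually with bilinear interpolation) when reprojecting the texture onto the mesh.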

About images of MOFA-test dataset

Hi Baris, thanks for your impressive work. I notice that some works perform evaluation on the MOFA-test dataset, mainly on the 7 images shown in GANFIT. However, I am struggling to find the source of these images. If possible, could you please share these 7 images with me, or tell me where I can download them? Many thanks for your help.
