Comments (3)
We do not have the UV texture of the input image, so how can we do that?
No, it is not a two-step adjustment. Although you can find a better initialization point to start the optimization (see, e.g., https://arxiv.org/abs/2105.07474), fitting only through the rendered textured trimesh (by its distance to the input image) should be sufficient.
Regards,
Baris
from ganfit.
@barisgecer Thanks for your reply. Initially, I trained a PGGAN on 5,000 UV textures, then fixed the trained PGGAN as a texture generator and optimized its latent vector pt according to the Manhattan distance between the rendered image and the input image. The purpose was to independently verify the effectiveness of the PGGAN. However, when I finish the fitting process, the UV texture output by the PGGAN does not match the identity of the input image, as shown below:
The first row shows the input images and the UV textures generated by the PCA method. The second row shows the UV textures generated by the pretrained PGGAN.
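For reference, the kind of latent-vector fitting described above (fix the generator, optimize the latent to minimize a per-pixel Manhattan distance) can be sketched as a toy, runnable loop. Everything here is a hypothetical stand-in: `G` replaces the trained PGGAN plus renderer, and the "images" are short lists, so this is only an illustration of the optimization scheme, not GANFIT's or the questioner's actual code.

```python
# Toy illustration of fitting a fixed generator's latent vector by
# minimising a per-pixel Manhattan (L1) distance with a numerical
# subgradient. G is a hypothetical stand-in for a trained generator
# plus renderer, chosen only so the loop is runnable.

def G(pt):
    # stand-in "generator": maps a 2-D latent to 3 "pixels"
    return [pt[0] + pt[1], pt[0] - pt[1], 2.0 * pt[0]]

def l1(a, b):
    # per-pixel Manhattan (L1) distance
    return sum(abs(x - y) for x, y in zip(a, b))

target = [3.0, 1.0, 4.0]   # stands in for the input image
pt = [0.0, 0.0]            # latent initialisation
eps = 1e-4

for t in range(3000):
    lr = 0.2 / (1.0 + t / 50.0)          # diminishing step size
    base = l1(G(pt), target)
    grad = []
    for i in range(len(pt)):
        # forward-difference estimate of the subgradient per coordinate
        bumped = pt[:]
        bumped[i] += eps
        grad.append((l1(G(bumped), target) - base) / eps)
    pt = [p - lr * g for p, g in zip(pt, grad)]

print(l1(G(pt), target))   # should end up close to 0 after fitting
```

In practice one would of course backpropagate through the generator (e.g. with autograd) rather than use numerical differences; the diminishing step size is what lets the L1 (non-smooth) objective settle instead of oscillating.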
Besides, if I just test the fitting ability of the PGGAN, like GAN inversion, the results are as follows:
The left image is the original UV texture; the right one is the UV texture generated by the pretrained PGGAN.
Is my fitting method correct? Could you provide a more detailed texture fitting process?
Besides, the GANFIT paper says "fitting with a generator network can be formulated as an optimization that minimizes per-pixel Manhattan distance between target texture in UV space Iuv and the network output G(pt) with respect to the latent parameter pt". What is the "target texture in UV space" — the input real image or not? Can I say that the distance is between the input image and the rendered image corresponding to G(pt)?
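In symbols, the sentence quoted above amounts to the following (my own transcription using the symbols from the quote — Iuv the target texture in UV space, G the generator, pt the latent parameter — not a formula copied from the paper):

```latex
p_t^{*} \;=\; \arg\min_{p_t} \; \bigl\lVert I_{uv} - G(p_t) \bigr\rVert_{1}
```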
Thank you again for your great work. I look forward to receiving your reply.
Well, your approach is completely different from ours. You assume that you can extract a UV texture from the input image and then fit to that texture (why would you need texture fitting if you already have the texture?). However, we believe that extracting pixels from the input image and completing the missing parts would bake in a lot of illumination from the scene. GANFit instead does the fitting after rendering with a proper lighting model, in order to disentangle illumination and texture. Since we have a texture model that is quite high resolution and has consistent illumination (which we call albedo; in fact it still has some highlights), we can then completely remove the lighting with methods such as AvatarMe. If we had a mix of lighting, AvatarMe would not be that successful.
I believe you should first understand that our fitting approach is based on rendering the texture and comparing it directly with the input image through a combination of loss functions, which includes an identity loss as well (please see the paper). Yours, in contrast, assumes the target texture is available and computes a primitive pixel-to-pixel Manhattan distance, which does not care about identity and therefore cannot have a global effect.
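The distinction above — a pixel-wise term plus an identity term computed from a face-embedding network — can be sketched as follows. Every function here is a hypothetical placeholder (the real identity loss would use a pretrained face-recognition network and real rendered images); the point is only the shape of the combined objective, not GANFit's actual code.

```python
# Toy sketch of a combined fitting loss: per-pixel L1 on the rendered
# image plus an identity term based on cosine similarity between
# (stand-in) face embeddings of the rendered and input images.

import math

def embed(img):
    # stand-in "identity network": any fixed non-trivial mapping works
    # here; in reality this would be a pretrained face-recognition net
    return [sum(img), img[0] - img[-1]]

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def combined_loss(rendered, target, w_pix=1.0, w_id=1.0):
    # pixel term: per-pixel Manhattan distance in image space
    pix = sum(abs(x - y) for x, y in zip(rendered, target))
    # identity term: penalise embedding dissimilarity
    id_term = 1.0 - cosine(embed(rendered), embed(target))
    return w_pix * pix + w_id * id_term

print(combined_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # near 0 for identical images
print(combined_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]))  # larger for mismatched images
```

Because the embedding summarises the whole image, the identity term is what gives the loss a global effect: it can pull the fit toward the right person even when per-pixel differences point elsewhere.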