evgenykashin / stylegan2-distillation
Home Page: https://arxiv.org/abs/2003.03581
License: Other
When will you release the pre-trained model for style mix?
Hi!
I used the command from the README file to generate random images and saved the latents:
python run_generator.py generate-images-custom --network=gdrive:networks/stylegan2-ffhq-config-f.pk --truncation-psi=0.7 --num 5000 --result-dir /mnt/generated_faces
Now I want to learn directions, but the notebook you recommend (https://github.com/Puzer/stylegan-encoder/blob/master/Learn_direction_in_latent_space.ipynb) uses 18x512 latents, while your repo generates 1x512 latents.
How do I convert these 1x512 latents to 18x512 so I can train and apply the learned directions?
thank you
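For what it's worth, a common workaround (not necessarily the author's intended approach) is to tile the 1x512 w-latent across all 18 rows, since without style mixing the mapped w vector is identical for every synthesis layer. A minimal sketch with NumPy, using a random array as a stand-in for a saved latent:

```python
import numpy as np

# Stand-in for a latent saved by run_generator.py; in practice, load your .npy file.
w = np.random.randn(1, 512)

# Repeat the single w vector once per synthesis layer to get the 18x512 "w-plus" form.
w_plus = np.tile(w, (18, 1))

assert w_plus.shape == (18, 512)
```

The resulting 18x512 array can then be fed to the direction-learning notebook in place of the per-layer latents it expects.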
When I use Learn_direction_in_latent_space.ipynb to find a direction in dlatents, I don't have the data.tsv file. Can you tell me what that is?
Dear Evgeny,
Do you have plans to release the code? I'm very interested in playing with it.
Best
Diego
Your gender dataset link doesn't work. Could you please check it?
I really like your work and am looking forward to trying the code out when you release it. I have one question that I hope you can answer.
You write the following:
"StyleGAN2 latent space is not perfectly disentangled, so the transformations made by our network are not perfectly pure. Despite the latent space is not disentangled enough to make pure transformations, impurities are not so severe"
What does it mean that the latent space is not perfectly disentangled? Could you elaborate on that?
I believe the Model_infer.ipynb notebook is not complete. Will you release it any time soon?
Hi,
I've followed your notebook, which suggests training directions on the 1x512 latents and then tiling the resulting directions to 18x512.
Unfortunately, it doesn't really work.
I have 300k images annotated with the Microsoft API, and I would love to contribute more directions to this repo, but none of the directions I learn work as well as the ones provided.
Could you clarify how exactly those directions were trained?
I've tried training on both the 1x512 and the 18x512 latents, but even a simple gender direction learned with Puzer's original notebook doesn't work well.
In his notebook he trained on the mapped 18x512 latents, and when applying a direction he used only a [:8] slice of it.
Your notebook suggests training on the 1x512 latents and tiling the learned direction to 18x512 before saving. Unfortunately, that process doesn't work.
Would you mind updating your notebook with direction-learning instructions that can be reproduced? It would be great if you could clarify exactly how the provided directions were trained and how exactly to apply them (whether you use only a portion of the learned direction, as Puzer does, or add the whole direction vector to the latent).
Thank you
Hi, thanks for your share~
Do you use a virtual environment such as conda?
If so, could you please share it, e.g. as a *.yaml file?
Hi again!
I have a quick question: I have already generated an image using python run_generator.py generate-images-custom, and I want to apply another transformation to that same image (for example, I changed the gender of the image using my own attribute directions, and now I also want to change the age with another attribute latent vector). How can I do that?
Thanks!
Moaz
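Since the provided directions are linear vectors in latent space, one approach (a sketch under the assumption that edits compose linearly, not a confirmed answer from the author) is to add several scaled direction vectors to the same saved latent before regenerating the image:

```python
import numpy as np

# Stand-ins for a saved 18x512 dlatent and two hypothetical learned directions;
# in practice, load these from the .npy files produced by the notebooks.
latent = np.random.randn(18, 512)
gender_dir = np.random.randn(18, 512)
age_dir = np.random.randn(18, 512)

# Apply both edits at once by adding scaled directions.
# The coefficients control the strength and sign of each transformation.
edited = latent + 1.5 * gender_dir - 2.0 * age_dir

assert edited.shape == (18, 512)
```

The edited latent can then be passed back through the synthesis network, so both attribute changes appear in a single generated image.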
I want to transform a custom image, for example my own photo, using a latent direction that I trained, besides using pix2pix. Is that possible?
Hi, very impressive work !
Is there any possibility of sharing your attribute predictor, or indicating which predictors you used in the paper?
Thanks a lot.
If not, can you give me some pointers on how to do it? TY
As mentioned in the title, I tried to use the Colab demo, but couldn't make it work.
Hi!
How many samples on average did you use to learn the latent directions you provide?
Is it more than 10k samples?
thanks
Hi, I have the following questions.
I am training Pix2Pix with 1024 input. Should the loadSize still be 512, or should I change it to 1024?
Would using a batch size of 2 instead of 8 result in performance deterioration?
Thank you.
Robert is updating his directions for stylegan3 https://twitter.com/robertluxemburg/status/1448352980521177090.
fractions.gcd() was removed in Python 3.9; use math.gcd() instead.
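The drop-in replacement looks like this (fractions.gcd was deprecated in Python 3.5 and removed in 3.9; math.gcd has been available since 3.5):

```python
import math

# math.gcd replaces the removed fractions.gcd for computing
# the greatest common divisor of two integers.
print(math.gcd(12, 18))  # → 6
```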
Hi
Thank you for your great work.
Could you tell me whether the "Synthetic dataset for gender swap" consists of images generated by StyleGAN2, as you mentioned in the paper, or images generated by Pix2pixHD?
Can you please provide any details on pix2pixHD parameters / modifications used to train models?
I'm trying to train some models using a similar procedure (generating pairs from StyleGAN), but if I use vanilla pix2pixHD settings the results are not so good.
Dropbox wouldn't let me download the dataset, and now the link is dead entirely (404). Is there any way to get access to the data?
Example: to generate images with a smile, you use the smile direction:
direction_path ../stylegan2directions/smile.npy
Now what if I wanted a person that was male and had a smile.
Do I have to run each individually or can I combine the generation in one step?
@EvgenyKashin
I want to download your gender dataset from Dropbox, but it shows "The zip file is too large" and fails to download. Could you zip the files separately, or split them across several folders as in the FFHQ dataset?
Hope to hear from you.
Hi, thanks for your interesting work. I have a question about the comparison methods for the gender-swap task. I am wondering why you did not compare your method with CycleGAN, which is a popular baseline for unpaired image-to-image translation. Looking forward to your reply.
Thanks for your stunning work!
I'd like to play with this, but I don't have the hardware to train it.
If someone who has trained this could share their weights, I would appreciate it.
Hello,
I am trying to run your Model_infer.ipynb file on Colab, but I got this error: No such file or directory: 'checkpoints/r512_smile_big_v2/latest_net_G.pth'. I couldn't actually find the directory the error mentions. I think you wrote that it was going to be released later; is that so? If so, can you say when you will release it? I am looking forward to working with this repository.
So if I create smile images, what is pix2pixHD doing? When using pix2pixHD, do I have to train it first, or is it pretrained for the stylegan2directions? TY
@EvgenyKashin
Thank you for your great gender-swap dataset! Would you mind sharing the Aging dataset too?
Hey there!
Love what you did here :)
Is there any chance of getting access to the trained model?
Hi,
I would like to create a dataset of images generated with the InterFaceGAN method.
This dataset should be as varied as possible (i.e. combinations of age, gender, pose, etc.) and does not have to be tagged.
I can use your code to generate the images myself, but before doing so I was wondering whether such a dataset already exists.
I saw that you supplied a gender-swap dataset. What about datasets for other latent-space directions?
Thanks!
Hi,
I am just wondering, how am I supposed to use Learn_direction_in_latent_space.ipynb?
What should "data.tsv" be? How should I organize my data here?
Thanks!
Hello,
Could you please explain in more detail how to run style mixing?
Thank you
Just thought I would add your TODO as an issue to be worked on. Thank you very much for this repo and the explanation.