MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing (SIGGRAPH 2020)
License: MIT License
Hello there, I'm trying to re-train MichiGAN on my custom dataset and have a few questions:
1. Input shape:
2. Number of GPUs:
I used 2 RTX GPUs and hit this error:
RuntimeError: CUDA error: device-side assert triggered
Any idea why? Will the code run smoothly with only ONE GPU?
Thank you.
I'm having this issue while running inference with the pretrained model.
I googled it, and some say I downloaded the wrong model, the wrong data, or something else.
I have no idea what to do next or how to fix it; I need some help.
Traceback (most recent call last):
File "demo.py", line 521, in
opt = DemoOptions().parse()
File "/home/student/michigan_test/MichiGAN/options/base_options.py", line 235, in parse
torch.cuda.set_device(opt.gpu_ids[0])
File "/home/student/anaconda3/envs/michigan_env/lib/python3.7/site-packages/torch/cuda/__init__.py", line 311, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
I tried CUDA_LAUNCH_BLOCKING=1, but it still doesn't work.
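The "invalid device ordinal" above usually means the --gpu_ids option names a device index that does not exist on the machine (e.g. requesting GPU 1 on a single-GPU box). A minimal sketch of the kind of validation that could run before torch.cuda.set_device; the parse_gpu_ids helper is hypothetical and not part of MichiGAN, and in real code available_count would come from torch.cuda.device_count():

```python
def parse_gpu_ids(gpu_ids_str, available_count):
    """Parse a comma-separated GPU id string (e.g. "0,1") and keep only
    ids that actually exist on this machine. available_count would be
    torch.cuda.device_count() in a real PyTorch setup."""
    requested = [int(tok) for tok in gpu_ids_str.split(",") if tok.strip()]
    valid = [i for i in requested if 0 <= i < available_count]
    dropped = sorted(set(requested) - set(valid))
    if dropped:
        print(f"warning: ignoring nonexistent GPU ids {dropped}")
    return valid

# With one physical GPU, asking for "0,1" keeps only device 0:
print(parse_gpu_ids("0,1", available_count=1))  # → [0]
```

Passing only the surviving ids to torch.cuda.set_device avoids the ordinal error; an empty result would mean falling back to CPU.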
The test options at https://github.com/tzt101/MichiGAN/blob/master/options/test_options.py have flags like --four_image_show and --input_relation that have not been implemented. Is this intentional?
I am unable to run it on CPU. It gives the following error when running demo.py.
./checkpoints\MichiGAN\SInpaintingModel_gen.pth
THCudaCheck FAIL file=..\aten\src\THC\THCGeneral.cpp line=51 error=38 : no CUDA-capable device is detected
Traceback (most recent call last):
File "demo.py", line 329, in edit
orient_stroke = cal_stroke_orient.stroke_to_orient(mask_stroke)
File "C:\Users\dmishra\MichiGAN\ui_util\cal_orient_stroke.py", line 143, in stroke_to_orient
stroke_mask_tensor = torch.unsqueeze(stroke_mask_tensor, 0).cuda()
File "C:\Users\dmishra\Anaconda3\envs\michigan\lib\site-packages\torch\cuda\__init__.py", line 162, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at ..\aten\src\THC\THCGeneral.cpp:51
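The crash above comes from an unconditional .cuda() call on a machine without a CUDA device; running on CPU needs both checkpoints loaded with torch.load(path, map_location="cpu") and tensors placed with .to(device) instead of .cuda(). A framework-free sketch of the fallback pattern (the helper name is made up for illustration; in PyTorch code, cuda_available would be torch.cuda.is_available()):

```python
def choose_device(requested, cuda_available):
    """Fall back to CPU when CUDA is requested but unavailable.
    In PyTorch code, tensors would then be moved with tensor.to(device)
    rather than the hard-coded tensor.cuda() that raises error 38 here."""
    if requested.startswith("cuda") and not cuda_available:
        print("warning: CUDA not available, falling back to cpu")
        return "cpu"
    return requested

print(choose_device("cuda:0", cuda_available=False))  # → cpu
```

Every .cuda() call in the demo (such as the one in cal_orient_stroke.py above) would need this treatment for a true CPU-only run.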
Thank you for your work!
I have run the inference script many times with different flags. Could you tell me how to achieve hair reshaping?
Could you describe the preprocessing pipeline and input data format? I wasted a lot of time trying to run inference on a non-FFHQ dataset.
Thanks for publishing this great work!
I tried inference.py following the procedure described in the README, but I got an error during unpickling.
Network [SPADEBGenerator] was created. Total number of parameters: 109.5 million. To see the architecture, do print(network).
Network [InpaintGenerator] was created. Total number of parameters: 16.1 million. To see the architecture, do print(network).
Traceback (most recent call last):
File "inference.py", line 25, in <module>
model = Pix2PixModel(opt)
File "/content/MichiGAN/models/pix2pix_model.py", line 32, in __init__
self.netG, self.netD, self.netE, self.netIG, self.netFE, self.netB, self.netD2, self.netSIG = self.initialize_networks(opt)
File "/content/MichiGAN/models/pix2pix_model.py", line 185, in initialize_networks
netG = util.load_network(netG, 'G', opt.which_epoch, opt)
File "/content/MichiGAN/util/util.py", line 228, in load_network
weights = torch.load(save_path)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 764, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
So I manually downloaded the .pth files from Google Drive, but again, I got another error at the same point.
Network [InpaintGenerator] was created. Total number of parameters: 16.1 million. To see the architecture, do print(network).
Traceback (most recent call last):
File "inference.py", line 25, in <module>
model = Pix2PixModel(opt)
File "/content/MichiGAN/models/pix2pix_model.py", line 32, in __init__
self.netG, self.netD, self.netE, self.netIG, self.netFE, self.netB, self.netD2, self.netSIG = self.initialize_networks(opt)
File "/content/MichiGAN/models/pix2pix_model.py", line 185, in initialize_networks
netG = util.load_network(netG, 'G', opt.which_epoch, opt)
File "/content/MichiGAN/util/util.py", line 228, in load_network
weights = torch.load(save_path)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 781, in _legacy_load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 4248659 more bytes. The file might be corrupted.
How do I load the pretrained models?
Thanks for your work! I want to train MichiGAN on my own dataset, but I cannot find the code for extracting the orientation map. Could you please provide the relevant code?
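The repository does not ship the orientation-map extraction step (the paper describes it in terms of oriented filters). As a stand-in while the official code is unavailable, here is a minimal NumPy sketch that estimates a dense orientation map from the image's structure tensor, a standard alternative to a Gabor filter bank; it illustrates the idea and is not the authors' pipeline:

```python
import numpy as np

def box_blur(a, k=5):
    """Local mean via shift-and-add (stands in for a Gaussian window)."""
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a, dtype=np.float64)
    h, w = a.shape
    for dy in range(k):
        for dx in range(k):
            out += ap[dy:dy + h, dx:dx + w]
    return out / (k * k)

def orientation_map(gray):
    """Per-pixel dominant gradient orientation in radians, in (-pi/2, pi/2].
    Hair strand direction is perpendicular to this gradient angle."""
    gray = gray.astype(np.float64)
    iy, ix = np.gradient(gray)      # gradients along rows / columns
    jxx = box_blur(ix * ix)         # smoothed structure-tensor terms
    jyy = box_blur(iy * iy)
    jxy = box_blur(ix * iy)
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# Horizontal stripes vary only along y, so the gradient points vertically
# and the estimated angle is pi/2:
img = np.sin(0.5 * np.arange(64))[:, None] * np.ones((64, 64))
theta = orientation_map(img)
print(round(abs(theta[32, 32]), 2))  # → 1.57
```

For training data one would run this on the hair region only (masked by the hair label) and quantize or encode the angles the way the dataloader expects.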
Thank you for the insightful work!
Could you give us more details about the training process of the background inpainter?
I know that you cannot release the code due to copyright issues,
but I am still curious which training dataset and hole masks you used to train the network.
Or, if you just used a pretrained network, could you tell us which company provided it?
Very cool project!
I ran inference.py, and I want to know how to change the hair shape.
Thank you very much.
Thanks for your work and for sharing it.
Sorry to disturb you, but I have a question.
I only want to change the hair style and obtain the hair mask, and I would like the output to be a PNG image, or to include a semi-transparent mask that preserves fine hair detail, so that I can composite a new image with a new background. Is this possible, and how can I do it?
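For the compositing step this question describes, a soft (semi-transparent) hair matte can be applied with standard alpha blending. A minimal NumPy sketch; the array names are made up for illustration, and MichiGAN itself does not expose this as an output option:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha-blend a foreground over a new background using a soft matte.
    alpha is a float matte in [0, 1]; values strictly between 0 and 1
    keep semi-transparent hair detail along the boundary."""
    alpha = alpha[..., None]              # broadcast matte over RGB channels
    return alpha * foreground + (1.0 - alpha) * background

fg = np.full((2, 2, 3), 200.0)            # stand-in hair rendering
bg = np.full((2, 2, 3), 50.0)             # stand-in new background
matte = np.array([[1.0, 0.5],
                  [0.5, 0.0]])            # soft hair mask
print(composite(fg, bg, matte)[0, 1])     # 0.5 blend → [125. 125. 125.]
```

Saving the matte as the alpha channel of an RGBA PNG (e.g. with Pillow) gives the semi-transparent output the question asks for.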
Hello. Thank you for your amazing work.
What model did you use to create the label images (hair masks) in FFHQ?
I am trying to test cal_orientation.py.
Hi,
Are you still sharing the pre-trained model? It shows that the model has been moved to "https://mailustceducn-my.sharepoint.com/personal/tzt_mail_ustc_edu_cn/_layouts/15/Authenticate.aspx?Source=%2Fpersonal%2Ftzt%5Fmail%5Fustc%5Fedu%5Fcn%2F%5Flayouts%2F15%2Fdownload%2Easpx%3FUniqueId%3Dc540b02a%252D3155%252D4180%252Da820%252Dd47d1e42345c"
However, when I tried to download it there, I got the following error: "Connecting to mailustceducn-my.sharepoint.com (mailustceducn-my.sharepoint.com)|13.107.136.9|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2022-02-07 14:27:03 ERROR 403: Forbidden."
Could you please provide some guidance? Thank you.
git clone https://github.com/tzt101/MichiGAN.git
cd MichiGAN/
Hello,
I am running your script demo.py. I have downloaded the checkpoints and the dataset from OneDrive. In addition to the 3 demo images that are included in the demo package, I have picked 7 other frames from the validation split of your dataset. More specifically, I have included identities 56000.jpg, 56001.jpg, …, 56006.jpg.
Similar to Figure 1 in the paper, I am able to reproduce the results. The following is the image of identity 67172 in the validation set. I am using the UI to remove the lower portion of the hair mask. As can be seen, the model fills in the background information properly and renders the image with the new hair style.
However, when I try to do similar operations on other identities, I am unable to get good results.
In the following examples, I have tried 2 variations. First, I have used different people for tag and ref. Second, I have used the same person for both tag and ref. In both examples, I have simply removed small portions of the hair at the bottom. From the results, it seems that the model is simply copy-pasting the hair from the tag image and not filling in the correct background.
Next, I have tried using different hair masks across identities. As with the previous results, the synthesized images do not seem to transfer the hair style correctly: when the hair masks differ, the model simply copy-pastes the target hair mask.
Note:
Question: Am I missing something here? It would be great if the author could comment on the above results.
Thank you!
Nice work! I want to train the orientation inpainting and stroke inpainting models myself; what should I do?
Another question: I see there is a blend network in the project. I specified the parameter '--use_blender' and used the style, content, RGB, background, and confidence losses to train the network, but the loss values are NaN. Have you trained the blend network successfully, and could you give me any suggestions? What is the difference between using the blend network and not using it?
Thanks in advance.
Hi, I would like to use my own pictures, and I am wondering how to obtain the "val_labels". Thank you!