
michigan's Introduction

Hello, I'm tzt101 👋

michigan's People

Contributors

pleaseconnectwifi, tzt101

michigan's Issues

Input shape & number of GPUs

Hello there, I'm trying to re-train MichiGAN on my custom dataset. I have a few questions:

1. Input shape:

  • Original Image: (x, x, 3)
  • Label: (x, x, 3)
  • Orient: (x, x, 3)
    Where x is crop_size, am I right?

2. Number of GPUs:
I used two RTX GPUs and ran into this error:
RuntimeError: CUDA error: device-side assert triggered
Any idea why? Will the code run smoothly with only ONE GPU?

Thank you.
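
A device-side assert on a custom dataset often comes from inputs that don't match the expected layout, and the multi-GPU stack trace tends to hide the real error. Before blaming the GPU count, it may help to sanity-check the data. Below is a minimal, hypothetical sketch (the paths and crop_size value are assumptions, not from the repo):

import numpy as np
from PIL import Image

crop_size = 512  # assumed value of --crop_size

def check_sample(image_path, label_path, orient_path):
    # Verify each input is crop_size x crop_size and print its dtype and
    # value range; out-of-range label values are a classic cause of
    # "CUDA error: device-side assert triggered".
    for name, path in [("image", image_path), ("label", label_path), ("orient", orient_path)]:
        arr = np.array(Image.open(path))
        if arr.shape[:2] != (crop_size, crop_size):
            raise ValueError(f"{name} {path}: expected {crop_size}x{crop_size}, got {arr.shape[:2]}")
        print(name, arr.shape, arr.dtype, arr.min(), arr.max())

check_sample("images/0.jpg", "labels/0.png", "orients/0.png")  # hypothetical paths

Running on a single GPU (or CPU) with CUDA_LAUNCH_BLOCKING=1 usually surfaces a much clearer error message than the multi-GPU run.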

Problem with running demo

Traceback (most recent call last):
File "demo.py", line 521, in
opt = DemoOptions().parse()
File "/home/student/michigan_test/MichiGAN/options/base_options.py", line 235, in parse
torch.cuda.set_device(opt.gpu_ids[0])
File "/home/student/anaconda3/envs/michigan_env/lib/python3.7/site-packages/torch/cuda/init.py", line 311, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal

I tried CUDA_LAUNCH_BLOCKING=1 but it still doesn't work.
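
"invalid device ordinal" means the index passed to torch.cuda.set_device() (via --gpu_ids) refers to a GPU the machine doesn't have; CUDA_LAUNCH_BLOCKING won't change that. A quick check, as a sketch:

import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())  # valid device ids are 0 .. count-1

If this prints a count of 1, running the demo with --gpu_ids 0 should avoid the error.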

The generated hair has low resolution

Hi, first of all, thank you for your work!
I found that the generated hair region has lower resolution than the target image. The target image and the reference image are both 512×512. What can I do to solve this problem?
(To protect the privacy of the photographer, I added mosaics to the face.)
[Images: inpaint_real2, inpaint_11]

Unable to run it on CPU

I am unable to run it on CPU. It gives the following error when running demo.py:

./checkpoints\MichiGAN\SInpaintingModel_gen.pth
THCudaCheck FAIL file=..\aten\src\THC\THCGeneral.cpp line=51 error=38 : no CUDA-capable device is detected
Traceback (most recent call last):
File "demo.py", line 329, in edit
orient_stroke = cal_stroke_orient.stroke_to_orient(mask_stroke)
File "C:\Users\dmishra\MichiGAN\ui_util\cal_orient_stroke.py", line 143, in stroke_to_orient
stroke_mask_tensor = torch.unsqueeze(stroke_mask_tensor, 0).cuda()
File "C:\Users\dmishra\Anaconda3\envs\michigan\lib\site-packages\torch\cuda_init_.py", line 162, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at ..\aten\src\THC\THCGeneral.cpp:51
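
The demo moves tensors to the GPU unconditionally (e.g. the .cuda() call in ui_util/cal_orient_stroke.py shown above), so it cannot run on a CPU-only machine as-is. A hedged sketch of how such call sites could be adapted, not a patch from the authors:

import torch

# Pick the device once, then move tensors with .to(device) instead of .cuda(),
# so the same code runs on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

stroke_mask_tensor = torch.zeros(512, 512)  # placeholder for the real mask tensor
stroke_mask_tensor = torch.unsqueeze(stroke_mask_tensor, 0).to(device)

Note that inference on CPU will be much slower, and every .cuda() call in the code path would need the same treatment.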

Cannot load pretrained models

Thanks for publishing this great work!

I tried inference.py following the procedure described in the README, but I got an error at unpickling.

Network [SPADEBGenerator] was created. Total number of parameters: 109.5 million. To see the architecture, do print(network).
Network [InpaintGenerator] was created. Total number of parameters: 16.1 million. To see the architecture, do print(network).
Traceback (most recent call last):
  File "inference.py", line 25, in <module>
    model = Pix2PixModel(opt)
  File "/content/MichiGAN/models/pix2pix_model.py", line 32, in __init__
    self.netG, self.netD, self.netE, self.netIG, self.netFE, self.netB, self.netD2, self.netSIG = self.initialize_networks(opt)
  File "/content/MichiGAN/models/pix2pix_model.py", line 185, in initialize_networks
    netG = util.load_network(netG, 'G', opt.which_epoch, opt)
  File "/content/MichiGAN/util/util.py", line 228, in load_network
    weights = torch.load(save_path)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

So I manually downloaded the .pth files from Google Drive, but again, I got another error at the same point.

Network [InpaintGenerator] was created. Total number of parameters: 16.1 million. To see the architecture, do print(network).
Traceback (most recent call last):
  File "inference.py", line 25, in <module>
    model = Pix2PixModel(opt)
  File "/content/MichiGAN/models/pix2pix_model.py", line 32, in __init__
    self.netG, self.netD, self.netE, self.netIG, self.netFE, self.netB, self.netD2, self.netSIG = self.initialize_networks(opt)
  File "/content/MichiGAN/models/pix2pix_model.py", line 185, in initialize_networks
    netG = util.load_network(netG, 'G', opt.which_epoch, opt)
  File "/content/MichiGAN/util/util.py", line 228, in load_network
    weights = torch.load(save_path)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 781, in _legacy_load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 4248659 more bytes. The file might be corrupted.

How do I load the pretrained models?

Versions

  • Colaboratory
  • Python 3.6.9
  • PyTorch 1.7.0
  • Cuda 10.1
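
Both failures point at the downloaded files rather than the code: invalid load key, '<' usually means an HTML page (for example, a Google Drive confirmation or error page) was saved in place of the checkpoint, and "unexpected EOF" indicates a truncated download. A quick diagnostic sketch (the checkpoint path is hypothetical):

import os

path = "./checkpoints/MichiGAN/latest_net_G.pth"  # hypothetical checkpoint path
print("size (bytes):", os.path.getsize(path))
with open(path, "rb") as f:
    print("first bytes:", f.read(8))

# b'<'          -> an HTML page was downloaded instead of the weights
# b'PK\x03\x04' -> zip-based checkpoint (torch.save on PyTorch >= 1.6)
# b'\x80\x02'   -> legacy pickle-based checkpoint

Re-downloading from a browser, or comparing file sizes against the originals on the share, should confirm whether the files are complete.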

How to generate orientation map

Thanks for your work! I want to train MichiGAN on my own dataset, but I cannot find the code for extracting the orientation map. Could you please provide the relevant code?
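
Dense hair orientation maps are commonly computed with a bank of oriented (Gabor) filters, taking the strongest-responding orientation at each pixel. As a rough illustration of that general technique (not the authors' implementation; the filter parameters are assumptions), a minimal OpenCV sketch:

import cv2
import numpy as np

def orientation_map(gray, num_orients=32, ksize=17, sigma=2.0, lambd=4.0):
    # Filter the image with num_orients oriented Gabor kernels and take the
    # argmax response per pixel as the discrete orientation label.
    responses = []
    for i in range(num_orients):
        theta = np.pi * i / num_orients  # orientations sampled over [0, pi)
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern))
    return np.argmax(np.stack(responses, axis=0), axis=0).astype(np.uint8)

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
orient = orientation_map(gray)  # in practice, restrict to the hair region via the hair mask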

How to train background inpainter?

Thank you for the insightful work!

Could you provide more details about the training process of the background inpainter?
I know that you cannot release the code due to copyright issues,
but I am still curious which training dataset and hole masks you used to train the network.
Or, if you just used a pretrained network, could you give us information about the company behind it?

How to discard the original background?

Thanks for your work and for sharing it.
Sorry to disturb you, but I have a question.
I only want to change the hair style and get the hair mask, and I hope the output can be a PNG image, or can include a semi-transparent mask with fine hair detail. Then I can use it to composite a new image with a new background. Is this possible, and how can I do it?
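
One common way to get there is to put the (soft) hair mask into the alpha channel of the generated image and export it as an RGBA PNG. A minimal sketch, not part of MichiGAN (all file paths are hypothetical):

import numpy as np
from PIL import Image

# Combine the generated RGB result with the hair mask as an alpha channel.
rgb = np.array(Image.open("result.png").convert("RGB"))
mask = np.array(Image.open("hair_mask.png").convert("L"))  # 0..255; soft edges keep hair detail

rgba = np.dstack([rgb, mask])
Image.fromarray(rgba, mode="RGBA").save("hair_only.png")

# Composite the transparent result over an arbitrary new background.
fg = Image.open("hair_only.png")
bg = Image.open("new_background.png").convert("RGBA").resize(fg.size)
Image.alpha_composite(bg, fg).save("composited.png")

How well this works depends on the mask: a binary segmentation mask gives hard hair boundaries, while a matting-style soft alpha preserves the semi-transparent strands.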

How to generate hair mask images?

Hello. Thank you for your amazing work.
What model did you use to create the label images (hair masks) for FFHQ?
I am trying to test cal_orientation.py.

Pre-trained model missing

Hi,

Are you still sharing the pre-trained model? It shows that the model has been moved to "https://mailustceducn-my.sharepoint.com/personal/tzt_mail_ustc_edu_cn/_layouts/15/Authenticate.aspx?Source=%2Fpersonal%2Ftzt%5Fmail%5Fustc%5Fedu%5Fcn%2F%5Flayouts%2F15%2Fdownload%2Easpx%3FUniqueId%3Dc540b02a%252D3155%252D4180%252Da820%252Dd47d1e42345c"
However, when I tried to download it from there, the following error appeared: "Connecting to mailustceducn-my.sharepoint.com (mailustceducn-my.sharepoint.com)|13.107.136.9|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2022-02-07 14:27:03 ERROR 403: Forbidden."
Could you please provide guidance? Thank you.

Hair shape is not changing

Hello,

I am running your script demo.py. I have downloaded the checkpoints and the dataset from One Drive. In addition to the 3 demo images that are included in the demo package, I have picked 7 other frames from the validation split of your dataset. More specifically, I have included identities 56000.jpg, 56001.jpg, ......, 56006.jpg.

Similar to Figure 1 in the paper, I am able to reproduce the results. The following is the image of identity 67172 in the validation set. I am using the UI to remove the lower portion of the hair mask. As can be seen, the model is able to fill in the background information properly and render the image with the new hair style.

[Screenshot: 2020-08-11 at 11:37 PM]

However, when I try to do similar operations on other identities, I am unable to get good results.

In the following examples, I have tried 2 variations. First, I have used different people for tag and ref. Second, I have used the same person for both tag and ref. In both examples, I have simply removed small portions of the hair at the bottom. From the results, it seems that the model is simply copy-pasting the hair from the tag image and not filling in the right background.

[Screenshot: 2020-08-11 at 7:51 PM]

[Screenshot: 2020-08-11 at 11:34 PM]

Next, I tried using different hair masks on the identities. As with the previous results, the synthesized images don't seem to transfer the hair style correctly. When the hair masks are different, the model simply copy-pastes the target hair mask.

[Screenshot: 2020-08-11 at 11:05 PM]

[Screenshot: 2020-08-11 at 11:12 PM]

Note:

  1. In all of the above experiments, the hair colour/appearance seems to transfer well. However, referring to the above examples, it can be seen that the hair shape is not changing as expected.
  2. No changes were made to any scripts in this repository. Results posted above are produced by simply running demo.py.

Question: Am I missing something here? It would be great if the author could comment on the above results.

Thank you!

How to train the orientation inpainting and stroke inpainting model?

Nice work! I want to train the orientation inpainting and stroke inpainting models by myself; what should I do?
Another question: I see there is a blend network in the project. I specify the parameter '--use_blender' and use the style, content, rgb, background, and confidence losses to train the network, but the losses are not normal; their values are NaN. Have you trained the blend network successfully, and could you give me any suggestions? What is the difference between using the blend network or not?
Thanks in advance.
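
NaN losses in a setup like this are usually easier to track down with PyTorch's anomaly detection plus a finiteness check on the loss. A general debugging sketch (not the authors' training code; `loss`, `model`, and `optimizer` are placeholder names):

import torch

torch.autograd.set_detect_anomaly(True)  # raises at the exact op that produces NaN/Inf

# Inside the training step, assuming `loss`, `model`, and `optimizer` exist:
# if not torch.isfinite(loss):
#     raise RuntimeError(f"non-finite loss: {loss.item()}")
# loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # common mitigation
# optimizer.step()

Gradient clipping and a lower learning rate are common mitigations once the offending loss term is identified.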
