bryandlee / animegan2-pytorch
PyTorch implementation of AnimeGANv2
License: MIT License
Thanks for your open-source code.
Regarding the dataset for Face Portrait v1/v2: which face and animation datasets were used to train this model? I am struggling to find appropriate datasets and would appreciate it if you could let me know.
I'm interested in experimenting with your Paprika-style model, using the [GitHub] repo for converting images. Usually, people use this interface, which takes a long time.
I want to run the available pre-trained models in a Google Colab notebook
and experiment with the Additional Model Weights, but not only could I not get the Torch Hub usage to run according to the instructions, I also couldn't run the face2paint
option, due to the following error:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-13-35ffb2517470> in <module>()
----> 1 face2paint = torch.hub.load('bryandlee/animegan2-pytorch:main', 'face2paint', size=512, device="cpu")
2
3 img = Image.open(...).convert("RGB")
4 out = face2paint(model, img)
8 frames
/usr/lib/python3.7/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: rate limit exceeded
I can't figure out how to resolve this issue so the model runs in the Colab notebook. Please feel free to edit or comment on the Google Colab notebook I shared.
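As a possible workaround (an assumption on my part, not something documented in this repo): the 403 appears to come from torch.hub's unauthenticated request to api.github.com, so either bypassing that validation (torch >= 1.10) or loading from a local clone should avoid it:

```python
import torch

def load_face2paint(local_dir=None):
    """Load face2paint while avoiding GitHub's unauthenticated API rate limit."""
    if local_dir is not None:
        # After `git clone https://github.com/bryandlee/animegan2-pytorch`,
        # source="local" reads the clone and makes no api.github.com request.
        return torch.hub.load(local_dir, "face2paint", source="local",
                              size=512, device="cpu")
    # skip_validation=True (torch >= 1.10) skips the repo-validation API call
    # that raises "HTTP Error 403: rate limit exceeded".
    return torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint",
                          size=512, device="cpu", skip_validation=True)
```

If I recall correctly, torch.hub also honors a GITHUB_TOKEN environment variable for that API call, which raises the rate limit without code changes.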
No matter what image I try, I get the following error:
MessageError: RangeError: Maximum call stack size exceeded.
I'm running on Google Colab, and no matter what I do I keep getting this error.
Great work! I'd like to convert your pretrained model, face_paint_512_v2, to pytorch mobile.
from torch.utils.mobile_optimizer import optimize_for_mobile
Do you know the model specifics that will allow a conversion to the lite interpreter for mobile?
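I haven't converted this particular model, but the generic lite-interpreter path is trace, then optimize_for_mobile, then _save_for_lite_interpreter. A sketch with a tiny stand-in module (TinyGenerator is my placeholder, not the real network; substitute the loaded face_paint_512_v2 generator, and note that tracing will bake in the `upsample_align` flag its forward takes):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class TinyGenerator(torch.nn.Module):
    # Placeholder standing in for the real generator, just to show the flow.
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.tanh(self.conv(x))

model = TinyGenerator().eval()
example = torch.rand(1, 3, 64, 64)   # tracing fixes the input contract
traced = torch.jit.trace(model, example)
opt = optimize_for_mobile(traced)    # fuses/folds ops for mobile backends
opt._save_for_lite_interpreter("generator_mobile.ptl")
```

The resulting .ptl file is what the Android/iOS lite interpreter loads.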
Thank you for your stunning work. Do you plan to release the training code (including the training code for the Face Portrait models)?
Hello, I want to implement the training code for this project.
Did you train it the same way as the original AnimeGANv2?
Did you make any other additions in your training-code implementation?
The portrait is a PNG image with a transparent background and cannot be stylized. How should the code be modified to support it?
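One workaround (a sketch I'm suggesting, not repo code): since the model expects 3-channel RGB input, composite the transparent PNG onto an opaque background before stylization:

```python
from PIL import Image

def flatten_alpha(img: Image.Image, bg=(255, 255, 255)) -> Image.Image:
    """Composite a transparent image onto an opaque background, returning RGB."""
    if img.mode in ("RGBA", "LA") or (img.mode == "P" and "transparency" in img.info):
        rgba = img.convert("RGBA")
        background = Image.new("RGB", rgba.size, bg)
        background.paste(rgba, mask=rgba.split()[-1])  # alpha channel as paste mask
        return background
    return img.convert("RGB")
```

The stylized result can then be re-masked with the original alpha channel if transparency needs to be restored afterwards.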
I get this error on Arch Linux even though I installed the python-pytorch package.
The python-pytorch-cuda package
doesn't work either, since it's missing a file according to this error:
ImportError: libcupti.so.11.5: cannot open shared object file: No such file or directory
I think it would be a good idea to add a requirements.txt file to the project with all needed dependencies. The current ones aren't sufficient!
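Agreed. As a starting point, a minimal requirements.txt for the inference path might look like this (the package list is inferred from the scripts, and the version pins are my assumptions, not the project's):

```text
# requirements.txt (sketch; pins are illustrative)
torch>=1.7
numpy
Pillow
opencv-python   # test.py reads/writes images with cv2 (BGR <-> RGB)
```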
model loaded: ./weights/paprika.pt
Traceback (most recent call last):
File "test.py", line 92, in
test(args)
File "test.py", line 48, in test
out = net(image.to(device), args.upsample_align).cpu()
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/init3/Tools/animegan2-pytorch/model.py", line 106, in forward
out = self.block_e(out)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 12.74 GiB (GPU 0; 10.76 GiB total capacity; 1.19 GiB already allocated; 7.09 GiB free; 2.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
input:
samples/inputs/1.jpg
Hi,
I am trying to run the program on an Ubuntu 20.04 machine with a GTX 1650 Ti GPU, but I run into an out-of-memory issue.
Does this module have a low-memory option, e.g. a reduced-channel variant with smaller memory requirements?
Or is any other workaround possible?
python3 test.py --input_dir inputimg --output_dir outputimg --device cuda
model loaded: ./weights/paprika.pt
Traceback (most recent call last):
File "test.py", line 92, in
test(args)
File "test.py", line 48, in test
out = net(image.to(device), args.upsample_align).cpu()
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/anime/animagen-pytorch-mur/animegan2-pytorch/model.py", line 91, in forward
out = self.block_a(input)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "/home/murugan86/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 7.96 GiB (GPU 0; 3.82 GiB total capacity; 2.07 GiB already allocated; 549.38 MiB free; 2.09 GiB reserved in total by PyTorch)
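As far as I can tell there is no built-in low-memory flag; a common workaround (my suggestion, not a repo feature) is to cap the input resolution before inference, since the generator is fully convolutional and activation memory grows with image area. Keeping sides multiples of 32 matches the x32 resizing that test.py already applies:

```python
from PIL import Image

def capped_size(width: int, height: int, max_side: int = 1024):
    """Scale (width, height) so the longer side is <= max_side, snapped to /32."""
    scale = min(1.0, max_side / max(width, height))
    w, h = int(width * scale), int(height * scale)
    return max(32, w - w % 32), max(32, h - h % 32)

def shrink_for_inference(img: Image.Image, max_side: int = 1024) -> Image.Image:
    return img.resize(capped_size(*img.size, max_side), Image.LANCZOS)
```

A 7.96 GiB allocation suggests a very large input; with max_side=1024 the same image should fit on a 4 GB card, and halving again (max_side=512) trades detail for further memory savings.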
Because the GPU reported that there was not enough memory, I used the CPU option with paprika.pt.
But the generated pictures are somewhat fuzzy; they appear blurred.
How can I improve the picture sharpness?
Are the two weight files, face_paint_512_v1.pt and face_paint_512_v2.pt, trained in PyTorch, or trained in TensorFlow and converted to PyTorch models? Would it be possible to provide the weight files in TensorFlow format?
RuntimeError: CUDA out of memory. Tried to allocate 12.44 GiB
Hi! Great work! May I ask what GAN loss you are using? I'm trying to achieve a similar effect with WGAN, but the result is quite poor.
Also, you mentioned that you're using a face segmentation model to separate the background from the face. May I ask which model you are using?
I've done visualization work before using quadratic spline interpolation; it can smooth things out a bit, so you could give it a try.
Hello, your work is really great. I tried applying the filter to some full-body photos, but in some pictures parts of the result look dirty. I found another app that may use your work; its results look much better, and the pictures are 'clean'. Do you have any ideas for improving the results? I uploaded the results and the original picture I used.
Hello, when I use the [face_paint_512_v2.pt] model to convert a full-body photo the result is very ugly, but a photo containing only the face looks good. How can I combine the face-only and full-body results? Does the model need to be retrained?
Photo 1 was made with another app; I think it uses AnimeGAN2.
Photo 2 is a converted full-body photo; the face is ugly.
Photo 3 converts only the face; it's nice, and the face looks like photo 1.
Could you please tell me the sizes of the face dataset and the anime-face dataset? Thanks!
What is the loss function used by the "Face Portrait v2" model? The effect is amazing.
Great work! I'd like to convert your pretrained model, celeba_distill, to pytorch mobile.
from torch.utils.mobile_optimizer import optimize_for_mobile
Do you know the model specifics that will allow a conversion to the lite interpreter for mobile?
Hello,
Is this model still available for testing?
https://drive.google.com/file/d/10T6F3-_RFOCJn6lMb-6mRmcISuYWJXGc/view?usp=sharing
Best wishes!
My torch version is 1.2.0, and I have a problem with torch.load().
What version do you use?
RuntimeError: ./face_paint_512_v2.pt is a zip archive (did you mean to use torch.jit.load()?)
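For what it's worth, that error usually means the checkpoint was saved with the zip-based serialization that became the default in PyTorch 1.6, which torch 1.2's torch.load cannot read; torch.jit.load only applies to TorchScript archives, so upgrading torch is the fix. A small guard sketch (the helper is mine, not repo code):

```python
def can_load_zip_checkpoint(torch_version: str) -> bool:
    """zip-serialized checkpoints require torch >= 1.6 to load."""
    major, minor = (int(part) for part in torch_version.split(".")[:2])
    return (major, minor) >= (1, 6)
```

Checking `can_load_zip_checkpoint(torch.__version__)` before calling torch.load gives a clearer failure message than the RuntimeError above.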
I managed to convert a normal video to cartoon with Face Portrait v2 model. The result seems good to me. I, therefore, suggest adding a function/script to convert videos to their cartoon modes. And the video to image and image to video functions can be easily implemented using imageio or ffmpeg.
Could you let us know how to train another model like "Face Portrait v1"?
As far as I know, AnimeGAN2 is not made for facial style transfer, so I'd really like to know the detailed steps for training a facial model using AnimeGAN2.
Thanks very much!
Hi, I'm trying it on Colab, but my eyes were not recognized correctly; the result looks like this, and it's a little scary.
I hope you can improve it, thanks a lot.
This article explains how to use the model: https://mp.weixin.qq.com/s/9FRoRZNJQEFwfPSxu1kSPg
Judging from the results, though, the output images are not very good.
I stumbled upon this project and looked at the documentation, hoping to find out what it was. I expected that at the very top there would be something like, "animegan2-pytorch is a _____________ that does ______________."
Nothing. There's usage information and some weird sample images that don't say anything about what this project actually does. I suspect it has something to do with faces.
Seriously. The first thing someone should see when they visit this page is a sentence or two about what the project actually is and what it does.
Quick question:
Is it possible to train a new style, like that of the Netflix animated series "Arcane"? I really love their rendering of faces.
If it's possible, is it hard, and does it take a long time on an M1?
When running inference on mobile, memory on an Android device shoots straight up to 1.4 GB, and on slightly weaker machines the process gets killed by the system. Is there any possibility of shrinking the weights? Beginner here, please advise.
Is there any way to change the parameters (kernel width, stride, padding) of the NN blocks defined in model.py?
Hi Bryan, I wanted to make this work locally on my machine so I don't need to queue, but I kept getting an error when running python convert_weights.py:
Traceback (most recent call last):
File "/Users/jimmygunawan/animegan2-pytorch-main-2/convert_weights.py", line 7, in <module>
from AnimeGANv2.net import generator as tf_generator
File "/Users/jimmygunawan/animegan2-pytorch-main-2/AnimeGANv2/net/generator.py", line 1, in <module>
import tensorflow.contrib as tf_contrib
ModuleNotFoundError: No module named 'tensorflow.contrib'
I am not a programmer. So far I've tried installing TensorFlow via miniforge on this Mac; I think I also have conda and TensorFlow installed, but the issue above persists.
Any idea how to fix this? Thanks!
In the basic usage, you say:
Weight Conversion from the Original Repo (Requires TensorFlow 1.x)
However, TensorFlow 1.x can only be installed for Python 2,
while the code in test.py uses Python 3 syntax.
How did you manage to make it work?
What license is this released under?
Thanks for publishing a very useful model and sample code. We've been using your "Face Portrait v2" model in our experimental NFT project mypfp.io, an anime-style NFT avatar generator service. Since it was implemented in PyTorch, we were able to quickly develop an API using Flask.
We've put a sample video on mypfp.io for you to play with if you like, and we would be happy if you could add a collaboration-projects section to the README.md.
Thanks!
mypfp.io
NFT minted in the video
Is there a way to make the output more anime-like or cartoony for v2? It still somehow looks like a real image.
Hi again! I'm using one of your models in my bots.
Is something like "Cartoon style is based on animegan2-pytorch by Brian Lee." okay?
Rather than F.interpolate with optional corner alignment, you could try ResizeRight; and wherever a convolution's stride is above 1, you could add Antialiased CNNs' BlurPool after it and its activation function to make the network more shift-invariant, hopefully increasing quality and making results more consistent. It's thankfully painless to implement.
In test.py, you convert the source image from BGR to RGB.
But I did not understand why you convert between BGR and RGB again after the inference has finished.
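My understanding (an explanation, not repo documentation): cv2 loads and saves images with channels ordered BGR, while the network works on RGB tensors, so test.py flips channels on load and flips them back before cv2.imwrite so the saved colors are correct. A minimal sketch of the round trip (the function name is mine):

```python
import numpy as np

def flip_channels(img: np.ndarray) -> np.ndarray:
    # Reversing the last axis turns BGR into RGB and vice versa;
    # applying it twice is the identity, which is why the code converts back.
    return img[..., ::-1]
```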
I was wondering if it would be possible to reuse the data that is (probably) generated while tracking the head, in order to put the head back in its original place.
Use case:
After processing a video with animegan2, the head would stay in its own place, so the original head could be replaced with the animegan version easily.
Right now I can do it by tracking the original head and using the tracking data to put the processed head in place, but it is too jumpy...
Any suggestions for 2D-to-3D image reconstruction? I was looking at
this project by MS, which seems to be trained on actual faces; maybe it could work applied to the generated 2D anime picture...
Thanks