
s2ml-generators's Introduction

  .s5SSSs.  .s5SSSs.  .s5ssSs.  .s5SSSs.  .s s.  s.  .s    s.  .s5SSSs.  .s5SSSs.  .s5SSSs.  .s5SSSs.  .s5 s.  
        SS.       SS.    SS SS.       SS.    SS. SS.       SS.       SS.       SS.       SS.       SS.     SS. 
  sS    `:; sS    S%S sS SS S%S sS    `:; sS S%S S%S sS    S%S sS    `:; sS    S%S sS    `:; sS    `:; ssS SSS 
  SS        SS    S%S SS :; S%S SS        SS S%S S%S SS    S%S SS        SS    S%S SS        SS        SSS SSS 
  `:;;;;.   SS    S%S SS    S%S SSSs.     SS S%S S%S SSSs. S%S SSSs.     SS .sS;:' SSSs.     `:;;;;.    SSSSS  
        ;;. SS    S%S SS    S%S SS        SS S%S S%S SS    S%S SS        SS    ;,  SS              ;;.   SSS   
        `:; SS    `:; SS    `:; SS        SS `:; `:; SS    `:; SS        SS    `:; SS              `:;   `:;   
  .,;   ;,. SS    ;,. SS    ;,. SS    ;,. SS ;,. ;,. SS    ;,. SS    ;,. SS    ;,. SS    ;,. .,;   ;,.   ;,.   
  `:;;;;;:' `:;;;;;:' :;    ;:' `:;;;;;:' `:;;:'`::' :;    ;:' `:;;;;;:' `:    ;:' `:;;;;;:' `:;;;;;:'   ;:'   
                                                                                                               

The majority of my public work can be found at Somewhere Systems


s2ml-generators's People

Contributors

jubiss, somewheresy


s2ml-generators's Issues

facehq not working

Hi, everything is working, but the facehq model won't run for some reason. The error message is:


ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>()
     77 print('Using seed:', seed)
     78 
---> 79 model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
     80 perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
     81 

12 frames
/usr/lib/python3.7/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)

ModuleNotFoundError: No module named 'taming.modules.misc'

any fix for this?
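A pattern often reported for notebooks of this vintage (an assumption, not a confirmed fix for this repo) is that newer checkouts of taming-transformers no longer ship the taming.modules.misc path the notebook imports. A quick way to check which layout is installed before the load cell runs:

```python
import importlib.util

def has_misc_module() -> bool:
    """Check whether the legacy taming.modules.misc path is importable."""
    # Guard the parent first: find_spec on a dotted path raises if the
    # top-level package is absent entirely.
    if importlib.util.find_spec("taming") is None:
        return False
    try:
        return importlib.util.find_spec("taming.modules.misc") is not None
    except ModuleNotFoundError:
        return False

print(has_misc_module())
```

If this prints False while taming is installed, reinstalling taming-transformers from an older commit (the exact commit the notebook expects would need to be checked against the repo) typically restores the expected layout.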

SSL error on downloading model

Hi Justin, I'm getting an SSL error after running the VQGAN+CLIP module. I checked the URL and it appears that forcing http results in a download, so maybe pytorch.org let their SSL cert slide? I tried to figure out if I could change model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device) to spit out an http URL instead of https, but I don't really know enough Python to help. Part of the error message:

Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth"
...
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1091)
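Until the certificate issue is fixed upstream, one blunt stopgap (a sketch only, and a deliberate security trade-off) is to swap in an unverified SSL context before the download cell runs, since the torchvision model downloads go through urllib:

```python
import ssl

# WARNING: this disables HTTPS certificate verification for all urllib-based
# downloads in the current process. Tolerable in a throwaway Colab session
# when the only failure is an expired certificate; do not use it anywhere
# security-sensitive.
ssl._create_default_https_context = ssl._create_unverified_context
```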

Error when trying to run Download pre-trained models

Output:

Executing using VQGAN+CLIP method
Using device: cuda:0
Using text prompt: ['Happy flowers in a sunlit field']
Using image prompts: ['/content/test.jpg']
Using seed: 5476245293527641887
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-41-541d444115c2> in <module>()
     77 print('Using seed:', seed)
     78 
---> 79 model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
     80 perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
     81 

NameError: name 'load_vqgan_model' is not defined

name 'torch' is not defined

When trying to run this, it fails because torch is not defined, and as someone who knows little to nothing about code, I do not know how to fix it. Many thanks for any help.

torch.cuda.empty_cache()
with torch.no_grad():
    torch.cuda.empty_cache()
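The NameError almost always means the setup cell that does `import torch` never ran in the current session. A minimal, self-contained version of the snippet above that imports torch itself and degrades gracefully when PyTorch is absent:

```python
# Import torch explicitly instead of relying on an earlier notebook cell.
try:
    import torch
except ImportError:
    torch = None  # PyTorch not installed in this environment

if torch is not None and torch.cuda.is_available():
    # Release cached GPU memory; gradients are not needed for this.
    torch.cuda.empty_cache()
    with torch.no_grad():
        torch.cuda.empty_cache()
```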

'RuntimeError: CUDA out of memory' on P100

Hi there,

I'm using Colab Pro to do some ML experiments and often fail to initialise the RN50x4 CLIP model. Sometimes I even have trouble getting the RN101 model to load up. RN50x16 has never worked in my experience. The notebook is running on a P100 GPU and mentions that "x4 and x16 models for CLIP may not work reliably on lower-memory machines".

I'm just wondering if I need an even more capable GPU (in terms of VRAM) or if there is some problem with the code? I'm not an expert with Tensorflow/ML so apologies if there's a simple solution to this.
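Whether the RN50x4/RN50x16 backbones fit is mostly a VRAM question, so a useful first data point is reading the card's capacity directly. A small sketch (it reports total memory only, which is enough for a sanity check):

```python
def vram_report() -> str:
    """Report the first CUDA device's total memory, or why it can't."""
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    props = torch.cuda.get_device_properties(0)
    return f"{props.name}: {props.total_memory / 2**30:.1f} GiB"

print(vram_report())
```

A P100 has roughly 16 GiB; the larger CLIP backbones plus VQGAN activations at bigger canvas sizes can exceed that, so the usual fallbacks are smaller output dimensions or the RN50/ViT-B models rather than a code change.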

2 VQGAN models don't load

Justin,
Thanks so much for developing this version of the VQGAN art generator. I thought I'd send you a note about a persistent problem I'm having: I can't load the wikiart_16384 and coco VQGAN models from the Colab notebook.
Should I run this on Linux for better reliability?
Thanks again!

Color problem in ESRGAN output

I thought ESRGAN was simply inverting the colors of the original, but I took ESRGAN's image results into a graphics program and that isn't the case: hue is shifted in parts of the image and maintained in others.
Examples attached. The original is in blue and green; ESRGAN's result has a red background, but the foreground elements are somehow maintained. I upscale images from my own Drive folder, but ESRGAN still changes the colors even if you change the target directory to '../ESRGAN/LR/' to use the given example images.
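One plausible explanation (an assumption about the scripts involved, not a confirmed diagnosis) matches the symptom: OpenCV, which ESRGAN's reference scripts use for image I/O, orders channels BGR, while PIL-based code assumes RGB. Swapping blue and red while leaving green untouched would turn a blue background red and leave green/gray regions alone. The fix is a channel reversal at the read or write boundary:

```python
import numpy as np

def bgr_to_rgb(img: np.ndarray) -> np.ndarray:
    # Reverse the channel axis; the same call converts in either direction.
    return img[..., ::-1]

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 255              # saturated blue in BGR order
print(bgr_to_rgb(img)[0, 0])   # [  0   0 255] -> the same blue in RGB order
```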



Can't download 512 unconditional diffusion model

Hello,
It seems the-eye.eu is unreachable, so I cannot download the 512x512 unconditional diffusion model, and it is not provided in the OpenAI repo (there is only the 256x256 one).

Any idea where I could find the model?

Thanks a lot

ModuleNotFoundError: No module named 'tensorflow.python.keras.engine.keras_tensor'

I just started seeing this error when running this notebook on Colab recently, during the "Load libraries and definitions" block. Here's a screenshot...

[Screenshot attached: Screen Shot 2021-11-13 at 10.14.54 AM]

I've run hundreds of cycles of this notebook in the past, but within the last week or two, it started failing every time.

I'm not an experienced python dev, so I'm a bit out of my element, but I can help troubleshoot from my side if that helps!
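A common cause of this class of breakage (an assumption here, since the screenshot isn't legible in this scrape) is that Colab silently upgraded TensorFlow, and the private `tensorflow.python.keras.engine.keras_tensor` path the notebook imports moved between releases. A tiny version gate shows the idea, with 2.6 as a hypothetical known-good version; the practical fix would be pinning in a setup cell, e.g. `pip install "tensorflow==2.6.*"`:

```python
def needs_pin(installed: str, known_good: str = "2.6") -> bool:
    """True if the installed major.minor is newer than the known-good one."""
    def key(v: str):
        # Compare only major.minor, ignoring patch releases.
        return tuple(int(p) for p in v.split(".")[:2])
    return key(installed) > key(known_good)

print(needs_pin("2.7.0"))  # True  -> newer than known-good, consider pinning
print(needs_pin("2.6.2"))  # False
```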

Disconnect

I am trying to generate an image with Colab's cloud processors, but it keeps disconnecting after 2 to 3 hours. I have tried using an autoclicker, but it does not work. Does anyone know how to keep the session processing longer?

Diffusion model fails to load with smaller image size setting

Currently when adjusting the image size parameter on the guided diffusion model in the s2ML notebook, it will fail with the following error:

RuntimeError: Error(s) in loading state_dict for UNetModel:
	Missing key(s) in state_dict: "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.qkv.weight", "input_blocks.7.1.qkv.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.qkv.weight", "input_blocks.8.1.qkv.bias", "input_blocks.8.1.proj_out.weight", "input_blocks.8.1.proj_out.bias", "input_blocks.10.1.norm.weight", "input_blocks.10.1.norm.bias", "input_blocks.10.1.qkv.weight", "input_blocks.10.1.qkv.bias", "input_blocks.10.1.proj_out.weight", "input_blocks.10.1.proj_out.bias", "input_blocks.11.1.norm.weight", "input_blocks.11.1.norm.bias", "input_blocks.11.1.qkv.weight", "input_blocks.11.1.qkv.bias", "input_blocks.11.1.proj_out.weight", "input_blocks.11.1.proj_out.bias", "input_blocks.13.0.skip_connection.weight", "input_blocks.13.0.skip_connection.bias". 
	Unexpected key(s) in state_dict: "input_blocks.15.0.in_layers.0.weight", 
... [omitted for brevity]
"input_blocks.17.0.out_layers.3.weight", "input_blocks.17.0.out_layers....
	size mismatch for input_blocks.0.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 3, 3, 3]).
	size mismatch for input_blocks.0.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for input_blocks.1.0.in_layers.0.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for input_blocks.1.0.in_layers.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for input_blocks.1.0.in_layers.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
... [omitted for brevity]

I would be happy to take a stab at this if you have any ideas, but I'm not entirely sure this request even makes sense, as I'm still learning how the diffusion model works. Is it possible to have it generate smaller images (and ultimately reduce the memory footprint of the model)?

If you have other ideas for reducing VRAM usage, I would be interested in hearing those as well / discussing further!
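For intuition on the error itself: `load_state_dict` requires every key and tensor shape in the rebuilt model to match the checkpoint exactly, and the released 512x512 checkpoint bakes in a particular base width and attention layout, so shrinking the image size parameter rebuilds a different architecture. A toy illustration with plain dicts (the shapes echo the first mismatch in the traceback above):

```python
# Conv weight shapes as (out, in, kH, kW): the checkpoint was saved at one
# base width, the rebuilt model uses another, so loading must fail.
checkpoint = {"input_blocks.0.0.weight": (128, 3, 3, 3)}
model      = {"input_blocks.0.0.weight": (256, 3, 3, 3)}

mismatches = {k: (checkpoint[k], model.get(k))
              for k in checkpoint if checkpoint[k] != model.get(k)}
print(mismatches)  # the key above, paired with both conflicting shapes
```

So smaller outputs generally mean loading the matching smaller checkpoint (e.g. the released 256x256 model) rather than resizing the 512 one; the mismatch, not a bug in the notebook, is the likely reason the setting fails.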
