
sd-webui-stablesr's Introduction

StableSR for Stable Diffusion WebUI

Licensed under S-Lab License 1.0

CC BY-NC-SA 4.0

English|中文

  • StableSR is a competitive super-resolution method originally proposed by Jianyi Wang et al.
  • This repository is a migration of the StableSR project to the Automatic1111 WebUI.

Relevant Links

Click to view high-quality official examples!

If you find this project useful, please give me & Jianyi Wang a star! ⭐


Important Update

  • 2023.07.01: We found that proper negative prompts can significantly enhance the details of StableSR.
    • We use CFG Scale=7 with the following negative prompts: 3d, cartoon, anime, sketches, (worst quality:2), (low quality:2)
    • Click comparison1 to see the significant power of negative prompts.
    • Positive prompts are less useful, but they can also help. You can try (masterpiece:2), (best quality:2), (realistic:2),(very clear:2)
    • With the above prompts, we are trying our best to approach the quality of the closed-source project GigaGAN (though our results are still worse than their demo). Click comparison2 to see our current capability on 128x128->1024x1024 upscaling.
  • 2023.06.30: We are happy to release a new SD 2.1 768 version of StableSR! (Thanks to Jianyi Wang)
    • It produces a similar amount of detail, but with significantly fewer artifacts and better color.
    • It supports a resolution of 768 * 768.
  • To enjoy the new model:
    • Use the SD 2.1 768 base model. It can be downloaded from HuggingFace
    • The corresponding SR module (~400MB): Official Resource, my Baidu Netdisk (extraction code: 8ju9)
    • You can now use a larger tile size in Tiled Diffusion (96 * 96, the same as its default settings), which can be slightly faster.
    • Keep everything else the same.
  • Jianyi Wang keeps training more powerful SR modules suitable for AIGC images. These models will be tuned on SD 2.1 768 or SDXL later.

Features

  1. High-fidelity detailed image upscaling:
    • Very detailed results while keeping the facial identity of your characters.
    • Suitable for most images (realistic or anime, photography or AIGC, SD 1.5 or Midjourney images...). Official Examples
  2. Less VRAM consumption
    • I removed the VRAM-expensive modules from the official implementation.
    • The remaining model is much smaller than the ControlNet Tile model and requires less VRAM.
    • When combined with Tiled Diffusion & VAE, you can do 4k image super-resolution with limited VRAM (e.g., < 12 GB).

    Please be aware that sdp attention may lead to OOM for unknown reasons. You may use xformers instead.

  3. Wavelet Color Fix
    • The official StableSR significantly changes the colors of the generated image. The problem is even more prominent when upscaling in tiles.
    • I implemented a powerful post-processing technique that effectively matches the color of the upscaled image to the original. See Wavelet Color Fix Example.

Usage

1. Installation

⚪ Method 1: Official Market

  • Open Automatic1111 WebUI -> Click Tab "Extensions" -> Click Tab "Available" -> Find "StableSR" -> Click "Install"

⚪ Method 2: URL Install

  • Open Automatic1111 WebUI -> Click Tab "Extensions" -> Click Tab "Install from URL" -> Enter this repository's URL (https://github.com/pkuliyi2015/sd-webui-stablesr) -> Click "Install"

2. Download the main components

We currently offer two versions. They produce a similar amount of detail, but the 768 version has fewer artifacts.

🆕 SD2.1 768 Version

  • You MUST use the Stable Diffusion V2.1 768 EMA checkpoint (~5.21GB) from StabilityAI

    • You can download it from HuggingFace
    • Put into stable-diffusion-webui/models/Stable-Diffusion/
  • Download the extracted StableSR module

    • Official Resource
    • Put the StableSR module (~400MB) into your stable-diffusion-webui/extensions/sd-webui-stablesr/models/

SD2.1 512 Version (Sharper, but more artifacts)

  • You MUST use the Stable Diffusion V2.1 512 EMA checkpoint (~5.21GB) from StabilityAI

    • You can download it from HuggingFace
    • Put into stable-diffusion-webui/models/Stable-Diffusion/
  • Download the extracted StableSR module

    • Official resources: HuggingFace (~1.2 GB). Note that this is a zip file containing both the StableSR module and the VQVAE.
    • My resources: <GoogleDrive> <Baidu Netdisk (extraction code: aguq)>
    • Put the StableSR module (~400MB) into your stable-diffusion-webui/extensions/sd-webui-stablesr/models/

Although we use an SD 2.1 checkpoint, you can still upscale ANY image (even one from SD 1.5 or an NSFW one). Your image won't be censored and the output quality won't be affected.
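Before launching the WebUI, you can quickly check that both files landed in the expected folders. Below is a minimal sketch, assuming the default WebUI root and the 512-version filenames mentioned in this guide (adjust both to your setup; the filenames are examples):

from pathlib import Path

# Adjust this to your local WebUI installation root.
WEBUI_ROOT = Path("stable-diffusion-webui")

# The base checkpoint goes into models/Stable-Diffusion/ (512-version filename shown).
base_ckpt = WEBUI_ROOT / "models" / "Stable-Diffusion" / "v2-1_512-ema-pruned.safetensors"

# The extracted StableSR module (~400MB) goes into the extension's models/ folder.
sr_module = (WEBUI_ROOT / "extensions" / "sd-webui-stablesr" / "models"
             / "stablesr_webui_sd-v2-1-512-ema-000117.ckpt")

for f in (base_ckpt, sr_module):
    print(f"[{'OK' if f.is_file() else 'MISSING'}] {f}")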

3. Optional components

  • Install Tiled Diffusion & VAE extension
    • The original StableSR easily runs out of memory on images larger than 512 px.
    • For better quality and less VRAM usage, we recommend Tiled Diffusion & VAE.
  • Use the Official VQGAN VAE

4. Extension Usage

  • At the top of the WebUI, select the v2-1_512-ema-pruned checkpoint you downloaded.
  • Switch to the img2img tab. Find the "Scripts" dropdown at the bottom of the page.
    • Select the StableSR script.
    • Click the refresh button and select the StableSR checkpoint you have downloaded.
    • Choose a scale factor.
  • Euler a sampler is recommended. CFG Scale=7, Steps >= 20.
    • While StableSR can work without any prompts, we recently found that negative prompts can significantly improve details. Example negative prompts: 3d, cartoon, anime, sketches, (worst quality:2), (low quality:2)
    • Click to see a comparison with/without positive/negative prompts: https://imgsli.com/MTg5MjM1
  • For output image sizes > 512, we recommend using Tiled Diffusion & VAE; otherwise, the image quality may not be ideal and the VRAM usage will be huge.
  • Here are the official Tiled Diffusion settings:
    • Method = Mixture of Diffusers
      • For StableSR 768 version, you can use Latent tile size = 96, Latent tile overlap = 48
      • For StableSR 512 version, you can use Latent tile size = 64, Latent tile overlap = 32
    • Set the Latent tile batch size as large as possible without running out of memory.
    • The Upscaler MUST be None (do not upscale here; StableSR does the upscaling).
  • The following figure shows the recommended settings for 24GB VRAM (summarized in the sketch after this list).
    • For a 6GB device, just change the Tiled Diffusion Latent tile batch size to 1, the Tiled VAE Encoder Tile Size to 1024, and the Decoder Tile Size to 128.
    • SDP attention optimization may lead to OOM. Please use xformers in that case.
    • You DON'T need to change other settings in Tiled Diffusion & Tiled VAE unless you have a very deep understanding of them. These params are almost optimal for StableSR.
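For quick reference, here are the recommendations above written out as a plain Python dictionary. The keys are descriptive labels only, not Tiled Diffusion's internal option names:

# Recommended Tiled Diffusion / Tiled VAE settings from this guide.
# Keys are descriptive labels, not the extension's actual option names.
tiled_diffusion = {
    "method": "Mixture of Diffusers",
    "latent_tile_size": 96,       # 64 for the StableSR 512 version
    "latent_tile_overlap": 48,    # 32 for the StableSR 512 version
    "latent_tile_batch_size": 1,  # raise as high as your VRAM allows; keep 1 on 6GB cards
    "upscaler": "None",           # never upscale here; StableSR handles the upscaling
}

tiled_vae_6gb = {
    "encoder_tile_size": 1024,    # Tiled VAE values for a 6GB device
    "decoder_tile_size": 128,
}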

5. Options Explained

  • What is "Pure Noise"?
    • Pure Noise refers to starting from a fully random noise tensor instead of your image. This is the default behavior in the StableSR paper.
    • When it is enabled, the script ignores your denoising strength and gives you much more detailed images, but it also changes the color & sharpness significantly.
    • When it is disabled, the script starts by adding some noise to your image. The result will not be fully detailed, even if you set denoising strength = 1 (but it may be aesthetically pleasing). See Comparison.
    • If you disable Pure Noise, we recommend denoising strength = 1.
  • What is "Color Fix"?
    • This is to mitigate the color shift problem from StableSR and the tiling process.
    • AdaIN simply matches the color statistics of the outcome image to the original. This is the official algorithm, but it is ineffective in many cases.
    • Wavelet decomposes the original and the outcome images into low- and high-frequency components, and then replaces the outcome image's low-frequency part (the colors) with the original image's. This is very effective against uneven color shifts. The algorithm comes from GIMP and Krita and takes several seconds per image. (A minimal sketch of the idea follows this list.)
    • When color fix is enabled, the original image will also show up in your preview window, but it will NOT be saved automatically.
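Below is a minimal sketch of the wavelet idea described above, using a simple blur as the low-pass filter. It only illustrates the principle and is not the extension's exact implementation:

import torch
import torch.nn.functional as F

def low_pass(img: torch.Tensor, radius: int = 5) -> torch.Tensor:
    # Crude low-pass filter via average pooling; it stands in for the Gaussian
    # blur used by a real wavelet decomposition.
    kernel = 2 * radius + 1
    return F.avg_pool2d(img, kernel, stride=1, padding=radius, count_include_pad=False)

def wavelet_color_fix(result: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    # Keep the high-frequency detail of `result` (the upscaled image) but take
    # the low-frequency colors from `source` (the original image).
    # Both tensors: (1, 3, H, W), RGB in [0, 1], already resized to the same size.
    high_result = result - low_pass(result)   # fine detail produced by StableSR
    low_source = low_pass(source)             # colors / large-scale tones of the original
    return (high_result + low_source).clamp(0.0, 1.0)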

6. Important Notice

Why are my results different from the official examples?

  • It is not your fault, nor ours.
    • This extension uses the same UNet model weights as the official StableSR if installed correctly.
    • If you install the optional VQVAE, the whole set of model weights will be the same as the official model with fusion weight = 0.
  • However, your results will not be as good as the official ones, because:
    • Sampler Difference:
      • The official repo does 100 or 200 steps of legacy DDPM sampling with a custom timestep scheduler, and samples without negative prompts.
      • However, WebUI doesn't offer such a sampler, and it must sample with negative prompts. This is the main difference.
    • VQVAE Decoder Difference:
      • The official VQVAE Decoder takes some Encoder features as input.
      • However, in practice, I found these features are astonishingly huge for large images. (>10G for 4k images even in float16!)
      • Hence, I removed the CFW component from the VAE Decoder. As this leads to inferior detail fidelity, I will try to add it back later as an option.

License

This project is licensed under:

CC BY-NC-SA 4.0

Disclaimer

  • All code in this extension is for research purposes only.
  • The commercial use of the code and checkpoint is strictly prohibited.

Important Notice for Outcome Images

  • Please note that the CC BY-NC-SA 4.0 license in the NVIDIA SPADE module also prohibits the commercial use of outcome images.
  • Jianyi Wang may change the SPADE module to a commercial-friendly one but he is busy.
  • If you wish to speed up his process for commercial purposes, please contact him through email: [email protected]

Acknowledgments

I would like to thank Jianyi Wang et al. for the original StableSR method.

sd-webui-stablesr's People

Contributors

filexor, pkuliyi2015


sd-webui-stablesr's Issues

Hasn't the KeyError: '64' error been fixed yet?

File "G:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "G:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "G:\stable-diffusion-webui\modules\img2img.py", line 176, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "G:\stable-diffusion-webui\modules\scripts.py", line 441, in run
processed = script.run(p, *script_args)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
result: Processed = processing.process_images(p)
File "G:\stable-diffusion-webui\modules\processing.py", line 610, in process_images
res = process_images_inner(p)
File "G:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "G:\stable-diffusion-webui\modules\processing.py", line 728, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
return func()
File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 137, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "G:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\utils\utils.py", line 179, in wrapper
return fn(*args, **kwargs)
File "G:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\mixtureofdiffusers.py", line 125, in apply_model_hijack
x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile)
File "G:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "G:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 97, in unet_forward
return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs)
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward
x = layer(x, emb)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
return checkpoint(
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 141, in
resblock._forward = lambda x, timesteps, resblock=resblock, spade=self.input_blocks[i]: dual_resblock_forward(resblock, x, timesteps, spade, get_struct_cond)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 80, in dual_resblock_forward
h = spade(h, get_struct_cond())
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 34, in forward
return checkpoint(
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "G:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "G:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "G:\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 39, in _forward
segmap = segmap_dic[str(x_dic.size(-1))]
KeyError: '64'

I'm on the latest version of the SD WebUI.
It's clearly a problem with the script.

RuntimeError: mat1 and mat2 must have the same dtype

changing setting sd_model_checkpoint to v2-1_768-ema-pruned.safetensors [dcd690123c]: RuntimeError
Traceback (most recent call last):
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\shared.py", line 597, in set
self.data_labels[key].onchange()
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\call_queue.py", line 15, in f
res = func(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\webui.py", line 225, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_models.py", line 539, in reload_model_weights
checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_models_config.py", line 106, in find_checkpoint_config
return guess_model_config_from_state_dict(state_dict, info.filename)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_models_config.py", line 81, in guess_model_config_from_state_dict
elif is_using_v_parameterization_for_sd2(sd):
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\modules\sd_models_config.py", line 61, in is_using_v_parameterization_for_sd2
out = (unet(x_test, torch.asarray([999], device=device), context=test_cond) - x_test).mean().item()
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 274, in _forward
x = self.ff(self.norm3(x)) + x
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 76, in forward
return self.net(x)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\extensions-builtin\Lora\lora.py", line 400, in lora_Linear_forward
return torch.nn.Linear_forward_before_lora(self, input)
File "F:\AI\sd-webui-aki\sd-webui-aki-v4.1\python\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype

Is there anything wrong with my settings? Thanks

ComfyUI ?

I think it would be much easier to use StableSR in ComfyUI than inside Automatic1111, and I wondered whether anyone had tried to implement it?

Error fixing color in A1111 v1.6.0

Since the 1.6 update of A1111, color correction no longer works; this error occurs:

[StableSR] Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 2474, 3722] to have 3 channels, but got 4 channels instead

Error fixing color with default method

After updating WebUI to version 1.6.0, I can't use the ColorFix function in StableSR at the bottom. Does anyone have a solution? The error message is: "[StableSR] Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 4098, 3074] to have 3 channels, but got 4 channels instead." However, when I roll back to version 1.5.1, it works fine again. It's really frustrating.

detail of training data

Since DIV8K images are huge and slow down training, did you crop any of them (DIV2K/8K) into patches for training? We think this might impact the SR results because it changes the number of samples from the different training sets.

TypeError: UnetHook.hook.<locals>.forward() takes from 2 to 4 positional arguments but 5 were given

I got the following error on Windows. Perhaps something is not installed? I did the setup according to the instructions.
Windows 11, RTX 4080, Python 3.10.6

[StableSR] Target image size: 512x512
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(x3xuqg6s20lxxzo)', 0, 'strawberry', '', [], <PIL.Image.Image image mode=RGBA size=256x256 at 0x1DD0D703C10>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 1, 1.5, 0.15, -1.0, -1.0, 0, 0, 0, False, 0, 256, 256, 1, 0, 0, 32, 0, '', '', '', [], 11, False, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 64, 32, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, 50, True, False, 'C:\\stable-diffusion-webui\\extensions\\sd-webui-riffusion\\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, 'stablesr_webui_sd-v2-1-512-ema-000117.ckpt', 2, True, 'Wavelet', False, 'linear (weight sum)', '10', 'C:\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-prompt-travel\\img\\ref_ctrlnet', 'Lanczos', 2, 0, 0, 'mp4', 10.0, 0, '', True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, 'linear', 'lerp', 'token', 'random', '30', 'fixed', 1, '8', None, 'Lanczos', 2, 0, 0, 'mp4', 10.0, 0, '', True, False, False, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 5, True, 0, False, 8, 0, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\img2img.py", line 180, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui\modules\scripts.py", line 408, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
    result: Processed = processing.process_images(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 680, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
    samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 97, in unet_forward
    return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs)
  File "C:\stable-diffusion-webui/extensions/sd-webui-controlnet\scripts\hook.py", line 193, in forward2
    return forward(*args, **kwargs)
TypeError: UnetHook.hook.<locals>.forward() takes from 2 to 4 positional arguments but 5 were given


Euler is so much better

So much better that you might want to say "very recommended" in the readme! I did some tests against the DPM & DDIM samplers; they're not worth it. Have a look at this and zoom in on the sky. I also tried it with an old wedding photo, and it did some weird stuff. Euler is the way.

StableSR + ControlNet tile not working

I am trying to use StableSR + ControlNet tile but I get this error

[StableSR] Target image size: 2560x1600 Loading model from cache: control_v11f1e_sd15_tile [a371b31b] Loading preprocessor: tile_resample preprocessor resolution = 64 Calling preprocessor tile_resample outside of cache. 0%| | 0/20 [00:00<?, ?it/s] Error completing request Arguments: ('task(kr91ndjnzcnd7da)', 0, 'grainy, (masterpiece:1.0), (extremely detailed:1.0), illustration, (jndstdbbg style) \n\n', 'easynegative, blurry, smooth, weird colors, jpeg artifacts, seams, black, dark, animals, creature, human, building, tree, fog, mist, glow, ((dark area)), DIRTY, MESSY, overdetailed, tiny white dot, blurry, pixel, noise, intricate detail, ugly, bad art, heavy detailed, (low quality, worst quality:1.4)', [], <PIL.Image.Image image mode=RGBA size=512x320 at 0x7FA72479D8E0>, None, None, None, None, None, None, 20, 1, 4, 0, 1, False, False, 1, 1, 10, 1.5, 0.35, 3795394654.0, -1.0, 0, 0, 0, False, 1, 512, 2048, 1, 0, 0, 32, 0, '', '', '', [], 11, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <controlnet.py.UiControlNetUnit object at 0x7fa725cdcd30>, <controlnet.py.UiControlNetUnit object at 0x7fa724988ac0>, <controlnet.py.UiControlNetUnit object at 0x7fa724797ee0>, <controlnet.py.UiControlNetUnit object at 0x7fa724960d90>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, True, -1.0, True, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'None', None, 1, 'None', False, False, 'PreviousFrame', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, 50, 'stablesr_webui_sd-v2-1-512-ema-000117.ckpt', 5, True, 'Wavelet', False, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {} Traceback (most recent call last): File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/call_queue.py", line 37, in f res = func(*args, **kwargs) File 
"/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/img2img.py", line 176, in img2img processed = modules.scripts.scripts_img2img.run(p, *args) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/scripts.py", line 441, in run processed = script.run(p, *script_args) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 248, in run result: Processed = processing.process_images(p) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/processing.py", line 611, in process_images res = process_images_inner(p) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/processing.py", line 729, in process_images_inner samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 291, in process_sample return process.sample_before_CN_hack(*args, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 223, in sample_custom samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 257, in launch_sampling return func() File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in <lambda> samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "/mnt/csegalin/bg_anime_gen/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 128, in sample_euler denoised = model(x, sigma_hat * s_in, **extra_args) File "/mnt/csegalin/bg_anime_gen/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 137, in forward x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in)) File "/mnt/csegalin/bg_anime_gen/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda> setattr(resolved_obj, 
func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model x_recon = self.model(x_noisy, t, **cond) File "/mnt/csegalin/bg_anime_gen/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward out = self.diffusion_model(x, t, context=cc) File "/mnt/csegalin/bg_anime_gen/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 97, in unet_forward return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs) File "/mnt/csegalin/bg_anime_gen/code/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 587, in forward_webui return forward(*args, **kwargs) TypeError: forward() takes from 2 to 4 positional arguments but 5 were given

I am using all the settings as instructed in img2img. Is StableSR not compatible with ControlNet?

KeyError: '64' when using old version of SD-webui

After I updated SD-webui everything works fine.
I was previously using the week-13 version of the WebUI; with Tiled Diffusion enabled it reported KeyError: '64', and the number in the error changed when I changed the latent tile size. After updating to the week-18 version it works. Test device: 3060 Ti 8G.

Cuda error (?)

I get this error

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f res = func(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 180, in img2img processed = modules.scripts.scripts_img2img.run(p, *args) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 408, in run processed = script.run(p, *script_args) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 231, in run result: Processed = processing.process_images(p) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 526, in process_images res = process_images_inner(p) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 680, in process_images_inner samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 209, in sample_custom samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling return func() File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in <lambda> samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral denoised = model(x, sigmas[i] * s_in, **extra_args) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in)) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable 
Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 243, in wrapper return fn(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\mixtureofdiffusers.py", line 131, in apply_model_hijack x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda> setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model x_recon = self.model(x_noisy, t, **cond) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward out = self.diffusion_model(x, t, context=cc) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 91, in unet_forward self.struct_cond = self.struct_cond_model(self.latent_image, timesteps.to(x.device)[:self.latent_image.shape[0]]) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\struct_cond.py", line 273, in forward emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward input = module(input) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, 
**kwargs) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 361, in lora_Linear_forward return torch.nn.Linear_forward_before_lora(self, input) File "C:\Users\HANY\Documents\Portable Apps\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias)

I tried lowering the values in Tiled Diffusion and Tiled VAE too.

Help! The base model won't load.

The official StabilityAI Stable Diffusion V2.1 512 EMA checkpoint (~5.21GB) fails to load.
My machine has a 4060 8G.
Is there any solution?

CUDA Out of Memory

Even with the lowest tile size applied, OOM happens when I try to upscale a 512px image.
Tiled Diffusion and Tiled VAE are applied. I think this is a dead end for 4GB VRAM users? Or is it just me?
It does work with the StableSR script disabled, though.

GPU: GTX 1650
"--lowvram" in the launch arguments

KeyError: '64' ?

My WebUI is now the week-18 version, but running it still reports KeyError: '64'. 3060 Ti 8G; the installation and all settings are correct.
Encoder Tile Size: 1536, Decoder Tile Size: 96;
Latent tile width: 64, Latent tile height: 64, Latent tile overlap: 32, Latent tile batch size: 1, Upscaler: None.

Does not save finished image into folder

With the WebUI 1.6.0 update, the StableSR script no longer saves the image into the chosen output directory. I'm not sure where it goes, but I have to right-click and copy it or download it for now.

Apply multi loras into regional prompt and upscale with new extension Lora Mark

Hi pkuliyi2015, someone named lifeisboringsoprogramming just made a Lora Mark extension which helps us apply multiple LoRAs to any region we want, and it can also use hires fix to upscale. I tried to use it with regional prompts and upscaling, but so far it doesn't work.

We have to choose between losing regional prompts or multiple LoRAs. I really love regional prompts for drawing good backgrounds that incorporate characters, but I also want to improve regional prompting further with the new Lora Mark extension. Can you support us?

Thanks in advance.

Here is the link of the new extension Lora Mark

https://github.com/lifeisboringsoprogramming/sd-webui-lora-masks

Colorfix not working

When I am using Tiled Diffusion + VAE + StableSR, I only get one output with weird coloration, and it is not saved. Therefore I assume color fix somehow does not work for me.

I also get this error: "Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 4098, 4098] to have 3 channels, but got 4 channels instead"


RAM limit for high size upscale

When trying to upscale a 3072x6144 image by 2x, the RAM usage increases sharply and then the process stops with this error:

"RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 42467328 bytes."


I am running a PC with 32GB of RAM

Runtime Error: The size of tensor a (64) must match the size of tensor b (128) at non-singleton dimension 2

[StableSR] Target image size: 1024x2048
[Tiled Diffusion] ControlNet found, support is enabled.
MixtureOfDiffusers Sampling: : 0it [00:00, ?it/s]Mixture of Diffusers hooked into 'Euler a' sampler, Tile size: 64x64, Tile batches: 21, Batch size: 1. (ext: ContrlNet)
[Tiled VAE]: input_size: torch.Size([1, 3, 2048, 1024]), tile_size: 1024, padding: 32
[Tiled VAE]: split to 2x1 = 2 tiles. Optimal tile size 960x992, original tile size 1024x1024
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 512 x 1024 image
[Tiled VAE]: Executing Encoder Task Queue: 100%|█████████████████████████████████████| 182/182 [00:02<00:00, 69.48it/s]
[Tiled VAE]: Done in 18.380s, max VRAM alloc 3158.800 MB███████████▏ | 92/182 [00:01<00:01, 69.33it/s]
0%| | 0/20 [00:01<?, ?it/s]
Error completing request | 0/20 [00:00<?, ?it/s]
Arguments: ('task(76hs6hmz13knsse)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=512x1024 at 0x23DEA0BD690>, None, None, None, None, None, None, 20, 0, 4, 0, 0, False, False, 1, 1, 2, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 1024, 512, 1, 0, 0, 32, 0, '', '', '', [], 15, '\n

\n
Estimated VRAM usage: 6675.32 MB / 8192 MB (81.49%)
\n
(4891 MB system + 1622.11 MB used)
\n
\n ', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_conf': 30, 'ad_dilate_erode': 32, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_full_res': True, 'ad_inpaint_full_res_padding': 0, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_conf': 30, 'ad_dilate_erode': 32, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_full_res': True, 'ad_inpaint_full_res_padding': 0, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1}, False, 'keyword prompt', 'keyword1, keyword2', 'None', 'textual inversion first', 'None', '0.7', 'None', True, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 64, 32, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 1024, 128, True, True, True, False, False, '', 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <controlnet.py.UiControlNetUnit object at 0x0000023DEA0BDF90>, <controlnet.py.UiControlNetUnit object at 0x0000023DEA0BC9D0>, <controlnet.py.UiControlNetUnit object at 0x0000023DEA0BC6A0>, <controlnet.py.UiControlNetUnit object at 0x0000023DEA5372E0>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '', [], '', True, False, False, '
    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, None, False, None, False, None, False, None, False, 50, 'stablesr_webui_sd-v2-1-512-ema-000117.ckpt', 2, True, 'Wavelet', False, '

Will upscale the image depending on the selected target size type

', 512, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "D:\SD\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\SD\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\modules\img2img.py", line 180, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\SD\stable-diffusion-webui\modules\scripts.py", line 408, in run
processed = script.run(p, *script_args)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
result: Processed = processing.process_images(p)
File "D:\SD\stable-diffusion-webui\modules\processing.py", line 526, in process_images
res = process_images_inner(p)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\SD\stable-diffusion-webui\modules\processing.py", line 680, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 377, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 243, in wrapper
return fn(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\mixtureofdiffusers.py", line 126, in apply_model_hijack
x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile)
File "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\SD\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 97, in unet_forward
return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward
x = layer(x, emb)
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
return checkpoint(
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 141, in
resblock._forward = lambda x, timesteps, resblock=resblock, spade=self.input_blocks[i]: dual_resblock_forward(resblock, x, timesteps, spade, get_struct_cond)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 80, in dual_resblock_forward
h = spade(h, get_struct_cond())
File "D:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 34, in forward
return checkpoint(
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\SD\stable-diffusion-webui\extensions\sd-webui-stablesr\srmodule\spade.py", line 52, in _forward
out *= (1 + self.mlp_gamma(actv).repeat_interleave(repeat_factor, dim=0))
RuntimeError: The size of tensor a (64) must match the size of tensor b (128) at non-singleton dimension 2
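
For reference, the failing line in spade.py multiplies the UNet feature tile by a modulation map built from the structure condition; the error above simply means their spatial sizes disagree. A shape-only sketch (the sizes 64 and 128 come from the error message; the channel count and everything else are made up for illustration):

import torch

# Illustrative shapes only: a UNet feature tile at a 64x64 latent size,
# and a SPADE modulation map computed from a 128x128 structure condition.
h = torch.randn(1, 320, 64, 64)
gamma = torch.randn(1, 320, 128, 128)

repeat_factor = h.shape[0] // gamma.shape[0]  # the batch-wise repeat used in spade.py

# Same pattern as spade.py line 52: the multiplication fails because the
# spatial dimensions of h and gamma differ, not because of the batch repeat.
out = h * (1 + gamma.repeat_interleave(repeat_factor, dim=0))
# -> raises a size-mismatch RuntimeError like the one above

In other words, the structure condition was captured at a different latent resolution than the tiles being denoised, which usually points at the tile-size / scale settings (or an extra resize step) not lining up.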

Hello,

I had a similar issue back when I first started using Stable Diffusion. IIRC it was an img2img dimension mismatch that I fixed by matching the sliders to the dimensions of the input image.

I have no idea what the issue could be here, however. This was my first trial run with StableSR, and I've matched all the relevant settings to those provided in the guide.

Euler a 20 steps
W 512 x H 1024 (matches input image)
CFG Scale 2

Latent Tile Dimensions are both 64
Overlap 32
Batch Size 1
Scale Factor 2
No Upscaler

TiledVAE
Encoder Tile Size 1024
Decoder Tile Size 128

StableSR
Scale Factor 2

Any way to make the workflow / guesswork easier?

Big fan of your work, especially regional prompt control. But this is getting a bit confusing as you have:

  • A repo named multidiffusion upscaler. Inside of it, you have:
    • Tiled VAE, which can be used to enhance many different operations because of where it hooks into the pipeline (e.g., hires fix or another upscaler)
    • Regional Prompt Control, which is just badass, but I feel like nobody knows about it
  • This repo has a new upscaler technique which looks super cool in my testing so far
    • It has some requirements which make it a bit of work to produce optimal results, at least with my 12 GB VRAM

Then elsewhere in the WebUI, we have:

  • txt2img, with a clear path to produce output at a given size (e.g., 512x512), though depending on which built-in or added extensions and scripts are selected, it can produce bigger images
  • img2img, which is in a similar situation, but worse because of the resize controls at the top, which some extensions use and others ignore
  • The single-purpose Extras tab, which it seems everyone has forgotten about

And third, some extensions have tackled this type of confusion by creating a new tab:

  • Depth Maps
  • One of the Segment/Inpaint Anything extensions (can't find it now)

Anyway, my point here is that it would be cool if you made it easier to use this technique to upscale images. I'm thinking either by using the Extras interface, or by adding a new tab. Either way you could simplify the steps and hide what's not so important. You could do things like ask the user how much VRAM they have (or just do what Vlad's doing in the system info extension), and then apply settings that should work well out of the box. Don't even ask about Tiled VAE -- just enable it! (A rough sketch of that idea follows below.)

Then you could also check that the right 2.1 model is installed, and select it when the user clicks upscale.
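
Purely to illustrate that idea (nothing below exists in the extension; the thresholds and tile sizes are invented placeholders), a rough sketch of deriving defaults from the detected VRAM:

import torch

def suggest_defaults() -> dict:
    """Hypothetical helper: pick Tiled Diffusion / Tiled VAE defaults from the
    VRAM of the first CUDA device. All numbers are placeholders, not tested values."""
    vram_gb = 0.0
    if torch.cuda.is_available():
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3

    if vram_gb >= 16:
        return {"latent_tile": 96, "vae_encoder_tile": 3072, "vae_decoder_tile": 256}
    if vram_gb >= 10:
        return {"latent_tile": 96, "vae_encoder_tile": 1536, "vae_decoder_tile": 128}
    # Low-VRAM (or CPU) fallback: smaller tiles, and always keep Tiled VAE enabled.
    return {"latent_tile": 64, "vae_encoder_tile": 1024, "vae_decoder_tile": 64}

print(suggest_defaults())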

Lastly, while I have you: would you consider breaking regional prompt control out into its own extension? I feel like it gets overshadowed by the project name it's "stuck in".

Thanks again. Great work you've done here. :)

"SR Model" dropdown does not have a component ID

I'm the author of the Config Presets extension. It uses component IDs to target other extensions' components so that they can be saved as part of a preset. Currently the SR Model dropdown does not have an ID associated with it (elem_id="..."). Can you give it one, like "StableSR-model"?
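
For context, on the Gradio side this is a one-argument change; the call below is a simplified stand-in for however the extension actually builds its dropdown, using the "StableSR-model" ID suggested above:

import gradio as gr

# Without an elem_id, other extensions (such as Config Presets) have no stable
# way to target this component in the rendered page.
sr_model = gr.Dropdown(
    label="SR Model",
    choices=["None"],          # placeholder; the extension fills this list itself
    value="None",
    elem_id="StableSR-model",  # the ID requested in this issue
)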

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT: Missing key(s) in state_dict: "time_embed.0.weight", "time_embed.0.bias", "time_embed.2.weight", "time_embed.2.bias", "input_blocks.0.0.weight", "input_blocks.0.0.bias", "input_blocks.1.0.in_layers.0.weight", "input_blocks.1.0.in_layers.0.bias", "input_blocks.1.0.in_layers.2.weight", "input_blocks.1.0.in_layers.2.bias", "input_blocks.1.0.emb_layers.1.weight", "input_blocks.1.0.emb_layers.1.bias", "input_blocks.1.0.out_layers.0.weight", "input_blocks.1.0.out_layers.0.bias", "input_blocks.1.0.out_layers.3.weight", "input_blocks.1.0.out_layers.3.bias", "input_blocks.1.1.norm.weight", "input_blocks.1.1.norm.bias", "input_blocks.1.1.qkv.weight", "input_blocks.1.1.qkv.bias", "input_blocks.1.1.proj_out.weight", "input_blocks.1.1.proj_out.bias", "input_blocks.2.0.in_layers.0.weight", "input_blocks.2.0.in_layers.0.bias", "input_blocks.2.0.in_layers.2.weight", "input_blocks.2.0.in_layers.2.bias", "input_blocks.2.0.emb_layers.1.weight", "input_blocks.2.0.emb_layers.1.bias", "input_blocks.2.0.out_layers.0.weight", "input_blocks.2.0.out_layers.0.bias", "input_blocks.2.0.out_layers.3.weight", "input_blocks.2.0.out_layers.3.bias", "input_blocks.2.1.norm.weight", "input_blocks.2.1.norm.bias", "input_blocks.2.1.qkv.weight", "input_blocks.2.1.qkv.bias", "input_blocks.2.1.proj_out.weight", "input_blocks.2.1.proj_out.bias", "input_blocks.3.0.op.weight", "input_blocks.3.0.op.bias", "input_blocks.4.0.in_layers.0.weight", "input_blocks.4.0.in_layers.0.bias", "input_blocks.4.0.in_layers.2.weight", "input_blocks.4.0.in_layers.2.bias", "input_blocks.4.0.emb_layers.1.weight", "input_blocks.4.0.emb_layers.1.bias", "input_blocks.4.0.out_layers.0.weight", "input_blocks.4.0.out_layers.0.bias", "input_blocks.4.0.out_layers.3.weight", "input_blocks.4.0.out_layers.3.bias", "input_blocks.4.1.norm.weight", "input_blocks.4.1.norm.bias", "input_blocks.4.1.qkv.weight", "input_blocks.4.1.qkv.bias", "input_blocks.4.1.proj_out.weight", "input_blocks.4.1.proj_out.bias", "input_blocks.5.0.in_layers.0.weight", "input_blocks.5.0.in_layers.0.bias", "input_blocks.5.0.in_layers.2.weight", "input_blocks.5.0.in_layers.2.bias", "input_blocks.5.0.emb_layers.1.weight", "input_blocks.5.0.emb_layers.1.bias", "input_blocks.5.0.out_layers.0.weight", "input_blocks.5.0.out_layers.0.bias", "input_blocks.5.0.out_layers.3.weight", "input_blocks.5.0.out_layers.3.bias", "input_blocks.5.1.norm.weight", "input_blocks.5.1.norm.bias", "input_blocks.5.1.qkv.weight", "input_blocks.5.1.qkv.bias", "input_blocks.5.1.proj_out.weight", "input_blocks.5.1.proj_out.bias", "input_blocks.6.0.op.weight", "input_blocks.6.0.op.bias", "input_blocks.7.0.in_layers.0.weight", "input_blocks.7.0.in_layers.0.bias", "input_blocks.7.0.in_layers.2.weight", "input_blocks.7.0.in_layers.2.bias", "input_blocks.7.0.emb_layers.1.weight", "input_blocks.7.0.emb_layers.1.bias", "input_blocks.7.0.out_layers.0.weight", "input_blocks.7.0.out_layers.0.bias", "input_blocks.7.0.out_layers.3.weight", "input_blocks.7.0.out_layers.3.bias", "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.qkv.weight", "input_blocks.7.1.qkv.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.0.in_layers.0.weight", "input_blocks.8.0.in_layers.0.bias", "input_blocks.8.0.in_layers.2.weight", "input_blocks.8.0.in_layers.2.bias", "input_blocks.8.0.emb_layers.1.weight", 
"input_blocks.8.0.emb_layers.1.bias", "input_blocks.8.0.out_layers.0.weight", "input_blocks.8.0.out_layers.0.bias", "input_blocks.8.0.out_layers.3.weight", "input_blocks.8.0.out_layers.3.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.qkv.weight", "input_blocks.8.1.qkv.bias", "input_blocks.8.1.proj_out.weight", "input_blocks.8.1.proj_out.bias", "input_blocks.9.0.op.weight", "input_blocks.9.0.op.bias", "input_blocks.10.0.in_layers.0.weight", "input_blocks.10.0.in_layers.0.bias", "input_blocks.10.0.in_layers.2.weight", "input_blocks.10.0.in_layers.2.bias", "input_blocks.10.0.emb_layers.1.weight", "input_blocks.10.0.emb_layers.1.bias", "input_blocks.10.0.out_layers.0.weight", "input_blocks.10.0.out_layers.0.bias", "input_blocks.10.0.out_layers.3.weight", "input_blocks.10.0.out_layers.3.bias", "input_blocks.11.0.in_layers.0.weight", "input_blocks.11.0.in_layers.0.bias", "input_blocks.11.0.in_layers.2.weight", "input_blocks.11.0.in_layers.2.bias", "input_blocks.11.0.emb_layers.1.weight", "input_blocks.11.0.emb_layers.1.bias", "input_blocks.11.0.out_layers.0.weight", "input_blocks.11.0.out_layers.0.bias", "input_blocks.11.0.out_layers.3.weight", "input_blocks.11.0.out_layers.3.bias", "middle_block.0.in_layers.0.weight", "middle_block.0.in_layers.0.bias", "middle_block.0.in_layers.2.weight", "middle_block.0.in_layers.2.bias", "middle_block.0.emb_layers.1.weight", "middle_block.0.emb_layers.1.bias", "middle_block.0.out_layers.0.weight", "middle_block.0.out_layers.0.bias", "middle_block.0.out_layers.3.weight", "middle_block.0.out_layers.3.bias", "middle_block.1.norm.weight", "middle_block.1.norm.bias", "middle_block.1.qkv.weight", "middle_block.1.qkv.bias", "middle_block.1.proj_out.weight", "middle_block.1.proj_out.bias", "middle_block.2.in_layers.0.weight", "middle_block.2.in_layers.0.bias", "middle_block.2.in_layers.2.weight", "middle_block.2.in_layers.2.bias", "middle_block.2.emb_layers.1.weight", "middle_block.2.emb_layers.1.bias", "middle_block.2.out_layers.0.weight", "middle_block.2.out_layers.0.bias", "middle_block.2.out_layers.3.weight", "middle_block.2.out_layers.3.bias", "fea_tran.0.in_layers.0.weight", "fea_tran.0.in_layers.0.bias", "fea_tran.0.in_layers.2.weight", "fea_tran.0.in_layers.2.bias", "fea_tran.0.emb_layers.1.weight", "fea_tran.0.emb_layers.1.bias", "fea_tran.0.out_layers.0.weight", "fea_tran.0.out_layers.0.bias", "fea_tran.0.out_layers.3.weight", "fea_tran.0.out_layers.3.bias", "fea_tran.1.in_layers.0.weight", "fea_tran.1.in_layers.0.bias", "fea_tran.1.in_layers.2.weight", "fea_tran.1.in_layers.2.bias", "fea_tran.1.emb_layers.1.weight", "fea_tran.1.emb_layers.1.bias", "fea_tran.1.out_layers.0.weight", "fea_tran.1.out_layers.0.bias", "fea_tran.1.out_layers.3.weight", "fea_tran.1.out_layers.3.bias", "fea_tran.2.in_layers.0.weight", "fea_tran.2.in_layers.0.bias", "fea_tran.2.in_layers.2.weight", "fea_tran.2.in_layers.2.bias", "fea_tran.2.emb_layers.1.weight", "fea_tran.2.emb_layers.1.bias", "fea_tran.2.out_layers.0.weight", "fea_tran.2.out_layers.0.bias", "fea_tran.2.out_layers.3.weight", "fea_tran.2.out_layers.3.bias", "fea_tran.2.skip_connection.weight", "fea_tran.2.skip_connection.bias", "fea_tran.3.in_layers.0.weight", "fea_tran.3.in_layers.0.bias", "fea_tran.3.in_layers.2.weight", "fea_tran.3.in_layers.2.bias", "fea_tran.3.emb_layers.1.weight", "fea_tran.3.emb_layers.1.bias", "fea_tran.3.out_layers.0.weight", "fea_tran.3.out_layers.0.bias", "fea_tran.3.out_layers.3.weight", "fea_tran.3.out_layers.3.bias", 
"fea_tran.3.skip_connection.weight", "fea_tran.3.skip_connection.bias".
Time taken: 17.57s | Torch active/reserved: 2515/2600 MiB, Sys VRAM: 4125/12288 MiB (33.57%)

Couldn't make it work through tiled VAE

Using MultiDiffusion in Tiled Diffusion:
Traceback (most recent call last): File "D:\ai\auto-ui\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "D:\ai\auto-ui\modules\call_queue.py", line 37, in f res = func(*args, **kwargs) File "D:\ai\auto-ui\modules\img2img.py", line 180, in img2img processed = modules.scripts.scripts_img2img.run(p, *args) File "D:\ai\auto-ui\modules\scripts.py", line 408, in run processed = script.run(p, *script_args) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 231, in run result: Processed = processing.process_images(p) File "D:\ai\auto-ui\modules\processing.py", line 526, in process_images res = process_images_inner(p) File "D:\ai\auto-ui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs) File "D:\ai\auto-ui\modules\processing.py", line 680, in process_images_inner samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 209, in sample_custom samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning) File "D:\ai\auto-ui\modules\sd_samplers_kdiffusion.py", line 411, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "D:\ai\auto-ui\modules\sd_samplers_kdiffusion.py", line 285, in launch_sampling return func() File "D:\ai\auto-ui\modules\sd_samplers_kdiffusion.py", line 411, in <lambda> samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ File "D:\ai\auto-ui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "D:\ai\auto-ui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral denoised = model(x, sigmas[i] * s_in, **extra_args) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\modules\sd_samplers_kdiffusion.py", line 169, in forward x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]}) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "D:\ai\auto-ui\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 243, in wrapper return fn(*args, **kwargs) File "D:\ai\auto-ui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 103, in kdiff_forward return self.sample_one_step(x_in, org_func, repeat_func, custom_func) File "D:\ai\auto-ui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 179, in sample_one_step x_tile_out = repeat_func(x_tile, bboxes) File "D:\ai\auto-ui\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\multidiffusion.py", line 97, in repeat_func x_tile_out = self.sampler_forward(x_tile, sigma_in_tile, cond=new_cond) File "D:\ai\auto-ui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File 
"D:\ai\auto-ui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "D:\ai\auto-ui\modules\sd_hijack_utils.py", line 17, in <lambda> setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\ai\auto-ui\modules\sd_hijack_utils.py", line 28, in __call__ return self.__orig_func(*args, **kwargs) File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model x_recon = self.model(x_noisy, t, **cond) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward out = self.diffusion_model(x, t, context=cc) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 92, in unet_forward return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs) File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward h = module(h, emb, context) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward x = layer(x, emb) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward return checkpoint( File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint return CheckpointFunction.apply(func, len(inputs), *args) File "D:\ai\auto-ui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward output_tensors = ctx.run_function(*ctx.input_tensors) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\srmodule\spade.py", line 139, in <lambda> resblock._forward = lambda x, timesteps, resblock=resblock, spade=self.input_blocks[i]: dual_resblock_forward(resblock, x, timesteps, spade, get_struct_cond) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\srmodule\spade.py", line 80, in dual_resblock_forward h = spade(h, get_struct_cond()) File "D:\ai\auto-ui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\srmodule\spade.py", line 34, in forward return checkpoint( File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint return CheckpointFunction.apply(func, len(inputs), *args) File "D:\ai\auto-ui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward output_tensors = ctx.run_function(*ctx.input_tensors) File 
"D:\ai\auto-ui\extensions\sd-webui-stablesr\srmodule\spade.py", line 39, in _forward segmap = segmap_dic[str(x_dic.size(-1))] KeyError: '96'

I used the default settings in Tiled Diffusion:
Latent tile width/height: 96
Scale to 2x: 1024x1536
Model used: stablesr_webui_sd-v2-1-512-ema-000117
Upscaler (Tiled Diffusion): None.

Following the Mixture of Diffusers instructions in the README, it returns:
File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint return CheckpointFunction.apply(func, len(inputs), *args) File "D:\ai\auto-ui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "D:\ai\auto-ui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward output_tensors = ctx.run_function(*ctx.input_tensors) File "D:\ai\auto-ui\extensions\sd-webui-stablesr\srmodule\spade.py", line 52, in _forward out *= (1 + self.mlp_gamma(actv).repeat_interleave(repeat_factor, dim=0)) RuntimeError: The size of tensor a (32) must match the size of tensor b (96) at non-singleton dimension 2

Auto1111 reports "Error: 'Model webui_768v_139.ckpt is not in the list! Please refresh your browser!'"

StableSR installed from github URL today (07/04/23).
Checkpoint webui_768v_139.ckpt is placed in auto1111 > extensions > sd-webui-stablesr > models
SD Checkpoint: SD-v2-1_768-nonema-pruned.safetensors
SD VAE: vqgan_cfw_00011_vae_only.ckpt
Auto1111 version: 1.3.2

Every launch of Auto1111 requires a manual refresh of the SR Model pulldown before webui_768v_139.ckpt appears. After selecting that checkpoint and attempting to Generate, Auto1111 reports "Error: 'Model webui_768v_139.ckpt is not in the list! Please refresh your browser!'"

Not working on Mac M1

loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<1x1024xf32>' and 'tensor<1024xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '

RuntimeError: The size of tensor a (1024) must match the size of tensor b (768) at non-singleton dimension 1

Hi, I got this error when upscaling a 725 x 878 image by 2x. The base model is v2-1_768-ema-pruned and the SR module is webui_768v_139.ckpt, with cfg=7, steps=20, sampler Euler a, and both Tiled Diffusion and Tiled VAE enabled (Tiled Diffusion updated to the latest version). What could be the cause?

[StableSR] Target image size: 1456x1760
[Tiled Diffusion] ControlNet found, support is enabled.
[Tiled Diffusion] StableSR found, support is enabled.
Mixture of Diffusers hooked into 'Euler a' sampler, Tile size: 64x64, Tile batches: 4, Batch size: 8. (ext: ContrlNet)
[Tiled VAE]: input_size: torch.Size([1, 3, 1760, 1456]), tile_size: 1536, padding: 32
[Tiled VAE]: split to 2x1 = 2 tiles. Optimal tile size 1408x864, original tile size 1536x1536
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 1270 x 1536 image
[Tiled VAE]: Done in 2.116s, max VRAM alloc 5216.13d MB
*** Error completing request
*** Arguments: ('task(4bdixwtd7onc1qz)', 0, '(large breats,nsfw,bra),(8k, RAW photo, masterpiece:1.2), solo, full body,(hair accessories,night), (light on face:1.2), , floating hair,blush,eyelashes,Beautiful and delicate eyes, photorealistic, super detail, high quality, best quality', 'advertisement,paintings,3d, cartoon, anime, sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,((monochrome)),((grayscale)),skin spots,NG_DeepNegative_V1_75T,pubic hair,glans', [], <PIL.Image.Image image mode=RGBA size=725x878 at 0x1CB6D357850>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.23, 4076662761.0, -1.0, 0, 0, 0, False, 0, 878, 725, 1, 2, 0, 32, 0, '', '', '', [], 11, False, {'ad_model': 'face_yolov8s.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, True, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 64, 32, 8, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 1536, 96, True, True, True, False, False, '', 0, False, 7, 98, 'Half Cosine Up', 3, 'Half Cosine Up', 3, 4, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001CB6D2B9FF0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001CB6D2BA7D0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001CB6D313640>, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nBODY:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nBODY0.5:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\nFACE0.2:1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0\nHAND:1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0\nCLOTHES:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nPOSE:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\nPALETTE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nKEEPCHAR:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\nKEEPBG:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nREDUCEFIT:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nACT: 1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, 'webui_768v_139.ckpt', 2, True, 'Wavelet', False, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "D:\01-01HH\novelai-webui-aki-v3A\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "D:\01-01HH\novelai-webui-aki-v3A\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\img2img.py", line 196, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\scripts.py", line 456, in run
processed = script.run(p, *script_args)
File "D:\01-01HH\novelai-webui-aki-v3A\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
result: Processed = processing.process_images(p)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\processing.py", line 620, in process_images
res = process_images_inner(p)
File "D:\01-01HH\novelai-webui-aki-v3A\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\processing.py", line 729, in process_images_inner
p.setup_conds()
File "D:\01-01HH\novelai-webui-aki-v3A\modules\processing.py", line 346, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\processing.py", line 338, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\prompt_parser.py", line 143, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "D:\01-01HH\novelai-webui-aki-v3A\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\01-01HH\novelai-webui-aki-v3A\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "D:\01-01HH\novelai-webui-aki-v3A\modules\sd_hijack_open_clip.py", line 28, in encode_with_transformers
z = self.wrapped.encode_with_transformer(tokens)
File "D:\01-01HH\novelai-webui-aki-v3A\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 220, in encode_with_transformer
x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
File "D:\01-01HH\novelai-webui-aki-v3A\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 232, in text_transformer_forward
x = r(x, attn_mask=attn_mask)
File "D:\01-01HH\novelai-webui-aki-v3A\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\01-01HH\novelai-webui-aki-v3A\py310\lib\site-packages\open_clip\transformer.py", line 154, in forward
x = x + self.ls_1(self.attention(self.ln_1(x), attn_mask=attn_mask))
File "D:\01-01HH\novelai-webui-aki-v3A\py310\lib\site-packages\open_clip\transformer.py", line 151, in attention
return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
File "D:\01-01HH\novelai-webui-aki-v3A\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\01-01HH\novelai-webui-aki-v3A\extensions\a1111-sd-webui-locon\scripts......\extensions-builtin/Lora\lora.py", line 425, in lora_MultiheadAttention_forward
lora_apply_weights(self)
File "D:\01-01HH\novelai-webui-aki-v3A\extensions\a1111-sd-webui-locon\scripts......\extensions-builtin/Lora\lora.py", line 347, in lora_apply_weights
self.in_proj_weight += updown_qkv
RuntimeError: The size of tensor a (1024) must match the size of tensor b (768) at non-singleton dimension 1
Note: the Python runtime threw an exception. Please check the troubleshooting page.


[StableSR] Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 3242, 2162] to have 3 channels, but got 4 channels instead

I also encountered this error and read the earlier reports of the same issue, but did not find a solution.
First, I have ruled out using the wrong model: I am indeed using the two 512 models mentioned in the README.
The error message is as follows:
[StableSR] Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 3242, 2162] to have 3 channels, but got 4 channels instead
Although this error message appears, it does not affect the output image.

Error when color fixing

[StableSR] Error fixing color with default method: Given groups=3, weight of size [3, 1, 3, 3], expected input[1, 4, 2562, 1442] to have 3 channels, but got 4 channels instead

I'm using 1.3.0-RC.
It's too late to dig into the cause right now; I'll add more details afterwards.
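
Both reports above show a 4-channel image reaching the color-fix step, and the argument dumps in this thread show img2img inputs loaded as RGBA PIL images. Assuming (not confirmed) that the alpha channel is the cause, flattening the input to RGB before upscaling avoids the mismatch:

from PIL import Image

# Flatten a possibly-RGBA input onto a white background before sending it to img2img,
# so the wavelet color fix only ever sees 3-channel images.
img = Image.open("input.png")
if img.mode == "RGBA":
    background = Image.new("RGB", img.size, (255, 255, 255))
    background.paste(img, mask=img.split()[-1])  # use the alpha channel as the paste mask
    img = background
else:
    img = img.convert("RGB")
img.save("input_rgb.png")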

Soft images

Using StableSR I get soft images with little detail, compared to Ultimate SD Upscale.

StableSR to the left, Ultimate SD Upscale to the right of the comparison image.

This is the prompt used

4K, Masterpiece, highres, absurdres,natural volumetric lighting and best shadows, smiling,nikon d850, 85mm F1.2mm lens
portrait of pretty woman posing for the camera

Negative prompt: ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((close up)),((b&w)), wierd colors, blurry, (((duplicate))), ((morbid)), ((mutilated)), [out of frame], (((extra fingers, mutated hands))), ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy,(((UGLY DEFORMED, NDS))),(((black and white picture))),(((cat ears))),(((wings))),(((cleavage,boobs))),young,loli,child

Steps: 20, Sampler: Euler a, CFG scale: 2, Seed: 2894591607, Size: 1840x3680, Model hash: ac34765554, Model: icbinpICantBelieveIts_v6, Denoising strength: 0.05, Clip skip: 2, ENSD: 31337, Version: v1.2.1, Tiled Diffusion: "{'Method': 'Mixture of Diffusers', 'Tile tile width': 64, 'Tile tile height': 64, 'Tile Overlap': 32, 'Tile batch size': 8, 'Keep input size': True}", Eta: 0.67

I notice that the higher the denoising strength, the softer the image.

REQ: add an "upscale to" option instead of only supporting an upscale factor.

When operating on a batch of images, having only an upscale factor is simply unusable; an "upscale to" setting is needed. It would also be great to support downscaling, since sometimes you only want to improve image quality without needing such a high resolution.
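
A minimal sketch of what an "upscale to" option boils down to, for anyone who wants to precompute the factor by hand in the meantime (this is not part of the extension; it just converts a target long side into the scale factor the current UI expects):

def factor_for_target(width: int, height: int, target_long_side: int) -> float:
    """Scale factor that maps the image's longer side to target_long_side.
    Values below 1.0 mean the image would be downscaled."""
    return target_long_side / max(width, height)

# The 725x878 image from the report above, scaled so its long side becomes 1756 px:
print(factor_for_target(725, 878, 1756))  # -> 2.0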

KeyError: '8'

Error completing request
Arguments: ('task(nfljhzf75zkstrx)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x7F74C161CC70>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 12, True, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 64, 32, 8, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 3072, 256, True, True, True, False, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7f74c1730b50>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7f74c17335e0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7f74c1731de0>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', None, None, False, None, None, False, None, None, False, 50, 'sd-v2-1-512-ema-000117.ckpt', 2, True, 'Wavelet', False, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 248, in run
result: Processed = processing.process_images(p)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 223, in sample_custom
samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
return func()
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 126, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/multidiffusion-upscaler-for-automatic1111/tile_utils/utils.py", line 243, in wrapper
return fn(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/multidiffusion-upscaler-for-automatic1111/tile_methods/mixtureofdiffusers.py", line 131, in apply_model_hijack
x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/scripts/stablesr.py", line 97, in unet_forward
return getattr(unet, FORWARD_CACHE_NAME)(x, timesteps, context, y, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 82, in forward
x = layer(x, emb)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
return checkpoint(
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/srmodule/spade.py", line 141, in
resblock._forward = lambda x, timesteps, resblock=resblock, spade=self.input_blocks[i]: dual_resblock_forward(resblock, x, timesteps, spade, get_struct_cond)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/srmodule/spade.py", line 80, in dual_resblock_forward
h = spade(h, get_struct_cond())
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/srmodule/spade.py", line 34, in forward
return checkpoint(
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/drive/MyDrive/stable-diffusion-webui-colab/stable-diffusion-webui/extensions/sd-webui-stablesr/srmodule/spade.py", line 39, in _forward
segmap = segmap_dic[str(x_dic.size(-1))]
KeyError: '8'

Req: add generation info

It would be cool to see StableSR's settings added to the PNG generation info. E.g., here's Tiled Diffusion adding its settings:

Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 654441681, Size: 1536x1536, Model hash: df955bdf6b, Model: sd21_v2-1_512-ema-pruned, Denoising strength: 1, Tiled Diffusion: "{'Method': 'Mixture of Diffusers', 'Tile tile width': 96, 'Tile tile height': 96, 'Tile Overlap': 48, 'Tile batch size': 1, 'Keep input size': True}"
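
For reference, Automatic1111 scripts usually do this through p.extra_generation_params; a hedged sketch of what recording StableSR's settings could look like inside the script's run() (the field and variable names are illustrative, not the extension's actual ones):

def add_stablesr_infotext(p, sr_model_name: str, scale_factor: float, color_fix: str) -> None:
    """Record StableSR settings in the generation info, the same way Tiled Diffusion does.
    `p` is the StableDiffusionProcessing object passed to the script's run()."""
    p.extra_generation_params["StableSR"] = str({
        "Model": sr_model_name,
        "Scale factor": scale_factor,
        "Color fix": color_fix,
    })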

Why does it happen?

StableSR throws an error when generating an image: 'Model is not in the list! Please refresh your browser!' Thanks.
