
Comments (9)

lenage avatar lenage commented on July 29, 2024 6

SD1.5

Error occurred when executing KSampler:

CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


File "/opt/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/opt/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/opt/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/opt/ComfyUI/nodes.py", line 1368, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/opt/ComfyUI/nodes.py", line 1338, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/opt/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/opt/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/opt/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 248, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "/opt/ComfyUI/comfy/sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 703, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 608, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/opt/ComfyUI/comfy/samplers.py", line 547, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 285, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 272, in forward
return self.apply_model(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 269, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 249, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "/opt/ComfyUI/comfy/samplers.py", line 223, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/opt/ComfyUI/comfy/model_base.py", line 96, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 849, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 43, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 632, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 253, in forward
return func(self, x, context, transformer_options)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 459, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 519, in _forward
n = self.attn1(n, context=context_attn1, value=value_attn1)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 239, in forward
x = optimized_attention(q, k, v, self.heads)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 326, in attention_xformers
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
return _memory_efficient_attention(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 341, in _memory_efficient_attention_forward
out, *_ = op.apply(inp, needs_gradient=False)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 458, in apply
out, softmax_lse, rng_state = cls.OPERATOR(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/_ops.py", line 755, in __call__
return self._op(*args, **(kwargs or {}))
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 106, in _flash_fwd
) = _C_flashattention.fwd(

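The failure surfaces inside the xformers flash-attention kernel, which the layerdiffuse attention-sharing path reaches through `optimized_attention`. If the xformers path is the trigger, one generic workaround is to fall back to PyTorch's built-in scaled-dot-product attention. The sketch below is illustrative only; the function name and the `[batch, tokens, heads * dim_head]` layout are assumptions, not the extension's or ComfyUI's actual code:

```python
import torch.nn.functional as F

def attention_with_fallback(q, k, v, heads):
    # q, k, v: [batch, tokens, heads * dim_head] (assumed layout)
    b, _, inner_dim = q.shape
    dim_head = inner_dim // heads
    # xformers expects [batch, tokens, heads, dim_head]
    q4, k4, v4 = (t.reshape(b, -1, heads, dim_head) for t in (q, k, v))
    try:
        import xformers.ops
        out = xformers.ops.memory_efficient_attention(q4, k4, v4, attn_bias=None)
    except (RuntimeError, ImportError):
        # PyTorch SDPA expects [batch, heads, tokens, dim_head]
        out = F.scaled_dot_product_attention(
            q4.transpose(1, 2), k4.transpose(1, 2), v4.transpose(1, 2)
        ).transpose(1, 2)
    return out.reshape(b, -1, heads * dim_head)
```

Launching ComfyUI with its PyTorch cross-attention option instead of xformers is the no-code way to test the same idea.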

rcfcu2000 avatar rcfcu2000 commented on July 29, 2024 1

The 4096 error at the sampler means the image size doesn't match.
Scale the source image to 512x512 first, then pass it to the next node.

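A minimal way to do that resize outside the graph, assuming the source image is a file on disk (filenames are placeholders); inside ComfyUI the same thing can be done with an image scale node placed before the layerdiffuse/KSampler inputs:

```python
from PIL import Image

# Resize the source image to 512x512 so its latent (64x64 after the 8x VAE
# downsample) matches the 512x512 empty latent fed to the KSampler.
img = Image.open("input.png").convert("RGB")
img = img.resize((512, 512), Image.LANCZOS)
img.save("input_512.png")
```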

simonlive avatar simonlive commented on July 29, 2024

SD1.5, Error occurred when executing KSampler:

'UNetModel' object has no attribute 'default_image_only_indicator'


nighting0le01 avatar nighting0le01 commented on July 29, 2024

Why are the generation times slower for SD1.5 than for SDXL? The SDXL checkpoint seems almost twice as fast (3s vs 6s). Am I missing something?


huchenlei avatar huchenlei commented on July 29, 2024

The SD1.5 models are all attention-sharing, one-step models. They are not directly comparable with the previous SDXL models.

Why are the generation times slower for SD1.5 than for SDXL? The SDXL checkpoint seems almost twice as fast (3s vs 6s). Am I missing something?

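One hedged intuition for the timing gap (not stated in this thread): if the attention-sharing model attends over several jointly generated layers at once, the token count per attention call multiplies and the quadratic attention cost grows accordingly. The factor of three below is purely an illustrative assumption:

```python
# Back-of-the-envelope attention cost, assuming tokens from N jointly
# generated layers are concatenated into one self-attention call.
tokens_per_frame = 64 * 64        # 512x512 image -> 64x64 latent
frames_shared = 3                 # assumed number of jointly attended layers

plain_cost = tokens_per_frame ** 2
shared_cost = (tokens_per_frame * frames_shared) ** 2
print(shared_cost / plain_cost)   # 9.0 -> roughly 9x the per-call attention cost
```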

KennyChan3389 avatar KennyChan3389 commented on July 29, 2024
May I ask what causes the 4096 error that appears at the sampler step? I've attached the run log; any help would be appreciated, thanks: [新建文本文档.txt](https://github.com/huchenlei/ComfyUI-layerdiffuse/files/14603150/default.txt)


KennyChan3389 avatar KennyChan3389 commented on July 29, 2024

The 4096 error at the sampler means the image size doesn't match. Scale the source image to 512x512 first, then pass it to the next node.

So you mean the matted (cut-out) image has to match the size of the empty latent, correct?

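For reference, the 4096 in the error is the self-attention token count of a 512x512 latent at the UNet's highest-resolution attention layers, which is why the matted image and the empty latent have to agree in size. A quick check (the 8x factor is the standard SD VAE downsampling):

```python
def attn_tokens(width, height, vae_factor=8):
    """Self-attention token count at the UNet's highest-resolution attention layers."""
    return (width // vae_factor) * (height // vae_factor)

print(attn_tokens(512, 512))  # 4096 -> matches a 512x512 empty latent
print(attn_tokens(768, 512))  # 6144 -> mismatched shapes inside shared attention
```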

nighting0le01 avatar nighting0le01 commented on July 29, 2024

The SD1.5 models are all attention-sharing, one-step models. They are not directly comparable with the previous SDXL models.

Why are the generation times slower for SD1.5 than for SDXL? The SDXL checkpoint seems almost twice as fast (3s vs 6s). Am I missing something?

Can you share any HF links for the attention-sharing SD1.5 models?


huchenlei avatar huchenlei commented on July 29, 2024

The SD1.5 models are all attention-sharing, one-step models. They are not directly comparable with the previous SDXL models.

Why are the generation times slower for SD1.5 than for SDXL? The SDXL checkpoint seems almost twice as fast (3s vs 6s). Am I missing something?

Can you share any HF links for the attention-sharing SD1.5 models?

You can find all models here: https://huggingface.co/LayerDiffusion/layerdiffusion-v1/tree/main

