
ComfyUI_ExtraModels's Introduction

Bored sysadmin doing ML stuff.

See v100s.net for contact info.

ComfyUI_ExtraModels's People

Contributors

city96, deinferno, gavchap, zmwv823


ComfyUI_ExtraModels's Issues

In HunYuan, you can now generate 1280x768, but it's cropped

Hi, thanks for your great work integrating what chaojie started with these HunYuan nodes. I'd been using theirs until I saw that you updated yours. You can now generate at 1280x768, but the result is just a cropped 1024x1024 image. If you look at the difference between their nodes and yours, they have a node that specifies the resolution on the checkpoint loader as well. The resolution is needed in both the empty latent AND the checkpoint loader; otherwise it still renders at 1024x1024 and crops the image.

An example prompt that will cut off the head at 1280x768 if this isn't set correctly: "Anime-style illustration featuring a muscular cyborg character with glowing eyes, mechanical arms and tubes, highlighted in fiery orange and contrasting deep shadows, standing powerfully with large machinery in the background, sharp lines, and dynamic shading enhancing the intensity and futuristic atmosphere."

Thanks very much for your work on this.

About No VAE weights detected, VAE not initalized.

Thank you for your work. I've followed the steps in the README to try to download PixArt and use ComfyUI to demo it. However, after downloading all the necessary files and pressing the Queue button, I got this error message.

╭─polop@cluster-MS-7D07 11:27:49 ~/comfy/ComfyUI:master
╰─$ python main.py
Total VRAM 24268 MB, total RAM 128708 MB
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.3.0+cu121 with CUDA 1201 (you have 1.13.1+cu116)
Python 3.10.14 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 :
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Torch version too old for FP8

Import times for custom nodes:
0.0 seconds: /home/polop/comfy/ComfyUI/custom_nodes/ComfyUI_ExtraModels

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS

########################################
PixArt: Not using xformers!
Expect images to be non-deterministic!
Batch sizes > 1 are most likely broken
########################################

Loading T5 from '/home/polop/comfy/ComfyUI/models/t5/t5-v1_1-xxl'
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.76s/it]
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565
WARNING: No VAE weights detected, VAE not initalized.
!!! Exception during processing!!! 'VAE' object has no attribute 'vae_dtype'
Traceback (most recent call last):
File "/home/polop/comfy/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/polop/comfy/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/polop/comfy/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/polop/comfy/ComfyUI/nodes.py", line 294, in encode
t = vae.encode(pixels[:,:,:,:3])
File "/home/polop/comfy/ComfyUI/comfy/sd.py", line 320, in encode
memory_used = self.memory_used_encode(pixel_samples.shape, self.vae_dtype)
AttributeError: 'VAE' object has no attribute 'vae_dtype'

Prompt executed in 21.43 seconds

May I ask what could cause this and how to fix it? Thank you for your time and hard work.
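
For context, the PixArt .pth checkpoints ship without VAE weights, so the "No VAE weights detected" warning is expected; the crash comes from then routing that uninitialized VAE into VAE Encode. A minimal sketch for checking whether a checkpoint contains VAE weights at all (the path is a placeholder, and the "first_stage_model." prefix is the usual ComfyUI convention, assumed here):

    import comfy.utils

    # Inspect a checkpoint for VAE weight keys before wiring it up as a VAE.
    sd = comfy.utils.load_torch_file("models/checkpoints/PixArt-XL-2-1024x1024.pth")
    print("VAE weights present:", any(k.startswith("first_stage_model.") for k in sd))

If it prints False, load a standalone VAE (e.g. sd-vae-ft-ema) through a separate VAELoader node instead.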

!!! Exception during processing!!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Error occurred when executing KSampler:

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

File "E:\AI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 761, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI.ext\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 616, in call
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 619, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 258, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\comfy\model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\models.py", line 386, in forward
out = self.forward_raw(
^^^^^^^^^^^^^^^^^
File "E:\AI\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\models.py", line 313, in forward_raw
extra_vec = self.pooler(encoder_hidden_states_t5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI.ext\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\models\poolers.py", line 19, in forward
x = x + self.positional_embedding[:, None, :].to(x.dtype) # (L+1)NC
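
The traceback ends at poolers.py:19, where a positional embedding that stayed on the CPU is added to a CUDA activation. A minimal reproduction of that failure mode and the usual fix, casting to the input's device as well as its dtype (a sketch of the general pattern, not necessarily the repo's eventual fix):

    import torch

    pos = torch.randn(16, 1024)        # parameter left on the CPU
    x = torch.randn(2, 16, 1024)
    if torch.cuda.is_available():
        x = x.cuda()                   # activations live on cuda:0

    # With x on CUDA, `x + pos` raises the "two devices" RuntimeError;
    # moving the embedding to x's device and dtype avoids it.
    out = x + pos.to(dtype=x.dtype, device=x.device)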

mat1 and mat2 shapes cannot be multiplied (16x1024 and 4096x1152)

Hello,

Thanks for bringing pixart to comfyui!

I am trying to use the default workflow, and I am getting a "mat1 and mat2 shapes cannot be multiplied (16x1024 and 4096x1152)" error.

I am using PixArt-XL-2-1024x1024.pth

Here's the full trace:

got prompt
model_type EPS
adm 0
Warning: lewei scale: (2,), base size: 64
Loading T5 from 'D:\stable\comfyUI\ComfyUI\models\t5\ZET5'
Requested to load BaseModel
Loading 1 new model
0%| | 0/20 [00:00<?, ?it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "D:\stable\comfyUI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\stable\comfyUI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\stable\comfyUI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\stable\comfyUI\ComfyUI\nodes.py", line 1299, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\stable\comfyUI\ComfyUI\nodes.py", line 1269, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\stable\comfyUI\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 715, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 621, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 560, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 284, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 274, in forward
return self.apply_model(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 271, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 252, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "D:\stable\comfyUI\ComfyUI\comfy\samplers.py", line 226, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\stable\comfyUI\ComfyUI\comfy\model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArtMS.py", line 215, in forward
out = self.forward_raw(
File "D:\stable\comfyUI\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArtMS.py", line 169, in forward_raw
y = self.y_embedder(y, self.training) # (N, D)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArt_blocks.py", line 399, in forward
caption = self.y_proj(caption)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\timm\models\layers\mlp.py", line 27, in forward
x = self.fc1(x)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable\comfyUI\comf\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x1024 and 4096x1152)
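
The shapes point at the text conditioning: PixArt's caption projection expects T5-v1.1-XXL embeddings with a hidden size of 4096, while the encoder loaded here ('ZET5') produced 1024-dimensional ones. A sanity-check sketch with a stand-in tensor:

    import torch

    cond = torch.randn(1, 16, 1024)  # stand-in for the bad conditioning
    T5_XXL_DIM = 4096                # hidden size PixArt was trained against
    if cond.shape[-1] != T5_XXL_DIM:
        print(f"text embedding dim {cond.shape[-1]} != {T5_XXL_DIM}; "
              "load t5-v1_1-xxl rather than a smaller T5 variant")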

Today's Update broke your Node

ComfyUI Revision: 2111 [38ed2da2] | Released on '2024-04-05'

(IMPORT FAILED): ComfyUI\custom_nodes\ComfyUI_ExtraModels

ImportError: cannot import name 'get_models_from_cond' from 'comfy.sample' (H:\ComfyUI\comfy\sample.py)

Cannot import H:\ComfyUI\custom_nodes\ComfyUI_ExtraModels module for custom nodes: cannot import name 'get_models_from_cond' from 'comfy.sample' (H:\ComfyUI\comfy\sample.py)
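
ComfyUI's spring 2024 refactor moved several helpers out of comfy.sample, which is what breaks the import. Until the node pack updates, a compatibility shim of this shape usually bridges the gap; that the helper now lives in comfy.sampler_helpers is an assumption to verify against your ComfyUI revision:

    # Compatibility shim sketch for the relocated helper (new location assumed).
    try:
        from comfy.sample import get_models_from_cond            # older ComfyUI
    except ImportError:
        from comfy.sampler_helpers import get_models_from_cond   # newer ComfyUI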

An error is reported when selecting GPU (Error occurred when executing HYDiTTextEncodeSimple)

Error occurred when executing HYDiTTextEncodeSimple:

Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\soft\ComfyUI-aki-v1.2\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\soft\ComfyUI-aki-v1.2\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\nodes.py", line 162, in encode_simple
return self.encode(text=text, text_t5=text, **args)
File "E:\soft\ComfyUI-aki-v1.2\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\nodes.py", line 111, in encode
t5_outs = T5.cond_stage_model.transformer(
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1975, in forward
encoder_outputs = self.encoder(
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1016, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
return F.embedding(
File "E:\soft\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

Pixart Sigma images are garbled

For some reason, Pixart Sigma images are coming out as some sort of incomprehensible mess. I followed the directions as best I could and used the sample workflow, but this is the result:
[attached image: ComfyUI_PixArt_00002_]

Mind you, the prompt is unchanged: "pixelart drawing of a tank with a (blue:0.8) camo pattern". I assumed this was a VAE issue, but changing the VAE has little to no effect. Everything has been uninstalled and reinstalled, but I've had no success. Any help would be sincerely appreciated.

requires Accelerate


Hello, I'm trying to run the Abominable Spaghetti Workflow and I'm getting this error.
I installed every pip package I saw on ComfyUI's pages and on this one, including accelerate, but I still have the error.
I tried both the portable and classic ComfyUI versions.
I tried every dtype.
My setup: an 8GB RTX 2080 with a Ryzen 7 3800X and 64GB RAM.

Hope you have the solution, thanks
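
If the error persists after installing accelerate, the package usually landed in a different interpreter than the one launching ComfyUI; the portable build in particular ships its own python_embeded. A small diagnostic sketch, run with ComfyUI's own Python, shows which interpreter is active and whether it can see accelerate:

    import importlib.util
    import sys

    # Which interpreter is running, and is accelerate importable from it?
    print(sys.executable)
    print("accelerate importable:", importlib.util.find_spec("accelerate") is not None)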

T5v11Loader path: you cannot specify a path there

Hello

The T5v11Loader path is not working; t5v11_name does not accept a path. Please fix it.

ERROR:root:Failed to validate prompt for output 9:
ERROR:root:* T5v11Loader 11:
ERROR:root: - Value not in list: t5v11_name: 'pytorch_model-00001-of-00002.bin' not in []
ERROR:root:Output will be ignored
Prompt executed in 0.00 seconds

you cannot specify a path there

`/models/T5` should be `/models/t5`

In the README.md, you mentioned:

Download the [second text encoder from here](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/t2i/mt5/pytorch_model.bin) and place it in ComfyUI/models/T5 - rename it to "mT5.bin"

But when I install it on Linux, the node looks for the ComfyUI/models/t5 folder, so I think the T should be lower case.

Thanks.

Can only generate 1024x1024 images

I tested it: only 1024x1024 images generate correctly. Other resolutions, such as 1024x768 or 768x1024, come out as striped images.

T5TextEncode: Tensor on device cpu is not on the expected device meta!

Error occurred when executing T5TextEncode:

Tensor on device cpu is not on the expected device meta!

File "C:\Users\strau\ComfyUI\execution.py", line 154, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\strau\ComfyUI\execution.py", line 84, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\strau\ComfyUI\execution.py", line 77, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\strau\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\nodes.py", line 83, in encode
cond = T5.encode_from_tokens(tokens)
File "C:\Users\strau\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\loader.py", line 71, in encode_from_tokens
return self.cond_stage_model.encode_token_weights(tokens)
File "C:\Users\strau\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\t5v11.py", line 97, in encode_token_weights
out = self.encode(to_encode)
File "C:\Users\strau\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\t5v11.py", line 79, in encode
return self(tokens)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\t5v11.py", line 72, in forward
outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1975, in forward
encoder_outputs = self.encoder(
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1110, in forward
layer_outputs = layer_module(
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 694, in forward
self_attention_outputs = self.layer[0](
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 600, in forward
normed_hidden_states = self.layer_norm(hidden_states)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 260, in forward
return self.weight * hidden_states
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_prims_common\wrappers.py", line 229, in fn
result = fn(*args, **kwargs)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_prims_common\wrappers.py", line 132, in fn
result = fn(**bound.arguments)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_refs_init
.py", line 982, in ref
output = prim(a, b)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_refs_init
.py", line 1610, in mul
return prims.mul(a, b)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_ops.py", line 448, in call
return self.op(*args, **kwargs or {})
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_prims_init
.py", line 351, in elementwise_meta
utils.check_same_device(*args
, allow_cpu_scalar_tensors=True)
File "C:\Users\strau\AppData\Local\Programs\Python\Python310\lib\site-packages\torch_prims_common_init
.py", line 654, in check_same_device
raise RuntimeError(msg)


Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

On an M1 Max, running the official example results in the following error:

Error occurred when executing KSampler:
Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
File "/Users/weiwei/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/Users/weiwei/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Users/weiwei/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/Users/weiwei/ComfyUI/nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/Users/weiwei/ComfyUI/nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 111, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/Users/weiwei/ComfyUI/comfy/sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1446, in KSampler_sample
return _KSampler_sample(*args, **kwargs)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 761, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1469, in sample
return _sample(*args, **kwargs)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/Users/weiwei/Envs/comfyui/lib/python3.10/site-packages/torch/utils/contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/weiwei/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 992, in call
return self.predict_noise(*args, **kwargs)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1042, in predict_noise
out = super().predict_noise(*args, **kwargs)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 619, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 258, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/.patches.py", line 4, in calc_cond_batch
return calc_cond_batch_original_tiled_diffusion_3bb493f6(model, conds, x_in, timestep, model_options)
File "/Users/weiwei/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 63, in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self, *args, **kwargs)
File "/Users/weiwei/ComfyUI/comfy/model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/Users/weiwei/Envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/weiwei/Envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/models/PixArtMS.py", line 242, in forward
out = self.forward_raw(
File "/Users/weiwei/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/models/PixArtMS.py", line 177, in forward_raw
).unsqueeze(0).to(x.device).to(self.dtype)
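
MPS has no float64 at all, so any tensor created or cast as double fails the moment it reaches the device; the usual fix is to downcast to float32 before the move, which is presumably what the pos_embed path at PixArtMS.py:177 needs. A minimal sketch:

    import torch

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    emb = torch.arange(8, dtype=torch.float64)  # float64 exists only off-MPS
    emb = emb.to(torch.float32).to(device)      # cast first, then move
    print(emb.dtype, emb.device)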

Support the new Pixart 2K model

Pixart just dropped a "2K" model at https://huggingface.co/PixArt-alpha/PixArt-Sigma/tree/main

I tried running it but that one just errors with:

Traceback (most recent call last):
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\nodes.py", line 29, in load_checkpoint
    model = load_pixart(
            ^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\loader.py", line 102, in load_pixart
    m, u = model.diffusion_model.load_state_dict(state_dict, strict=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PixArtMS:
        size mismatch for pos_embed: copying a param with shape torch.Size([1, 16384, 1152]) from checkpoint, the shape in current model is torch.Size([1, 4096, 1152]).
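
The two sizes are consistent with a resolution mismatch rather than a broken file: with the usual 8x VAE downscale and patch size 2 (assumed PixArt values), a 1024px model has (1024/8/2)^2 = 4096 position tokens while a 2K model has (2048/8/2)^2 = 16384, so the loader would need a dedicated 2K model config. The arithmetic:

    # Token count behind the pos_embed shape mismatch (VAE scale 8 and patch
    # size 2 are the usual PixArt values, assumed here).
    def num_tokens(px, vae_scale=8, patch=2):
        side = px // vae_scale // patch
        return side * side

    print(num_tokens(1024))  # 4096  -> what the current model config builds
    print(num_tokens(2048))  # 16384 -> what the 2K checkpoint stores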

Clarification on the docs' multi-GPU usage

The docs state the following, but I cannot find any way to use it. Any help or clarification?

If you have a second GPU, selecting "cuda:1" as the device will allow you to use it for T5, freeing at least some VRAM/System RAM. Using FP16 as the dtype is recommended.
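
The device referred to is presumably the device dropdown on the T5 loader node itself; "cuda:1" is only a valid choice when a second GPU is visible to torch. A quick way to check what torch can see:

    import torch

    # Enumerate visible CUDA devices; "cuda:1" requires a count of at least 2.
    for i in range(torch.cuda.device_count()):
        print(f"cuda:{i}:", torch.cuda.get_device_name(i))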

PixArt: UNET conversion has missing keys!

settings in ComfyUI.
PixArt: UNET conversion has missing keys!
['y_embedder.y_embedding']
model_type EPS
Missing UNET keys ['pos_embed', 'y_embedder.y_embedding']
style: base
text_positive: stars, water, brilliantly, gorgeous large scale scene, a little girl, in the style of dreamy realism, light gold and amber, blue and pink, brilliantly illuminated in the background.
text_negative: anime, manga, illustration, cartoon, 3d, Photoshop, sketch, video game, draw, paint, cgi, canvas frame, watermark, signature, username, artist name
text_positive_styled: stars, water, brilliantly, gorgeous large scale scene, a little girl, in the style of dreamy realism, light gold and amber, blue and pink, brilliantly illuminated in the background.
text_negative_styled: anime, manga, illustration, cartoon, 3d, Photoshop, sketch, video game, draw, paint, cgi, canvas frame, watermark, signature, username, artist name
Loading T5 from 'J:\ComfyUI\models\t5'
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:50<00:00, 25.35s/it]
Also, there is no option for PixArtMS Sigma XL 2 512 in the model drop-down list.

Results Inconsistency with Official sample_t2i.py Code

Hello,

I have been working with the HunYuan DiT library, and I've noticed that the results I'm getting are not completely consistent with the ones produced by the official sample_t2i.py code.

I've tried troubleshooting the issue on my end, but it seems that there might be an underlying issue with the library itself. I'm not entirely sure what is causing this discrepancy and I would appreciate your assistance in resolving this.

If you are willing to collaborate with us in resolving this issue, we would be very grateful. Once we identify and fix the problem, your contribution can be acknowledged in the hunyuan dit library.

Looking forward to your response and the possibility of working together on this.

Best regards,
Richard

Requesting a new model (Kandinsky 2.2)

Hi! I really appreciate the effort that has gone into producing these nodes. I was wondering if you have any plans to support models based on Kandinsky 2.2? I would really like to try it out and build new workflows, since its quality and image blending are superior to most models :)

'caption_projection.y_embedding'

I'm getting an error on the latest ComfyUI commit. Have you seen this before?

Error occurred when executing PixArtCheckpointLoader:

'caption_projection.y_embedding'

File "/home/gchapman/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/gchapman/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/gchapman/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/gchapman/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/nodes.py", line 29, in load_checkpoint
model = load_pixart(
File "/home/gchapman/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/loader.py", line 59, in load_pixart
state_dict = convert_state_dict(state_dict) # Diffusers
File "/home/gchapman/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/diffusers_convert.py", line 76, in convert_state_dict
new_state_dict = {k: state_dict[v] for k,v in cmap}
File "/home/gchapman/ComfyUI/custom_nodes/ComfyUI_ExtraModels/PixArt/diffusers_convert.py", line 76, in
new_state_dict = {k: state_dict[v] for k,v in cmap}

KeyError: 'caption_projection.y_embedding'
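
The conversion map indexes every expected diffusers key directly, so a checkpoint that lacks one optional entry raises KeyError. A hedged sketch of a tolerant variant of the comprehension from diffusers_convert.py:76, with toy stand-ins, assuming the missing null embedding really is optional for this model:

    # Skip mapped keys the checkpoint doesn't carry instead of raising.
    state_dict = {"a.weight": 1}  # toy stand-in for the loaded checkpoint
    cmap = [
        ("x_embedder.weight", "a.weight"),
        ("y_embedder.y_embedding", "caption_projection.y_embedding"),  # absent
    ]
    new_state_dict = {k: state_dict[v] for k, v in cmap if v in state_dict}
    print(new_state_dict)  # {'x_embedder.weight': 1}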

Error occurred when executing T5v11Loader: Using `low_cpu_mem_usage=True`

I've done all the installation steps (including all updates) and downloaded all the necessary models (PixArt/T5/VAE), but I still have this:

Error occurred when executing T5v11Loader:

Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`

File "D:\git\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\nodes.py", line 57, in load_model
return (load_t5(
^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\loader.py", line 107, in load_t5
return EXM_T5v11(**model_args)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\loader.py", line 44, in __init__
self.cond_stage_model = T5v11Model(
^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\T5\t5v11.py", line 38, in __init__
self.transformer = T5EncoderModel.from_pretrained(textmodel_path, **model_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\git\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 2970, 
in from_pretrained raise ImportError(

And I've also done this:
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
I've also upgraded transformers.

I'm using the workflow example from the repo.
My GPU: RTX 3080 10Gb
My OS: Win10
My nodes look like this: [screenshot]

Can you help me to solve this problem?
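
Since this is the portable build, one hedged way to guarantee accelerate lands in the right environment is to install it through the interpreter that actually runs ComfyUI (python_embeded\python.exe), for example:

    import subprocess
    import sys

    # Install accelerate into whatever interpreter executes this script,
    # then verify that it imports.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "accelerate"])
    import accelerate
    print("accelerate", accelerate.__version__)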

Error when executing PixArtCheckpointLoader

I've loaded up the sample workflow and the models specified in the readme. I've redirected the nodes to the correct models and checked that they're loading. When I run the workflow, however, it spits out this error:

Error occurred when executing PixArtCheckpointLoader:

PatchEmbed.__init__() got an unexpected keyword argument 'bias'

File "F:\Tools\ComfyUI\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Tools\ComfyUI\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Tools\ComfyUI\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\nodes.py", line 30, in load_checkpoint
model = load_pixart(
File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\loader.py", line 51, in load_pixart
model.diffusion_model = PixArtMS(**model_conf.unet_config)
File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArtMS.py", line 110, in init
super().init(
File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArt.py", line 92, in init
self.x_embedder = PatchEmbed(input_size, patch_size, in_channels, hidden_size, bias=True)

Could you let me know where I've gone wrong?
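
This TypeError is characteristic of an outdated timm package: its PatchEmbed only gained the bias keyword in later releases, so the loader's PatchEmbed(..., bias=True) call fails on old versions (treat the exact version cutoff as an assumption). A quick probe:

    import inspect

    import timm
    from timm.models.layers import PatchEmbed

    # Does the installed timm's PatchEmbed constructor accept bias=?
    print("timm", timm.__version__)
    print("accepts bias:", "bias" in inspect.signature(PatchEmbed.__init__).parameters)

If it prints False, upgrading timm (pip install -U timm) is the likely fix.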

T5v11Loader error

I'm really not good at these things. I checked the website indicated in the error message, but where should I install that SentencePiece thing? In the ExtraModels folder? In a venv folder? In ComfyUI/python_embeded? I tried in the ComfyUI root, but it was already there in my Python folders, so I'm completely lost again.

Error occurred when executing HYDiTTextEncoderLoader:

Error occurred when executing HYDiTTextEncoderLoader:

We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Tencent-Hunyuan/HunyuanDiT is not the path to a directory containing a file named t2i/tokenizer\config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

Can anyone help me? Many thanks.
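
The loader is fetching the tokenizer from the Hugging Face Hub at runtime, so any connectivity or proxy problem surfaces here. One hedged workaround is to pre-download the tokenizer files once with huggingface_hub (the repo id comes from the error message; the file pattern is an assumption):

    from huggingface_hub import snapshot_download

    # Cache the tokenizer files locally so later runs can work offline.
    snapshot_download(
        repo_id="Tencent-Hunyuan/HunyuanDiT",
        allow_patterns=["t2i/tokenizer/*"],
    )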

Hires fix?

Is it possible to use the PixArt models with a hires fix as part of the workflow? I'm not sure if special adjustments are necessary, but doing it the usual way of upscaling the image then running it through a second ksampler with lower denoise produces pretty bad results. Is this just not possible with the PixArt models?

VAE error

I'm getting this one... I also tried the other two, same result.

ComfyUI: 2143 -27d580-(2024-04-23)

Error occurred when executing VAELoader:

'decoder.conv_in.weight'

File "D:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\SD\ComfyUI_windows_portable\ComfyUI\nodes.py", line 690, in load_vae
vae = comfy.sd.VAE(sd=sd)
^^^^^^^^^^^^^^^^^^^
File "D:\SD\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 222, in init
self.latent_channels = ddconfig['z_channels'] = sd["decoder.conv_in.weight"].shape[1]
~~^^^^^^^^^^^^^^^^^^^^^^^^^^

Issues trying to run T5v1.1 Loader

I'm trying to run PixArt Sigma, but this node keeps giving me trouble every time. I have uninstalled and reinstalled the node to no avail. I have the necessary files for PixArt Sigma in the t5 folder, but when the path is set to "folder" it keeps giving me various kinds of errors, such as "missing config.json". The config file had initially been renamed to something different, so I renamed it back, and then it gave me another error:

[Screenshot (367)]

And this is where the heart of the confusion lies. I have seen other people's workflows with the path option set to "folder" and they work just fine. This issue seems to be entirely unique to me, as I don't see anyone else with this problem.

Also of note: when the path type is set to "file", the output image becomes a noisy mess, and I'm still not entirely sure what causes that either. I have the right VAE for it too.

[Screenshot (364)]

Error occurred when executing HYDiTTextEncoderLoader:invalid load key, 'v'.

Error occurred when executing HYDiTTextEncoderLoader:

invalid load key, 'v'.

File "L:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\nodes.py", line 69, in load_model
clip = load_clip(
^^^^^^^^^^
File "L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ExtraModels\HunYuanDiT\tenc.py", line 148, in load_clip
sd = comfy.utils.load_torch_file(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 23, in load_torch_file
pl_sd = torch.load(ckpt, map_location=device, pickle_module=comfy.checkpoint_pickle)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1040, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "L:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1258, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

About how to load parameters from another folder

Thank you for answering my earlier questions. I have now updated my PyTorch and CUDA versions and use SD 1.5 models for the VAE. But due to my limited disk storage, I have to download the PixArt model to the folder

/data/Pixart_Alpha/PixArt-alpha

where the checkpoints, t5, and vae are stored in

/data/Pixart_Alpha/PixArt-alpha/t5/t5-v1_1-xxl
/data/Pixart_Alpha/PixArt-alpha/vae/sd-vae-ft-ema

and I installed ComfyUI in

/home/polop

I tried to modify extra_model_paths.yaml as

base_path: /data/Pixart_Alpha/Pixart-alpha
checkpoints: /data/Pixart_Alpha/Pixart-alpha
vae: /data/Pixart_Alpha/Pixart-alpha/sd-vae-ft-ema

However, that doesn't seem to work for me. The file list in the GUI is empty, and I can't find where to modify the path to the t5 folder.
Could I ask how to modify the paths to these data folders so that the GUI can load parameters from them? Thank you for your help.
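
For reference, extra_model_paths.yaml expects the folder keys nested under a named section, with the other paths given relative to base_path; also note that directory case must match on Linux (PixArt-alpha vs. Pixart-alpha above). A hedged sketch of a layout that might work here; the folder names are guesses from the paths above, and whether this node pack picks up a t5 key from the yaml is an assumption to verify:

    pixart:
        base_path: /data/Pixart_Alpha/PixArt-alpha
        checkpoints: checkpoints
        vae: vae
        t5: t5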

Can't run Pixart Sigma 2k, worked ok couple of days ago. "unexpected keyword argument 'bias'"

Just loaded the workflow which worked before and got the error from the PixArtCheckpointLoader node:

Error occurred when executing PixArtCheckpointLoader:

PatchEmbed.__init__() got an unexpected keyword argument 'bias'

File "C:_sd-win\ComfyUI_new\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:_sd-win\ComfyUI_new\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:_sd-win\ComfyUI_new\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:_sd-win\ComfyUI_new\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\nodes.py", line 29, in load_checkpoint
model = load_pixart(
^^^^^^^^^^^^
File "C:_sd-win\ComfyUI_new\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\loader.py", line 87, in load_pixart
model.diffusion_model = PixArtMS(**model_conf.unet_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:_sd-win\ComfyUI_new\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArtMS.py", line 109, in init
super().init(
File "C:_sd-win\ComfyUI_new\ComfyUI\custom_nodes\ComfyUI_ExtraModels\PixArt\models\PixArt.py", line 90, in init
self.x_embedder = PatchEmbed(input_size, patch_size, in_channels, hidden_size, bias=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I tried reinstalling the nodes, restarting Comfy, and rebooting - still the same. Maybe there is a solution before I try to reinstall Comfy from scratch?
