
comfyui-magicanimate's People

Contributors

mingqizhang, slapaper, thecooltechguy


comfyui-magicanimate's Issues

Controlnet inference out of memory

After the Magic Animate model loads, the Magic Animate node starts to run and then raises an exception.

Error occurred when executing MagicAnimate:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 6.82 GiB
Requested : 160.00 MiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "D:\stable_diffusion\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\stable_diffusion\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\stable_diffusion\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\stable_diffusion\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 253, in generate
sample = pipeline(
File "D:\python\python3.10\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable_diffusion\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 699, in __call__
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable_diffusion\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\models\controlnet.py", line 529, in forward
sample, res_samples = downsample_block(
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1086, in forward
hidden_states = attn(
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\diffusers\models\transformer_2d.py", line 315, in forward
hidden_states = block(
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\diffusers\models\attention.py", line 248, in forward
ff_output = self.ff(norm_hidden_states, scale=lora_scale)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\diffusers\models\attention.py", line 307, in forward
hidden_states = module(hidden_states, scale)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\python\python3.10\lib\site-packages\diffusers\models\attention.py", line 356, in forward
return hidden_states * self.gelu(gate)

Judging from the traceback, the exception occurs while running ControlNet. My notebook has only 8 GB of VRAM. How much VRAM is required to run this workflow?
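One way to see how close the card already is before the node runs is to query CUDA directly. This is only a diagnostic sketch (it assumes a CUDA build of PyTorch and reports the current device); the usual workarounds are reducing the output resolution or the number of frames processed per batch.

```python
# Diagnostic only: report free/total VRAM and what PyTorch has already allocated.
# Assumes a CUDA build of PyTorch; all values refer to the current CUDA device.
import torch

free, total = torch.cuda.mem_get_info()
print(f"free:      {free / 1024**3:.2f} GiB")
print(f"total:     {total / 1024**3:.2f} GiB")
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```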

The results are too white

The results are similar to your example: they look as if they are covered with a semi-transparent white layer. The official examples have the correct colors.
From your nodes: [screenshot omitted]

Official examples: [screenshot omitted]

Error message saying "Ran out of input"

I encountered an error message saying "Ran out of input". The full traceback is as follows:

Error occurred when executing MagicAnimateModelLoader:

Ran out of input

File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 123, in load_model
motion_module_state_dict = torch.load(motion_module, map_location="cpu")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1028, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1246, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
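"Ran out of input" from torch.load usually means the pickle stream ended early, i.e. the checkpoint file is empty or truncated (an interrupted download is the typical cause). A quick sanity check, with the motion-module path below given only as an example of where such a file might live:

```python
# Check whether the motion-module checkpoint is empty or truncated before loading it.
import os
import torch

# Example path only -- point this at the checkpoint your MagicAnimate loader actually uses.
motion_module = r"E:\ComfyUI_windows_portable\ComfyUI\models\MagicAnimate\temporal_attention\temporal_attention.ckpt"

size = os.path.getsize(motion_module)
print(f"{motion_module}: {size / 1e6:.1f} MB")
if size == 0:
    print("The file is empty -- re-download it.")
else:
    state_dict = torch.load(motion_module, map_location="cpu")
    print(f"Loaded a state dict with {len(state_dict)} entries.")
```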

ImportError: cannot import name 'PositionNet' from 'diffusers.models.embeddings'

I got this error when starting ComfyUI:

File "E:\PycharmProjects\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\models\appearance_encoder.py", line 39, in <module>
from diffusers.models.embeddings import (
ImportError: cannot import name 'PositionNet' from 'diffusers.models.embeddings'

There is no PositionNet in the diffusers code. How can I solve this problem?
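A quick probe rather than a fix: newer diffusers releases no longer export PositionNet from diffusers.models.embeddings, so this error usually means the installed diffusers is newer than what the extension was written against. The snippet only reports what is installed; the version to pin is whatever the extension's requirements specify.

```python
# Report the installed diffusers version and whether PositionNet is still exported.
import diffusers

print("diffusers version:", diffusers.__version__)
try:
    from diffusers.models.embeddings import PositionNet  # noqa: F401
    print("PositionNet is available")
except ImportError:
    print("PositionNet is missing -- install the diffusers release pinned in the "
          "extension's requirements instead of the newest one")
```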

ComfyUI can't load nodes

Hi!
I have been waiting for a tool like this for years :) But the nodes are not loading in ComfyUI.
ComfyUI is up to date and working fine.
The issue is present on both Linux and Windows.

This is the error from the Ubuntu terminal:

File "/home/user/ComfyUI/nodes.py", line 1800, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/user/ComfyUI/custom_nodes/ComfyUI-MagicAnimate/__init__.py", line 15, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "/home/user/ComfyUI/custom_nodes/ComfyUI-MagicAnimate/nodes.py", line 11, in <module>
from diffusers import AutoencoderKL, DDIMScheduler, UniPCMultistepScheduler
ImportError: cannot import name 'UniPCMultistepScheduler' from 'diffusers' (/home/user/.local/lib/python3.10/site-packages/diffusers/__init__.py)

Cannot import /home/user/ComfyUI/custom_nodes/ComfyUI-MagicAnimate module for custom nodes: cannot import name 'UniPCMultistepScheduler' from 'diffusers' (/home/user/.local/lib/python3.10/site-packages/diffusers/__init__.py)


Currently broken with latest ComfyUI?

I have tried fresh installs on multiple machines. Is anybody else experiencing something similar?

`Cannot import C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate module for custom nodes: cannot import name 'PositionNet' from 'diffusers.models.embeddings' (C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\embeddings.py)
### Loading: ComfyUI-Manager (V2.7.2)
### ComfyUI Revision: 1965 [f44225fd] | Released on '2024-02-09'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[comfyui_controlnet_aux] | INFO -> Using ckpts path: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
Traceback (most recent call last):
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 936, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1073, in get_code
  File "<frozen importlib._bootstrap_external>", line 1130, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\AI\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ControlNet-LLLite-ComfyUI\\__init__.py'

Cannot import C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ControlNet-LLLite-ComfyUI module for custom nodes: [Errno 2] No such file or directory: 'C:\\AI\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ControlNet-LLLite-ComfyUI\\__init__.py'

Import times for custom nodes:
   0.0 seconds (IMPORT FAILED): C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ControlNet-LLLite-ComfyUI
   0.0 seconds: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
   0.1 seconds: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
   0.2 seconds: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
   0.2 seconds: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   0.2 seconds (IMPORT FAILED): C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate
   0.5 seconds: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux`

OverflowError: cannot fit 'int' into an index-sized integer

Using the example workflow,
select model ->
upload reference image ->
upload pose video ->
click queue prompt ->
when executing the "Magic Animate" node, it boomed

System: Windows 11 with Python 3.10.9.

I tried reinstalling the requirements, uploading another image, and changing some input values, but nothing worked.

got prompt
control shape: (25, 512, 512, 3)
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\AI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\AI\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 254, in generate
sample = pipeline(
File "C:\AI\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 591, in __call__
text_embeddings = self._encode_prompt(
File "C:\AI\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 189, in _encode_prompt
text_inputs = self.tokenizer(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 2790, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 2876, in _call_one
return self.batch_encode_plus(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 3067, in batch_encode_plus
return self._batch_encode_plus(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils.py", line 807, in _batch_encode_plus
batch_outputs = self._batch_prepare_for_model(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils.py", line 879, in _batch_prepare_for_model
batch_outputs = self.pad(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 3274, in pad
outputs = self._pad(
File "C:\AI\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 3638, in _pad
encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer

Prompt executed in 0.25 seconds
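One plausible cause (an assumption, not confirmed from this log): when a tokenizer folder is missing its configuration, transformers falls back to a huge sentinel value for model_max_length, and padding to "max_length" then tries to build a list of roughly 10^30 zeros, which raises exactly this OverflowError. A quick check, with the tokenizer path below given only as an example:

```python
# Check the tokenizer's model_max_length; a huge sentinel (~1e30) means the
# tokenizer config is missing or incomplete and "max_length" padding will overflow.
from transformers import CLIPTokenizer

tokenizer_dir = r"C:\AI\ComfyUI\models\MagicAnimate\stable-diffusion-v1-5\tokenizer"  # example path
tokenizer = CLIPTokenizer.from_pretrained(tokenizer_dir)
print("model_max_length:", tokenizer.model_max_length)  # expect 77 for Stable Diffusion 1.5
```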

Not working with OpenPose

I tried to use OpenPose with this add-on (judging by the model folder, this should be possible). When I tried to start, I got an error: there is no config.json file for OpenPose (it really isn't in the folder). Question: where can I get this file?
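Not an authoritative answer, but ControlNet repositories on the Hugging Face Hub normally ship a config.json next to their weights, so one option is to pull it from an OpenPose ControlNet repo. The repo id and destination folder below are examples, not the extension's documented layout:

```python
# Fetch the config.json that accompanies an OpenPose ControlNet on the Hugging Face Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lllyasviel/sd-controlnet-openpose",  # an OpenPose ControlNet repo on the Hub
    filename="config.json",
    local_dir=r"D:\AI\ComfyUI\models\MagicAnimate\controlnet_openpose",  # example destination
)
print("saved to", path)
```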

Missing appearance_encoder config.json

Getting this error

Error occurred when executing MagicAnimateModelLoader:

Error no file named config.json found in directory D:\AI\ComfyUI_windows_portable\ComfyUI\models\MagicAnimate\appearance_encoder.

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ezXY\autoCastPatch.py", line 299, in map_node_over_list
return _map_node_over_list(obj, input_data_all, func, allow_interrupt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 89, in load_model
appearance_encoder = AppearanceEncoderModel.from_pretrained(config.pretrained_appearance_encoder_path).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 712, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 365, in load_config
raise EnvironmentError(

Huge downloads, Import Failed

I just wish it had been mentioned that you need a lot of disk space to download the package. I never expected to download 48 GIGABYTES of content; most of it is diffusers weights and safetensors that I already have. Are the CKPTs necessary?
I'm not even sure the whole thing downloaded correctly, because my disk quickly hit 0 bytes free, thank you very much.

So far all I can get is the now classic "Import Failed" -_-
It could be an incomplete install OR an incompatibility with other custom nodes.

...oh, you need the list of my installed custom nodes? Sure, here it is:

canvas_tab
cg-use-everywhere
comfy-image-saver
ComfyLiterals
ComfyUI-Advanced-ControlNet
ComfyUI-Custom-Scripts
comfyui-dynamicprompts
ComfyUI-Image-Selector
ComfyUI-Impact-Pack
ComfyUI-Inspire-Pack
ComfyUI-Lora-Auto-Trigger-Words
ComfyUI-MagicAnimate
ComfyUI-Manager
ComfyUI-post-processing-nodes
comfyui-reactor-node
ComfyUI-Stable-Video-Diffusion
ComfyUI-TacoNodes
comfyui-tooling-nodes
ComfyUI-VideoHelperSuite
ComfyUI-WD14-Tagger
ComfyUI_ADV_CLIP_emb
ComfyUI_Comfyroll_CustomNodes
comfyui_controlnet_aux
ComfyUI_Cutoff
ComfyUI_FizzNodes
ComfyUI_IPAdapter_plus
ComfyUI_TiledKSampler
ComfyUI_tinyterraNodes
ComfyUI_toyxyz_test_nodes
ComfyUI_UltimateSDUpscale
Derfuu_ComfyUI_ModdedNodes
efficiency-nodes-comfyui
facerestore_cf
IPAdapter-ComfyUI
LoadLoraWithTags
OneButtonPrompt
rgthree-comfy
sdxl_prompt_styler
stability-ComfyUI-nodes
was-node-suite-comfyui
wlsh_nodes

Error loading models in ComfyUI

When I try to load the workflow, I get this error. Even after reinstalling everything and checking that all models are in the path, the same error persists. Can you specify which models need to go where and what the loader is looking for?
Error occurred when executing MagicAnimateModelLoader:

Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:\AI\ComfyUI\models\MagicAnimate\stable-diffusion-v1-5.

File "D:\AI\comfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\comfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI\custom_nodes\ComfyUI_ezXY\autoCastPatch.py", line 299, in map_node_over_list
return _map_node_over_list(obj, input_data_all, func, allow_interrupt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\comfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 81, in load_model
text_encoder = CLIPTextModel.from_pretrained(config.pretrained_model_path, subfolder="text_encoder")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\comfyUI\venv\Lib\site-packages\transformers\modeling_utils.py", line 2992, in from_pretrained
raise EnvironmentError(
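To see what the loader finds versus what it expects, listing the folder helps. The path below is taken from the error message and is only an example; from_pretrained with subfolder="text_encoder" expects the usual diffusers layout, i.e. a text_encoder/ subfolder containing config.json plus pytorch_model.bin or model.safetensors.

```python
# List every file under the stable-diffusion-v1-5 folder the loader is pointed at.
from pathlib import Path

sd15 = Path(r"D:\AI\ComfyUI\models\MagicAnimate\stable-diffusion-v1-5")  # path from the error message
for p in sorted(sd15.rglob("*")):
    if p.is_file():
        print(p.relative_to(sd15), f"{p.stat().st_size / 1e6:.1f} MB")
# Expected (diffusers layout): text_encoder/config.json and
# text_encoder/pytorch_model.bin (or model.safetensors), plus tokenizer/, unet/, vae/, ...
```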


[Solved] DeepPose video input?

Hi,
Now that the huge download issue is solved, everything loads perfectly. But in the end I still don't know how to use MagicAnimate.
How do I make a "DeepPose" video input? I just googled it, and it looks like you're the only one even mentioning it for ComfyUI; maybe I need to search harder? x)
I'm confused by all the AnimateDiff/AnimateAnyone/SVD teaser videos you see all over the internet lately, and I'm not sure whether MagicAnimate is part of that tech or which path to follow... in short: help xD

Also, usually with a GitHub custom node I would drag a PNG from the description into ComfyUI to get the basic workflow; that didn't work this time, lol. I had to build it by hand; luckily it's very simple.

AttributeError: 'Tensor' object has no attribute 'copy'

After I updated, all workflows related to magicanimate seemed to report errors.

Error occurred when executing MagicAnimate:

'Tensor' object has no attribute 'copy'

File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 84, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 77, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 253, in generate
sample = pipeline(
^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 606, in __call__
control = self.prepare_condition(
^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 356, in prepare_condition
condition = torch.from_numpy(condition.copy()).to(device=device, dtype=dtype) / 255.0

got prompt
control shape: (16, 681, 681, 3)
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 84, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 77, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 253, in generate
sample = pipeline(
^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 606, in __call__
control = self.prepare_condition(
^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 356, in prepare_condition
condition = torch.from_numpy(condition.copy()).to(device=device, dtype=dtype) / 255.0
^^^^^^^^^^^^^^
AttributeError: 'Tensor' object has no attribute 'copy'

Prompt executed in 0.69 seconds
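The traceback shows prepare_condition calling .copy() on a torch tensor where a NumPy array is expected. A defensive conversion like the sketch below (an illustration of the idea, not a confirmed patch of the extension) sidesteps the mismatch:

```python
# Accept either a torch tensor or a NumPy array for the control condition.
import numpy as np
import torch

def to_condition_array(condition):
    """Return a contiguous NumPy array regardless of whether a tensor was passed."""
    if torch.is_tensor(condition):
        condition = condition.detach().cpu().numpy()
    return np.ascontiguousarray(condition)

# Inside the pipeline, the failing line would then become:
# condition = torch.from_numpy(to_condition_array(condition)).to(device=device, dtype=dtype) / 255.0
```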

ImportError: cannot import name 'ADDED_KV_ATTENTION_PROCESSORS'

Hello
Would you have any idea why I am getting the error below when starting ComfyUI with conda, please?
Many thanks

`Torch version: 2.0.1+cu117
2024-01-09 09:13:30.296155: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-09 09:13:30.296191: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-09 09:13:30.296204: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Traceback (most recent call last):
File "/home/admin/ComfyUI/nodes.py", line 1810, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/admin/ComfyUI/custom_nodes/ComfyUI-MagicAnimate/__init__.py", line 16, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "/home/admin/ComfyUI/custom_nodes/ComfyUI-MagicAnimate/nodes.py", line 15, in <module>
from magicanimate.models.appearance_encoder import AppearanceEncoderModel
File "/home/admin/ComfyUI/custom_nodes/ComfyUI-MagicAnimate/libs/magicanimate/models/appearance_encoder.py", line 31, in <module>
from diffusers.models.attention_processor import (
ImportError: cannot import name 'ADDED_KV_ATTENTION_PROCESSORS' from 'diffusers.models.attention_processor' (/home/admin/anaconda3/envs/automatic/lib/python3.10/site-packages/diffusers/models/attention_processor.py)

Cannot import /home/admin/ComfyUI/custom_nodes/ComfyUI-MagicAnimate module for custom nodes: cannot import name 'ADDED_KV_ATTENTION_PROCESSORS' from 'diffusers.models.attention_processor' (/home/admin/anaconda3/envs/automatic/lib/python3.10/site-packages/diffusers/models/attention_processor.py)
---------------------------------`

[Feature Request] Support Any Checkpoint

Hey there. I see you ported this to Comfy...which is awesome.

I submitted a PR to the original project that adds support for any 1.5 checkpoint, eliminating the need for downloading diffusers weights.

If you want to take a look at my work, I'd be happy to help with any questions you'd have in merging it.

magic-research/magic-animate#111

The generated video turns gray

The generated video turns gray. Is this normal? When I use the same reference video and image with the original magic-animate, the output looks normal.

There is a TypeError when running the code: "expected str, bytes or os.PathLike object, not NoneType". Thank you for your help!

Error occurred when executing MagicAnimateModelLoader:

expected str, bytes or os.PathLike object, not NoneType

File "E:\coUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\coUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\coUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\coUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 82, in load_model
tokenizer = CLIPTokenizer.from_pretrained(config.pretrained_model_path, subfolder="tokenizer")
File "E:\coUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
File "E:\coUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\tokenization_utils_base.py", line 1959, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "E:\coUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\tokenization_clip.py", line 322, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:

Can anyone help me? "ModuleNotFoundError: No module named 'diffusers'"

File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1735, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\__init__.py", line 15, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 11, in <module>
from diffusers import AutoencoderKL, DDIMScheduler, UniPCMultistepScheduler
ModuleNotFoundError: No module named 'diffusers'
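A frequent cause with the portable build is that the requirements were installed into a different Python than the one that launches ComfyUI. This sketch, run with the same interpreter that starts ComfyUI, shows which interpreter is active and whether it can see diffusers:

```python
# Show which Python interpreter is running and whether it can import diffusers.
import importlib.util
import sys

print("interpreter:", sys.executable)
print("diffusers found:", importlib.util.find_spec("diffusers") is not None)
# If this prints the embedded interpreter (python_embeded\python.exe) and False,
# install the packages with that interpreter's pip, e.g.
#   python_embeded\python.exe -m pip install diffusers
```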

Weird result using OpenPose

[Screenshot 2023-12-11 194217]
I tried using OpenPose and the result is very strange. It works, but the video comes out full of glitches and artifacts. The character seems to be blended with the background, and I don't understand why. I tried various combinations of resolutions and styles for the video and the input image, but nothing helped. Please help me.

ModuleNotFoundError: No module named 'imageio'

C:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan

ComfyUI-Manager: installing dependencies done.

** ComfyUI startup time: 2024-06-30 19:46:40.849712
** Platform: Windows
** Python version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
** Python executable: C:\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: C:\ComfyUI_windows_portable\ComfyUI\main.py
** Log path: C:\ComfyUI_windows_portable\comfyui.log

Prestartup times for custom nodes:
0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate
0.4 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 10240 MB, total RAM 65349 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
Using pytorch cross attention
C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
Traceback (most recent call last):
File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1906, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\__init__.py", line 16, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\nodes.py", line 17, in <module>
from magicanimate.pipelines.pipeline_animation import AnimationPipeline
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\pipelines\pipeline_animation.py", line 63, in <module>
from magicanimate.utils.util import get_tensor_interpolation_method
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate\libs\magicanimate\utils\util.py", line 9, in <module>
import imageio
ModuleNotFoundError: No module named 'imageio'

Cannot import C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate module for custom nodes: No module named 'imageio'

Loading: ComfyUI-Manager (V2.43)

ComfyUI Revision: 2309 [dbb7dd3b] | Released on '2024-06-30'

Import times for custom nodes:
0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.2 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
0.2 seconds (IMPORT FAILED): C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagicAnimate

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json


I am encountering this issue with ComfyUI and need help resolving it; the log above is the error I'm receiving.

What about the 100GB of automatically downloaded models?

Looking at how the node operates, it is easy to see that about 100 GB of models end up in the ComfyUI\models\MagicAnimate folder, and not all of them are needed.
This makes the first startup very slow, and it may fill up some users' drives.
I also find it very odd that the official workflow makes no reference to many of them (especially those in the sd-vae-ft-mse and stable-diffusion-v1-5 folders), suggesting that they are loaded behind the scenes.
The workflow itself is remarkable, but I would prefer being able to review every model that gets loaded and to avoid downloading and storing redundant data.
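To see which sub-folders account for the space, a per-directory size report is enough; the path below is an example and should point at your own models\MagicAnimate directory:

```python
# Report how much disk each MagicAnimate model sub-folder uses.
from pathlib import Path

root = Path(r"C:\ComfyUI_windows_portable\ComfyUI\models\MagicAnimate")  # example path

for sub in sorted(p for p in root.iterdir() if p.is_dir()):
    size = sum(f.stat().st_size for f in sub.rglob("*") if f.is_file())
    print(f"{sub.name:40s} {size / 1024**3:6.2f} GiB")
```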
