comfyscript's People

Contributors

chaoses-ib, lucak5s

comfyscript's Issues

`ComfyScript: Failed to load node VHS_VideoCombine` because of `AttributeError: 'list' object has no attribute 'removesuffix'`

Hi there, thanks for this super cool repo; this is exactly what I've been waiting for to really dive into ComfyUI!

I'm trying to get an AnimateDiff workflow working, but I'm running into an issue with the node that outputs the video/GIF.

It's from https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite

The workflow is here:
whileaf-lora-workflow.json

It gets transpiled to:

from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.safetensors')
    model, clip = LoraLoader(model, clip, 'whileaf.safetensors', 1, 1)
    motion_model = ADELoadAnimateDiffModel('v3_sd15_mm.ckpt', None)
    m_models = ADEApplyAnimateDiffModel(motion_model, 0, 1, None, None, None, None, None)
    context_opts = ADELoopedUniformContextOptions(16, 1, 4, True, 'pyramid', False, 0, 1, None, None)
    settings = ADEAnimateDiffSamplingSettings(0, 'FreeNoise', 'comfy', 0, None, None, 0, False, None, None)
    model = ADEUseEvolvedSampling(model, 'autoselect', m_models, context_opts, settings)
    conditioning = CLIPTextEncode('whileaf whileaf creepy slime calligraphy graffiti runes', clip)
    conditioning2 = CLIPTextEncode('ugly', clip)
    latent = EmptyLatentImage(512, 512, 48)
    latent = KSampler(model, 820058635513319, 20, 8, 'dpmpp_2m_sde_gpu', 'karras', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    _ = VHSVideoCombine(image, 8, 0, 'AnimateDiff', 'image/gif', False, True, None, None)

but running it gives the following errors:

ComfyScript: Using ComfyUI from http://127.0.0.1:8188/
Nodes: 357
ComfyScript: Failed to load node VHS_VideoCombine
Traceback (most recent call last):
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/nodes.py", line 19, in load
    fact.add_node(node_info)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 392, in add_node
    inputs.append(f'{input_id}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in type_and_hint
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in <dictcomp>
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 19, in _remove_extension
    path = path.removesuffix(ext)
AttributeError: 'list' object has no attribute 'removesuffix'
ComfyScript: Node already exists: {'input': {'required': {'frames_per_batch': ['INT', {'default': 16, 'min': 1, 'max': 128, 'step': 1}]}, 'hidden': {'prompt': 'PROMPT', 'unique_id': 'UNIQUE_ID'}}, 'output': ['VHS_BatchManager'], 'output_is_list': [False], 'output_name': ['VHS_BatchManager'], 'name': 'VHS_BatchManager', 'display_name': 'Batch Manager 🎥🅥🅗🅢', 'description': '', 'category': 'Video Helper Suite 🎥🅥🅗🅢', 'output_node': False}
ComfyScript: Failed to load node List of any [Crystools]
Traceback (most recent call last):
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/nodes.py", line 19, in load
    fact.add_node(node_info)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 420, in add_node
    output_types = [type_and_hint(type, name, output=True)[0] for type, name in output_with_name]
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 420, in <listcomp>
    output_types = [type_and_hint(type, name, output=True)[0] for type, name in output_with_name]
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/factory.py", line 264, in type_and_hint
    enum_c, t = astutil.to_str_enum(id, { _remove_extension(s): s for s in type_info }, '    ')
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/astutil.py", line 149, in to_str_enum
    return to_enum(id, dic, indent, StrEnum)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/astutil.py", line 142, in to_enum
    return c, enum_class(id, members)
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 387, in __call__
    return cls._create_(
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 518, in _create_
    enum_class = metacls.__new__(metacls, class_name, bases, classdict)
  File "/home/hans/.conda/envs/hans/lib/python3.10/enum.py", line 208, in __new__
    raise ValueError('Invalid enum member name: {0}'.format(
ValueError: Invalid enum member name: 
ComfyScript: Failed to queue prompt: <ClientResponse(http://127.0.0.1:8188/prompt) [400 Bad Request]>
<CIMultiDictProxy('Content-Type': 'application/json; charset=utf-8', 'Content-Length': '128', 'Date': 'Sat, 10 Feb 2024 13:57:54 GMT', 'Server': 'Python/3.10 aiohttp/3.9.3')>
<ClientResponse(http://127.0.0.1:8188/prompt) [400 Bad Request]>
<CIMultiDictProxy('Content-Type': 'application/json; charset=utf-8', 'Content-Length': '128', 'Date': 'Sat, 10 Feb 2024 13:57:54 GMT', 'Server': 'Python/3.10 aiohttp/3.9.3')>
{
  "error": {
    "type": "prompt_no_outputs",
    "message": "Prompt has no outputs",
    "details": "",
    "extra_info": {}
  },
  "node_errors": []
}
Traceback (most recent call last):
  File "/home/hans/.conda/envs/hans/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/hans/.conda/envs/hans/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/HUGE/Code/ComfyUI/custom_nodes/ComfyScript/whileaf-lora-workflow.py", line 18, in <module>
    VHSVideoCombine(image, 8, 0, 'AnimateDiff', 'image/gif', False, True, None, None)
TypeError: VHSVideoCombine() takes no arguments

Any idea what might be going on?

Real mode and IPAdapterApplyFaceID node

Hi,

I'm trying to run the IPAdapterApplyFaceID node from the IPAdapter_plus package in real mode and get the error below. The IPAdapterApply node from the same package works fine. Both nodes also worked in virtual mode.
Overall this is a very nice tool; any help or hint would be appreciated.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():
    gen_model, gen_clip, gen_vae = CheckpointLoaderSimple('deliberate_v3.safetensors') 
    gen_ipadapter = IPAdapterModelLoader('ip-adapter-faceid-portrait_sd15.bin') 
    clip_vision = CLIPVisionLoader('model.safetensors') 
    insightface = InsightFaceLoader(provider='CUDA')

    adapter_applied_model = IPAdapterApplyFaceID(
        ipadapter=gen_ipadapter, 
        clip_vision=clip_vision, 
        insightface=insightface, 
        image=im, 
        model=gen_model, 
        weight=1, 
        noise=0, 
        weight_type='original', 
        start_at=0, 
        end_at=1, 
        faceid_v2=False, 
        weight_v2=1, 
        unfold_batch=False,
        ) 
Traceback (most recent call last):
  File "/home/ro/test.py", line 16, in <module>
    adapter_applied_model = IPAdapterApplyFaceID(
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 65, in new
    obj = orginal_new(cls)
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 97, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
TypeError: IPAdapterApply.apply_ipadapter() missing 2 required positional arguments: 'ipadapter' and 'model'

Doc Request - how to run script.

Hi. Python noob here.

I understand how to use ComfyUI and regular custom nodes. I have installed ComfyScript in custom_nodes. I can code (a bit).

I don't use Python, but I can follow what the scripts are doing and I'm pretty confident I could write some, so this looks really useful.

I may have missed something, but the docs say "With the runtime, you can run ComfyScript like this:"... and then just show another script.

Could I request a line or two in the docs explaining how to actually run one of the examples already given? It's probably obvious when you know; I'm just not seeing it.
Does it matter where the script files are saved?
Do I need to have comfyui running?
Do I need a workflow open?
How do I run the script?

Thanks
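
For reference, here is a minimal sketch of what running an example can look like. It assumes comfy_script is importable (e.g. installed with pip, or the ComfyScript src directory is on sys.path) and that a ComfyUI server is already running at the default http://127.0.0.1:8188; the script file itself can be saved anywhere and run with python first_script.py, and no workflow needs to be open in the web UI.

# first_script.py -- a minimal sketch, reusing the README-style workflow
from comfy_script.runtime import *

load()  # connects to the running ComfyUI server and generates the node classes
from comfy_script.runtime.nodes import *  # must be imported after load()

with Workflow():  # the workflow is queued automatically when this block exits
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')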

Getting errors after sampler run, logs hidden

I have a workflow with a sampler, but I do more after the sampler is done. In VSCode, I see the image output/preview from the sampler, but if there's a failure later, there are no logs.

LoadImage - TypeError: Object of type Image is not JSON serializable

How to use LoadImage? I'm getting this error

TypeError: Object of type Image is not JSON serializable

Specifically, I'm trying to add this to my workflow (some hardcoded values for now):

def cnet_zoedepth(ctx, img):
    control_net = ControlNetLoaderAdvanced(ControlNetLoaderAdvanced.control_net_name.diffusers_xl_depth_mid, None)
    img = ImageResize(img, 'rescale', 'true', 'lanczos', 1, ctx.width, ctx.height)
    preprocessed_img = ZoeDepthAnythingPreprocessor(img, 'indoor', ctx.width)
    ctx.pos, ctx.neg, ctx.model = ACNAdvancedControlNetApply(ctx.pos, ctx.neg, control_net, preprocessed_img, strength=0.4, start_percent=0, end_percent=0.4)
img, _ = LoadImage("D:\sd\input\clipspace\clipspace-rgba-534537233.png")
cnet_zoedepth(ctx, img)

Is it possible to add auto-transpile to create a python script for running?

I know python -m comfy_script.transpile workflow.json can create a script:

    model, clip, vae = CheckpointLoaderSimple('realisticVisionV60B1_v51HyperVAE.safetensors')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)    
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')

Then I add this code to make it run:

from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *
with Workflow():

Is it possible to add an auto-transpile that converts a workflow into a script.py, so I can convert it and then run it without editing any code?
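
A small wrapper along these lines might do it; this is a sketch under the assumption that the transpile CLI prints the node calls to stdout (if it writes a file instead, read that file) and that the header above is all that needs to be prepended:

# auto_transpile.py -- hypothetical wrapper, not part of ComfyScript
import subprocess
import sys
import textwrap

HEADER = (
    'from comfy_script.runtime import *\n'
    'load()\n'
    'from comfy_script.runtime.nodes import *\n'
    '\n'
    'with Workflow():\n'
)

def workflow_to_script(workflow_json: str, out_py: str) -> None:
    # Run the existing transpiler and capture its output.
    body = subprocess.run(
        [sys.executable, '-m', 'comfy_script.transpile', workflow_json],
        capture_output=True, text=True, check=True,
    ).stdout
    # Indent the node calls into the `with Workflow():` block if necessary.
    if not body.startswith('    '):
        body = textwrap.indent(body, '    ')
    with open(out_py, 'w', encoding='utf-8') as f:
        f.write(HEADER + body)

workflow_to_script('workflow.json', 'script.py')

After this, python script.py should run the converted workflow without further editing.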

Save Image returns what?

I'm a bit confused by this line in the examples that lets you choose an image to hiresfix:

image_batches.append(SaveImage(VAEDecode(latent, vae), f'{seed} {color}'))

AFAIK SaveImage doesn't return anything. What am I missing?

Custom nodes that fail to load can make the whole loading fail

I found that this node makes the whole load fail:
IFRNet VFI
It's from ComfyUI-Frame-Interpolation

I added an exception catch to ignore it in nodes.py and the rest of the nodes loaded properly.
[screenshot]

Edit:
The same thing happens in the nodes.py of the real mode.

Handling "UI-only" nodes

It seems some nodes are client-only, like Reroute (rgthree). I'm getting this error, and I'm assuming that's why: since it's only JavaScript, there's no associated Python code for it. How should I handle that?

Traceback (most recent call last):
  File "E:\custom_nodes\ComfyScript\src\comfy_script\nodes\__init__.py", line 97, in chunks
    comfy_script = transpile.WorkflowToScriptTranspiler(workflow).to_script(end_nodes)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 340, in to_script
    c += self._node_to_assign_st(self.G.nodes[node])
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 193, in _node_to_assign_st
    args = self._keyword_args_to_positional(v.type, args_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 122, in _keyword_args_to_positional
    input_types = self._get_input_types(node_type)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 81, in _get_input_types
    return self.nodes_info[node_type]['input']
           ~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'Reroute (rgthree)'

[Bug Report] Misrecognition of nodes with same name

Thanks for your great job, but I found a bug that really confuses me.

It seems that ComfyScript can't recognize some nodes correctly. In my workflow, I have a node (Image Resize from Image Resize for ComfyUI):
[screenshot]
In the output script, it gave me "image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)"
Running it, an error occurred:
Traceback (most recent call last):
  File "/home/ruanxy/work/gen_tools/main.py", line 10, in <module>
    image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)
ValueError: too many values to unpack (expected 2)
I tried to fix it by adding a return value like this: "image, _ = ImageResize(image, 'crop to ratio', 0, 1024, 0, 'any', '9:16', 0.5, 0, None)"
A new error occurred:
"
ERROR:root:Failed to validate prompt for output SaveImage.0:
ERROR:root:* ImageResize+ ImageResize+.0:
ERROR:root: - Failed to convert an input value to a INT value: width, crop to ratio, invalid literal for int() with base 10: 'crop to ratio'
ERROR:root: - Value not in list: interpolation: '1024' not in ['nearest', 'bilinear', 'bicubic', 'area', 'nearest-exact', 'lanczos']
ERROR:root: - Value not in list: condition: 'any' not in ['always', 'only if bigger', 'only if smaller']
"
Through the error information I realized that ComfyScript recognized the node as (Image Resize from ComfyUI Essentials):
[screenshot]

I want to know how to deal with this kind of situation.
Thank you.

Import issues and type stubs

This might be an issue with me lacking Python skills.

import random
import sys
import os
#sys.path.insert(0, 'script/runtime')
#from nodes import *
#from script.runtime.nodes import *

sys.path.insert(0, '../../')
import folder_paths
sys.path.insert(0, 'src')
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

####################
# Randomize script #
####################

#
# Config
#

# set to true if sdxl, false if sd 1.5
xl = True 

# needs to have checkpoints and loras divided in SDXL and SD 1.5 folders, embeddings needs to be divided into positive and negative folders
xl_folder_name = "xl"
sd1_5_folder_name = "sd1.5"

pos_folder_name = "pos"
neg_folder_name = "neg"

# number of images to create
images = 10

# checkpoint
randomize_checkpoint = True
# used if randomize_checkpoint is false
default_checkpoint = "xl\turbovisionxlSuperFastXLBasedOnNew_alphaV0101Bakedvae.safetensors"

# lora
randomize_lora = True
number_of_loras = 3
lora_min_value = 0.1
lora_max_value = 2
# default lora setup using CRLoRAStack, used if randomize_lora is false, if you have more then 3 loras you need to modify the code
default_lora_stack = ('On', r'xl\LCMTurboMix_LCM_Sampler.safetensors', 1, 1, 'On', r'xl\xl_more_art-full_v1.safetensors', 1, 1, 'On', r'xl\add-detail-xl.safetensors', 1, 1)

# prompt
fully_randomized_prompt = False # TODO use https://github.com/adieyal/comfyui-dynamicprompts to generate random prompt
positive_prompt = "Shot Size - extreme wide shot,( Marrakech market at night time:1.5), Moroccan young beautiful woman, smiling, exotic, (loose hijab:0.1)"
negative_prompt = "(worst quality, low quality, normal quality:2), blurry, depth of field, nsfw"
randomize_positive_embeddings = True
randomize_negative_embeddings = True
embeddings_positive_min_value = 0.1
embeddings_positive_max_value = 2
embeddings_negative_min_value = 0.1
embeddings_negative_max_value = 2

# freeu
randomize_freeu = True
min_freeu_values = [0.5, 0.5, 0.2, 0.1]
max_freeu_values = [3, 3, 2, 1]
# used if randomize_freeu is set to false
default_freeu_values = (1.3, 1.4, 0.9, 0.2)


# code taken from impact.utils
def add_folder_path_and_extensions(folder_name, full_folder_paths, extensions):
    # Iterate over the list of full folder paths
    for full_folder_path in full_folder_paths:
        # Use the provided function to add each model folder path
        folder_paths.add_model_folder_path(folder_name, full_folder_path)

    # Now handle the extensions. If the folder name already exists, update the extensions
    if folder_name in folder_paths.folder_names_and_paths:
        # Unpack the current paths and extensions
        current_paths, current_extensions = folder_paths.folder_names_and_paths[folder_name]
        # Update the extensions set with the new extensions
        updated_extensions = current_extensions | extensions
        # Reassign the updated tuple back to the dictionary
        folder_paths.folder_names_and_paths[folder_name] = (current_paths, updated_extensions)
    else:
        # If the folder name was not present, add_model_folder_path would have added it with the last path
        # Now we just need to update the set of extensions as it would be an empty set
        # Also ensure that all paths are included (since add_model_folder_path adds only one path at a time)
        folder_paths.folder_names_and_paths[folder_name] = (full_folder_paths, extensions)
        
model_path = folder_paths.models_dir
add_folder_path_and_extensions("loras", [os.path.join(model_path, "loras")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("loras_xl", [os.path.join(model_path, "loras", xl_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("loras_sd1.5", [os.path.join(model_path, "loras", sd1_5_folder_name)], folder_paths.supported_pt_extensions)

add_folder_path_and_extensions("checkpoints", [os.path.join(model_path, "checkpoints")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("checkpoints_xl", [os.path.join(model_path, "checkpoints", xl_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("checkpoints_sd1.5", [os.path.join(model_path, "checkpoints", sd1_5_folder_name)], folder_paths.supported_pt_extensions)

add_folder_path_and_extensions("embeddings", [os.path.join(model_path, "embeddings")], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("embeddings_pos", [os.path.join(model_path, "embeddings", pos_folder_name)], folder_paths.supported_pt_extensions)
add_folder_path_and_extensions("embeddings_neg", [os.path.join(model_path, "embeddings", neg_folder_name)], folder_paths.supported_pt_extensions)

def get_random_loras():
    if xl == True:
        loras = [xl_folder_name + "/" + x for x in folder_paths.get_filename_list("loras_xl")]
    else:
        loras = [sd1_5_folder_name + "/" + x for x in folder_paths.get_filename_list("loras_sd1.5")]
    return random.sample(loras, number_of_loras)
    
def get_lora_strength():
    return random.uniform(lora_min_value, lora_max_value)
    
def get_random_checkpoint():
    if xl == True:
        checkpoints = [xl_folder_name + "/" + x for x in folder_paths.get_filename_list("checkpoints_xl")]
    else:
        checkpoints = [sd1_5_folder_name + "/" + x for x in folder_paths.get_filename_list("checkpoints_sd1.5")]
    return random.choice(checkpoints)
    
def get_positive_embedding_strength():
    return random.uniform(embeddings_positive_min_value, embeddings_positive_max_value)
    
def get_random_pos_embedding():
    pos_embeddings = [pos_folder_name + "/" + x for x in folder_paths.get_filename_list("embeddings_pos")]
    num_of_embeddings = random.randint(0, len(pos_embeddings))
    if num_of_embeddings == 0:
        return ""
    samples = random.sample(pos_embeddings, num_of_embeddings)
    string = ""
    for sample in samples:
        string += ", (embedding:" + sample + ":" + str(get_positive_embedding_strength()) + ")"
    return string
    
def get_negative_embedding_strength():
    return random.uniform(embeddings_negative_min_value, embeddings_negative_max_value)
    
def get_random_neg_embedding():
    neg_embeddings = [neg_folder_name + "/" + x for x in folder_paths.get_filename_list("embeddings_neg")]
    num_of_embeddings = random.randint(0, len(neg_embeddings))
    if num_of_embeddings == 0:
        return ""
    samples = random.sample(neg_embeddings, num_of_embeddings)
    string = ""
    for sample in samples:
        string += ", (embedding:" + sample + ":" + str(get_negative_embedding_strength()) + ")"
    return string

def get_random_freeu_values():
    return (random.uniform(min_freeu_values[0],max_freeu_values[0]),random.uniform(min_freeu_values[1],max_freeu_values[1]),
            random.uniform(min_freeu_values[2],max_freeu_values[2]),random.uniform(min_freeu_values[3],max_freeu_values[3]))
        
with Workflow():
    # checkpoint
    if randomize_checkpoint == True:
        model, clip, vae = CheckpointLoaderSimple(get_random_checkpoint())
    else:
        model, clip, vae = CheckpointLoaderSimple(default_checkpoint)
    
    # loras
    if randomize_lora == True:
        loras = get_random_loras()
        for lora in loras:
            model, clip = LoraLoader(model, clip, lora, get_lora_strength(), get_lora_strength())
    else:
        lora_stack, _ = CRLoRAStack(default_lora_stack)
        model, clip, _ = CRApplyLoRAStack(model, clip, lora_stack)
    
    
    # freeu
    if randomize_freeu == True:
        model = FreeUV2(model, get_random_freeu_values())
    else:
        model = FreeUV2(model, default_freeu_values)
    
    # positive prompt
    if randomize_positive_embeddings == True:
        pos_string = positive_prompt + get_random_pos_embedding()
    else:
        pos_string = positive_prompt

    pos_cond = CLIPTextEncode(pos_string, clip)
    
    # negative prompt
    if randomize_negative_embeddings == True:
        neg_string = negative_prompt + get_random_neg_embedding()
    else:
        neg_string = negative_prompt

    neg_cond = CLIPTextEncode(neg_string, clip)

Gives error:

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 170, in <module>
    model = FreeUV2(model, get_random_freeu_values())
            ^^^^^^^
NameError: name 'FreeUV2' is not defined

Uncomment lines 4 and 5 and it works.

FreeUV2 is not recognized in VS Code; uncommenting line 6 makes it recognized, but then the script gives this error:

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 6, in <module>
    from script.runtime.nodes import *
ModuleNotFoundError: No module named 'script.runtime.nodes'

Not sure if this is how type stubs should be imported; I'm new to Python. The script is a work in progress, by the way.

Can't import plugin. Error is: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'

Installed the latest version and had installed requirements.txt.
Error Info:

ComfyScript: Loading nodes...
Traceback (most recent call last):
  File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1810, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyScript\__init__.py", line 20, in <module>
    NODE_CLASS_MAPPINGS.update(ComfyUI_Ib_CustomNodes.NODE_CLASS_MAPPINGS)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'

Cannot import E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyScript module for custom nodes: module 'ComfyScript.nodes.ComfyUI_Ib_CustomNodes' has no attribute 'NODE_CLASS_MAPPINGS'
Adding E:\AI\ComfyUI\ComfyUI-webui\ComfyUI_windows_portable\ComfyUI\custom_nodes to sys.path


Generation preview

Is it possible to get previews of an in-progress generation, similar to what is shown in the web ui, using comfyscript? Ideally I'd like some form of stream I can process per-step.
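
ComfyScript aside, one way to experiment with this is to listen on ComfyUI's websocket directly. The sketch below assumes the server is at 127.0.0.1:8188, previews are enabled (--preview-method), and binary frames are laid out as 4 bytes event type + 4 bytes image format followed by the image data; note also that ComfyUI routes preview frames to the client id that queued the prompt, so this listener would need to share the client id used for queuing (not verified here).

# preview_listener.py -- a rough sketch, not a ComfyScript API
import struct
import uuid

import websocket  # pip install websocket-client

client_id = str(uuid.uuid4())
ws = websocket.WebSocket()
ws.connect(f'ws://127.0.0.1:8188/ws?clientId={client_id}')

step = 0
while True:
    msg = ws.recv()
    if isinstance(msg, bytes):
        event_type, image_format = struct.unpack('>II', msg[:8])
        if event_type == 1:  # preview image frame
            step += 1
            ext = 'jpg' if image_format == 1 else 'png'
            with open(f'preview_{step:04d}.{ext}', 'wb') as f:
                f.write(msg[8:])
    # Text frames are JSON status/progress messages and are ignored here.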

Auto Queue

What would be the idiomatic way of continuously queuing a workflow (incrementing the seed each time) in virtual mode?
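
One straightforward pattern in virtual mode is to re-enter a Workflow block in a loop and bump the seed each iteration; each block exit queues one prompt. A minimal sketch (the checkpoint and prompts are placeholders):

from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

base_seed = 0
for i in range(100):  # or `while True:` for an endless auto-queue
    with Workflow():
        model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
        pos = CLIPTextEncode('beautiful scenery', clip)
        neg = CLIPTextEncode('text, watermark', clip)
        latent = EmptyLatentImage(512, 512, 1)
        latent = KSampler(model, base_seed + i, 20, 8, 'euler', 'normal', pos, neg, latent, 1)
        SaveImage(VAEDecode(latent, vae), 'ComfyUI')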

Enumeration enhancements

Great work with this project so far. I just got started and it's working great. Can't wait to create some really great stuff with this. I had a quick suggestion: currently, if a checkpoint, VAE, LoRA, etc. is located in a subfolder, it appears as file_path_{checkpoint/vae/lora/etc}_name. Since these enumerations are created dynamically, would it be possible to represent the file structure as nested enumerations?

Why is this convenient? For example, I, and I'm sure many others, organize checkpoints into folders. Namely, I have folders for inpainting models, and for SDXL and SD1.5 models. I would love to be able to access models with
Checkpoints.sd15.dreamshaper8 rather than Checkpoints.sd15_dreamshaper8. The former looks much nicer and, even more importantly, aids in looping, say if I want to loop over all my SD1.5 checkpoints. What do you think?
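
A rough sketch of what that nesting could look like, independent of how ComfyScript actually builds its enums today; SimpleNamespace stands in for nested enums here, and folder or file names containing dots or spaces would still need the kind of name mangling the current flat members get:

from types import SimpleNamespace

def nest(filenames):
    # Build a dict tree keyed by path components, then freeze it into namespaces.
    root = {}
    for name in filenames:
        parts = name.replace('\\', '/').split('/')
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        stem = parts[-1].rsplit('.', 1)[0]
        node[stem] = name  # the leaf keeps the full relative path string
    def to_ns(d):
        return SimpleNamespace(**{k: to_ns(v) if isinstance(v, dict) else v
                                  for k, v in d.items()})
    return to_ns(root)

Checkpoints = nest(['sd15/dreamshaper8.safetensors', 'sdxl/juggernautXL.safetensors'])
print(Checkpoints.sd15.dreamshaper8)          # 'sd15/dreamshaper8.safetensors'
for name in vars(Checkpoints.sd15).values():  # loop over all SD1.5 checkpoints
    print(name)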

AttributeError: module 'comfy.cmd.server' has no attribute 'nodes'

Hi there -- I'm attempting to use ComfyScript with the ComfyUI package in a Docker image. I've followed the Readme as best I could, paired with a little reading of the source code, but I'm now stuck. Take a look at my imports and setup:

from comfy_script.runtime import *
import random
import boto3
import os
import tarfile

load('comfyui')
from comfy_script.runtime.nodes import *

Pretty standard stuff, I think. However, when the script hits the load line, it dies and spits out the following error:

Traceback (most recent call last):
  File "/app/main.py", line 7, in <module>
    load('comfyui')
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 31, in load
    asyncio.run(_load(comfyui, args, vars, watch, save_script_source))
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "/home/user/micromamba/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/user/micromamba/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 54, in _load
    start_comfyui(comfyui, args)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 340, in start_comfyui
    loop.run_until_complete(main.main())
  File "/home/user/micromamba/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "/home/user/micromamba/lib/python3.10/asyncio/futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/user/micromamba/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy/cmd/main.py", line 201, in main
    exit(0)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 271, in exit_hook
    setup_comfyui_polyfills(outer.f_locals)
  File "/home/user/micromamba/lib/python3.10/site-packages/comfy_script/runtime/__init__.py", line 235, in setup_comfyui_polyfills
    exported_nodes = comfy.cmd.server.nodes
AttributeError: module 'comfy.cmd.server' has no attribute 'nodes'

Pasted the whole stack trace for context, but the error is evident at the bottom there.

The relevant parts of my Dockerfile -- just installing the required packages, nothing special:

RUN pip install wheel
RUN pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
RUN pip install -U "comfy-script[default]"

Any ideas on where this is going wrong? Thanks!

Am I doing this right?

I might be missing some fundamental knowledge here; I'm new to Python. I copied a script called test.py:

from script.runtime import *
load()
from script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')

Put it in the ComfyScript folder and ran it with "python test.py"; the result:

Nodes: 1018
Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\test.py", line 2, in <module>
    load()
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\__init__.py", line 18, in load
    asyncio.run(_load(api_endpoint, vars, watch, save_script_source))
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\nest_asyncio.py", line 31, in run
    return loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\nest_asyncio.py", line 99, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\asyncio\tasks.py", line 267, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\__init__.py", line 29, in _load
    nodes.load(nodes_info, vars)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\nodes.py", line 10, in load
    fact.add_node(node_info)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 124, in add_node
    inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 69, in type_and_hint
    enum_c, t = astutil.to_str_enum(name, { _remove_extension(s): s for s in type_info }, '    ')
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 69, in <dictcomp>
    enum_c, t = astutil.to_str_enum(name, { _remove_extension(s): s for s in type_info }, '    ')
                                            ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\runtime\factory.py", line 14, in _remove_extension
    path = path.removesuffix(ext)
           ^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'removesuffix'

Using Python 3.11.

Also, am I supposed to run some kind of IDE and feed it lines at runtime, like the queue commands?

How to get value from the output of a node and perform math or other operations

Hello! I have this code here

async def extend_audio(params: AudioWorkflow):
    async with Workflow(wait=True) as wf:
        model, model_sr = MusicgenLoader()
        audio, sr, duration = LoadAudio(params.snd_filename)
        audio = ConvertAudio(audio, sr, model_sr, 1)
        audio = ClipAudio(audio, duration - 10.0, duration, model_sr) # I would like to perform this math
        raw_audio = MusicgenGenerate(model, audio, 4, duration + params.duration, params.cfg, params.top_k, params.top_p, params.temperature, params.seed or random.randint(0, 2**32 - 1))
        audio = ClipAudio(audio, 0.0, duration - 10.0, model_sr)
        audio = ConcatAudio(audio, raw_audio)
        spectrogram_image = SpectrogramImage(audio, 1024, 256, 1024, 0.4)
        spectrogram_image = ImageResize(spectrogram_image, ImageResize.mode.resize, True, ImageResize.resampling.lanczos, 2, 512, 128)
        video = CombineImageWithAudio(spectrogram_image, audio, model_sr, CombineImageWithAudio.file_format.webm, "final_output")
        await wf.queue()._wait()
    results = await video._wait()
    return await get_data(results)

And I would like to perform math on the resulting duration from LoadAudio. If I try to use it raw, it just throws the error TypeError: unsupported operand type(s) for -: 'Float' and 'float'. Is it possible to get the result here?

Was Node Suite Image Save is not transpiled

workflow (79).json

It is not transpiled. Also, when I added the "was node suite image save" node to my script manually and ran the script, it embedded the whole script into the image instead of the workflow.

ImageSave(
    image,      # images: Image,
    path,       # output_path: str = '[time(%Y-%m-%d)]',
    'ComfyUI',  # filename_prefix: str = 'ComfyUI',
    '_',        # filename_delimiter: str = '_',
    4,          # filename_number_padding: int = 4,
    'false',    # filename_number_start: ImageSave.filename_number_start = 'false',
    'webp',     # extension: ImageSave.extension = 'png',
    80,         # quality: int = 100,
    'true',     # lossless_webp: ImageSave.lossless_webp = 'false',
    'false',    # overwrite_mode: ImageSave.overwrite_mode = 'false',
    'false',    # show_history: ImageSave.show_history = 'false',
    'true',     # show_history_by_prefix: ImageSave.show_history_by_prefix = 'true',
    'true',     # embed_workflow: ImageSave.embed_workflow = 'true',
    'true',     # show_previews: ImageSave.show_previews = 'true'
)

Real mode and UltimateSDUpscale node

Hi,
I'm getting an error running Ultimate SD Upscale in real mode. Virtual mode works fine.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():

    base_model, base_clip, base_vae = CheckpointLoaderSimple('deliberate_v3.safetensors') 
    upscale_model = UpscaleModelLoader('4x-UltraSharp.pth')

    conditioning = CLIPTextEncode('', base_clip)
    
    upscaled_image = UltimateSDUpscale(
        image=im,
        model=base_model,
        positive=conditioning,
        negative=conditioning, 
        vae=base_vae,
        upscale_by=2,
        seed=0,
        steps=18,
        cfg=1,
        sampler_name='ddpm',
        scheduler='karras',
        denoise=0.2,
        upscale_model=upscale_model, 
        mode_type='Linear',
        tile_width=1024,
        tile_height=1024,
        mask_blur=16,
        tile_padding=32,
        seam_fix_mode='Half Tile',
        seam_fix_denoise=0.5,
        seam_fix_width=64,
        seam_fix_mask_blur=16,
        seam_fix_padding=32, 
        force_uniform_tiles=False, 
        tiled_decode=False) 
Traceback (most recent call last):
  File "/home/ro/test.py", line 17, in <module>
    upscaled_image = UltimateSDUpscale(
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 91, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
  File "/home/ro/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py", line 116, in upscale
    sdprocessing = StableDiffusionProcessing(
  File "/home/ro/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py", line 40, in __init__
    self.vae_decoder = VAEDecode()
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 91, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
TypeError: VAEDecode.decode() missing 2 required positional arguments: 'vae' and 'samples'

[Question] About lifecycle of models

Hi, I haven't tried it yet but I'm genuinely interested; my question is about the lifecycle of loaded models. In ComfyUI, whenever I switch workflows it has to reload the models because they have different node IDs. I'm just wondering whether using this library would fix that if I were to create a repository of workflows.

Real mode and ImageBatch node

Hi,
I'm getting an error running ImageBatch in real mode. It's fine in virtual mode.

from comfy_script.runtime.real import *
load()
from comfy_script.runtime.real.nodes import *


im, mask = LoadImage('test.png')

with Workflow():
    ImageBatch(im, im)
Traceback (most recent call last):
  File "/home/ro/test.py", line 9, in <module>
    ImageBatch(im, im)
  File "/home/ro/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/nodes.py", line 132, in new
    outputs = getattr(obj, obj.FUNCTION)(*args, **kwds)
  File "/home/ro/ComfyUI/nodes.py", line 1658, in batch
    s = torch.cat((image1, image2), dim=0)
TypeError: Multiple dispatch failed for 'torch.cat'; all __torch_function__ handlers returned NotImplemented:

  - tensor subclass <class 'comfy_script.runtime.real.nodes.RealNodeOutputWrapper'>

For more information, try re-running with TORCH_LOGS=not_implemented

How do I manage memory in real mode, so that nodes whose input parameters have not changed are not re-run?

I lack basic knowledge of Python and software development in general. I searched around and saw people using decorators like @cached and @lru_cache to do caching; I'm not sure whether that is the correct way to manage the cache myself, as mentioned in the docs. If it isn't, could you give me the keywords for the relevant approach so I can look it up? Thanks.

ๅฆๅค–๏ผŒๆ„Ÿ่ฐขไธ€ไธ‹ๅš่ฟ™ไธชrepo๏ผŒๆˆ‘ไน‹ๅ‰็”จ huggingface ็š„ diffusers ๏ผŒๆ„Ÿ่ง‰ๅฏนไบŽๆˆ‘่ฟ™็ง่œ้ธŸๆฅ่ฏด๏ผŒ้‡Œ้ข็š„ class function ่ฎพ่ฎก็š„ๆŠฝ่ฑกๅŒ–็จ‹ๅบฆ่ฆไนˆๆœ‰็‚นๆ•ด่ฆไนˆๆœ‰็‚น่ฟ‡็ป†๏ผŒๆฒกๆœ‰ comyui ็š„้ข—็ฒ’ๅบฆ้€‚ๅˆๆˆ‘ใ€‚ไน‹ๅ‰ๅฐฑๅœจๆƒณ่ฆๆ˜ฏ comfyui ๆœ‰ library ็š„่ฏๅฐฑๅฅฝไบ†๏ผŒไฝ†ๆ˜ฏๆˆ‘่‡ชๅทฑๆ— ไปŽไธ‹ๆ‰‹๏ผŒๆฒกๆƒณๅˆฐ็œŸ็š„ๆœ‰ไบบๅšๅ‡บๆฅไบ†ใ€‚

Ideas for getting the image preview associated with a Lora?

The Python enumerations are a nice touch. Though, some of the names of the LoRAs I have aren't enough for me to remember what they are going to influence. Any ideas on how to have VS Code show an image preview when looking at the enumerations?

LoadImage to take as input a Path

Hey, very quick issue that you can fix whenever you have time: when using a Path in a LoadImage node, we hit a JSON encode error saying the path is not serialisable. Right now I fix it by using str(filepath) before giving it as input to the node, but maybe it would be worth implementing a JSONEncoder class that simply does it in the background? I think this happens when using a workflow that tries to serialise things at some point:

Exception traceback:
File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***)
    228 try:
    229     if exc is None:
    230         # We use the `send` method directly, because coroutines
    231         # don't have `__iter__` and `__next__` methods.
--> 232         result = coro.send(None)
    233     else:
    234         result = coro.throw(exc)

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:501, in Workflow._exit(self, source)
    499 nodes.Node.clear_output_hook()
    500 if self._queue_when_exit:
--> 501     if await self._queue(source):
    502         # TODO: Fix multi-thread print
    503         # print(task)
    504         if self._wait_when_exit:
    505             await self.task

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:480, in Workflow._queue(self, source)
    477 elif self._cancel_remaining_when_queue:
    478     await queue._cancel_remaining()
--> 480 self.task = await queue._put(self, source)
    481 for output in self._outputs:
    482     output.task = self.task

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/__init__.py:216, in TaskQueue._put(self, workflow, source)
    210 if _save_script_source:
    211     extra_data = {
    212         'extra_pnginfo': {
    213             'ComfyScriptSource': source
    214         }
    215     }
--> 216 async with session.post(f'{client.endpoint}prompt', json={
    217     'prompt': prompt,
    218     'extra_data': extra_data,
    219     'client_id': _client_id,
    220 }) as response:
    221     if response.status == 200:
    222         response = await response.json()

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/client.py:1187, in _BaseRequestContextManager.__aenter__(self)
   1186 async def __aenter__(self) -> _RetType:
-> 1187     self._resp = await self._coro
   1188     return self._resp

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/client.py:430, in ClientSession._request(self, method, str_or_url, params, data, json, cookies, headers, skip_auto_headers, auth, allow_redirects, max_redirects, compress, chunked, expect100, raise_for_status, read_until_eof, proxy, proxy_auth, timeout, verify_ssl, fingerprint, ssl_context, ssl, server_hostname, proxy_headers, trace_request_ctx, read_bufsize, auto_decompress, max_line_size, max_field_size)
    426     raise ValueError(
    427         "data and json parameters can not be used at the same time"
    428     )
    429 elif json is not None:
--> 430     data = payload.JsonPayload(json, dumps=self._json_serialize)
    432 if not isinstance(chunked, bool) and chunked is not None:
    433     warnings.warn("Chunk size is deprecated #1615", DeprecationWarning)

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/aiohttp/payload.py:396, in JsonPayload.__init__(self, value, encoding, content_type, dumps, *args, **kwargs)
    385 def __init__(
    386     self,
    387     value: Any,
   (...)
    392     **kwargs: Any,
    393 ) -> None:
    395     super().__init__(
--> 396         dumps(value).encode(encoding),
    397         content_type=content_type,
    398         encoding=encoding,
    399         *args,
    400         **kwargs,
    401     )

[...]

--> 179     raise TypeError(f'Object of type {o.__class__.__name__} '
    180                     f'is not JSON serializable')

TypeError: Object of type PosixPath is not JSON serializable

thanks!
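
A minimal sketch of that JSONEncoder idea, using only the standard library; the dumps_with_paths helper below is hypothetical and not part of ComfyScript:

import json
from pathlib import Path, PurePath

class PathEncoder(json.JSONEncoder):
    def default(self, o):
        # Serialize any pathlib path as its string form; defer everything
        # else to the base implementation (which raises TypeError).
        if isinstance(o, PurePath):
            return str(o)
        return super().default(o)

def dumps_with_paths(value) -> str:
    return json.dumps(value, cls=PathEncoder)

print(dumps_with_paths({'image': Path('input/clipspace/example.png')}))

Since aiohttp's ClientSession accepts a custom json_serialize callable, a dumps function like this could in principle be plugged into the session ComfyScript uses to post prompts.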

Transpile error with Latent Blend?

Traceback (most recent call last):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\nodes\__init__.py", line 94, in chunks
    comfy_script = transpile.WorkflowToScriptTranspiler(workflow).to_script(end_nodes)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 338, in to_script
    for node in self._topological_generations_ordered_dfs(end_nodes):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 313, in _topological_generations_ordered_dfs
    yield from visit(v)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 308, in visit
    yield from visit(node_u)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\__init__.py", line 303, in visit
    inputs = passes.multiplexer_node_input_filter(G.nodes[node], self._widget_values_to_dict(v.type, v.widgets_values))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\transpile\passes\__init__.py", line 127, in multiplexer_node_input_filter
    if widget_values[k] != value:
       ~~~~~~~~~~~~~^^^
KeyError: 'blend_mode'
Prompt executed in 7.81 seconds

workflow (83).json

Trouble Running Example Script with Custom Model in Conda Environment on Windows 10

Hello,

Firstly, I want to express my gratitude for the effort and dedication you've put into developing this project. It's evident that it holds substantial promise. However, I've encountered some challenges while attempting to execute scripts.

I followed the installation process outlined for ComfyScript using the "Package and nodes with ComfyUI" method, opting for a standalone (non-portable) version of comfyui. For your information, my setup involves running comfy within a conda environment on a Windows 10 system.

I attempted to run the example script provided in the README.md file of the repository, making a minor modification to use a different model that I possess. You can find the script I used here: test.py.txt.

Encountering errors, I've captured and shared the log for your review: ComfyScript.log.

Could you offer any guidance or suggestions on how to resolve these issues and successfully run the script? Any advice would be greatly appreciated.

Thank you for your support.

ipywidgets/gui from VSCode?

I'm not that familiar with Jupyter Notebooks. Is it possible to get the images / simple UI you have in the README straight from VSCode? Do you have any other recommendations/reading around that?

key error

  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\__init__.py", line 118, in chunks
    comfy_script = transpile.WorkflowToScriptTranspiler(workflow).to_script(end_nodes)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 296, in to_script
    for node in self._topological_generations_ordered_dfs(end_nodes):
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 287, in _topological_generations_ordered_dfs
    yield from visit(v)
  File "C:\Users\lingo\Desktop\ComfyUI\custom_nodes\ComfyScript\script\transpile\__init__.py", line 273, in visit
    v = G.nodes[node]['v']
        ~~~~~~~^^^^^^
  File "C:\Users\lingo\AppData\Local\Programs\Python\Python311\Lib\site-packages\networkx\classes\reportviews.py", line 194, in __getitem__
    return self._nodes[n]
           ~~~~~~~~~~~^^^
KeyError: '9'

Using default "glass bottle landscape" workflow, tried with global 3.11 python and embedded 3.12 python versions of ComfyUI.

edit: Transpiler works when using CLI.

ComfyUI embedded installation failure

I ran through this, with slightly different steps:

https://github.com/Chaoses-Ib/ComfyScript?tab=readme-ov-file#with-comfyui

cd /e/custom_nodes/ComfyScript
/e/ComfyUI_windows_portable/python_embeded/python -m pip install -e ".[default]"
$ /e/ComfyUI_windows_portable/python_embeded/python -m pip install -e ".[default]"
Obtaining file:///E:/custom_nodes/ComfyScript
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
ERROR: Exception:
Traceback (most recent call last):
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
             ^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
    return func(self, options, args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\commands\install.py", line 377, in run
    requirement_set = resolver.resolve(
                      ^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 76, in resolve
    collected = self.factory.collect_root_requirements(root_reqs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 534, in collect_root_requirements
    reqs = list(
           ^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 490, in _make_requirements_from_install_req
    cand = self._make_base_candidate_from_link(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 207, in _make_base_candidate_from_link
    self._editable_candidate_cache[link] = EditableCandidate(
                                           ^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 315, in __init__
    super().__init__(
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
    self.dist = self._prepare()
                ^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 222, in _prepare
    dist = self._prepare_distribution()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 325, in _prepare_distribution
    return self._factory.preparer.prepare_editable_requirement(self._ireq)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\operations\prepare.py", line 696, in prepare_editable_requirement
    dist = _get_prepared_distribution(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\operations\prepare.py", line 71, in _get_prepared_distribution
    abstract_dist.prepare_distribution_metadata(
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\distributions\sdist.py", line 52, in prepare_distribution_metadata
    self.req.isolated_editable_sanity_check()
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\req\req_install.py", line 545, in isolated_editable_sanity_check
    and not self.supports_pyproject_editable()
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\req\req_install.py", line 257, in supports_pyproject_editable
    return "build_editable" in self.pep517_backend._supported_features()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 153, in _supported_features
    return self._call_hook('_supported_features', {})
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 321, in _call_hook
    raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
    obj = import_module(mod_path)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "importlib\__init__.py", line 126, in import_module
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'hatchling'

Any ideas about this?

Transpiler and nodes with optional inputs

The empty optional inputs of a node are not set to None when transpiling a workflow, which causes an error. I can set them to None manually of course, but I thought I'd mention it.

Transpiled DetailerForEachPipe from Impact pack:
image5, _, _, _ = DetailerForEachPipe(image3, segs, 1024, True, 1024, 395935899176991, 10, 3, 'lcm', 'ddim_uniform', 0.1, 50, True, True, basic_pipe, '', 0.0, 1, True, 50)
Should be:
image5, _, _, _ = DetailerForEachPipe(image3, segs, 1024, True, 1024, 395935899176991, 10, 3, 'lcm', 'ddim_uniform', 0.1, 50, True, True, basic_pipe, '', 0.0, 1, None, None, True, 50)

Turn off transpiler option/transpile on command

I have a habit of printing out seed numbers so I can go back to specific seeds that I liked. The transpiler really crowds up my command window, so that I have to scroll a lot. Is there an easy way to disable automatic transpiling, maybe by adding an option in settings? I think we can always use the cmdline version of the transpiler in case we need it.

AssertionError when using `load()` in example notebook

Hey, thanks very much for your work, it looks awesome.

I hit an error when trying to make the example notebook runtime.ipynb work:

from script.runtime import *

load()
Error stack trace:
Nodes: 529
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[1], line 4
      1 from script.runtime import *
      3 # load('http://127.0.0.1:8188/')
----> 4 load()
      6 # Nodes can only be imported after load()
      7 from script.runtime.nodes import *

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/__init__.py:18, in load(api_endpoint, vars, watch, save_script_source)
     17 def load(api_endpoint: str = 'http://127.0.0.1:8188/', vars: dict | None = None, watch: bool = True, save_script_source: bool = True):
---> 18     asyncio.run(_load(api_endpoint, vars, watch, save_script_source))

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/nest_asyncio.py:30, in _patch_asyncio.<locals>.run(main, debug)
     28 task = asyncio.ensure_future(main)
     29 try:
---> 30     return loop.run_until_complete(task)
     31 finally:
     32     if not task.done():

File ~/miniconda3/envs/comfyui/lib/python3.10/site-packages/nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future)
     95 if not f.done():
     96     raise RuntimeError(
     97         'Event loop stopped before Future completed.')
---> 98 return f.result()

File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/futures.py:201, in Future.result(self)
    199 self.__log_traceback = False
    200 if self._exception is not None:
--> 201     raise self._exception.with_traceback(self._exception_tb)
    202 return self._result

File ~/miniconda3/envs/comfyui/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***)
    228 try:
    229     if exc is None:
    230         # We use the `send` method directly, because coroutines
    231         # don't have `__iter__` and `__next__` methods.
--> 232         result = coro.send(None)
    233     else:
    234         result = coro.throw(exc)

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/__init__.py:29, in _load(api_endpoint, vars, watch, save_script_source)
     26 nodes_info = await api._get_nodes_info()
     27 print(f'Nodes: {len(nodes_info)}')
---> 29 nodes.load(nodes_info, vars)
     31 # TODO: Stop watch if watch turns to False
     32 if watch:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/nodes.py:14, in load(nodes_info, vars)
     12 fact = VirtualRuntimeFactory()
     13 for node_info in nodes_info.values():
---> 14     fact.add_node(node_info)
     16 globals().update(fact.vars())
     17 __all__.extend(fact.vars().keys())

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:218, in RuntimeFactory.add_node(self, info)
    215                 config = {}
    216         inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
--> 218 output_types = [type_and_hint(type, output=True)[0] for type in info['output']]
    220 outputs = len(info['output'])
    221 if outputs >= 2:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:218, in <listcomp>(.0)
    215                 config = {}
    216         inputs.append(f'{name}: {type_and_hint(type_info, name, optional, config.get("default"))[1]}')
--> 218 output_types = [type_and_hint(type, output=True)[0] for type in info['output']]
    220 outputs = len(info['output'])
    221 if outputs >= 2:

File /mnt/c/Users/clementr/Projects/stablediffusion/ComfyUI/custom_nodes/ComfyScript/script/runtime/factory.py:117, in RuntimeFactory.add_node.<locals>.type_and_hint(type_info, name, optional, default, output)
    115 if isinstance(type_info, list):
    116     if output: print(type_info)
--> 117     assert not output
    118     if is_bool_enum(type_info):
    119         t = bool

AssertionError: 

From what I can tell, everything looks OK; my ComfyUI is running and works correctly.

When I investigate a bit, I can add a line to script/runtime/factory.py > RuntimeFactory.add_node (at line L116):

[...]
L115            if isinstance(type_info, list):
L116                if output: print(type_info)
L117                assert not output
[...]

and I get the following list printed, which appears to be the models I have downloaded from Civitai:

['cyberrealistic_v41BackToBasics.safetensors', 'dreamshaper_8.safetensors', 'realisticVisionV60B1_v20Novae.safetensors']
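
A tolerant fallback along these lines might avoid the assertion, though I'm not sure it's the right fix (a sketch of the idea only, not the project's actual factory code):

# Sketch: map an output "type" to a Python type, falling back to str when
# the node reports a list of choices (e.g. checkpoint file names) instead
# of asserting.
def output_type_and_hint(type_info):
    if isinstance(type_info, list):
        return str, 'str'
    if type_info == 'INT':
        return int, 'int'
    if type_info == 'FLOAT':
        return float, 'float'
    # Everything else is treated as an opaque node output type.
    return object, type_info

print(output_type_and_hint(['cyberrealistic_v41BackToBasics.safetensors']))  # (str, 'str')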

I don't understand the code enough to debug further, could you help me please?

Thanks very much,
Clement

Error with Inspire

I know this error is coming from Inspire; however, it only shows up when using ComfyScript via VS Code. I was wondering if you had any prior knowledge or experience with what might be going on? Thanks!

[WARN] ComfyUI-Impact-Pack: Error on prompt - several features will not work.
'workflow'
[ERROR] An error occurred during the on_prompt_handler processing
Traceback (most recent call last):
  File "e:\ComfyUI_windows_portable\ComfyUI\server.py", line 650, in trigger_on_prompt
    json_data = handler(json_data)
                ^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\inspire_server.py", line 365, in onprompt
    populate_wildcards(json_data)
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\inspire_server.py", line 334, in populate_wildcards
    for node in json_data['extra_data']['extra_pnginfo']['workflow']['nodes']:
                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'workflow'

Note: this doesn't actually stop anything from generating, so it's mostly just log spam. I'd still like to resolve it if I can.
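
For context, the handler assumes the prompt came from the web UI, which always attaches extra_pnginfo['workflow']; prompts queued through the API (as ComfyScript does) don't include it. A defensive lookup along these lines would avoid the KeyError (a sketch, not the actual Inspire Pack code):

# Sketch of a defensive lookup for the workflow metadata.
def get_workflow_nodes(json_data):
    workflow = json_data.get('extra_data', {}).get('extra_pnginfo', {}).get('workflow')
    if workflow is None:
        # Prompt was queued via the API (e.g. by ComfyScript) without the
        # web UI's extra_pnginfo metadata.
        return []
    return workflow.get('nodes', [])

print(get_workflow_nodes({'prompt': {}}))  # [] instead of KeyError: 'workflow'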

[Question] Could ComfyScript be used to automate activation(Unbypass) of a group of nodes?

Thanks for releasing, I'm looking forward to trying this out.

I've made a request on the main ComfyUI repo (comfyanonymous/ComfyUI#2357) hoping for an "Unbypass Group Nodes" option in the right-click menu. I've been planning out a workflow that would require automating the bypass/unbypass of grouped nodes; currently they can be bypassed as a group selection from the right-click menu, but Unbypass doesn't exist as an option at present.

If this functionality were implemented within ComfyUI, would ComfyScript in its current form allow me to set conditions for when a grouped set of nodes is active?
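
For reference, on the script side the closest equivalent to bypassing or unbypassing a group is plain Python control flow around the nodes; a minimal sketch (checkpoint name and node values are illustrative, and it assumes the runtime has been loaded as in the other examples):

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    positive = CLIPTextEncode('a scenic landscape', clip)
    negative = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 1, 20, 8, 'euler', 'normal', positive, negative, latent, 1)

    hires = True  # analogous to unbypassing a hires-fix group
    if hires:
        latent = LatentUpscale(latent, 'nearest-exact', 1024, 1024, 'disabled')
        latent = KSampler(model, 1, 12, 8, 'euler', 'normal', positive, negative, latent, 0.5)

    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')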

Import main fail in real mode


In my case the value of __file__ is:
'C:\work\misc_rep\python\ComfyUI\custom_nodes\ComfyScript\src\comfy_script\runtime\real\__init__.py'

The value of comfy_ui comes out as WindowsPath('C:/work/misc_rep/python/ComfyUI/custom_nodes'), but it should be C:/work/misc_rep/python/ComfyUI/.

I replaced it with comfy_ui = Path(__file__).resolve().parents[6] and it worked.
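
For anyone else hitting this, counting the parents from the module's location shows why parents[6] lands on the ComfyUI root in the default custom_nodes layout (a sketch of the path arithmetic, not the project's actual code):

from pathlib import Path

# .../ComfyUI/custom_nodes/ComfyScript/src/comfy_script/runtime/real/__init__.py
# parents[0] = real, [1] = runtime, [2] = comfy_script, [3] = src,
# [4] = ComfyScript, [5] = custom_nodes, [6] = ComfyUI
comfy_ui = Path(__file__).resolve().parents[6]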

`AttributeError: module 'comfy.cmd.main' has no attribute 'server'`

I tried running it with the comfyui Python package; however, I get this error each time I try to run it.

Log:

ComfyScript: Importing ComfyUI from comfyui package
Total VRAM 12288 MB, total RAM 16297 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py:334: RuntimeWarning: coroutine 'main' was never awaited
  main.main()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "D:\comfyui\test.py", line 2, in <module>
    load("comfyui")
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 31, in load
    asyncio.run(_load(comfyui, args, vars, watch, save_script_source))
  File "D:\comfyui\venv\lib\site-packages\nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
  File "D:\comfyui\venv\lib\site-packages\nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "C:\Python\lib\asyncio\futures.py", line 201, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Python\lib\asyncio\tasks.py", line 232, in __step
    result = coro.send(None)
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 52, in _load
    start_comfyui(comfyui, args)
  File "D:\comfyui\venv\lib\site-packages\comfy_script\runtime\__init__.py", line 340, in start_comfyui
    threading.Thread(target=main.server.loop.run_until_complete, args=(main.server.publish_loop(),), daemon=True).start()
AttributeError: module 'comfy.cmd.main' has no attribute 'server'

Code:

from comfy_script.runtime import *
load("comfyui")
from comfy_script.runtime.nodes import *

with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    SaveImage(image, 'ComfyUI')
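
A possible interim workaround (an assumption, not a confirmed fix) is to attach to an already-running ComfyUI server instead of letting ComfyScript start the comfyui package itself:

from comfy_script.runtime import *

# Attach to a ComfyUI instance that is already running on the default port,
# rather than starting the embedded comfyui package.
load('http://127.0.0.1:8188/')
from comfy_script.runtime.nodes import *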

Pass configuration options when loading ComfyUI package?

I'm importing, loading, and using the 'real' runtime, with the hiddenswitch package as my ComfyUI, inside a Docker image, if that matters.

Relevant parts of the Dockerfile:

RUN git clone https://github.com/hiddenswitch/ComfyUI.git && cd ComfyUI && git checkout e49c662c7f026f05a5e082d48b629e2b977c0441 && pip install --no-build-isolation -e .
RUN pip install -U "comfy-script[default]"

And the importing:


from comfy_script.runtime.real import *
load('comfyui')
from comfy_script.runtime.real.nodes import *

So, now suppose I want to pass some additional configuration options to the ComfyUI package when it's initialized, for instance an extra model paths YAML file. Is there a way to do this? I looked into the source a little and found that the load function takes a RealModeConfig object, but it's a very abstract object, and I'd just be stumbling in the dark trying to figure out how to use it, if it even is the right solution.

Thanks!
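
If it helps: besides RealModeConfig, load() also takes an args parameter for ComfyUI command-line flags, which may be the simpler route for an extra model paths file; a sketch, assuming the ComfyUIArgs helper is also exported by the real runtime and that the hiddenswitch package honours ComfyUI's --extra-model-paths-config flag (the YAML path is illustrative):

from comfy_script.runtime.real import *

# Assumption: ComfyUIArgs is available here and its flags are forwarded to
# the embedded ComfyUI.
load('comfyui', args=ComfyUIArgs('--extra-model-paths-config', '/config/extra_model_paths.yaml'))
from comfy_script.runtime.real.nodes import *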

Naming of nodes

Is it possible to name the nodes in the script so that the names show up when you load it in the web UI?

