tinyterra / comfyui_tinyterranodes
A selection of nodes for Stable Diffusion ComfyUI
License: GNU General Public License v3.0
How can I use more than 3 loras? I've tried a lot but nothing worked. Is there something I'm missing?
Completely lost all ttN options in the node menu when upgrading.
I even wiped everything completely and did a full clean install.
In the end I reverted to an older version.
I used your old nodes from a week ago and it took 15 seconds to generate;
now they take 160 seconds to generate, even after the initial generation.
Error occurred when executing ttN pipeLoader:
Error while deserializing header: HeaderTooLarge
File "/content/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/ComfyUI/custom_nodes/ComfyUI_tinyterraNodes/tinyterraNodes.py", line 944, in adv_pipeloader
model, clip, vae = ttNcache.load_checkpoint(ckpt_name)
File "/content/ComfyUI/custom_nodes/ComfyUI_tinyterraNodes/tinyterraNodes.py", line 179, in load_checkpoint
loaded_ckpt = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "/content/ComfyUI/comfy/sd.py", line 1335, in load_checkpoint_guess_config
sd = utils.load_torch_file(ckpt_path)
File "/content/ComfyUI/comfy/utils.py", line 11, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
Hi, is it possible to add another resize mode to this node? Something like "by longer side (keep aspect)"
It would require one parameter which is the target size of the longer_side
, and the calculation would be something like this:
if current_width > current_height:
new_width, new_height = longer_side, int(current_height * longer_side / current_width)
else:
new_width, new_height = int(current_width * longer_side / current_height), longer_side
This would be useful because in the case you change the generated image aspect ratio and/or the size too, still the node would resize to the same dimension without any change of the parameters.
I admit that maybe it wouldn't be beneficial to everyone but please consider adding this. :)
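The calculation above can be wrapped in a small self-contained helper (the function name is hypothetical):

```python
def resize_by_longer_side(current_width, current_height, longer_side):
    """Scale dimensions so the longer edge equals `longer_side`,
    keeping the aspect ratio of the input."""
    if current_width > current_height:
        return longer_side, int(current_height * longer_side / current_width)
    return int(current_width * longer_side / current_height), longer_side
```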
Got this error:
!!! Exception during processing !!!
Traceback (most recent call last):
File "C:\Users\anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 1516, in doit
model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector, sam_model_opt = detailer_pipe
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 9, got 6)
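The error means the detailer pipe tuple has a different length than the node expects (here 6 values where 9 are unpacked, so the pipe predates newly added slots). One defensive pattern, shown purely as a sketch and not as Impact Pack's actual fix, is to pad a short pipe with None before unpacking:

```python
def unpack_pipe(pipe, expected=9):
    """Pad an older, shorter pipe tuple with None so unpacking succeeds.

    Illustrative only: a real pipe should be rebuilt by the node that
    produced it rather than silently padded.
    """
    pipe = tuple(pipe)
    if len(pipe) > expected:
        raise ValueError(f"pipe has {len(pipe)} values, expected at most {expected}")
    return pipe + (None,) * (expected - len(pipe))
```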
Hey, I'm using the latest version of ComfyUI and ttN, I'm getting this error when using a LoRA:
!!! Exception during processing !!!
Traceback (most recent call last):
File "C:\ComfyUI\ComfyUI\execution.py", line 145, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI\ComfyUI\execution.py", line 75, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI\ComfyUI\execution.py", line 68, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 250, in adv_pipeloader
model, clip = load_lora(lora1_name, model, clip, lora1_model_strength, lora1_clip_strength)
File "C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 169, in load_lora
model_lora, clip_lora = comfy.sd.load_lora_for_models(model, clip, lora_path, strength_model, strength_clip)
File "C:\ComfyUI\ComfyUI\comfy\sd.py", line 407, in load_lora_for_models
loaded = load_lora(lora, key_map)
File "C:\ComfyUI\ComfyUI\comfy\sd.py", line 69, in load_lora
if alpha_name in lora.keys():
AttributeError: 'str' object has no attribute 'keys'
It worked fine before, but after I updated Comfy and ttN this started happening.
Tried with pipeLoader + pipeKSampler and pipeLoader + KSampler, same issue.
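The traceback bottoms out in "if alpha_name in lora.keys()" receiving a string, i.e. a file path reached code that expects an already-loaded state dict. A minimal illustration of the failure mode and a guard (the helper name is hypothetical, not tinyterraNodes' actual fix):

```python
def ensure_state_dict(lora):
    """Fail early if a path string reaches code expecting a state dict.

    'str' object has no attribute 'keys' is exactly what happens when a
    filename is passed where a loaded tensor dict was expected.
    """
    if isinstance(lora, str):
        raise TypeError(
            f"got a path ({lora!r}); load it into a state dict first"
        )
    return lora
```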
You might need to restart ComfyUI after running through pipeLoader to test the default workflow, since it'll get really slow. Not sure if that's a ttN or ComfyUI bug.
{
"last_node_id": 16,
"last_link_id": 44,
"nodes": [
{
"id": 5,
"type": "KSampler",
"pos": [
500,
820
],
"size": [
315,
446.00001525878906
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 26
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 17
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 16
},
{
"name": "latent_image",
"type": "LATENT",
"link": 30
},
{
"name": "seed",
"type": "INT",
"link": 32,
"widget": {
"name": "seed",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
23
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
206821104632760,
"randomize",
50,
5,
"dpmpp_2m",
"karras",
1
]
},
{
"id": 4,
"type": "CLIPTextEncode",
"pos": [
60,
1070
],
"size": {
"0": 400,
"1": 200
},
"flags": {
"collapsed": false
},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 29,
"slot_index": 0
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
16
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
""
]
},
{
"id": 8,
"type": "VAEDecode",
"pos": [
510,
730
],
"size": {
"0": 210,
"1": 46
},
"flags": {},
"order": 9,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 23
},
{
"name": "vae",
"type": "VAE",
"link": 27
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
25
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "VAEDecode"
}
},
{
"id": 1,
"type": "ttN pipeLoader",
"pos": [
60,
30
],
"size": [
400,
726
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "seed",
"type": "INT",
"link": 42,
"widget": {
"name": "seed",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
},
"slot_index": 0
},
{
"name": "positive",
"type": "STRING",
"link": 34,
"widget": {
"name": "positive",
"config": [
"STRING",
{
"default": "Positive",
"multiline": true
}
]
}
}
],
"outputs": [
{
"name": "pipe",
"type": "PIPE_LINE",
"links": [
1
],
"shape": 3,
"slot_index": 0
},
{
"name": "model",
"type": "MODEL",
"links": [],
"shape": 3,
"slot_index": 1
},
{
"name": "positive",
"type": "CONDITIONING",
"links": [],
"shape": 3,
"slot_index": 2
},
{
"name": "negative",
"type": "CONDITIONING",
"links": [],
"shape": 3,
"slot_index": 3
},
{
"name": "latent",
"type": "LATENT",
"links": [],
"shape": 3,
"slot_index": 4
},
{
"name": "vae",
"type": "VAE",
"links": [],
"shape": 3,
"slot_index": 5
},
{
"name": "clip",
"type": "CLIP",
"links": [],
"shape": 3
},
{
"name": "seed",
"type": "INT",
"links": [],
"shape": 3,
"slot_index": 7
}
],
"properties": {
"Node name for S&R": "ttN pipeLoader"
},
"widgets_values": [
"sd_xl_base_1.0.safetensors",
"Baked VAE",
-1,
"None",
1,
1,
"None",
1,
1,
"None",
1,
1,
"Astronaut in a jungle, (cold color palette:1.2), muted colors, detailed, 8k\n",
"none",
"comfy",
"",
"none",
"comfy",
1024,
1024,
1,
42,
"fixed"
]
},
{
"id": 10,
"type": "EmptyLatentImage",
"pos": [
-350,
1100
],
"size": {
"0": 315,
"1": 106
},
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
30
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 9,
"type": "CheckpointLoaderSimple",
"pos": [
-350,
950
],
"size": {
"0": 315,
"1": 98
},
"flags": {},
"order": 1,
"mode": 0,
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
26
],
"shape": 3,
"slot_index": 0
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
28,
29
],
"shape": 3,
"slot_index": 1
},
{
"name": "VAE",
"type": "VAE",
"links": [
27
],
"shape": 3,
"slot_index": 2
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"sd_xl_base_1.0.safetensors"
]
},
{
"id": 12,
"type": "PrimitiveNode",
"pos": [
-350,
770
],
"size": [
310,
130
],
"flags": {},
"order": 2,
"mode": 0,
"outputs": [
{
"name": "STRING",
"type": "STRING",
"links": [
33,
34
],
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 0
}
],
"properties": {},
"widgets_values": [
"Astronaut in a jungle, (cold color palette:1.2), muted colors, detailed, 8k\n"
]
},
{
"id": 7,
"type": "PreviewImage",
"pos": [
830,
830
],
"size": {
"0": 210,
"1": 26
},
"flags": {},
"order": 11,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 25
}
],
"properties": {
"Node name for S&R": "PreviewImage"
}
},
{
"id": 11,
"type": "PrimitiveNode",
"pos": [
-220,
340
],
"size": {
"0": 210,
"1": 82
},
"flags": {},
"order": 3,
"mode": 0,
"outputs": [
{
"name": "INT",
"type": "INT",
"links": [
32,
42
],
"widget": {
"name": "seed",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
},
"slot_index": 0
}
],
"properties": {},
"widgets_values": [
42,
"fixed"
]
},
{
"id": 3,
"type": "CLIPTextEncode",
"pos": [
113,
842
],
"size": [
210,
50
],
"flags": {
"collapsed": false
},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 28,
"slot_index": 0
},
{
"name": "text",
"type": "STRING",
"link": 33,
"widget": {
"name": "text",
"config": [
"STRING",
{
"multiline": true
}
]
},
"slot_index": 1
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
17
],
"shape": 3,
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"Astronaut in a jungle, (cold color palette:1.2), muted colors, detailed, 8k\n"
]
},
{
"id": 2,
"type": "ttN pipeKSampler",
"pos": [
490,
30
],
"size": [
330,
622
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [
{
"name": "pipe",
"type": "PIPE_LINE",
"link": 1
},
{
"name": "optional_model",
"type": "MODEL",
"link": null
},
{
"name": "optional_positive",
"type": "CONDITIONING",
"link": null
},
{
"name": "optional_negative",
"type": "CONDITIONING",
"link": null
},
{
"name": "optional_latent",
"type": "LATENT",
"link": null
},
{
"name": "optional_vae",
"type": "VAE",
"link": null
},
{
"name": "optional_clip",
"type": "CLIP",
"link": null
},
{
"name": "xyPlot",
"type": "XYPLOT",
"link": null
},
{
"name": "seed",
"type": "INT",
"link": null,
"widget": {
"name": "seed",
"config": [
"INT",
{
"default": 0,
"min": 0,
"max": 18446744073709552000
}
]
}
}
],
"outputs": [
{
"name": "pipe",
"type": "PIPE_LINE",
"links": null,
"shape": 3
},
{
"name": "model",
"type": "MODEL",
"links": null,
"shape": 3
},
{
"name": "positive",
"type": "CONDITIONING",
"links": null,
"shape": 3
},
{
"name": "negative",
"type": "CONDITIONING",
"links": null,
"shape": 3
},
{
"name": "latent",
"type": "LATENT",
"links": null,
"shape": 3
},
{
"name": "vae",
"type": "VAE",
"links": null,
"shape": 3
},
{
"name": "clip",
"type": "CLIP",
"links": null,
"shape": 3
},
{
"name": "image",
"type": "IMAGE",
"links": [
21
],
"shape": 3,
"slot_index": 7
},
{
"name": "seed",
"type": "INT",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "ttN pipeKSampler"
},
"widgets_values": [
"None",
1,
1,
"None",
2,
"disabled",
"Sample",
50,
5,
"dpmpp_2m",
"karras",
1,
"Preview",
"ComfyUI",
614447616870507,
"randomize"
]
},
{
"id": 6,
"type": "PreviewImage",
"pos": [
850,
30
],
"size": [
210,
246
],
"flags": {},
"order": 10,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 21
}
],
"properties": {
"Node name for S&R": "PreviewImage"
}
}
],
"links": [
[
1,
1,
0,
2,
0,
"PIPE_LINE"
],
[
16,
4,
0,
5,
2,
"CONDITIONING"
],
[
17,
3,
0,
5,
1,
"CONDITIONING"
],
[
21,
2,
7,
6,
0,
"IMAGE"
],
[
23,
5,
0,
8,
0,
"LATENT"
],
[
25,
8,
0,
7,
0,
"IMAGE"
],
[
26,
9,
0,
5,
0,
"MODEL"
],
[
27,
9,
2,
8,
1,
"VAE"
],
[
28,
9,
1,
3,
0,
"CLIP"
],
[
29,
9,
1,
4,
0,
"CLIP"
],
[
30,
10,
0,
5,
3,
"LATENT"
],
[
32,
11,
0,
5,
4,
"INT"
],
[
33,
12,
0,
3,
1,
"STRING"
],
[
34,
12,
0,
1,
1,
"STRING"
],
[
42,
11,
0,
1,
0,
"INT"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}
In the latest ComfyUI release, LINK_RENDER_MODES has been added as a default setting. However, when the TinyTerra extension is installed, a conflict prevents the initial LINK_RENDER_MODES value from being applied, so it always defaults to "Straight". (Even though the combo is set to "Spline" in the settings, to properly apply Spline you need to change it to another option and then switch back to "Spline" again.)
Your autocomplete embedding filenames in text widgets is brilliant! But I have like 200 embeddings saved into different folders - pos, negative, characters, genre, environment, tools, etc and it would be super helpful if your pop up could show the folder views. A lot of embeddings have hard to remember names so it's easiest to find them via the folders. So a feature request to show the folders... thanks!
The textDebug node is very useful, thank you. I'm using it to verify that my very simple prompt builder is working as expected:
Before your last commit, this setup would show the concatenated text both inside the node and in the terminal. After the commit, the concat only appears in the terminal; the node always remains empty.
Side note: the terminal print only appears when I modify the prompt, rather than at each generation. It would be nice if it could print the text at each generation.
Thank you
I just updated ComfyUI and your extension to the newest versions. It looks like pipeLoader is not showing the prompt text it loads from files, and does not take input when I input new text.
By the way, thank you for some of your other recent updates like the embedding list and collapsing unused parts of the node. I appreciate them.
Otherwise, SDXL checkpoints won't work.
Using the upscale_method option in pipeKSampler throws this error:
Traceback (most recent call last):
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 144, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 67, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1190, in sample
return process_sample_state(self, pipe, lora_name, lora_model_strength, lora_clip_strength, steps, cfg, sampler_name, scheduler, denoise, image_output, preview_prefix, save_prefix, prompt, extra_pnginfo, my_unique_id, preview_latent)
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 762, in process_sample_state
pipe["vars"]["samples"] = handle_upscale(pipe["vars"]["samples"], upscale_method, factor, crop)
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 723, in handle_upscale
samples = upscale(samples, upscale_method, factor, crop)[0]
File "D:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 346, in upscale
samples = samples[0]
KeyError: 0
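The KeyError: 0 suggests that by the time "samples = samples[0]" runs, samples is already a latent dict of the form {"samples": tensor} rather than a list, so indexing with 0 looks up a missing dict key. A guard could look like this (hypothetical helper, not the node's actual code):

```python
def as_latent_dict(samples):
    """Return a latent dict whether given {'samples': ...} or a list of latents.

    Indexing a dict with [0] raises KeyError: 0, which matches the
    traceback above; only lists should be indexed.
    """
    if isinstance(samples, dict):
        return samples
    return samples[0]
```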
I like to use the lanczos algorithm because it smooths out the upscaled pixels without smearing,
but when I use lanczos in ttN pipeKSampler, it produces an error result.
Could you also add lanczos as an upscale option in hiresFixScale,
just like SD webui? I really only need lanczos; the other methods smear the upscaled pixels.
ImportError: cannot import name 'ModelPatcher' from 'comfy.sd' (C:\User\ComfyUI\comfy\sd.py)
I have a 3-node setup with the ttN pipeLoader, ttN xyPlot, and ttN pipeKSampler, as in your demo.
Everything works: all images are generated and can be seen in the KSampler preview window, but when it comes to assembling the image grid and writing the labels, there's an error. None of the individual images are saved, nor is the image grid. I posted to the Matrix for ComfyUI and the consensus was this may be an error with the custom nodes, so I'm posting here. Below is the error message from the console. By the way, I'm using the Windows standalone nightly build, Windows 11, Python 3.11, everything updated. Thank you, I really like your nodes, and I want to use your xy plot because of how nicely yours loads the xy info.
got prompt
[ttN] Warning: pipeKSampler[2]: The selected latent_id (15) is out of range.
[ttN] Warning: pipeKSampler[2]: Automatically setting the latent_id to the last image in the list (index: 0).
[ttN] Plot Values 1/9 ->: X: pos prompt 1, Y: Favorites\deliberate_v2.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 11.99it/s]
[ttN] Plot Values 2/9 ->: X: pos prompt 1, Y: NEW\epicrealism_pureEvolutionV5.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.06it/s]
[ttN] Plot Values 3/9 ->: X: pos prompt 1, Y: NEW\dreamshaper_8.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.08it/s]
[ttN] Plot Values 4/9 ->: X: pos prompt 2, Y: Favorites\deliberate_v2.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.05it/s]
[ttN] Plot Values 5/9 ->: X: pos prompt 2, Y: NEW\epicrealism_pureEvolutionV5.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.21it/s]
[ttN] Plot Values 6/9 ->: X: pos prompt 2, Y: NEW\dreamshaper_8.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.23it/s]
[ttN] Plot Values 7/9 ->: X: pos prompt 3, Y: Favorites\deliberate_v2.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.12it/s]
[ttN] Plot Values 8/9 ->: X: pos prompt 3, Y: NEW\epicrealism_pureEvolutionV5.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.24it/s]
[ttN] Plot Values 9/9 ->: X: pos prompt 3, Y: NEW\dreamshaper_8.safetensors
100%|##################################################################################| 20/20 [00:01<00:00, 12.16it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
File "A:\Comfy_Aug\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1433, in sample
return ttN_TSC_pipeKSampler.sample(self, pipe, lora_name, lora_model_strength, lora_clip_strength, sampler_state, steps, cfg, sampler_name, scheduler, image_output, save_prefix, denoise,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1356, in sample
return process_xyPlot(self, pipe, lora_name, lora_model_strength, lora_clip_strength, steps, cfg, sampler_name, scheduler, denoise, image_output, preview_prefix, save_prefix, prompt, extra_pnginfo, my_unique_id, preview_latent, xyPlot)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1311, in process_xyPlot
label_bg = create_label(img, x_label[col_index], int(48 * img.width / 512))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1220, in create_label
font_size = adjusted_font_size(text, initial_font_size, label_width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "A:\Comfy_Aug\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1190, in adjusted_font_size
text_width, _ = font.getsize(text)
^^^^^^^^^^^^
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
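This AttributeError comes from Pillow 10, which removed FreeTypeFont.getsize; getbbox is its replacement. A compatibility shim along these lines (a sketch, not the project's actual fix) works with either API:

```python
def text_width(font, text):
    """Measure text width with either the old or the new Pillow font API.

    Pillow 10 removed FreeTypeFont.getsize; getbbox returns
    (left, top, right, bottom) instead of (width, height).
    """
    if hasattr(font, "getsize"):            # Pillow < 10
        return font.getsize(text)[0]
    left, _, right, _ = font.getbbox(text)  # Pillow >= 10
    return right - left
```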
First of all thank you for all the work and effort you put into this. Very much appreciated!
I'm using the nodes within a workflow that is hosted on AWS SageMaker with auto scaling. When scaling up, it's important to reduce the time to fire up a new instance to a minimum. There are some other custom nodes used with the ComfyUI workflow; all of them import within 1 second. Only the tinyterraNodes take 25 seconds to import, delaying the instance startup from ~5 seconds to ~30 seconds.
Is there any way to improve the import time?
A small note: In my ComfyUI setup I'm using the extra_model_paths.yaml to link additional folders containing model files/artifacts. Could this have an impact on the import time?
Thank you!
Error occurred when executing ttN pipeKSamplerAdvanced:
'orig'
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1429, in sample
return ttN_TSC_pipeKSampler.sample(self, pipe, lora_name, lora_model_strength, lora_clip_strength, sampler_state, steps, cfg, sampler_name, scheduler, image_output, save_prefix, denoise,
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1352, in sample
return process_xyPlot(self, pipe, lora_name, lora_model_strength, lora_clip_strength, steps, cfg, sampler_name, scheduler, denoise, image_output, preview_prefix, save_prefix, prompt, extra_pnginfo, my_unique_id, preview_latent, xyPlot)
File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1004, in process_xyPlot
latent_image_tensor = pipe['orig']['samples']['samples']
Any notion of what is missing? I only changed the model + VAE to fit my paths.
Hi, thanks again for fixing this last time. Unfortunately, it is broken now. Maybe because of a ComfyUI update?
As before, any of these formats would be amazing:
Could you please fix it again?
The colours/types seem to be out of alignment on the outputs, so they cannot be connected to the corresponding inputs. Possibly broken by 351d36c?
Example:
clip should be yellow
refiner_model should be purple
refiner_negative should be orange
refiner_vae should be red
refiner_clip should be yellow
latent should be purple
The seed set in a pipeLoader does not get loaded in a connected pipeKSampler. By default the seed of the Sampler is set to 0. Then that is the value that will be used, not the value from the pipe.
There is this in the code for ttN_TSC_pipeKSampler.sample(), but how does one set the seed in the pipeKSampler to None or undefined? A value of -1 yields an error.
if seed in (None, 'undefined'):
    seed = pipe["seed"]
else:
    pipe["seed"] = seed
The parameter "denoise" is simply missing from the ttN_pipeKSamplerAdvanced input list, in tinyterraNodes.py at line 1371.
Not all models play well with a refiner, so being able to disable the refiner stages in some way, and not have to waste time and space loading a refiner model would be good.
For the past 12 hours I've been trying to work out what has been exhausting my 32 GB of RAM and causing continual disk caching when creating a single SDXL image (the RTX 4090 was hardly being used). Yesterday this would take a few seconds and not cause any issue with resources, but after updating the nodes to hash 0bba92f I've had continual problems. I finally tracked it to these nodes after rolling back to the previous commit.
Initial load time is usually longer because it's loading the models, but latest commit shows 2nd time is even longer, perhaps a memory leak?
These are the gen time differences between them...
Mainly looking for start_at_step, end_at_step, and return_with_leftover_noise, to be able to refactor some existing workflows.
If you change the ckpt_name in the pipeLoader to an input and then pass in a selected checkpoint, pipeLoader throws an error:
Error occurred when executing ttN pipeLoader:
unhashable type: 'list'
File "C:\Users\hoffl\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\hoffl\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\hoffl\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\hoffl\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 550, in adv_pipeloader
update_loaded_objects(prompt)
File "C:\Users\hoffl\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 153, in update_loaded_objects
desired_ckpt_names.add(entry["inputs"]["ckpt_name"])
I found this bug using the Checkpoint Selector from https://github.com/giriss/comfy-image-saver but I think this would happen for any node that attempted to pass in a checkpoint name.
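A likely explanation: when a widget is converted to an input, the prompt JSON carries a [node_id, output_index] link (a list) instead of a literal string, and lists cannot be added to a set, hence "unhashable type: 'list'". A guard could look like this (hypothetical helper, not the actual fix):

```python
def widget_value_or_none(value):
    """Return a widget's literal value, or None if it is a link.

    In ComfyUI prompt JSON, a widget converted to an input appears as
    [node_id, output_index] rather than a literal, and a list cannot be
    added to a set of desired checkpoint names.
    """
    if isinstance(value, list):
        return None  # resolved at execution time via the linked node
    return value
```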
Hi, a couple of things I noticed. In this screenshot:
The word "TRUE" has somehow replaced the text in the info box. I think I've also seen the contents of save_prefix copied into an adjacent box, as if the contents of one parameter is overwriting another. I couldn't reproduce it, but maybe it has something to do with collapsing and opening parts of the node, or switching the node's mode to "Never" and back.
I also wanted to ask about the percent field in highresfixScale. After a recent update, the percent field always uses increments of 25, even if I type in another number. Could it not do that?
WARNING: The script f2py.exe is installed in 'E:\AI\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\~umpy\.libs\libopenblas64__v0.3.23-246-g3d31191b-gcc_10_3_0.dll'
Consider using the --user option or check the permissions.
On first run, I get this error:
ERROR: Cannot install rembg[gpu]==2.0.28, rembg[gpu]==2.0.29, rembg[gpu]==2.0.30, rembg[gpu]==2.0.31, rembg[gpu]==2.0.32, rembg[gpu]==2.0.33, rembg[gpu]==2.0.34, rembg[gpu]==2.0.35, rembg[gpu]==2.0.36, rembg[gpu]==2.0.37, rembg[gpu]==2.0.38, rembg[gpu]==2.0.39, rembg[gpu]==2.0.40, rembg[gpu]==2.0.41, rembg[gpu]==2.0.43, rembg[gpu]==2.0.44, rembg[gpu]==2.0.45, rembg[gpu]==2.0.46, rembg[gpu]==2.0.47, rembg[gpu]==2.0.48, rembg[gpu]==2.0.49 and rembg[gpu]==2.0.50 because these package versions have conflicting dependencies.
It does not stop the server from starting, and I was able to install rembg==2.0.50 manually. Unless rembg[gpu] supports Mac M1 MPS or similar, it might be better to have the startup detect the system it is running on and install rembg as appropriate, or fall back to the non-GPU version?
Hi, thank you for your amazing nodes.
Is there a way to add save_prefix tokens (e.g. ComfyUI's default %date:yyyy-MM-dd-hh-mm-ss% or WAS Suite's [time(%Y-%m-%d-%H-%M-%S)])?
I can't find a way to save the generation date and time in the filename directly from the pipeKSampler. Did I miss anything?
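One way such a token could be expanded, shown only as a sketch under the assumption of ComfyUI-style date patterns (yyyy, MM, dd, hh, mm, ss) and not as ComfyUI's actual implementation:

```python
import re
from datetime import datetime

# Map ComfyUI-style date pattern pieces to strftime codes (illustrative).
_TOKENS = {"yyyy": "%Y", "MM": "%m", "dd": "%d", "hh": "%H", "mm": "%M", "ss": "%S"}

def expand_date_tokens(prefix, now=None):
    """Replace %date:pattern% in a save prefix with the formatted date."""
    now = now or datetime.now()

    def repl(match):
        pattern = match.group(1)
        for token, code in _TOKENS.items():
            pattern = pattern.replace(token, code)
        return now.strftime(pattern)

    return re.sub(r"%date:([^%]+)%", repl, prefix)
```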
I've configured a workflow that looks correct for SDXL, but the preview output shows that the refiner stage is not doing anything with the leftover latent noise. I've tried varying the steps up and down, to no avail. Also, it would be nice to not need to enter the number of steps in the refiner stage, and it calculates it from the remaining steps from the previous stage.
too many values to unpack (expected 9)
File "E:\SDXL\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\SDXL\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\SDXL\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\SDXL\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes\tinyterraNodes.py", line 1745, in flush
model, pos, neg, latent, vae, clip, image, seed, = pipe.values()
CR Load LoRA used to work with the pipeLoaderSDXL, but now it doesn't. Is there any help?
Thank you so much for your good work. I really like the realization of the Link Style (ttN).
But in "straight" link style mode, all links tend to overlap and merge into one common link, and when you select a node the link highlighting goes only up to the common line.
Example:
Maybe in this straight style mode you could add more of a glow effect, so that even a link at the bottom of the layers still gives a hint about where it goes.
Example:
With your seed set to random, the 2nd pipeKSampler will use a random latent input:
1. Change the latent upscaler to bilinear. Immediately cancel and change it back to your previous latent upscaler, e.g. nearest-exact.
2. Queue again with nearest-exact or your upscaler of choice.
Expected: the same image. Actual: a different image each time.
The PipeLoaderSDXL and PipeKSamplerSDXL combination is now unusable for me. I have an RTX 4090; the 24GB of VRAM is maxed out and shared VRAM is being used, which makes creating anything impossible. I've switched to the built-in KSampler (Advanced) nodes and VRAM usage doesn't go above 20GB.
It would be great if you could use the date in folder names, e.g. %date:yyyy-MM-dd%.
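A sketch of how such date tokens could be expanded before the path is used. expand_date_tokens is a hypothetical helper (not the node's API), and the yyyy/MM/dd placeholder names follow ComfyUI's %date:...% convention.

```python
import re
from datetime import datetime

def expand_date_tokens(prefix: str) -> str:
    """Hypothetical helper: expand %date:...% tokens (ComfyUI-style
    yyyy/MM/dd/hh/mm/ss placeholders) using the current time."""
    def repl(match):
        fmt = match.group(1)
        for token, strf in (("yyyy", "%Y"), ("MM", "%m"), ("dd", "%d"),
                            ("hh", "%H"), ("mm", "%M"), ("ss", "%S")):
            fmt = fmt.replace(token, strf)
        return datetime.now().strftime(fmt)
    return re.sub(r"%date:([^%]*)%", repl, prefix)

print(expand_date_tokens("ComfyUI/%date:yyyy-MM-dd%/img"))
```

Running the expansion on the whole save path (not just the filename) is what would make dated folder names work.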
ComfyUI_tinyterraNodes/tinyterraNodes.py, line 2079 in a1bbce6
In the latest Impact Pack, 4 new items have been added to DETAILER_PIPE:
refiner_model, refiner_clip, refiner_positive, refiner_negative
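One common way to stay compatible when a tuple-style pipe grows is to pad short tuples with None before unpacking. pad_pipe below is a hypothetical helper for illustration, not Impact Pack or tinyterraNodes code.

```python
def pad_pipe(pipe, size):
    """Hypothetical helper: pad a tuple-style pipe with None so a
    newer, longer unpack still works on an older, shorter tuple."""
    return tuple(pipe) + (None,) * max(0, size - len(pipe))

old_pipe = ("model", "clip", "vae")  # placeholder values
model, clip, vae, refiner_model, refiner_clip = pad_pipe(old_pipe, 5)
print(refiner_model)  # None
```

New trailing fields (like the four refiner entries) then simply come through as None when the upstream node predates them.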
Attempting to load a lora in pipeLoader or pipeKSampler fails with the error: "'str' object has no attribute 'keys'". Possibly caused by Comfy's update to LoraLoader a couple of days ago? Of course I can still use loras with the separate lora loader node.
A node that allows for 100% full screen preview of each generated image as it's generated in real time. It would function like a full screen slide show when using the Auto Queue setting (for example). Or maybe a hotkey option to hide/unhide all UI elements in order to show just the preview or save image alone on screen.
I would actually pay for this.
This node is awesome and I want to help make it even better, but I'm not a programmer, so I can only offer some ideas.
Below are my ideas; they may seem a bit complicated because I thought about them for several days.
When typing @, a pop-up window listing the available LoRAs is displayed. After selecting a LoRA, the LoRA tag is automatically filled in using the format <lora:loraname:strength>.
When typing #, the preset text list is shown. After selecting a preset text, the contents of that preset text .txt file are filled into the current text box.
Add "Save as preset text" to the node's right-click context menu: it saves the contents of the text box as a txt file and adds it to the preset text list.
Read LoRA tag(s) from the text and load them into the checkpoint model (see comfyui_lora_tag_loader for reference).
In fact, there are bugs in Lora load and lora stack of PipeLoader, but instead of fixing them I think it is better to change to a more efficient LoRA loading method.
Add a widget: enable/disable LoRA, default value enable.
Positive/Negative
Support encoding and sampling of an input image:
1. PipeEdit and PipeIN can take an image input, but it has no effect or reports an error, perhaps because PipeKSampler does not support image input.
2. upscale by model requires it.
Add a new widget: sampling object "Latent/image", default value image. If the selected object has no input, process the other object. This widget lets people quickly switch between text2image and image2image.
New widget: hiresfixScale "Disable/Upscale first/Upscale after sampling", default value Disable.
1. Disable: do not perform the upscale operation and hide the related widgets.
2. Upscale first: perform upscale processing first, then sample; suitable for image2image.
3. Upscale after sampling: first sample the latent/image, then perform upscale processing, then sample again; suitable for text2image.
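The three modes being suggested can be sketched as a simple dispatcher. The mode names and the sample/upscale callables below are placeholders standing in for the real sampling and upscaling steps, not tinyterraNodes code.

```python
def hiresfix(mode, latent, sample, upscale):
    """Sketch of the three suggested hiresfixScale modes; `sample` and
    `upscale` are placeholder callables for the real pipeline steps."""
    if mode == "disable":
        return sample(latent)                 # plain sampling, no upscale
    if mode == "upscale_first":               # image2image-style
        return sample(upscale(latent))
    if mode == "upscale_after_sampling":      # text2image-style hires fix
        return sample(upscale(sample(latent)))
    raise ValueError(f"unknown mode: {mode}")
```

The ordering is the whole point: image2image upscales the input before sampling, while a text2image hires fix samples at low resolution first and refines after upscaling.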
New widget: upscale model "none/upscale model name...", default value none. After selecting any upscale model, enable upscale by model; this upscaling method is usually better than upscaling the latent.
vae decode
Sometimes vae decode is not needed, so add an option to disable vae decode.
Another thing to note: when PipeKSampler disables vae decode and the save/preview nodes are not connected, PipeKSampler should not run. This is a bug in KSampler (Efficient).
pipes into pipeSDXL
Can replace pipeLoaderSDXL; pipeLoaderSDXL complicates simple things.
lora option: no need for lora1, lora2, ..., nor any need to distinguish between name/model weight/clip weight.
LoRA tag examples for comparison:
x1: <lora:add_detail:1.0> <lora:wedding:1.0>;
x2: <lora:add_detail:0.4:0.7> <lora:wedding:0.4:0.7>;
x3: <lora:chibi-v1:1.0:0.7>;
x1: for both LoRAs, the model weight is 1.
x2: the same two LoRAs as x1, with model weight 0.4 and clip weight 0.7.
x3: the lora name is different and the quantity is also different.
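Parsing tags in this shape is straightforward. The sketch below (parse_lora_tags is a hypothetical name, similar in spirit to what comfyui_lora_tag_loader does) follows the <lora:name:model_weight[:clip_weight]> format from the examples above.

```python
import re

LORA_TAG = re.compile(r"<lora:([^:>]+):([\d.]+)(?::([\d.]+))?>")

def parse_lora_tags(text):
    """Hypothetical sketch: extract <lora:name:model_weight[:clip_weight]>
    tags and return (cleaned_text, [(name, model_w, clip_w), ...]).
    clip_w defaults to model_w when the tag omits it."""
    found = []
    def strip(match):
        name, mw = match.group(1), float(match.group(2))
        cw = float(match.group(3)) if match.group(3) else mw
        found.append((name, mw, cw))
        return ""
    cleaned = LORA_TAG.sub(strip, text)
    return " ".join(cleaned.split()), found
```

The cleaned text goes to the CLIP encoder as the prompt, while the (name, weights) list drives the LoRA loader; that removes the need for fixed lora1/lora2 slots.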
pipeEdit and pipeOUT: the two can be combined together, making them more convenient to use.
seed widget in pipeLoader/pipeEdit/pipeIN/pipeOUT: I think it is enough to keep only the seed output of pipeKSampler.
LoRA widget in pipeKSampler: I just feel that there is no need to add LoRA in the sampler. After changing to a more efficient LoRA loading method, it can also be deleted.