
prompt-generator-comfyui

Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before use, the text generation model has to be trained with a prompt dataset.


Setup

For the Portable Installation of ComfyUI

  • Automatic installation is provided for the portable version.
  • Clone the repository under the custom_nodes folder with the git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git command.
  • Go to the ComfyUI_windows_portable folder and run the run_nvidia_gpu.bat file.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator has to be a folder; do not put just a pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

For the Manual Installation of ComfyUI

  • Clone the repository under the custom_nodes folder with the git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git command.
  • Run ComfyUI.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator has to be a folder; do not put just a pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

For ComfyUI Manager Users

  • Download the node with ComfyUI Manager.
  • Restart ComfyUI.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator has to be a folder; do not put just a pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

Features

  • Multiple output generation is supported. You can choose from 5 outputs with the index value, and you can check the generated prompts in the log file and the terminal. The prompts are logged and printed in order.
  • Randomness is supported. See this section.
  • Quantization is supported with the Quanto and Bitsandbytes packages. See this section.
  • Lora adapter model loading is supported with the Peft package. (The feature is not fully tested in this repository because of my VRAM limits, but I am using the same implementation in Google Colab for training and inference, and it works there.)
  • Optimizations are done with the Optimum package.
  • ONNX and transformers models are supported.
  • Preprocessing of outputs. See this section.
  • Recursive generation is supported. See this section.
  • The generated text is printed to the terminal, and the node's state is logged under the generated_prompts folder with the date as the filename.

Example Workflow

(Image: example hires.fix workflow)

(Image: example basic workflow)

  • The Prompt Generator node may look different in the final version, but the workflow in ComfyUI is not going to change.

Pretrained Prompt Models

  • You can find the models at this link.

  • To use a pretrained model, follow these steps:

    • Download the model and unzip it to the models/prompt_generators folder.
    • Click the Refresh button in ComfyUI.
    • Then select the generator with the node's model_name variable (if you can't see the generator, restart ComfyUI). A typical generator folder looks like the sketch below.
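
For reference, a generator folder is a standard Hugging Face model directory. The layout below is an illustrative assumption (the exact files depend on how the model was saved), not an exact listing:

```
models/
└── prompt_generators/
    └── female_positive_generator_v2/   <- selected via the model_name variable
        ├── config.json
        ├── pytorch_model.bin           <- or model.safetensors / ONNX files
        ├── tokenizer.json
        └── tokenizer_config.json
```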

Dataset

  • The dataset currently has 2,150,395 rows of unique prompts.
  • The process of data cleaning and gathering can be found here.

Models

  • The model versions are used to differentiate the models rather than to show which one is better.

  • The v2 version is the latest trained model and the v4 model is an experimental model.

  • female_positive_generator_v2 | (Training in Progress)

  • female_positive_generator_v3 | (Training in Progress)

  • female_positive_generator_v4 | Experimental

Variables

Variable names and definitions:

  • model_name: Folder name that contains the model.
  • accelerate: Enable optimizations. Some models are not supported by BetterTransformer (check your model); if your model is not supported, switch this option to disable or convert your model to ONNX.
  • quantize: Quantize the model. The available quantization types change based on your OS and torch version. The none value disables quantization. Check this section for more information.
  • prompt: Input prompt for the generator.
  • seed: Seed value for the model.
  • lock: Lock the generation and select from the last generated prompts with the index value.
  • random_index: Random index value in [1, 5]. If this value is enable, the index value is not used.
  • index: User-specified index value for selecting a prompt from the generated prompts. The random_index variable must be disable.
  • cfg: CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality.
  • min_new_tokens: The minimum number of tokens to generate, ignoring the number of tokens in the prompt.
  • max_new_tokens: The maximum number of tokens to generate, ignoring the number of tokens in the prompt.
  • do_sample: Whether or not to use sampling; greedy decoding is used otherwise.
  • early_stopping: Controls the stopping condition for beam-based methods, such as beam search.
  • num_beams: Number of beams for beam search; each beam is a separate search path.
  • num_beam_groups: Number of groups to divide num_beams into in order to ensure diversity among different groups of beams.
  • diversity_penalty: This value is subtracted from a beam's score if it generates a token that any beam from another group generated at the same time. Note that diversity_penalty is only effective if group beam search is enabled.
  • temperature: How sensitive the algorithm is to selecting low-probability options.
  • top_k: The number of highest-probability vocabulary tokens to keep for top-k filtering.
  • top_p: If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  • repetition_penalty: The parameter for repetition penalty; 1.0 means no penalty.
  • no_repeat_ngram_size: The size of an n-gram that cannot occur more than once (0 means no restriction).
  • remove_invalid_values: Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
  • self_recursive: See this section.
  • recursive_level: See this section.
  • preprocess_mode: See this section.

Quantization

  • Quantization is supported with the Quanto and Bitsandbytes packages.
  • The Quanto package requires torch >= 2.2, and the Bitsandbytes package works out of the box only on Linux, so the node checks which package to use:
    • If the requirements of neither package are met, you cannot use the quantize variable and it only has the none value.
    • If the Quanto requirements are met, you can choose between the none, int8, float8 and int4 values.
    • If the Bitsandbytes requirements are met, you can choose between the none, int8 and int4 values.
  • If your environment can use both the Quanto and Bitsandbytes packages, the node selects the Bitsandbytes package (a sketch of this selection logic follows).
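
A minimal sketch of this selection logic, assuming only the rules listed above; the function name and version check are illustrative, not the node's actual code:

```python
# Illustrative sketch of the quantize-choice selection described above.
import platform

import torch


def get_quantize_choices() -> list[str]:
    # Bitsandbytes works out of the box on Linux and is preferred when
    # both packages are usable.
    if platform.system() == "Linux":
        return ["none", "int8", "int4"]

    # Quanto requires torch >= 2.2 (e.g. "2.2.1+cu121" -> (2, 2)).
    major, minor = (int(part) for part in torch.__version__.split(".")[:2])
    if (major, minor) >= (2, 2):
        return ["none", "int8", "float8", "int4"]

    # Neither package is usable: quantization is disabled.
    return ["none"]
```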

Random Generation

  • For random generation:

    • Enable the do_sample variable.
  • You can find this text generation strategy at the link above. The strategy is called multinomial sampling (a generic example is sketched after this list).

  • Setting the do_sample variable to disable gives deterministic generation.

  • For more randomness, you can:

    • Enable the random_index variable
    • Increase the recursive_level value
    • Enable self_recursive
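
For illustration, this is how multinomial sampling looks with the plain transformers API. It is a generic sketch, not the node's code, and "gpt2" is a placeholder model:

```python
# Generic multinomial-sampling sketch with transformers (not the node's code).
from transformers import pipeline, set_seed

set_seed(42)  # a fixed seed makes the "random" sampling reproducible

generator = pipeline("text-generation", model="gpt2")  # placeholder model

# do_sample=True enables multinomial sampling; with do_sample=False the
# generation falls back to deterministic greedy decoding.
outputs = generator(
    "masterpiece, best quality,",
    do_sample=True,
    temperature=1.0,
    top_k=50,
    top_p=0.95,
    max_new_tokens=30,
    num_return_sequences=5,  # mirrors the node's 5 generated outputs
)

for index, output in enumerate(outputs, start=1):
    print(index, output["generated_text"])
```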

Lock The Generation

  • Enabling the lock variable skips the generation and lets you choose from the last generated prompts.
  • You can choose with the index value or use random_index.
  • If random_index is enabled, the index value is ignored (see the sketch after this list).
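
A small sketch of how lock, index, and random_index interact, assuming the behavior described above (illustrative, not the node's actual code):

```python
# Illustrative sketch: with lock enabled, generation is skipped and one of
# the last five generated prompts is returned.
import random


def select_locked_prompt(last_prompts: list[str], index: int, random_index: bool) -> str:
    if random_index:
        index = random.randint(1, 5)  # random_index overrides index
    return last_prompts[index - 1]    # index is treated as 1-based here


last_prompts = ["prompt 1", "prompt 2", "prompt 3", "prompt 4", "prompt 5"]
print(select_locked_prompt(last_prompts, index=2, random_index=False))  # prompt 2
```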

How Does Recursive Generation Work?

  • Let's say we give a as the seed and the recursive level is 1. I am going to use the same outputs in this example to explain the functionality more accurately.
  • With self recursive, let's say the generator's output is b. The next seed is going to be b, and the generator's output is c. The final output is a, c. This can be used for generating random outputs.
  • Without self recursive, let's say the generator's output is b. The next seed is going to be a, b, and the generator's output is c. The final output is a, b, c. This can be used for more accurate prompts. (Both modes are sketched below.)
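
Both modes can be sketched as follows, assuming exactly the behavior in the example above; generate() stands in for the model call, and the function name is illustrative:

```python
# Illustrative sketch of recursive generation; generate() stands in for the
# actual model call.
from typing import Callable


def recursive_generation(
    generate: Callable[[str], str], seed: str, level: int, self_recursive: bool
) -> str:
    final_prompt = seed
    current_seed = seed

    # recursive_level 1 corresponds to two generation calls in the example.
    for _ in range(level + 1):
        output = generate(current_seed)
        if self_recursive:
            current_seed = output                 # feed back only the new output
            final_prompt = f"{seed}, {output}"    # intermediate outputs are dropped
        else:
            current_seed = f"{current_seed}, {output}"  # feed back everything so far
            final_prompt = current_seed

    return final_prompt


outputs = iter(["b", "c"])
print(recursive_generation(lambda _: next(outputs), "a", level=1, self_recursive=True))
# prints "a, c"; with self_recursive=False (and a fresh iterator) it prints "a, b, c"
```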

How Does Preprocess Mode Work?

  • exact_keyword => (masterpiece), ((masterpiece)) is not allowed. The pure keyword is checked, without parentheses and weights. The algorithm adds prompts starting from the beginning of the generated text, so put the important prompts in the seed.
  • exact_prompt => (masterpiece), ((masterpiece)) is allowed, but (masterpiece), (masterpiece) is not. The exact match of the prompt is checked.
  • none => Everything is allowed, even repeated prompts.

Example

# ---------------------------------------------------------------------- Original ---------------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt, (masterpiece)
# ------------------------------------------------------------- Preprocess (Exact Keyword) ------------------------------------------------------------- #
((masterpiece)), blahblah, blah, ((same prompt))
# ------------------------------------------------------------- Preprocess (Exact Prompt) -------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt
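
A minimal sketch of the exact_keyword mode, assuming the rules above; the function name and regex are illustrative, not the node's implementation:

```python
# Illustrative sketch of the exact_keyword preprocess mode: compare prompts
# by their pure keyword (parentheses and weights stripped) and keep only the
# first occurrence.
import re


def preprocess_exact_keyword(text: str) -> str:
    seen: set[str] = set()
    kept: list[str] = []

    for prompt in text.split(","):
        prompt = prompt.strip()
        if not prompt:
            continue
        # "((masterpiece:1.2))" -> "masterpiece"
        keyword = re.sub(r"[()]", "", prompt).split(":")[0].strip()
        if keyword not in seen:
            seen.add(keyword)
            kept.append(prompt)

    return ", ".join(kept)


original = "((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah"
print(preprocess_exact_keyword(original))  # ((masterpiece)), blahblah, blah
```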

Troubleshooting

  • If the solutions below do not fix your issue, please create an issue with the bug label.

Package Version

  • The node is based on the transformers and optimum packages, so most problems may be caused by these packages. To overcome such problems, you can try updating these packages:

For the Manual Installation of ComfyUI

  1. Activate the virtual environment if there is one.
  2. Run the pip install --upgrade transformers optimum optimum[onnxruntime-gpu] command.

For the Portable Installation of ComfyUI

  1. Go to the ComfyUI_windows_portable folder.
  2. Open the command prompt in this folder.
  3. Run the .\python_embeded\python.exe -s -m pip install --upgrade transformers optimum optimum[onnxruntime-gpu] command.

Automatic Installation

For the Manual Installation of ComfyUI

  • Users have to check that the virtual environment is activated, if there is one.

For the Portable Installation of ComfyUI

  • Users have to make sure that they start ComfyUI from the ComfyUI_windows_portable folder, because the node checks whether the python_embeded folder exists and uses it to install the required packages (a sketch of this check follows).
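
A sketch of that check, assuming the behavior described above (illustrative; the node's actual install code may differ):

```python
# Illustrative sketch: prefer the embedded interpreter of the portable
# install when it exists, otherwise fall back to the running Python.
import subprocess
import sys
from pathlib import Path


def pip_install(packages: list[str]) -> None:
    embedded_python = Path.cwd() / "python_embeded" / "python.exe"
    # This is why ComfyUI must be started from ComfyUI_windows_portable:
    # the folder is looked up relative to the current working directory.
    python = str(embedded_python) if embedded_python.exists() else sys.executable
    subprocess.run([python, "-s", "-m", "pip", "install", *packages], check=True)
```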

New Updates On The Node

  • Sometimes the variables change with updates, which may break the workflow. Don't worry: you just have to delete the node in the workflow and add it again.

Contributing

  • Contributions are welcome. If you have an idea and want to implement it yourself, please follow these steps:

    1. Create a fork.
    2. Open a pull request from the fork with a description explaining the new feature.
  • If you have an idea but don't know how to implement it, please create an issue with the enhancement label.

  • Contributions can be made in several ways: you can contribute to the code or to the README file.

Example Outputs

(Images: first, second and third example outputs)


prompt-generator-comfyui's Issues

[BUG] UnicodeEncodeError: 'gbk' codec can't encode character '\u202c'

Describe the bug
When executing the node, an encoding error occurred.

Screenshots

ERROR:root:Traceback (most recent call last):
  File "E:\DEV\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\DEV\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\DEV\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\DEV\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 290, in generate
    self.log_outputs(
  File "E:\DEV\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 130, in log_outputs
    file.write(
UnicodeEncodeError: 'gbk' codec can't encode character '\u202c' in position 196: illegal multibyte sequence

Operating System

  • Windows 11

Python Version

  • 3.10.11

ModuleNotFoundError: No module named 'generator.generate'

I can't seem to find this module. It prevents loading the node. I cloned from the git.

Traceback (most recent call last):
  File "D:\AI\ComfyUI\nodes.py", line 1698, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI\ComfyUI\custom_nodes\prompt-generator-comfyui\__init__.py", line 12, in <module>
    from prompt_generator import PromptGenerator
  File "D:\AI\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 4, in <module>
    from generator.generate import GenerateArgs, Generator
ModuleNotFoundError: No module named 'generator.generate'

[BUG]

Cannot import C:\Users\Z0Z00FICIAL\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui module for custom nodes: Command '.\python_embeded\python.exe -s -m pip install optimum[onnxruntime-gpu]' returned non-zero exit status 1.

Operating System
Windows 11 Pro

Python Version
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32

[BUG] ComfyUI Standalone Requirements Issues - ModuleNotFoundError: No module named 'optimum' & More

Describe the bug
I'm running the latest version of ComfyUI Standalone (pulled today) and cannot get this plugin to load properly; it always displays IMPORT FAILED.

To Reproduce
Steps to reproduce the behavior:

  1. Install the latest ComfyUI standalone
  2. Git pull the latest prompt-generator-comfyui
  3. Launch ComfyUI
  4. Observe the server output and the IMPORT FAILED message


Operating System

  • Windows

Python Version
3.10.9

[BUG] Error printed when using the female-positive_generator_v4 version (no issues with V2 and V3)

The following error was printed using the female-positive_generator_v4 version. There are no issues with the V2 and V3 versions. Please provide an answer. Thank you! Also, should we consider refining the model to reduce memory usage? Currently, inference with SDXL + the V4 version exceeds 24GB of memory usage.

Traceback (most recent call last):
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 57, in try_wo_onnx_pipeline
    self.pipe = get_bettertransformer_pipeline(model_name=model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 103, in get_bettertransformer_pipeline
    model = get_model(model_name, use_device_map=True)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 38, in get_model
    model = AutoModelForCausalLM.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3565, in from_pretrained
    model.load_adapter(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\integrations\peft.py", line 180, in load_adapter
    peft_config = PeftConfig.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 151, in from_pretrained
    return cls.from_peft_type(**kwargs)
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 118, in from_peft_type
    return config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'layer_replication'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Program\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\Program\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\Program\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 265, in generate
    generator = Generator(model_path, is_accelerate)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 48, in __init__
    self.try_wo_onnx_pipeline(model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 59, in try_wo_onnx_pipeline
    self.pipe = get_default_pipeline(model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 65, in get_default_pipeline
    model = get_model(model_name)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 38, in get_model
    model = AutoModelForCausalLM.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3565, in from_pretrained
    model.load_adapter(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\integrations\peft.py", line 180, in load_adapter
    peft_config = PeftConfig.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 151, in from_pretrained
    return cls.from_peft_type(**kwargs)
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 118, in from_peft_type
    return config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'layer_replication'

[BUG] UTF-8 not handled correctly

Hi!

An issue I noticed: if the input text for the prompt generator contains accented characters, for example:

ă

then the prompt generator errors with an exception:

Error occurred when executing Prompt Generator: 'charmap' codec can't encode character '\u0159' in position 161: character maps to <undefined>

Thanks!

[BUG] Error occurred when executing Prompt Generator: The following `model_kwargs` are not used by the model: ['guidance_scale'] (note: typos in the generate arguments will also show up in this list)

Describe the bug
Error occurred when executing Prompt Generator:

The following model_kwargs are not used by the model: ['guidance_scale'] (note: typos in the generate arguments will also show up in this list)

File "D:\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 249, in generate
generated_texts = self.get_generated_texts(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 108, in get_generated_texts
results = generator.generate_multiple_output_texts(prompt, gen_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 75, in generate_multiple_output_texts
outputs = self.pipe(input, **args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\pipelines\text_generation.py", line 201, in call
return super().call(text_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\pipelines\base.py", line 1120, in call
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\pipelines\base.py", line 1127, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\pipelines\base.py", line 1026, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\pipelines\text_generation.py", line 263, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\transformers\generation\utils.py", line 1271, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "D:\ComfyUI\venv\Lib\site-packages\transformers\generation\utils.py", line 1144, in _validate_model_kwargs
raise ValueError(

Couldn't instantiate the backend tokenizer

Error occurred when executing Prompt Generator:

Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

File "F:\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func

Extract generated prompt

I'm using SD Prompt Saver and would like to include the generated positive prompt. How can I do that? SD Prompt Saver expects a string, while the output of PromptGenerator is already CLIP encoded, if I understand it correctly.

It'd be great if PromptGenerator had an output containing the actual string before encoding. It doesn't really matter whether it is a replacement for the existing output (which one would then need to chain into a CLIP Text Encode node) or a parallel output.

Thanks!

[BUG] Import failed: No module named 'keras.engine'

Describe the bug
Windows portable version. When starting, I got the following error message:

Cannot import E:\DEV\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui module for custom nodes: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
No module named 'keras.engine'

Operating System

  • Windows 11

Python Version

  • 3.10.6

Transformers and optimum packages versions

Name: transformers
Version: 4.26.1
Name: optimum
Version: 1.17.1

ComfyUI Commit

  • ComfyUI Revision: 1969 [02409c30] | Released on '2024-02-12'

prompt-generator-comfyui Commit

  • latest

[FEATURE] Randomize prompt checkbox or per X count / batch?

Maybe there is an option to let it keep running and randomly generate prompts; however, I haven't found it, and it tends to generate an endless amount of the same styling unless the prompt is very vague, and even then it still won't vary much.

I use ComfyUI Manager, set it to autofill the batch, let it generate overnight, and wake up to 800 images with the same style.

[Errno 22] Invalid argument

Describe the bug
[Errno 22] Invalid argument. Using the basicWorkflowWithPromptGenerator.json workflow, the error occurs when execution reaches the Prompt Generator node.

To Reproduce
Use the basicWorkflowWithPromptGenerator.json workflow; the error occurs when execution reaches the Prompt Generator node.

Code
Traceback (most recent call last):
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 284, in generate
    self._generated_prompts = get_generated_texts(
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 106, in get_generated_texts
    generated_text = preprocess(prompt + result, preprocess_mode)
  File "D:\SDLocal\ComfyUI_windows_portable\python_embeded\lib\site-packages\preprocess.py", line 363, in preprocess
    contentType = registry.getContentType(infile)
  File "D:\SDLocal\ComfyUI_windows_portable\python_embeded\lib\site-packages\preprocess.py", line 803, in getContentType
    content = open(path, 'rb').read()
OSError: [Errno 22] Invalid argument: '((masterpiece, best quality, ultra detailed)), illustration, digital art, 1girl, solo, ((stunningly beautiful)), ((upper body portrait)), by Stanley Artgerm Lau, (by Loimu), nami with black hair and blue eye makeup mernie (honkai star rail yuuutsu ), smile\n'

Operating System

  • Windows

Python Version

  • 3.10.9

Transformers and optimum packages versions

transformers-4.39.2
optimum-1.18.0
