
Face Editor

Face Editor for Stable Diffusion. This extension is useful for the following purposes:

  • Fixing broken faces
  • Changing facial expressions
  • Applying blurring or other processing

example

This is an extension for AUTOMATIC1111's Stable Diffusion Web UI. If you are using SD.Next, use the sd.next branch.

Setup

  1. Open the "Extensions" tab then the "Install from URL" tab.
  2. Enter "https://github.com/ototadana/sd-face-editor.git" in the "URL of the extension's git repository" field. Install from URL
  3. Click the "Install" button and wait for the "Installed into /home/ototadana/stable-diffusion-webui/extensions/sd-face-editor. Use Installed tab to restart." message to appear.
  4. Go to "Installed" tab and click "Apply and restart UI".

Usage

  1. Click "Face Editor" and check "Enabled". Check Enabled
  2. Then enter the prompts as usual and click the "Generate" button to modify the faces in the generated images. Result
  3. If you are not satisfied with the results, adjust the parameters and rerun; see Tips.

Tips

Contour discomfort

If the facial contours look off, try increasing the "Mask size" value. This issue often occurs when the face is not facing straight ahead.

Mask size

If the forelock interferes with rendering the face properly, selecting "Hair" under "Affected areas" generally results in a more natural image.

Affected areas - UI

This setting modifies the mask area as illustrated below:

Affected areas - Mask images


When multiple faces are close together

When multiple faces are close together, one face may collapse under the influence of the other. In such cases, enable "Use minimal area (for close faces)".

Use minimal area for close faces


Change facial expression

Use "Prompt for face" option if you want to change the facial expression.

Prompt for face

Individual instructions for multiple faces

Faces can be individually directed with prompts separated by || (two vertical lines).

Individual instructions for multiple faces - screen shot

  • Each prompt is applied to the faces in the image in order from left to right.
  • The number of prompts does not have to match the number of faces.
  • If you write the string @@, the normal prompts (written at the top of the screen) will be expanded at that position.
  • If you are using the Wildcards Extension, you can also use the __name__ syntax, which draws on the text files in the wildcards extension's directory, just as in normal prompts.
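
For example, the following "Prompt for face" value directs three faces, left to right: the first face smiles, the second uses the normal prompt expanded via @@, and the third combines an expression with a wildcard (this assumes a hair-style.txt file exists in the wildcards extension's directory):

   smiling face || @@ || angry face, __hair-style__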

Fixing images that already exist

If you wish to modify the face of an already existing image instead of creating a new one, follow these steps:

  1. Open the image to be edited in the img2img tab. It is recommended that you use the same settings (prompt, sampling steps and method, seed, etc.) as were used for the original image, so it is a good idea to start from the PNG Info tab:
    1. Click PNG Info tab.
    2. Upload the image to be edited.
    3. Click Send to img2img button.
  2. Set the value of "Denoising strength" of img2img to 0. This setting is good for preventing changes to areas other than the faces and for reducing processing time.
  3. Click "Face Editor" and check "Enabled".
  4. Then, set the desired parameters and click the Generate button.

How it works

This script performs the following steps:

Step 0

First, image(s) are generated as usual according to prompts and other settings. This script acts as a post-processor for those images.

Step 1. Face Detection

Detect faces in the image.

step-1

Step 2. Crop and Resize the Faces

Crop the detected face image and resize it to 512x512.

step-2

Step 3. Recreate the Faces

Run img2img with the image to create a new face image.

step-3

Step 4. Paste the Faces

Resize the new face image and paste it back at its original location in the image.

step-4

Step 5. Blend the entire image

To remove the seams created when pasting, mask everything except the faces and run inpainting.

step-5

Completed

Show sample image

step-6
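
The pipeline, in code form: the sketch below mirrors steps 2-4 using Pillow. It is an illustration only, not the extension's actual code; the face boxes are assumed to have already come from a detector (step 1), the recreate callable stands in for the img2img call (step 3), and the final blend inpaint (step 5) is omitted:

    # Minimal sketch of the crop -> recreate -> paste loop (steps 2-4).
    from typing import Callable, List, Tuple

    from PIL import Image

    Box = Tuple[int, int, int, int]  # (left, top, right, bottom), e.g. from a face detector

    def process_faces(
        image: Image.Image,
        face_boxes: List[Box],                            # step 1 output
        recreate: Callable[[Image.Image], Image.Image],   # step 3: img2img stand-in
        face_size: int = 512,                             # "Size of the face when recreating"
    ) -> Image.Image:
        result = image.copy()
        for box in face_boxes:
            # Step 2: crop the detected face and resize it to face_size x face_size.
            crop = image.crop(box).resize((face_size, face_size))
            # Step 3: recreate the face (the real extension runs img2img here).
            new_face = recreate(crop)
            # Step 4: resize back and paste at the original location.
            width, height = box[2] - box[0], box[3] - box[1]
            result.paste(new_face.resize((width, height)), (box[0], box[1]))
        # Step 5 (masking everything but the faces and inpainting) is omitted here.
        return result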

Parameters

Basic Options

Workflow

Select a workflow. Enable "Search workflows in subdirectories" in the Face Editor section of the "Settings" tab to try some experimental workflows. You can also add your own workflows.

For more detailed information, please refer to the Workflow Editor section.

Use minimal area (for close faces)

When this option is enabled, only the rectangle of the detected face area is used when pasting the generated face back to its original location. If it is not enabled, the entire generated image is pasted. In other words, enabling this option pastes a smaller face image, while disabling it pastes a larger one.

Save original image

This option allows you to save the original, unmodified image.

Show original image

This option allows you to display the original, unmodified image.

Show intermediate steps

This option enables the display of images that depict detected faces and masks. If the generated image is unnatural, enabling it may reveal the cause.

Prompt for face

Prompt for generating a new face. If this parameter is not specified, the prompt entered at the top of the screen is used.

For more information, please see: here.

Mask size (0-64)

Size of the mask area when inpainting to blend the new face with the whole image.

Show sample images

size: 0 mask size 0

size: 10 mask size 10

size: 20 mask size 20

Mask blur (0-64)

Size of the blur area when inpainting to blend the new face with the whole image.


Advanced Options

Step 1. Face Detection

Maximum number of faces to detect (1-20)

Use this parameter when you want to reduce the number of faces to be detected. If more faces are found than the number set here, the smaller faces will be ignored.

Face detection confidence (0.7-1.0)

Confidence threshold for face detection. Set a lower value if you want to detect more faces.

Step 2. Crop and Resize the Faces

Face margin (1.0-2.0)

Specify the margin around the cropped face, as a magnification factor.

Even if all other parameters are identical, changing this value alone will noticeably alter the look of the new face.

Show sample images

face margin

Size of the face when recreating

Specifies the edge length of the square image used when recreating a face. If you are using an SDXL model, we recommend changing this to 1024. For other models there is usually no need to change the default value (512), but you may see interesting changes if you do.

Ignore faces larger than specified size

Ignores a detected face if it is larger than the size specified in "Size of the face when recreating".

For more information, please see: here.

Upscaler

Select the upscaler to be used to scale the face image.

Step 3. Recreate the Faces

Denoising strength for face images (0.1-0.8)

Denoising strength for generating a new face. If the value is too small, facial collapse cannot be corrected; if it is too large, the face becomes difficult to blend with the rest of the image.

Show sample images

strength: 0.4 strength 0.4

strength: 0.6 strength 0.6

strength: 0.8 strength 0.8

Tilt adjustment threshold (0-180)

This option defines the angle, in degrees, above which tilt correction will be automatically applied to detected faces. For instance, if set to 20, any face detected with a tilt greater than 20 degrees will be adjusted. However, if the "Adjust tilt for detected faces" option in the Face Editor section of the "Settings" tab is enabled, tilt correction will always be applied, regardless of the tilt adjustment threshold value.

Step 4. Paste the Faces

Apply inside mask only

Paste an image cut out in the shape of a face instead of a square image.

For more information, please see: here.

Step 5. Blend the entire image

Denoising strength for the entire image (0.0-1.0)

Denoising strength when inpainting to blend the new face with the whole image. If the border lines are too prominent, increase this value.


API

If you want to use this script as an extension (alwayson_scripts) in the API, specify "face editor ex" as the script name as follows:

   "alwayson_scripts": {
      "face editor ex": {
         "args": [{"prompt_for_face": "smile"}]
      },

By specifying an object as the first argument of args, as above, parameters can be specified by keyword. We recommend this approach because it minimizes the impact of future changes to the software. If you use the script form instead of the extension, you can specify parameters in the same way:

   "script_name": "face editor",
   "script_args": [{"prompt_for_face": "smile"}],

Workflow Editor

Workflow Editor is where you can customize and experiment with various options beyond just the standard settings.

Workflow Editor

  • The editor allows you to select from a variety of implementations, each offering unique behaviors compared to the default settings.
  • It provides a platform for freely combining these implementations, enabling you to optimize the workflow according to your needs.
  • Within this workflow, you will define a combination of three components: the "Face Detector" for identifying faces within an image, the "Face Processor" for adjusting the detected faces, and the "Mask Generator" for integrating the processed faces back into the original image.
  • As you experiment with different settings, be sure to activate the "Show intermediate steps" option. This lets you see precisely the impact of each modification.

Using the Workflow Editor UI

Workflow list and Refresh button

Workflow list and Refresh button

  • Lists workflow definition files (.json) stored in the workflows folder.
  • The option "Search workflows in subdirectories" can be enabled in the Face Editor section of the "Settings" tab to use sample workflow definition files.
  • The Refresh button (🔄) can be clicked to update the contents of the list.

File name and Save button

File name and Save button

  • This feature is used to save the edited workflow definition.
  • A file name can be entered in the text box and the Save button (💾) can be clicked to save the file.

Workflow definition editor and Validation button

Workflow definition editor and Validation button

  • This feature allows you to edit workflow definitions. Workflows are described in JSON format.
  • The Validation button (✅) can be clicked to check the description. If there is an error, it will be displayed in the message area to the left of the button.

Example Workflows

This project includes several example workflows to help you get started. Each example provides a JSON definition for a specific use case, which can be used as is or customized to suit your needs. To access these example workflows from the Workflow Editor, you need to enable the "Search workflows in subdirectories" option located in the Face Editor section of the "Settings" tab.

Settings

For more details about these example workflows and how to use them, please visit the workflows/examples/README.md.

Workflow Components (Inferencers)

In this project, the components used in the workflow are also referred to as "inferencers". These inferencers are part of the process that modifies the faces in the generated images:

  1. Face Detectors: These components are used to identify and locate faces within an image. They provide the coordinates of the detected faces, which will be used in the following steps.
  2. Face Processors: Once the faces are detected and cropped, these components modify or enhance the faces.
  3. Mask Generators: After the faces have been processed, these components are used to create a mask. The mask defines the area of the image where the modifications made by the Face Processors will be applied.

The "General components" provide the basic functionalities for these categories, and they can be used without the need for additional software installations. On the other hand, each functionality can also be achieved by different technologies or methods, which are categorized here as "Additional components". These "Additional components" provide more advanced or specialized ways to perform the tasks of face detection, face processing, and mask generation.


Note: When using "Additional components", ensure that the features you want to use are enabled in the "Additional Components" section of the "Settings" tab under "Face Editor". For detailed descriptions and usage of each component, please refer to the corresponding README.

General components

Additional components

Workflow JSON Reference

  • face_detector (string or object, required): The face detector component to be used in the workflow.
    • When specified as a string, it is considered as the name of the face detector implementation.
    • When specified as an object:
      • name (string, required): The name of the face detector implementation.
      • params (object, optional): Parameters for the component, represented as key-value pairs.
  • rules (array or object, required): One or more rules to be applied.
    • Each rule can be an object that consists of when and then:
      • when (object, optional): The condition for the rule.
        • tag (string, optional): A tag corresponding to the type of face detected by the face detector. This tag can optionally include a query following the tag name, separated by a '?'. This query is a complex condition that defines attribute-value comparisons using operators. The query can combine multiple comparisons using logical operators. For example, a tag could be "face?age<30&gender=M", which means that the tag name is "face" and the query is "age<30&gender=M". The query indicates that the rule should apply to faces that are identified as male and are less than 30 years old.
        • The available operators are as follows:
          • =: Checks if the attribute is equal to the value.
          • <: Checks if the attribute is less than the value.
          • >: Checks if the attribute is greater than the value.
          • <=: Checks if the attribute is less than or equal to the value.
          • >=: Checks if the attribute is greater than or equal to the value.
          • !=: Checks if the attribute is not equal to the value.
          • ~=: Checks if the attribute value contains the value.
          • *=: Checks if the attribute value starts with the value.
          • =*: Checks if the attribute value ends with the value.
          • ~*: Checks if the attribute value does not contain the value.
        • The logical operators are as follows:
          • &: Represents logical AND.
          • |: Represents logical OR.
        • criteria (string, optional): Determines which faces will be processed, based on their position or size. Available positions are 'left', 'right', 'center', 'top', 'middle', and 'bottom'; available sizes are 'small' and 'large'. The faces to process are selected with the pattern {position/size}:{index range}, where {index range} can be a single index, a range of indices, or a comma-separated combination of these. For example, left:0 processes the leftmost face in the image, left:0-2 processes the three leftmost faces, and left:0,2,5 processes the leftmost face, the third from the left, and the sixth from the left. If a position is specified without an index range, it defaults to index 0; left is essentially the same as left:0.
      • then (object or array of objects, required): The job or list of jobs to be executed if the when condition is met.
        • Each job is an object with the following properties:
          • face_processor (object or string, required): The face processor component to be used in the job.
            • When specified as a string, it is considered as the name of the face processor implementation.
            • When specified as an object:
              • name (string, required): The name of the face processor implementation.
              • params (object, optional): Parameters for the component, represented as key-value pairs.
          • mask_generator (object or string, required): The mask generator component to be used in the job.
            • When specified as a string, it is considered as the name of the mask generator implementation.
            • When specified as an object:
              • name (string, required): The name of the mask generator implementation.
              • params (object, optional): Parameters for the component, represented as key-value pairs.

Rules are processed in the order they are specified. Once a face is processed by a rule, it will not be processed by subsequent rules. The last rule can be specified with then only (i.e., without when), which will process all faces that have not been processed by previous rules.
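
Putting the reference together, here is an illustrative workflow definition with two rules. The component names (RetinaFace, img2img, BiSeNet) and the params key (strength) are assumptions chosen for this example; check the component READMEs for the names and parameters actually available in your installation, and note that the tag query assumes a detector that attaches age and gender attributes:

   {
      "face_detector": "RetinaFace",
      "rules": [
         {
            "when": {
               "tag": "face?age<30&gender=M",
               "criteria": "left:0-1"
            },
            "then": {
               "face_processor": {
                  "name": "img2img",
                  "params": {"strength": 0.5}
               },
               "mask_generator": "BiSeNet"
            }
         },
         {
            "then": {
               "face_processor": "img2img",
               "mask_generator": "BiSeNet"
            }
         }
      ]
   }

Because the second rule has no when clause, it processes every face that the first rule did not match.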


Settings

In the "Face Editor" section of the "Settings" tab, the following settings can be configured.

"Search workflows in subdirectories"

Overview
"Search workflows in subdirectories" is a setting option that controls whether Face Editor includes subdirectories in its workflow search.

Value and Impact
The value is a boolean (True or False). The default is False, meaning the search does not include subdirectories. When set to True, the workflow search extends into subdirectories, allowing the sample workflows to be referenced.


"Additional components"

Overview
"Additional components" is a setting option that specifies the additional components available for use in Face Editor.

Value and Impact
This setting is a series of checkboxes labeled with component names. Checking a box (setting to Enabled) activates the corresponding component in Face Editor.


"Save original image if face detection fails"

Overview
"Save original image if face detection fails" is a setting option that specifies whether to save the original image if face detection fails.

Value and Impact
The value is a boolean (True or False). The default is True, which means the original image will be saved if face detection fails.


"Adjust tilt for detected faces"

Overview
"Adjust tilt for detected faces" is a setting option that specifies whether to adjust the tilt for detected faces.

Value and Impact
The value is a boolean (True or False). The default is False, meaning no tilt correction is applied when a face is detected. Even when "Adjust tilt for detected faces" is not enabled, tilt correction may still be applied based on the "Tilt adjustment threshold" setting.


"Auto face size adjustment by model"

Overview
"Auto face size adjustment by model" is a setting option that determines whether the Face Editor automatically adjusts the size of the face based on the selected model.

Value and Impact
The setting is a checkbox. When checked (enabled):

  • The face size will be set to 1024 if the SDXL model is selected. For other models, the face size will be set to 512.
  • The "Size of the face when recreating" setting will be hidden and its value will be ignored, since the face size will be determined based on the chosen model.

"The position in postprocess at which this script will be executed"

Overview
"The position in postprocess at which this script will be executed" is a setting option that specifies the position at which this script will be executed during postprocessing.

Value and Impact
The value is an integer from 0 to 99. A smaller value means the script executes earlier. The default is 99, which means this script will usually run last during postprocessing.


Contribution

We warmly welcome contributions to this project! If you're someone who is interested in machine learning, face processing, or just passionate about open-source, we'd love for you to contribute.

What we are looking for:

  • Workflow Definitions: Help us expand our array of workflow definitions. If you have a unique or interesting workflow design, please don't hesitate to submit it as a sample!
  • Implementations of FaceDetector, FaceProcessor, and MaskGenerator: If you have alternative approaches or models for any of these components, we'd be thrilled to include your contributions in our project.

Before starting your contribution, please make sure to check out our existing code base and follow the general structure. If you have any questions, don't hesitate to open an issue. We appreciate your understanding and cooperation.

We're excited to see your contributions and are always here to help or provide guidance if needed. Happy coding!

Contributors

betogaona7, coder168, iamrohitanshu, mikecokina, ototadana, pythias


Issues

Is there a way to use it in Extras?

Sometimes I just want to easily swap faces in an existing image.
Using img2img generation is much slower than using Extras.
So I'm wondering if this feature can be used in Extras?

Batch processing can only process the first image

Here is the error message:
Will process 427 images, creating 1 new images for each.
number of faces: 1
100%|██████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 9.95it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 8.94it/s]
Total progress: 0%|▎ | 20/4697 [00:02<11:01, 7.07it/s]
Error completing request | 20/4697 [00:02<09:22, 8.31it/s]
Arguments: ('task(ayh262ahoqxe51x)', 5, 'oilpaint,Sexy Dakotaskye smiling at you, (night-time:1.4) rave, (realistic face:1.3), (long hair),(Busty:1.3),(underboob:1.2), tight abs, (navel piercing:1.2), dancing on stage,low cut latex hotpants, cameltoe, perfect body, deep tan, happy expression, hand on hip, High Contrast, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh,((white sports bra)),((white shoes)),((black pants))', '3d, cartoon, anime, sketches, (worst quality:2), (low quality:2), (normal quality:2), low-res, normal quality, ((monochrome)), ((grayscale)), skin spots, acne, skin blemishes, bad anatomy, ((child)) ((loli)), tattoos, bad_prompt_version2, ng_deepnegative_v1_75t, (asian.1.2) bad-hands-5, handbag, Poorly drawn hands, ((too many fingers)), ((bad fingers)) bad-image-v2-39000,((nsfw)),naked,nude,breast,((Gloves)),((t-shirt)),((shirt))\n', [], <PIL.Image.Image image mode=RGBA size=540x960 at 0x1D3016CD8D0>, None, None, None, None, None, None, 27, 16, 4, 0, 1, False, False, 1, 1, 6.5, 1.5, 0.85, 4128014353.0, -1.0, 0, 0, 0, False, 960, 540, 0, 0, 32, 0, 'F:\序列帧\dance-out', 'F:\序列帧\dance-out-face', '', [], 1, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <scripts.external_code.ControlNetUnit object at 0x000001D30B4EB610>, <scripts.external_code.ControlNetUnit object at 0x000001D30B4E8520>, <scripts.external_code.ControlNetUnit object at 0x000001D30B4E8A00>, <scripts.external_code.ControlNetUnit object at 0x000001D30B4E8610>, <scripts.external_code.ControlNetUnit object at 0x000001D30B4EAB60>, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, 1.6, 0.97, 0.4, 0.3, 1, 0, 0, '', False, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'None', '', '', 1, 'FirstGen', False, False, 'InputFrame', False, 1 2 3
0 , False, '', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '', '', 20.0, '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\stable-diffusion-webui\modules\img2img.py", line 166, in img2img
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
File "F:\stable-diffusion-webui\modules\img2img.py", line 76, in process_batch
if processed_image.mode == 'RGBA':
AttributeError: 'numpy.ndarray' object has no attribute 'mode'

image


image

scaling by factor miscalculates dimension

When scaling a picture of 1280x720 by a factor of 2, the resulting dimension is 2560x1472 (not 1440). This slight change in aspect ratio affects the face shape.

But when I choose custom dimension and enter 2560 and 1440 respectively, the result is perfect.

Please check; this should be an easy fix. It's a great extension.

SyntaxError: invalid syntax

Error loading script: face_editor.py
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 846, in exec_module
File "", line 983, in get_code
File "", line 913, in source_to_code
File "", line 228, in _call_with_frames_removed
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/face_editor.py", line 8

^
SyntaxError: invalid syntax

another batch problem

Hi there,
I like your script, it works greater than another previous face swap script.
But when i try to use the batch functrion in i2i, after filled in the input directory, output directory, it can't generate even one image.
it showed "ValueError: operands could not be broadcast together with shapes (396,396,4) (396,396,3)"

I mean, for example: I have 10 different images that need their faces changed, and I don't want to process them one by one with the same settings.
Please help! Thank you!

Feature request: face restore

I noticed that the result pics always have face restore effects. Is there any way to add an option to stop using face restore?
When using an anime model, face restore makes the face look strange.
Thanks!

idk if AMD users can use this

Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\stable-diffusion-webui\modules\scripts.py", line 376, in run
processed = script.run(p, *script_args)
File "D:\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 175, in run
return self.__proc_image(o, mask_model, detection_model,
File "D:\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 222, in __proc_image
faces = self.__crop_face(
File "D:\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 308, in __crop_face
face_boxes, _ = detection_model.align_multi(image, confidence)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\facexlib\detection\retinaface.py", line 255, in align_multi
rlt = self.detect_faces(img, conf_threshold=conf_threshold)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\facexlib\detection\retinaface.py", line 205, in detect_faces
loc, conf, landmarks, priors = self.__detect_faces(image)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\facexlib\detection\retinaface.py", line 156, in __detect_faces
loc, conf, landmarks = self(inputs)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\facexlib\detection\retinaface.py", line 121, in forward
out = self.body(inputs)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torchvision\models_utils.py", line 69, in forward
x = module(x)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\stable-diffusion-webui\Python3.10\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (PrivateUse1FloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Can wildcard support be added?

Face editor works great, especially in group pictures. I wonder if wildcard support could be added to face editor's prompt so that each face could have a different expression. Thank you.

"TypeError: 'NoneType' object is not subscriptable"Keep appearing

To create a public link, set share=True in launch().
Create LRU cache (max_size=16) for preprocessor results.
Startup time: 4.8s (list SD models: 0.3s, load scripts: 3.7s, create ui: 0.4s, gradio launch: 0.1s, scripts app_started_callback: 0.2s).
number of faces: 1
prompt for the face: RAW photo,full-body portrait,a young male barbarian with short hair is wielding an axe,fiercely attacking a demon,pale skin,slim body,background is ancient city ruins,(high detailed skin:1.2),8k uhd,dslr,soft lighting,high quality,film grain,Fujifilm XT3,
100%|██████████████████████████████████████████████████████████████████████████████████| 11/11 [00:15<00:00, 1.38s/it]
Error completing request██████████████████████████████ | 11/22 [00:12<00:13, 1.27s/it]
Arguments: ('task(9iz26utx8d2hbcg)', 0, 'RAW photo,full-body portrait,a young male barbarian with short hair is wielding an axe,fiercely attacking a demon,pale skin,slim body,background is ancient city ruins,(high detailed skin:1.2),8k uhd,dslr,soft lighting,high quality,film grain,Fujifilm XT3,', 'lowres,bad anatomy,bad hands,missing fingers,extra digits,extra hands,extra feet,fewer digits,bad feet,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,1girl,3d rendering,((((big hands, un-detailed skin, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime)))),(((ugly mouth, ugly eyes, missing teeth, crooked teeth, close up, cropped, out of frame))),worst quality,low quality,jpeg artifacts,ugly,duplicate,morbid,mutilated,extra fingers,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,dehydrated,bad anatomy,bad proportions,extra limbs,cloned face,disfigured,gross proportions,malformed limbs,missing arms,missing legs,extra arms,extra legs,fused fingers,too many fingers,long neck,easynegative,black and white style,missing head,missing hand,missing leg,', [], <PIL.Image.Image image mode=RGBA size=768x432 at 0x23CA1B52470>, None, None, None, None, None, None, 25, 16, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, 2790478946.0, -1.0, 0, 0, 0, False, 0, 432, 768, 1, 0, 0, 32, 0, '', '', '', [], 9, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 512, 64, True, True, True, False, False, 1.6, 0.97, 0.4, 0, 20, 0, 12, '', True, False, False, False, 512, False, True, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 
0x0000023C9D59A530>, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 1.6, 0.97, 0.4, 0, 20, 1, 12, '', True, False, False, False, 512, False, True, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\aidraw\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\aidraw\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\aidraw\stable-diffusion-webui-directml\modules\img2img.py", line 176, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\aidraw\stable-diffusion-webui-directml\modules\scripts.py", line 441, in run
processed = script.run(p, *script_args)
File "D:\aidraw\stable-diffusion-webui-directml\extensions\sd-face-editor\scripts\face_editor.py", line 257, in run
return self.__proc_image(o, mask_model, detection_model,
File "D:\aidraw\stable-diffusion-webui-directml\extensions\sd-face-editor\scripts\face_editor.py", line 520, in __proc_image
proc = self.__save_images(p)
File "D:\aidraw\stable-diffusion-webui-directml\extensions\sd-face-editor\scripts\face_editor.py", line 574, in __save_images
infotext = create_infotext(p, p.all_prompts, p.all_seeds, p.all_subseeds, {}, 0, 0)
File "D:\aidraw\stable-diffusion-webui-directml\modules\processing.py", line 587, in create_infotext
return f"{all_prompts[index]}{negative_prompt_text}\n{generation_params_text}".strip()
TypeError: 'NoneType' object is not subscriptable

I'm not very familiar with the rules of this forum, so I'll just copy and paste the error message. The facial reconstruction starts for a few seconds and then this error keeps appearing. I have already uninstalled the face editor and reinstalled it several times.

Stopped working once Lora weight block is enabled

I updated A1111 to 1.4 and everything seems to work except for face editor. The command window says a face was detected and my LoRA was processed, with no error message, but the face was not changed. I tried reinstalling face editor; same result. I'm not sure if it's just me.

Update: I think I have the cause pinned down. It's the extension called Lora block weight (https://github.com/hako-mikan/sd-webui-lora-block-weight). Once I disable it, face editor works as it is supposed to. This conflict only happens after updating A1111 to 1.4.

WE NEED an extension version for video frame face swap.

An extension named batch face swap can do this, but it is not as fast as the face editor script. I am not a developer; I tried to change this script into an extension, but failed. When it returns scripts.AlwaysVisible, the script doesn't work, and I don't know what to change next. I know the script's run() only executes when it is selected in the scripts dropdown. I can't find any extension development docs explaining how to run a script as an extension.

How to use the extension in 'stable-diffusion-webui-api'?

We deploy the webUI on an AWS service. How do we invoke the face-editor extension through the API, as we do with controlnet? For example:
'img2img_payload': { 'init_images': [ encode_image_to_base64(image) ], 'enable_hr': False, 'denoising_strength': 0.7, 'firstphase_width': 0, 'firstphase_height': 0, 'prompt': 'masterpiece, best quality, ultra-detailed', 'styles': [''], 'seed': -1.0, 'subseed': -1.0, 'subseed_strength': 0, 'seed_resize_from_h': 0, 'seed_resize_from_w': 0, 'sampler_index': 'DPM++ 2S a Karras', 'batch_size': 1, 'n_iter': 1, 'steps': 50, 'cfg_scale': 7, 'width': 640, 'height': 1024, 'restore_faces': False, 'tiling': False, 'negative_prompt': 'nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name', 's_churn': 0, 's_tmax': None, 's_tmin': 0, 's_noise': 1, 'clip_skip': 1, 'alwayson_scripts': { "controlnet": { "args": [ { 'enabled':True, 'image': encode_image_to_base64(image), 'model': 'control_sd15_hed [fef5e48e]', 'weight': 0.55, 'processor_res': 640, 'guidance_start': 0, 'guidance_end': 0.7, 'threshold_a': 100, # default 'threshold_b': 200, # default 'guess_mode': False, 'module': 'none', 'low_vram': False, 'eta': 1, #'mask: '', }, { 'enabled':True, 'image': encode_image_to_base64(image), 'model': 'control_sd15_openpose [fef5e48e]', 'weight': 1, 'processor_res': 640, 'guidance_start': 0, 'guidance_end': 0.8, 'guidance': 1, 'threshold_a': 100, # default 'threshold_b': 200, # default 'guessmode': False, 'module': 'none', 'low_vram': False, 'eta': 1, #'mask: '', }, { "enabled": False, } ] } }

Using the script in img2img mode through the API possibly broken

Hi there!

First of all, thank you for this wonderful extension - it does exactly what I need it to do.

There seems to be a small issue when using the API in img2img mode (with the script, NOT the extension version).

First of all, here's the relevant part from my JSON payload, roughly corresponding to the defaults from the UI:

"script_name": "face editor",
"script_args": [1.6, 0.97, 0.4, 0, 20, 0, 12, "angry", true, false, false, false, 512, true, true]

As it is, this returns a 500 "Internal Server Error" and produces the following traceback in the console:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\StableDiffusion\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\StableDiffusion\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
    response = await self.dispatch_func(request, call_next)
  File "C:\StableDiffusion\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\StableDiffusion\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\StableDiffusion\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\StableDiffusion\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "C:\StableDiffusion\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\StableDiffusion\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\StableDiffusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\StableDiffusion\modules\api\api.py", line 244, in img2imgapi
    processed = scripts.scripts_img2img.run(p, *p.script_args)
  File "C:\StableDiffusion\modules\scripts.py", line 376, in run
    processed = script.run(p, *script_args)
  File "C:\StableDiffusion\extensions\sd-face-editor\scripts\face_editor.py", line 257, in run
    return self.__proc_image(o, mask_model, detection_model,
  File "C:\StableDiffusion\extensions\sd-face-editor\scripts\face_editor.py", line 427, in __proc_image
    for script in p.scripts.alwayson_scripts:
AttributeError: 'NoneType' object has no attribute 'alwayson_scripts'

As of the latest commit, this is no longer line 427 but 437 - though this part seems unchanged, so I don't believe updating will solve this for me. Here are the relevant lines from the latest commit:

for script in p.scripts.alwayson_scripts:
    if script.filename.endswith("stable-diffusion-webui-wildcards/scripts/wildcards.py"):
        wildcards_script = script

I have a workaround, but it seems hacky and probably isn't the best solution: I simply ignore the exception using a try / except block:

try:
    for script in p.scripts.alwayson_scripts:
        if script.filename.endswith("stable-diffusion-webui-wildcards/scripts/wildcards.py"):
            wildcards_script = script
except:
    pass

This fixes the issue and the returned image is now precisely the same as the result I get from using the WebUI. It's possible that the issue is caused by me not using the wildcards extension (I have no need for it), but I'm unsure.

I could open a PR for this, but I'm very wary of whether or not this will have repercussions elsewhere for other users, so I'd like to see your thoughts first.

One last thing, for anyone else who wants to use the script via API, apply_scripts_to_faces must be set to false, and the order of the script_args array corresponds to the following parameters:

face_margin
confidence
strength1
strength2
max_face_count
mask_size
mask_blur
prompt_for_face
apply_inside_mask_only
save_original_image
show_intermediate_steps
apply_scripts_to_faces
face_size
use_minimal_area
ignore_larger_faces

Thanks again for this wonderful extension!

picture outside the face is affected

When I use this face editor, the whole picture outside the face is affected (noise/elements are added).

The settings I used was:

  1. img2img: CFG=7, denoise=0 (or 0.2 to minimize changes elsewhere)
  2. face editor: mask margin 1.2, mask size 8, blur=0, denoise inside the mask 0.4, denoise for the rest 0,
    "apply inside mask only" checked.

After the processing, the arms have a lot of freckles, and the beach is dotted with many more white shell fragments.

Am I using it incorrectly?

Problems associated with variation seeds in batch generating images

When I use batch image generation with variation seeds turned on, the variation seed recorded in each generated image's info should increment by one. However, when I enable the face editor plugin, the variation seeds for the batch-generated images all show the first value and are not incremented.
For example, my variation seeds start at 4012764026 and I batch-generate 8 images. The PNG info should show variation seeds 4012764026-4012764033, and that is what happens when I'm not using face editor. When I use face editor, the PNG info shows 4012764026 for all images.

error message

TypeError: 'NoneType' object is not subscriptable

It was working fine until now; I'm not sure what happened. I tried updating to the newest version, but for some reason the newest version is not able to detect the face anymore....

Edit: I realized that the newest version can recognize the face only when the image is small (512x768), but not after the same image has been upscaled by, say, 2x. I tried setting the face detection confidence to the lowest value and still can't get it to detect the face in the enlarged image...

RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

Traceback (most recent call last):
File "D:\GitCode\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\GitCode\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\GitCode\stable-diffusion-webui\modules\img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\GitCode\stable-diffusion-webui\modules\scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "D:\GitCode\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 178, in run
return self.__proc_image(o, mask_model, detection_model,
File "D:\GitCode\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 261, in __proc_image
mask_image = self.__to_mask_image(
File "D:\GitCode\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 327, in to_mask_image
normalize(face_tensor, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
File "D:\GitCode\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py", line 363, in normalize
return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
File "D:\GitCode\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms_functional_tensor.py", line 928, in normalize
return tensor.sub
(mean).div
(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

Batch processing function

I really like this script, but it can only handle one image at a time. When I generate something like 100 large images at once, all the faces may need to be repaired. I wish this script could be placed on the txt2img tab and repair faces as they are generated; that would be more useful.
Thanks for your outstanding work.

`UnboundLocalError: local variable 'h' referenced before assignment`

Just found your extension and wanted to try it out. Sadly, even with the recommended settings, I can't get this to work, because I get this error on text2img:

Error running postprocess: D:\AIUI\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor_extension.py
Traceback (most recent call last):
  File "D:\AIUI\stable-diffusion-webui\modules\scripts.py", line 478, in postprocess
    script.postprocess(p, processed, *script_args)
  File "D:\AIUI\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor_extension.py", line 98, in postprocess
    script.proc_images(mask_model, detection_model, o, res,
  File "D:\AIUI\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 321, in proc_images
    proc = self.__proc_image(p, mask_model, detection_model,
  File "D:\AIUI\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 479, in __proc_image
    proc = process_images(p)
  File "D:\AIUI\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "D:\AIUI\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\AIUI\stable-diffusion-webui\modules\processing.py", line 1262, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\AIUI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\AIUI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "D:\AIUI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\AIUI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIUI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 650, in sample_dpmpp_2m_sde
    h_last = h
UnboundLocalError: local variable 'h' referenced before assignment

I use Web UI 1.3.0

[Feature Request] save face editor info to generated png EXIF

Save the parameters from the face editor to the EXIF information of the generated image.
And, conversely, allow restoring parameters from the PNG EXIF to the face editor (when clicking "send to txt2img").
This would allow images generated by the face editor to be reproduced, just like sd-webui-additional-networks does.

After restarting the webui, an error message appears:

Error loading script: face_editor.py
Traceback (most recent call last):
File "H:\SDW\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "H:\SDW\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 879, in exec_module
File "", line 1017, in get_code
File "", line 947, in source_to_code
File "", line 241, in _call_with_frames_removed
File "H:\SDW\stable-diffusion-webui\scripts\face_editor.py", line 75

<title>sd-face-editor/face_editor.py at main · ototadana/sd-face-editor</title> ^ SyntaxError: invalid character '·' (U+00B7)

"Prompt for face" for lora

I want to use "Prompt for face" option for lora to generate one custom face, but lora prompt have no effect.

Cannot display processed image with webui 1.3.2

Your extension still works wonderfully! But with the new version of the webui, 1.3.2, I get this error:

Traceback (most recent call last):s/it]
  File "D:\AIUI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\AIUI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1326, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\AIUI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1260, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "D:\AIUI\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 4461, in postprocess
    file = self.pil_to_temp_file(img, dir=self.DEFAULT_TEMP_DIR)
  File "D:\AIUI\stable-diffusion-webui\modules\ui_tempdir.py", line 55, in save_pil_to_file
    file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir)
  File "...\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 559, in NamedTemporaryFile
    file = _io.open(dir, mode, buffering=buffering,
  File "...\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 556, in opener
    fd, name = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
  File " ...\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 256, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '...\\AppData\\Local\\Temp\\gradio\\tmprp5_bug1.png'

I obscured the current user's path with "...". The error doesn't occur when the extension is turned off.

This error affects only the preview; there is no error during processing.

Got a value error while rendering face

Not sure what I'm doing wrong, but here's the console log. I'm running today's WebUI build.

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\img2img.py", line 150, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui\modules\scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui\scripts\face_editor.py", line 208, in run
    entire_image[
ValueError: could not broadcast input array from shape (364,364,3) into shape (364,364,4)
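
The shapes in the message point to a channel mismatch: a three-channel (RGB) face crop is being assigned into a four-channel (RGBA) destination array. A minimal numpy reproduction with an illustrative fix (the array names are hypothetical, not the extension's actual variables):

```python
import numpy as np

entire_image = np.zeros((960, 540, 4), dtype=np.uint8)  # RGBA canvas
face = np.zeros((364, 364, 3), dtype=np.uint8)          # RGB face crop

# entire_image[:364, :364] = face
# -> ValueError: could not broadcast input array from shape (364,364,3)
#    into shape (364,364,4)

# Matching the channel counts first avoids the error, e.g. by giving the
# RGB crop an opaque alpha channel:
alpha = np.full((364, 364, 1), 255, dtype=np.uint8)
entire_image[:364, :364] = np.concatenate([face, alpha], axis=2)
```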

Face not detected with this lora

I tried to use Face Editor with this LoRA, but somehow it failed to detect the faces properly in the generated image. Sometimes it only detects one face; other times it won't detect any face even when I can see there are three. The face detection confidence is set to 0.7, and I checked "Use minimal area (for close faces)", but it doesn't help. I tried restarting the webui and reinstalling Face Editor, with no luck. Maybe these faces are too close to each other? As a comparison, ADetailer detects the faces correctly.

Here's the prompt used for testing:

masterpiece, (RAW photo, best quality), formal art, photo-realistic, 1girl, <lora:threeheads-v1:1> (3head, three heads:1.5)

Where are the pictures from before Face Editor runs?

When I generate 250 batches with a batch size of 8 (2,000 pictures in total), Face Editor runs after all the pictures have been generated, but RAM usage keeps increasing until it runs out of memory. Where can I find the pictures from before Face Editor processes them? Thank you for your generous contribution!

[bug] Can't work with controlnet at the same time

While working with ControlNet, it crashes when image generation reaches the Face Editor phase (which is the end of the whole job).
If I disable ControlNet, Face Editor works well; and vice versa, if I disable Face Editor, ControlNet works well.

I'll post the error message later.

Denoising problem

Hi, it's me again

I used to use img2img to change the style and the face, and now I've found there are three denoising passes happening during the process; most of the time they change details I wanted to keep outside the face. So I am wondering whether you could merge some of these functions so that they don't affect so much of the area.

Three denoising passes:
The first is the basic img2img pass; I think it affects the whole image.
The other two come from the Face Editor script.

I have two questions:
1. The denoising pass on the face is surely needed, but does it run on the original image I first put into img2img, or after img2img has already changed the image? I found two images in the output, and one of them seems to be the normal img2img output.

2. From the description, the second denoising pass is meant to make the new face fit with the rest of the image, but it also changes details away from the face. Is it possible to make it affect only the margin around the face? And could it be merged with the first face-denoising pass, so that the new face and its margin are produced at the same time?

Thanks in advance for your work! Good day!

Script skips running


Processing is skipped when the prompt contains ",,". If you use autocompletion and styles at the same time, ",," can easily appear, and normally it has no effect.
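
If the skip is triggered by empty segments in the comma-separated prompt, one conceivable fix on the extension side would be to normalize the prompt before acting on it; a sketch under that assumption:

```python
def normalize_prompt(prompt: str) -> str:
    # Split on commas, drop the empty segments that ",," produces
    # (e.g. when autocompletion and styles are combined), and rejoin.
    parts = (p.strip() for p in prompt.split(","))
    return ", ".join(p for p in parts if p)

print(normalize_prompt("masterpiece,, 1girl, smiling,,"))
# -> masterpiece, 1girl, smiling
```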

ValueError: images do not match

It completes the facial repair work excellently, but it reports an error during operation.

number of faces: 1
100%|██████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 10.65it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00,  9.67it/s]
Error completing request
Arguments: ('task(8qx7mze6oljsi4q)', 0, 'oilpaint,Sexy Dakotaskye smiling at you, (night-time:1.4) rave, (realistic face:1.3), (long hair),(Busty:1.3),(underboob:1.2), tight abs, (navel piercing:1.2), dancing on stage,low cut latex hotpants, cameltoe, perfect body, deep tan, happy expression, hand on hip, High Contrast, volumetric lighting, candid, Photograph, high resolution, 4k, 8k, Bokeh,((white sports bra)),((white shoes)),((black pants))', '3d, cartoon, anime, sketches, (worst quality:2), (low quality:2), (normal quality:2), low-res, normal quality, ((monochrome)), ((grayscale)), skin spots, acne, skin blemishes, bad anatomy, ((child)) ((loli)), tattoos, bad_prompt_version2, ng_deepnegative_v1_75t, (asian.1.2) bad-hands-5, handbag, Poorly drawn hands, ((too many fingers)), ((bad fingers)) bad-image-v2-39000,((nsfw)),naked,nude,breast,((Gloves)),((t-shirt)),((shirt))\n', [], <PIL.Image.Image image mode=RGBA size=540x960 at 0x1A190A1BA90>, None, None, None, None, None, None, 27, 16, 4, 0, 1, False, False, 1, 1, 6.5, 1.5, 0.85, 4128014353.0, -1.0, 0, 0, 0, False, 960, 540, 0, 0, 32, 0, '', '', '', [], 1, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <scripts.external_code.ControlNetUnit object at 0x000001A2015215D0>, <scripts.external_code.ControlNetUnit object at 0x000001A201521060>, <scripts.external_code.ControlNetUnit object at 0x000001A201521C60>, <scripts.external_code.ControlNetUnit object at 0x000001A201548760>, <scripts.external_code.ControlNetUnit object at 0x000001A201549660>, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, 1.6, 0.97, 0.4, 0.15, 20, 0, 0, '', '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'None', '', '', 1, 'FirstGen', False, False, 'InputFrame', False,   1 2 3
0      , False, '', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '', '', 20.0, '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\img2img.py", line 170, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "F:\stable-diffusion-webui\modules\scripts.py", line 407, in run
    processed = script.run(p, *script_args)
  File "F:\stable-diffusion-webui\scripts\face_editor.py", line 167, in run
    return self.__proc_image(o, mask_model, detection_model,
  File "F:\stable-diffusion-webui\scripts\face_editor.py", line 271, in __proc_image
    proc = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 711, in process_images_inner
    image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), p.mask_for_overlay.convert('L')).convert('RGBA')
  File "F:\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 3341, in composite
    image.paste(image1, None, mask)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 1731, in paste
    self.im.paste(im, box, mask.im)
ValueError: images do not match
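
`PIL.Image.composite` requires all three images to be exactly the same size (and the mask to have a compatible mode such as `L`), otherwise it raises `ValueError: images do not match`. A minimal illustration of the constraint, with illustrative sizes:

```python
from PIL import Image

base = Image.new("RGBA", (540, 960))
overlay = Image.new("RGBA", (540, 960))
mask = Image.new("L", (540, 952))  # slightly wrong size

try:
    Image.composite(overlay, base, mask)
except ValueError as e:
    print(e)  # -> images do not match

# Resizing the mask to the image size satisfies the constraint.
Image.composite(overlay, base, mask.resize(base.size))
```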

