
X-Adapter's Issues

ComfyUI Implementation

Is there any consideration for actually implementing this tool in one of the platforms that people are using?

It's disheartening to see researchers spend months building something like this, only to post it and have it forgotten because it's too complicated for amateur coders like us to implement, and too low on the priority list of the devs who actually could implement it.

Your team has built an incredible tool here, and I fear that in this fast-moving space it's going to be a complete waste of time unless you are open to collaborating with the community on what we actually want to use it for <3

Code release

This paper seems incredible! Do you plan on releasing the code for it?

Error when changing width and height


Error occurred when executing Diffusers_X_Adapter:

requested an output size of (128, 128), but valid sizes range from [127, 95] to [128, 96] (for an input of torch.Size([64, 48]))

File "F:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 273, in load_checkpoint
pipe(prompt=prompt_sdxl, negative_prompt=negative_prompt, prompt_sd1_5=prompt_sd1_5,
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\pipeline\pipeline_sd_xl_adapter_controlnet.py", line 1117, in __call__
up_block_additional_residual = self.adapter(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 292, in forward
out = self.body[idx](x[i], output_size=output_size, temb=t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 150, in forward
x = self.up_opt(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 102, in forward
return self.op(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 948, in forward
output_padding = self._output_padding(
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 659, in _output_padding
raise ValueError(
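
This looks like a width/height mismatch between the two branches rather than a PyTorch bug: per the message, the adapter upsamples a 64x48 SD1.5 latent (a 512x384 image) by 2x, so the SDXL resolution must be 1024x768; requesting 1024x1024 (a 128x128 latent) falls outside the reachable range. A minimal sketch of the valid-range check, assuming the adapter's upsampler is a stride-2 ConvTranspose2d with kernel 3 and padding 1 (parameters chosen to reproduce the reported range, not taken from the source):

import torch
import torch.nn as nn

# A stride-2 transposed conv can only reach output sizes in
# [default, default + stride - 1] per spatial dimension.
up = nn.ConvTranspose2d(4, 4, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 4, 64, 48)               # latent of a 512x384 SD1.5 image

print(up(x, output_size=(128, 96)).shape)   # OK: a 1024x768 SDXL latent
up(x, output_size=(128, 128))               # ValueError: 128 not in [95, 96]

If that is the cause, keeping width/height at exactly 2x width_sd1_5/height_sd1_5 should avoid the error.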


Implementation

Will X-Adapter be implemented in Automatic1111 and ComfyUI? Also, does it work for both checkpoints and LoRAs?

And will I be able to use SDXL LoRAs on SD 1.5 models?

License

Hi,
Thank you for releasing this code! Would you mind adding an open source license?
Thank you!

Sorry, how do I use it exactly?

I'm very interested in this feature of yours, but as a non-programmer I'm not sure how to use it.
1. Is this designed for the web UI or ComfyUI?
2. Where should I put the downloaded package, and, besides the code, where in the interface can I see and use it?

SDXL to SD 1.5

If I train an SDXL LoRA and decide to use it on an SD 1.5 model, will it work using X-Adapter?

Where can I get X_Adapter_v1.bin?

Error occurred when executing Diffusers_X_Adapter:

[Errno 2] No such file or directory: 'K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\checkpoints\X-Adapter\X_Adapter_v1.bin'

File "K:\ComfyUI-aki-v1\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 256, in load_checkpoint
adapter_ckpt = torch.load(os.path.join(adapter_checkpoint_path, "X_Adapter_v1.bin"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 986, in load
with _open_file_like(f, 'rb') as opened_file:
^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 435, in _open_file_like
return _open_file(name_or_buffer, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 416, in __init__
super().__init__(open(name, mode))
^^^^^^^^^^^^^^^^
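
The node expects the adapter weights at checkpoints\X-Adapter\X_Adapter_v1.bin inside the custom node folder (see the torch.load call above). A minimal sketch of fetching the file with huggingface_hub; the repo_id below is an assumption, so verify it against the official X-Adapter README before running:

from huggingface_hub import hf_hub_download

# repo_id is an assumption -- check the X-Adapter README for the
# official download location of X_Adapter_v1.bin.
path = hf_hub_download(
    repo_id="Lingmin-Ran/X-Adapter",
    filename="X_Adapter_v1.bin",
    local_dir="ComfyUI/custom_nodes/ComfyUI-Diffusers-X-Adapter/checkpoints/X-Adapter",
)
print("saved to", path)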

Unexpected result

I trained a ControlNet on a custom dataset using this tutorial: https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README.md

The conditioning image (179_triple) and the image generated by the ControlNet (output) were attached as screenshots.

But X-Adapter didn't produce the expected result.

I made some changes in inference_controlnet.py to handle the condition_type argument:

    if args.condition_type == "canny":
        controlnet_path = args.controlnet_canny_path
        canny = CannyDetector()
    elif args.condition_type == "depth":
        controlnet_path = args.controlnet_depth_path  # todo: haven't defined in args
        depth = MidasDetector.from_pretrained("lllyasviel/Annotators")
    elif args.condition_type == "mask":
        controlnet_path = args.controlnet_mask_path
    else:
        raise NotImplementedError("not implemented yet")

    prompt = args.prompt
    if args.prompt_sd1_5 is None:
        prompt_sd1_5 = prompt
    else:
        prompt_sd1_5 = args.prompt_sd1_5

    if args.negative_prompt is None:
        negative_prompt = ""
    else:
        negative_prompt = args.negative_prompt

    torch.set_grad_enabled(False)
    torch.backends.cudnn.benchmark = True

    # load controlnet
    print(controlnet_path)
    
    controlnet = ControlNetModel.from_pretrained(
        controlnet_path, torch_dtype=weight_dtype
    )
    print('successfully load controlnet')

    input_image = Image.open(args.input_image_path)
    # input_image = input_image.resize((512, 512), Image.LANCZOS)
    input_image = input_image.resize((args.width_sd1_5, args.height_sd1_5), Image.LANCZOS)
    if args.condition_type == "canny":
        control_image = canny(input_image)
        control_image.save(f'{args.save_path}/{prompt[:10]}_canny_condition.png')
    elif args.condition_type == "depth":
        control_image = depth(input_image)
        control_image.save(f'{args.save_path}/{prompt[:10]}_depth_condition.png')
    elif args.condition_type == "mask":
        control_image = input_image
        control_image.save(f'{args.save_path}/{prompt[:10]}_mask_condition.png')
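
Since the snippet references ControlNet paths that may not all be defined in the arg parser (see the todo above), here is a sketch of the corresponding argparse additions; the default repo ids are illustrative, not taken from the X-Adapter source:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--controlnet_canny_path", type=str,
                    default="lllyasviel/sd-controlnet-canny")
parser.add_argument("--controlnet_depth_path", type=str,
                    default="lllyasviel/sd-controlnet-depth")  # for the depth branch todo
parser.add_argument("--controlnet_mask_path", type=str, default=None,
                    help="path to the custom-trained mask ControlNet")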

the command:

python inference.py --plugin_type "controlnet" --prompt "a metal_nut with a bent" --condition_type "mask" --input_image_path ".mvtec/metal_nut/bent/source/179_triple.png"  --controlnet_condition_scale_list 1.0 2.0 --adapter_guidance_start_list 1.00 --adapter_condition_scale_list 1.0 1.20 --height 1024 --width 1024 --height_sd1_5 512 --width_sd1_5 512

A screenshot of the conditioning image and the generated image was attached (Screenshot from 2024-06-28 09-29-46).

Is the SD1.5 base pass to T0 mandatory?

I am trying to implement X-Adapter in A1111 here: https://github.com/huchenlei/sd-webui-xadapter/. Since A1111 does not support running two denoising UNets side by side (or changing this would need more engineering effort than I'd like to commit), I decided to make the SD1.5 pass run before the SDXL pass. The hidden-state outputs of the last three decoder blocks are stored and then applied back onto the SDXL model via X-Adapter, as sketched below. The implementation can be found here: https://github.com/huchenlei/sd-webui-xadapter/blob/main/scripts/xadapter.py

I am omitting the SD1.5 base pass from T0 for simplicity. However, the result is not satisfactory. Am I missing something? Is the SD1.5 base pass mandatory?
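
For reference, a toy sketch of the store-then-apply scheme described above, using forward hooks; the modules are stand-ins for the real UNets and adapter, not the sd-webui-xadapter API:

import torch
import torch.nn as nn

# Stand-ins: "small" plays SD1.5, "large" plays SDXL, "mapper" plays X-Adapter.
small = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))
large = nn.Linear(8, 8)
mapper = nn.Linear(8, 8)

# Pass 1: record a block's hidden state with a forward hook.
stored = []
handle = small[1].register_forward_hook(lambda m, i, o: stored.append(o.detach()))
x = torch.randn(1, 8)
small(x)
handle.remove()

# Pass 2: map the stored state and add it back as an additive residual.
residual = mapper(stored[0])
out = large(x) + residual
print(out.shape)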

Feature Request: Support for Loading Local .safetensors Files for SD1.5 and SDXL Models

Dear [X-Adapter Development Team],

Thank you for your work on the X-Adapter project; it is incredibly useful and well-crafted, and it has contributed a great deal to the community.

I would like to request a feature that I believe would add substantial value: the ability to load .safetensors checkpoint files for both the SD1.5 and SDXL models from a local path. This would greatly benefit users who work in environments where direct internet access to the models is restricted, or who prefer locally stored models for performance reasons.

Being able to specify local .safetensors files would make X-Adapter usable in a wider range of applications and environments. I understand that implementing this may require significant effort, but I believe it would be a valuable addition to an already impressive tool.
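
For what it's worth, recent diffusers releases can load single-file .safetensors checkpoints directly, which might serve as a starting point; a minimal sketch, with illustrative local paths:

from diffusers import StableDiffusionPipeline, StableDiffusionXLPipeline

# Illustrative paths -- point these at your local checkpoint files.
sd15 = StableDiffusionPipeline.from_single_file("models/sd15_custom.safetensors")
sdxl = StableDiffusionXLPipeline.from_single_file("models/sdxl_custom.safetensors")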

I am happy to provide further details, and I am open to contributing to this feature's development or testing.

Thank you for considering my request, and once again, thank you for your commitment to open-source development.

Warm regards,

[katyukuki]

IndexError: index 51 is out of bounds for dimension 0 with size 51

Command:

python inference.py --plugin_type "lora" --prompt "masterpiece, best quality, ultra detailed, 1 girl , solo, smile, looking at viewer, holding flowers"  --prompt_sd1_5 "masterpiece, best quality, ultra detailed, 1 girl, solo, smile, looking at viewer, holding flowers, shuimobysim, wuchangshuo, bonian, zhenbanqiao, badashanren" --adapter_guidance_start_list 0.7 --adapter_condition_scale_list 1.00 --seed 3943946911

Error Message:

packages\diffusers\schedulers\scheduling_dpmsolver_multistep.py", line 649, in multistep_dpm_solver_second_order_update
    self.sigmas[self.step_index + 1],
IndexError: index 51 is out of bounds for dimension 0 with size 51
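
This looks like the scheduler being stepped one more time than set_timesteps configured: DPMSolverMultistepScheduler keeps num_inference_steps + 1 sigmas, so a 51st step() call on a 50-step schedule reads sigmas[51]. A minimal sketch of the invariant, assuming stock diffusers (the two-stage adapter_guidance_start loop is a plausible place for the off-by-one, but that is a guess):

import torch
from diffusers import DPMSolverMultistepScheduler

sched = DPMSolverMultistepScheduler()
sched.set_timesteps(50)
print(len(sched.sigmas))  # 51: one sigma per step plus a trailing terminal sigma

sample = torch.randn(1, 4, 64, 64)
for t in sched.timesteps:  # exactly 50 step() calls is safe
    sample = sched.step(torch.randn_like(sample), t, sample).prev_sample
# One extra step() would read sigmas[step_index + 1] == sigmas[51] -> IndexError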

Convert the LoRA version?

Is it possible to convert a LoRA from SD1.5 to SDXL (or the reverse) once, up front, with the X-Adapter program, instead of converting it during each image-generation run?

[Bug] Adapter cannot be applied directly as described in the paper

In the paper, the authors show that X-Adapter can be applied directly to an SDXL checkpoint to denoise pure latent noise while applying the pluggable control condition (ControlNet, LoRA, etc.).

However, according to my testing, this is not true. If I remove the SD1.5 pass, the SDXL model with the SD1.5 ControlNet applied via the adapter cannot produce images that follow the control constraint.

Steps to reproduce: see the attached screenshot.

Is the adapter really making a difference?

I suspect that the X-Adapter is not really doing its job of transferring the control of the SD1.5 ControlNet model to SDXL. If we start at T0 with the SD1.5 ControlNet already applied for some steps, then as long as the SDXL model does not make too many extra modifications, the result should not deviate much from the T0 result.

Here I compare the results of two scenarios:

No Adapter

Add up_block_additional_residual = None after https://github.com/kijai/ComfyUI-Diffusers-X-Adapter/blob/81b77e441c2dd5566acf5025fa9cadfb8a574490/pipeline/pipeline_sd_xl_adapter_controlnet.py#L1166

Essentially this generates with ControlNet using the SD1.5 model, then refines with the SDXL model (see the sketch below).
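For reference, the ablation amounts to discarding the adapter's contribution right after it is computed (a sketch of the patched region, not a standalone script):

# pipeline_sd_xl_adapter_controlnet.py, just after the linked line
up_block_additional_residual = self.adapter(hidden_states)  # existing line
up_block_additional_residual = None  # added: drop the adapter residuals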

With Adapter


Comparing the attached result images for the two scenarios, I do not observe a clear improvement from the X-Adapter.

Instruct-pix2pix (ControlNet version) works significantly better through X-Adapter?

Incredible work on this research!

I noticed that Instruct-pix2pix (ControlNet version) run through X-Adapter seems to understand the instruction much better than SD 1.5 (and produces a significantly better image). Is this generally true, or was that a cherry-picked example?

Do you have more examples of Instruct-pix2pix comparisons through your adapter vs. base 1.5?

Other controlnets?

Does this work with ControlNets other than canny, depth, and tile? Or would I have to train the mapping layers for specific ones?

Some inconsistencies

StableDiffusionXLAdapterControlnetPipeline is missing adapter_type (but the other two pipelines have it).

StableDiffusionXLAdapterControlnetPipeline and StableDiffusionXLAdapterControlnetI2IPipeline are both missing prompt_embeds_sd1_5 and negative_prompt_embeds_sd1_5 (but the regular pipeline has them).

None of them have negative_prompt_sd1_5 despite having prompt_sd1_5.
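
For concreteness, a sketch of the aligned call signature these points imply; the parameter names are taken from the existing pipelines, but this is a proposal, not the current code:

class StableDiffusionXLAdapterControlnetPipeline:
    def __call__(
        self,
        prompt=None,
        prompt_sd1_5=None,
        negative_prompt=None,
        negative_prompt_sd1_5=None,         # missing from all three pipelines today
        prompt_embeds_sd1_5=None,           # missing from both ControlNet pipelines
        negative_prompt_embeds_sd1_5=None,  # missing from both ControlNet pipelines
        adapter_type=None,                  # missing from this pipeline only
        **kwargs,
    ):
        ...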
