
X-Adapter

This repository is the official implementation of X-Adapter.

X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model
Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, Mike Zheng Shou

Project Website | arXiv

(Overview figure)

X-Adapter enables plugins pretrained on an old model version (e.g., SD1.5) to work directly with an upgraded model (e.g., SDXL) without further retraining.

Thanks to @kijai for the ComfyUI implementation here! Please refer to this tutorial for hyperparameter settings.

News

  • [17/02/2024] Inference code released

Setup

Requirements

conda create -n xadapter python=3.10
conda activate xadapter

pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt

Installing xformers is highly recommended for better efficiency and lower GPU memory cost.
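For example (the version pin is an assumption; pick the build matching your torch/CUDA versions):

pip install xformers==0.0.16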

Weights

[Stable Diffusion] Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The pre-trained Stable Diffusion models can be downloaded from Hugging Face (e.g., Stable Diffusion v1-5). You can also use fine-tuned Stable Diffusion models trained on different styles (e.g., Anything V4.0, Redshift, etc.).
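As a minimal sketch of loading a base model with the diffusers library (the model ID is one example; any SD1.5-style checkpoint works):

from diffusers import StableDiffusionPipeline

# Downloads (or loads from the local cache) the SD1.5 weights from Hugging Face
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")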

[ControlNet] ControlNet is a method to control diffusion models with spatial conditions. You can download the ControlNet family here.
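A sketch of fetching one member of the family with diffusers (the model ID is illustrative):

from diffusers import ControlNetModel

# A canny-edge ControlNet trained against SD1.5
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")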

[LoRA] LoRA is a lightweight adapter for fine-tuning large-scale pretrained models. It is widely used for style or identity customization in diffusion models. You can download LoRAs from the diffusion community (e.g., Civitai).
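A sketch of attaching community LoRA weights to a pipeline with diffusers (the folder and file name are hypothetical; X-Adapter's own scripts instead take the path via --lora_model_path):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Load LoRA weights downloaded from e.g. Civitai (file name is hypothetical)
pipe.load_lora_weights("./checkpoint/lora", weight_name="MoXin.safetensors")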

Checkpoint

Models can be downloaded from our Hugging Face page. Put the checkpoint in the folder ./checkpoint/X-Adapter.
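A sketch of fetching the checkpoint programmatically with huggingface_hub (the repo ID is an assumption; substitute the one shown on the Hugging Face page):

from huggingface_hub import snapshot_download

# Download the X-Adapter weights into the folder the scripts expect
snapshot_download(repo_id="Lingmin-Ran/X-Adapter", local_dir="./checkpoint/X-Adapter")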

Usage

After preparing all checkpoints, we can run inference code using different plugins. You can refer to this tutorial to quickly get started with X-Adapter.

ControlNet Inference

Set --controlnet_canny_path or --controlnet_depth_path to the ControlNet path in the bash script. The default value points to the corresponding Hugging Face model card.

sh ./bash_scripts/canny_controlnet_inference.sh
sh ./bash_scripts/depth_controlnet_inference.sh
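For reference, the underlying call looks roughly like this (the --plugin_type value and prompts are assumptions; the path flag is the one named above):

python inference.py --plugin_type "controlnet" --controlnet_canny_path "lllyasviel/sd-controlnet-canny" --prompt "a photo of a cat" --prompt_sd1_5 "a photo of a cat"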

LoRA Inference

Set --lora_model_path to the LoRA checkpoint in the bash script. In this example we use MoXin and place it in the folder ./checkpoint/lora.

sh ./bash_scripts/lora_inference.sh
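A full invocation mirroring the flags used elsewhere in this repo (paths and prompts are illustrative):

python inference.py --plugin_type "lora" --lora_model_path "./checkpoint/lora/MoXin.safetensors" --prompt "masterpiece, best quality, 1 girl, solo, smile" --prompt_sd1_5 "masterpiece, best quality, 1 girl, solo, smile, shuimobysim" --adapter_guidance_start_list 0.7 --adapter_condition_scale_list 1.00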

ControlNet-Tile Inference

Set --controlnet_tile_path to the ControlNet-tile path in the bash script. The default value points to the corresponding Hugging Face model card.

sh ./bash_scripts/controlnet_tile_inference.sh

Cite

If you find X-Adapter useful for your research and applications, please cite us using this BibTeX:

@article{ran2023xadapter,
  title={X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model},
  author={Lingmin Ran and Xiaodong Cun and Jia-Wei Liu and Rui Zhao and Song Zijie and Xintao Wang and Jussi Keppo and Mike Zheng Shou},
  journal={arXiv preprint arXiv:2312.02238},
  year={2023}
}

Contributors

eltociear, semolce9, vinthony


Issues

Is the SD1.5 base pass to T0 mandatory?

I am trying to implement X-Adapter in A1111 here: https://github.com/huchenlei/sd-webui-xadapter/. As A1111 does not support running two denoising UNets side by side (or changing this would require more engineering effort than I would like to commit), I decided to make the SD1.5 pass run before the SDXL pass. The hidden-state outputs of the last 3 decoder blocks are stored and then applied back onto the SDXL model via X-Adapter. The implementation can be found here: https://github.com/huchenlei/sd-webui-xadapter/blob/main/scripts/xadapter.py

I am omitting the SD1.5 base pass from T0 for simplicity. However, the result is not satisfactory. Am I missing something? Is the SD1.5 base pass mandatory?

SDXL to SD 1.5

If I train an SDXL LoRA and decide to use it on an SD 1.5 model, will it work using X-Adapter?

Convert the LoRA version?

Is it possible to directly convert a LoRA from SD 1.5 to SDXL (or the reverse) with X-Adapter, instead of converting it during each image-generation run?

Some inconsistencies

StableDiffusionXLAdapterControlnetPipeline is missing adapter_type (but the other two pipelines have it).

StableDiffusionXLAdapterControlnetPipeline and StableDiffusionXLAdapterControlnetI2IPipeline are both missing prompt_embeds_sd1_5 and negative_prompt_embeds_sd1_5 (but the regular pipeline has them).

None of them have negative_prompt_sd1_5 despite having prompt_sd1_5.

Implementation

Will X-Adapter be implemented in Automatic1111 and ComfyUI? Also, does it work for both checkpoints and LoRAs?

Also, will I be able to use SDXL LoRAs on SD 1.5 models?

ComfyUI Implementation

Any consideration for actually implementing this tool into one of the platforms that people are using?

It is disheartening to see researchers spend months building something like this, only to post it and have it forgotten because it is too complicated for a bunch of amateur coders like ourselves to implement, or so low on the priority list of the devs who could actually implement it that they never do.

Your team has built an incredible tool here, and I fear that in this fast-moving space it's going to be a complete waste of time unless you are open to collaborating with the community on what we actually want to use it for <3

IndexError: index 51 is out of bounds for dimension 0 with size 51

Code:

python inference.py --plugin_type "lora" --prompt "masterpiece, best quality, ultra detailed, 1 girl , solo, smile, looking at viewer, holding flowers"  --prompt_sd1_5 "masterpiece, best quality, ultra detailed, 1 girl, solo, smile, looking at viewer, holding flowers, shuimobysim, wuchangshuo, bonian, zhenbanqiao, badashanren" --adapter_guidance_start_list 0.7 --adapter_condition_scale_list 1.00 --seed 3943946911

Error Message:

packages\diffusers\schedulers\scheduling_dpmsolver_multistep.py", line 649, in multistep_dpm_solver_second_order_update
    self.sigmas[self.step_index + 1],
IndexError: index 51 is out of bounds for dimension 0 with size 51

Other ControlNets?

Does this work with ControlNets other than canny, depth, and tile, or would I have to train the mapping layers for specific ones?

Does Instruct pix2pix (ControlNet version) work significantly better through X-Adapter?

Incredible work on this research!

I noticed that Instruct pix2pix (ControlNet version) through X-Adapter looks like it understood the instruction much better than SD 1.5 (and produced a significantly better image). Is this generally true, or was that a cherry-picked example?

Do you have more examples of Instruct pix2pix comparisons through your adapter vs base 1.5?

[Bug] Adapter cannot be applied directly as described in the paper

In the paper, the authors show that X-Adapter can be applied directly to an SDXL checkpoint to denoise pure latent noise while applying the pluggable control condition (ControlNet, LoRA, etc.).
(screenshot)

However, according to my testing, this is not true. If I remove the SD1.5 pass, the SDXL model with the SD1.5 ControlNet applied via the adapter cannot produce images that follow the control constraint.

Steps to reproduce

(screenshot)

Is the adapter really making a difference?

I suspect that X-Adapter may not really be doing its job of transferring the control of the SD1.5 ControlNet model to SDXL. If we start at T0 with the SD1.5 ControlNet already applied for some steps, then as long as the SDXL model does not make too many extra modifications, the result should not deviate much from the T0 result.

Here I compare the result between 2 scenarios:

No Adapter

Add up_block_additional_residual = None after https://github.com/kijai/ComfyUI-Diffusers-X-Adapter/blob/81b77e441c2dd5566acf5025fa9cadfb8a574490/pipeline/pipeline_sd_xl_adapter_controlnet.py#L1166

Essentially, this generates with ControlNet using the SD1.5 model and then refines with the SDXL model.
(screenshots)

With Adapter

(screenshots)

I do not observe a clear improvement from the X-Adapter.

License

Hi,
Thank you for releasing this code! Would you mind adding an open source license?
Thank you!

Error when changing width and height

(screenshot)

Error occurred when executing Diffusers_X_Adapter:

requested an output size of (128, 128), but valid sizes range from [127, 95] to [128, 96] (for an input of torch.Size([64, 48]))

File "F:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 273, in load_checkpoint
pipe(prompt=prompt_sdxl, negative_prompt=negative_prompt, prompt_sd1_5=prompt_sd1_5,
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\pipeline\pipeline_sd_xl_adapter_controlnet.py", line 1117, in call
up_block_additional_residual = self.adapter(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 292, in forward
out = self.body[idx](x[i], output_size=output_size, temb=t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 150, in forward
x = self.up_opt(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 102, in forward
return self.op(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 948, in forward
output_padding = self._output_padding(
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 659, in _output_padding
raise ValueError(


Code release

This paper seems incredible! Do you plan on releasing the code for it?

Sorry, how do I use it exactly?

I'm very interested in this feature of yours, but as a non-programmer, I'm not sure how to use it.
1. Is this designed for WebUI or ComfyUI?
2. Where should I download this installation package to, and besides the code, where in the interface can I see and use it?

Subject: Feature Request: Support for Loading Local .safetensors Files for SD1.5 and SDXL Models

Dear [X-Adapter Development Team],

I hope this message finds you well. I am writing to express my appreciation for your work on the X-Adapter project, which I have found to be incredibly useful and well-crafted. Your efforts have undoubtedly contributed significantly to the community, and I am grateful for your dedication.

I am reaching out to kindly request a feature enhancement that I believe would add substantial value to X-Adapter. Specifically, I am interested in the ability to load .safetensors format files for both the SD1.5 and SDXL models from a local source. This feature would greatly benefit users like myself who work in environments where direct access to these models via the internet is restricted or prefer to use locally stored models for performance reasons.

The ability to specify local .safetensors files for model loading could enhance the flexibility and usability of X-Adapter, allowing for a wider range of applications and environments. I understand that implementing this feature may require significant effort and resources, but I believe it would be a valuable addition to the already impressive capabilities of X-Adapter.

I am more than willing to provide further details or clarification if needed. Additionally, I am open to contributing to this feature's development or testing, should you find it feasible to consider this request.

Thank you for considering my request, and once again, I would like to express my deepest appreciation for your work on the X-Adapter project. Your commitment to open-source development and the betterment of the community is truly inspiring.

Warm regards,

[katyukuki]

Where can I get X_Adapter_v1.bin?

Error occurred when executing Diffusers_X_Adapter:

[Errno 2] No such file or directory: 'K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\checkpoints\X-Adapter\X_Adapter_v1.bin'

File "K:\ComfyUI-aki-v1\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 256, in load_checkpoint
adapter_ckpt = torch.load(os.path.join(adapter_checkpoint_path, "X_Adapter_v1.bin"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 986, in load
with _open_file_like(f, 'rb') as opened_file:
^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 435, in _open_file_like
return _open_file(name_or_buffer, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 416, in init
super().init(open(name, mode))
^^^^^^^^^^^^^^^^
