showlab / X-Adapter
[CVPR 2024] X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model
Home Page: https://showlab.github.io/X-Adapter/
License: Apache License 2.0
Any consideration for actually implementing this tool into one of the platforms that people are using?
Disheartening to see researchers spend months building something like this, then post it only for it to be forgotten because it's too complicated for a bunch of amateur coders like ourselves to implement, or so low on the priority list for the devs who could implement it that they never do.
Your team has built an incredible tool here, and I fear that in this fast-moving space it's going to be a complete waste of time unless you are open to collaborating with the community on what we actually want to use it for <3
Can you also add the remaining pipelines for Image2Image, Inpaint, and ControlnetInpaint?
How can I use multiple ControlNets (multi-ControlNet)?
This paper seems incredible! Do you plan on releasing the code for it?
Error occurred when executing Diffusers_X_Adapter:
requested an output size of (128, 128), but valid sizes range from [127, 95] to [128, 96] (for an input of torch.Size([64, 48]))
File "F:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 273, in load_checkpoint
pipe(prompt=prompt_sdxl, negative_prompt=negative_prompt, prompt_sd1_5=prompt_sd1_5,
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\pipeline\pipeline_sd_xl_adapter_controlnet.py", line 1117, in __call__
up_block_additional_residual = self.adapter(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 292, in forward
out = self.body[idx](x[i], output_size=output_size, temb=t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 150, in forward
x = self.up_opt(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Diffusers-X-Adapter\xadapter\model\adapter.py", line 102, in forward
return self.op(x, output_size)
^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 948, in forward
output_padding = self._output_padding(
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 659, in _output_padding
raise ValueError(
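This ValueError comes from `ConvTranspose2d`: the requested `output_size` lies outside the range the layer can reach from a 64×48 feature map, which typically means the latent resolution fed to the adapter does not match the requested output. The shape arithmetic can be reproduced with a standalone layer whose kernel/stride/padding (3/2/1) are guessed to match the numbers in the error message; this is an illustration, not the adapter's actual configuration:

```python
import torch
import torch.nn as nn

# Hypothetical reproduction of the shape mismatch: kernel=3, stride=2,
# padding=1 yields valid output heights [127, 128] and widths [95, 96]
# for a 64x48 input, exactly the range quoted in the error.
up = nn.ConvTranspose2d(4, 4, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 4, 64, 48)  # non-square feature map, as in the error

ok = up(x, output_size=(128, 96))  # within the valid range
print(ok.shape)                    # torch.Size([1, 4, 128, 96])

try:
    up(x, output_size=(128, 128))  # width 128 is outside [95, 96]
except ValueError as e:
    print("ValueError:", e)
```

In practice this usually means the SD1.5 and SDXL resolutions are not in the expected 1:2 ratio, or the image dimensions are not multiples of the UNet's downsampling factor.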
Hello, I was wondering how setting a control guidance end would affect the adapter parameters like adapter_guidance_start? Is there an adapter_guidance_end?
Will X-Adapter be implemented in Automatic1111 and ComfyUI? Also, does it work for both checkpoints and LoRAs?
Also, will I be able to use SDXL LoRAs on SD 1.5 models?
Hi,
Thank you for releasing this code! Would you mind adding an open source license?
Thank you!
I'm very interested in this feature of yours, but as a non-programmer, I'm not sure how to use it.
1. Is this designed for the web UI or ComfyUI?
2. Where should I download this package to, and besides the code, where in the interface can I see and use it?
If I train a SDXL Lora and decide to use it on a SD 1.5 model, will it work using X-Adapter?
Error occurred when executing Diffusers_X_Adapter:
[Errno 2] No such file or directory: 'K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\checkpoints\X-Adapter\X_Adapter_v1.bin'
File "K:\ComfyUI-aki-v1\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1\custom_nodes\ComfyUI-Diffusers-X-Adapter\nodes.py", line 256, in load_checkpoint
adapter_ckpt = torch.load(os.path.join(adapter_checkpoint_path, "X_Adapter_v1.bin"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 986, in load
with _open_file_like(f, 'rb') as opened_file:
^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 435, in _open_file_like
return _open_file(name_or_buffer, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI-aki-v1.ext\Lib\site-packages\torch\serialization.py", line 416, in __init__
super().__init__(open(name, mode))
^^^^^^^^^^^^^^^^
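The traceback above ends in a plain FileNotFoundError because `torch.load` is handed a path that does not exist; the fix is simply to download `X_Adapter_v1.bin` into the node's `checkpoints/X-Adapter` folder. A small pre-flight check (the function name and error text here are illustrative, not the node's actual API) turns the deep stack trace into a clear message:

```python
import os

# Defensive sketch: verify the adapter checkpoint exists before handing
# it to torch.load, so a missing download fails with a helpful hint.
def require_adapter_checkpoint(checkpoint_dir, name="X_Adapter_v1.bin"):
    path = os.path.join(checkpoint_dir, name)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"{name} not found in {checkpoint_dir!r}; download the "
            "X-Adapter checkpoint and place it there before loading."
        )
    return path
```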
Please load the LoRA using the load_lora_weights method and unload it with the unload_lora_weights method on the pipeline.
I want to adapt the SD 2.1 plugin to SDXL.
I have trained a ControlNet on a custom dataset using this tutorial: https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README.md
But X-Adapter didn't produce the expected result.
I made some changes in inference_controlnet.py to handle the condition_type arg:
if args.condition_type == "canny":
    controlnet_path = args.controlnet_canny_path
    canny = CannyDetector()
elif args.condition_type == "depth":
    controlnet_path = args.controlnet_depth_path  # todo: not yet defined in args
    depth = MidasDetector.from_pretrained("lllyasviel/Annotators")
elif args.condition_type == "mask":
    controlnet_path = args.controlnet_mask_path
else:
    raise NotImplementedError("not implemented yet")

prompt = args.prompt
if args.prompt_sd1_5 is None:
    prompt_sd1_5 = prompt
else:
    prompt_sd1_5 = args.prompt_sd1_5
if args.negative_prompt is None:
    negative_prompt = ""
else:
    negative_prompt = args.negative_prompt

torch.set_grad_enabled(False)
torch.backends.cudnn.benchmark = True

# load controlnet
print(controlnet_path)
controlnet = ControlNetModel.from_pretrained(
    controlnet_path, torch_dtype=weight_dtype
)
print('successfully loaded controlnet')

input_image = Image.open(args.input_image_path)
# input_image = input_image.resize((512, 512), Image.LANCZOS)
input_image = input_image.resize((args.width_sd1_5, args.height_sd1_5), Image.LANCZOS)
if args.condition_type == "canny":
    control_image = canny(input_image)
    control_image.save(f'{args.save_path}/{prompt[:10]}_canny_condition.png')
elif args.condition_type == "depth":
    control_image = depth(input_image)
    control_image.save(f'{args.save_path}/{prompt[:10]}_depth_condition.png')
elif args.condition_type == "mask":
    control_image = input_image
    control_image.save(f'{args.save_path}/{prompt[:10]}_mask_condition.png')
The command:
python inference.py --plugin_type "controlnet" --prompt "a metal_nut with a bent" --condition_type "mask" --input_image_path ".mvtec/metal_nut/bent/source/179_triple.png" --controlnet_condition_scale_list 1.0 2.0 --adapter_guidance_start_list 1.00 --adapter_condition_scale_list 1.0 1.20 --height 1024 --width 1024 --height_sd1_5 512 --width_sd1_5 512
I am trying to implement X-Adapter in A1111 here: https://github.com/huchenlei/sd-webui-xadapter/. Since A1111 does not support running two denoising UNets side by side (and changing that would need more engineering effort than I'd like to commit), I decided to make the SD1.5 pass run before the SDXL pass. The hidden-state outputs of the last 3 decoder blocks are stored and then applied back onto the SDXL model via X-Adapter. The implementation can be found here: https://github.com/huchenlei/sd-webui-xadapter/blob/main/scripts/xadapter.py
I am omitting the SD1.5 base pass from T0 for simplicity. However, the result is not satisfactory. Am I missing something? Is the SD1.5 base pass mandatory?
where is it?
Dear [X-Adapter Development Team],
I hope this message finds you well. I am writing to express my appreciation for your work on the X-Adapter project, which I have found to be incredibly useful and well-crafted. Your efforts have undoubtedly contributed significantly to the community, and I am grateful for your dedication.
I am reaching out to kindly request a feature enhancement that I believe would add substantial value to X-Adapter. Specifically, I am interested in the ability to load .safetensors format files for both the SD1.5 and SDXL models from a local source. This feature would greatly benefit users like myself who work in environments where direct access to these models via the internet is restricted or prefer to use locally stored models for performance reasons.
The ability to specify local .safetensors files for model loading could enhance the flexibility and usability of X-Adapter, allowing for a wider range of applications and environments. I understand that implementing this feature may require significant effort and resources, but I believe it would be a valuable addition to the already impressive capabilities of X-Adapter.
I am more than willing to provide further details or clarification if needed. Additionally, I am open to contributing to this feature's development or testing, should you find it feasible to consider this request.
Thank you for considering my request, and once again, I would like to express my deepest appreciation for your work on the X-Adapter project. Your commitment to open-source development and the betterment of the community is truly inspiring.
Warm regards,
[katyukuki]
Code:
python inference.py --plugin_type "lora" --prompt "masterpiece, best quality, ultra detailed, 1 girl , solo, smile, looking at viewer, holding flowers" --prompt_sd1_5 "masterpiece, best quality, ultra detailed, 1 girl, solo, smile, looking at viewer, holding flowers, shuimobysim, wuchangshuo, bonian, zhenbanqiao, badashanren" --adapter_guidance_start_list 0.7 --adapter_condition_scale_list 1.00 --seed 3943946911
Error Message:
packages\diffusers\schedulers\scheduling_dpmsolver_multistep.py", line 649, in multistep_dpm_solver_second_order_update
self.sigmas[self.step_index + 1],
IndexError: index 51 is out of bounds for dimension 0 with size 51
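The IndexError suggests an off-by-one in the scheduler's step indexing: the second-order DPM-Solver update reads `sigmas[step_index + 1]`, so with a sigma table of length 51 the last usable step index is 49. A minimal plain-Python sketch of the constraint (the sigma values are placeholders; the real scheduler logic is more involved):

```python
# Stand-in sigma table, same length (51) as in the error message.
sigmas = [0.0] * 51

def next_sigma(step_index):
    # The second-order update looks one entry ahead, so step_index
    # must be at most len(sigmas) - 2; index 50 would read sigmas[51].
    return sigmas[step_index + 1]

last_valid_step = len(sigmas) - 2  # 49
```

In practice this kind of error has often been tied to a mismatch between the configured number of inference steps and the scheduler's timestep schedule, so checking those settings (or the diffusers version) is a reasonable first step.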
Any Automatic1111 Extension?
Or Forge Automatic1111?
Is it possible to directly convert a LoRA from SD 1.5 to SDXL (or the reverse) with X-Adapter once, instead of converting it during each image-generation run?
In the paper, the authors showed that X-Adapter can be directly applied to an SDXL checkpoint to denoise pure latent noise while applying a pluggable control condition (ControlNet, LoRA, etc.).
However, according to my testing, this is not true. If I remove the SD1.5 pass, the SDXL model with an SD1.5 ControlNet applied via the adapter does not produce images that follow the control constraint.
pipeline/pipeline_sd_xl_adapter_controlnet.py
I suspect the X-Adapter is not really doing its job of transferring control from the SD1.5 ControlNet to SDXL. If we start at T0 with the SD1.5 ControlNet already applied for some steps, then as long as the SDXL model does not make too many extra modifications, the result should not deviate much from the T0 result.
Here I compare the results between two scenarios:
Adding up_block_additional_residual = None
after https://github.com/kijai/ComfyUI-Diffusers-X-Adapter/blob/81b77e441c2dd5566acf5025fa9cadfb8a574490/pipeline/pipeline_sd_xl_adapter_controlnet.py#L1166
Essentially this performs generation with ControlNet using the SD1.5 model, then refines with the SDXL model.
I do not observe a clear improvement from the X-Adapter.
Incredible work on this research!
I noticed that InstructPix2Pix (ControlNet version) through X-Adapter looks like it understood the instruction much better than 1.5 (and produced a significantly better image). Is this generally true, or was that a cherry-picked example?
Do you have more examples comparing InstructPix2Pix through your adapter vs. base 1.5?
Does this work with ControlNets other than canny, depth, and tile? Or would I have to train the mapping layers for specific ones?
Hello, I'd like to know if we can put X-Adapter at the end of a 1.5 model so I can checkpoint-merge it with an XL model. How should I do this?
StableDiffusionXLAdapterControlnetPipeline is missing adapter_type (but the other two pipelines have it).
StableDiffusionXLAdapterControlnetPipeline and StableDiffusionXLAdapterControlnetI2IPipeline are both missing prompt_embeds_sd1_5 and negative_prompt_embeds_sd1_5 (but the regular pipeline has them).
None of them have negative_prompt_sd1_5 despite having prompt_sd1_5.
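Discrepancies like these can be checked mechanically by comparing each pipeline's `__call__` signature. A small sketch using the standard library (`DummyPipeline` stands in for the real pipeline classes, which are not importable here):

```python
import inspect

# List the keyword arguments a pipeline's __call__ actually accepts,
# so missing parameters can be diffed across pipeline classes.
def accepted_kwargs(pipeline_cls):
    return set(inspect.signature(pipeline_cls.__call__).parameters) - {"self"}

class DummyPipeline:
    # Illustrative signature only; the real pipelines take many more args.
    def __call__(self, prompt, prompt_sd1_5=None, adapter_type="adapter_xl"):
        pass

expected = {"prompt_embeds_sd1_5", "negative_prompt_embeds_sd1_5"}
missing = expected - accepted_kwargs(DummyPipeline)
```

Running `accepted_kwargs` over all three adapter pipeline classes would make the inconsistencies in this report easy to verify and keep in sync.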
Looks like this doesn't work on newer diffusers; can you update it? I'm not going to downgrade because I would lose newly added models (e.g. Stable Cascade).