
Text-Anchored Score Composition: Tackling Condition Misalignment in Text-to-Image Diffusion Models (ECCV2024)

Luozhou Wang*, Guibao Shen*, Wenhang Ge, Guangyong Chen, Yijun Li, Yingcong Chen**

HKUST(GZ), HKUST, ZJL, ZJU, Adobe.

*: Equal contribution. **: Corresponding author.

๐ŸŽ Abstract

Text-to-image diffusion models have advanced towards more controllable generation by supporting various additional conditions (e.g., depth map, bounding box) beyond text. However, these models are trained on the premise of perfect alignment between the text and the extra conditions. If this alignment is not satisfied, the final output may be dominated by one condition, or ambiguity may arise, failing to meet user expectations. To address this issue, we present a training-free approach called Text-Anchored Score Composition (TASC) to further improve the controllability of existing models when provided with partially aligned conditions. TASC first separates conditions based on pair relationships, computing the result individually for each pair; this ensures that no pair contains conflicting conditions. We then propose an attention realignment operation that realigns these independently computed results via a cross-attention mechanism, avoiding new conflicts when combining them back. Both qualitative and quantitative results demonstrate the effectiveness of our approach in handling unaligned conditions: it performs favorably against recent methods and, more importantly, adds flexibility to the controllable image generation process.

Illustration of our proposed TASC, showcasing its ability to handle misalignment between conditions in the controllable generation task.

Quick Start

Installation

Our code also relies on Hugging Face's diffusers library.

pip install diffusers

Prepare your inputs

To generate an image using our model, structure the input conditions as a JSON object:

{
    "text": {
        "caption": "A panda hails a taxi on the street with a red suitcase at its feet",
        "index": [10, 11, 12],
        "control_info": 10,
        "cfg": 7
    },
    "pose": {
        "index": [1, 2],
        "control_info": "resources/pose.png",
        "cfg": 5
    },
    "bbox": {
        "index": [4, 5],
        "control_info": [[0.1, 0.5, 0.6, 0.8]],
        "cfg": 4
    },
    "depth": {
        "index": [6, 7, 8],
        "control_info": "resources/depth.png",
        "cfg": 2
    }
}

Notes:

  • Text: Mandatory for generation. index specifies text tokens to enhance using our Confidence Focusing Operation and Concentration Refinement Operation, detailed in Sec 3.3 of our paper (see code). control_info acts as a multiplier for the attention values of these tokens, amplifying their visual prominence.

  • Image Conditions: For keys such as pose and depth, we utilize ControlNets which require a condition image. Here, control_info should be a path to the condition image. Ensure all images are loaded as PIL.Image objects prior to their integration into the pipeline.

  • Bounding Box (bbox): Implements control via a bounding box, in coordination with GLIGEN. The control_info for bbox should be formatted as [x,y,w,h], with each value ranging from 0 to 1, representing the coordinates and dimensions of the bounding box.

  • Configuration Weights (cfg): Each control signal is assigned a cfg value, acting as a weight in the final composition process.
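As an illustration, the rules in these notes can be sanity-checked before calling the pipeline. The helper below is hypothetical (not part of the released code); it only encodes the constraints stated above — `text` is mandatory, every entry carries an `index` list and a numeric `cfg`, and `bbox` boxes are [x, y, w, h] with values in [0, 1].

```python
def validate_conditions(conditions):
    """Sanity-check an input-conditions dict against the notes above.

    Hypothetical helper for illustration only; key names follow the
    JSON schema shown in "Prepare your inputs".
    """
    if "text" not in conditions:
        raise ValueError("a `text` entry is mandatory for generation")
    for name, cond in conditions.items():
        if not isinstance(cond.get("index"), list):
            raise ValueError(f"{name}: `index` must be a list of token positions")
        if not isinstance(cond.get("cfg"), (int, float)):
            raise ValueError(f"{name}: `cfg` must be a numeric weight")
        if name == "bbox":
            for box in cond["control_info"]:
                if len(box) != 4 or not all(0.0 <= v <= 1.0 for v in box):
                    raise ValueError(f"{name}: each box must be [x, y, w, h] in [0, 1]")
    return True
```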

Run

You can use our pipeline similarly to the StableDiffusionPipeline. Below is an example usage:

import torch
from PIL import Image
from diffusers import ControlNetModel, EulerAncestralDiscreteScheduler
from pipeline_tasc import *

device = torch.device("cuda")

# Load required ControlNet models
controlnet_dict = {
    'depth': ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth"),
    'pose': ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose"),
}

# Initialize and configure the pipeline
pipe = TASCPipeline.from_pretrained("masterful/gligen-1-4-generation-text-box").to(device)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.set_controlnet(controlnet_dict)

# Generate the output
output = pipe(
    inputs=data,  # the input-conditions dict from "Prepare your inputs" (image paths loaded as PIL.Image)
    negative_prompt='unnatural colors, bad proportions, worst quality',
    dr_scheduled_sampling_beta=0.5,
    generator=torch.Generator(device="cuda").manual_seed(20),
)
output.images[0].save('output.png')

Notes

  • ControlNet Integration: Load and organize the required ControlNets into a dictionary, then register them to the pipeline using pipe.set_controlnet(controlnet_dict).
  • Model Loading: The adapter modules for GLIGEN are integrated into the masterful/gligen-1-4-generation-text-box model, which can be directly loaded.
  • Parameter Setting: The dr_scheduled_sampling_beta parameter controls the influence range of our method. A recommended setting is 0.5.
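To make the role of the cfg weights concrete, here is a minimal numerical sketch of a weighted score composition in plain Python. It assumes a classifier-free-guidance-style combination, where each condition contributes cfg * (conditional - unconditional) on top of the unconditional estimate; the actual method additionally realigns the per-condition results via cross-attention, which this toy function does not model.

```python
def compose_scores(uncond, cond_scores, cfgs):
    """Toy weighted composition of per-condition score estimates.

    uncond      -- unconditional score estimate (list of floats)
    cond_scores -- {condition name: per-condition score estimate}
    cfgs        -- {condition name: cfg weight from the input JSON}
    Each condition adds cfg * (cond - uncond) to the unconditional
    estimate, mirroring classifier-free guidance (illustration only).
    """
    out = list(uncond)
    for name, score in cond_scores.items():
        w = cfgs[name]
        for i, (c, u) in enumerate(zip(score, uncond)):
            out[i] += w * (c - u)
    return out
```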

Alternatively, simply run the script in main.py, which should reproduce the images shown below:

Example outputs generated using our method, presented without any curation.

Todo

  • Release the inference code
  • Release the guidance documents
  • Release the gradio demo
  • Release the extensions for Stable Diffusion WebUI

๐Ÿ“ Citation

@misc{wang2023decompose,
      title={Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models}, 
      author={Luozhou Wang and Guibao Shen and Yijun Li and Ying-cong Chen},
      year={2023},
      eprint={2306.14408},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

This code builds on the diffusers library as well as the Prompt-to-Prompt codebase.
