
Stable DreamBooth

This is an implementation of DreamBooth based on Stable Diffusion.

Update

Results

Dreambooth results from the original paper: [results image]

The reproduced results: [results image]

Requirements

Hardware

  • A GPU with at least 30 GB of memory.
  • Training takes about 10 minutes on an A100 80G GPU with batch_size set to 4.

Environment Setup

Create a conda environment with pytorch>=1.11:

conda env create -f environment.yaml
conda activate stable-diffusion

Quick Start

python sample.py # Generate class samples.
python train.py # Finetune stable diffusion model.

The generation results are in logs/dog_finetune.

Finetune with your own data

1. Data Preparation

  1. Collect 3 to 5 images of an object and save them into the data/mydata/instance folder.
  2. Sample images of the same class as the specified object using sample.py (a sketch of the settings follows this list).
    1. Change the corresponding variables in sample.py: the prompt should be of the form "a {class}", and save_dir should point to data/mydata/class.
    2. Run the sample script.
    python sample.py
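
For reference, the edited settings in sample.py might look like the following. This is a minimal sketch: the actual variable names and defaults in sample.py may differ, and num_samples is a hypothetical parameter added only for illustration.

# Illustrative settings only -- check the real variable names in sample.py.
prompt = "a dog"                # "a {class}", matching your subject's class
save_dir = "data/mydata/class"  # generated class images are written here
num_samples = 200               # hypothetical: number of class images to generate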

2. Finetuning

  1. Change the TrainConfig in train.py (a sketch of possible fields follows this list).
  2. Start training.
    python train.py
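
The fields below are a hypothetical sketch of what a TrainConfig might contain; check train.py for the actual field names and defaults before editing.

from dataclasses import dataclass

# Hypothetical TrainConfig -- the real dataclass in train.py may differ.
@dataclass
class TrainConfig:
    instance_data_dir: str = "data/mydata/instance"  # your 3 to 5 subject images
    class_data_dir: str = "data/mydata/class"        # class images from sample.py
    instance_prompt: str = "photo of a [V] dog"      # prompt with the rare identifier
    class_prompt: str = "photo of a dog"             # plain class prompt
    batch_size: int = 4                              # 4 fits on an 80G A100
    learning_rate: float = 5e-6
    log_dir: str = "logs/mydata_finetune"            # checkpoints and samples go here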

3. Inference

python inference.py --prompt "photo of a [V] dog in a dog house" --checkpoint_dir logs/dogs_finetune

Generated images are in outputs by default.

Acknowledgement


stable-dreambooth's Issues

Does it preserve the identity of the subject?

The original textual inversion has trouble with that: it synthesizes mutations of the trained subjects unless you overfit, and then you can't edit style or composition much anymore.
Is this really like DreamBooth, in that it retains identity?

Train problem

I have a similar question: AttributeError: 'StableDiffusionPipeline' object has no attribute 'parameters' (diffusers 0.15.0). Can you help me solve it?
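
A likely cause: in recent diffusers releases, StableDiffusionPipeline is not an nn.Module, so it has no parameters() method. A hedged workaround, assuming the pipeline object is named model in train.py, is to build the optimizer from the UNet, the sub-module DreamBooth actually finetunes:

import torch

# Optimize the UNet's parameters instead of calling parameters() on the pipeline.
optimizer = torch.optim.AdamW(model.unet.parameters(), lr=5e-6)  # lr is illustrative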

error when calling mode() on the vae encoded images

Error is:

Traceback (most recent call last):
  File "train.py", line 206, in <module>
    train_loop(config, model, noise_scheduler, optimizer, train_dataloader)
  File "train.py", line 129, in train_loop
    latents = model.vae.encode(imgs).mode() * 0.18215
AttributeError: 'AutoencoderKLOutput' object has no attribute 'mode'

It seems related to the diffusers library not running on the GPU? I am in an environment with an A6000, though.
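
This looks like a diffusers API change rather than a GPU problem: newer versions wrap the encoder output in AutoencoderKLOutput, and the distribution with mode()/sample() lives under latent_dist. A hedged one-line fix, assuming a recent diffusers release:

# train.py, line 129 -- call mode() on the wrapped latent distribution.
latents = model.vae.encode(imgs).latent_dist.mode() * 0.18215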

Unable to run the code on an RTX8000, out of memory

Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 47.46 GiB total capacity; 44.29 GiB already allocated; 862.56 MiB free; 45.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
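
Two common mitigations, not verified against this repository, are lowering batch_size in TrainConfig and enabling the memory-saving switches that diffusers exposes on the pipeline and its UNet (assuming the pipeline object is named model):

# Possible memory savings -- assumes a standard diffusers StableDiffusionPipeline.
model.unet.enable_gradient_checkpointing()  # trade extra compute for lower memory
model.enable_attention_slicing()            # compute attention in slices
# and/or reduce batch_size (e.g. to 1 or 2) in TrainConfig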

Error while running train.py

File "train.py", line 206, in
train_loop(config, model, noise_scheduler, optimizer, train_dataloader)
File "train.py", line 131, in train_loop
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps.cpu().numpy())
File "/HPS/EgofaceTrial/work/anaconda3/envs/stable-diffusion/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py", line 303, in add_noise
timesteps = timesteps.to(original_samples.device)
AttributeError: 'numpy.ndarray' object has no attribute 'to'
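
The scheduler in this diffusers version expects timesteps as a torch tensor (it calls .to(device) on it), so converting to a NumPy array breaks add_noise. A hedged fix is to pass the tensor directly:

# train.py, line 131 -- keep timesteps as a torch tensor.
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)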

Dataset

It might be helpful to explain which images are needed to train a new model.

The partial example with some images in data/dogs/instance is more confusing than helpful. Would it be possible for you to include an example training dataset?

Error in sample.py

for text in datasets:
    with torch.no_grad():
        images = model(text, height=512, width=512, num_inference_steps=50)["sample"]

The "sample" key raises an error in the conda environment.
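
In newer diffusers releases the pipeline call returns a StableDiffusionPipelineOutput whose images are under the images attribute, not a dict with a "sample" key. A hedged rewrite of the loop for those versions:

import torch

# Newer diffusers pipelines return an object with .images instead of ["sample"].
for text in datasets:
    with torch.no_grad():
        images = model(text, height=512, width=512, num_inference_steps=50).images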
