
diffuzers

A web UI and deployable API for πŸ€— diffusers.

< under development; request features via issues; PRs not accepted at the moment >


If something doesn't work as expected, or if you need a feature that isn't available yet, please create a request using GitHub issues.

Features available in the app:

  • text to image
  • image to image
  • instruct pix2pix
  • textual inversion
  • inpainting
  • outpainting (coming soon)
  • image info
  • stable diffusion upscaler
  • gfpgan
  • clip interrogator
  • more coming soon!

Features available in the API:

  • text to image
  • image to image
  • instruct pix2pix
  • textual inversion
  • inpainting
  • outpainting (via inpainting)
  • more coming soon!

Installation

To install the bleeding-edge version of diffuzers, clone the repo and install it using pip:

git clone https://github.com/abhishekkrthakur/diffuzers
cd diffuzers
pip install -e .

Installation using pip:

pip install diffuzers

Usage

Web App

To run the web app, run the following command:

diffuzers app

API

To run the API, run the following command:

diffuzers api

Starting the API requires the following environment variables:

export X2IMG_MODEL=stabilityai/stable-diffusion-2-1
export DEVICE=cuda

If you want to use inpainting:

export INPAINTING_MODEL=stabilityai/stable-diffusion-2-inpainting

To use long prompt weighting, use:

export PIPELINE=lpw_stable_diffusion

If OUTPUT_PATH is set in your environment variables, all generations will be saved to OUTPUT_PATH. You can also use other (or private) Hugging Face models. To use private models, you must log in using huggingface-cli login.

API docs are available at host:port/docs. For example, with the default settings, you can access the docs at http://127.0.0.1:10000/docs.
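Putting the pieces above together, a minimal launch script might look like this (a sketch only; the model names, device, and port are the defaults mentioned above, and the output directory is just an illustration):

```shell
# Minimal API quickstart, assuming the defaults described above.
export X2IMG_MODEL=stabilityai/stable-diffusion-2-1
export DEVICE=cuda
# Optional: every generation will be saved under this path.
export OUTPUT_PATH=./generations

diffuzers api --host 127.0.0.1 --port 10000

# Once the server is up, the interactive docs are served at:
# http://127.0.0.1:10000/docs
```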

All CLI Options for running the app:

❯ diffuzers app --help
usage: diffuzers <command> [<args>] app [-h] [--output OUTPUT] [--share] [--port PORT] [--host HOST]
                                        [--device DEVICE] [--ngrok_key NGROK_KEY]

✨ Run diffuzers app

optional arguments:
  -h, --help            show this help message and exit
  --output OUTPUT       Output path is optional, but if provided, all generations will automatically be saved to this
                        path.
  --share               Share the app
  --port PORT           Port to run the app on
  --host HOST           Host to run the app on
  --device DEVICE       Device to use, e.g. cpu, cuda, cuda:0, mps (for m1 mac) etc.
  --ngrok_key NGROK_KEY
                        Ngrok key to use for sharing the app. Only required if you want to share the app

All CLI Options for running the api:

❯ diffuzers api --help
usage: diffuzers <command> [<args>] api [-h] [--output OUTPUT] [--port PORT] [--host HOST] [--device DEVICE]
                                        [--workers WORKERS]

✨ Run diffuzers api

optional arguments:
  -h, --help         show this help message and exit
  --output OUTPUT    Output path is optional, but if provided, all generations will automatically be saved to this
                     path.
  --port PORT        Port to run the app on
  --host HOST        Host to run the app on
  --device DEVICE    Device to use, e.g. cpu, cuda, cuda:0, mps (for m1 mac) etc.
  --workers WORKERS  Number of workers to use

Using private models from huggingface hub

If you want to use private models from the Hugging Face Hub, you need to log in first using the huggingface-cli login command.

Note: You can also save your generations directly to the Hugging Face Hub if your output path points to a Hub dataset repo that you have push access to. This way, you save a lot of local disk space.
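For example (a sketch only; the output directory name is illustrative, and the `--output` flag is the one documented in the CLI help above):

```shell
# Log in once so private Hub models can be downloaded (and so generations
# can be pushed if the output path points at a Hub dataset repo).
huggingface-cli login

# Run the app; --output is optional but enables auto-saving of generations.
diffuzers app --device cuda --output ./my-generations
```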

diffuzers's People

Contributors

abhishekkrthakur, j3rr1ck

diffuzers's Issues

Load local models

There doesn't seem to be a way to load local diffusers models, is there?
I'd like to load my locally fine-tuned models without having to push them to Hugging Face.

colab issue

import subprocess
import os
import time

os.makedirs("images", exist_ok=True)

# Run the diffuzers app
diffuzers_process = subprocess.Popen(
    ["diffuzers", "app", "--host", "localhost", "--port", "8001",
     "--device", "cuda", "--output", "/content/images"],
    stdout=subprocess.PIPE,
)

time.sleep(30)

# Run localtunnel to expose the app to the public
lt_process = subprocess.Popen(
    ["lt", "--port", "8001", "--subdomain", "myapp"],
    stdout=subprocess.PIPE,
)
# lt_process = subprocess.Popen(["ngrok", "http", "8501"], stdout=subprocess.PIPE)

# Wait for the localtunnel URL to be generated
lt_url = None
while lt_url is None:
    line = lt_process.stdout.readline()
    if "https" in line.decode("utf-8"):
        lt_url = line.decode("utf-8").strip()

# Print the localtunnel URL
print("LocalTunnel URL:", lt_url)

# Wait for user input to stop the processes
input("Press enter to stop the processes...")

# Terminate the processes
lt_process.terminate()
diffuzers_process.terminate()

I want to use this without an ngrok key or anything else, so I wrote the above. It does not always work: sometimes it says connection error, and when it does work and I reload the model or the page, a session state error appears (https://docs.streamlit.io/library/api-reference/session-state).

"UnicodeDecodeError: 'charmap' codec can't decode..." setup.py

Error message: "subprocess-exited-with-error"

The error message "subprocess-exited-with-error" indicates that a subprocess being run by pip has failed. Specifically, in this case, it seems to be related to an issue with the metadata generation of a Python package.

The error message further explains that when attempting to read the long_description element in the setup.py file, a UnicodeDecodeError occurred. This usually happens when the character encoding is not specified correctly.

To resolve this issue, specify the character encoding explicitly when reading the long_description in setup.py. For instance, if the file is encoded as UTF-8, you can use the following code:

with open("README.md", encoding="utf-8") as f:
    long_description = f.read()
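As a self-contained illustration of the same fix (the helper name is hypothetical, not part of diffuzers' actual setup.py):

```python
# Hypothetical helper illustrating the fix: always pass an explicit
# encoding instead of relying on the platform default (cp1252 on
# Windows), which is what raises UnicodeDecodeError during pip's
# metadata generation.
def read_long_description(path="README.md"):
    with open(path, encoding="utf-8") as f:
        return f.read()
```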

ModuleNotFoundError: No module named 'altair.vegalite.v4'

Using Python 3.10 on Windows and Ubuntu 22.04, I get the error:

ModuleNotFoundError: No module named 'altair.vegalite.v4'

I was able to resolve this by manually running

pip install altair==4.2.2

This should probably be added to the requirements.txt?

Host with GPU access

Thank you for making it public, it's pretty awesome.
I have a question though: do you have any suggestions for a host where I can get access to a GPU?
Thanks,
Ai

Doesn't run

from diffuzers.inpainting import Inpainting

  File "/home/ubuntu/diffuzers/./diffuzers/inpainting.py", line 9, in <module>
    import streamlit as st
  File "/home/ubuntu/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 55, in <module>
    from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator
  File "/home/ubuntu/.local/lib/python3.8/site-packages/streamlit/delta_generator.py", line 45, in <module>
    from streamlit.elements.arrow_altair import ArrowAltairMixin
  File "/home/ubuntu/.local/lib/python3.8/site-packages/streamlit/elements/arrow_altair.py", line 36, in <module>
    from altair.vegalite.v4.api import Chart
ModuleNotFoundError: No module named 'altair.vegalite.v4'

macOS M1 build error

While running on macOS M1 I get

ERROR: Command errored out with exit status 1: /Users/loreto/opt/miniconda3/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/dr/lzd3czf922qcgj0xpdyg3dc40000gn/T/pip-install-3lhdxoh9/pycryptodome_3d5818659ba54c90872d4993a0d9e451/setup.py'"'"'; __file__='"'"'/private/var/folders/dr/lzd3czf922qcgj0xpdyg3dc40000gn/T/pip-install-3lhdxoh9/pycryptodome_3d5818659ba54c90872d4993a0d9e451/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/dr/lzd3czf922qcgj0xpdyg3dc40000gn/T/pip-record-4rdpmt_6/install-record.txt --single-version-externally-managed --compile --install-headers /Users/loreto/opt/miniconda3/include/python3.9/pycryptodome Check the logs for full command output.

That seems to be connected to this issue:
Nitrokey/pynitrokey#226

I have tried to update/install Xcode additional components - as suggested here

/Applications/Xcode.app/Contents/MacOS/Xcode -installComponents

with no success so far.

Colab doesn't seem to be working

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/usr/local/lib/python3.10/dist-packages/diffuzers/Home.py", line 7, in <module>
    from diffuzers.x2image import X2Image
  File "/usr/local/lib/python3.10/dist-packages/diffuzers/x2image.py", line 14, in <module>
    from diffusers import (
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/import_utils.py", line 675, in __getattr__
    value = getattr(module, name)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/import_utils.py", line 675, in __getattr__
    value = getattr(module, name)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/import_utils.py", line 674, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/import_utils.py", line 686, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.alt_diffusion.pipeline_alt_diffusion_img2img because of the following error (look up to see its traceback):
cannot import name 'CpuOffload' from 'accelerate.hooks' (/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py)


pix2pix

Hey all, just checking if pix2pix is supposed to be working, as I'm getting some odd errors:
ValueError: Incorrect configuration settings! The config of pipeline.unet: FrozenDict([('sample_size', 64), ('in_channels', 4), ('out_channels', 4), ('center_input_sample', False), ('flip_sin_to_cos', True), ('freq_shift', 0), ('down_block_types', ['CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D']), ('mid_block_type', 'UNetMidBlock2DCrossAttn'), ('up_block_types', ['UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D']), ('only_cross_attention', False), ('block_out_channels', [320, 640, 1280, 1280]), ('layers_per_block', 2), ('downsample_padding', 1), ('mid_block_scale_factor', 1), ('act_fn', 'silu'), ('norm_num_groups', 32), ('norm_eps', 1e-05), ('cross_attention_dim', 1024), ('attention_head_dim', [5, 10, 20, 20]), ('dual_cross_attention', False), ('use_linear_projection', True), ('class_embed_type', None), ('num_class_embeds', None), ('upcast_attention', False), ('resnet_time_scale_shift', 'default'), ('_class_name', 'UNet2DConditionModel'), ('_diffusers_version', '0.8.0'), ('_name_or_path', '/home/jerrick/.cache/huggingface/diffusers/models--stabilityai--stable-diffusion-2-base/snapshots/d28fc8045793886e512c5389771d3b3d560f9575/unet')]) expects 4 but received num_channels_latents: 4 + num_channels_image: 4 = 8. Please verify the config of pipeline.unet or your image input.

Processes stuck when press multiple generate times

Hi,
When I built a basic Streamlit app like yours and clicked a GENERATE button to start generating images, clicking the button multiple times spawned a number of processes running in parallel, filling GPU memory until everything got stuck. With your app this problem does not happen: every process is spawned and waits in a queue. I have read your code, but I don't see how to fix mine. Can you help me?
Thanks.
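One common pattern for this (a sketch only, not necessarily the exact mechanism diffuzers uses) is to funnel every generate request through a single worker thread, so that repeated clicks queue up instead of spawning parallel GPU jobs:

```python
import queue
import threading

job_queue = queue.Queue()
results = []

def worker():
    # Jobs execute strictly one at a time, in submission order.
    while True:
        job = job_queue.get()
        if job is None:  # sentinel: shut the worker down
            break
        results.append(job())
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Simulate three rapid "GENERATE" clicks; each job runs serially,
# so the GPU would only ever see one generation at a time.
for i in range(3):
    job_queue.put(lambda i=i: f"image-{i}")

job_queue.join()   # block until all queued jobs have finished
job_queue.put(None)
```

In a Streamlit app, the queue and worker would need to live in a shared, cached resource so that reruns reuse the same worker instead of creating new ones.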

AttributeError: 'NoneType' object has no attribute 'to'

Ran in a venv, followed the instructions to run diffuzers app.

Error

python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script

Hardware
MacBook Pro (13-inch, 2020)
16 GB 3733 MHz LPDDR4X
Intel Iris Plus Graphics 1536 MB
2 GHz Quad-Core Intel Core i5

Conda env doesn't seem to be respected

Each time I run it from a new conda env, it says that the dependencies/modules that were installed are not found.

It is fixed when I move out of the env and do a pip install -r requirements.txt (not ideal, as I have other conflicting apps).

The error happens when I run diffuzers api.

Web API Link is not Working

Hi, Abhishek!
Thanks for the amazing work. I was just trying it out and found that the API has some issues (or I am using it incorrectly, I don't know). Screenshots are attached for your perusal; have a look at them, thanks!
The commands used to run the API (am I using these environment variables correctly?):
(screenshot)

The link result:
(screenshot)

And also, could you please update the diffuzers.ipynb file as well?

Thanks again!

[future] support strict-origin-when-cross-origin

ngrok is sometimes very slow, so I used other solutions. For that I needed to edit cli/run_app.py at L105:
... "--server.enableCORS", "false", "--server.enableXsrfProtection", "false", "--server.enableWebsocketCompression", "false",
...
But hard-coding isn't a good solution either, so maybe this could be handled in the configuration, e.g. by adding an argument:
diffuzers app ..... --cors http://xxx.xx.xx

Thanks.

Encountered trouble using diffuzers API

I ran this command on a Linux server:
diffuzers api --port 8000 --device cuda:6

It shows:
(screenshot)

What I can see in the browser is:
(screenshot)

Does anyone know why this is?

Intended audience - everyone?

Can you simplify diffuzers so it meets the intended audience of "everyone"? Or at least spin off a new version, built for everyone, from the ground up?
