
deforumstablediffusionlocal's Introduction


Deforum Stable Diffusion Local Version

Local version of Deforum Stable Diffusion V0.7, supports txt settings file input and animation features!

🚩 Updates

  • ✅ 1/12/2023: Added support for Deforum V0.7!!

[Example images]

👇Animated Video👇

[Example video]

I made this quick local Windows version, mostly based on the Colab code by deforum, which supports very cool turbo-mode animation output for Stable Diffusion!

As an artist and Unity game designer, I'm not very familiar with Python, so let me know if there's anything that could be improved in this project!

It's tested and working on Windows 10 with RTX 2080 SUPER and RTX 3090 GPUs (it somehow runs much faster on my local 3090 than on Colab..). I haven't tested it on Mac, though.

Installation

You can use an Anaconda environment to host this local project:

conda create --name dsd python=3.10.6 -y
conda activate dsd

Then cd into the cloned folder, run the setup script, and wait ≈5 minutes for it to finish:

python setup.py

Manually download 3 Model Files

You need to put these 3 model files in the ./models folder:

v1-5-pruned-emaonly.ckpt, which can be downloaded from HuggingFace.

dpt_large-midas-2f21e586.pt, the download link is here

AdaBins_nyu.pt, the download link is here
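If you want a quick sanity check that the three files above landed in the right place, a tiny script like this (a sketch; it only checks that the files exist and prints their sizes, it does not verify hashes) can help:

    # check_models.py -- sketch: confirm the three model files are in ./models
    import os

    expected = [
        "v1-5-pruned-emaonly.ckpt",
        "dpt_large-midas-2f21e586.pt",
        "AdaBins_nyu.pt",
    ]

    for name in expected:
        path = os.path.join("./models", name)
        if os.path.isfile(path):
            print(f"OK       {name} ({os.path.getsize(path) / 1e6:.0f} MB)")
        else:
            print(f"MISSING  {name} -- download it into ./models")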

How to use it?

After installation, you can try out this command to see if the code is working. The command should look like this:

python run.py --enable_animation_mode --settings "./runSettings_Template.txt" --model "v1-5-pruned-emaonly.ckpt"

You can also load customized Dreambooth models by putting your model file in the ./models folder and replacing the --model parameter with the new model's name.
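For example, if your Dreambooth checkpoint were saved as myStyle.ckpt (a hypothetical file name) in ./models, the command would become:

python run.py --enable_animation_mode --settings "./runSettings_Template.txt" --model "myStyle.ckpt"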

Here are some extra examples for your reference. You can easily create your own txt file based on these templates!

    1. To generate still images:
python run.py --settings "./examples/runSettings_StillImages.txt"
    2. For the animation feature, you need to add --enable_animation_mode to enable the animation settings in the text file:
python run.py --enable_animation_mode --settings "./examples/runSettings_Animation.txt"
    3. For the mask feature:
python run.py --settings "./examples/runSettings_Mask.txt"
    4. For the new features of Deforum V0.7:
python run.py --enable_animation_mode --settings "./examples/runSettings_AnimationExtra.txt"

[Example: My Original Painting on ArtStation]

The output results will be available in the ./output folder.

All the variables & prompts needed for Deforum Stable Diffusion are set in the txt file (you can refer to the Colab page for the definition of each variable), and you can keep multiple settings files for different tasks. There is a template file called runSettings_Template.txt, and you can create your own txt settings files as well.
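As a rough illustration, the settings txt is plain JSON. A minimal still-image file might look like the sketch below (the key names follow the bundled templates and the Colab; start from runSettings_Template.txt for the full list rather than from this):

    {
        "W": 512,
        "H": 512,
        "seed": -1,
        "steps": 50,
        "prompts": [
            "a beautiful forest by Asher Brown Durand, trending on Artstation"
        ]
    }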

For the usage of all the parameters, you can follow this doc.

That's it!

deforumstablediffusionlocal's People

Contributors

dgspitzer, helixngc7293


deforumstablediffusionlocal's Issues

Input video mask not working

Hello! I'm not sure if you're still working on this repo :P but I thought I would ask anyway.

I can't seem to get video masking working.

my JSON is like this:
"video_init_path":"./input/INPUT_Kelp_Spiral.mp4",
"extract_nth_frame":1,
"overwrite_extracted_frames": true,
"use_mask_video": true,
"video_mask_path": "./input/masks/MASK_Gradient_302_INVERT.mp4",

the input video gets converted to the output folder, but the video mask does not.

No errors or console prints indicate that video masking has failed or even been initialised.

Auto1111's version still has some bugs with video_init, so I'm still using this repo!! Thanks for putting it together :)

Trouble installing

[screenshot]

this is the error I received:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
daal4py 2021.5.0 requires daal==2021.4.0, which is not installed.
scipy 1.7.3 requires numpy<1.23.0,>=1.16.5, but you have numpy 1.23.4 which is incompatible.
numba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.23.4 which is incompatible.
Cloning into 'k-diffusion'...
remote: Enumerating objects: 427, done.
remote: Counting objects: 100% (146/146), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 427 (delta …), reused 124 (delta 124), pack-reused 281
Receiving objects: 100% (427/427), 76.34 KiB | 473.00 KiB/s, done.
Resolving deltas: 100% (286/286), done.

Environment set up in 21847 seconds

(base) C:\Users\mayab\DeforumStableDiffusionLocal-main>python run.py --enable_animation_mode --settings "./runSettings_Template.txt"
Local Path Variables:

models_path: ./models
output_path: ./output
Traceback (most recent call last):
File "C:\Users\mayab\DeforumStableDiffusionLocal-main\run.py", line 1312, in
main()
File "C:\Users\mayab\DeforumStableDiffusionLocal-main\run.py", line 79, in main
import cv2
ModuleNotFoundError: No module named 'cv2'

(base) C:\Users\mayab\DeforumStableDiffusionLocal-main>

free variable 'device' referenced before assignment in enclosing scope

Hello, sorry to bother you with this.
I can't make it run on my PC:

(ldm) C:\aivids\DeforumStableDiffusionLocal-main>python run.py --enable_animation_mode --settings "./examples/runSettings_Animation.txt"
Local Path Variables:

models_path: ./models
output_path: ./output
Using config: ./stable-diffusion/configs/stable-diffusion/v1-inference.yaml
Please download model checkpoint and place in ./models\sd-v1-4.ckpt
Saving animation frames to ./output\2022-09\Example_DGSpitzer
Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 1250, in main
render_animation(args, anim_args)
File "run.py", line 953, in render_animation
depth_model = DepthModel(device)
NameError: free variable 'device' referenced before assignment in enclosing scope <===== THIS

The same is happening in Colab:

NameError Traceback (most recent call last)
in
480 # dispatch to appropriate renderer
481 if anim_args.animation_mode == '2D' or anim_args.animation_mode == '3D':
--> 482 render_animation(args, anim_args)
483 elif anim_args.animation_mode == 'Video Input':
484 render_input_video(args, anim_args)

in render_animation(args, anim_args)
185 predict_depths = (anim_args.animation_mode == '3D' and anim_args.use_depth_warping) or anim_args.save_depth_maps
186 if predict_depths:
--> 187 depth_model = DepthModel(device)
188 depth_model.load_midas(models_path)
189 if anim_args.midas_weight < 1.0:

NameError: name 'device' is not defined <=====THIS

I implemented 16 bit per channel depth maps (png16)!!!!

This change is really great for composing.

I don't know how to make a branch or how pull requests work, so I will describe the few changes needed here.

In the dsd environment (anaconda 3) one should write: pip install numpngw (or modify setup.py to pip-install numpngw).

numpngw is a library that allows writing PNG16.

Then, being careful not to break the indentation, in stable-diffusion\helpers\depth.py one must replace the last three lines with:

    # requires "from numpngw import write_png" at the top of depth.py
    png_bit_depth = 16  # 8 will write an 8bpc PNG, 16 will write a 16bpc PNG
    if png_bit_depth == 8:
        temp = rearrange((depth - self.depth_min) / denom * 255, 'c h w -> h w c')
        temp = repeat(temp, 'h w 1 -> h w c', c=3)
        Image.fromarray(temp.astype(np.uint8)).save(filename)
    else:
        # note: 255 * 255 = 65025, slightly short of the full 16-bit range (65535)
        temp16 = rearrange((depth - self.depth_min) / denom * 255 * 255, 'c h w -> h w c')
        temp16 = repeat(temp16, 'h w 1 -> h w c', c=3)
        write_png(filename, temp16.astype(np.uint16))

Ideally, one should set the bit depth in the runSettings text file, but I'm in a hurry right now (and I never really need the PNG8 version, so this change suffices for me).
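If someone later wants to wire the bit depth into the settings file as suggested, a sketch (assuming a hypothetical "png_bit_depth" key is added to the txt settings and threaded through to the depth-saving code) could be as small as:

    # sketch: "png_bit_depth" is a hypothetical settings key, not in the templates
    png_bit_depth = int(loaded_args.get("png_bit_depth", 16))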

Decoder json

Hello everyone,
I have an issue with animation mode (image mode works perfectly). I tested the sample examples and get this error:

Local Path Variables:

models_path: ./models
output_path: ./output
Traceback (most recent call last):
  File "run.py", line 1312, in <module>
    main()
  File "run.py", line 125, in main
    master_args = load_args(opt.settings)
  File "run.py", line 122, in load_args
    loaded_args = json.load(f) #, ensure_ascii=False, indent=4)
  File "C:\Conda\envs\dsd\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Conda\envs\dsd\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Conda\envs\dsd\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Conda\envs\dsd\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 22 column 2 (char 473)

Maybe I need to install something more?
Thanks in advance.

Hello, what can I do?

Hello, what can I do with this??

RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 5.81 GiB total capacity; 4.95 GiB already allocated; 58.44 MiB free; 5.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Translation values - not able to use equations

Hi,

First, excuse my English.

In the Colab version one can make movements dynamic by using mathematical functions to create wave patterns, e.g. sin, cos, tan.

Example:

"translation_x": "0:(10sin(23.14*t/800))",

I get an error that it cannot convert the string to a float.

I have tried converting to the Python math library, but still no good.

Are you able to add math translation as a feature, as per the Colab version?

Much thanks,
Joyride
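For context, as far as I can tell the Colab evaluates the value part of each "frame:(expression)" keyframe as a Python/numexpr-style expression with the frame index t bound. A minimal sketch of that idea in plain Python (not this repo's actual parser):

    # sketch of math-keyframe evaluation, not the repo's actual code
    import math

    def eval_keyframe(expr, t):
        # expose sin/cos/tan etc. plus the frame index t to the expression
        names = {n: getattr(math, n) for n in dir(math) if not n.startswith("_")}
        names["t"] = t
        return float(eval(expr, {"__builtins__": None}, names))

    # the expression from this issue, with explicit * operators:
    print(eval_keyframe("10*sin(2*3.14*t/800)", t=100))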

Help . ImportError: DLL load failed while importing cv2:

I used Deforum Stable Diffusion local for two weeks, but now I have this error.

Local Path Variables:

models_path: ./models
output_path: ./output
Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 79, in main
import cv2
ImportError: DLL load failed while importing cv2: No se puede encontrar el módulo especificado.

What about this?

thank you

ModuleNotFoundError: No module named 'helpers'

I got an error when I run python run.py --settings "./examples/runSettings_StillImages.txt"

models_path: ./models
output_path: ./output
Traceback (most recent call last):
  File "run.py", line 1312, in <module>
    main()
  File "run.py", line 112, in main
    from helpers import DepthModel, sampler_fn
ModuleNotFoundError: No module named 'helpers'

Do you have an idea? Thanks

Broken Pipe on generating large-ish animation: AttributeError: 'tqdm' object has no attribute 'last_print_t'

Eventually the Python script just stops, but when I ctrl + c out of it, the following is printed:

File "/home/baaleos/deforum/DeforumStableDiffusionLocal/run.py", line 1312, in <module> main() File "/home/baaleos/deforum/DeforumStableDiffusionLocal/run.py", line 1250, in main render_animation(args, anim_args) File "/home/baaleos/deforum/DeforumStableDiffusionLocal/run.py", line 1071, in render_animation sample, image = generate(args, return_latent=False, return_sample=True) File "/home/baaleos/deforum/DeforumStableDiffusionLocal/run.py", line 457, in generate samples = sampler_fn( File "stable-diffusion/helpers/k_samplers.py", line 65, in sampler_fn samples = sampler_map[args.sampler](**sampler_args) File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "k-diffusion/k_diffusion/sampling.py", line 186, in sample_lms for i in trange(len(sigmas) - 1, disable=disable): File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/tqdm/std.py", line 1541, in trange return tqdm(_range(*args), **kwargs) File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/tqdm/std.py", line 1107, in __init__ self.sp = self.status_printer(self.fp) File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/tqdm/std.py", line 340, in status_printer getattr(sys.stdout, 'flush', lambda: None)() KeyboardInterrupt Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> BrokenPipeError: [Errno 32] Broken pipe (ldm) baaleos@ragnarokbuild:~/Aeoria/ArtService/ArtService$ Exception ignored in: <function tqdm.__del__ at 0x7f226f55d310> Traceback (most recent call last): File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/tqdm/std.py", line 1162, in __del__ self.close() File "/home/baaleos/miniconda3/envs/ldm/lib/python3.8/site-packages/tqdm/std.py", line 1291, in close if self.last_print_t < self.start_t + self.delay: AttributeError: 'tqdm' object has no attribute 'last_print_t'

In this particular run, it got as far as frame 428 out of a requested 720.

7 prompts were requested, with roughly 120 frames for each prompt.

Googling found a similar attribute error issue raised here:
tqdm/tqdm#261

Stable Diffusion used with Jupyter Lab over a cluster / Problem: no more pictures, just a settings file

Hello :)

I'm using Stable Diffusion with Jupyter Lab over a cluster in Trier.
I'm working with my own init images to create edited images.

This is the Error:

Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 1256, in main
render_image_batch(args)
File "run.py", line 884, in render_image_batch
results = generate(args)
File "run.py", line 392, in generate
init_image, mask_image = load_img(args.init_image,
File "run.py", line 183, in load_img
image = image.convert('RGB')
File "/home/jovyan/.conda_envs/dsd/lib/python3.8/site-packages/PIL/Image.py", line 901, in convert
self.load()
File "/home/jovyan/.conda_envs/dsd/lib/python3.8/site-packages/PIL/ImageFile.py", line 251, in load
raise OSError(
OSError: image file is truncated (5 bytes not processed)
(/home/jovyan/.conda_envs/dsd) jovyan@jupyter-ws-5f03:~/DeforumStableDiffusionLocal$

Thanks
Alina

Runtime Error: CUDA out of memory

I've tried adjusting the resolution via the fields in the runSettings_Animation.txt file, but even going down to 64x64 doesn't resolve the error. Most of the Google search results pertain to pytorch training so I'm not so sure how relevant those answers are for a pretrained model (e.g. I don't see any way to decrease batch size).

Is there something I'm missing? I'm kind of a noob at machine learning in general. But even reading the run.py script, I cannot determine why GPU memory that is unused (by any program) won't be used by dsd (it says 5.16 GB is allocated, 0 bytes free, 5.30 GB reserved by PyTorch).

Is there a flag I can add to the CLI command to make sure it uses memory appropriately? In what file do I set max_split_size_mb?
(I'm running on RTX 2060 6GB, i5-9600k, 16 GB RAM)
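For what it's worth, max_split_size_mb isn't set in a file: PyTorch reads it from the PYTORCH_CUDA_ALLOC_CONF environment variable, so you can set it in the same console before launching, e.g. on Windows:

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python run.py --settings "./runSettings_Template.txt" --model "v1-5-pruned-emaonly.ckpt"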

Run Settings Documentation

Where can I find documentation on what each of the run settings does? Many of the options are obvious or at least semi-obvious, but many of them are also rather ambiguous. Is there documentation for these somewhere, or a parent repo that may describe them?

"animation_mode":"Interpolation" doesn't work when "interpolate_key_frames":true

Steps to reproduce:
1. Open runSettings_Template.txt
2. Set "animation_mode":"Interpolation"
3. Make sure "interpolate_key_frames":true

Then run python run.py --enable_animation_mode --settings "./runSettings_Template.txt"

And after rendering 3 frames, you get an error:

Global seed set to 3521683325
100%|████████████████████████████████████████████| 5/5 [00:03<00:00, 1.32it/s]
<PIL.Image.Image image mode=RGB size=512x512 at 0x16803F5E0>
Interpolation start...
Traceback (most recent call last):
File "run.py", line 1299, in
main()
File "run.py", line 1241, in main
render_interpolation(args, anim_args)
File "run.py", line 1150, in render_interpolation
dist_frames = list(animation_prompts.items())[i+1][0] - list(animation_promp
ts.items())[i][0]
TypeError: unsupported operand type(s) for -: 'str' and 'str'

Frames per second?

How is frames per second set or determined? The examples I have run seem to be 12 fps, but I don't know where that is set.
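If the local version follows the Colab settings (an assumption; the webui argument dump elsewhere in these issues shows an fps value with a default of 12.0), the frame rate should be controllable from the txt settings file with a line like:

    "fps": 24,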

no module named 'ldm.models.diffusion.sampling_util'

I've tried installing this thing multiple times now. I got all the other issues fixed via a clean install, but the last two times I've tried installing, I've gotten this error after trying to run:

No module named 'ldm.models.diffusion.sampling_util'

I don't know how to fix this. I've run into so many issues with this repo it's ridiculous. This is the only Stable Diffusion video tool I've found that can be installed locally, and I desperately want it to work. Please, someone help.

No module named 'cleanfid'

It works, but for me only after installing "cleanfid", which is used by ".\k-diffusion\k_diffusion\evaluation.py". I set up more than one environment and always got that error.

pip install clean-fid

Turning off use_depth_warping in 3d mode results in TypeError: must be real number, not NoneType

Hey friends.

I unticked use_depth_warping and then got an error. The values are the defaults; all that was changed was the prompt name, I switched to 3D mode, and I unticked use_depth_warping. Turning it on again, the video gets created again.

Both automatic1111 and the DeforumStableDiffusionLocal repo are up to date.

`venv "E:\automattic111\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
Commit hash: 4b3c5bc24bffdf429c463a465763b3077fe55eb8
Installing requirements for Web UI
Launching Web UI with arguments: --disable-safe-unpickle
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [81761151] from E:\automattic111\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Deforum script for 2D, pseudo-2D and 3D animations
v0.5-webui-beta
Additional models path: E:\automattic111\stable-diffusion-webui\models/Deforum
Saving animation frames to E:\automattic111\stable-diffusion-webui\outputs/img2img-images\Deforum
Rendering animation frame 0 of 120
a blackbird in a tree 1607159813
Angle: 0.0 Zoom: 1.02
Tx: 0.0 Ty: 0.0 Tz: 10.0
Rx: 0.0 Ry: 0.0 Rz: 0.0
Positive prompt:a blackbird in a tree
Not using an init image (doing pure txt2img) - seed:1607159813; subseed:-1; subseed_strength:0; cfg_scale:7.0; steps:21
100%|██████████████████████████████████████████████████████████████████████████████████| 21/21 [00:04<00:00, 4.76it/s]
Rendering animation frame 1 of 120 | 21/1092 [00:02<02:22, 7.50it/s]
Deforum progress: 2%|█▏ | 21/1092 [00:03<02:39, 6.72it/s]
Error completing request
Arguments: (False, '', '3D', 120, 'replicate', '0:(0)', '0:(1.02+0.02sin(23.14*t/20))', '0:(0)', '0:(0)', '0:(10)', '0:(0)', '0:(0)', '0:(0)', False, '0:(0)', '0:(t%15)', '0:(0)', '0:(53)', '0: (0.08)', '0: (0.6)', '0: (1.0)', '0: (7)', '0: (40)', '0: (200)', '0: (10000)', '0: (t%4294967293)', 'Match Frame 0 LAB', 1, False, 0.3, 200.0, 10000.0, 40.0, 'border', 'bicubic', False, '/content/video_in.mp4', 1, False, False, '/content/video_in.mp4', False, 4, False, '20220829210106', '[\n "a beautiful forest by Asher Brown Durand, trending on Artstation",\n "a beautiful portrait of a woman by Artgerm, trending on Artstation"\n]\n', '{\n "0": "a blackbird in a tree"\n}\n', 512, 512, False, False, False, 0, 0, -1, 'Euler a', False, -1, 0, 0, 0, 21, 0.0, 1, False, 1, True, True, False, False, False, False, 'Deforum', '{timestring}{index}{prompt}.png', 'iter', False, False, True, 0, 'https://user-images.githubusercontent.com/14872007/195867706-d067cdc6-28cd-450b-a61e-55e25bc67010.png', False, False, False, True, 'https://www.filterforge.com/wiki/images/archive/b/b7/20080927223728%21Polygonal_gradient_thumb.jpg', 1.0, 1.0, 5.0, 1, True, 4, False, 12.0, 'PIL gif', 'ffmpeg', False, 'snowfall.mp3', False, False, 200.0, 'x0_pred', '/content/drive/MyDrive/AI/StableDiffusion/2022-09/20220903000939_%05d.png', '/content/drive/MyDrive/AI/StableDiffusion/content/drive/MyDrive/AI/StableDiffusion/2022-09/kabachuha/2022-09/20220903000939.mp4', '

[... dozens of Gradio UI help-text strings from the Deforum tab omitted ...] ') {}
Traceback (most recent call last):
File "E:\automattic111\stable-diffusion-webui\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "E:\automattic111\stable-diffusion-webui\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "E:\automattic111\stable-diffusion-webui\extensions\deforum\scripts\deforum.py", line 231, in run_deforum
processed = DeforumScript.run(None, p, override_settings_with_file, custom_settings_file, animation_mode, max_frames, border, angle, zoom, translation_x, translation_y, translation_z, rotation_3d_x, rotation_3d_y, rotation_3d_z, flip_2d_perspective, perspective_flip_theta, perspective_flip_phi, perspective_flip_gamma, perspective_flip_fv, noise_schedule, strength_schedule, contrast_schedule, cfg_scale_schedule, fov_schedule, near_schedule, far_schedule, seed_schedule, color_coherence, diffusion_cadence, use_depth_warping, midas_weight, near_plane, far_plane, fov, padding_mode, sampling_mode, save_depth_maps, video_init_path, extract_nth_frame, overwrite_extracted_frames, use_mask_video, video_mask_path, interpolate_key_frames, interpolate_x_frames, resume_from_timestring, resume_timestring, prompts, animation_prompts, W, H, restore_faces, tiling, enable_hr, firstphase_width, firstphase_height, seed, sampler, seed_enable_extras, subseed, subseed_strength, seed_resize_from_w, seed_resize_from_h, steps, ddim_eta, n_batch, make_grid, grid_rows, save_settings, save_samples, display_samples, save_sample_per_step, show_sample_per_step, override_these_with_webui, batch_name, filename_format, seed_behavior, use_init, from_img2img_instead_of_link, strength_0_no_init, strength, init_image, use_mask, use_alpha_as_mask, invert_mask, overlay_mask, mask_file, mask_contrast_adjust, mask_brightness_adjust, mask_overlay_blur, fill, full_res_mask, full_res_mask_padding, skip_video_for_run_all, fps, output_format, ffmpeg_location, add_soundtrack, soundtrack_path, use_manual_settings, render_steps, max_video_frames, path_name_modifier, image_path, mp4_path, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, i34, i35, i36)
File "E:\automattic111\stable-diffusion-webui\extensions\deforum\scripts\deforum.py", line 73, in run
render_animation(args, anim_args, root.animation_prompts, root)
File "E:\automattic111\stable-diffusion-webui\extensions\deforum\scripts\deforum_helpers\render.py", line 168, in render_animation
prev_img = anim_frame_warp_3d(root.device, prev_img_cv2, depth, anim_args, keys, frame_idx)
File "E:\automattic111\stable-diffusion-webui\extensions\deforum\scripts\deforum_helpers\animation.py", line 210, in anim_frame_warp_3d
result = transform_image_3d(device if not device.type.startswith('mps') else torch.device('cpu'), prev_img_cv2, depth, rot_mat, translate_xyz, anim_args, keys, frame_idx)
File "E:\automattic111\stable-diffusion-webui\extensions\deforum\scripts\deforum_helpers\animation.py", line 227, in transform_image_3d
z = torch.as_tensor(depth_tensor, dtype=torch.float32, device=device)
TypeError: must be real number, not NoneType

`

Disable internet/http requirement

I have noticed that you cannot run a batch if the network adapter is disabled or the internet is not working; you get an HTTP error. I would like to request the ability to run this completely offline. I'm trying to animate images of my own face and would just prefer it didn't have a network connectivity requirement. Thanks!

error

Hi
Sorry for asking maybe a newbie question, but I got an error when I run
python run.py --settings "./examples/runSettings_StillImages.txt"

models_path: ./models
output_path: ./output
Traceback (most recent call last):
  File "run.py", line 1312, in <module>
    main()
  File "run.py", line 112, in main
    from helpers import DepthModel, sampler_fn
ModuleNotFoundError: No module named 'helpers'

Do you have an idea?
Thanks

Update to Deforum Stable Diffusion version 0.5

Hello, it would be very cool to update to the latest version of Deforum Stable Diffusion. I am a developer myself, but I have little to no experience with Google Colab or with how the local version was done. @HelixNGC7293, if you have some hints about what you did to achieve that, it would be great. If anyone is up for making it happen, we can discuss here how it can be done.

ldm and dlib are missing from the setup.

After following the setup instructions, I tried to run it with the bat file.
It gave me an error that ldm was missing; I checked Anaconda and it was not inside the env.
I did pip install ldm. This added the classes, but ldm.py had print statements without parentheses, causing errors, so I added the brackets for all the print methods.
After this it gave me the error that dlib was missing; pip install dlib didn't work, giving an install error.

Has anyone else run into these issues, or has anyone run the setup successfully?

Zoom not working.

The zoom parameter changes nothing. I tried values from 1.04 up to 10, without changes. Only translation_z works.

Can't create animation video

After running python run.py --settings ".\examples\runSettings_Animation.txt" I get this error:

`- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Saving to ./output\2022-10\Example_DGSpitzer\20221018091342_*
Skipping video creation, uncheck skip_video_for_run_all if you want to run it`

RuntimeError: CUDA out of memory

I have an RTX 3050 4GB VRAM GPU. I tried to run the script with different width and height parameters (64x64, 128x128, 512x512).
It did not help.
I have also done:
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
The error I get is this:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.42 GiB already allocated; 0 bytes free; 3.48 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

What else can I do? How can I solve this?

Animation Prompts not working?

Hi!

Just managed to run the new updates locally, but I noticed the changes from the first animation prompt to the others are not very recognisable; in fact, they look like they're not happening at all for me.

I copied the settings from the 'runSettings_Template.txt' file and put my own prompts in... Are there any settings I need to pay attention to?

Lower VRAM options

Is it possible to add something like the "--medvram --opt-split-attention" options from the AUTOMATIC1111 webui fork for those of us without great hardware? Apologies if this is the wrong place to ask. I'm new to Git.

(feature request): save main images of interpolation just like colab

With Colab, when you start rendering using interpolation mode with keyframes set to true, Colab first renders the main keyframe images and outputs them, then starts the rest of the jobs. The local version does the same, but it doesn't output the images of these main keyframes. Can we get these main keyframe images with this local version too?
For example, if we have 3 prompts, Colab renders these three prompts first and shows them to us in the output window.

Thanks!

Dynamic Prompting?

Hello,

the local Deforum SD works fine for still image creation for me.

I can't seem to use it for Dynamic Prompting though.

Is the dynamic prompting in the Deforum Stable Diffusion v0.4 Colab managed by the Colab itself rather than being inherent to Deforum SD?

In the syntax of {opt1|opt2|opt3}, all the "|" seem to be simply ignored.

Thank you very much!
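As far as I know, dynamic prompting is a preprocessing step on the Colab side rather than part of the core Deforum code, which would explain why the "|" characters pass through untouched here. A minimal sketch of that {opt1|opt2|opt3} expansion in plain Python:

    # sketch of {opt1|opt2|opt3} dynamic-prompt expansion; this mimics the
    # Colab-side preprocessing, it is not part of this repo's run.py
    import random
    import re

    def expand_dynamic_prompt(prompt):
        # repeatedly replace the innermost {a|b|c} group with one random option
        pattern = re.compile(r"\{([^{}]+)\}")
        while True:
            m = pattern.search(prompt)
            if m is None:
                return prompt
            prompt = prompt[:m.start()] + random.choice(m.group(1).split("|")) + prompt[m.end():]

    print(expand_dynamic_prompt("a {red|green|blue} bird in a {tree|field}"))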

Using animation_mode : video_input error

Hey, I'm trying to use a video input.
I have the config set up and the video_in.mp4 in the /input directory,

but I'm getting this error:

Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 1304, in main
raise RuntimeError(stderr)
RuntimeError: b"ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers\r\n built with gcc 10.2.1 (GCC) 20200726\r\n configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n libavutil 56. 51.100 / 56. 51.100\r\n libavcodec 58. 91.100 / 58. 91.100\r\n libavformat 58. 45.100 / 58. 45.100\r\n libavdevice 58. 10.100 / 58. 10.100\r\n libavfilter 7. 85.100 / 7. 85.100\r\n libswscale 5. 7.100 / 5. 7.100\r\n libswresample 3. 7.100 / 3. 7.100\r\n libpostproc 55. 7.100 / 55. 7.100\r\n[image2 @ 000001fef47561c0] Could find no file with path './output\2022-09\my_videotest2\20220928143935_%05d.png' and index in the range 0-4\r\n./output\2022-09\my_videotest2\20220928143935_%05d.png: No such file or directory\r\n"

It seems like it is complaining about the missing png file...

Any suggestions on how to resolve this?

FFMPEG error on animation mode

No errors while doing the setup etc.,
but on running the run.py script with default settings
it generates all the frames, but then fails when using FFmpeg to assemble the video:

Global seed set to 3521683333
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:06<00:00, 2.63it/s]
<PIL.Image.Image image mode=RGB size=512x512 at 0x7F49F11FD610>
Rendering animation frame 27 of 30
creating in between frame 24 tween:0.33
creating in between frame 25 tween:0.67
creating in between frame 26 tween:1.00
beautiful desolate city fill with giant flowers, moody :: by James Jean, Jeff Koons, Dan McPharlin Daniel Merrian :: ornate, dynamic, particulate, rich colors, intricate, elegant, highly detailed, centered, artstation, smooth, sharp focus, octane render, 3d 3521683334
Global seed set to 3521683334
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:06<00:00, 2.63it/s]
<PIL.Image.Image image mode=RGB size=512x512 at 0x7F49F11FD4C0>
./output/out_%05d.png -> ./output/out_%05d.mp4
b"ffmpeg version 9c33b2f Copyright (c) 2000-2021 the FFmpeg developers\n built with gcc 9.3.0 (crosstool-NG 1.24.0.133_b0863d8_dirty)\n configuration: --prefix=/home/baaleos/miniconda3/envs/ldm --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1627813612080/_build_env/bin/x86_64-conda-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1627813612080/_build_env/bin/pkg-config\n libavutil 56. 51.100 / 56. 51.100\n libavcodec 58. 91.100 / 58. 91.100\n libavformat 58. 45.100 / 58. 45.100\n libavdevice 58. 10.100 / 58. 10.100\n libavfilter 7. 85.100 / 7. 85.100\n libavresample 4. 0. 0 / 4. 0. 0\n libswscale 5. 7.100 / 5. 7.100\n libswresample 3. 7.100 / 3. 7.100\n libpostproc 55. 7.100 / 55. 7.100\nHyper fast Audio and Video encoder\nusage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...\n\nUse -h to get full help or, even better, run 'man ffmpeg'\n"
Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 1304, in main
raise RuntimeError(stderr)
RuntimeError: b"ffmpeg version 9c33b2f Copyright (c) 2000-2021 the FFmpeg developers\n built with gcc 9.3.0 (crosstool-NG 1.24.0.133_b0863d8_dirty)\n configuration: --prefix=/home/baaleos/miniconda3/envs/ldm --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1627813612080/_build_env/bin/x86_64-conda-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1627813612080/_build_env/bin/pkg-config\n libavutil 56. 51.100 / 56. 51.100\n libavcodec 58. 91.100 / 58. 91.100\n libavformat 58. 45.100 / 58. 45.100\n libavdevice 58. 10.100 / 58. 10.100\n libavfilter 7. 85.100 / 7. 85.100\n libavresample 4. 0. 0 / 4. 0. 0\n libswscale 5. 7.100 / 5. 7.100\n libswresample 3. 7.100 / 3. 7.100\n libpostproc 55. 7.100 / 55. 7.100\nHyper fast Audio and Video encoder\nusage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...\n\nUse -h to get full help or, even better, run 'man ffmpeg'\n"

FFMPEG.EXE System Error

"The Code execution cannot proceed because open264-6.dll was not found. reinstalling the program may fix this problem."

I get this error constantly during "rendering animation frame 27 of 30", at the "./output/out_%05d.png -> ./output/out_%05d.mp4" step.

I've tried manually adding the DLL, and nothing seems to work. Any tips or tricks?

ModuleNotFoundError: No module named 'IPython'

Hello, I'm trying to start it and I'm getting this error...

`C:\Users\trixs\Desktop\DeforumStableDiffusionLocal-main>python run.py --enable_animation_mode --settings "./runSettings_Template.txt"
Local Path Variables:

models_path: ./models
output_path: ./output
Traceback (most recent call last):
File "C:\Users\trixs\Desktop\DeforumStableDiffusionLocal-main\run.py", line 1312, in
main()
File "C:\Users\trixs\Desktop\DeforumStableDiffusionLocal-main\run.py", line 74, in main
import IPython
ModuleNotFoundError: No module named 'IPython'`

hash not correct error

Any idea what might be causing this error?

(dsd) E:\DeforumStableDiffusionLocal-main>python run.py --enable_animation_mode --settings "./runSettings_Template.txt"
Local Path Variables:

models_path: ./models
output_path: ./output
Using config: ./stable-diffusion/configs/stable-diffusion/v1-inference.yaml
./models\sd-v1-4.ckpt exists

...checking sha256
hash in not correct

Saving animation frames to ./output\2022-09\TaskName
Traceback (most recent call last):
File "run.py", line 1312, in
main()
File "run.py", line 1250, in main
render_animation(args, anim_args)
File "run.py", line 953, in render_animation
depth_model = DepthModel(device)
NameError: free variable 'device' referenced before assignment in enclosing scope

GPU not detected / modules not found

When attempting to run the examples, I constantly got "xxx module not found" errors, which I fixed by navigating to the Python pip folder after each one and running ./pip install xxx.

One issue I could not fix was: raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

It seems it can't detect the GPU or similar. I tried this on 2 machines with the same results. Any ideas? Thanks

Video as input.

It works perfectly for me. One question: Is it possible to have a video as input? Apparently you can... If so, what are the parameters?

Question: Prompt influence to video length

Hello,

this is not an issue, but a question I have about animation prompts. I could not find an explanation of that parameter in the original Colab.

The animation prompts are numbered. Are the numbers in front of the prompts an indication of the frame at which each prompt comes into effect (e.g. 0 for prompt one from the start, 10 for prompt two, coming into effect at frame ten), or are the numbers just like line numbers in code, with the prompts distributed evenly over the number of frames?

Thanks in advance for an answer.
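For reference, in the Colab version the numbers are frame indices: each prompt takes effect at the frame it is keyed to and stays active until the next key. So a schedule like the sketch below (same JSON shape as the templates) switches prompts at frame 60:

    "animation_prompts": {
        "0": "a beautiful forest by Asher Brown Durand, trending on Artstation",
        "60": "a beautiful portrait of a woman by Artgerm, trending on Artstation"
    }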

two device CUDA ERROR

ubuntu 22.04 cuda 11 RTX 3060 laptop

python 3.10.6

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
