
animatelcm's Introduction

List of Projects

Video Editing/Synthesis

  • Motion-I2V: General explicit motion generation framework, working for Image-to-Video, Drag Video, Motion Brush, Vid2Vid.
  • AnimateLCM: video generation within 4 steps.
  • Be-Your-Outpainter: video outpainting pipeline.

Class-Incremental Learning

  • PyCIL: a Python toolbox for class-incremental learning.
  • CIL.Survey: a survey of deep class-incremental learning.
  • CIL.Pytorch: a PyTorch tutorial on class-incremental learning.

Contact

email: [email protected] or [email protected]

animatelcm's People

Contributors

dillfrescott, g-u-n

animatelcm's Issues

Issue when training AnimateLCM SVD

Thanks for the great work, and for releasing the training script train_svd_lcm.py.

I am trying to reproduce the results using the provided train_svd_lcm.py, but after half of the training (20,000 / 50,000 iterations) I don't see any improvement in either the loss value or the generation quality (training on a single A100 on WebVid2M).

Could you please confirm whether I should set the hyperparameters as follows?

accelerate launch train_svd_lcm.py \
--pretrained_model_name_or_path=stabilityai/stable-video-diffusion-img2vid-xt \
--per_gpu_batch_size=1 --gradient_accumulation_steps=1 \
--max_train_steps=50000 \
--width=576 \
--height=320 \
--checkpointing_steps=1000 --checkpoints_total_limit=1 \
--learning_rate=1e-6 --lr_warmup_steps=1000 \
--seed=123 \
--adam_weight_decay=1e-3 \
--mixed_precision="fp16" \
--N=40 \
--validation_steps=500 \
--enable_xformers_memory_efficient_attention \
--gradient_checkpointing \
--output_dir="outputs"

In the current train_svd_lcm.py, the model is trained at 576x320 resolution, which is much lower than the standard SVD resolution of 1024x576. Wouldn't this cause a problem, given that the normal (non-LCM) SVD struggles when generating lower-resolution videos?

Any input is much appreciated :)

Not working on M1

Code:

import torch

from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Load the conditioning image
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

export_to_video(frames, "generated.mp4", fps=7)

Error:

Traceback (most recent call last):
  File "p.py", line 16, in <module>
    frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
  File "/Users/yuki/anaconda3/envs/ai/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/yuki/anaconda3/envs/ai/lib/python3.8/site-packages/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py", line 441, in __call__
    image_embeddings = self._encode_image(image, device, num_videos_per_prompt, self.do_classifier_free_guidance)
  File "/Users/yuki/anaconda3/envs/ai/lib/python3.8/site-packages/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py", line 168, in _encode_image
    image = image.to(device=device, dtype=dtype)
  File "/Users/yuki/anaconda3/envs/ai/lib/python3.8/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
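
The failure happens because enable_model_cpu_offload() targets a CUDA device, which a default macOS PyTorch build does not have. A minimal, untested sketch of a possible workaround on Apple Silicon, assuming a PyTorch build with MPS support, is to move the pipeline to the "mps" device instead of using CPU offload:

import torch
from diffusers import StableVideoDiffusionPipeline

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)

# Skip enable_model_cpu_offload(): its offload hooks assume a CUDA device.
if torch.backends.mps.is_available():
    pipe.to("mps")   # run on the Apple GPU via the MPS backend
else:
    pipe.to("cpu")   # slow CPU fallback

The rest of the original script (loading the image, calling the pipeline, exporting the video) stays the same.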

How to install triton for Python 3.9

File "C:\Users\SHREE\anaconda3\envs\animatelcm_svd\lib\site-packages\xformers_init_.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "C:\Users\SHREE\anaconda3\envs\animatelcm_svd\lib\site-packages\xformers\triton\softmax.py", line 11, in
import triton
ModuleNotFoundError: No module named 'triton'

Please tell me how to install triton on Python 3.9.
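
On Windows (as the paths above suggest), triton ships no official wheels, so installing it into this environment is unlikely to work. A hedged fallback sketch, assuming the pipeline object is called pipe: only enable xformers memory-efficient attention when the call actually succeeds, and otherwise keep the default attention.

def try_enable_xformers(pipe):
    # Attempt to enable xformers attention; fall back to the default attention
    # if xformers or its triton dependency is unavailable (e.g. on Windows).
    try:
        pipe.enable_xformers_memory_efficient_attention()
        return True
    except Exception as exc:
        print(f"xformers unavailable, using default attention: {exc}")
        return False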

Is Teacher-Free Adaptation the same as Latent Consistency Fine-tuning (LCF)?

Great work!

I am confused about 'Teacher-Free Adaptation'. Does it mean Latent Consistency Fine-tuning (LCF)? That is, directly select two timesteps, obtain the noised data z_{t} and z_{t-1}, and then compute the consistency loss between these two timesteps to enforce the self-consistency property, as LCF does in the LCM paper? (A rough sketch of this loss follows the list below.)

So the training procedure is :

  1. Train the base image diffusion model with Latent Consistency Distillation on image data.
  2. Freeze the LCM image diffusion weights and add a trainable temporal layer, trained with Latent Consistency Distillation, the new initialization strategy, and video data.
  3. Add other controls (ControlNet or IP-Adapter), using Latent Consistency Fine-tuning and data with the control conditions.

Is my understanding correct?
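
For concreteness, here is a rough, hypothetical sketch of the self-consistency loss described above; model, ema_model, and noise_scheduler are placeholders, and this is not the authors' code:

import torch
import torch.nn.functional as F

def self_consistency_loss(model, ema_model, z0, t, t_prev, noise_scheduler):
    # Noise the clean latent z0 at two adjacent timesteps with the same noise.
    noise = torch.randn_like(z0)
    z_t = noise_scheduler.add_noise(z0, noise, t)            # z_t
    z_t_prev = noise_scheduler.add_noise(z0, noise, t_prev)  # z_{t-1}
    # Pull the online prediction at t toward the EMA prediction at t-1.
    pred = model(z_t, t)
    with torch.no_grad():
        target = ema_model(z_t_prev, t_prev)
    return F.mse_loss(pred, target)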

Header too large error when running app.py

Traceback (most recent call last):
  File "/home/cross/Downloads/animate/animatelcm_svd/app.py", line 59, in <module>
    model_select("AnimateLCM-SVD-xt-1.1.safetensors")
  File "/home/cross/Downloads/animate/animatelcm_svd/app.py", line 33, in model_select
    with safe_open(file_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
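
A HeaderTooLarge error from safetensors is often a sign that the file on disk is not a real weights file (for example a git-lfs pointer or a truncated download) rather than a problem in app.py. A quick, hedged sanity check; the filename mirrors the one in the traceback:

import os

path = "AnimateLCM-SVD-xt-1.1.safetensors"
print(os.path.getsize(path), "bytes")
# A git-lfs pointer is only a few hundred bytes; the real checkpoint should be
# several gigabytes. If the size looks wrong, re-download the file.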

UCF101 evaluation details

Dear authors,

Thank you for the great work!

I want to seek some clarification on the evaluation details described in Section 5.1 of your paper, particularly concerning the resolution of the snippets generated for the UCF101 analysis. The section mentions that the snippets are generated at a resolution of 512x512, whereas the original UCF101 videos are at 320x240 and the I3D classifier is trained on 224x224 inputs.

Could you kindly provide further insight into the rationale behind selecting a 512x512 resolution for the snippets in this context?

Thank you in advance!

Regards,
Yuanhao

Usable in sd-webui?

Hello,
Could you help? I am missing something in my sd-webui (1.6.1) setup:
[screenshot]

Any idea?

How to generate a long video?

Dear AnimateLCM team,

Thank you for your great work, I really like it.

Could you tell me how to generate a long video (>10 s) like the ones shown on the README page? I tried increasing num_frames from 16 to 32, but the results degrade a lot.

import torch
from diffusers.utils import export_to_gif

# pipe: the AnimateLCM text-to-video pipeline, set up as in the README
output = pipe(
    prompt="a young woman walking on street, 4k, high resolution",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=32,  # 16
    guidance_scale=2.0,
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm.gif")

Thank you for your help.

Best Wishes,
Zongze

SDXL

Thanks for the great work. Are you planning to shift from SD1.5 to SDXL, and would it take much effort?

import error

from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
ImportError: cannot import name 'AnimateDiffPipeline' from 'diffusers' (/root/miniconda3/lib/python3.7/site-packages/diffusers/__init__.py)

The diffusers version is 0.21.4.
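
AnimateDiffPipeline is not part of diffusers 0.21.4, which is exactly what the ImportError reports. A hedged check-and-fix sketch; note that recent diffusers releases also expect a Python version newer than the 3.7 shown in the path:

import diffusers
print(diffusers.__version__)  # 0.21.4 here; upgrade diffusers to a recent release

# After upgrading, this import should succeed:
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter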

License

Would it be possible to get a license added to the repo? My company would love to use the project but without a license it's not possible to do so.

KeyError in webui

model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
KeyError: 'model.diffusion_model.input_blocks.0.0.weight'

Possible Typo in the Paper

Hi, thanks for the interesting paper.

I think the term shown in Equation (10) should be c. If x is correct, could you explain why?

ImportError: cannot import name 'ForkProcess' when running app.py

(animatelcm_svd) C:\Users\DELL\OneDrive\Desktop\ImagetoVideo\AnimateSVD model\AnimateLCM\animatelcm_svd>python app.py
Traceback (most recent call last):
  File "C:\Users\DELL\OneDrive\Desktop\ImagetoVideo\AnimateSVD model\AnimateLCM\animatelcm_svd\app.py", line 1, in <module>
    import spaces
  File "C:\Users\DELL\anaconda3\envs\animatelcm_svd\lib\site-packages\spaces\__init__.py", line 10, in <module>
    from .zero.decorator import GPU
  File "C:\Users\DELL\anaconda3\envs\animatelcm_svd\lib\site-packages\spaces\zero\decorator.py", line 21, in <module>
    from .wrappers import regular_function_wrapper
  File "C:\Users\DELL\anaconda3\envs\animatelcm_svd\lib\site-packages\spaces\zero\wrappers.py", line 15, in <module>
    from multiprocessing.context import ForkProcess
ImportError: cannot import name 'ForkProcess' from 'multiprocessing.context' (C:\Users\DELL\anaconda3\envs\animatelcm_svd\lib\multiprocessing\context.py)

I got this error. How do I solve it? Please help.
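
multiprocessing.context has no ForkProcess on Windows (Windows cannot fork), and the spaces package that app.py imports is the Hugging Face Spaces ZeroGPU helper, which is only needed when running on a Space. A hedged local workaround sketch, assuming app.py only uses spaces as the @spaces.GPU decorator:

try:
    import spaces
    gpu = spaces.GPU
except ImportError:
    # No-op stand-in for spaces.GPU when running outside Hugging Face Spaces,
    # e.g. locally on Windows where the import above fails.
    def gpu(fn=None, **kwargs):
        if fn is None:
            return lambda f: f
        return fn

# Then use @gpu in place of @spaces.GPU in app.py.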

Discussion of sigma and timestep implementations

@G-U-N Hi, thanks for your great work.

I have fine-tuned SVD using the following way of generating sigma and timesteps, which is consistent with the EDMv2 paper.

My implementation:
[screenshot]

EDMv2:
[screenshot]

You seem to follow the original EDM paper's implementation. Have you compared it with the EDMv2 approach?
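
For reference, a hedged sketch of the log-normal sigma sampling from the original EDM paper, i.e. ln(sigma) ~ N(P_mean, P_std^2); the P_mean / P_std defaults below are the EDM image defaults, not necessarily what this repo or EDMv2 uses:

import torch

def sample_sigmas_edm(batch_size, P_mean=-1.2, P_std=1.2, device="cpu"):
    # Draw sigma so that log(sigma) is normally distributed.
    rnd = torch.randn(batch_size, device=device)
    return torch.exp(rnd * P_std + P_mean)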

Distillation of Video Diffusion Based Model

Hi, firstly, great work.

I have a question regarding the distillation of the video diffusion model. Did you use the DDIM sampler when distilling from the video-based diffusion model, and did training use skip timesteps while training the online consistency-distillation model?

Also, how many optimization steps did the training involve to get good results with the distilled model?

Thanks for the help.

Image to Video generation

Hello devs!

I would like to know if it's possible to start from an image, instead of generating the image. I have many nice images I would like to animate with this. I've skimmed the code and I couldn't find an easy way.

Thanks for the answer!
Kind regards,
Timon Käch
