g-u-n / gen-l-video

The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".

Home Page: https://arxiv.org/abs/2305.18264

License: Apache License 2.0

Languages: Jupyter Notebook 98.23%, Python 1.75%, Shell 0.02%
Topics: diffusion-models, long-video-generation, stable-diffusion, text-to-video, text2video, video-editing, video-generation

gen-l-video's People

Contributors: g-u-n, winshot-thu


gen-l-video's Issues

Failed to follow the setup instructions on the ubuntu:latest docker image

Hi!

I was able to use your Colab, but ran into some issues while trying to set things up locally (on a fresh ubuntu:latest Docker image, after having installed a reasonable set of system dependencies to support building things from source).

The problematic step for me was this one:
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers

Here is the error I bumped into:

Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/small_k.o.d -pthread -B /root/miniconda3/envs/glv/compi
ler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/gl
v/lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/T
H -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csr
c/attention/cpu/small_k.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/small_k.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXT
ENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/small_k.o
  c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/small_k.o.d -pthread -B /root/miniconda3/envs/glv/compiler_com
pat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/glv/lib/p
ython3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/TH -I/ro
ot/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc/atten
tion/cpu/small_k.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/small_k.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_
H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8,
                   from /root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/ATen/cpu/vec/vec.h:6,
                   from /root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/ATen/cpu/vec/functional_base.h:6,
                   from /root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/ATen/cpu/vec/functional.h:3,
                   from /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc/attention/cpu/small_k.cpp:14:
  /root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/ATen/cpu/vec/vec_base.h:976: warning: ignoring ‘#pragma unroll ’ [-Wunknown-pragmas]
    976 | # pragma unroll
        |
  c++: fatal error: Killed signal terminated program cc1plus
  compilation terminated.
  [2/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/autograd/matmul.o.d -pthread -B /root/miniconda3/envs/glv/c
ompiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/env
s/glv/lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/inclu
de/TH -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers
/csrc/attention/autograd/matmul.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/autograd/matmul.o -O3 -fopenmp -DTORCH_API
_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/autograd/matmul.o
  c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/autograd/matmul.o.d -pthread -B /root/miniconda3/envs/glv/compiler
_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/glv/l
ib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/TH -
I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc/a
ttention/autograd/matmul.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/autograd/matmul.o -O3 -fopenmp -DTORCH_API_INCLUD
E_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  c++: fatal error: Killed signal terminated program cc1plus
  compilation terminated.
  [3/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/attention.o.d -pthread -B /root/miniconda3/envs/glv/compile
r_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/glv/
lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/TH
-I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc/
attention/attention.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/attention.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION
_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  [4/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/matmul.o.d -pthread -B /root/miniconda3/envs/glv/compil
er_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/glv
/lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/TH
 -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc
/attention/cpu/matmul.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/matmul.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENS
ION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  [5/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/sparse_softmax.o.d -pthread -B /root/miniconda3/envs/gl
v/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/
envs/glv/lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/in
clude/TH -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xform
ers/csrc/attention/cpu/sparse_softmax.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/sparse_softmax.o -O3 -fopenmp -D
TORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  [6/14] c++ -MMD -MF /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/sddmm.o.d -pthread -B /root/miniconda3/envs/glv/compile
r_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc -I/root/miniconda3/envs/glv/
lib/python3.8/site-packages/torch/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/TH
-I/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/include/THC -I/root/miniconda3/envs/glv/include/python3.8 -c -c /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/xformers/csrc/
attention/cpu/sddmm.cpp -o /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/build/temp.linux-x86_64-cpython-38/xformers/csrc/attention/cpu/sddmm.o -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION
_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1900, in _run_ninja_build
      subprocess.run(
    File "/root/miniconda3/envs/glv/lib/python3.8/subprocess.py", line 516, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

  The above exception was the direct cause of the following exception:

  Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/setup.py", line 397, in <module>
      setuptools.setup(
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/__init__.py", line 107, in setup
      return distutils.core.setup(**attrs)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
      return run_commands(dist)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
      dist.run_commands()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
      self.run_command(cmd)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/dist.py", line 1234, in run_command
      super().run_command(command)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 346, in run
      self.run_command("build")
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
      self.distribution.run_command(command)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/dist.py", line 1234, in run_command
      super().run_command(command)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/command/build.py", line 131, in run
      self.run_command(cmd_name)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
      self.distribution.run_command(command)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/dist.py", line 1234, in run_command
      super().run_command(command)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
      _build_ext.run(self)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run
      self.build_extensions()
    File "/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/setup.py", line 342, in build_extensions
      super().build_extensions()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
      build_ext.build_extensions(self)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 467, in build_extensions
      self._build_extensions_serial()
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 493, in _build_extensions_serial
      self.build_extension(ext)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
      _build_ext.build_extension(self, ext)
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 548, in build_extension
      objects = self.compiler.compile(
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 658, in unix_wrap_ninja_compile
      _write_ninja_file_and_compile_objects(
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1573, in _write_ninja_file_and_compile_objects
      _run_ninja_build(
    File "/root/miniconda3/envs/glv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1916, in _run_ninja_build
      raise RuntimeError(message) from e
  RuntimeError: Error compiling objects for extension
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /root/miniconda3/envs/glv/bin/python -u -c '
  exec(compile('"'"''"'"''"'"'
  # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
  #
  # - It imports setuptools before invoking setup.py, to enable projects that directly
  #   import from `distutils.core` to work with newer packaging standards.
  # - It provides a clear error message when setuptools is not installed.
  # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
  #   setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
  #     manifest_maker: standard file '"'"'-c'"'"' not found".
  # - It generates a shim setup.py, for handling setup.cfg-only projects.
  import os, sys, tokenize

  try:
      import setuptools
  except ImportError as error:
      print(
          "ERROR: Can not execute `setup.py` since setuptools is not available in "
          "the build environment.",
          file=sys.stderr,
      )
      sys.exit(1)

  __file__ = %r
  sys.argv[0] = __file__

  if os.path.exists(__file__):
      filename = __file__
      with tokenize.open(__file__) as f:
          setup_py_code = f.read()
  else:
      filename = "<auto-generated setuptools caller>"
      setup_py_code = "from setuptools import setup; setup()"

  exec(compile(setup_py_code, filename, "exec"))
  '"'"''"'"''"'"' % ('"'"'/tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' bdist_wheel -d /tmp/pip-wheel-ijkrp58n
  cwd: /tmp/pip-install-ju61h9ea/xformers_5a4f9261469340c192ed2be26d8fc695/
  ERROR: Failed building wheel for xformers
  Building wheel for xformers (setup.py): finished with status 'error'
  Running setup.py clean for xformers
  Running command python setup.py clean
  No CUDA runtime is found, using CUDA_HOME='/root/miniconda3/envs/glv'
  running clean
  'build/lib.linux-x86_64-cpython-38' does not exist -- can't clean it
  'build/bdist.linux-x86_64' does not exist -- can't clean it
  'build/scripts-3.8' does not exist -- can't clean it
Failed to build xformers
ERROR: Could not build wheels for xformers, which is required to install pyproject.toml-based projects

The setup instructions from https://github.com/facebookresearch/xformers are also incompatible with this project because of software version conflicts (even the required Python version differs).
It would be great if you could provide a Dockerfile in this repository. It would make it much easier for users to try things out locally without having to worry about issues like this.

PS: Congratulations on this project, it's very impressive.
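A possible workaround, based only on the log above: the "c++: fatal error: Killed signal terminated program cc1plus" lines usually mean the compiler was killed because the machine ran out of memory, and the log itself notes that ninja's worker count can be overridden via the MAX_JOBS environment variable. Retrying the same install with reduced parallelism, e.g. MAX_JOBS=2 pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers, may allow the build to complete on machines with limited RAM.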

different samplers in the temporal co-denoising

Hello,
First of all, thanks for the great work. I've been exploring the denoising process and found the results quite interesting.
I have a question regarding the use of different samplers in the temporal co-denoising process. In my experiments, I observed that the choice of sampler can significantly affect the results. For instance, when using the DDPM sampler, there appear to be notable differences compared to using other samplers.
Could you please shed some light on why this is the case? Is there any underlying mechanism or theoretical explanation that could help me better understand the reason behind these observed effects?
Any guidance or pointer towards relevant literature would be greatly appreciated.
Thank you in advance for your help!

code with videocrafter

Hello, have you released the code for the "Pretrained Text-to-Video" setting described in your paper? Thank you.

Question about LoRA

Hello, I have some questions about the LoRA used in one-shot tuning.
[image: screenshot of the relevant code]
What role does this LoRA play here, and what do the parameters of the get_lora method mean? Thank you!
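For context, below is a minimal sketch of what a LoRA layer typically does, assuming the standard low-rank formulation W + (alpha/r) * B * A; it is not the repo's actual get_lora implementation, whose exact parameters only the authors can confirm. In this formulation, r is the rank of the low-rank update (the adapter's capacity) and alpha scales its contribution, while the base weights stay frozen so only the small adapter is trained during one-shot tuning.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA wrapper: frozen base layer plus a trainable low-rank update."""
        def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)                              # base weights stay frozen
            self.down = nn.Linear(base.in_features, r, bias=False)   # A: project input down to rank r
            self.up = nn.Linear(r, base.out_features, bias=False)    # B: project back up to the output size
            nn.init.zeros_(self.up.weight)                           # start as a no-op so training begins from the base model
            self.scale = alpha / r                                   # alpha controls how strongly the adapter contributes

        def forward(self, x):
            return self.base(x) + self.scale * self.up(self.down(x))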

Image guided video editing

The current architecture performs text-guided video editing. How can I adapt it for image-guided video editing? I would like to provide an image that should be placed in the masked region, similar to Make-A-Protagonist.

Adapting existing ModelScope architectures to generate long videos

Hi, I've been trying this repo out over the last few days and it's really amazing. I wanted to see what the maximum length is to which I can stretch video generation without fine-tuning. I looked around a lot, but there doesn't seem to be a script yet for adapting an existing short text-to-video model to long generation. I was wondering if you could guide me on how to achieve this; even a pseudocode approach would be great. If it ends up working, I can submit a PR in case you're interested in integrating it with the rest of the code.
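In the absence of such a script, here is a minimal sketch of the paper's temporal co-denoising idea, assuming a diffusers-style short-clip UNet that predicts noise for clip_length frames at a time. All names are illustrative, and the actual implementation may weight overlapping windows non-uniformly rather than using the plain average shown here.

    import torch

    def co_denoise_noise_pred(latents, t, unet, text_embs, clip_length=16, stride=8):
        # latents:   (B, C, F, H, W) noisy latents for the full long video (F frames)
        # text_embs: one text embedding per window (this is what enables multi-text conditioning)
        num_frames = latents.shape[2]
        noise_pred = torch.zeros_like(latents)
        weight = torch.zeros_like(latents)
        starts = range(0, num_frames - clip_length + 1, stride)
        for idx, s in enumerate(starts):
            window = latents[:, :, s:s + clip_length]                           # a clip the short model can handle
            eps = unet(window, t, encoder_hidden_states=text_embs[idx]).sample  # per-window noise prediction
            noise_pred[:, :, s:s + clip_length] += eps                          # accumulate overlapping predictions
            weight[:, :, s:s + clip_length] += 1.0                              # count contributions per frame
        return noise_pred / weight.clamp(min=1.0)                               # fuse by a (uniform) weighted average

Each scheduler step would then use this fused prediction instead of a single-clip prediction, so every frame is denoised consistently with all the windows that cover it.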

git clone https://huggingface.co/andite/anything-v4.0: this address is currently inaccessible, so the relevant weight files cannot be downloaded. What should I do?

multi-text video generation

Thanks for the great work. I am wondering whether this framework can perform multi-text long video generation. Although the title of the slayer example is "multi-text conditioned long video generation", it seems to be long video editing rather than generation?

unknown mid_block_type : UNetMidBlock2DCrossAttn

First run after installing, downloading weights, modifying configs etc.

Traceback (most recent call last):
  File "D:\Gen-L-Video\one-shot-tuning.py", line 522, in <module>
    main(**conf)
  File "D:\Gen-L-Video\one-shot-tuning.py", line 128, in main
    unet = UNet3DConditionModel.from_pretrained_2d(pretrained_model_path, subfolder="unet")
  File "D:\Gen-L-Video\glv\models\unet.py", line 505, in from_pretrained_2d
    model = cls.from_config(config)
  File "D:\Gen-L-Video\voc_genlvideo\lib\site-packages\diffusers\configuration_utils.py", line 229, in from_config
    model = cls(**init_dict)
  File "D:\Gen-L-Video\voc_genlvideo\lib\site-packages\diffusers\configuration_utils.py", line 607, in inner_init
    init(self, *args, **init_kwargs)
  File "D:\Gen-L-Video\glv\models\unet.py", line 162, in __init__
    raise ValueError(f"unknown mid_block_type : {mid_block_type}")
ValueError: unknown mid_block_type : UNetMidBlock2DCrossAttn

Any ideas how to get past this one?
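The error means the mid_block_type read from the pretrained 2D UNet's config is not one of the values this repo's UNet3DConditionModel handles, so it is worth checking which value the config actually declares and which diffusers version produced it; both are common sources of this kind of mismatch. A quick way to inspect the downloaded config (the path below is illustrative) is:

    import json

    # Inspect the mid_block_type declared by the downloaded 2D UNet config.
    # Adjust the path to wherever the pretrained weights live on your machine.
    with open("weights/stable-diffusion-v1-5/unet/config.json") as f:  # hypothetical path
        cfg = json.load(f)
    print(cfg.get("mid_block_type"), cfg.get("_diffusers_version"))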

Missing citation of MultiDiffusion

Hello,

The primary idea of extending the length of generated videos likely originates from the paper MultiDiffusion [1]. MultiDiffusion expands the resolution of generated images along the spatial dimension, while Gen-L-Video expands along the temporal dimension. Gen-L-Video uses the same algorithm as MultiDiffusion, which runs multiple diffusion processes and combines them with a weighted sum. Also, in your code, it seems you reused code from diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py, which is the diffusers implementation of MultiDiffusion.

I'm not the author of MultiDiffusion, but maybe you should add it to the citations in the paper. Just a reminder.

[1] Bar-Tal, O., Yariv, L., Lipman, Y., & Dekel, T. (2023). MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. International Conference on Machine Learning.
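For readers comparing the two methods, the shared fusion rule can be written informally as roughly x_t = ( Σ_i W_i ⊙ F_i⁻¹(x̂_t^i) ) / ( Σ_i W_i ): each window (a spatial crop in MultiDiffusion, a temporal clip in Gen-L-Video) produces its own denoised estimate x̂_t^i, which is mapped back to the full canvas or video by F_i⁻¹ and combined by a per-pixel (or per-frame) weighted average W_i.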

Question about the One-shot tuning Text-to-Video algorithm

Hello, I have some questions about the pipeline of the one-shot tuning text-to-video algorithm. I am confused about how the algorithm below is reflected in the one-shot tuning code.
[image: algorithm from the paper]
In the paper, it says 'The total number of frames of the video is S ∗ N + M'.
In the code, the for loop is 'for i in range(0, video_length - clip_length + 1, clip_length):'.
Does this mean S == M in this code?
Thank you very much!
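A tiny illustration of the windowing implied by that loop (the numbers are made up):

    video_length, clip_length = 24, 8
    starts = list(range(0, video_length - clip_length + 1, clip_length))
    print(starts)  # [0, 8, 16]: three clips of 8 frames

With the step set to clip_length, the clips tile the video without overlapping, which is exactly what motivates the S == M question above.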

scheduler_config.json is missing

When I tried to run accelerate launch tuning-free-control.py --config=./configs/tuning-free-control/girl-glass.yaml to generate long videos, I got this error: OSError: Error no file named scheduler_config.json found in directory weights/chilloutmix. The ChilloutMix model was downloaded from https://huggingface.co/hanafuusen2001/ChilloutMix, but there are no JSON files in that repository. Thank you!
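One possible workaround, under the assumption that the pipeline expects a standard Stable Diffusion 1.x-style diffusers directory: since the linked ChilloutMix repository ships single checkpoint files rather than a diffusers-format folder (hence no JSON files), converting the checkpoint once and pointing the config at the converted folder should supply the missing scheduler_config.json. The checkpoint file name below is illustrative, and from_single_file requires a reasonably recent diffusers version.

    from diffusers import StableDiffusionPipeline

    # Untested sketch: convert a single-file checkpoint into the diffusers directory layout,
    # which includes the scheduler config the pipeline is looking for.
    pipe = StableDiffusionPipeline.from_single_file("ChilloutMix-ni-fp16.safetensors")  # hypothetical file name
    pipe.save_pretrained("weights/chilloutmix")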
