
sd-webui-mov2mov's Introduction


Mov2mov

This is the Mov2mov plugin for Automatic1111/stable-diffusion-webui.


Features:

  • Directly process frames from videos
  • Package into a video after processing
  • Video Editing (beta)
    • Dramatically reduce video flicker by keyframe compositing!
    • You can customize the keyframe selection or auto-generate keyframes.
    • Back-propagate keyframe tags
    • Currently only available on Windows; if your system does not support it, you can turn off this tab.

Also, mov2mov will work better with the bg-mask plugin 😃

Table of Contents

  • Usage Regulations
  • Installation
  • Change Log
  • Instructions
  • Thanks

Usage Regulations

  1. Please resolve the authorization issues of the video source on your own. Any problems caused by using unauthorized videos for conversion must be borne by the user. It has nothing to do with mov2mov!
  2. Any video made with mov2mov and published on video platforms must clearly specify the source of the video used for conversion in the description. For example, if you use someone else's video and convert it through AI, you must provide a clear link to the original video; if you use your own video, you must also state this in the description.
  3. All copyright issues caused by the input source must be borne by the user. Note that many videos explicitly state that they cannot be reproduced or copied!
  4. Please strictly comply with national laws and regulations to ensure that the content is legal and compliant. Any legal responsibility caused by using this plugin must be borne by the user. It has nothing to do with mov2mov!

Installation

  1. Open the Extensions tab.
  2. Click on Install from URL.
  3. Enter the URL for the extension's git repository.
  4. Click Install.
  5. Restart WebUI.

Change Log

Instructions

Thanks

sd-webui-mov2mov's People

Contributors

davg25, scholar01, shineyull


sd-webui-mov2mov's Issues

How to diffuse only specified fps?

Thank you for making this.

Let's say I have a 24 fps video and I want to diffuse only half the frames. If I set the first fps slider to 12, it still diffuses all the frames of the video (24 fps), which is a waste of time.
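Until the extension supports frame skipping natively, one workaround is to halve the frame rate of the source before uploading it. A minimal OpenCV sketch, assuming a 24 fps input; the file names are placeholders:

import cv2

cap = cv2.VideoCapture("input_24fps.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write every second frame at half the frame rate, so duration is unchanged.
out = cv2.VideoWriter("input_12fps.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps / 2, (w, h))
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % 2 == 0:  # keep frames 0, 2, 4, ...
        out.write(frame)
    index += 1
cap.release()
out.release()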

Error in the latest version

Error running install.py for extension D:\Program Files\AIdraw\novelai-webui-aki-v3\extensions\sd-webui-mov2mov.
Command: "d:\Program Files\AIdraw\novelai-webui-aki-v3\py310\python.exe" "D:\Program Files\AIdraw\novelai-webui-aki-v3\extensions\sd-webui-mov2mov\install.py"
Error code: 1
stdout:
stderr: Traceback (most recent call last):

File "D:\Program Files\AIdraw\novelai-webui-aki-v3\extensions\sd-webui-mov2mov\install.py", line 4, in

plat = platform.system().lower()

NameError: name 'platform' is not defined
Hint: The Python runtime threw an exception. Please check the troubleshooting page.

openh264-1.8.0-win64

Failed to load OpenH264 library: openh264-1.8.0-win64.dll
Please check environment and/or download library: https://github.com/cisco/openh264/releases

[libopenh264 @ 00000271524cd600] Incorrect library version loaded
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3049 open Could not open codec libopenh264, error: Unspecified error (-22)
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3066 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter
The generation is complete, the directory::outputs/mov2mov-videos\1682419443.mp4
Total progress: 90%|███████████████████████████████████████████████████████▊ | 5544/6160 [57:19<06:22, 1.61it/s]
Total progress: 90%|███████████████████████████████████████████████████████▊ | 5544/6160 [57:19<04:17, 2.39it/s]
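This usually means the openh264 DLL found by OpenCV's FFmpeg backend does not match the version it was built against (1.8.0 here); downloading openh264-1.8.0-win64.dll from the linked Cisco releases page and placing it next to python.exe is the usual fix. Alternatively, a writer that avoids H.264 entirely still works; a minimal sketch, assuming the mp4v codec is acceptable for your output:

import cv2
import numpy as np

# Fall back to the mp4v codec, which needs no external DLL.
writer = cv2.VideoWriter("out.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         30, (640, 480))
if not writer.isOpened():
    raise RuntimeError("VideoWriter failed to open; check codec support")
for _ in range(30):
    writer.write(np.zeros((480, 640, 3), dtype=np.uint8))  # dummy black frames
writer.release()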

Unable to join the QQ channel

I have beta-test access and the latest version of QQ, so why can't I join this channel? I can join other channels just fine.

...py310\lib\site-packages\gradio\processing_utils.py", line 334, in hash_file with open(file_path, "rb") as f: FileNotFoundError: [Errno 2] No such file or directory: ''

It works sometimes and not others. It was working a moment ago and then suddenly failed; at first the error appeared on the second image, later on the third.
It seems the error shows up when using a LoCon. After switching to a LoRA it went away, but adding the trigger word brought it back.

Traceback (most recent call last):
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\blocks.py", line 1078, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\blocks.py", line 1012, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\components.py", line 1991, in postprocess
y = self.make_temp_copy_if_needed(y)
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\processing_utils.py", line 362, in make_temp_copy_if_needed
temp_dir = self.hash_file(file_path)
File "E:\tools\stable\novelai-webui-aki-v3A\py310\lib\site-packages\gradio\processing_utils.py", line 334, in hash_file
with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: ''

python: 3.10.8  •  torch: 2.0.0+cu118  •  xformers: 0.0.17rc482  •  gradio: 3.23.0  

Bug: the output video becomes longer

I found that an unlucky frame-rate choice can make the output video longer than the source. It seems to be caused by the integer rounding here. (The screenshot of the code in question is omitted.)
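For illustration, here is how truncating the frame rate stretches the duration while the frame count stays fixed; the numbers are made up, not taken from the extension:

# Suppose the source has 250 frames at 25 fps (10 s) and the requested
# output rate is truncated from 23.976 to 23 by int().
src_frames = 250
out_fps = int(23.976)            # -> 23

duration = src_frames / out_fps  # 250 / 23 ≈ 10.87 s, longer than the 10 s input
print(duration)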

[libopenh264 @ 0000020e85fdf080] Incorrect library version loaded

Hello!

I get this error after the process finishes.

Start generating .mp4 file
[libopenh264 @ 0000020e85fdf080] Incorrect library version loaded
[ERROR:[email protected]] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (2822) open Could not open codec libopenh264, error: Unspecified error
[ERROR:[email protected]] global /build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (2839) open VIDEOIO/FFMPEG: Failed to initialize VideoWriter
The generation is complete, the directory::outputs/mov2mov-videos\1681738708.mp4
Traceback (most recent call last):
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1078, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1012, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 1991, in postprocess
y = self.make_temp_copy_if_needed(y)
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\processing_utils.py", line 362, in make_temp_copy_if_needed
temp_dir = self.hash_file(file_path)
File "E:\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\gradio\processing_utils.py", line 334, in hash_file
with open(file_path, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'outputs/mov2mov-videos\1681738708.mp4'

With the stable-diffusion-webui-aesthetic-gradients extension installed, an exception is reported, but it does not affect the final output

Loading preprocessor: hed
Error running process: C:\ai\novelai-webui-aki-v2\extensions\stable-diffusion-webui-aesthetic-gradients\scripts\aesthetic.py
Traceback (most recent call last):
File "C:\ai\novelai-webui-aki-v2\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "C:\ai\novelai-webui-aki-v2\extensions\stable-diffusion-webui-aesthetic-gradients\scripts\aesthetic.py", line 36, in process
aesthetic.set_aesthetic_params(p, float(aesthetic_lr), float(aesthetic_weight), int(aesthetic_steps), aesthetic_imgs, aesthetic_slerp, aesthetic_imgs_text, aesthetic_slerp_angle, aesthetic_text_negative)
ValueError: invalid literal for int() with base 10: 'C:\Users\ADMINI~1\AppData\Local\Temp\test2y000000-000013000004-000010m5r9itwp.mp4'

Output frames and video are both very blurry

Hello, when I use this extension the output comes out very blurry.
When I test with the same parameters in the img2img tab the result is sharp, but moving the identical parameters to mov2mov gives very blurry results; the output video shows little more than a vague silhouette. How can I fix this?

Question about the matting model

Could someone tell me where the model is downloaded from and where it should be stored locally?

Error indicating that the encoder cannot be found; how can I solve it?

[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:2991 open Could not find encoder for codec_id=27, error: Encoder not found
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3066 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter
[ERROR:[email protected]] global cap.cpp:595 open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.7.0) /io/opencv/modules/videoio/src/cap_images.cpp:267: error: (-215:Assertion failed) number < max_number in function 'icvExtractPattern'

I have confirmed that ffmpeg and OpenCV have been installed.
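A system-wide ffmpeg does not help here, because opencv-python bundles its own FFmpeg, and many builds ship without an H.264 encoder. A small diagnostic sketch to inspect what your OpenCV build actually supports (this only reports, it does not fix anything):

import cv2

info = cv2.getBuildInformation()
# Print the Video I/O section of the build report to see which
# backends and codecs this OpenCV build was compiled with.
for line in info.splitlines():
    if "FFMPEG" in line or "Video I/O" in line:
        print(line.strip())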

Errors when using Generate Movie Mode and ModNet

With Generate Movie Mode set to H.264, the following error is reported and video generation fails:
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:2991 open Could not find encoder for codec_id=27, error: Encoder not found
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3066 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter
[ERROR:[email protected]] global cap.cpp:595 open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.7.0) /io/opencv/modules/videoio/src/cap_images.cpp:267: error: (-215:Assertion failed) number < max_number in function 'icvExtractPattern'

With ModNet using the mobilenetv2_human_seg.ckpt model, the following error is reported and the model fails to load:
File "/stable-diffusion-webui/extensions/sd-webui-mov2mov-master/scripts/m2m_modnet.py", line 33, in get_model
modnet.load_state_dict(weights)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.backbone.model.features.0.0.weight", "

Could a PNG Info import be added to the UI?

Could the prompts, image settings, sampling method, and other data extracted by the PNG Info feature be imported directly into mov2mov?
The old version, which lived inside img2img, was actually more convenient; the new version has its own UI, so everything has to be pasted in field by field.

FileNotFoundError: [Errno 2] No such file or directory: ''

I am using a Linux server on the AutoDL cloud; after launching, I get the following error:

Start converting video, frame number:12
mov2mov_images_path:/root/autodl-tmp/stable-diffusion-webui/outputs/mov2mov-images
current_mov2mov_images_path:/root/autodl-tmp/stable-diffusion-webui/outputs/mov2mov-images/1677864863
The video conversion is completed, frames:12, images:139
Error completing request
Arguments: ('task(60n20ne74t12x2e)', 0, '', '', [], None, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', 0, True, '/tmp/km5__1opy2.mp4', 0, False, 10, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 0, 512, 512, False, False, True, True, True, False, False, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, False) {}
Traceback (most recent call last):
  File "/root/autodl-tmp/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/root/autodl-tmp/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/root/autodl-tmp/stable-diffusion-webui/extensions/mov2mov/scripts/main.py", line 19, in foo
    return hookfunc(*args, **kwargs)
  File "/root/autodl-tmp/stable-diffusion-webui/extensions/mov2mov/scripts/main.py", line 161, in mov2mov
    result = img2img(id_task, mode, prompt, negative_prompt, prompt_styles, init_img, sketch,
  File "/root/autodl-tmp/stable-diffusion-webui/modules/img2img.py", line 156, in img2img
    process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
  File "/root/autodl-tmp/stable-diffusion-webui/modules/img2img.py", line 22, in process_batch
    images = shared.listfiles(input_dir)
  File "/root/autodl-tmp/stable-diffusion-webui/modules/shared.py", line 669, in listfiles
    filenames = [os.path.join(dirname, x) for x in sorted(os.listdir(dirname)) if not x.startswith(".")]
FileNotFoundError: [Errno 2] No such file or directory: ''

Temporary workaround for the install.py startup error

The error comes from platform.system().lower(). Add import platform in install.py, on its own line before or after import launch; either works. This is only a temporary workaround; the proper fix is up to the maintainer.
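For reference, the patched top of install.py would look roughly like this (a sketch of the workaround above, not necessarily the maintainer's final fix):

import launch    # already present in install.py
import platform  # the missing import behind NameError: name 'platform' is not defined

plat = platform.system().lower()  # line 4, where the error was raised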

Error on startup

Error executing callback ui_tabs_callback for L:\AiWorkSpace\StableDiffusionWebUI\extensions\sd-webui-mov2mov\scripts\m2m_ui.py
Traceback (most recent call last):
File "L:\AiWorkSpace\StableDiffusionWebUI\modules\script_callbacks.py", line 125, in ui_tabs_callback
res += c.callback() or []
File "L:\AiWorkSpace\StableDiffusionWebUI\extensions\sd-webui-mov2mov\scripts\m2m_ui.py", line 323, in on_ui_tabs
generate_mov_mode,
UnboundLocalError: local variable 'generate_mov_mode' referenced before assignment

Could the job parameters be saved?

1. When a job finishes, its parameters are printed to the console. Could they also be saved automatically to a text file for later review?
2. Could the output parameters include the job's start time and elapsed time? That would make it easy to compare how long different settings take.

Thanks!
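Until something like this is built in, the parameters can be captured with a small wrapper around the job; everything below (the file name, the dict layout, the run_job callable) is illustrative, not part of mov2mov:

import json
import time

def log_job(params: dict, run_job, out_file="mov2mov_params.log"):
    # Record the start time, run the job, then append the parameters
    # and elapsed time as one JSON line for later comparison.
    started = time.strftime("%Y-%m-%d %H:%M:%S")
    t0 = time.time()
    run_job()  # the actual mov2mov generation would happen here
    record = {
        "started_at": started,
        "elapsed_seconds": round(time.time() - t0, 1),
        "params": params,
    }
    with open(out_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")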

Add two time parameters

Often we don't need to convert the whole video. Could you add start/end time parameters to control where in the video conversion begins and ends?
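In the meantime, the clip can be trimmed before uploading; a minimal sketch using an ffmpeg binary on PATH (paths and times are placeholders):

import subprocess

# Keep the window from 5 s to 12 s (-ss start, -t duration) without
# re-encoding (-c copy). Requires ffmpeg to be installed separately.
subprocess.run(["ffmpeg", "-ss", "5", "-i", "input.mp4",
                "-t", "7", "-c", "copy", "trimmed.mp4"], check=True)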

Out-of-memory error as soon as ControlNet is enabled

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.18 GiB (GPU 0; 22.38 GiB total capacity; 12.23 GiB already allocated; 3.00 GiB free; 18.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
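The message itself points at the knob: reserved memory (18.63 GiB) far exceeds allocated (12.23 GiB), so fragmentation is likely and max_split_size_mb may help. One way to set it; the value 512 is a starting point to tune, not a recommendation from the extension:

import os

# Must be set before the first CUDA allocation, e.g. at the very top of
# webui's launch script or exported as an environment variable beforehand.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"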

The generated file is always only a few KB

Everything runs normally, but the final video is only a few KB. Changing the resolution can make it work, though that is not a real fix: my source video is 1080x1920, and if I set the output to 540x960 I get a correctly scaled video.

But that resolution always fails: the generated images come out at 536x960 while the OpenCV writer is configured for 540x960, so the video can never be written. After modifying the video-generation code as follows, it succeeds.
import cv2
import numpy

def images_to_video(images, frames, mode, w, h, out_path):
    fourcc = cv2.VideoWriter_fourcc(*mode)
    # --- added code: start ---
    # Use the actual size of the first generated image instead of the
    # requested size, so the writer matches the frames exactly.
    if len(images) > 0:
        img = images[0]
        img_width, img_height = img.size
        w = img_width
        h = img_height
    # --- added code: end ---
    video = cv2.VideoWriter(out_path, fourcc, frames, (w, h))
    for i, image in enumerate(images):
        img = cv2.cvtColor(numpy.asarray(image), cv2.COLOR_RGB2BGR)
        video.write(img)
        print("Processed frame {} - Frame {}, w_{} x h_{}, out_path::{}".format(image, i, w, h, out_path))
    video.release()

    return out_path

Disconnects immediately when run on Colab

It used to run fine. This week, with no changes on my side, it gets stuck right after it starts parsing, and then the webui disconnects. Upgrading and updating did not help.

Error on startup

Error executing callback ui_tabs_callback for L:\stable-diffusion-webui_W1\extensions\sd-webui-mov2mov\scripts\m2m_ui.py
Traceback (most recent call last):
File "L:\stable-diffusion-webui_W1\modules\script_callbacks.py", line 125, in ui_tabs_callback
res += c.callback() or []
File "L:\stable-diffusion-webui_W1\extensions\sd-webui-mov2mov\scripts\m2m_ui.py", line 284, in on_ui_tabs
generate_mov_mode,
UnboundLocalError: local variable 'generate_mov_mode' referenced before assignment
