Right click and loop me!
DepthFlow.2023.12.19.mp4
Note: Yes, the only input to DepthFlow was the Original Image
Click to see the Original Image
Source: Wallhaven. All images remain property of their original owners.
Image → 2.5D Parallax Effect Video. High quality, user first.
Home Page: https://brokensrc.dev
License: GNU Affero General Public License v3.0
How do I open it after installing it on my system?
Is ShaderFlow also needed, or not?
Hi
Noticed worse quality when using a cached depth map. Example video below: the left was generated when there was no depth map yet; the right video uses the same cmd, but the depth map was loaded from cache. You can see jagged edges (ears, hands) compared to the first run.
My cmd:
depthflow input -i https://img.bricklink.com/ItemImage/MN/0/ani004.original.png -b main --ssaa 2.0 -q 100 --render --output ani004.mp4 --format mp4 --open
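One hedged guess at a cause worth ruling out (not confirmed from the DepthFlow source): if the cached depth map is stored as an 8-bit image, smooth depth gradients get quantized on reload, which can surface as jagged edges after warping. A minimal sketch of that round-trip loss:

```python
import numpy as np

# Simulate caching a float depth map through an 8-bit image file
depth = np.linspace(0.0, 1.0, 1000, dtype=np.float32)  # smooth gradient
cached = np.round(depth * 255) / 255                   # 8-bit round-trip
error = float(np.abs(depth - cached).max())            # worst-case loss

# error is bounded by half a quantization step: 1/510 ≈ 0.00196
```

A sub-0.2% depth error sounds small, but it turns a smooth gradient into 256 flat steps, and any shift proportional to depth then snaps along those steps.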
Hello!
I have one quick question for you, is it possible to change the video codec used when saving the effect to a video? Currently, it appears it is using the h264 codec, and I would like to use the png codec.
If there is no such feature, could you please point me to where in the code I can make this change?
Thank you!
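In case it helps while waiting for an answer: if the tool writes a standard video file via FFmpeg, one workaround is to re-encode the finished render with FFmpeg's png encoder afterwards. A self-contained sketch using a synthetic clip (note that avi and mov containers accept png streams; mp4 does not):

```shell
# Encode a synthetic 1-second test clip with FFmpeg's png video codec
ffmpeg -y -f lavfi -i testsrc=duration=1:size=64x64:rate=10 \
    -c:v png /tmp/demo_png.avi
```

Applied to a finished render this would look like `ffmpeg -i output.mp4 -c:v png output.avi`, with the filenames as placeholders.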
Hi, I am running DepthFlow on Mac Silicon, and currently whenever I run depthflow I get
zsh: segmentation fault depthflow
Any ideas?
My man. This seems like a fantastic project and I've been trying to get it running for the last 2 days.
It's been a constant roadblock of errors everywhere. Admittedly, not all of it comes down to your package (I'm on a headless server without admin rights), but some of the installation process is quite ridiculous.
Why do I need to install broken-source & every package you've created? Why are there sound libraries in a project that doesn't need sound? Why does installation take over 30 minutes?
I feel this could be as simple as "git clone depthflow" and then "depthflow --image X", with only the dependencies that are required for this project.
Or, stick your model on huggingface spaces/replicate. I know you may have a lot on your plate but the exposure you'd get by doing this might encourage some open source contributors to help out.
This is the first model of its kind on GitHub (that I could find) -- make getting started with it easy and you'll be the main repo for this stuff.
Hello,
Really nice little tool! Would it be possible to render the video directly from the CLI, using an image and a depth map?
Is it planned to add a CLI-only mode to this? It would be nice to have.
Recently it has been working fine.
It's looking fine since the last one, but I think I'm still missing some settings.
I used this command:
depthflow input --image (url | path) main --render -s 2
So I want to ask: is it possible to make an open-source website just like "Depthy"? That would be so much easier for everyone.
Hello,
Today I downloaded the latest version and tried to run the code, but the result is significantly worse than with the code from around 3 weeks ago.
Here is the command that I run with the new code: depthflow input --image ./image.png main -w 1080 -h 1920 -s 8 -f 30.0 -q 100 -r -o ./output.avi --format avi -f 30
Here is the command that I run with the old code: broken depthflow parallax --image image.png main --format avi -q 100 -s 8 -w 1080 -h 1920 -f 30.0 -r -o ./output.avi
The new and low-quality result (see the distortion) can be seen here:
https://streamable.com/ri72fk
The old result can be seen here. While it was not perfect, there was significantly less "stretching".
https://streamable.com/y05h7u
Here is the image, in case you want to test (it was too big to upload here):
https://we.tl/t-yXmBj7zOjZ
(ignore the image quality, it is something I quickly generated with stable diffusion to test the new version)
Hello, I would like to try this project, but as the title says, under the section "Grab the latest DepthFlow Release for your platform, run it" there are no releases available.
Also, are there any examples available of what exactly this does? Going by what's provided, it sounds like it should take an image, generate a depth map, and then animate it, but how, exactly?
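For what it's worth, the core idea of depth-based parallax can be sketched in a few lines of NumPy. This is not DepthFlow's actual implementation (which is a GPU shader), just a naive CPU illustration under the common convention that higher depth values mean nearer pixels: each pixel is shifted horizontally in proportion to its depth, and sweeping the shift over time fakes camera motion.

```python
import numpy as np

def parallax_shift(image: np.ndarray, depth: np.ndarray, dx: float) -> np.ndarray:
    """Shift pixels horizontally by depth; image (H, W, C), depth (H, W) in [0, 1]."""
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        # Nearer pixels (higher depth values) move further, like a real camera pan
        shifted = np.clip(xs + np.round(depth[y] * dx).astype(int), 0, w - 1)
        out[y, shifted] = image[y]
    return out
```

Rendering one frame per dx over a small sweep (e.g. a sine motion) and encoding the frames gives the 2.5D effect; the holes this naive warp leaves behind are what real implementations in-paint or hide.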
Hi,
First, thanks so much for making this; it looks like it's got a lot of potential. However, I'm unable to get it to run.
I'm getting an AttributeError and a "Binary doesn't exist or was not found on PATH" error, as shown in the following, when trying to run broken depthflow. I've tried on both my M1 Mac and Linux x86.
Thanks!
ERROR │ Binary doesn't exist or was not found on PATH (/Users/deepdey/Library/Caches/pypoetry/virtualenvs/depthflow-E2DdtqXj-py3.10/bin/main)
...
Users/deepdey/Media_Local/depthflow2/BrokenSource/Broken/main.py
)

# C++ projects
if self.is_cpp:
    log.error("C++ projects are not supported yet")

# Avoid reinstalling on future runs
reinstall = False

# Detect bad return status, reinstall virtualenv and retry once
if (status.returncode != 0) and (not reinstall):
    log.warning(f"Detected bad Return Status ({status.returncode}) for the Project ({self.name}) at ({self.path})")
    if self.is_python:
        log.warning(f"• Python Virtual Environment: ({venv})")
    log.warning(f"• Command: {tuple(status.args)}")

    # Prompt user for action
    import rich.prompt
    answer = rich.prompt.Prompt.ask(
        f"• Action: Run poetry (i)nstall, poetry (l)ock, (r)einstall venv, (e)xit or nothing (enter), then retry",
        choices=["r", "e", "p", "l", ""],

AttributeError:
'NoneType' object has no attribute 'returncode'
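The crash itself is consistent with `status` being None when `.returncode` is read, which happens when the command never spawned. A minimal sketch of the guard that would avoid it (hypothetical helper names, not the project's actual code):

```python
import subprocess

def safe_run(args):
    """Run a command; return the CompletedProcess, or None if the binary is missing."""
    try:
        return subprocess.run(args)
    except FileNotFoundError:  # binary not on PATH, as in the report above
        return None

status = safe_run(["some-binary-that-is-not-on-path"])
# Check for None before touching .returncode, avoiding the AttributeError
failed = (status is None) or (status.returncode != 0)
```

Here the missing virtualenv `bin/main` would make `safe_run` return None, and the None check turns the traceback into an ordinary failure branch.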
Thanks for this nice project.
Sadly, I always get this mountain rendered and did not manage to set my own input image. Using drag & drop onto the preview I can make it work, but then I get no rendered output file.
I tried using -i or --image, but this gives me an error telling me to check the help. The help page does not tell me how to set the input image.
$ broken depthflow -i DoA_DiamondValley.jpg
...
Usage: main [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
No help provided
...
I then tried with parallax:
$ broken depthflow parallax -i DoA_DiamondValley.jpg
Exception in thread Thread-2 (__parallax__):
Traceback (most recent call last):
File "/usr/lib64/python3.11/threading.py", line 1038, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.11/threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "/home/volker/dummy/BrokenSource/Projects/DepthFlow/DepthFlow/DepthFlow.py", line 40, in __parallax__
self.__load_depth__ = BrokenUtils.load_image(depth or self.mde(image, cache=cache))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/volker/dummy/BrokenSource/Projects/DepthFlow/DepthFlow/Modules/DepthFlowMDE.py", line 46, in __call__
image_hash = hashlib.md5(image.tobytes()).hexdigest()
^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'tobytes'
Is there something I missed, or am I doing something wrong?
I also do not get any file selection dialog opening up, like the GitHub page says; it just starts with the mountains.
I'm on OpenSUSE LEAP 15.5 x86_64 with most recent updates.
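The traceback above shows the same failure pattern: `image` is None (the load silently failed, likely because the path was never wired through) by the time `image.tobytes()` is hashed. A small sketch of a guarded cache-key helper (hypothetical names; Pillow assumed, matching the traceback):

```python
import hashlib
from PIL import Image

def depth_cache_key(image):
    """md5 key for a decoded image, or None when loading failed upstream."""
    if image is None:  # guard against "'NoneType' object has no attribute 'tobytes'"
        return None
    return hashlib.md5(image.tobytes()).hexdigest()

key = depth_cache_key(Image.new("RGB", (4, 4)))  # 32 hex chars for a real image
```

With a guard like this, a missing input would produce a clear "no image loaded" error instead of an AttributeError deep inside the depth-estimation cache.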
Hello,
Very nice tool! Would it be possible to support conda/pip installation?
I have a server but no graphical interface; I can only use bash. All the OSErrors when downloading the model give me a headache to work around. Please, can you release a version that can run directly in the terminal as soon as possible?
I'm trying to use the DepthFlow project, but it depends on ShaderFlow. After copying ShaderFlow into the projects, I found it depends on Brokens. However, I couldn't find Brokens under https://github.com/BrokenSource. The entire project cannot run properly. Also, I would like to ask if there is a complete plan for using DepthFlow?
This project is absolutely awesome. The inference works so fast. I am thinking of integrating it into the stable diffusion UI. Currently there is a similar plugin for the SD UI called https://github.com/thygate/stable-diffusion-webui-depthmap-script
But that one is painfully slow: first it creates a depth map, then a mesh, and then a video from the mesh. The mesh generation can take 2-3 minutes and only works on CPU (and once the mesh is generated, you have to wait again for the video generation). And the result is almost the same as your project's, with the clear difference that your project's results are much better in quality and speed, and it uses CUDA.
So, congratulations for your project!
David Martin Rius