Even though I'd like to use the docker approach, I can report that I was previously getting garbled images like the OP but can now get useful output (at least for the first 100 frames; somehow it stopped producing output images after that) using CUDA 9.2 and cudnn 7. What I did was to set

```
CUDNN_PATH="/usr/local/cuda/lib64/libcudnn.so.7"
TORCH_NVCC_FLAGS="-D__CUDA_NO_HALF_OPERATORS__"
```

and run torch's update.sh. Then I built the cudnn.torch bindings (from GitHub) on branch R7:

```
git clone https://github.com/soumith/cudnn.torch -b R7
cd cudnn.torch
luarocks make cudnn-scm-1.rockspec
```
from fast-artistic-videos.
I seem to have got this working. I used a clean install of Ubuntu 14.04 and CUDA 7.5. Aside from following the steps in README.md, I did the following:
- Restart after installation
- Ensure cudnn is version 5.0 (although the error messages make this clear anyway)
- After all torch installation is complete, run the update.sh script which appears in the torch directory.
I'm using the static binaries of DeepFlow and DeepMatching.
For reference, my previous attempt on a more powerful machine used Ubuntu 16.04, CUDA 9.2, cudnn 5.0. I had run torch/update.sh on this machine, and I still got poor results, similar to those in the first post on this issue.
I tested this code on the few frames suggested by Manuel just above my last post. I get properly stylized results.
To further analyse, you could run fast-neural-style on the extracted video frames. If there is a case where fast-neural-style produces a correct stylization but mine fails let me know.
As an example you could take the five video frames from here. Then run

```
stylizeVideo_*.sh example/marple8_%02d.ppm <path_to_video_model> [<path_to_image_model>]
```

(This works because `<path_to_video>` can also be an already extracted sequence, or anything else that can be used as input to ffmpeg.)
It's a work in progress, but here it is: https://github.com/positlabs/fast-artistic-videos-docker
I was able to get stylized videos from it last friday, but tried again today and it failed. I'll keep working on it.
In the past I often had issues with videos that have a special color format (10 bit etc.). This software and its libraries only work with normal, consumer-ready videos. You could try different videos from different sources.
It clearly has nothing to do with DeepFlow.
Unfortunately, I tried with a different video format (from an animation movie) but still get the same single-colored output files:
About DeepFlow, are you really sure? Because I did get an error message from the deepmatching static binary:

```
deepmatching-static: conv.cpp:710: void fastconv(float_image*, float_layers*, int, int, int, float, int, res_scale*): Assertion `res->res_map.pixels || !"error: ran out of memory before sgemm"' failed.
```

and at the end:

```
run-deepflow.sh: line 13: 12432 Aborted (core dumped) ./deepmatching-static $1 $2 -nt 0 -downscale $4
                          12433 Killed                | ./deepflow2-static $1 $2 $3 -match
```
If this issue were caused by DeepFlow, at least the first frame would have been stylized correctly. The first frame is generated without dependency on a previous frame or optical flow. In fact, for the first frame the algorithm is identical to fast-neural-style.
For what it's worth, we've had the same issue as @agilebean.
We tried many different scenarios:
- ffmpeg vs. avconv
- deepflow vs. flownet
- native resolution vs. reduced
- different input videos (generated by quicktime on mac, and camera from Android)
- gpu vs. cpu
@manuelruder, it would be great if you could include a simple video in this repository that should work, as a sanity check.
Having the same problem.
Tried multiple different containers, even sitting through the 800+ seconds per frame of CPU rendering.
Seconding the request for a tried-and-true sample to test on.
I too am seeing these kinds of results. I'm using ffmpeg, deepflow, half resolution. I would also like a known-good input video, and parameters to run, so I can verify everything is behaving.
@manuelruder Thanks for the example frames. These do not work on my installation. I see results which look very similar to the frames in the first post. Can you suggest any steps to work out what's wrong?
Yes, see my post above...
P.S. I've seen that a lot of people are reporting similar issues for fast-neural-style, see for example here. There it was suggested that a recent torch update (or a package update) caused this issue. Unfortunately, there are no official releases or even a simple changelog; instead, if you install torch you'll get whatever the current master is at that time. Therefore I have no idea what I would need to change in order to fix this. (I'm not actively using torch anymore; like probably most other people, I switched to a more recent framework.)
What framework are you using instead? What would it take to port the torch elements to the new framework?
I'm currently using pytorch; it's more actively developed, although there are also breaking changes from time to time. But at least it has proper versioning. There is existing fast-neural-style code in pytorch that one could use as a base.
Has anyone tried to set this up on AWS? I have been trying for a few days and can't get an instance up and working. I get the same problems and results as @agilebean. I have gone through all the troubleshooting other people have done here and on https://github.com/jcjohnson/fast-neural-style/issues. I am using Ubuntu 14.04, CUDA 7.5, cudnn 5.0 and ran bash update.sh in the torch directory like @AndrewGibb said, and that still gave me erroneous results. If you could share a working AMI, that would also be appreciated.
I am also seeing this issue. Attempted building on ubuntu 16 with various versions of cuda. Downgraded to cuda 7.5, which forced me into ubuntu 14. In the end, this may have been the wrong path because I initially got the exact same results regardless of lib versions.
In running through the debugging steps mentioned above, I found that I could get it to work with some elbow grease. The issue appears to be related to how ffmpeg is handling the video > ppm conversion. I manually split the frames into pngs, then tested using an input like %05d.png and it produces stylized frames (although the output is a single png). After sending the frames back through ffmpeg (png > mp4), I get something that works:
This is a little odd because the png > ppm conversion works, but not the mp4 > ppm. I wonder if there's some missing build flag in my ffmpeg version.
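A minimal sketch of that manual workaround, assuming illustrative paths and a hypothetical frame pattern (none of these names come from the repo): build the two ffmpeg command lines so the video goes mp4 -> png before stylization and png -> mp4 after it, skipping the direct mp4 -> ppm path that produced the broken frames.

```python
# Sketch of the manual workaround described above. All paths and the frame
# pattern are illustrative assumptions, not names taken from the repository.

def extract_cmd(video, pattern="frames/%05d.png"):
    # mp4 -> png: splitting to png manually avoided the broken mp4 -> ppm step
    return ["ffmpeg", "-i", video, pattern]

def reassemble_cmd(pattern="stylized/%05d.png", out="stylized.mp4", fps=25):
    # png -> mp4: -pix_fmt yuv420p keeps the output at a plain 8-bit format
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

print(" ".join(extract_cmd("input.mp4")))
print(" ".join(reassemble_cmd()))
```

The commands themselves are plain ffmpeg invocations; run them directly in the shell if you don't need to script the pipeline.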
For reference, I'm using the following lib versions:
- flownet2 docker modded for ubuntu 14 (`FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04`)
- ubuntu 14.04
- cudnn 5
- cuda 7.5
- latest torch
- ffmpeg from ppa:mc3man/trusty-media
Here's the ffmpeg build info, in case it helps track down the issue
```
ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.4)
configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --disable-ffserver --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libtheora --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc --enable-libzimg
libavutil      55. 78.100 / 55. 78.100
libavcodec     57.107.100 / 57.107.100
libavformat    57. 83.100 / 57. 83.100
libavdevice    57. 10.100 / 57. 10.100
libavfilter     6.107.100 /  6.107.100
libavresample   3.  7.  0 /  3.  7.  0
libswscale      4.  8.100 /  4.  8.100
libswresample   2.  9.100 /  2.  9.100
libpostproc    54.  7.100 / 54.  7.100
```
I have a dockerfile that I can publish once I have time to clean it up a bit.
This is what I observed with some videos having a higher bit depth (see my first post).
Converting to png and then to ppm could reduce the bit depth to 8 bit and this could be the reason why it worked for you.
However, people also reported this issue with fast-neural-style, where they found that instance norm was not compatible with a specific cuda, cudnn or torch version, and they didn't use ffmpeg. Also note that AndrewGibb reported that the example images I provide didn't work for him.
I think we have multiple distinct issues here.
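One quick way to triage which of these issues you're hitting is to check the input's bit depth first. The sketch below assumes you've read the pixel format name from ffprobe (e.g. `ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of csv=p=0 input.mp4`); the trailing `10le`/`12le`/`16le` naming convention is FFmpeg's, and the helper name is hypothetical.

```python
import re

def is_high_bit_depth(pix_fmt: str) -> bool:
    """Heuristic: FFmpeg names deeper-than-8-bit formats with a trailing
    10/12/14/16 plus an optional endianness suffix, e.g. yuv420p10le,
    gbrp16be, p010le. Plain 8-bit formats (yuv420p, yuv410p) don't match."""
    return re.search(r"(10|12|14|16)(le|be)?$", pix_fmt) is not None

assert not is_high_bit_depth("yuv420p")    # ordinary 8-bit consumer video
assert is_high_bit_depth("yuv420p10le")    # 10-bit input of the kind reported to break
```

If the check fires, re-encoding to an 8-bit format before running the pipeline is worth trying before digging into cuda/cudnn/torch versions.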
> I have a dockerfile that I can publish once I have time to clean it up a bit.

I'd love to see that dockerfile... on my initial attempt I could not build flownet using `FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04`.
My docker build is working now. The trick was to run torch's update.sh script AFTER all of the other dependencies were installed.
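For anyone reproducing that, a hypothetical sketch of the ordering (this is not the published Dockerfile, and the torch install path is an assumption):

```dockerfile
FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04

# ... install torch, the cudnn.torch bindings, ffmpeg, DeepFlow, and every
# other dependency here first ...

# Run torch's update.sh only as the final step, after everything else is in place.
RUN cd /root/torch && bash ./update.sh
```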
I see this issue with flownet, but not with deepflow.
I have found that by re-exporting my videos in .mov format with the PNG codec and 8-bit depth, I completely got rid of the issue.
```
# produce mov file with 8-bit depth
$FFMPEG -i $1 -vf scale=$resolution -crf 0 -c:v libx264 -preset veryslow $(unknown)/$(unknown).mov
# produce the ppm frames from the video
```
> Even though I'd like to use the docker approach I can report that I was previously getting garbled images like the OP but can now get useful output (at least the first 100 frames; somehow it stopped getting output images after that) using CUDA 9.2 and cudnn 7. What I did was to set `CUDNN_PATH="/usr/local/cuda/lib64/libcudnn.so.7"` and `TORCH_NVCC_FLAGS="-D__CUDA_NO_HALF_OPERATORS__"` and run torch's update.sh. Then I used cudnn.torch bindings (from github) with branch R7:
>
> ```
> git clone https://github.com/soumith/cudnn.torch -b R7
> cd cudnn.torch
> luarocks make cudnn-scm-1.rockspec
> ```
I can confirm that this solution works! I had the same issue with CUDA 8, cuDNN 7.1 and Ubuntu 16. Then I set up everything from scratch with CUDA 9.2 and cuDNN 7.6 and still had the same poor results until I updated torch as bafonso advised. I also had to install the 'cuDNN Library for Linux' in addition to the 'cuDNN Runtime Library for Ubuntu16.04 (Deb)' and 'cuDNN Developer Library for Ubuntu16.04 (Deb)', because otherwise I couldn't find /usr/local/cuda/lib64/libcudnn.so.7. Now I get proper results.