This is a fork of fast-style-transfer with one additional script, `run_webcam.py`, which applies style models live to a webcam stream. See the README of the original repository for instructions on how to train your own models, apply them to images and movies, and for all of the original functionality of that repository.

To run the webcam script you will need:
- CUDA + CuDNN
- TensorFlow GPU-enabled
- OpenCV (tested on OpenCV 2.4, not the most recent release, but newer versions should presumably work as well)
Pre-trained models are available for Picasso, Hokusai, Kandinsky, Lichtenstein, Wu Guanzhong, Ibrahim el-Salahi, and Google Maps.
At the top of `run_webcam.py`, there are paths to model files and style images in the list `models`. They are not included in the repo because of space constraints. If you'd like to use the pre-trained models listed above, they may be downloaded from this shared folder. To train your own, refer to the original documentation.
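The exact structure of the `models` list depends on the script, but a minimal sketch might look like the following; the paths and field names here are illustrative assumptions, so edit them to match wherever you saved the downloaded checkpoints and style images:

```python
# Hypothetical sketch of the `models` list at the top of run_webcam.py.
# The paths and dict keys are assumptions, not the repo's actual values:
# each entry pairs a trained checkpoint with the style image it was trained on.
models = [
    {"name": "Hokusai", "ckpt": "models/hokusai.ckpt", "style": "styles/hokusai.jpg"},
    {"name": "Picasso", "ckpt": "models/picasso.ckpt", "style": "styles/picasso.jpg"},
]
```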
```
python run_webcam.py --width 360 --disp_width 800 --display_source 1
```
There are three arguments:

- `width` is the width in pixels of the image being restyled (the webcam capture is scaled down or up to this size).
- `disp_width` is the width in pixels of the image shown on screen; the restyled image is resized to this after being generated. Setting `disp_width` > `width` lets you run the model more quickly while displaying a bigger (though lower-quality) image.
- `display_source` controls whether the content image (webcam) and the corresponding style image are displayed alongside the output image (1 by default, i.e. True).
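A sketch of how these three flags can be parsed with `argparse` is below; the argument names match the command above, but the defaults and help strings are assumptions for illustration:

```python
# Illustrative argparse setup for the three flags described above.
# Defaults here are taken from the example command, not from the script itself.
import argparse

parser = argparse.ArgumentParser(description="Apply style models to a webcam stream")
parser.add_argument("--width", type=int, default=360,
                    help="width in pixels of the image being restyled")
parser.add_argument("--disp_width", type=int, default=800,
                    help="width in pixels of the image shown on screen")
parser.add_argument("--display_source", type=int, default=1,
                    help="1 to show the webcam and style image alongside the output, 0 to hide them")

# Parse the flags from the example command line.
args = parser.parse_args(["--width", "360", "--disp_width", "800", "--display_source", "1"])
print(args.width, args.disp_width, args.display_source)
```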
You can toggle between the different models by hitting the 'a' and 's' keys on your keyboard.
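The model-switching logic amounts to moving an index through the loaded models with wrap-around; a minimal sketch (the key handling inside the OpenCV loop is simplified, and the function name is hypothetical) could look like:

```python
# Hypothetical helper for cycling through loaded models with the 'a'/'s' keys.
# In the real script the key comes from cv2.waitKey() inside the display loop;
# here we just model the index arithmetic.
def next_model(idx, n_models, key):
    """Return the new model index after a keypress ('a' = previous, 's' = next)."""
    if key == ord('a'):
        return (idx - 1) % n_models  # wraps from 0 back to the last model
    if key == ord('s'):
        return (idx + 1) % n_models  # wraps from the last model back to 0
    return idx  # any other key leaves the selection unchanged

idx = next_model(0, 7, ord('s'))   # step forward to model 1
idx = next_model(idx, 7, ord('a'))  # step back to model 0
idx = next_model(idx, 7, ord('a'))  # wrap around to model 6
```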