This is a PyTorch implementation of the NIPS 2017 paper "Universal Style Transfer via Feature Transforms".
Given a content image and an arbitrary style image, the program transfers the visual style characteristics extracted from the style image onto the content image, generating a stylized output.
The core architecture is a VGG19 convolutional autoencoder that performs a Whitening and Coloring Transform (WCT) on the content and style features in the bottleneck layer.
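The bottleneck operation can be illustrated with a minimal NumPy sketch of a Whitening and Coloring Transform (WCT). This is an illustrative implementation of the general technique, not the code from this repository; the function name and eigendecomposition-based formulation are assumptions.

```python
import numpy as np

def whiten_color_transform(content_feat, style_feat, eps=1e-5):
    """Illustrative WCT sketch (hypothetical helper, not this repo's code).

    content_feat, style_feat: arrays of shape (C, H*W), i.e. feature maps
    flattened over the spatial dimensions.
    """
    # Center the content features and whiten them so that their
    # channel covariance becomes (approximately) the identity.
    c_mean = content_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean
    c_cov = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    cw, cv = np.linalg.eigh(c_cov)
    whitened = cv @ np.diag(cw ** -0.5) @ cv.T @ fc

    # Center the style features and "color" the whitened content features
    # so that their covariance matches the style covariance.
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - s_mean
    s_cov = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    sw, sv = np.linalg.eigh(s_cov)
    colored = sv @ np.diag(sw ** 0.5) @ sv.T @ whitened

    # Re-add the style mean so first- and second-order statistics match.
    return colored + s_mean
```

The `eps` term regularizes the covariance matrices so the whitening step stays numerically stable on rank-deficient feature maps.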
- Needed Python packages can be installed with the `conda` package manager by running `conda env create -f environment.yaml`.
```
python main.py ARGS
```

Possible ARGS are:

- `--content CONTENT`: path of the content image (or a directory containing images) to be transformed;
- `--style STYLE`: path of the style image (or a directory containing images) to use;
- `--contentSize CONTENTSIZE`: reshape the content image to the specified maximum size (keeping the aspect ratio);
- `--styleSize STYLESIZE`: reshape the style image to the specified maximum size (keeping the aspect ratio);
- `--outDir OUTDIR`: path of the directory where stylized results will be saved (default is `outputs/`);
- `--alpha ALPHA`: hyperparameter balancing the blending between the original content features and the WCT-transformed features (default is `0.2`);
- `--no-cuda`: flag to enable CPU-only computation (default is `False`, i.e. GPU (CUDA) acceleration).
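The blending controlled by `--alpha` is a simple linear interpolation between the original content features and the stylized features. A minimal sketch (the function name is hypothetical, not part of this repository):

```python
import numpy as np

def blend_features(content_feat, wct_feat, alpha=0.2):
    """Linearly interpolate between content and WCT-transformed features.

    alpha=0 keeps the content features unchanged (no stylization);
    alpha=1 uses the fully stylized features.
    """
    return alpha * wct_feat + (1.0 - alpha) * content_feat
```

Higher `alpha` values produce stronger stylization at the cost of content fidelity.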
An example:

```
python main.py --content inputs/contents/in4.jpg --style inputs/styles/candy.jpg
```