
Neural artistic style transfer


Based on: Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, "A Neural Algorithm of Artistic Style", arXiv:1508.06576
See also: https://github.com/fchollet/keras/blob/master/examples/neural_style_transfer.py
See some examples on: https://www.bonaccorso.eu/2016/11/13/neural-artistic-style-transfer-experiments-with-keras/

Usage

There are three possible canvas setups:

  • Picture: The canvas is filled with the original picture
  • Style: The canvas is filled with the style image (resized to match picture dimensions)
  • Random from style: The canvas is filled with a random pattern generated from the style image (see the sketch below)
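For the "random from style" canvas, one plausible initialization (an illustrative sketch only, with hypothetical names; the repository may generate the pattern differently) is Gaussian noise matching the style image's per-channel colour statistics:

import numpy as np
from PIL import Image

def random_from_style(style_image_filepath, picture_size):
    # picture_size: (width, height) of the picture the canvas must match.
    # Resize the style image, then draw noise with the same per-channel mean
    # and standard deviation, so the optimization starts from a pattern that
    # already shares the style's colour distribution.
    style = np.asarray(Image.open(style_image_filepath).resize(picture_size),
                       dtype='float32')
    mean = style.mean(axis=(0, 1))
    std = style.std(axis=(0, 1))
    canvas = np.random.normal(loc=mean, scale=std, size=style.shape)
    return np.clip(canvas, 0.0, 255.0).astype('float32')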

Some usage examples (with both VGG16 and VGG19):

Picture and style over random:
canvas='random_from_style', alpha_style=1.0, alpha_picture=0.25, picture_layer='block4_conv1'
Style over picture:
canvas='picture', alpha_style=0.0025, alpha_picture=1.0, picture_layer='block4_conv1'
Picture over style:
canvas='style', alpha_style=0.001, alpha_picture=1.0, picture_layer='block5_conv1'

For a mix of style transfer and deepdream generation, see the examples below.

Code snippets

# Style transfer with the picture as canvas, optimized with L-BFGS-B
neural_styler = NeuralStyler(picture_image_filepath='img\\GB.jpg',
                             style_image_filepath='img\\Magritte.jpg',
                             destination_folder='\\destination_folder',
                             alpha_picture=0.4,
                             alpha_style=0.6,
                             verbose=True)

neural_styler.fit(canvas='picture', optimization_method='L-BFGS-B')

# Style transfer starting from a random canvas derived from the style image,
# optimized with the conjugate gradient (CG) method
neural_styler = NeuralStyler(picture_image_filepath='img\\GB.jpg',
                             style_image_filepath='img\\Magritte.jpg',
                             destination_folder='\\destination_folder',
                             alpha_picture=0.25,
                             alpha_style=1.0,
                             picture_layer='block4_conv1',
                             style_layers=('block1_conv1',
                                           'block2_conv1',
                                           'block3_conv1',
                                           'block4_conv1',
                                           'block5_conv1'))

neural_styler.fit(canvas='random_from_style', optimization_method='CG')

Examples

(With different settings and optimization algorithms)

  • Cezanne
  • Magritte
  • Dalì
  • Matisse
  • Picasso
  • Rembrandt
  • De Chirico
  • Mondrian
  • Van Gogh
  • Schiele

Mixing style transfer and deep dreams

I'm still working on some experiments based on a loss function that tries to maximize the L2 norm of the last convolutional block (layers 1 and 2). I've excluded those layers from the style_layers tuple and tuned the parameters to render a "dream" together with the styled image. You can try the following snippet:

# Dream loss function (K is keras.backend; convnet is the VGG16/VGG19 model)
dream_loss_function = -5.0*K.sum(K.square(convnet.get_layer('block5_conv1').output)) + \
                      -2.5*K.sum(K.square(convnet.get_layer('block5_conv2').output))

# Composite loss function
composite_loss_function = (self.alpha_picture * picture_loss_function) + \
                          (self.alpha_style * style_loss_function) + \
                          dream_loss_function
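
For reference, the style_loss_function combined above is built from the Gram matrices of the style-layer activations (the "gramian terms" discussed below). A minimal sketch of that loss, following the Gatys et al. formulation as used in the Keras neural_style_transfer example (not necessarily this repository's exact code):

from keras import backend as K

def gram_matrix(x):
    # x: one image's feature maps, shape (rows, cols, channels) with the
    # TensorFlow dimension ordering; flatten each channel and take the
    # channel-by-channel inner products
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    return K.dot(features, K.transpose(features))

def style_loss(style_features, canvas_features, rows, cols, channels):
    # Squared error between the Gram matrices of the style image and the
    # canvas, with the normalization used by Gatys et al.
    S = gram_matrix(style_features)
    C = gram_matrix(canvas_features)
    return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * ((rows * cols) ** 2))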

The composite loss function isn't "free" to maximize the norm as in the Keras DeepDream example, because the MSE on the Gram-matrix terms forces the filters to match the style; however, it's still possible to obtain interesting results. The following pictures show the famous Tübingen photo styled with a Braque painting and forced to render "random" elements (they resemble animal heads and eyes), as in a dream:

This example, instead, was created with VGG19, a Cezanne painting, and:

style_layers=('block1_conv1',
              'block2_conv1',
              'block3_conv1',
              'block4_conv1',
              'block5_conv1',
              'block5_conv2')
              
# Dream loss function
dream_loss_function = -10.0*K.sum(K.square(convnet.get_layer('block5_conv1').output)) + \
                      -5.0*K.sum(K.square(convnet.get_layer('block5_conv2').output))

(Original image by Manfred Brueckels - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6937538)

Requirements

  • Python 2.7-3.5
  • Keras
  • Theano / TensorFlow (see the backend note below)
  • SciPy
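
Keras picks its backend from ~/.keras/keras.json or from the KERAS_BACKEND environment variable; a quick way to switch between Theano and TensorFlow from a script is to set the variable before the first Keras import:

import os

# The backend is selected when Keras is imported, so this must run first
os.environ['KERAS_BACKEND'] = 'theano'  # or 'tensorflow'

import keras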
