
image-analogies's Introduction

neural image analogies

Image of arch, Image of Sugar Steve, Image of season transfer, Image of Trump

This is basically an implementation of the "Image Analogies" paper. In our case, we use feature maps from VGG16. The patch matching and blending are inspired by the method described in "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis". Effects similar to that paper can be achieved by turning off the analogy loss with --analogy-w=0 (or leaving it on!) and turning on the B/B' content weighting via the --b-content-w parameter. Also, instead of brute-force patch matching we use the PatchMatch algorithm to approximate the best patch matches. Brute-force matching can be re-enabled by setting --model=brute.

The initial code was adapted from the Keras "neural style transfer" example.

The example arch images are from the "Image Analogies" website. They have some other good examples from their own implementation which are worth a look. Their paper discusses the various applications of image analogies so you might want to take a look for inspiration.

Installation

This requires either TensorFlow or Theano. If you don't have a GPU, you'll want to use TensorFlow. GPU users may find Theano to be faster, at the expense of longer startup times. Here's the Theano GPU guide.

Here's how to configure the backend with Keras and set your default device (e.g. cpu, gpu0).
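For example, a minimal sketch (the exact keys depend on your Keras version): the backend is chosen in ~/.keras/keras.json, and the Theano device can be set per run with an environment variable.

~/.keras/keras.json:
{"backend": "theano", "floatx": "float32", "epsilon": 1e-07}

THEANO_FLAGS='device=gpu0,floatX=float32' make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

With the TensorFlow backend, GPU selection is typically done with CUDA_VISIBLE_DEVICES instead.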

To install via virtualenv run the following commands.

virtualenv venv
source venv/bin/activate
pip install neural-image-analogies

If you have trouble with the above method, follow these directions to install the latest Keras and Theano or TensorFlow.

The script make_image_analogy.py should now be on your path.

Before running this script, download the weights for the VGG16 model. This file contains only the convolutional layers of VGG16, which is about 10% of the full size. Original source of full weights. The script assumes the weights are in the current working directory; if you place them somewhere else, make sure to pass the --vgg-weights=<location-of-the-weights.h5> parameter or set the VGG_WEIGHT_PATH environment variable.
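For example, assuming the file is named vgg16_weights.h5 as in the release download, either of the following works:

make_image_analogy.py --vgg-weights=/path/to/vgg16_weights.h5 image-A image-A-prime image-B prefix_for_output

export VGG_WEIGHT_PATH=/path/to/vgg16_weights.h5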

Example script usage: make_image_analogy.py image-A image-A-prime image-B prefix_for_output

e.g.:

make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

The examples directory has a script, render_example.sh, which accepts an example name prefix and, optionally, the location of your VGG weights.

./render_example.sh arch /path/to/your/weights.h5

Currently, A and A' must be the same size; the same holds for B and B'. Output size is the same as image B, unless specified otherwise.
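If your A and A' are only slightly different sizes, one way to make them match is to resize one onto the other before running the script. A minimal sketch using Pillow (already a dependency); the filenames here are placeholders:

from PIL import Image

a = Image.open('my-A.jpg')
ap = Image.open('my-A-prime.jpg')
# Resize A' to exactly A's dimensions so the pair satisfies the same-size requirement.
ap.resize(a.size, Image.BICUBIC).save('my-A-prime-resized.jpg')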

It's too slow

If you're not using a GPU, use TensorFlow. My MacBook Pro can render a 512x512 image in approximately 12 minutes using TensorFlow and --mrf-w=0. Here are some other options which mostly trade quality for speed (a combined example follows the list).

  • If you're using Theano, enable OpenMP threading by setting the environment variables THEANO_FLAGS='openmp=1' OMP_NUM_THREADS=<cpu_num>. You can read more about multi-core support here.
  • set --mrf-w=0 to skip optimization of local coherence
  • use fewer feature layers by setting --mrf-layers=conv4_1 and/or --analogy-layers=conv4_1 (or other layers), which considers half as many feature layers as the default.
  • generate a smaller image by either using a smaller source Image B, or setting the --width or --height parameters.
  • ensure you're not using --model=brute which needs a powerful GPU
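Putting a few of these together (an illustrative command only; the values are arbitrary, and the THEANO_FLAGS/OMP variables only matter on the Theano backend):

THEANO_FLAGS='openmp=1' OMP_NUM_THREADS=4 make_image_analogy.py --mrf-w=0 --analogy-layers=conv4_1 --width=256 images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-fast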

I want it to look better

The default settings are somewhat lowered to give the average user a better chance at generating something on whatever computer they may have. If you have a powerful GPU, here are some options for nicer output (a combined example follows the list):

  • --model=brute turns on brute-force patch matching, which runs on the GPU. This is Theano-only (default=patchmatch)
  • --patch-size=3 this will allow for much nicer-looking details (default=1)
  • --mrf-layers=conv1_1,conv2_1,... add more layers to the mix (also analogy-layers and content-layers)
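Combined, an illustrative command for a machine with a strong GPU (the values are arbitrary, not recommended settings):

make_image_analogy.py --model=brute --patch-size=3 --mrf-layers=conv1_1,conv2_1,conv3_1,conv4_1 images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-hq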

Parameters

  • --width Sets image output max width
  • --height Sets image output max height
  • --scales Run at N different scales
  • --iters Number of iterations per scale
  • --min-scale Smallest scale to iterate
  • --mrf-w Weight for MRF loss between A' and B'
  • --analogy-w Weight for analogy loss
  • --b-content-w Weight for content loss between B and B'
  • --tv-w Weight for total variation loss
  • --vgg-weights Path to VGG16 weights
  • --a-scale-mode Method of scaling A and A' relative to B
    • 'match': force A to be the same size as B regardless of aspect ratio (former default)
    • 'ratio': apply scale imposed by width/height params on B to A (current default)
    • 'none': leave A/A' alone
  • --a-scale Additional scale factor for A and A'
  • --pool-mode Pooling style used by VGG
    • 'avg': average pooling - generally smoother results
    • 'max': max pooling - more noisy but maybe that's what you want (original default)
  • --contrast Adjusts the contrast of the output by removing the bottom x percentile and scaling by the (100 - x)th percentile (default: 0.02)
  • --output-full Output all intermediate images at full size regardless of actual scale
  • --analogy-layers Comma-separated list of layer names to be used for the analogy loss (default: "conv3_1,conv4_1")
  • --mrf-layers Comma-separated list of layer names to be used for the MRF loss (default: "conv3_1,conv4_1")
  • --content-layers Comma-separated list of layer names to be used for the content loss (default: "conv3_1,conv4_1")
  • --patch-size Patch size used for matching (default: 1)
  • --use-full-analogy match on all of the analogy patches, instead of combining them into one image (slower/more memory but maybe more accurate)
  • --model Selects the patch matching model ('patchmatch' or 'brute'). patchmatch is the default and requires less GPU memory, but is less accurate than brute.
  • --nstyle-w Weight for neural style loss between A' and B'
  • --nstyle-layers Comma-separated list of layer names to be used for the neural style loss

The analogy loss is the amount of influence of B -> A -> A' -> B'. It's a structure-preserving mapping of Image B into A' via A.

The MRF loss (or "local coherence") is the influence of B' -> A' -> B'. In the parlance of style transfer, this is the style loss which gives texture to the image.

The B/B' content loss is set to 0.0 by default. You can get effects similar to CNNMRF by turning this up and setting analogy weight to zero. Or leave the analogy loss on for some extra style guidance.
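Conceptually, these weights combine the individual loss terms into a single objective. The sketch below is only illustrative (the names are made up for clarity; the actual package assembles these as backend tensors), but it shows how the command-line flags relate to each other:

def combined_loss(losses, args):
    # Weighted sum of the loss terms controlled by the command-line flags.
    return (args.analogy_w * losses['analogy']
            + args.mrf_w * losses['mrf']
            + args.b_content_w * losses['b_content']
            + args.nstyle_w * losses['neural_style']
            + args.tv_w * losses['total_variation'])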

If you'd like to visualize only the analogy target to see what's happening, set the MRF and content losses to zero: --mrf-w=0 --b-content-w=0. This is also much faster, as the MRF loss is the slowest part of the algorithm.

License

The code for this implementation is provided under the MIT license.

The suggested VGG16 weights are originally from here and are licensed under http://creativecommons.org/licenses/by-nc/4.0/. Open a ticket if you have a suggestion for a more free-as-in-free-speech license.

The attributions for the example art can be found in examples/images/ATTRIBUTIONS.md

image-analogies's People

Contributors

awentzonline, ketothxupack, marcelkottmann, mcwhittemore, vonclites


image-analogies's Issues

Image Analogies not Importing theano_backend from Keras Correctly

Here's the error output from running just a basic make_image_analogy.py invocation with image-analogies:

Using gpu device 0: GeForce GTX 970 (CNMeM is enabled with initial size: 75.0% of memory, cuDNN 4007)
Traceback (most recent call last):
File "make_image_analogy.py", line 17, in <module>
args = image_analogy.argparser.parse_args()
File "build\bdist.win-amd64\egg\image_analogy\argparser.py", line 101, in parse_args
AttributeError: 'module' object has no attribute '_on_gpu'

I'm able to run a full 12 epoch keras 1.0.5 test on the theano backend without problems. I've tried adding a "--a-scale-mode match" which gets me past the strange module issue but it just crashes on the first pass with an attribute error of

Convolution2D has no attribute 'get_output'

Not really sure what is going on.

Cuda Dimension Mismatch

when running this command

python2.7 make_image_analogy.py ~/Documents/imganal/examples/images/arch-A.jpg ~/Documents/imganal/examples/images/arch-Ap.jpg ~/Documents/imganal/examples/images/arch-B.jpg ~/Documents/imganal/out/img

on the arch example it runs for one pass (0x0 through 0x4) and then I get the following trace:

Traceback (most recent call last):
File "make_image_analogy.py", line 25, in
image_analogy.main.main(args, model_class)
File "build/bdist.linux-x86_64/egg/image_analogy/main.py", line 69, in main
model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 23, in build
File "build/bdist.linux-x86_64/egg/image_analogy/models/analogy.py", line 22, in build_loss
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 51, in precompute_static_features
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 60, in get_features
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 384, in call
return self.function(*inputs)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in call
outputs = self.fn()
ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 2, destination=290, source=289
Apply node that caused the error: GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}(GpuAlloc{memset_0=True}.0, GpuElemwise{Composite{(i0 * ((i1 + i2) + Abs((i1 + i2))))}}[(0, 1)].0, Constant{1}, Constant{291}, Constant{1}, Constant{201})
Toposort index: 46
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), Scalar(int64), Scalar(int64), Scalar(int64), Scalar(int64)]
Inputs shapes: [(1, 64, 292, 202), (1, 64, 289, 200), (), (), (), ()]
Inputs strides: [(0, 58984, 202, 1), (0, 57800, 200, 1), (), (), (), ()]
Inputs values: ['not shown', 'not shown', 1, 291, 1, 201]
Outputs clients: [[GpuContiguous(GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}.0)]]

All my requirements are up to date/correct (or at least pip install -r requirements.txt says so). I've had this project installed for a while, so it might be that cruft from older versions is conflicting with this one (it's not in a venv or anything). I think it has something to do with the image heights and widths. If I run with an example where A is not the same dimensions as B, it crashes before the first pass. I vaguely remember this problem happening on an older version of this project but I forget how I fixed it. I'm not sure if it's me or the new version; any ideas?

Add examples for sugar skull

Could you please also add examples for generating the sugar skull analogy, and provide a brief explanation in the README of why three input images are required compared to the two input images for the arch example? Thank you!

SyntaxError: Missing parentheses in call to 'print'

Trying to run the example
make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch
the output is SyntaxError: Missing parentheses in call to 'print':
(venv) liudeMacBook-Pro:scripts liu$ make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch
Using Theano backend.
/Users/liu/Code/venv/lib/python3.4/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
Theano CPU mode detected. Forcing a-scale-mode to "match"
Using PatchMatch model
Traceback (most recent call last):
File "/Users/liu/Code/venv/bin/make_image_analogy.py", line 21, in
from image_analogy.models.nnf import NNFModel as model_class
File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/models/nnf.py", line 7, in
from image_analogy.losses.nnf import nnf_analogy_loss, NNFState, PatchMatcher
File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/losses/nnf.py", line 5, in
from .patch_matcher import PatchMatcher
File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/losses/patch_matcher.py", line 187
print "[congrid] dimensions error. "
^
SyntaxError: Missing parentheses in call to 'print'
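For reference, that is Python 2 print-statement syntax, which cannot be parsed by the Python 3.4 interpreter shown in the paths above. Under Python 3 the offending line would need parentheses, i.e.

print("[congrid] dimensions error. ")

or the script would have to be run in a Python 2 environment.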

a-scale parameter doesn't work with Theano

The --a-scale parameter works well with TensorFlow.
But with Theano, I get the following error (example with "--a-scale=1.2"):

Scale factor 0.25 "A" shape (1, 3, 540, 720) "B" shape (1, 3, 450, 600)
Building loss...
Precomputing static features...
Traceback (most recent call last):
File "/home/kassius/anaconda3/lib/python3.6/site-packages/theano/compile/function_module.py", line 903, in call
self.fn() if output_subset is None else
RuntimeError: Shape error. v->dimensions[2] = 540, a->dimesions[2 + 0] = 450

And more detail :

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "scripts/make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/main.py", line 167, in main
model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/models/nnf.py", line 17, in build
loss = self.build_loss(a_image, ap_image, b_image)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/models/nnf.py", line 55, in build_loss
all_a_features, all_ap_image_features, all_b_features = self.precompute_static_features(a_image, ap_image, b_image)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/models/base.py", line 53, in precompute_static_features
all_a_features = self.get_features(a_image, a_layers)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/models/base.py", line 62, in get_features
feature_outputs = f([x])
File "/home/kassius/anaconda3/lib/python3.6/site-packages/keras/backend/theano_backend.py", line 1388, in call
return self.function(*inputs)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/theano/compile/function_module.py", line 917, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "/home/kassius/anaconda3/lib/python3.6/site-packages/theano/gof/link.py", line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/six.py", line 692, in reraise
raise value.with_traceback(tb)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/theano/compile/function_module.py", line 903, in call
self.fn() if output_subset is None else
RuntimeError: Shape error. v->dimensions[2] = 540, a->dimesions[2 + 0] = 450
Apply node that caused the error: GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}(GpuAlloc{memset_0=True}.0, GpuElemwise{Add}[(0, 0)].0, Constant{1}, Constant{451}, Constant{1}, Constant{601})
Toposort index: 82
Inputs types: [GpuArrayType(float32, 4D), GpuArrayType(float32, 4D), Scalar(int64), Scalar(int64), Scalar(int64), Scalar(int64)]
Inputs shapes: [(1, 64, 452, 602), (1, 64, 540, 720), (), (), (), ()]
Inputs strides: [(69658624, 1088416, 2408, 4), (99532800, 1555200, 2880, 4), (), (), (), ()]
Inputs values: ['not shown', 'not shown', 1, 451, 1, 601]
Outputs clients: [[GpuContiguous(GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}.0)]]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "scripts/make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/main.py", line 115, in main
net = vgg16.get_model(img_width, img_height, weights_path=args.vgg_weights, pool_mode="avg")
File "/home/kassius/anaconda3/lib/python3.6/site-packages/image_analogy/vgg16.py", line 42, in get_model
model.add(ZeroPadding2D((1, 1)))
File "/home/kassius/anaconda3/lib/python3.6/site-packages/keras/engine/sequential.py", line 181, in add
output_tensor = layer(self.outputs[0])
File "/home/kassius/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in call
output = self.call(inputs, **kwargs)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/keras/layers/convolutional.py", line 2231, in call
data_format=self.data_format)
File "/home/kassius/anaconda3/lib/python3.6/site-packages/keras/backend/theano_backend.py", line 1193, in spatial_2d_padding
y = T.set_subtensor(output[indices], x)

Running without the "--a-scale" parameter works like a charm.
Changing "--a-scale-mode" doesn't help.
Any ideas?

couple of issues to support opencl

As you know, Theano works on OpenCL, but it's in beta.

First issue: this program forces CPU mode if not on CUDA.

python make_image_analogy.py images/arch-A.jpg images/arch-Ap.jpg images/arch-B.jpg /nothing
Using Theano backend.
Mapped name None to device opencl0:0: Tahiti
/home/deep/.local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.
warnings.warn("downsample module has been moved to the pool module.")
Theano CPU mode detected. Forcing a-scale-mode to "match"
Using PatchMatch model

I think this is because of the _on_gpu() method of keras/backend/theano_backend.py, which seems to assume OpenCL (and gpuarray) support doesn't exist.

Second problem: this is the error I get running the code:

Precomputing static features...
ERROR (theano.gof.opt): Optimization failure due to: local_error_convop
ERROR (theano.gof.opt): node: ConvOp{('imshp', (256, 16, 12)),('kshp', (3, 3)),('nkern', 512),('bsize', None),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', None),('unroll_kern', None),('unroll_patch', True),('imshp_logical', (256, 16, 12)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}(IncSubtensor{Set;::, ::, int64:int64:, int64:int64:}.0, HostFromGpu(gpuarray).0)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1772, in process_node
replacements = lopt.transform(node)
File "/home/deep/.local/lib/python2.7/site-packages/theano/sandbox/gpuarray/opt.py", line 141, in local_opt
new_op = maker(node, context_name)
File "/home/deep/.local/lib/python2.7/site-packages/theano/sandbox/gpuarray/opt.py", line 839, in local_error_convop
"""
AssertionError:
ConvOp does not work with the gpuarray backend.

Use the new convolution interface to have GPU convolution working:
theano.tensor.nnet.conv2d()

Traceback (most recent call last):
File "make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/home/deep/.local/lib/python2.7/site-packages/image_analogy/main.py", line 69, in main
model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
File "/home/deep/.local/lib/python2.7/site-packages/image_analogy/models/nnf.py", line 16, in build
loss = self.build_loss(a_image, ap_image, b_image)
File "/home/deep/.local/lib/python2.7/site-packages/image_analogy/models/nnf.py", line 54, in build_loss
all_a_features, all_ap_image_features, all_b_features = self.precompute_static_features(a_image, ap_image, b_image)
File "/home/deep/.local/lib/python2.7/site-packages/image_analogy/models/base.py", line 51, in precompute_static_features
all_a_features = self.get_features(a_image, a_layers)
File "/home/deep/.local/lib/python2.7/site-packages/image_analogy/models/base.py", line 59, in get_features
f = K.function([self.net_input], [self.get_layer_output(layer_name) for layer_name in layers])
File "/home/deep/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 388, in function
return Function(inputs, outputs, updates=updates)
File "/home/deep/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 380, in init
allow_input_downcast=True, *_kwargs)
File "/home/deep/.local/lib/python2.7/site-packages/theano/compile/function.py", line 320, in function
output_keys=output_keys)
File "/home/deep/.local/lib/python2.7/site-packages/theano/compile/pfunc.py", line 479, in pfunc
output_keys=output_keys)
File "/home/deep/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1776, in orig_function
output_keys=output_keys).create(
File "/home/deep/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1456, in init
optimizer_profile = optimizer(fgraph)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 101, in call
return self.optimize(fgraph)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, *_kwargs)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 89, in optimize
ret = self.apply(fgraph, _args, *_kwargs)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 89, in optimize
ret = self.apply(fgraph, _args, *_kwargs)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 2196, in apply
lopt_change = self.process_node(fgraph, node, lopt)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1777, in process_node
lopt, node)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1673, in warn_inplace
return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node)
File "/home/deep/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1659, in warn
raise exc
AssertionError:
ConvOp does not work with the gpuarray backend.

Use the new convolution interface to have GPU convolution working:
theano.tensor.nnet.conv2d()

I think using conv2d instead of ConvOp could fix it, but I really have no idea how much work it is. If you ever decide to make it work on OpenCL, I will happily write a readme here or on Reddit on how to set it up. For now I'll try to figure out how to run with TensorFlow!

Converting images causes unknown bus error.

I got all dependencies installed and I finally got some things to run; however, I am getting an error when trying to run two really small images against each other on a Raspberry Pi 3 B+. I'm guessing it doesn't have enough memory to do this, or I'm missing something.

Building loss...
WARNING:tensorflow:From /home/pi/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:460: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object.
Precomputing static features...
Building and combining losses...
/home/pi/.local/lib/python2.7/site-packages/sklearn/feature_extraction/image.py:287: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  indexing_strides = arr[slices].strides
Start of iteration 0 x 0
Bus error

hours to compute on decent gpu, is everything working ok?

I have CUDA on a 1070 with cuDNN.

I used a patch size of 3 and --model=brute.

A 700px square took 7 hours (and 20 minutes).

I'm 100% sure the GPU was being used.

I noticed that the "static feature computation" took a very long time and was likely done on the CPU (judging by the GPU's memory usage). Iteration 2x0 also took very long; the others were a lot faster.

Is this how it is supposed to be? The result is stunning, so I'm OK with that... just want to be sure.


ValueError: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)

Hi, I got the following message running the command. It seems to be a weight shape problem; any idea on how to solve it? Thanks:

Traceback (most recent call last):
File "/usr/local/bin/make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/usr/local/lib/python2.7/dist-packages/image_analogy/main.py", line 69, in main
net = vgg16.get_model(img_width, img_height, weights_path=args.vgg_weights, pool_mode=args.pool_mode)
File "/usr/local/lib/python2.7/dist-packages/image_analogy/vgg16.py", line 89, in get_model
layer.set_weights(weights)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 1154, in set_weights
'provided weight shape ' + str(w.shape))
ValueError: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)

getting errors installing

I should have all dependencies installed, with Theano enabled for OpenCL.

running

pip install -r requirements.txt
I get

pip install -r requirements.txt
Requirement already satisfied (use --upgrade to upgrade): Cython==0.23.4 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 1))
Collecting h5py==2.5.0 (from -r requirements.txt (line 2))
  Using cached h5py-2.5.0.tar.gz
Collecting Keras==0.3.2 (from -r requirements.txt (line 3))
Requirement already satisfied (use --upgrade to upgrade): numpy==1.10.4 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 4))
Collecting Pillow==3.1.1 (from -r requirements.txt (line 5))
Collecting PyYAML==3.11 (from -r requirements.txt (line 6))
Collecting scipy==0.17.0 (from -r requirements.txt (line 7))
  Using cached scipy-0.17.0.tar.gz
Requirement already satisfied (use --upgrade to upgrade): six==1.10.0 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 8))
Obtaining Theano from git+git://github.com/Theano/Theano.git@954c3816a40de172c28124017a25387f3bf551b2#egg=Theano (from -r requirements.txt (line 9))
  Skipping because already up-to-date.
Building wheels for collected packages: h5py, scipy
  Running setup.py bdist_wheel for h5py ... error
  Complete output from command /home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpImCcmFpip-wheel- --python-tag cp27:
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-2.7
  creating build/lib.linux-x86_64-2.7/h5py
  copying h5py/highlevel.py -> build/lib.linux-x86_64-2.7/h5py
  copying h5py/__init__.py -> build/lib.linux-x86_64-2.7/h5py
  copying h5py/ipy_completer.py -> build/lib.linux-x86_64-2.7/h5py
  copying h5py/version.py -> build/lib.linux-x86_64-2.7/h5py
  creating build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/base.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/selections.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/group.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/dims.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/files.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  copying h5py/_hl/filters.py -> build/lib.linux-x86_64-2.7/h5py/_hl
  creating build/lib.linux-x86_64-2.7/h5py/tests
  copying h5py/tests/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests
  copying h5py/tests/common.py -> build/lib.linux-x86_64-2.7/h5py/tests
  creating build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_dataset.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_h5.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_h5p.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_h5f.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_selections.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_objects.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_slicing.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_base.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_h5t.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_datatype.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/common.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_attrs.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  copying h5py/tests/old/test_group.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
  creating build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
  running build_ext
  Autodetection skipped [libhdf5.so: cannot open shared object file: No such file or directory]
  ********************************************************************************
                         Summary of the h5py configuration

      Path to HDF5: None
      HDF5 Version: '1.8.4'
       MPI Enabled: False
  Rebuild Required: False

  ********************************************************************************
  Executing api_gen rebuild of defs
  Executing cythonize()
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/defs.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_errors.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_objects.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_proxy.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5fd.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5z.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5i.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5r.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/utils.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_conv.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5t.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5s.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5p.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5d.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5a.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5f.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5g.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5l.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5o.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5ds.pyx because it changed.
  Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5ac.pyx because it changed.
  [ 1/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_conv.pyx
  [ 2/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_errors.pyx
  [ 3/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_objects.pyx
  [ 4/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_proxy.pyx
  [ 5/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/defs.pyx
  [ 6/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5.pyx
  [ 7/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5a.pyx
  [ 8/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5ac.pyx
  [ 9/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5d.pyx
  [10/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5ds.pyx
  [11/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5f.pyx
  [12/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5fd.pyx
  [13/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5g.pyx
  [14/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5i.pyx
  [15/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5l.pyx
  [16/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5o.pyx
  [17/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5p.pyx
  [18/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5r.pyx
  [19/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5s.pyx
  [20/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5t.pyx
  [21/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5z.pyx
  [22/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/utils.pyx
  building 'h5py.defs' extension
  creating build/temp.linux-x86_64-2.7
  creating build/temp.linux-x86_64-2.7/tmp
  creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL
  creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py
  creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py
  x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DH5_USE_16_API -I/tmp/pip-build-g7OTWL/h5py/lzf -I/opt/local/include -I/usr/local/include -I/home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c /tmp/pip-build-g7OTWL/h5py/h5py/defs.c -o build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py/defs.o
  In file included from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1781:0,
                   from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                   from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                   from /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:26,
                   from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:
  /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
   #warning "Using deprecated NumPy API, disable it by " \
    ^
  In file included from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:0:
  /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: File o directory non esistente
  compilation terminated.
  error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

  ----------------------------------------
  Failed building wheel for h5py
  Running setup.py clean for h5py
  Running setup.py bdist_wheel for scipy ... done
  Stored in directory: /home/alex/.cache/pip/wheels/76/aa/e2/031ee833b4abfd33d8620e4bc36f8178b95cfcf36ec550a6b9
Successfully built scipy
Failed to build h5py
Installing collected packages: h5py, scipy, Theano, PyYAML, Keras, Pillow
  Running setup.py install for h5py ... error
    Complete output from command /home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-x5IHF0-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/alex/image-analogies/venv/include/site/python2.7/h5py:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-2.7
    creating build/lib.linux-x86_64-2.7/h5py
    copying h5py/highlevel.py -> build/lib.linux-x86_64-2.7/h5py
    copying h5py/__init__.py -> build/lib.linux-x86_64-2.7/h5py
    copying h5py/ipy_completer.py -> build/lib.linux-x86_64-2.7/h5py
    copying h5py/version.py -> build/lib.linux-x86_64-2.7/h5py
    creating build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/base.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/selections.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/group.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/dims.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/files.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    copying h5py/_hl/filters.py -> build/lib.linux-x86_64-2.7/h5py/_hl
    creating build/lib.linux-x86_64-2.7/h5py/tests
    copying h5py/tests/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests
    copying h5py/tests/common.py -> build/lib.linux-x86_64-2.7/h5py/tests
    creating build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_dataset.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5p.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5f.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_selections.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_objects.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_slicing.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_base.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5t.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_datatype.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/common.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_attrs.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    copying h5py/tests/old/test_group.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
    creating build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
    running build_ext
    Autodetection skipped [libhdf5.so: cannot open shared object file: No such file or directory]
    ********************************************************************************
                           Summary of the h5py configuration

        Path to HDF5: None
        HDF5 Version: '1.8.4'
         MPI Enabled: False
    Rebuild Required: False

    ********************************************************************************
    Executing cythonize()
    building 'h5py.defs' extension
    creating build/temp.linux-x86_64-2.7
    creating build/temp.linux-x86_64-2.7/tmp
    creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL
    creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py
    creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DH5_USE_16_API -I/tmp/pip-build-g7OTWL/h5py/lzf -I/opt/local/include -I/usr/local/include -I/home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c /tmp/pip-build-g7OTWL/h5py/h5py/defs.c -o build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py/defs.o
    In file included from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1781:0,
                     from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                     from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                     from /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:26,
                     from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:
    /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
     #warning "Using deprecated NumPy API, disable it by " \
      ^
    In file included from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:0:
    /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: File o directory non esistente
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ----------------------------------------
Command "/home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-x5IHF0-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/alex/image-analogies/venv/include/site/python2.7/h5py" failed with error code 1 in /tmp/pip-build-g7OTWL/h5py/

I guess it has something to do with some deprecated APIs or a missing file, but I really don't know what I should do.

Any help?

new version error with opencl

I installed via pip.
Running the simple command make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

I get this error.


Using Theano backend.
Traceback (most recent call last):
  File "/usr/local/bin/make_image_analogy.py", line 11, in <module>
    import image_analogy.argparser
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/argparser.py", line 4, in <module>
    from keras import backend as K
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/__init__.py", line 46, in <module>
    from .theano_backend import *
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 3, in <module>
    from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/rng_mrg.py", line 30, in <module>
    from theano.sandbox.gpuarray.basic_ops import GpuKernelBase, Kernel
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/gpuarray/__init__.py", line 20, in <module>
    import pygpu
  File "/usr/local/lib/python2.7/dist-packages/pygpu-0.2.1-py2.7-linux-x86_64.egg/pygpu/__init__.py", line 7, in <module>
    from . import gpuarray, elemwise, reduction
  File "__init__.pxd", line 155, in init pygpu.gpuarray (pygpu/gpuarray.c:31070)
ValueError: numpy.dtype has the wrong size, try recompiling

I tried reinstalling numpy and pandas (whatever it is) as Stack Overflow suggested, but no luck so far. I don't think it's relevant, but I have an AMD card, with OpenCL from the AMD website and from the NVIDIA website too (cmake wasn't finding AMD's).

Everything worked fine before; did I corrupt something?

relevant tech or idea

http://arxiv.org/abs/1511.06421

Many tasks in computer vision can be cast as a "label changing" problem, where the goal is to make a semantic change to the appearance of an image or some subject in an image in order to alter the class membership. Although successful task-specific methods have been developed for some label changing applications, to date no general purpose method exists. Motivated by this we propose deep manifold traversal, a method that addresses the problem in its most general form: it first approximates the manifold of natural images then morphs a test image along a traversal path away from a source class and towards a target class while staying near the manifold throughout. The resulting algorithm is surprisingly effective and versatile. It is completely data driven, requiring only an example set of images from the desired source and target domains. We demonstrate deep manifold traversal on highly diverse label changing tasks: changing an individual's appearance (age and hair color), changing the season of an outdoor image, and transforming a city skyline towards nighttime.

1. In that paper, the authors of the new r3 version of 'Deep Manifold Traversal' say they will open-source the code on GitHub, but it has not been released yet.

2. DCGAN can do arithmetic on faces: https://github.com/Newmu/dcgan_code; demo video:
https://plus.google.com/+AndersBoesenLindboLarsen/posts/ffvSc3q82Dw

Mismatch of Images

I get this error when I run the app with images I provide. I checked with numpy/cv2 and they have the same shape, so I was wondering where this error is coming from.

Using Theano backend.
/usr/local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
Theano CPU mode detected. Forcing a-scale-mode to "match"
Using PatchMatch model
Scale factor 0.25 "A" shape (1, 4, 132, 186) "B" shape (1, 3, 132, 186)
Building loss...
Precomputing static features...
Traceback (most recent call last):
File "/usr/local/bin/make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/usr/local/lib/python2.7/site-packages/image_analogy/main.py", line 69, in main
model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
File "/usr/local/lib/python2.7/site-packages/image_analogy/models/nnf.py", line 17, in build
loss = self.build_loss(a_image, ap_image, b_image)
File "/usr/local/lib/python2.7/site-packages/image_analogy/models/nnf.py", line 55, in build_loss
all_a_features, all_ap_image_features, all_b_features = self.precompute_static_features(a_image, ap_image, b_image)
File "/usr/local/lib/python2.7/site-packages/image_analogy/models/base.py", line 53, in precompute_static_features
all_a_features = self.get_features(a_image, a_layers)
File "/usr/local/lib/python2.7/site-packages/image_analogy/models/base.py", line 62, in get_features
feature_outputs = f([x])
File "/usr/local/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 384, in call
return self.function(*inputs)
File "/usr/local/lib/python2.7/site-packages/theano/compile/function_module.py", line 871, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/site-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/site-packages/theano/compile/function_module.py", line 859, in call
outputs = self.fn()
ValueError: The hardcoded shape for the image stack size (3) isn't the run time shape (4).
Apply node that caused the error: ConvOp{('imshp', (3, 134, 188)),('kshp', (3, 3)),('nkern', 64),('bsize', None),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', None),('unroll_kern', None),('unroll_patch', True),('imshp_logical', (3, 134, 188)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}(IncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}.0, <TensorType(float32, 4D)>)
Toposort index: 26
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(1, 4, 134, 188), (64, 3, 3, 3)]
Inputs strides: [(403072, 100768, 752, 4), (108, 36, 12, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{Composite{(i0 * (Abs((i1 + i2)) + i1 + i2))}}[(0, 1)](TensorConstant{%281, 1, 1, 1%29 of 0.5}, ConvOp{%28'imshp', %283, 134, 188%29%29,%28'kshp', %283, 3%29%29,%28'nkern', 64%29,%28'bsize', None%29,%28'dx', 1%29,%28'dy', 1%29,%28'out_mode',),('unroll_batch', None),('unroll_kern', None),('unroll_patch', True),('imshp_logical', (3, 134, 188)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}.0, Reshape{4}.0)]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "/usr/local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 496, in get_output
X = self.get_input(train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 312, in get_output
X = self.get_input(train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 763, in get_output
X = self.get_input(train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 317, in get_output
filter_shape=self.W_shape)
File "/usr/local/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 624, in conv2d
filter_shape=filter_shape)

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Error while running example ("Cannot convert %s to TensorType" % str_x, type(x))

Hello guys, I am trying to run the examples and I keep getting this error.

File "/home/cyb/image-analogies/venv/local/lib/python2.7/site-packages/theano/tensor/basic.py", line 208, in as_tensor_variable
raise AsTensorError("Cannot convert %s to TensorType" % str_x, type(x))
theano.tensor.var.AsTensorError: ('Cannot convert Tensor("ExpandDims:0", shape=(1, 256, 32, 22), dtype=float32) to TensorType', <class 'tensorflow.python.framework.ops.Tensor'>)

Would you have any clue how to fix that? Thanks a lot!

Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)

Hi, when I run this code I get the following error:

f = h5py.File(weights_path)
for k in range(f.attrs['nb_layers']):
if k >= len(model.layers):
# we don't look at the last (fully-connected) layers in the savefile
break
g = f['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]

model.layers[k].set_weights(weights)

f.close()

ValueError: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)

I tried this code with both Theano and TensorFlow but got the same error. I also tried 'convert_kernel' but that didn't work either. Any help, please?
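For reference, (64, 3, 3, 3) is the Theano kernel ordering (out, in, rows, cols), while (3, 3, 3, 64) is the TensorFlow ordering (rows, cols, in, out). Below is a rough sketch of how the loop above could transpose 4D kernels before set_weights. This is an assumption rather than a confirmed fix, and depending on the Keras version the kernels may additionally need flipping, which is what the convert_kernel utility mentioned above is for:

import numpy as np

weights = [g['param_{}'.format(p)][...] for p in range(g.attrs['nb_params'])]
if weights and weights[0].ndim == 4:
    # Theano order (out, in, rows, cols) -> TensorFlow order (rows, cols, in, out)
    weights[0] = np.transpose(weights[0], (2, 3, 1, 0))
model.layers[k].set_weights(weights)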

Add support for video

Hi, do you think it would be hard to implement DeepFlow/DeepMatching to process multiple frames? Let me know whether it seems too difficult and I can have a look. I'm interested in making videos with it.

What is the significance of the blur in the skeleton on Example 2?

I refer to this example:

In this example, image A is a slightly blurred version of image A'. What is the significance of this for the NN? Why not just use the non-blurred version of the skeleton face for both A and A'? Does the amount of blur make a difference?

My own examples

Test 1

In my own results I tested going from a photo (A) -> a watercolor of the photo (A'). Then I used another similar photo (B) to generate a watercolor for that photo (B'). This approach produces pretty unsatisfactory results.

A: (image)

A': (image)

B: (image)

B': (image)

Test 2

Following the skeleton method, I used a blurred watercolor of a photo (A) -> the watercolor of the photo (A'). Then I used a similar photo (B) to generate a watercolor for that photo (B'), which produces much better results. I don't know why this method works better than the former; it is somewhat non-intuitive to me.

A: (image)

A': (image)

B: (image)

B': (image)

Unable to open file: name = 'vgg16_weights.h5', errno = 17, error message = 'file exists'

I am trying to run image-analogies on the Tensorflow docker image. This docker image already has tensor flow and python installed.

I ran pip install neural-image-analogies and then curl https://github.com/awentzonline/image-analogies/releases/download/v0.0.5/vgg16_weights.h5 > vgg16_weights.h5 so I have the vgg16 weights file in my working directory.

When I try to run the image-analogies script, I get an error that the weights file cannot be created because it already exists:

root@62081df76460:/share/training# make_image_analogy.py training.001.jpeg training.002.jpeg training.040.jpeg out/door
Using TensorFlow backend.
Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
Using PatchMatch model
Scale factor 0.25 "A" shape (1, 3, 50, 50) "B" shape (1, 3, 50, 50)
Traceback (most recent call last):
  File "/usr/local/bin/make_image_analogy.py", line 27, in <module>
    image_analogy.main.main(args, model_class)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/main.py", line 69, in main
    net = vgg16.get_model(img_width, img_height, weights_path=args.vgg_weights, pool_mode=args.pool_mode)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/vgg16.py", line 79, in get_model
    f = h5py.File(weights_path)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 272, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 117, in make_fid
    fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2684)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2642)
  File "h5py/h5f.pyx", line 96, in h5py.h5f.create (/tmp/pip-4rPeHA-build/h5py/h5f.c:2097)
IOError: Unable to create file (Unable to open file: name = 'vgg16_weights.h5', errno = 17, error message = 'file exists', flags = 15, o_flags = c2)

I also tried putting the weights in another directory and using the VGG_WEIGHT_PATH variable, but got the same error. Any help would be appreciated.

Thanks
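
One thing worth ruling out (an assumption, not a confirmed diagnosis): curl does not follow redirects by default and GitHub release URLs redirect, so the saved vgg16_weights.h5 may actually contain an HTML redirect page rather than HDF5 data. In that case h5py cannot open it for reading, falls back to creating it exclusively, and reports the misleading "file exists" error. A quick check:

import h5py

# False means the download is not a valid HDF5 file and should be re-fetched,
# e.g. with curl -L, which follows redirects.
print(h5py.is_hdf5('vgg16_weights.h5'))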

Computation stopped after iteration 2 x 4

Hi

I ran images-analogies.py using one of the basic example images (just resized them to 256x256).

(image: image-analogy-explanation)

The last iteration I get looks really far from your result. It stops at iteration 2 x 4. I guess it should run a lot more iterations to look nice, but I can't set it up to go for longer. I tried changing these values and running it again, but it doesn't make any difference.

--patch-size=3 this will allow for much nicer-looking details (default=1)
--mrf-layers=conv1_1,conv2_1,... add more layers to the mix (also analogy-layers and content-layers)

What is the proper way to increase the number of iterations?
(image: arch_at_iteration_2_4)

Thanks!

AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'

Hi,

I am using Keras with the TensorFlow backend to build and run an autoencoder. I am getting this error when I use the autoencoder to predict:

AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'

(screenshot from 2019-02-08, 5:38 PM)

I have already made sure I am running Python 3.7 in my notebook. I created a virtual environment and used pip to install all my packages, and they are all properly imported, but the error is still there. Can you please help?

ValueError: Function has keyword-only arguments or annotations,

Hi,
I'm getting the following error and am not sure how to fix it. Thanks for any help.
Paul

make_image_analogy.py input/img1b.jpg input/img1a.jpg input/IMG_0006_12.jpg output/img2.jpg --vgg-weights=vgg16_weights.h5 --height=500 --a-scale-mode=match --patch-size=5 --scales=5 --mrf-w=0.2

Using TensorFlow backend.
Traceback (most recent call last):
File "/dscrhome/pl39/anaconda3/bin/make_image_analogy.py", line 12, in
import image_analogy.argparser
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/image_analogy/argparser.py", line 4, in
from keras import backend as K
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/keras/init.py", line 2, in
from . import backend
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/keras/backend/init.py", line 69, in
from .tensorflow_backend import *
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 1, in
import tensorflow as tf
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/init.py", line 23, in
from tensorflow.python import *
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/python/init.py", line 65, in
import tensorflow.contrib as contrib
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/init.py", line 30, in
from tensorflow.contrib import learn
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/init.py", line 72, in
from tensorflow.contrib.learn.python.learn import *
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/init.py", line 23, in
from tensorflow.contrib.learn.python.learn import *
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/init.py", line 26, in
from tensorflow.contrib.learn.python.learn import estimators
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/init.py", line 23, in
from tensorflow.contrib.learn.python.learn.estimators.autoencoder import TensorFlowDNNAutoencoder
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/autoencoder.py", line 25, in
from tensorflow.contrib.learn.python.learn.estimators.base import TensorFlowBaseTransformer
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/base.py", line 34, in
from tensorflow.contrib.learn.python.learn.estimators import estimator
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 39, in
from tensorflow.contrib.learn.python.learn.learn_io import data_feeder
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/learn_io/init.py", line 22, in
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/learn_io/dask_io.py", line 26, in
import dask.dataframe as dd
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/dask/dataframe/init.py", line 1, in
from .core import (DataFrame, Series, Index, _Frame, map_partitions,
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/dask/dataframe/core.py", line 23, in
from .. import array as da
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/dask/array/init.py", line 4, in
from .core import (Array, stack, concatenate, take, tensordot, transpose,
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/dask/array/core.py", line 13, in
from toolz.curried import (pipe, partition, concat, unique, pluck, join, first,
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/toolz/curried/init.py", line 53, in
_curry_namespace(vars(toolz)),
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/toolz/curried/init.py", line 48, in _curry_namespace
for name, f in ns.items() if '' not in name
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/toolz/curried/init.py", line 48, in
for name, f in ns.items() if '
' not in name
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/toolz/curried/init.py", line 42, in _should_curry
return (callable(f) and _nargs(f) > 1 or f in do_curry)
File "/dscrhome/pl39/anaconda3/lib/python3.5/site-packages/toolz/curried/init.py", line 35, in _nargs
return len(inspect.getargspec(f).args)
File "/dscrhome/pl39/anaconda3/lib/python3.5/inspect.py", line 1050, in getargspec
raise ValueError("Function has keyword-only arguments or annotations"
ValueError: Function has keyword-only arguments or annotations, use getfullargspec() API which can support them

Specify theano flags in readme and create an out directory

It took me a while to figure this out:

mkdir out
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

I am also creating an AMI in the Northern Virginia region; I will share it once it's ready.

FileNotFoundError: [Errno 2] No such file or directory: ''

I followed the installation up to installing venv, then tried running the program with my images and the vgg16 file in the current working directory.

I had installed TensorFlow on my machine using pip3 install tensorflow --user

venv did not detect that TensorFlow, so I installed it with pip install tensorflow while inside venv.

Then I ran my command again:

(venv) user@user-X230:~/Documents/Programming/image$ make_image_analogy.py a.jpg a.jpg b.jpg output
Using TensorFlow backend.
Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
Using PatchMatch model
Traceback (most recent call last):
File "/home/user/.keras/venv/bin/make_image_analogy.py", line 27, in
image_analogy.main.main(args, model_class)
File "/home/user/.keras/venv/lib/python3.6/site-packages/image_analogy/main.py", line 34, in main
os.makedirs(output_dir)
File "/home/user/.keras/venv/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileNotFoundError: [Errno 2] No such file or directory: ''

I am getting this error with the PatchMatch model and have no clue where to begin troubleshooting it.
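
A plausible explanation, inferred only from the traceback: the output prefix output has no directory component, so os.path.dirname() returns an empty string and os.makedirs('') raises exactly this error. A minimal reproduction:

import os

prefix = 'output'                  # prefix with no directory component
out_dir = os.path.dirname(prefix)  # -> ''
os.makedirs(out_dir)               # FileNotFoundError: [Errno 2] No such file or directory: ''

If that is the cause, a prefix that includes a directory, e.g. out/arch as in the README examples, should avoid the empty path.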

Cannot convert Tensor("ExpandDims:0", shape=(1, 256, 32, 22), dtype=float32) to TensorType'

I tried running an example and I am getting a ton of errors:

(DL) ♦ examples / ➞  ./render-example.sh arch ../vgg16_weights.h5
Only using analogy loss
/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
Using brute-force model
Scale factor 0.25 "A" shape (1, 3, 128, 89) "B" shape (1, 3, 128, 89)
2018-01-14 10:28:44.742492: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:859] OS X does not support NUMA - returning NUMA node zero
2018-01-14 10:28:44.742578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 11.00GiB freeMemory: 8.34GiB
2018-01-14 10:28:44.742589: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1300] Adding visible gpu device 0
2018-01-14 10:28:44.951639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:987] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8064 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Building loss...
WARNING:tensorflow:From /Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1247: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Precomputing static features...
Building and combining losses...
Traceback (most recent call last):
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/type.py", line 269, in dtype_specs
    }[self.dtype]
KeyError: 'object'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/basic.py", line 246, in constant
    ttype = TensorType(dtype=x_.dtype, broadcastable=bcastable)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/type.py", line 51, in __init__
    self.dtype_specs()  # error checking is done there
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/type.py", line 272, in dtype_specs
    % (self.__class__.__name__, self.dtype))
TypeError: Unsupported dtype for TensorType: object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/basic.py", line 194, in as_tensor_variable
    return constant(x, name=name, ndim=ndim)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/basic.py", line 266, in constant
    raise TypeError("Could not convert %s to TensorType" % x, type(x))
TypeError: ('Could not convert Tensor("ExpandDims:0", shape=(1, 256, 32, 22), dtype=float32) to TensorType', <class 'tensorflow.python.framework.ops.Tensor'>)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/carbon/Dev/anaconda/envs/DL/bin/make_image_analogy.py", line 27, in <module>
    image_analogy.main.main(args, model_class)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/main.py", line 71, in main
    model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/models/base.py", line 23, in build
    loss = self.build_loss(a_image, ap_image, b_image)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/models/analogy.py", line 37, in build_loss
    patch_stride=self.args.patch_stride)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/losses/analogy.py", line 28, in analogy_loss
    best_a_prime_patches = find_analogy_patches(a, a_prime, b, patch_size=patch_size, patch_stride=patch_stride)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/losses/analogy.py", line 14, in find_analogy_patches
    a_patches, a_patches_norm = patches.make_patches(K.variable(a), patch_size, patch_stride)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/image_analogy/losses/patches.py", line 14, in make_patches
    mode='valid')
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/nnet/neighbours.py", line 714, in images2neibs
    return Images2Neibs(mode)(ten4, neib_shape, neib_step)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/gof/op.py", line 615, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/nnet/neighbours.py", line 101, in make_node
    ten4 = T.as_tensor_variable(ten4)
  File "/Users/carbon/Dev/anaconda/envs/DL/lib/python3.6/site-packages/theano/tensor/basic.py", line 200, in as_tensor_variable
    raise AsTensorError("Cannot convert %s to TensorType" % str_x, type(x))
theano.tensor.var.AsTensorError: ('Cannot convert Tensor("ExpandDims:0", shape=(1, 256, 32, 22), dtype=float32) to TensorType', <class 'tensorflow.python.framework.ops.Tensor'>)
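
For what it's worth, the last few frames suggest (this is an inference from the traceback, not a confirmed diagnosis) that the brute-force patch matcher routes through Theano's images2neibs op, which cannot accept a TensorFlow tensor. The failure reduces to a backend mismatch like this:

import tensorflow as tf
import theano.tensor as T

x = tf.zeros((1, 256, 32, 22))  # a TensorFlow tensor, as in the log
T.as_tensor_variable(x)         # raises AsTensorError: Cannot convert ... to TensorType

That would explain why runs with the default PatchMatch model work under the TensorFlow backend while the brute-force model does not.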

tensorflow termination; what(): std::bad_alloc

Hi, I don't have an NVIDIA GPU, so I've been trying to use the TensorFlow backend with --mrf-w=0 to speed things up (Theano works but is really slow), but I get this error (tested with many different images that all worked with the Theano backend). Any ideas how to fix it?

xxx:~/Code/python/neural-image-analogies$ make_image_analogy.py images/1.jpg images/1.jpg images/2.jpg out/arch --mrf-w=0
Using TensorFlow backend.
Using PatchMatch model
Scale factor 0.25 "A" shape (1, 3, 603, 653) "B" shape (1, 3, 300, 225)
Building loss...
Precomputing static features...
Building and combining losses...
Start of iteration 0 x 0
Current loss value: 62929842176.0
Image saved as out/arch_at_iteration_0_0.png
Iteration completed in 1359.27 seconds
Start of iteration 0 x 1
Current loss value: 59368124416.0
Image saved as out/arch_at_iteration_0_1.png
Iteration completed in 1354.37 seconds
Start of iteration 0 x 2
Current loss value: 58041049088.0
Image saved as out/arch_at_iteration_0_2.png
Iteration completed in 1315.46 seconds
Start of iteration 0 x 3
Current loss value: 57320632320.0
Image saved as out/arch_at_iteration_0_3.png
Iteration completed in 1324.93 seconds
Start of iteration 0 x 4
Current loss value: 56854339584.0
Image saved as out/arch_at_iteration_0_4.png
Iteration completed in 990.21 seconds
/home/xxx/Code/python/neural-image-analogies/venv/local/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:573: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
  "the returned array has changed.", UserWarning)
Scale factor 0.625 "A" shape (1, 3, 1508, 1633) "B" shape (1, 3, 751, 563)
Building loss...
Precomputing static features...
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

not working as well as it did a few days ago?

Hey, great work on this; it looks great. However, the changes you've made in the past few days have changed the output quite a lot. A few days ago (commit c6f35e7) I would get this (which looks perfect):

(image: pf0000_at_iteration_2_4_old)

but now I get this (with the same settings):

(image: pf0000_at_iteration_2_4_new)

My mask is this:

(image: pf-nm_0000)

(Also, the new version is exactly 2x faster, which is great, but I'm nowhere near the same results.)

Error running render-example.sh example script

While running one of the example scripts I am running into the following issue:

./render-example.sh arch ../vgg16_weights.h5
Only using analogy loss
Using Theano backend.
Using gpu device 0: Tesla K80 (CNMeM is enabled with initial size: 90.0% of memory, CuDNN 3007)
Using brute-force model
Scale factor 0.25 "A" shape (1, 3, 116, 80) "B" shape (1, 3, 128, 89)
Building loss...
Precomputing static features...
Traceback (most recent call last):
File "/usr/local/bin/make_image_analogy.py", line 5, in
pkg_resources.run_script('neural-image-analogies==0.0.11', 'make_image_analogy.py')
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 528, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1401, in run_script
exec(script_code, namespace, namespace)
File "/usr/local/lib/python2.7/dist-packages/neural_image_analogies-0.0.11-py2.7.egg/EGG-INFO/scripts/make_image_analogy.py", line 27, in

File "build/bdist.linux-x86_64/egg/image_analogy/main.py", line 69, in main
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 23, in build
File "build/bdist.linux-x86_64/egg/image_analogy/models/analogy.py", line 23, in build_loss
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 53, in precompute_static_features
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 61, in get_features
File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 72, in get_layer_output
AttributeError: 'Convolution2D' object has no attribute 'get_output'

Any idea what is wrong here? Any help would be really appreciated. Thanks.
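
A hedged note: layer.get_output() existed in Keras before 1.0; the 1.0 functional API replaced it with layer.output (and layer.get_output_at() for layers used at more than one node), so this traceback is consistent with running a newer Keras than the installed package expects. A small compatibility sketch (the helper name is made up for illustration):

def layer_output(layer):
    # Keras < 1.0 exposed layer.get_output(); Keras >= 1.0 exposes layer.output.
    if hasattr(layer, 'get_output'):
        return layer.get_output()
    return layer.output

Pinning an older Keras version in the virtualenv would be the other obvious workaround.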

Flip the filter for finding matched patch

In the patches.py file, there is a method called find_patch_matches. I wonder why the third and fourth dimensions of b are reversed, i.e. b[:, :, ::-1, ::-1]:

if convs is None:
    convs = K.conv2d(a, b[:, :, ::-1, ::-1], border_mode='valid')

Thank you so much!
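
A hedged reading of why the flip is there (my interpretation of the code, not an authoritative answer): patch matching wants the cross-correlation of the feature maps with each candidate patch, but conv2d (at least under the Theano backend) performs a true convolution, which flips its filter spatially; reversing the last two axes of b beforehand cancels that flip, so the result is the correlation, i.e. per-position dot products against the patches. A small NumPy/SciPy check of the identity:

import numpy as np
from scipy.signal import convolve2d, correlate2d

a = np.random.rand(5, 5)
k = np.random.rand(3, 3)

# Convolving with a spatially flipped kernel equals cross-correlating with the
# original kernel, which is what patch matching actually needs.
print(np.allclose(convolve2d(a, k[::-1, ::-1], mode='valid'),
                  correlate2d(a, k, mode='valid')))  # True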

ValueError: Layer weight shape not compatible with provided weight shape

After installing TensorFlow (Python 3, CPU only) on Anaconda, I tried to run the script without success:

$ make_image_analogy.py images/a.jpg images/a.jpg images/b.jpg out/b                                       
Using TensorFlow backend.
Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
Using PatchMatch model
Scale factor 0.25 "A" shape (1, 3, 48, 64) "B" shape (1, 3, 48, 64)
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these areavailable on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these areavailable on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
  File "/home/dori/.conda/envs/py36/bin/make_image_analogy.py", line 27, in <module>
    image_analogy.main.main(args, model_class)
  File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/main.py", line 69, in main
    net = vgg16.get_model(img_width, img_height, weights_path=args.vgg_weights, pool_mode=args.pool_mode)
  File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/vgg16.py", line 89, in get_model
    layer.set_weights(weights)
  File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/keras/engine/topology.py", line 1154, in set_weights
    'provided weight shape ' + str(w.shape))
ValueError: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)

Any idea how to solve this issue?

PS: Note that I renamed all Convolution2D(XXX, 3, 3, activation=... calls to Conv2D(XXX, (3, 3), activation=... to fix the many UserWarnings like:

/home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/vgg16.py:71: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(512, (3, 3), activation="relu", name="conv5_3")`
  model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))

Model Brute cannot convert to Tensor

Hi, I'm trying to run the sugarskull example with TensorFlow. Everything works as expected except when I specify --model=brute. When I do, I get this error:

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
Using brute-force model
Scale factor 0.25 "A" shape (1, 3, 644, 483) "B" shape (1, 3, 644, 483)
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID 0000:00:1e.0
Total memory: 11.17GiB
Free memory: 11.11GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0)
Building loss...
Precomputing static features...
Building and combining losses...
Traceback (most recent call last):
  File "/usr/local/bin/make_image_analogy.py", line 27, in <module>
    image_analogy.main.main(args, model_class)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/main.py", line 71, in main
    model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/models/base.py", line 23, in build
    loss = self.build_loss(a_image, ap_image, b_image)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/models/analogy.py", line 37, in build_loss
    patch_stride=self.args.patch_stride)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/losses/analogy.py", line 28, in analogy_loss
    best_a_prime_patches = find_analogy_patches(a, a_prime, b, patch_size=patch_size, patch_stride=patch_stride)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/losses/analogy.py", line 14, in find_analogy_patches
    a_patches, a_patches_norm = patches.make_patches(K.variable(a), patch_size, patch_stride)
  File "/usr/local/lib/python2.7/dist-packages/image_analogy/losses/patches.py", line 14, in make_patches
    mode='valid')
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/nnet/neighbours.py", line 553, in images2neibs
    return Images2Neibs(mode)(ten4, neib_shape, neib_step)
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 611, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/nnet/neighbours.py", line 84, in make_node
    ten4 = T.as_tensor_variable(ten4)
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/basic.py", line 208, in as_tensor_variable
    raise AsTensorError("Cannot convert %s to TensorType" % str_x, type(x))
theano.tensor.var.AsTensorError: ('Cannot convert Tensor("ExpandDims:0", shape=(1, 256, 161, 120), dtype=float32) to TensorType', <class 'tensorflow.python.framework.ops.Tensor'>)

What is going wrong here?

Out of memory error

I'm getting an "Error allocating 45158400 bytes of device memory (out of memory). Driver report 44666880 bytes free and 536543232 bytes total" running the example, at iteration 1x0. Maybe 512mb video memory is simply too little?

No scaling in Tensorflow

My script fails if I run in TensorFlow mode (on the CPU) and set the --a-scale=2 parameter. Without this parameter it runs fine; --a-scale=1 also seems to work.

My script looks like this:
KERAS_BACKEND='tensorflow' THEANO_FLAGS='device=cpu' make_image_analogy.py \
    images/$PREFIX-A.jpg images/$PREFIX-Ap.jpg \
    images/$PREFIX-B.jpg out/$PREFIX-h-$HEIGHT-scale-$SCALE/$PREFIX-mrf-$MRF_VAL-ana$
    --mrf-w=$MRF_VAL --patch-size=3 --height=$HEIGHT \
    --analogy-w=$ANALOGY_VAL  --a-scale=$4\
    --vgg-weights=$VGG_WEIGHTS --output-full
