Hi, I was experimenting with Keras on a Raspberry Pi to see how well it handles it (not very well, as it turns out, but it did work). When I try to run the heatmap demo, however, I always get a memory error, even after reducing the size of the output image, which is probably not the cause of the crash in the first place. I looked it up online and the solution seems to be related to batch size; could you point me to where in the package I would set that? The error I receive is below, and the same error appears with Python 2:
pi@raspberrypi:~/heatmaps/examples $ python3 demo.py
/usr/lib/python3/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using Theano backend.
Traceback (most recent call last):
File "demo.py", line 38, in <module>
model = VGG16()
File "/home/pi/.local/lib/python3.5/site-packages/keras/applications/vgg16.py", line 146, in VGG16
x = Dense(4096, activation='relu', name='fc1')(x)
File "/home/pi/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 590, in __call__
self.build(input_shapes[0])
File "/home/pi/.local/lib/python3.5/site-packages/keras/layers/core.py", line 842, in build
constraint=self.kernel_constraint)
File "/home/pi/.local/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/pi/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 414, in add_weight
constraint=constraint)
File "/home/pi/.local/lib/python3.5/site-packages/keras/backend/theano_backend.py", line 154, in variable
strict=False)
File "/usr/local/lib/python3.5/dist-packages/theano/compile/sharedvalue.py", line 268, in shared
allow_downcast=allow_downcast, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/theano/tensor/sharedvar.py", line 54, in tensor_constructor
value=np.array(value, copy=(not borrow)),
MemoryError: you might consider using 'theano.shared(..., borrow=True)'
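For context on why batch size may not help here: the crash happens while the model is being *built*, not while running inference, so no data has been batched yet. VGG16's first fully connected layer alone is a 25088 × 4096 float32 kernel (the conv output for a 224×224 input flattens to 7·7·512 = 25088 values), which a quick back-of-the-envelope calculation puts near 400 MiB for that single layer, more than a typical Raspberry Pi can spare:

```python
# Rough estimate of the memory needed just for VGG16's fc1 kernel,
# assuming the standard 224x224 input (flattened conv output is 7*7*512).
FLATTENED_INPUTS = 7 * 7 * 512   # 25088 inputs feeding fc1
UNITS = 4096                     # fc1 width
BYTES_PER_FLOAT32 = 4

weight_bytes = FLATTENED_INPUTS * UNITS * BYTES_PER_FLOAT32
print(weight_bytes / 2**20)      # ~392 MiB for one layer's weights
```

Fully convolutional or lightweight architectures (MobileNet, for example) avoid these huge dense kernels, which is likely why they are a better fit for the Pi.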
Update:
The issue is exclusive to VGG16: any call to VGG16 from any code triggers it. I will close the issue after your reply, as it does not seem to be related to your package.
I do, however, have another issue: things work fine with VGG16 but not with ResNet50, where I always receive the following error, even on my usual machine.
Using Theano backend.
Model type detected: local pooling - flatten
Model cut at layer: 173
Pool size infered: 1
Traceback (most recent call last):
File "demo.py", line 39, in <module>
new_model = to_heatmap(model)
File "/home/pi/heatmaps/heatmap/heatmap.py", line 259, in to_heatmap
x = copy_last_layers(model, index + 3, x)
File "/home/pi/heatmaps/heatmap/heatmap.py", line 182, in copy_last_layers
if last_activation == 'softmax':
UnboundLocalError: local variable 'last_activation' referenced before assignment
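If it helps with debugging: an `UnboundLocalError` like this usually means the loop in `copy_last_layers` walked all of ResNet50's final layers without ever hitting the branch that assigns `last_activation`, and the variable was then read anyway. A minimal reproduction of the pattern (the function names and dict layout here are hypothetical, just to illustrate the bug class and the usual fix of assigning a default before the loop):

```python
def detect_activation(layers):
    # Buggy pattern: last_activation is only bound inside the loop,
    # so an input where no layer matches leaves it unassigned.
    for layer in layers:
        if layer.get("activation"):
            last_activation = layer["activation"]
    return last_activation  # raises UnboundLocalError if never assigned


def detect_activation_safe(layers, default=None):
    # Fixed pattern: bind a default before the loop so the read is
    # always valid, even for layer layouts the loop does not expect.
    last_activation = default
    for layer in layers:
        if layer.get("activation"):
            last_activation = layer["activation"]
    return last_activation
```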
It also does not work with InceptionV3, which yields the following error.
Using Theano backend.
Model type detected: global pooling
Model cut at layer: 310
Traceback (most recent call last):
File "demo.py", line 40, in <module>
new_model = to_heatmap(model)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 239, in to_heatmap
x = copy_last_layers(model, index + 1, x)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 169, in copy_last_layers
x = add_reshaped_layer(layer, x, 1, no_activation=True)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 209, in add_reshaped_layer
x = new_layer(x)
File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 573, in __call__
self.assert_input_compatibility(inputs)
File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 472, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer predictions: expected ndim=4, found ndim=2
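My reading of this error (just a guess on my part): the copied classifier layer is being wired to expect a 4-D convolutional feature map, but in these "global pooling" models the pooling step has already collapsed the tensor to 2-D before the layer is applied. A small NumPy illustration of the dimensionality mismatch, with made-up shapes:

```python
import numpy as np

# Hypothetical conv feature map: (batch, height, width, channels).
conv_features = np.zeros((1, 8, 8, 2048))

# Global average pooling collapses the spatial axes, leaving 2-D.
pooled = conv_features.mean(axis=(1, 2))

print(conv_features.ndim)  # 4 - what the rebuilt layer expects
print(pooled.ndim)         # 2 - what it actually receives
```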
Using DenseNet121 and Theano:
Using Theano backend.
Model type detected: global pooling
Model cut at layer: 425
Traceback (most recent call last):
File "demo.py", line 40, in <module>
new_model = to_heatmap(model)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 239, in to_heatmap
x = copy_last_layers(model, index + 1, x)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 169, in copy_last_layers
x = add_reshaped_layer(layer, x, 1, no_activation=True)
File "/home/abdu/anaconda2/envs/theano/heatmaps/heatmap/heatmap.py", line 209, in add_reshaped_layer
x = new_layer(x)
File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 573, in __call__
self.assert_input_compatibility(inputs)
File "/home/abdu/anaconda2/envs/theano/lib/python2.7/site-packages/keras/engine/topology.py", line 472, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer fc1000: expected ndim=4, found ndim=2
Another Update:
So I tested some other models using the TensorFlow backend and I receive essentially the same error, shown below; note that the VGG16 model runs with no issue.
Model type detected: global pooling
Traceback (most recent call last):
File "demo.py", line 39, in <module>
new_model = to_heatmap(model)
File "/home/abdu/anaconda2/envs/tensorflowGpu/heatmaps/heatmap/heatmap.py", line 234, in to_heatmap
x = middle_model(img_input)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 2078, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 2229, in run_internal_graph
output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/layers/normalization.py", line 185, in call
self.momentum),
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1001, in moving_average_update
x, value, momentum, zero_debias=True)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
update_delta = _zero_debias(variable, value, decay)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 180, in _zero_debias
"biased", initializer=biased_initializer, trainable=False)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
use_resource=use_resource)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 664, in _get_single_variable
name, "".join(traceback.format_list(tb))))
ValueError: Variable block1_conv1_bn/moving_mean/biased already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1001, in moving_average_update
x, value, momentum, zero_debias=True)
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/layers/normalization.py", line 185, in call
self.momentum),
File "/home/abdu/anaconda2/envs/tensorflowGpu/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
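From what I can tell, this one comes from the BatchNormalization layers: their moving-average variables are created when the layer first runs, and calling the same sub-model a second time in the same graph tries to create them again, which TF1-style variable scoping forbids unless reuse is enabled. A toy sketch (not the real TensorFlow API) of that check, just to show the mechanism:

```python
class VarScope:
    """Toy model of TF1-style variable scoping: creating a variable
    whose name already exists is an error unless reuse is requested."""

    def __init__(self):
        self._vars = {}

    def get_variable(self, name, initializer=0.0, reuse=False):
        if name in self._vars:
            if not reuse:
                raise ValueError(
                    "Variable %s already exists, disallowed. "
                    "Did you mean to set reuse=True in VarScope?" % name)
            return self._vars[name]  # reuse the existing variable
        self._vars[name] = initializer  # first creation succeeds
        return initializer
```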
I would not want to bother you with all these models, but it would be great if I could at least get it to work with either MobileNet or DenseNet121, as they are lightweight models, which is what I need for the Raspberry Pi. Note also how the errors differ by backend: the same error across models with TensorFlow, but different errors depending on the model with Theano.
Thanks