deeplift's Issues

problem with the normalize weights function for convolution layer in tensorflow branch

Hello,

I was trying to use the function mean_normalise_first_conv_layer_weights from the Keras conversion module for my model built with the TensorFlow backend. However, I kept getting the following error:

File "/home/Programs/deeplift/deeplift/util.py", line 132, in mean_normalise_weights_for_sequence_convolution assert weights.shape[1]==1, weights.shape
AssertionError: (4, 10, 1, 45)

My guess is that this method hasn't been updated from the Theano layer-weight layout to the layout TensorFlow expects?
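
If that is the cause, here is a minimal sketch of the idea (an assumption about the fix, not the library's actual code): Theano orders convolution weights as (out_channels, in_channels, rows, cols), while TensorFlow uses (rows, cols, in_channels, out_channels), so transposing to the Theano order before calling the helper would satisfy the assertion.

import numpy as np

weights = np.random.randn(4, 10, 1, 45)             # TF order: (rows, cols, in, out)
theano_style = np.transpose(weights, (3, 2, 0, 1))  # -> (out, in, rows, cols) = (45, 1, 4, 10)
assert theano_style.shape[1] == 1                   # the check in util.py now passes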

The Average layer

Hi Avanti and others,

Thank you very much for this great package. I would like to use it to compute feature contributions for my research problem. I used the Keras functional API, so I believe I should use the following command to convert the model:

deeplift_model = kc.convert_functional_model(model,nonlinear_mxts_mode=deeplift.blobs.NonlinearMxtsMode.DeepLIFT_GenomicsDefault)

In my model, I used an average layer like this: output = Average()([input1, input2]). Based on my understanding, there is no corresponding conversion function in your package for the Average layer. Is that true? I could have replaced it with the merge function, like output = merge([input1, input2], mode='ave'), but I found that (1) though you have a conversion function for the merge layer, it only covers the concatenate mode, and (2) I got a warning message that "the merge function is deprecated and will be removed after 08/2017. Use instead layers from keras.layers.merge, e.g. add, concatenate, etc.".

Is my understanding correct? If yes, do you mind adding a conversion function for the "Average" layer?

If not, please let me know if I misunderstood anything.

Thank you very much for your help. I really appreciate your great work.

Best wishes,
Jessie

Deeplift with embedding layers

Is there any way to use deeplift on a model that has been trained with an embedding layer? I use an embedding layer that is essentially just a lookup and the input is a matrix of indices. When I try to import my model into deeplift I get the following error:

  File "/Users/gmcinnes/src/external_repos/deeplift/deeplift/conversion/kerasapi_conversion.py", line 388, in convert_model_from_saved_files
return model_conversion_function(model_config=model_config, **kwargs)
  File "/Users/gmcinnes/src/external_repos/deeplift/deeplift/conversion/kerasapi_conversion.py", line 442, in convert_sequential_model
layer_overrides=layer_overrides)
  File "/Users/gmcinnes/src/external_repos/deeplift/deeplift/conversion/kerasapi_conversion.py", line 475, in sequential_container_conversion
layer_config["class_name"])
  File "/Users/gmcinnes/src/external_repos/deeplift/deeplift/conversion/kerasapi_conversion.py", line 327, in layer_name_to_conversion_function
return name_dict[layer_name.lower()]
KeyError: u'embedding'

Is it possible to use the model I have or will I have to find some workaround?

Thanks
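
A common workaround (a sketch, not deeplift's documented API; X_indices and the sub-model are hypothetical): since an Embedding layer is a pure lookup, the embedding can be applied outside the model, and deeplift run on a sub-model that starts after the embedding, fed with pre-embedded arrays.

import numpy as np

# pull the trained lookup table out of the original model
embedding_matrix = keras_model.layers[0].get_weights()[0]   # shape: (vocab_size, embed_dim)

# turn the matrix of indices into a matrix of embedding vectors
X_embedded = embedding_matrix[X_indices]                    # shape: (n, seq_len, embed_dim)

# then build (and convert) a model consisting of layers[1:] of the original,
# whose input is the embedded tensor, and score X_embedded with deeplift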

TypeError: string indices must be integers

I just have a normal MNIST CNN and get the following error when running DeepLIFT

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-4145c122a002> in <module>
      4     kc.convert_model_from_saved_files(
      5         "NN.h5",
----> 6         nonlinear_mxts_mode=deeplift.layers.NonlinearMxtsMode.DeepLIFT_GenomicsDefault) 
      7 
      8 # Which layer to propagate contribution scores?

/anaconda3/envs/exp1/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in convert_model_from_saved_files(h5_file, json_file, yaml_file, **kwargs)
    361     for layer_config in layer_configs:
    362 
--> 363         layer_name = layer_config["config"]["name"]
    364         assert layer_name in model_weights,\
    365             ("Layer "+layer_name+" is in the layer names but not in the "

TypeError: string indices must be integers

Any idea what this could be? Should I post a whole working example to make it clearer?
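
A sketch of one likely cause (an assumption, consistent with the "Importing models from JSON" issue further down): newer Keras versions save the model config as {"name": ..., "layers": [...]} instead of a plain list of layer configs, so iterating over it yields string keys and layer_config["config"] indexes into a string. Flattening the nested "layers" list in the saved config may get past the error.

import h5py, json

with h5py.File("NN.h5", "r+") as f:
    cfg = json.loads(f.attrs["model_config"])
    if isinstance(cfg["config"], dict) and "layers" in cfg["config"]:
        cfg["config"] = cfg["config"]["layers"]
        f.attrs["model_config"] = json.dumps(cfg)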

Using same references for every task ?

Hi !
I'm using deeplift to compute scores on genomics data; I have 16,000 inputs of 251 bp and 81 tasks. Using shuffled references takes too much time and memory. Would it be possible to compute the shuffled sequences once and use them for all the tasks, instead of generating new ones each time? Would that change the scores?
Thanks
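
A sketch of reusing one set of references across all tasks (an assumption about intended usage; input_seqs and contribs_func are placeholders, and the scoring functions elsewhere in this thread accept an explicit input_references_list): the shuffles are generated once up front, so only the task index changes per call. Reusing the same references should give the same per-task scores as generating an identical set each time; different shuffles would of course change the scores somewhat.

import numpy as np

rng = np.random.RandomState(1)
# permute each one-hot sequence along its length axis, once
# (a simple stand-in for a proper dinucleotide shuffle)
references = np.array([seq[rng.permutation(seq.shape[0])] for seq in input_seqs])

for task_idx in range(81):
    scores = contribs_func(task_idx=task_idx,
                           input_data_list=[input_seqs],
                           input_references_list=[references],
                           batch_size=200,
                           progress_update=None)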

Error with RNNs

I have tried to use it with RNNs and I get the following error:
KeyError: 'lstm' with LSTMs, KeyError: 'gru' with GRUs, KeyError: 'simplernn' with SimpleRNNs ...
Did anyone else experience this behavior? Is it not supported for RNNs?

PR proposal: adding simple MNIST example

I thought it would be helpful to add a dead simple use case to the master branch -- the MNIST dataset example adapted from the keras_1_compatability branch! Let me know if you think this is a good idea!

unable to convert `TimeDistributed` layer

While attempting to load a Keras model, I get the following error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-62-0d96cf9dc4e0> in <module>
      3     kc.convert_model_from_saved_files(
      4         saved_hdf5_file_path,
----> 5         nonlinear_mxts_mode=deeplift.layers.NonlinearMxtsMode.DeepLIFT_GenomicsDefault)

/Applications/anaconda3/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in convert_model_from_saved_files(h5_file, json_file, yaml_file, **kwargs)
    398             layer_config["config"]["weights"] = layer_weights
    399 
--> 400     return model_conversion_function(model_config=model_config, **kwargs)
    401 
    402 

/Applications/anaconda3/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in convert_functional_model(model_config, nonlinear_mxts_mode, verbose, dense_mxts_mode, conv_mxts_mode, maxpool_deeplift_mode, layer_overrides, custom_conversion_funcs)
    822                             maxpool_deeplift_mode=maxpool_deeplift_mode,
    823                             layer_overrides=layer_overrides,
--> 824                             custom_conversion_funcs=custom_conversion_funcs)
    825 
    826     for output_layer in converted_model_container.output_layers:

/Applications/anaconda3/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in functional_container_conversion(config, name, verbose, nonlinear_mxts_mode, dense_mxts_mode, conv_mxts_mode, maxpool_deeplift_mode, layer_overrides, custom_conversion_funcs, outer_inbound_node_infos, node_id_to_deeplift_layers, node_id_to_input_node_info, name_to_deeplift_layer)
    575         else:
    576             conversion_function = layer_name_to_conversion_function(
--> 577                                     layer_config["class_name"])
    578 
    579         #We need to deal with the case of shared layers, i.e. the same

/Applications/anaconda3/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in layer_name_to_conversion_function(layer_name)
    332     # lowercase to create resistance to capitalization changes
    333     # was a problem with previous Keras versions
--> 334     return name_dict[layer_name.lower()]
    335 
    336 

KeyError: 'timedistributed'

Is it possible to add support for it?

AttributeError: 'module' object has no attribute 'max_pool_grad'

After running this line:
deeplift_contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_idx=find_scores_layer_idx,
    target_layer_idx=-2)
I got the error message: AttributeError: 'module' object has no attribute 'max_pool_grad'.
I am not sure what went wrong. Could you please help me with this issue? Thanks.
The architecture of my model is:


Layer (type)                 Output Shape              Param #
================================================================
conv2d_1 (Conv2D)            (None, 1, 1493, 64)       2112
activation_1 (Activation)    (None, 1, 1493, 64)       0
conv2d_2 (Conv2D)            (None, 1, 1493, 64)       32832
activation_2 (Activation)    (None, 1, 1493, 64)       0
max_pooling2d_1 (MaxPooling2 (None, 1, 187, 64)        0
dropout_1 (Dropout)          (None, 1, 187, 64)        0
conv2d_3 (Conv2D)            (None, 1, 187, 128)       65664
activation_3 (Activation)    (None, 1, 187, 128)       0
conv2d_4 (Conv2D)            (None, 1, 187, 128)       131200
activation_4 (Activation)    (None, 1, 187, 128)       0
max_pooling2d_2 (MaxPooling2 (None, 1, 24, 128)        0
dropout_2 (Dropout)          (None, 1, 24, 128)        0
conv2d_5 (Conv2D)            (None, 1, 24, 64)         65600
activation_5 (Activation)    (None, 1, 24, 64)         0
conv2d_6 (Conv2D)            (None, 1, 24, 64)         32832
activation_6 (Activation)    (None, 1, 24, 64)         0
max_pooling2d_3 (MaxPooling2 (None, 1, 3, 64)          0
dropout_3 (Dropout)          (None, 1, 3, 64)          0
flatten_1 (Flatten)          (None, 192)               0
dense_1 (Dense)              (None, 128)               24704
activation_7 (Activation)    (None, 128)               0
dropout_4 (Dropout)          (None, 128)               0
dense_2 (Dense)              (None, 64)                8256
activation_8 (Activation)    (None, 64)                0
dense_3 (Dense)              (None, 2)                 130
activation_9 (Activation)    (None, 2)                 0
================================================================
Total params: 363,330
Trainable params: 363,330
Non-trainable params: 0


InvalidArgumentError: indices[0] = 1 is not in [0, 1) For binary classification

After reading through the other related errors solved before, I'm still unable to fix mine...
I have a small NN whose last layer is activated with a sigmoid, and I'm having this issue...

I already tried changing the task_idx from 10 to 0, but I'm stuck at the same error...

from collections import OrderedDict
import numpy as np

method_to_task_to_scores = OrderedDict()
print("HEADS UP! integrated_grads_5 and integrated_grads_10 take 5x and 10x longer to run respectively")
print("Consider leaving them out to get faster results")
for method_name, score_func in [
                               ('revealcancel', revealcancel_func),
                               ('guided_backprop_masked', guided_backprop_func_masked),
                               ('guided_backprop_times_inp', guided_backprop_times_inp_func),
                               ('simonyan_masked', simonyan_func_masked),
                               ('grad_times_inp', grad_times_inp_func),
                               ('integrated_grads_5', integrated_grads_5),
                               ('integrated_grads_10', integrated_grads_10)
]:
    print("Computing scores for:",method_name)
    method_to_task_to_scores[method_name] = {}
    task_idx=0
    print("\tComputing scores for task: "+str(task_idx))
    scores = np.array(score_func(
                task_idx=task_idx,
                input_data_list=[dataSet.test_data_norm],
                input_references_list=[np.zeros_like(dataSet.test_data_norm)],
                batch_size=1000,
                progress_update=None))
    method_to_task_to_scores[method_name][task_idx] = scores

I can't see what's wrong...

How to apply deeplift on autoencoder made by keras?

Hello! How can I apply DeepLIFT to an autoencoder built with Keras?
The autoencoder in Keras looks like this:

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))

# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)

# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)

So how do I convert the autoencoder to the DeepLIFT format?
Thank you so much!
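
A sketch of how the conversion might look (assumptions flagged: the layer names are hypothetical, and convert_functional_model / get_target_contribs_func are used as in the other issues in this thread): an autoencoder is just a functional model, so it can be converted the same way, scoring contributions of the input to the bottleneck or to a chosen reconstruction unit.

deeplift_model = kc.convert_functional_model(
    autoencoder,
    nonlinear_mxts_mode=deeplift.blobs.NonlinearMxtsMode.Rescale)
contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_name="input_1_0",                     # hypothetical layer name
    pre_activation_target_layer_name="preact_dense_2_0")    # pre-sigmoid reconstruction layer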

New, unknown printing behavior

When I run the MNIST example, a bunch of "twos" are printed:

[screenshot: console output showing many repeated '2's]

This seems undesirable for users.

Note: I had to temporarily fix issue #14 (deleting some imports in blobs/__init__.py) to get the MNIST example running. However, it seems unlikely that this is the problem.

InvalidArgumentError (see above for traceback): indices[0] = 1 is not in [0, 1)

I'm getting the following error & am not sure how to resolve.

HEADS UP! integrated_grads_5 and integrated_grads_10 take 5x and 10x longer to run respectively
Consider leaving them out to get faster results
Computing scores for: revealcancel
	Computing scores for task: 0
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~/anaconda3/envs/yourenvname/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1349     try:
-> 1350       return fn(*args)
   1351     except errors.OpError as e:
...
...
...
InvalidArgumentError (see above for traceback): indices[0] = 1 is not in [0, 1)
	 [[Node: ScatterUpdate_14 = ScatterUpdate[T=DT_FLOAT, Tindices=DT_INT32, _class=["loc:@Variable_21"], use_locking=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Assign_63, ScatterUpdate_14/indices, ScatterUpdate_14/updates)]]

Here's the code snippet; the error occurs when calling score_func.

from collections import OrderedDict
import numpy as np
method_to_task_to_scores = OrderedDict()
print("HEADS UP! integrated_grads_5 and integrated_grads_10 take 5x and 10x longer to run respectively")
print("Consider leaving them out to get faster results")
for method_name, score_func in [
                               ('revealcancel', revealcancel_func),
                               ('guided_backprop_masked', guided_backprop_func_masked),
                               ('guided_backprop_times_inp', guided_backprop_times_inp_func),
                               ('simonyan_masked', simonyan_func_masked), 
                               ('grad_times_inp', grad_times_inp_func),
                               ('integrated_grads_5', integrated_grads_5),
                               ('integrated_grads_10', integrated_grads_10)
]:
    print("Computing scores for:",method_name)
    method_to_task_to_scores[method_name] = {}
    for task_idx in range(10):
        print("\tComputing scores for task: "+str(task_idx))

        scores = np.array(score_func(
                    task_idx=task_idx,
                    input_data_list=[X_test.values],
                    input_references_list=[np.zeros_like(X_test.values)],
                    batch_size=1000,
                    progress_update=None))
        method_to_task_to_scores[method_name][task_idx] = scores

Environment:

  • Using dev-tf branch with TF v1.5.0, Keras 1.2.0 & DeepLIFT v0.5.1-tensorflow. MNIST notebook works using this environment.

  • I'd like to explain a NN model I built using Keras front-end & TF backend w/ 2-dim tabular data (binary logistic regression).

  • Running the MNIST example notebook with 2-dim X_train & X_test (both data frames, but using .values to convert to ndarray).

  • difference in predictions: 0.0

  • target_layer_idx=-2 for binary logistic regression

Could this problem be due to MNIST being multi-class vs. my task being binary? If so, it's not clear what to modify in the MNIST notebook to indicate binary logistic regression.
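
A sketch of the likely fix (an assumption based on the error text: with a single sigmoid output unit the only valid task index is 0, so the MNIST-style loop over range(10) requests task 1 of a 1-unit layer):

for task_idx in range(1):   # binary logistic regression: one output unit, task 0 only
    print("\tComputing scores for task: "+str(task_idx))
    scores = np.array(score_func(
                task_idx=task_idx,
                input_data_list=[X_test.values],
                input_references_list=[np.zeros_like(X_test.values)],
                batch_size=1000,
                progress_update=None))
    method_to_task_to_scores[method_name][task_idx] = scores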

How can I use deeplift to do motif discovery?

I looked at the genomic simulation examples, but I'm not sure how to use DeepLIFT to do motif discovery. For example, in MEME, we can just give it a fasta file and it will output some motifs.

Thanks!
Yichao

run_function_in_batches for deeplift_contribs functions

It seems like run_function_in_batches is not designed to be run with deeplift_contribs functions, or I am using it incorrectly:

     scores = run_function_in_batches(
            func=self.deeplift_contribs_func,
            input_data_list=x_standardized,
            batch_size=self.batch_size,
            progress_update=1000,
>           task_idx=self.task_idx)
E       TypeError: run_function_in_batches() got an unexpected keyword argument 'task_idx'

where

self.deeplift_contribs_func = self.deeplift_model.get_target_contribs_func(
            find_scores_layer_idx=self.input_layer_idxs,
            target_layer_idx=output_layer)
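
A sketch of the simpler path (grounded in the usage elsewhere in this thread): the function returned by get_target_contribs_func does its own batching via its batch_size argument, so it can be called directly instead of being wrapped in run_function_in_batches.

scores = self.deeplift_contribs_func(
    task_idx=self.task_idx,
    input_data_list=x_standardized,
    batch_size=self.batch_size,
    progress_update=1000)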

TypeError: startswith first arg must be bytes or a tuple of bytes, not str

Hi,

I'm trying to analyse a model I built using the tensorflow.keras implementation of Keras instead of Keras proper. I get this error when I call kc.convert_model_from_saved_files:

    435                 layer_weights = [np.array(nested_model_weights[x]) for x in
    436                                  nested_model_weights.keys() if
--> 437                                  x.startswith(layer_name+"/")]
    438                 if (len(layer_weights) > 0):
    439                     layer_config["config"]["weights"] = layer_weights

TypeError: startswith first arg must be bytes or a tuple of bytes, not str

Looking around, I found this, which essentially says x is a bytes object, so bytes.startswith() is being called instead of str.startswith(). Is that the intended behaviour?

Thanks.
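
A sketch of a local workaround (an assumption: under this h5py/tf.keras combination the keys of nested_model_weights come back as bytes, so decoding them before the comparison avoids the TypeError):

layer_weights = [np.array(nested_model_weights[x]) for x in
                 nested_model_weights.keys() if
                 (x.decode("utf-8") if isinstance(x, bytes) else x)
                     .startswith(layer_name + "/")]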

Which kind of layer does DeepLIFT support as its last layer?

Hello,

I was trying to use the function get_target_contribs_func for my model built with Keras. However, I got the following error:

RuntimeError: There is a layer after your target layer but it is not an activation layer, which seems odd...if doing regression, make sure to set the target layer to the last layer

My last layer is a linear layer generated by keras.layers.core.Activation("linear"). The error disappears when changing "linear" to "sigmoid", so I guess "linear" is not supported by DeepLIFT as a last layer. Would you add it in the future? Or is something wrong with my model?
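
A sketch of the usual workaround (grounded in the error message itself, which says to set the target layer to the last layer when doing regression; the exact index here is an assumption):

contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_idx=0,
    target_layer_idx=-1)   # point at the final linear Activation itself

Alternatively, drop the redundant Activation("linear") layer entirely, since it is the identity.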

Update for Keras 2

Hi,
Thanks for the great paper and library.

I think you need to update the library for Keras 2, as the syntax has changed there.

Module 'tensorflow' has no attribute 'pack'

Using tensorflow 1.0.1 and Keras 1.1.2 on the dev-tf branch, I still cannot run the MNIST example:

import keras
print ('Keras: ' + keras.__version__)
import tensorflow as tf
print ('TF: ' + tf.__version__)
keras_model = keras.models.load_model("mnist_cnn_allconv_tensorflow.h5")
keras_model.summary()
Keras: 1.1.2
TF: 1.0.1
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-b8a871dfa50b> in <module>()
      3 import tensorflow as tf
      4 print ('TF: ' + tf.__version__)
----> 5 keras_model = keras.models.load_model("mnist_cnn_allconv_tensorflow.h5")
      6 keras_model.summary()

~/tensorflow_keras1/lib/python3.5/site-packages/keras/models.py in load_model(filepath, custom_objects)
    138         raise ValueError('No model found in config file.')
    139     model_config = json.loads(model_config.decode('utf-8'))
--> 140     model = model_from_config(model_config, custom_objects=custom_objects)
    141 
    142     # set weights

~/tensorflow_keras1/lib/python3.5/site-packages/keras/models.py in model_from_config(config, custom_objects)
    187         raise Exception('`model_fom_config` expects a dictionary, not a list. '
    188                         'Maybe you meant to use `Sequential.from_config(config)`?')
--> 189     return layer_from_config(config, custom_objects=custom_objects)
    190 
    191 

~/tensorflow_keras1/lib/python3.5/site-packages/keras/utils/layer_utils.py in layer_from_config(config, custom_objects)
     32         layer_class = get_from_module(class_name, globals(), 'layer',
     33                                       instantiate=False)
---> 34     return layer_class.from_config(config['config'])
     35 
     36 

~/tensorflow_keras1/lib/python3.5/site-packages/keras/models.py in from_config(cls, config, layer_cache)
   1059             conf = normalize_legacy_config(conf)
   1060             layer = get_or_create_layer(conf)
-> 1061             model.add(layer)
   1062         return model

~/tensorflow_keras1/lib/python3.5/site-packages/keras/models.py in add(self, layer)
    322                  output_shapes=[self.outputs[0]._keras_shape])
    323         else:
--> 324             output_tensor = layer(self.outputs[0])
    325             if type(output_tensor) is list:
    326                 raise Exception('All layers in a Sequential model '

~/tensorflow_keras1/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, x, mask)
    515         if inbound_layers:
    516             # This will call layer.build() if necessary.
--> 517             self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
    518             # Outputs were already computed when calling self.add_inbound_node.
    519             outputs = self.inbound_nodes[-1].output_tensors

~/tensorflow_keras1/lib/python3.5/site-packages/keras/engine/topology.py in add_inbound_node(self, inbound_layers, node_indices, tensor_indices)
    569         # creating the node automatically updates self.inbound_nodes
    570         # as well as outbound_nodes on inbound layers.
--> 571         Node.create_node(self, inbound_layers, node_indices, tensor_indices)
    572 
    573     def get_output_shape_for(self, input_shape):

~/tensorflow_keras1/lib/python3.5/site-packages/keras/engine/topology.py in create_node(cls, outbound_layer, inbound_layers, node_indices, tensor_indices)
    153 
    154         if len(input_tensors) == 1:
--> 155             output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
    156             output_masks = to_list(outbound_layer.compute_mask(input_tensors[0], input_masks[0]))
    157             # TODO: try to auto-infer shape if exception is raised by get_output_shape_for.

~/tensorflow_keras1/lib/python3.5/site-packages/keras/layers/core.py in call(self, x, mask)
    438 
    439     def call(self, x, mask=None):
--> 440         return K.batch_flatten(x)
    441 
    442 

~/tensorflow_keras1/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in batch_flatten(x)
    860     the first dimension is conserved.
    861     '''
--> 862     x = tf.reshape(x, tf.pack([-1, prod(shape(x)[1:])]))
    863     return x
    864 

AttributeError: module 'tensorflow' has no attribute 'pack'
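
A sketch of a common shim (an assumption: Keras 1.1.2 predates TensorFlow 1.0, where tf.pack was renamed to tf.stack, so aliasing the old name back may let this Keras version run on TF 1.0.1):

import tensorflow as tf

# restore the pre-1.0 name that this Keras version's backend still calls
if not hasattr(tf, "pack"):
    tf.pack = tf.stack

The cleaner fix is to upgrade Keras to a release that supports TF 1.x (the MNIST notebook is reported elsewhere in this thread to work with Keras 1.2.0).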

Support residual network architecture

Hi Avanti and others,

Thank you for the great tool! I would like to use your package to compute feature contributions for a residual network, but it seems that this type of layer is not supported currently. Could you please adapt your tool to handle this type of network?

My network architecture looks like this:
import keras as ke
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.optimizers import SGD
PS = 6212
DR = 0.2

inputs = Input(shape=(PS,))

x = Dense(2000, activation='relu')(inputs)

x = Dense(1000, activation='relu')(x)
x = Dropout(DR)(x)

y = Dense(1000, activation='relu')(x)

z = ke.layers.add([x,y])
z = Dropout(DR)(z)

y = Dense(1000, activation='relu')(z)

x = ke.layers.add([z,y])
x = Dropout(DR)(x)

y = Dense(1000, activation='relu')(x)

z = ke.layers.add([x,y])
z = Dropout(DR)(z)

y = Dense(1000, activation='relu')(z)

x = ke.layers.add([z,y])
x = Dropout(DR)(x)

x = Dense(500, activation='relu')(x)
x = Dropout(DR)(x)

x = Dense(250, activation='relu')(x)
x = Dropout(DR)(x)

x = Dense(125, activation='relu')(x)
x = Dropout(DR)(x)

x = Dense(62, activation='relu')(x)
x = Dropout(DR)(x)

x = Dense(30, activation='relu')(x)
x = Dropout(DR)(x)

outputs = Dense(2, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.001, momentum=0.9),
              metrics=['accuracy'])
Thanks for the help.

Explicit sequential model breaks connect_list_of_layers keras2compat

Running Keras 2.1.6 on the keras2compat branch, with the model config saved as a .yaml file and the weights as an .h5.

When I try to load the model, I get the error: AttributeError: 'Input' object has no attribute 'set_inputs' from connect_list_of_layers.

I fixed this by modifying the sequential_container_conversion function to start at config[1:], but this isn’t a great fix.

The issue stems from the fact that, if you have an explicit input layer in a sequential model, an input layer is created and appended to converted_layers, and then it is appended again in sequential_container_conversion.

Cannot convert Keras model

I'm using Keras 1.1.2 and Theano backend.

After training a simple Sequential model,


Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
sequence convolution (Convolutio (None, 281, 10)       810         convolution1d_input_1[0][0]      
____________________________________________________________________________________________________
flattening (Flatten)             (None, 2810)          0           sequence convolution[0][0]       
____________________________________________________________________________________________________
hidden layer (Dense)             (None, 5)             14055       flattening[0][0]                 
____________________________________________________________________________________________________
Output layer (Dense)             (None, 1)             6           hidden layer[0][0]               
====================================================================================================
Total params: 14871

I attempt to convert to deeplift via

deeplift_model = kc.convert_sequential_model(seq_model, nonlinear_mxts_mode=deeplift.blobs.NonlinearMxtsMode.DeepLIFT)

and get the following error, which looks to be caused by calling grad on T.nnet.sigmoid; I can't find grad in the tensor.nnet API docs.

Thanks!

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-12-9c97f8d060c4> in <module>()
      1 deeplift_model = kc.convert_sequential_model(
      2                     seq_model,
----> 3                     nonlinear_mxts_mode=deeplift.blobs.NonlinearMxtsMode.DeepLIFT)

/.local/lib/python2.7/site-packages/deeplift/conversion/keras_conversion.pyc in convert_sequential_model(model, num_dims, nonlinear_mxts_mode, verbose, dense_mxts_mode, maxpool_deeplift_mode)
    351                 converted_layers=converted_layers)
    352     deeplift.util.connect_list_of_layers(converted_layers)
--> 353     converted_layers[-1].build_fwd_pass_vars()
    354     return models.SequentialModel(converted_layers)
    355 

/.local/lib/python2.7/site-packages/deeplift/blobs/core.pyc in build_fwd_pass_vars(self, output_layer)
    143             self._output_layers.append(output_layer)
    144         if (self._built_fwd_pass_vars == False):
--> 145             self._build_fwd_pass_vars()
    146             self._built_fwd_pass_vars = True
    147 

/.local/lib/python2.7/site-packages/deeplift/blobs/activations.pyc in _build_fwd_pass_vars(self)
     29         super(Activation, self)._build_fwd_pass_vars()
     30         self._gradient_at_default_activation =\
---> 31          self._get_gradient_at_activation(self.get_reference_vars())
     32 
     33     def _get_gradient_at_default_activation_var(self):

/.local/lib/python2.7/site-packages/deeplift/blobs/activations.pyc in _get_gradient_at_activation(self, activation_vars)
    131 
    132     def _get_gradient_at_activation(self, activation_vars):
--> 133         return B.sigmoid_grad(activation_vars)
    134 
    135 

/.local/lib/python2.7/site-packages/deeplift/backend/theano_backend.pyc in sigmoid_grad(inp)
    120 def sigmoid_grad(inp):
    121     out = sigmoid(inp)
--> 122     grad = T.nnet.sigmoid.grad((inp,), (out,))
    123     return grad
    124 

AttributeError: 'Elemwise' object has no attribute 'grad'

Keras 1.2 Functional API 'GraphModel' object has no attribute 'get_layers'

I'm using Keras 1.2 w/ TF (dev-tf branch) and the functional API, and I updated my code to use kc.convert_functional_model.

Here's the error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-10-262bf3b052dd> in <module>()
      4 
      5 deeplift_model = revealcancel_model
----> 6 deeplift_prediction_func = compile_func([deeplift_model.get_layers()[0].get_activation_vars()],
      7                                        deeplift_model.get_layers()[-1].get_activation_vars())
      8 original_model_predictions = keras_model.predict(X_test.values, batch_size=200)

AttributeError: 'GraphModel' object has no attribute 'get_layers'

Here's the code snippet I'm running...

...
revealcancel_model = kc.convert_functional_model(model=keras_model, nonlinear_mxts_mode=NonlinearMxtsMode.RevealCancel)

from deeplift.util import compile_func
import numpy as np
from keras import backend as K

deeplift_model = revealcancel_model
deeplift_prediction_func = compile_func([deeplift_model.get_layers()[0].get_activation_vars()],
                                       deeplift_model.get_layers()[-1].get_activation_vars())
original_model_predictions = keras_model.predict(X_test.values, batch_size=200)
converted_model_predictions = deeplift.util.run_function_in_batches(
                                input_data_list=[X_test.values],
                                func=deeplift_prediction_func,
                                batch_size=200,
                                progress_update=None)
print("difference in predictions:",np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)))
assert np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)) < 10**-5
predictions = converted_model_predictions
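
A sketch of the functional-model equivalent (an assumption based on other issues in this thread: functional models convert to a GraphModel, which exposes layers by name rather than by index; older versions expose get_name_to_blob(), newer ones get_name_to_layer()):

name_to_layer = deeplift_model.get_name_to_blob()   # or .get_name_to_layer() on newer versions
layer_names = list(name_to_layer.keys())
deeplift_prediction_func = compile_func(
    [name_to_layer[layer_names[0]].get_activation_vars()],
    name_to_layer[layer_names[-1]].get_activation_vars())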

KeyError: 'inputlayer'?

Hi Avanti,

I am getting a pretty obscure error as follows:

KeyError: 'inputlayer'

There is no more message after that.

I used the keras functional API to build my model, which contains two inputs. Could it be that none of my inputs is named "inputlayer"?

Bosh

deeplift implementation for sequential vs. functional model architectures

I am using a simple model architecture with an 8-dimensional input and 2 dense layers, relu and sigmoid, to predict the onset of diabetes on the Pima Indians diabetes dataset.
convert_sequential_model() works and gives importance scores for the sequential version of the above architecture. However, for the functional-model equivalent, convert_functional_model() gives the error shown below.

Following is my code:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from deeplift.conversion import keras_conversion as kc
from deeplift.blobs import NonlinearMxtsMode

dataset = np.loadtxt("/home/jupyter/data/temp/pima-indians-diabetes.data.csv", delimiter=",")

X = dataset[:,0:8]
Y = dataset[:,8]
input = Input(shape = (8,))
model = Dense(8,activation='relu')(input)
output = Dense(1,activation='sigmoid')(model)
model = Model(input = input,output = output)


model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=200, batch_size=10)

revealcancel_model = kc.convert_functional_model(model=model, nonlinear_mxts_mode=NonlinearMxtsMode.RevealCancel)


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      4 from deeplift.conversion import keras_conversion as kc
      5
----> 6 revealcancel_model = kc.convert_functional_model(model=model, nonlinear_mxts_mode=NonlinearMxtsMode.RevealCancel)

/home/jupyter/deeplift/deeplift/conversion/keras_conversion.py in convert_functional_model(model, nonlinear_mxts_mode, verbose, dense_mxts_mode, conv_mxts_mode, maxpool_deeplift_mode, custom_conversion_funcs, auto_build_outputs)
611 id(output_node)][-1]
612 print(layer_to_build)
--> 613 layer_to_build.build_fwd_pass_vars()
614 return models.GraphModel(name_to_blob=name_to_blob,
615 input_layer_names=model.input_names)

/home/jupyter/deeplift/deeplift/blobs/core.py in build_fwd_pass_vars(self, output_layer)
167 self._output_layers.append(output_layer)
168 if (self._built_fwd_pass_vars == False):
--> 169 self._build_fwd_pass_vars()
170 self._built_fwd_pass_vars = True
171

/home/jupyter/deeplift/deeplift/blobs/activations.py in _build_fwd_pass_vars(self)
31 def _build_fwd_pass_vars(self):
32 #can't just inherit from parent due to gradient building
---> 33 self._build_fwd_pass_vars_core()
34 self._activation_vars = self._build_activation_vars(
35 self._get_input_activation_vars())

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_core(self)
329
330 def _build_fwd_pass_vars_core(self):
--> 331 self._build_fwd_pass_vars_for_all_inputs()
332 print("Class name",type(self).name)
333 print("Node input shape",self._get_input_shape())

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_for_all_inputs(self)
422
423 def _build_fwd_pass_vars_for_all_inputs(self):
--> 424 self.inputs.build_fwd_pass_vars(output_layer=self)
425
426 def _reset_built_fwd_pass_vars_for_inputs(self):

/home/jupyter/deeplift/deeplift/blobs/core.py in build_fwd_pass_vars(self, output_layer)
167 self._output_layers.append(output_layer)
168 if (self._built_fwd_pass_vars == False):
--> 169 self._build_fwd_pass_vars()
170 self._built_fwd_pass_vars = True
171

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars(self)
341 mxts will not be correct
342 """
--> 343 self._build_fwd_pass_vars_core()
344 self._activation_vars =
345 self._build_activation_vars(

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_core(self)
329
330 def _build_fwd_pass_vars_core(self):
--> 331 self._build_fwd_pass_vars_for_all_inputs()
332 print("Class name",type(self).name)
333 print("Node input shape",self._get_input_shape())

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_for_all_inputs(self)
422
423 def _build_fwd_pass_vars_for_all_inputs(self):
--> 424 self.inputs.build_fwd_pass_vars(output_layer=self)
425
426 def _reset_built_fwd_pass_vars_for_inputs(self):

/home/jupyter/deeplift/deeplift/blobs/core.py in build_fwd_pass_vars(self, output_layer)
167 self._output_layers.append(output_layer)
168 if (self._built_fwd_pass_vars == False):
--> 169 self._build_fwd_pass_vars()
170 self._built_fwd_pass_vars = True
171

/home/jupyter/deeplift/deeplift/blobs/activations.py in _build_fwd_pass_vars(self)
31 def _build_fwd_pass_vars(self):
32 #can't just inherit from parent due to gradient building
---> 33 self._build_fwd_pass_vars_core()
34 self._activation_vars = self._build_activation_vars(
35 self._get_input_activation_vars())

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_core(self)
329
330 def _build_fwd_pass_vars_core(self):
--> 331 self._build_fwd_pass_vars_for_all_inputs()
332 print("Class name",type(self).name)
333 print("Node input shape",self._get_input_shape())

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars_for_all_inputs(self)
422
423 def _build_fwd_pass_vars_for_all_inputs(self):
--> 424 self.inputs.build_fwd_pass_vars(output_layer=self)
425
426 def _reset_built_fwd_pass_vars_for_inputs(self):

/home/jupyter/deeplift/deeplift/blobs/core.py in build_fwd_pass_vars(self, output_layer)
167 self._output_layers.append(output_layer)
168 if (self._built_fwd_pass_vars == False):
--> 169 self._build_fwd_pass_vars()
170 self._built_fwd_pass_vars = True
171

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_fwd_pass_vars(self)
344 self._activation_vars =
345 self._build_activation_vars(
--> 346 self._get_input_activation_vars())
347 self._reference_vars =
348 self._build_reference_vars()

/home/jupyter/deeplift/deeplift/blobs/core.py in _build_activation_vars(self, input_act_vars)
667 print("Input activation variable shape",input_act_vars.shape)
668
--> 669 return K.dot(input_act_vars, self.W) + self.b
670
671 def _build_pos_and_neg_contribs(self):

/opt/conda/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in dot(x, y)
820 ```
821 """
--> 822 if ndim(x) is not None and (ndim(x) > 2 or ndim(y) > 2):
823 x_shape = []
824 for i, s in zip(int_shape(x), tf.unstack(tf.shape(x))):

/opt/conda/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in ndim(x)
435 ```
436 """
--> 437 dims = x.get_shape()._dims
438 if dims is not None:
439 return len(dims)

AttributeError: 'TensorVariable' object has no attribute 'get_shape'

Please help me out!

support tanh activation

Hi,
Does deeplift support the tanh activation function?
I can use sigmoid activation function correctly, but I got error messages when I switch to tanh activation function.

Thanks for help.

enum has no attribute

I'm having trouble running the mnist and genomics examples. The issues arise when using NonlinearMxtsMode with any specified mode; I get attribute errors no matter which mode is specified. Any suggestions?

kc.prelu_conversion() fails

The Keras model conversion fails for models with a PReLU layer. It appears that the reason is that the way the alpha parameters are accessed (in the keras conversion module) is no longer compatible with the current version of Keras (2.2.0).

getting non zero prediction difference

Hi,

When applying DeepLIFT to the cats-and-dogs dataset with the same architecture as in the MNIST example, the difference in predictions comes out non-zero.

Model

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(4,4),
                 strides=(2,2),
                 input_shape=(1, 150, 150)))
model.add(Activation("relu"))
model.add(Conv2D(filters=64, kernel_size=(4,4),
                 strides=(2,2)))
model.add(Activation("relu"))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(units=128))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(units=2))
model.add(Activation("softmax"))

from keras.optimizers import Adam
model.compile(optimizer=Adam(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

keras version : 2.2.0
tensorflow version : 1.10.1

I started with another architecture and it showed a difference, so as a trial I changed the architecture to match the one used in the MNIST example, but I am still getting the difference.

TypeError: concat() got an unexpected keyword argument 'axis'

Getting an error when trying to follow the examples. I had it working at some point, but I must have installed an update somewhere, failed to install something, or installed something I shouldn't have! Wondering if anyone has encountered the error below. Currently using the Conda install of DeepLift.

Traceback (most recent call last):
  File "DNAdemo.py", line 118, in <module>
    method_to_model[method_name] = kc.convert_model_from_saved_files('model_7.h5',nonlinear_mxts_mode=nonlinear_mxts_mode)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/conversion/kerasapi_conversion.py", line 388, in convert_model_from_saved_files
    return model_conversion_function(model_config=model_config, **kwargs)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/conversion/kerasapi_conversion.py", line 812, in convert_functional_model
    output_layer.build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/activations.py", line 34, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 301, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/activations.py", line 34, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 301, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/activations.py", line 34, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 301, in _build_fwd_pass_vars
    self._build_fwd_pass_vars_core()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 292, in _build_fwd_pass_vars_core
    self._build_fwd_pass_vars_for_all_inputs()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 383, in _build_fwd_pass_vars_for_all_inputs
    self.inputs.build_fwd_pass_vars(output_layer=self)
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 174, in build_fwd_pass_vars
    self._build_fwd_pass_vars()
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 304, in _build_fwd_pass_vars
    self._get_input_activation_vars())
  File "/Users/christopherpenfold/Desktop/Code/deeplift/deeplift/layers/core.py", line 628, in _build_activation_vars
    values=input_act_vars)
TypeError: concat() got an unexpected keyword argument 'axis'
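
A quick check worth running first (an assumption: tf.concat gained the axis keyword in TensorFlow 1.0; on older TF the signature was tf.concat(concat_dim, values), which would produce exactly this TypeError when deeplift calls tf.concat(axis=..., values=...) as in layers/core.py above):

import tensorflow as tf
print(tf.__version__)   # anything below 1.0 would explain the error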

License and clarification on using baremetal TF models?

Hi,

I have two questions:

  1. What license is deeplift under? I am guessing the MIT license? I looked around but couldn't quite track the license down.

  2. I have some models in tensorflow that I have trained and would like to use deeplift to 'interpret'. Is it best to just import the weights into keras and interpret the keras version? Is there a straightforward way to do this, or am I missing something?

Thanks!

Errors in models for API and ReLU

Hi,

I'm trying to go through an example using a simple functional-API Keras model. I've loaded the model in, e.g.,
deeplift_model = kc.convert_model_from_saved_files('Model.h5',
    nonlinear_mxts_mode=NonlinearMxtsMode.RevealCancel)

Now I'm attempting the next step: compiling the functions (compile_func) and checking everything matches the standard Keras model. I can't seem to find the list of layers:

deeplift_model.get_name_to_blob().keys()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'GraphModel' object has no attribute 'get_name_to_blob'

I seem to be able to compile the function via:

deeplift_prediction_func = compile_func([deeplift_model.get_name_to_layer()["input_1_0"].get_activation_vars()], deeplift_model.get_name_to_layer()["preact_dense_1_0"].get_activation_vars())

where
deeplift_model.get_name_to_layer().keys()
odict_keys(['input_1_0', 'preact_conv1d_1_0', 'conv1d_1_0', 'max_pooling1d_1_0', 'dropout_1_0', 'flatten_1_0', 'preact_dense_1_0', 'dense_1_0', 'preact_dense_2_0', 'dense_2_0', 'preact_dense_3_0', 'dense_3_0'])

However, when I try to call get_target_contribs_func

revealcancel_func = deeplift_model.get_target_contribs_func(find_scores_layer_name="input_1_0", pre_activation_target_layer_name="preact_conv1d_1_0")

I get told off about the final activation function:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/christopher_penfold/Desktop/Code/deeplift/deeplift/models.py", line 113, in get_target_contribs_func
    return self._get_func(*args, func_type=FuncType.contribs, **kwargs)
  File "/Users/christopher_penfold/Desktop/Code/deeplift/deeplift/models.py", line 269, in _get_func
    **kwargs)
  File "/Users/christopher_penfold/Desktop/Code/deeplift/deeplift/models.py", line 47, in _get_func
    self._set_scoring_mode_for_target_layer(target_layer)
  File "/Users/christopher_penfold/Desktop/Code/deeplift/deeplift/models.py", line 182, in _set_scoring_mode_for_target_layer
    +final_activation_type)
RuntimeError: Unsupported final_activation_type: ReLU

I'm sure I'm doing something obviously wrong somewhere... any pointers in the right direction would be appreciated.
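
A guess based on the layer names listed above (hedged: with the target set to preact_conv1d_1_0, the layer that follows it is a ReLU, which deeplift rejects as a final activation; the usual target for classification is the last pre-activation dense layer):

revealcancel_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_name="input_1_0",
    pre_activation_target_layer_name="preact_dense_3_0")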

convert_model_from_saved_files can not work for Inception V3 and other models

Deeplift is a great tool and "Learning Important Features Through Propagating Activation Differences" is a great paper.
However, I cannot use it for Inception V3 and other models (I know DeepLIFT does not implement ResNet so far):

File "/home/ubuntu/.conda/envs/dlp/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py", line 122, in conv2d_conversion
bias=config[KerasKeys.weights][1],
IndexError: list index out of range
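
A sketch of a plausible diagnosis (an assumption: Inception V3's Conv2D layers are built with use_bias=False, so get_weights() returns only the kernel, and config[KerasKeys.weights][1] indexes past the end of the list):

from keras.applications import InceptionV3

model = InceptionV3(weights=None)
conv = next(l for l in model.layers if type(l).__name__ == "Conv2D")
print(conv.use_bias, len(conv.get_weights()))   # expect: False 1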

tensorflow branch lacks json model design parameter

Hi,
it seems that the tensorflow branch lacks the json parameter option for deeplift.conversion.keras_conversion.load_keras_model, which is present in the master branch. Using the function as implemented in the master branch also worked for me on tensorflow.

KeyError: 'batchnormalizationv1'

Hey :) I'm trying to use kc.convert_model_from_saved_files, but I get this KeyError:

  File "C:\Users\XXXXXXX\Anaconda3\lib\site-packages\deeplift\conversion\kerasapi_conversion.py", line 349, in layer_name_to_conversion_function
    return name_dict[layer_name.lower()]

KeyError: 'batchnormalizationv1'

The model is not complicated, and it works without the BatchNormalization layers.

def create_model():
    from tensorflow.keras.layers import Conv1D, Dense, MaxPooling1D, Flatten, Dropout, BatchNormalization, Activation
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.optimizers import SGD

    model = Sequential()
    model.add(Conv1D(filters=300, kernel_size=19, padding='same', # activation='relu', # , activation='relu'
                 input_shape=(251, 4)))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(MaxPooling1D(pool_size=3, strides=3, padding='same'))

    model.add(Conv1D(filters=200, kernel_size=11, padding='same')) # , activation='relu'
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(MaxPooling1D(pool_size=4, strides=4, padding='same'))

    model.add(Conv1D(filters=200, kernel_size=7, padding='same')) # , activation='relu'
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(MaxPooling1D(pool_size=4, strides=4, padding='same'))

    model.add(Flatten())
    model.add(Dense(1000, activation='relu')) # kernel_regularizer=keras.regularizers.l2(0.001),
    model.add(Dropout(0.03)) # change dropout rate to 0.03

    model.add(Dense(1000, activation='relu')) 
    model.add(Dropout(0.03)) # change dropout rate to 0.03

    ## Change this number to adapt to the number of classes
    model.add(Dense(81)) # softmax vs. sigmoid? , activation='sigmoid'

    sgd = SGD(lr=0.002, decay=0, momentum=0.98, nesterov=True) # decay=1e-6,
    
    model.compile(loss=pearson_loss, optimizer=sgd, metrics=['mse']) 

    return model

model = create_model()
model.save(checkpoint_path + model_name+'.h5')

keras_model_weights = folder + 'results/keras/' + model_name + '.h5'
deeplift_model = kc.convert_model_from_saved_files(h5_file=keras_model_weights)

Can someone help me with this, please? :)
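
A possible workaround (an untested sketch, and an assumption about the cause: tf.keras records the class name as "BatchNormalizationV1" in the saved config, while deeplift's lookup table only knows "batchnormalization"; rewriting the class name in the .h5 attributes before conversion might get past the KeyError):

import h5py

with h5py.File(keras_model_weights, "r+") as f:
    config = f.attrs["model_config"]
    if isinstance(config, bytes):
        config = config.decode("utf-8")
    f.attrs["model_config"] = config.replace("BatchNormalizationV1", "BatchNormalization")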

MNIST Example returns difference in predictions: 1.0

I'm using the Google Colab GPU runtime to run the MNIST example using Keras 1.2.0.

This snippet is from the MNIST example:

from deeplift.util import compile_func
import numpy as np
from keras import backend as K

deeplift_model = revealcancel_model
deeplift_prediction_func = compile_func([deeplift_model.get_layers()[0].get_activation_vars()],
                                       deeplift_model.get_layers()[-1].get_activation_vars())
original_model_predictions = keras_model.predict(X_test, batch_size=200)
converted_model_predictions = deeplift.util.run_function_in_batches(
                                input_data_list=[X_test],
                                func=deeplift_prediction_func,
                                batch_size=200,
                                progress_update=None)
print("difference in predictions:",np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)))
assert np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)) < 10**-5
predictions = converted_model_predictions

However, I'm getting the following result:

difference in predictions: 1.0
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-4-3f4204ab927a> in <module>()
     13                                 progress_update=None)
     14 print("difference in predictions:",np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)))
---> 15 assert np.max(np.array(converted_model_predictions)-np.array(original_model_predictions)) < 10**-5
     16 predictions = converted_model_predictions

I used the provided shell script to download the .h5 model file.

Furthermore, the Compute Importance Scores snippet results in the following error:

RuntimeError: You set the target layer to an activation layer, which is unusual so I am throwing an error - did you mean to set the target layer to the layer *before* the activation layer instead? (recommended for  classification)

Can't install the tensorflow version

Hello, I know it's still in alpha, but I tried to install the tensorflow version as per instructions
and I got the following error after running the pip install:

error: package directory 'deeplift/backend' does not exist

And yes, I don't see the "backend" folder, though it is used by util.py

How to make DeepLIFT support customized keras layer?

Hi,

I was trying to use DeepLIFT to interpret my CNN model. I don't know how to convert my model because I built a customized layer.
My model is built with Keras, and the customized layer is something like global max pooling. My results show it works well for my data, so I should not remove it.
However, I really want to understand my model, and DeepLIFT is such a good choice. So I was wondering: how can I make DeepLIFT support my customized layer?

Thanks!
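
A sketch of one possible route (assumptions flagged: convert_functional_model accepts a custom_conversion_funcs argument, visible in the conversion-function signatures quoted in other issues here, but the dictionary key format and the conversion function's exact signature are guesses):

def my_global_maxpool_conversion(config, name, verbose, **kwargs):
    # return the deeplift layer(s) equivalent to the custom Keras layer;
    # the body depends on what the custom layer computes
    ...

deeplift_model = kc.convert_functional_model(
    model,
    nonlinear_mxts_mode=deeplift.blobs.NonlinearMxtsMode.DeepLIFT_GenomicsDefault,
    custom_conversion_funcs={"MyGlobalMaxPooling": my_global_maxpool_conversion})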

deepLIFT with Merge output layer

Hi Avanti,
Thanks for your amazing work on deepLIFT. Currently, I have a problem using deepLIFT with a Keras model that uses a Minimum layer, which pulls computation from K separate paths, as its output (a code sample follows):
...
minOutput = Minimum()([path_1,...,path_K])
model = Model(inputs=inputMatrix, outputs=minOutput)

It seems that deepLIFT does not support models with a merged output layer yet (please correct me if I am wrong). In that case, is there any way to work around the problem?

Thank you very much in advance.

Best,

ModuleNotFoundError: No module named 'deeplift.conversion'; 'deeplift' is not a package

Hi,

I have successfully installed DeepLIFT.
However, when I tried using it, I got the following error:

from deeplift.conversion import kerasapi_conversion as kc
ModuleNotFoundError: No module named 'deeplift.conversion'; 'deeplift' is not a package

I checked the package directory; all the files are in place.

How can I solve this issue?

Best regards,
Mayur
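
A quick diagnostic (an assumption: "'deeplift' is not a package" usually means a local file or folder named deeplift.py / deeplift is shadowing the installed package):

import deeplift
print(deeplift.__file__)   # should point into site-packages, not the current working directory

If it points at a local file, renaming that file should fix the import.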

Which version of keras should be used?

I have the newest Keras version installed, Keras (2.0.1) with Theano (0.8.2) (while deeplift depends on theano >0.8).
No wonder I get runtime exceptions about an API mismatch:

 .....
/home/kirill/Desktop/deeplift/deeplift/conversion/keras_conversion.py in layer_name_to_conversion_function(layer_name)
    335     # lowercase to create resistance to capitalization changes
    336     # was a problem with previous Keras versions
--> 337     return name_dict[layer_name.lower()]
    338 
    339 

KeyError: 'conv1d'

I have very little experience with Keras, and it seems that I can't fix this bug by myself :(
I think the easiest solution would be to pin a specific (no matter how old) Keras version in the dependencies.
Could you please do that?

TAL-GATA example: KeyError: 'class_name' when loading Keras model

Hi,

I am learning how to use deeplift by trying the TAL-GATA example you provided. But unfortunately, when loading your Keras model, I got: KeyError: 'class_name'

I am using Keras 1.1.1 with Theano as the backend, so I guess this error is caused by the yaml format changing in Keras 1.X.

Could you please upload separate model data generated using Keras 1.X, or share the code you used for model construction and training?

Thank you so much!

Suggestion: Keep track of Keras layer names during conversion

In order to make it easier to retrieve original Keras layers in a DeepLIFT model after conversion, I would suggest having a layer object variable alt_name or original_name or something like that, which contains the name used in the original Keras model. For activations this variable could be set to, e.g., the Keras layer name + '_act'. This way, re-identification of layers in the converted model is possible.

Importing models from JSON

I noticed, while doing an export/import of a Keras model, that deeplift no longer parses Keras's model_to_json() output.

The fix I've been using is to remove the 'Layers' child of the first 'Config' parent.

JSON output
{"class_name": "Sequential", "config": {"name": "sequential_1", "layers": [{"class_name": "Conv1D", "config": {"name": "conv1d_1", "trainable": true, "batch_input_shape": [null, 1948, 4], "dtype": "float32", "filters": 64, "kernel_size": [3], "strides": [1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1], "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Flatten", "config": {"name": "flatten_1", "trainable": true, "data_format": "channels_last"}}, {"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "units": 100, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dropout", "config": {"name": "dropout_1", "trainable": true, "rate": 0.5, "noise_shape": null, "seed": null}}, {"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "units": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation_1", "trainable": true, "activation": "sigmoid"}}]}, "keras_version": "2.2.4", "backend": "tensorflow"}

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      4     h5_file=keras_model_weights,
      5     json_file=keras_model_json,
----> 6     nonlinear_mxts_mode=NonlinearMxtsMode.DeepLIFT_GenomicsDefault)
      7

~/.local/share/virtualenvs/SeniorProject-AFH1tCxG/lib/python3.6/site-packages/deeplift/conversion/kerasapi_conversion.py in convert_model_from_saved_files(h5_file, json_file, yaml_file, **kwargs)
    361     for layer_config in layer_configs:
    362
--> 363         layer_name = layer_config["config"]["name"]
    364         assert layer_name in model_weights,\
    365             ("Layer "+layer_name+" is in the layer names but not in the "

TypeError: string indices must be integers

Accepted JSON
{"class_name":"Sequential","config":[{"class_name":"Conv1D","config":{"name":"conv1d_1","trainable":true,"batch_input_shape":[null,1948,4],"dtype":"float32","filters":64,"kernel_size":[3],"strides":[1],"padding":"valid","data_format":"channels_last","dilation_rate":[1],"activation":"relu","use_bias":true,"kernel_initializer":{"class_name":"VarianceScaling","config":{"scale":1.0,"mode":"fan_avg","distribution":"uniform","seed":null}},"bias_initializer":{"class_name":"Zeros","config":{}},"kernel_regularizer":null,"bias_regularizer":null,"activity_regularizer":null,"kernel_constraint":null,"bias_constraint":null}},{"class_name":"Flatten","config":{"name":"flatten_1","trainable":true,"data_format":"channels_last"}},{"class_name":"Dense","config":{"name":"dense_1","trainable":true,"units":100,"activation":"linear","use_bias":true,"kernel_initializer":{"class_name":"VarianceScaling","config":{"scale":1.0,"mode":"fan_avg","distribution":"uniform","seed":null}},"bias_initializer":{"class_name":"Zeros","config":{}},"kernel_regularizer":null,"bias_regularizer":null,"activity_regularizer":null,"kernel_constraint":null,"bias_constraint":null}},{"class_name":"Dropout","config":{"name":"dropout_1","trainable":true,"rate":0.5,"noise_shape":null,"seed":null}},{"class_name":"Dense","config":{"name":"dense_2","trainable":true,"units":1,"activation":"linear","use_bias":true,"kernel_initializer":{"class_name":"VarianceScaling","config":{"scale":1.0,"mode":"fan_avg","distribution":"uniform","seed":null}},"bias_initializer":{"class_name":"Zeros","config":{}},"kernel_regularizer":null,"bias_regularizer":null,"activity_regularizer":null,"kernel_constraint":null,"bias_constraint":null}},{"class_name":"Activation","config":{"name":"activation_1","trainable":true,"activation":"sigmoid"}}],"keras_version":"2.2.4","backend":"tensorflow"}

Attached are the files, changed to .txt for uploading; the h5 won't upload due to size constraints.
JSON conversion Issue.txt
JSONkerasbase_model.txt
JSONkerasbase_modelAdjusted.txt
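
A sketch of the workaround described above (the file names are hypothetical; the transformation mirrors the difference between the two JSON blobs): flatten config["layers"] into config so the converter sees the older list-style format.

import json

with open("kerasbase_model.json") as f:        # hypothetical filename
    cfg = json.load(f)

# Keras 2.2.x nests the layer list under config["layers"]
if isinstance(cfg["config"], dict) and "layers" in cfg["config"]:
    cfg["config"] = cfg["config"]["layers"]

with open("kerasbase_model_adjusted.json", "w") as f:
    json.dump(cfg, f)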
