
Comments (8)

anilsemizoglu commented on July 18, 2024

changing line 5 in keras2cpp.py to the following line has removed the tuple error.

from tensorflow.keras.layers import (
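
For context on why that import line matters: keras2cpp.py builds its LAYERS tuple from the layer classes it imports, and export_model looks each model layer up in that tuple by class identity. A rough sketch of the relevant structure (the exact layer list in the real file may differ):

    # Illustrative only; the actual import list in keras2cpp.py may differ.
    from tensorflow.keras.layers import (
        Dense, Conv1D, Conv2D, Flatten, Activation,
        MaxPooling2D, LSTM, Embedding, BatchNormalization,
    )

    LAYERS = (Dense, Conv1D, Conv2D, Flatten, Activation,
              MaxPooling2D, LSTM, Embedding, BatchNormalization)

    # export_model calls LAYERS.index(type(layer)), so these classes must come
    # from the same namespace the model was built with (keras vs
    # tensorflow.keras vs tensorflow.python.keras); a mismatch raises
    # "ValueError: tuple.index(x): x not in tuple".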


comntr commented on July 18, 2024

Attaching the 7 MB weights file: weights.zip


comntr commented on July 18, 2024

I've added some logging and got this:

def export_model(model, filename):
    with open(filename, 'wb') as f:
        layers = [layer for layer in model.layers
                  if type(layer).__name__ not in ['Dropout']]
        f.write(struct.pack('I', len(layers)))

        for layer in layers:
            print(f'Exporting layer: {type(layer)}')
            f.write(struct.pack('I', LAYERS.index(type(layer)) + 1))
            export(layer, f)
Exporting layer: <class 'keras.layers.embeddings.Embedding'>
Exporting layer: <class 'keras.layers.recurrent.LSTM'>
Exporting layer: <class 'keras.layers.recurrent.LSTM'>
Exporting layer: <class 'keras.layers.recurrent.LSTM'>
Exporting layer: <class 'keras.layers.wrappers.TimeDistributed'>
Traceback (most recent call last):
  File "h5-parser.py", line 14, in <module>
    export_model(model, 'example.model')
  File "/home/user1/data/src/keras2cpp/keras2cpp.py", line 214, in export_model
    f.write(struct.pack('I', LAYERS.index(type(layer)) + 1))
ValueError: tuple.index(x): x not in tuple
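
A quick way to see ahead of time which layer types will trip this up, assuming keras2cpp.py is on the path and exposes LAYERS at module level (it is referenced as a global inside export_model):

    from keras2cpp import LAYERS  # assumption: keras2cpp.py is importable

    def unsupported_layers(model):
        # mirror export_model's own filtering of Dropout layers,
        # then report every layer class that is missing from LAYERS
        kept = [l for l in model.layers if type(l).__name__ not in ['Dropout']]
        return sorted({type(l).__name__ for l in kept if type(l) not in LAYERS})

    print(unsupported_layers(model))  # for the model above: ['TimeDistributed']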


comntr commented on July 18, 2024

I guess this library just doesn't implement the Keras TimeDistributed layer.

"Fixing" it this way in the original keras-char-rnn model:

    # model.add(TimeDistributed(Dense(vocab_size)))
    model.add(Dense(vocab_size))

Training still converges, the model still works, though it's probably weaker now.
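
For what it's worth, in tf.keras 2.x a Dense layer applied to a 3-D sequence tensor acts on the last axis, i.e. per timestep, so dropping the TimeDistributed wrapper should be close to equivalent. A minimal sketch with hypothetical sizes:

    from tensorflow.keras import Input, Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    vocab_size, seq_len = 256, 64                    # hypothetical sizes
    model = Sequential([
        Input(shape=(seq_len,)),
        Embedding(vocab_size, 128),
        LSTM(256, return_sequences=True),
        Dense(vocab_size, activation='softmax'),     # applied per timestep
    ])
    print(model.output_shape)                        # (None, 64, 256)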

Regarding the cmake problem: I have no idea what causes it, but running just cmake . produces quite a bit of output without errors. I don't know how to get it to compile the files, but running g++ directly works:

g++ -std=gnu++17 cpp_model.cc src/*.cc src/layers/*.cc

Now I could run ./a.out, and it indeed printed a long vector, presumably the softmax output with the 256 probabilities.


oleg3630 commented on July 18, 2024

I'm having the same problem:

Exporting layer: <class 'keras.engine.input_layer.InputLayer'>
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-f273440699f5> in <module>
      1 print("saving keras2cpp model")
----> 2 export_model(model, 'segmentation.model')
      3 print("done!")
      4 #model.save('segmentation_16classes_'+str(imageSize[0])+'_final.h5')
      5 #!ls

~\GoogleDrive/NN\keras2cpp.py in export_model(model, filename)
    212         for layer in layers:
    213             print(f'Exporting layer: {type(layer)}')
--> 214             f.write(struct.pack('I', LAYERS.index(type(layer)) + 1))
    215             export(layer, f)

ValueError: tuple.index(x): x not in tuple

My model is Unet from segmentation_models with variable input size (None, None):
model = Unet('resnet50', input_shape=(None, None, 3), encoder_weights='imagenet', encoder_freeze=True, classes=classesCount, decoder_filters=(384, 256, 192, 128, 64), activation='softmax')

Is this library incompatible with None input size or what?
Thank you in advance!
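
One way to narrow it down, using the same segmentation_models keyword arguments and the classesCount variable from the snippet above: rebuild with a fixed input size and try the export again. If the same error still appears on InputLayer (and later on Conv2DTranspose or Concatenate), the problem is unsupported layer types rather than the None input size.

    from segmentation_models import Unet

    # hypothetical fixed-size variant of the model above
    model_fixed = Unet('resnet50', input_shape=(256, 256, 3),
                       encoder_weights='imagenet', encoder_freeze=True,
                       classes=classesCount, activation='softmax')
    export_model(model_fixed, 'segmentation_fixed.model')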


furkank75 commented on July 18, 2024

@oleg3630 have you solved this problem? I would like to convert the Unet from Keras to C++.


furkank75 commented on July 18, 2024
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 256, 256, 1)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 256, 256, 24) 240         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 256, 256, 24) 96          conv2d_1[0][0]                   
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 256, 256, 24) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 256, 256, 24) 5208        activation_1[0][0]               
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 256, 256, 24) 96          conv2d_2[0][0]                   
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 256, 256, 24) 0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 128, 128, 24) 0           activation_2[0][0]               
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 128, 128, 24) 0           max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 128, 128, 48) 10416       dropout_1[0][0]                  
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 128, 128, 48) 192         conv2d_3[0][0]                   
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 128, 128, 48) 0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 128, 128, 48) 20784       activation_3[0][0]               
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 128, 128, 48) 192         conv2d_4[0][0]                   
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 128, 128, 48) 0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 64, 64, 48)   0           activation_4[0][0]               
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 64, 64, 48)   0           max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 64, 64, 96)   41568       dropout_2[0][0]                  
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 64, 64, 96)   384         conv2d_5[0][0]                   
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 64, 64, 96)   0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 64, 64, 96)   83040       activation_5[0][0]               
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 64, 64, 96)   384         conv2d_6[0][0]                   
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 64, 64, 96)   0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 32, 32, 96)   0           activation_6[0][0]               
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 32, 32, 96)   0           max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 32, 32, 192)  166080      dropout_3[0][0]                  
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 192)  768         conv2d_7[0][0]                   
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 32, 32, 192)  0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 32, 32, 192)  331968      activation_7[0][0]               
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 32, 32, 192)  768         conv2d_8[0][0]                   
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 32, 32, 192)  0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 16, 16, 192)  0           activation_8[0][0]               
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 16, 16, 192)  0           max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 16, 16, 384)  663936      dropout_4[0][0]                  
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 16, 16, 384)  1536        conv2d_9[0][0]                   
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 16, 16, 384)  0           batch_normalization_9[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 16, 16, 384)  1327488     activation_9[0][0]               
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 16, 16, 384)  1536        conv2d_10[0][0]                  
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 16, 16, 384)  0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 192)  663744      activation_10[0][0]              
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 32, 32, 384)  0           conv2d_transpose_1[0][0]         
                                                                 activation_8[0][0]               
__________________________________________________________________________________________________
dropout_5 (Dropout)             (None, 32, 32, 384)  0           concatenate_1[0][0]              
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 32, 32, 192)  663744      dropout_5[0][0]                  
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 32, 32, 192)  768         conv2d_11[0][0]                  
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 32, 32, 192)  0           batch_normalization_11[0][0]     
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 32, 32, 192)  331968      activation_11[0][0]              
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 32, 32, 192)  768         conv2d_12[0][0]                  
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 32, 32, 192)  0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 96)   165984      activation_12[0][0]              
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 64, 64, 192)  0           conv2d_transpose_2[0][0]         
                                                                 activation_6[0][0]               
__________________________________________________________________________________________________
dropout_6 (Dropout)             (None, 64, 64, 192)  0           concatenate_2[0][0]              
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 64, 64, 96)   165984      dropout_6[0][0]                  
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 64, 64, 96)   384         conv2d_13[0][0]                  
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 64, 64, 96)   0           batch_normalization_13[0][0]     
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 64, 64, 96)   83040       activation_13[0][0]              
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 64, 64, 96)   384         conv2d_14[0][0]                  
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 64, 64, 96)   0           batch_normalization_14[0][0]     
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 48) 41520       activation_14[0][0]              
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 128, 128, 96) 0           conv2d_transpose_3[0][0]         
                                                                 activation_4[0][0]               
__________________________________________________________________________________________________
dropout_7 (Dropout)             (None, 128, 128, 96) 0           concatenate_3[0][0]              
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 128, 128, 48) 41520       dropout_7[0][0]                  
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 128, 128, 48) 192         conv2d_15[0][0]                  
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 128, 128, 48) 0           batch_normalization_15[0][0]     
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 128, 128, 48) 20784       activation_15[0][0]              
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 128, 128, 48) 192         conv2d_16[0][0]                  
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 128, 128, 48) 0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
conv2d_transpose_4 (Conv2DTrans (None, 256, 256, 24) 10392       activation_16[0][0]              
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 256, 256, 48) 0           conv2d_transpose_4[0][0]         
                                                                 activation_2[0][0]               
__________________________________________________________________________________________________
dropout_8 (Dropout)             (None, 256, 256, 48) 0           concatenate_4[0][0]              
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 256, 256, 24) 10392       dropout_8[0][0]                  
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 256, 256, 24) 96          conv2d_17[0][0]                  
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 256, 256, 24) 0           batch_normalization_17[0][0]     
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 256, 256, 24) 5208        activation_17[0][0]              
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 256, 256, 24) 96          conv2d_18[0][0]                  
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 256, 256, 24) 0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 256, 256, 1)  25          activation_18[0][0]              
==================================================================================================
Total params: 4,863,865
Trainable params: 4,859,449
Non-trainable params: 4,416
__________________________________________________________________________________________________
Exporting layer: <class 'keras.engine.input_layer.InputLayer'>
Traceback (most recent call last):
  File "D:/Projects/1_uNet/test.py", line 6, in <module>
    export_model(model1, 'example.model')
  File "D:\Projects\1_uNet\keras2cpp.py", line 214, in export_model
    f.write(struct.pack('I', LAYERS.index(type(layer)) + 1))
ValueError: tuple.index(x): x not in tuple

The error came up with conv2d.


bevancollins commented on July 18, 2024

changing line 5 in keras2cpp.py to the following line has removed the tuple error.

from tensorflow.keras.layers import (

the above didn't work for me but this did:

from tensorflow.python.keras.layers import (
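
Likely the reason both variants show up: the same layer name from different Keras namespaces is a different class object, and LAYERS.index(type(layer)) matches by identity, so the import in keras2cpp.py has to come from the same namespace the model was actually built or loaded with. A quick check, assuming an older TF 2.x where tensorflow.python.keras still exists:

    from tensorflow.keras.layers import Dense as public_Dense
    from tensorflow.python.keras.layers import Dense as internal_Dense

    # Depending on the TF version these may be one and the same class or two
    # distinct ones; when they differ, a LAYERS tuple built from one namespace
    # won't contain layers instantiated from the other.
    print(public_Dense is internal_Dense)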

