Comments (5)
Answering this in more general terms first.
Our default recommendation would be to use Conv2d: tensorflow/tensorflow#43141 (comment)
However, as you have mentioned in tensorflow/tensorflow#48290 (comment), the path forward for you is to have a custom kernel.
Since you are using TFLM (instead of TfLite), all kernels (custom or builtin) need to live in the https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/kernels directory.
Can you try to use the pattern of the circular_buffer custom TFLM op? See tensorflow/lite/micro/micro_mutable_op_resolver.h, lines 161 to 164 at commit 0afb62f, in tflite-micro.
Thanks for your reply.
Is it correct to give the Conv1D layer a name this way, with name=...:
model.add(Conv1D(filters=16, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features), name="cd1"))
and then to add the layer op as follows?
static tflite::MicroMutableOpResolver<3> micro_op_resolver;
tflite_status = micro_op_resolver.AddCustom(
    "cd1", tflite::ops::micro::Register_CONV_1D());
if (tflite_status != kTfLiteOk)
{
  error_reporter->Report("Could not add Conv op");
  while (1);
}
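For reference, the computation such a custom CONV_1D kernel would have to produce (float path, ignoring quantization and strides) can be sketched in plain numpy; the shapes below follow the Keras layer in the question, and conv1d_valid is a hypothetical name, not a TFLM API:

```python
import numpy as np

def conv1d_valid(x, w, b):
    """Reference Conv1D with 'valid' padding: x is (timesteps, in_ch),
    w is (kernel_size, in_ch, filters), b is (filters,).
    Returns (timesteps - kernel_size + 1, filters)."""
    t, _ = x.shape
    k, _, f = w.shape
    out = np.empty((t - k + 1, f))
    for i in range(t - k + 1):
        # each output step contracts a length-k input window against the kernel
        out[i] = np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1])) + b
    return out

x = np.random.randn(8, 2)       # n_timesteps=8, n_features=2
w = np.random.randn(3, 2, 16)   # kernel_size=3, filters=16
b = np.zeros(16)
y = np.maximum(conv1d_valid(x, w, b), 0.0)  # ReLU, matching activation='relu'
print(y.shape)  # (6, 16): 8 - 3 + 1 timesteps, 16 filters
```

This only illustrates the math; a real TFLM kernel would additionally handle quantized tensors, padding modes, and the TfLiteTensor plumbing.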
Unfortunately I do not have any out-of-the-box solutions for you. In my opinion, your best bet is to see if you can use conv2d during training.
If you continue down the path of a custom op, you need to ensure that the tflite model contains the same custom op as well. This is going to be tricky in your case because converting from TF to tflite will replace conv1d with expand_dims and conv2d.
What you could do here is modify the flatbuffer after conversion to insert your custom op in place of the expand_dims + conv2d pair. While technically possible, this is off the beaten path, so you will be on your own w.r.t. working with flatbuffers.
For completeness, I wanted to call out that porting https://github.com/tensorflow/tensorflow/blob/4b6c1e5418c43c38d3b0f9e3bd45e4c915439a1a/tensorflow/lite/kernels/expand_dims.cc from TfLite to TFLM (e.g. picking up tensorflow/tensorflow#35189) would not really help since Conv2d does not support causal padding (tensorflow/tensorflow#48567 (comment)).
The underlying issue is the missing support for causal padding in both TfLite and TfLite Micro: tensorflow/tensorflow#48567.
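The mechanics of causal padding are simple to state: left-pad the sequence with kernel_size - 1 zeros, so each output step depends only on the current and earlier inputs and the output keeps the input length. A minimal single-channel numpy sketch (illustrative only, not the TfLite implementation):

```python
import numpy as np

def causal_conv1d(x, w):
    """'Causal' Conv1D on one channel: left-pad with kernel_size - 1 zeros,
    then run a valid convolution, so output[t] depends only on x[:t + 1]."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])  # pad on the left only
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, 0.5])  # kernel_size=2 moving average
y = causal_conv1d(x, w)
print(y)  # [0.5 1.5 2.5 3.5] -- same length as x, no look-ahead
```

The asymmetric (left-only) padding is exactly what the symmetric 'SAME' padding of the existing Conv2d kernels cannot express.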
To echo my comment on #137:
When there is a feature that is missing in both TfLite and TFLM, the path forward is to first implement in TfLite and then port to TFLM.
Sorry @advaitjain,
is it possible to create a custom Conv1D layer by following this link: https://www.tensorflow.org/guide/keras/custom_layers_and_models, then try to implement the corresponding TF Lite op (by following https://www.tensorflow.org/lite/guide/ops_custom) instead?
That route may be possible, but unfortunately it's not a path we have any documentation or support for, since it's a pretty advanced customization. As @advaitjain says above, we recommend using Conv2D with a 1xSize kernel shape instead of Conv1D. Closing this bug, unless there's some information I'm missing?
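The recommended substitution works because a 2D convolution with a 1xK kernel over a height-1 "image" computes exactly a 1D convolution. A minimal numpy check of that equivalence (single channel, no padding, illustrative only):

```python
import numpy as np

def conv1d(x, w):
    """Valid 1D convolution (cross-correlation) of x with kernel w."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def conv2d(img, ker):
    """Valid 2D convolution (cross-correlation) of img with kernel ker."""
    kh, kw = ker.shape
    h, wdt = img.shape
    return np.array([[np.sum(img[r:r + kh, c:c + kw] * ker)
                      for c in range(wdt - kw + 1)]
                     for r in range(h - kh + 1)])

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
w = np.array([1.0, -1.0, 2.0])

y1 = conv1d(x, w)
# expand_dims: treat the sequence as a height-1 image, the kernel as 1 x k
y2 = conv2d(x[np.newaxis, :], w[np.newaxis, :])[0]
print(np.allclose(y1, y2))  # True
```

This is the same rewrite the TF-to-tflite converter performs (expand_dims + conv2d), which is why Conv2D with a 1xSize kernel is the supported route.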