This is a tool for converting Kaldi models to ONNX / TensorFlow models for inference, based on XiaoMi's Kaldi-ONNX project.
XiaoMi's project converts Kaldi models to ONNX models. An ONNX model can usually be used for inference directly with the ONNX Runtime toolkit, or converted to a TensorFlow model with the ONNX-TF toolkit.
However, XiaoMi's implementation emits custom ONNX nodes that only the MACE framework supports.
This project builds on theirs but implements the Kaldi-specific nodes with common ONNX operators, so the converted model can be used for inference directly.
Inference for Kaldi's most popular nnet3 TDNN-F networks is supported.
Python 3.6.8
pip3 install -r requirements.txt
This tool only accepts Kaldi's text model format as input.
If you have a binary model, Kaldi's nnet3-copy
tool can convert it to a text one:
kaldi/src/nnet3bin/nnet3-copy --binary=false --prepare-for-test=true <final.mdl> <final.txt>
Don't forget the --prepare-for-test=true
option.
Before converting, run kaldi/src/nnet3bin/nnet3-am-info <final.mdl>
to obtain left_context and right_context, which are required arguments for the converter.
python3 -m converter.converter <input_kaldi_nnet3_file> <left_context> <right_context> <out_model_file> [--format <format>] [--chunk_size <chunk_size>]
--format
: 'onnx' (default) - output an ONNX model; 'tf' - output a TensorFlow pb model.
--chunk_size
: default 21. With a subsample factor of 3, a 21-frame input chunk produces 7 output frames for decoding.
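The chunk-size arithmetic above can be sketched as follows; the rounding behavior for chunk sizes that are not exact multiples of the subsample factor is an assumption (the default of 21 divides evenly by 3):

```python
# Number of decodable output frames produced per input chunk.
# With subsample factor 3, a 21-frame chunk yields 21 / 3 = 7 frames.
def output_frames(chunk_size: int, subsample_factor: int = 3) -> int:
    # Ceiling division; an assumption for chunk sizes that are not
    # exact multiples of the subsample factor.
    return (chunk_size + subsample_factor - 1) // subsample_factor

print(output_frames(21))  # 7
```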
See the tests for how to use the pb model.
model1 uses the model structure from swbd/tdnn_7p.
model2 uses the model structure from swbd/tdnn_7q.
After converting, you can review the ONNX model with a graphical tool: ONNX Model Viewer.
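As a sketch of inference with the converted ONNX model: the chunk layout (left_context + chunk_size + right_context frames), the context values, and the feature dimension below are assumptions for illustration; read the real context values from nnet3-am-info and inspect your model's input/output names with a viewer such as ONNX Model Viewer.

```python
import os
import numpy as np

try:
    import onnxruntime as ort  # optional; pip install onnxruntime
except ImportError:
    ort = None

# Hypothetical values: take the contexts from nnet3-am-info and the
# feature dimension from your feature-extraction config.
LEFT_CONTEXT, RIGHT_CONTEXT = 12, 6
CHUNK_SIZE = 21   # converter default
FEAT_DIM = 40

def make_chunk(feats, start):
    """Slice one chunk of left_context + chunk_size + right_context frames
    from a (num_frames, feat_dim) matrix, edge-padding at the boundaries.
    The exact input layout the converted model expects is an assumption."""
    padded = np.pad(feats, ((LEFT_CONTEXT, RIGHT_CONTEXT), (0, 0)), mode="edge")
    span = LEFT_CONTEXT + CHUNK_SIZE + RIGHT_CONTEXT
    return padded[start : start + span][None, ...]  # add batch dimension

feats = np.random.randn(100, FEAT_DIM).astype(np.float32)
chunk = make_chunk(feats, 0)
print(chunk.shape)  # (1, 39, 40): 12 + 21 + 6 frames

if ort is not None and os.path.exists("model.onnx"):
    sess = ort.InferenceSession("model.onnx")
    input_name = sess.get_inputs()[0].name
    outputs = sess.run(None, {input_name: chunk})
```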