
nncase's Introduction

nncase

GitHub repository | Gitee repository | GitHub release

Switch to Chinese (切换中文)

nncase is a neural network compiler for AI accelerators.

Telegram: nncase community | Technical discussion QQ group: 790699378 (join answer: 人工智能)


Tips

  • [2024/05/28] [BUG] nncase v2.8.3: ReduceSum (onnx) has a bug that causes a segmentation fault. If your model contains ReduceSum, please downgrade to v2.8.2 (see the pin command below).
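
One way to stay on the working release is to pin both packages (assuming a matching nncase-kpu build exists for that version):

    pip install nncase==2.8.2 nncase-kpu==2.8.2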

K230

Install

  • Linux:

    pip install nncase nncase-kpu
  • Windows:

    1. pip install nncase
    2. Download `nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl` from the link below.
    3. pip install nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl

All versions of nncase and nncase-kpu are available in Release.
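
After installation, a model is compiled into a kmodel through the nncase Python API. The following is a minimal sketch based on the API shown in the project's user guides; the file names and the `k230` target string are placeholders, and option names may differ between nncase versions.

import nncase

# Placeholder paths and target; adjust for your own model and board.
compile_options = nncase.CompileOptions()
compile_options.target = "k230"
compiler = nncase.Compiler(compile_options)

# Import the float TFLite model (compiler.import_onnx exists for ONNX models).
with open("model.tflite", "rb") as f:
    compiler.import_tflite(f.read(), nncase.ImportOptions())

# Compile and serialize the kmodel.
compiler.compile()
with open("model.kmodel", "wb") as f:
    f.write(compiler.gencode_tobytes())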

Supported operators

benchmark test

| kind | model | shape | quant_type(If/W) | nncase_fps | tflite_onnx_result | accuracy | info |
|---|---|---|---|---|---|---|---|
| Image Classification | mobilenetv2 | [1,224,224,3] | u8/u8 | 600.24 | top-1 = 71.3%, top-5 = 90.1% | top-1 = 71.1%, top-5 = 90.0% | dataset(ImageNet 2012, 50000 images), tflite |
| Image Classification | resnet50V2 | [1,3,224,224] | u8/u8 | 86.17 | top-1 = 75.44%, top-5 = 92.56% | top-1 = 75.11%, top-5 = 92.36% | dataset(ImageNet 2012, 50000 images), onnx |
| Image Classification | yolov8s_cls | [1,3,224,224] | u8/u8 | 130.497 | top-1 = 72.2%, top-5 = 90.9% | top-1 = 72.2%, top-5 = 90.8% | dataset(ImageNet 2012, 50000 images), yolov8s_cls(v8.0.207) |
| Object Detection | yolov5s_det | [1,3,640,640] | u8/u8 | 23.645 | bbox: mAP50-90 = 0.374, mAP50 = 0.567 | bbox: mAP50-90 = 0.369, mAP50 = 0.566 | dataset(coco val2017, 5000 images), yolov5s_det(v7.0 tag, rect=False, conf=0.001, iou=0.65) |
| Object Detection | yolov8s_det | [1,3,640,640] | u8/u8 | 9.373 | bbox: mAP50-90 = 0.446, mAP50 = 0.612, mAP75 = 0.484 | bbox: mAP50-90 = 0.404, mAP50 = 0.593, mAP75 = 0.45 | dataset(coco val2017, 5000 images), yolov8s_det(v8.0.207, rect = False) |
| Image Segmentation | yolov8s_seg | [1,3,640,640] | u8/u8 | 7.845 | bbox: mAP50-90 = 0.444, mAP50 = 0.606, mAP75 = 0.484; segm: mAP50-90 = 0.371, mAP50 = 0.578, mAP75 = 0.396 | bbox: mAP50-90 = 0.444, mAP50 = 0.606, mAP75 = 0.484; segm: mAP50-90 = 0.371, mAP50 = 0.579, mAP75 = 0.397 | dataset(coco val2017, 5000 images), yolov8s_seg(v8.0.207, rect = False, conf_thres = 0.0008) |
| Pose Estimation | yolov8n_pose_320 | [1,3,320,320] | u8/u8 | 36.066 | bbox: mAP50-90 = 0.6, mAP50 = 0.843, mAP75 = 0.654; keypoints: mAP50-90 = 0.358, mAP50 = 0.646, mAP75 = 0.353 | bbox: mAP50-90 = 0.6, mAP50 = 0.841, mAP75 = 0.656; keypoints: mAP50-90 = 0.359, mAP50 = 0.648, mAP75 = 0.357 | dataset(coco val2017, 2346 images), yolov8n_pose(v8.0.207, rect = False) |
| Pose Estimation | yolov8n_pose_640 | [1,3,640,640] | u8/u8 | 10.88 | bbox: mAP50-90 = 0.694, mAP50 = 0.909, mAP75 = 0.776; keypoints: mAP50-90 = 0.509, mAP50 = 0.798, mAP75 = 0.544 | bbox: mAP50-90 = 0.694, mAP50 = 0.909, mAP75 = 0.777; keypoints: mAP50-90 = 0.508, mAP50 = 0.798, mAP75 = 0.54 | dataset(coco val2017, 2346 images), yolov8n_pose(v8.0.207, rect = False) |
| Pose Estimation | yolov8s_pose | [1,3,640,640] | u8/u8 | 5.568 | bbox: mAP50-90 = 0.733, mAP50 = 0.925, mAP75 = 0.818; keypoints: mAP50-90 = 0.605, mAP50 = 0.857, mAP75 = 0.666 | bbox: mAP50-90 = 0.734, mAP50 = 0.925, mAP75 = 0.819; keypoints: mAP50-90 = 0.604, mAP50 = 0.859, mAP75 = 0.669 | dataset(coco val2017, 2346 images), yolov8s_pose(v8.0.207, rect = False) |

Demo

Demo GIFs: eye gaze | space_resize | face pose


K210/K510

Supported operators


Features

  • Supports multiple inputs and outputs and multi-branch structures
  • Static memory allocation, no heap memory needed
  • Operator fusion and optimizations
  • Supports float and quantized uint8 inference
  • Supports post-training quantization from a float model with a calibration dataset (see the sketch after this list)
  • Flat model format with zero-copy loading
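
To illustrate the post-training quantization feature above, here is a sketch based on the nncase Python API. The random calibration batch, file names, and `k230` target are placeholders (real sample images should be used), and some releases pass calibration samples as a list of arrays rather than raw bytes.

import numpy as np
import nncase

# Placeholder compile setup, following the same pattern as the earlier sketch.
compile_options = nncase.CompileOptions()
compile_options.target = "k230"
compiler = nncase.Compiler(compile_options)
with open("model.tflite", "rb") as f:
    compiler.import_tflite(f.read(), nncase.ImportOptions())

# Post-training quantization: provide a small calibration set so nncase can
# collect activation ranges before emitting a quantized (uint8) kmodel.
ptq_options = nncase.PTQTensorOptions()
ptq_options.samples_count = 10
calib = np.random.rand(10, 224, 224, 3).astype(np.float32)  # stand-in for real images
ptq_options.set_tensor_data(calib.tobytes())  # some versions expect a list of arrays instead
compiler.use_ptq(ptq_options)

compiler.compile()
with open("model.kmodel", "wb") as f:
    f.write(compiler.gencode_tobytes())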

Architecture

[nncase architecture diagram]

Build from source

It is recommended to install nncase directly through pip. At present, the source code for the K510 and K230 chips is not open source, so nncase-K510 and nncase-kpu (K230) cannot be obtained by compiling from source.

If your model contains operators that nncase does not yet support, you can request them in an issue or implement them yourself and submit a PR; they will be integrated in a later release. You can also contact us for a temporary build. Here are the steps to compile nncase.

git clone https://github.com/kendryte/nncase.git
cd nncase
mkdir build && cd build

# Use Ninja
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
ninja && ninja install

# Use make
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
make && make install

Resources

Canaan developer community

The Canaan developer community contains all resources related to the K210, K510, and K230.

  • 资料下载 (Downloads): pre-built images for the development boards of the three chips.
  • 文档 (Documents): documentation for the three chips.
  • 模型库 (Model zoo): examples and code for industrial, security, educational, and other scenarios that run on the K210 and K230.
  • 模型训练 (Model training): a model-training platform for the K210 and K230 that supports training for various scenarios.

Bilibili

K210 related repo

K230 related repo


nncase's People

Contributors

aaltonenzhang, aaltonenzhangjizhao, aclex, akemihomua, chahatdeep, curioyang, fusionbolt, hejunchao100813, kartben, krishnak, lerenhua, liuzm6217-jianan, mirecta, nihui, raylin51, shtsno24, sunnycase, uranus0515, xhuohai, xiangbingj, zhangyang2057, zhen8838


nncase's Issues

model run error!

After converting the model below to a KMODEL, it runs successfully. As shown in the figure, the model has two output layers, of sizes 21x10x7 = 1470 and 21x20x14 = 5880. However, the KMODEL reports the two output layers as 5880 and 23520. Why is that?

(model diagram)
(screenshot: QQ图片20190424234831)

How do I output the network's final dequantized result and display it on the screen?

I built a segmentation network with TF that runs well on the PC. After converting it to a kmodel with nncase_v0.1, the main body is a stack of convolutions; the last few lines of the model info relevant to this question are:
(image)

I am using a Maix Bit board with firmware 0.4.0_44_minimum. After loading the kmodel above, the last few lines of the printed network info are:
(image)

Comparing the two, "layer[55] KL_INVALID, 16 bytes" in the latter seems to correspond to "56: Logistic 1x8x64x128->1x8x64x128" in the former, which is a Sigmoid in the original network. So:

Question 1: The model info printed after loading onto the board shows the keyword INVALID, yet Sigmoid is supported by nncase_v0.1. During actual inference, is the Sigmoid really executed?

Since image segmentation ultimately needs to draw an image on the screen, I followed http://blog.sipeed.com/p/673.html#more-673 and printed the feature map of layer 53 of my network with the following program:
(image)

The intermediate feature maps can be displayed, but I can never obtain the final network output. Running the kmodel up to different layers prints the following output info:
(image)

It looks like [54] and [55] are quantization parameters, so:

Question 2: How do I obtain the network's final output, i.e. the result after layer[54] and layer[55] of the kmodel, in a form that can be rendered as an image on the small screen?

Thanks!

Is DepthwiseConv2d supported?

I tested with TensorFlow's official MobileNetV2 and found that DepthwiseConv2d is not supported. What is the reason?

Fatal: Layer DepthwiseConv2d is not supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer DepthwiseConv2d is not supported

How was the example yolo.tflite file trained?

Hi, I'd like to ask how you trained the yolo.tflite file. I want to train my own YOLO model now, but I'm worried the trained model will run into problems during conversion. If convenient, could you share the code? Many thanks.

How to generate a kmodel for non-image data

Hello, I am using nncase to generate a k210model from a tflite model, but it requires some images to be present in a folder. My model is not for image data; it takes readings from other sensors and makes a decision. It is a fully connected model (no conv layers) with input shape (1,4).
How do I convert it to a kmodel?

error shape

Hi,

I got the following error:

Fatal: Shapes must be same, but got [1x64x56x56] and [1x64x55x55]

Caffe model conversion failed

liu@EAIU:~/ncc-linux-x86_64$ ncc -i caffe -o k210model --dataset ./images ./mobilenet_yolov3_lite_deploy_iter_3000.caffemodel ./1.kmodel
Fatal: Layer AnnotatedData is not supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer AnnotatedData is not supported
at NnCase.Converter.Converters.CaffeToGraphConverter.ConvertLayer(LayerParameter layerParam) in D:\Work\Repository\nncase\src\NnCase.Converter\Converters\CaffeToGraphConverter.cs:line 47
at System.Linq.Enumerable.SelectIListIterator`2.MoveNext()
at System.Linq.Enumerable.WhereEnumerableIterator`1.ToList()
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at NnCase.Converter.Converters.CaffeToGraphConverter.Convert() in D:\Work\Repository\nncase\src\NnCase.Converter\Converters\CaffeToGraphConverter.cs:line 29
at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 97
at NnCase.Cli.Program.<Main>(String[] args)

The data input layer is not supported; how can this be solved? @sunnycase

Output node shape does not honor stride - 'Fatal: Allocator has ran out of memory'

Create a graph with a 256x256x3 input node, a 128x128x32 output node, a 3x3 kernel, and stride=2. Example tflite attached: detect.tflite.zip

(graph image)

nncase fails with Fatal: Allocator has ran out of memory

Result: The output node has shape: m_data = {1, 32, 256, 256}
Expected: Output node shape: 1x32x128x128

GDB log:

(gdb) b /home/tj/nncase/src/scheduler/freelist.cpp:69
Breakpoint 1 at 0x5555556738ed: file /home/tj/nncase/src/scheduler/freelist.cpp, line 69.
(gdb) run compile test/detect.tflite test/out.kmodel -i tflite --dataset test/test_000_0000000.png
Starting program: /home/tj/nncase/out/bin/ncc compile test/detect.tflite test/out.kmodel -i tflite --dataset test/test_000_0000000.png
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff5714700 (LWP 17532)]
[New Thread 0x7fffecf13700 (LWP 17533)]
[New Thread 0x7ffff4f13700 (LWP 17534)]
[New Thread 0x7fffeffff700 (LWP 17535)]
[New Thread 0x7fffef7fe700 (LWP 17536)]
[New Thread 0x7fffeeffd700 (LWP 17537)]
[New Thread 0x7fffee7fc700 (LWP 17538)]

Thread 1 "ncc" hit Breakpoint 1, nncase::scheduler::freelist::reserve (this=0x5555562a0090, size=2097152) at /home/tj/nncase/src/scheduler/freelist.cpp:69
69 throw std::runtime_error("Allocator has ran out of memory");
(gdb) backtrace
#0 nncase::scheduler::freelist::reserve (this=0x5555562a0090, size=2097152) at /home/tj/nncase/src/scheduler/freelist.cpp:69
#1 0x0000555555673760 in nncase::scheduler::freelist::allocate (this=0x5555562a0090, size=2097152) at /home/tj/nncase/src/scheduler/freelist.cpp:42
#2 0x0000555555672973 in nncase::scheduler::memory_allocator::allocate (this=0x5555562a0080, size=2097152) at /home/tj/nncase/src/scheduler/memory_allocator.cpp:68
#3 0x000055555566af2b in nncase::scheduler::allocation_context::allocate_default (this=0x7fffffffd710, conn=...) at /home/tj/nncase/src/scheduler/scheduler.cpp:50
#4 0x000055555566b2d6 in nncase::scheduler::<lambda(nncase::ir::node&)>::operator()(nncase::ir::node &) const (closure=0x7fffffffd600, node=...) at /home/tj/nncase/src/scheduler/scheduler.cpp:82
#5 0x000055555566bd03 in nncase::ir::relay_ir_visitor<nncase::ir::dfs_ir_visitor, nncase::scheduler::schedule(tcb::span<nncase::ir::output_node*>, nncase::scheduler::allocation_context&, std::vector<nncase::ir::node*>&)::<lambda(nncase::ir::node&)> >::visit(nncase::ir::node &) (this=0x7fffffffd5c0, node=...) at /home/tj/nncase/src/ir/include/ir/visitor.h:67
#6 0x0000555555d4bf6d in nncase::ir::dfs_ir_visitor::visit_strategry (this=0x7fffffffd5c0, node=...) at /home/tj/nncase/src/ir/visitor.cpp:66
#7 0x0000555555d4bf2e in nncase::ir::dfs_ir_visitor::visit_strategry (this=0x7fffffffd5c0, node=...) at /home/tj/nncase/src/ir/visitor.cpp:61
#8 0x0000555555d4bf2e in nncase::ir::dfs_ir_visitor::visit_strategry (this=0x7fffffffd5c0, node=...) at /home/tj/nncase/src/ir/visitor.cpp:61
#9 0x0000555555d4bf2e in nncase::ir::dfs_ir_visitor::visit_strategry (this=0x7fffffffd5c0, node=...) at /home/tj/nncase/src/ir/visitor.cpp:61
#10 0x0000555555d4bd37 in nncase::ir::ir_visitor::visit (this=0x7fffffffd5c0, outputs=...) at /home/tj/nncase/src/ir/visitor.cpp:31
#11 0x000055555566b895 in nncase::scheduler::schedule (outputs=..., context=..., compute_sequence=std::vector of length 1, capacity 1 = {...}) at /home/tj/nncase/src/scheduler/scheduler.cpp:123
#12 0x00005555555d5905 in (anonymous namespace)::gencode (target=..., graph=..., options=...) at /home/tj/nncase/src/cli/compile.cpp:212
#13 0x00005555555d7234 in compile (options=...) at /home/tj/nncase/src/cli/compile.cpp:265
#14 0x00005555555a1222 in main (argc=8, argv=0x7fffffffe408) at /home/tj/nncase/src/cli/cli.cpp:38
(gdb) f 3
#3 0x000055555566af2b in nncase::scheduler::allocation_context::allocate_default (this=0x7fffffffd710, conn=...) at /home/tj/nncase/src/scheduler/scheduler.cpp:50
50 auto &node = allocator->second->allocate(size);
(gdb) p conn
$1 = (nncase::ir::output_connector &) @0x55555629adc0: {<nncase::ir::base_connector> = {owner_ = @0x55555629ab10, name_ = "output", type_ = nncase::dt_uint8, shape_ = {static alignment = 8, m_allocator = {<__gnu_cxx::new_allocator> = {}, }, m_begin = 0x55555629ae10, m_end = 0x55555629ae30, m_capacity = 0x55555629ae30, m_data = {1, 32, 256, 256}}}, connections_ = std::vector of length 1, capacity 1 = {0x555556299ce0}, memory_type_ = nncase::mem_k210_kpu}
(gdb) c
Continuing.
Fatal: Allocator has ran out of memory
[Thread 0x7fffedffb700 (LWP 17596) exited]
[Thread 0x7fffeffff700 (LWP 17592) exited]
[Thread 0x7ffff7fd4740 (LWP 17586) exited]
[Thread 0x7fffee7fc700 (LWP 17595) exited]
[Thread 0x7fffeeffd700 (LWP 17594) exited]
[Thread 0x7fffef7fe700 (LWP 17593) exited]
[Thread 0x7ffff4f13700 (LWP 17591) exited]

Maybe related: #47 (comment)

dataset

Hi,
I used MobileNet Keras 0.5 and converted it to a kmodel, but I got this error:

uasge: ./tflite2kmodel.sh xxx.tflite
Fatal: Invalid dataset.
System.ArgumentException: Invalid dataset.
at NnCase.Converter.Data.Dataset..ctor(String path, IReadOnlyCollection`1 allowdExtensions, ReadOnlySpan`1 dimensions, Int32 batchSize, PostprocessMethods postprocessMethod, Nullable`1 mean, Nullable`1 std) in D:\Work\Repository\nncase\src\NnCase.Converter\Data\Dataset.cs:line 67
at NnCase.Converter.Data.ImageDataset..ctor(String path, ReadOnlySpan`1 dimensions, Int32 batchSize, PreprocessMethods preprocessMethods, PostprocessMethods postprocessMethod, Nullable`1 mean, Nullable`1 std) in D:\Work\Repository\nncase\src\NnCase.Converter\Data\Dataset.cs:line 165
at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 225
at NnCase.Cli.Program.<Main>(String[] args)

Thanks

Support for ReLU

I see that PReLU is listed as supported in the project description, but today when compiling my model, the ReLU ops in the tflite could not be compiled. I tried replacing ReLU with PReLU in the original model, but tf.lite converts PReLU back to ReLU when converting the model. What should I do?

Layer SUB is not supported

Hi,
I converted tflite to kmodel with the command:
./ncc/ncc -i tflite -o k210model --dataset images $1 ./$name

this is error:

uasge: ./tflite2kmodel.sh xxx.tflite
Fatal: Layer SUB is not supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer SUB is not supported
at NnCase.Converter.Converters.TfLiteToGraphConverter.ConvertOperator(Operator op) in D:\Work\Repository\nncase\src\NnCase.Converter\Converters\TfLiteToGraphConverter.cs:line 63
at System.Linq.Enumerable.SelectEnumerableIterator`2.ToList()
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at NnCase.Converter.Converters.TfLiteToGraphConverter.Convert() in D:\Work\Repository\nncase\src\NnCase.Converter\Converters\TfLiteToGraphConverter.cs:line 34
at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 110
at NnCase.Cli.Program.<Main>(String[] args)

thanks

Fatal: Layer Quantize is not supported

I tried to run inference for MNIST LeNet using the nightly build:
nncase_night/ncc -i k210model -o inference --dataset ./images ./lenet2.kmodel ./output

I got this error:
Fatal: Layer Quantize is not supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer Quantize is not supported
at NnCase.Converter.K210.Emulator.K210Emulator.Run(Tensor`1 batch, K210Conv2dLayerArgument inputArgument) in C:\projects\nncase\src\NnCase.Converter.K210\Emulator\K210Emulator.cs:line 106
at NnCase.Converter.K210.Emulator.K210Emulator.RunAsync(String datasetPath, String outputPath) in C:\projects\nncase\src\NnCase.Converter.K210\Emulator\K210Emulator.cs:line 59
at NnCase.Converter.K210.Emulator.K210Emulator.RunAsync(String datasetPath, String outputPath) in C:\projects\nncase\src\NnCase.Converter.K210\Emulator\K210Emulator.cs:line 55
at NnCase.Cli.Program.Main(String[] args) in C:\projects\nncase\src\NnCase.Cli\Program.cs:line 304
at NnCase.Cli.Program.<Main>(String[] args)

Fatal: Not supported tflite opcode: LOGISTIC

Hello guys,
we just tried to convert our tflite model and got this error:
Fatal: Not supported tflite opcode: LOGISTIC
In the readme, LOGISTIC is listed as one of the ops supported by the toolkit. I tried both the pre-release v0.2.0 and release v0.1.5 rc5 but got the same error.
Hoping to hear from you soon,
Thanks in advance

images

Hi,

I do not understand `images: quantization dataset images`. What is it, and how can I get it?

Thanks

How do I convert a Caffe model?

Thanks for providing this tool. The readme says that ncc -i caffe ... can be used to convert Caffe models, but the tool I downloaded from the release is named toco.exe, not ncc. Running toco.exe --input_format="caffe" also fails with an unsupported-format error. How can I convert a Caffe model to a kmodel with toco.exe? Although Caffe has declined, many companies still use it for computer vision for historical reasons, so Caffe support is still important. Thanks, and looking forward to your reply.

CMakeLists.txt

Could you please supply a CMakeLists.txt for compiling your yolo_v2 demo?

When I compile the project, the following error occurs:
CMake Error: The source directory "X:/kendryte/nncase-master/examples/20classes_yolo/k210/kpu_20classes_example" does not appear to contain CMakeLists.txt. Specify --help for usage, or press the help button on the CMake GUI.

Fatal: Layer tflite.Operator is not supported: Only scalar multiply is supported

hu@hu-VirtualBox:~/ws$ nncase_night/ncc -i tflite -o k210model lenet01.tflite lenet01.kmodel
Fatal: Layer tflite.Operator is not supported: Only scalar multiply is supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer tflite.Operator is not supported: Only scalar multiply is supported
at NnCase.Converter.Converters.TfLiteToGraphConverter.ConvertMul(Operator op) in C:\projects\nncase\src\NnCase.Converter\Converters\TfLiteToGraphConverter.cs:line 243
at NnCase.Converter.Converters.TfLiteToGraphConverter.ConvertOperator(Operator op) in C:\projects\nncase\src\NnCase.Converter\Converters\TfLiteToGraphConverter.cs:line 79
at System.Linq.Enumerable.SelectEnumerableIterator`2.ToList()
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at NnCase.Converter.Converters.TfLiteToGraphConverter.Convert() in C:\projects\nncase\src\NnCase.Converter\Converters\TfLiteToGraphConverter.cs:line 34
at NnCase.Cli.Program.Main(String[] args) in C:\projects\nncase\src\NnCase.Cli\Program.cs:line 113
at NnCase.Cli.Program.<Main>(String[] args)

Fatal: input-format

ncc.exe -i k210model -o inference --dataset ./images ./lenet2.kmodel ./output

Fatal: input-format
System.ArgumentException: input-format at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 117
at NnCase.Cli.Program.<Main>(String[] args)

I get the error Fatal: Invalid dataset

I'm following a tutorial on making a kmodel:
https://bbs.sipeed.com/t/topic/682

When proceeding this way, the error mentioned in the title occurred:

# ./tflite2kmodel.sh /tf/work/Maix-Keras-workspace/mbnet/mbnet75.tflite
uasge: ./tflite2kmodel.sh xxx.tflite
Fatal: Invalid dataset.
System.ArgumentException: Invalid dataset.
   at NnCase.Converter.Data.Dataset..ctor(String path, IReadOnlyCollection`1 allowdExtensions, ReadOnlySpan`1 dimensions, Int32 batchSize, PostprocessMethods postprocessMethod, Nullable`1 mean, Nullable`1 std) in D:\Work\Repository\nncase\src\NnCase.Converter\Data\Dataset.cs:line 67
   at NnCase.Converter.Data.ImageDataset..ctor(String path, ReadOnlySpan`1 dimensions, Int32 batchSize, PreprocessMethods preprocessMethods, PostprocessMethods postprocessMethod, Nullable`1 mean, Nullable`1 std) in D:\Work\Repository\nncase\src\NnCase.Converter\Data\Dataset.cs:line 212
   at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 249
   at NnCase.Cli.Program.<Main>(String[] args)

I would be grateful if you could give me advice on what to check.

Below is the model I created

https://drive.google.com/file/d/1W2u1D2zJAmnrNlqFqWP6E6l9QxpvQJyY/view?usp=sharing

Fatal: Layer DepthwiseConv2d is not supported

2019-05-03 12:05:56.214925: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Fatal: Layer DepthwiseConv2d is not supported
NnCase.Converter.Converters.LayerNotSupportedException: Layer DepthwiseConv2d is not supported
at NnCase.Converter.K210.Converters.Stages.Convert.Converter.<Convert>g__ConvertLayer|0_6(Layer layer, <>c__DisplayClass0_0& ) in C:\projects\nncase\src\NnCase.Converter.K210\Converters\Stages\Convert\Converter.cs:line 51
(the above frame appears 21 times in total)
at NnCase.Converter.K210.Converters.Stages.Convert.Converter.Convert(Graph graph, QuantizationContext quantizationContext, Int32 weightsBits) in C:\projects\nncase\src\NnCase.Converter.K210\Converters\Stages\Convert\Converter.cs:line 60
at NnCase.Converter.K210.Converters.GraphToK210Converter.ConvertAsync(Dataset dataset, GraphPlanContext planContext, String outputDir, String prefix, Boolean channelwiseOutput) in C:\projects\nncase\src\NnCase.Converter.K210\Converters\GraphToK210Converter.cs:line 38
at NnCase.Cli.Program.Main(String[] args) in C:\projects\nncase\src\NnCase.Cli\Program.cs:line 246
at NnCase.Cli.Program.<Main>(String[] args)

Fatal: Dimensions must be equal

Hello, When trying to convert my tflite model, I'm running into the following error:

Fatal: Dimensions must be equal.
System.InvalidOperationException: Dimensions must be equal.
   at NnCase.Converter.Model.InputConnector.SetConnection(OutputConnector from) in D:\Work\Repository\nncase\src\NnCase.Converter\Model\InputConnector.cs:line 32
   at NnCase.Converter.K210.Transforms.K210SeparableConv2dTransform.Process(TransformContext context) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Transforms\K210SeparableConv2dTransform.cs:line 89
   at NnCase.Converter.Transforms.Transform.Process(Graph graph, IReadOnlyList`1 transforms) in D:\Work\Repository\nncase\src\NnCase.Converter\Transforms\Transform.cs:line 77
   at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 235
   at NnCase.Cli.Program.<Main>(String[] args)

What could be the source of the problem? 
Thank you for your help.

model output error

Hello, I changed the input, but the model's output never changes. Is there a problem with my model after converting it to KMODEL?
(screenshots)
(model diagram)

Can not compile to kmodel

I tried to compile the model from tflite to kmodel on Google Colaboratory.

./ncc/ncc compile /content/model.tflite /content/model.kmodel -i tflite -o kmodel --dataset /content/test/

But ncc outputs this error:

Fatal: Shapes must be same, but got [1x1x1x1] and [1x1x1x7]

How can I compile successfully?

KPU ran out of memory

While converting tflite to kmodel, I am encountering an out-of-memory error. I still have 2.6 GB of RAM left. Any comment?

2019-08-03 00:41:10.769773: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
0: InputLayer -> 1x3x224x224
1: K210Conv2d 1x3x224x224 -> 1x24x112x112
2: K210Conv2d 1x24x112x112 -> 1x24x112x112
3: K210Conv2d 1x24x112x112 -> 1x16x112x112
4: K210Conv2d 1x16x112x112 -> 1x96x112x112
Fatal: KPU ran out of memory.
System.InvalidOperationException: KPU ran out of memory.
   at NnCase.Converter.K210.Converters.Stages.Inference.KPUMemoryAllocator.Allocate(UInt32 size) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\KPUMemoryAllocator.cs:line 28
   at NnCase.Converter.K210.Converters.Stages.Inference.InferenceContext.GetOrAllocateKPUMemory(OutputConnector output) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\InferenceContext.cs:line 40
   at NnCase.Converter.K210.Converters.Stages.Inference.InferExecutor.AllocateInputMemoryDefault(Layer layer, OutputConnector input, InferenceContext context) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\InferExecutor.cs:line 123
   at NnCase.Converter.K210.Converters.Stages.Inference.InferExecutor.<Infer>g__InferLayer|0_6(Layer layer, <>c__DisplayClass0_0& ) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\InferExecutor.cs:line 50
   at NnCase.Converter.K210.Converters.Stages.Inference.InferExecutor.<Infer>g__InferLayer|0_6(Layer layer, <>c__DisplayClass0_0& ) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\InferExecutor.cs:line 41
   (the above frame appears 44 times in total)
   at NnCase.Converter.K210.Converters.Stages.Inference.InferExecutor.Infer(Graph graph, ConvertContext convertContext) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\Stages\Inference\InferExecutor.cs:line 106
   at NnCase.Converter.K210.Converters.GraphToK210Converter.ConvertAsync(Dataset dataset, GraphPlanContext planContext, String outputDir, String prefix, Boolean channelwiseOutput) in D:\Work\Repository\nncase\src\NnCase.Converter.K210\Converters\GraphToK210Converter.cs:line 41
   at NnCase.Cli.Program.Main(String[] args) in D:\Work\Repository\nncase\src\NnCase.Cli\Program.cs:line 271
   at NnCase.Cli.Program.<Main>(String[] args)
