Comments (15)
This function is implemented only for ARM64 and WebAssembly
from xnnpack.
Hi @Maratyszcza ,
I switched to a bare-metal ARM64 (X-GENE, aarch64) machine running Ubuntu 16.04 (Xenial). However, the error "error: unknown value 'armv8.2-a+fp16' for -march" occurred when I ran 'make end2end-bench'.
I checked issues #323 and #242 but am still not sure how to solve this.
You need a newer compiler (e.g. GCC 7+). Older versions don't support the FP16 arithmetic instructions used in some XNNPACK microkernels.
Hi @Maratyszcza ,
It seems I made the sparse version work. However, I have a minor question about the padding in some layers of MobileNet, which use 0 /* top padding */, 1 /* right padding */, 1 /* bottom padding */, 0 /* left padding */. In your current NCHW implementation, the left and right paddings are required to be all 1's, which cannot be satisfied for the aforementioned layers. Although it seems to run without problems even if I create the layer with 1 /* top padding */, 1 /* right padding */, 0 /* bottom padding */, 1 /* left padding */, the output will be a little different. I am wondering if your team is considering supporting more padding patterns in the implementation. Thanks!
MobileNet in the TFLite model zoo was trained with TensorFlow SAME padding, which results in asymmetric padding for convolutions with non-unit stride. To get padding of 1 on all sides in TF, you need to explicitly pad the convolution input and set the convolution padding mode to VALID.
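The asymmetric values follow directly from how TensorFlow computes SAME padding. A minimal sketch of that arithmetic (the 112x112 stride-2 layer shape is an assumed MobileNet-style example, not taken from this thread):

```python
import math

def same_padding_1d(in_size: int, kernel: int, stride: int) -> tuple:
    """TensorFlow SAME padding along one dimension: the total padding is
    split in two, with the extra pixel (if any) going to the end
    (bottom/right), which is what makes it asymmetric."""
    out_size = math.ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_before = pad_total // 2
    pad_after = pad_total - pad_before
    return pad_before, pad_after

# 3x3 conv with stride 2 on a 112x112 input (MobileNet-style layer):
print(same_padding_1d(112, kernel=3, stride=2))  # (0, 1): 0 before, 1 after
# The same conv with stride 1 stays symmetric:
print(same_padding_1d(112, kernel=3, stride=1))  # (1, 1)
```

With an even input size and stride 2, only one pixel of padding is needed in total, and TF places it at the bottom/right, giving the 0/1/1/0 pattern seen above.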
Hi @Maratyszcza,
Sorry that I did not make my question clear. I am building MobileNet v1 with sparse weights using only the XNNPACK library. I am using the template in the "models" folder and replacing the NHWC operators with the NCHW ones, which can handle sparse weights. In MobileNet v1, there is a layer that looks like:
xnn_operator_t op3 = nullptr;
status = xnn_create_convolution2d_nhwc_f32(
  0 /* top padding */, 1 /* right padding */,
  1 /* bottom padding */, 0 /* left padding */,
  3 /* kernel height */, 3 /* kernel width */,
  2 /* subsampling height */, 2 /* subsampling width */,
  1 /* dilation height */, 1 /* dilation width */,
  48 /* groups */,
  1 /* input channels per group */,
  1 /* output channels per group */,
  48 /* input pixel stride */,
  48 /* output pixel stride */,
  w6, w7,
  0.0f /* output min */, 6.0f /* output max */,
  0 /* flags */,
  &op3);
which requires asymmetric padding that is not implemented in the NCHW operators. It seems to run without problems even if I create the layer with 1 /* top padding */, 1 /* right padding */, 0 /* bottom padding */, 1 /* left padding */, but I think the output will be a little different. I am wondering if your team is considering supporting more padding patterns in the implementation. Thanks!
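To illustrate why substituting a different padding pattern changes the result, here is a small NumPy sketch (a toy single-channel 6x6 input with random values, not the actual model data) comparing a stride-2 VALID convolution over the two differently padded inputs:

```python
import numpy as np

def conv2d_valid(x, w, stride=2):
    """Naive single-channel 2-D cross-correlation with VALID padding."""
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
w = rng.standard_normal((3, 3))

# The layer's TF-style asymmetric padding: 0 top, 1 bottom, 0 left, 1 right.
asym = conv2d_valid(np.pad(x, ((0, 1), (0, 1))), w)
# The substituted padding: 1 top, 0 bottom, 1 left, 1 right.
alt = conv2d_valid(np.pad(x, ((1, 0), (1, 1))), w)

print(asym.shape, alt.shape)   # same output shape...
print(np.allclose(asym, alt))  # ...but different values: the windows shift
```

The output shapes match, which is why the network runs without errors, but the convolution windows land on shifted positions, so the values differ slightly.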
@WeiHao97 Are you calling xnn_create_convolution2d_nchw_f32? It allows top padding of 0, but all other paddings must be 1; see XNNPACK/src/operators/convolution-nchw.c, lines 171 to 172 in 1f29b80.
@WeiHao97 Are you calling xnn_create_convolution2d_nchw_f32? It allows top padding of 0, but all other paddings must be 1; see XNNPACK/src/operators/convolution-nchw.c, lines 171 to 172 in 1f29b80.
Yes. My question is how I should replace this NHWC operator:
XNNPACK/models/mobilenet-v1.cc, line 237 in 1f29b80,
since it uses asymmetric padding.
You need a convolution operator with explicit padding. TensorFlow doesn't provide such a primitive, but you can simulate it via a combination of tf.pad and a convolution with VALID padding. You may look at our pre-trained models for an example.
You need a convolution operator with explicit padding. TensorFlow doesn't provide such a primitive, but you can simulate it via a combination of tf.pad and a convolution with VALID padding. You may look at our pre-trained models for an example.
Sorry, I am confused. Do you mean that it is not possible to build MobileNet v1 with sparse weights using only the XNNPACK library (the way it is done with the NHWC version)?
It is possible to build MobileNet v1 in XNNPACK, but XNNPACK operators don't necessarily map 1:1 to TensorFlow or TFLite operators. In particular, an XNNPACK operator with explicit padding generally maps to two operators in TF/TFLite (PAD + CONV_2D).
It is possible to build MobileNet v1 in XNNPACK, but XNNPACK operators don't necessarily map 1:1 to TensorFlow or TFLite operators. In particular, an XNNPACK operator with explicit padding generally maps to two operators in TF/TFLite (PAD + CONV_2D).
So, in the case of building MobileNet v1 in XNNPACK with sparse weights, my understanding is to replace every operator in mobilenet-v1.cc with the corresponding operator that takes NCHW-format input. However, some layers require asymmetric padding (0 for left padding and 1 for right padding) that is not allowed in the NCHW setting. Do you have any suggestions on how I should deal with this problem? Thanks!
When you train a MobileNet model, emulate the convolution with explicit padding via a combination of a pad operator (to pad the input image with 1 pixel on each side) and a convolution operator with VALID padding. When you convert the model to run on XNNPACK, replace the pad + convolution pair with the XNNPACK Convolution operator with padding_left = padding_right = padding_top = padding_bottom = 1.
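The pad + VALID replacement described above can be checked numerically. A NumPy sketch with toy single-channel data (hypothetical random values, not model weights) shows that explicitly padding the input by 1 on each side and then convolving with VALID padding produces the same output as a convolution that applies padding of 1 internally:

```python
import numpy as np

def conv2d(x, w, pad=0, stride=2):
    """Naive single-channel 2-D cross-correlation with symmetric zero padding."""
    x = np.pad(x, pad)
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    return np.array([[np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * w)
                      for j in range(ow)] for i in range(oh)])

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

# Pad explicitly by 1 pixel on each side, then VALID convolution ...
padded_valid = conv2d(np.pad(x, 1), w, pad=0)
# ... equals a single convolution with internal padding of 1 on all sides.
internal_pad = conv2d(x, w, pad=1)

print(np.allclose(padded_valid, internal_pad))  # True by construction
```

The two paths compute over the same zero-padded array, so the equivalence is exact; this is why the pad + VALID pair in the trained model can be fused into one XNNPACK convolution at conversion time.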
Sparse inference is now integrated in the TensorFlow Lite XNNPACK backend, see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/README.md#sparse-inference-experimental for instructions.
Thank you!