Comments (2)
The difference between `inference` and `output` (forward pass in training) functions is based on two considerations:

- `inference` functions are optimized for the batch = 1 case, while `output` functions are optimized for moderately large batch sizes (64 and above).
- While none of the following is currently implemented, I have plans to merge some extra operations into convolutional/fully connected layers. Which operations can be merged depends on whether the framework would need to run a backward pass on the layer using its outputs:
  - Both ReLU and ELU can be merged into the preceding convolutional (or fully connected) layer.
  - If the model is used for inference, 2x2 pooling with 2x2 stride can be merged into the preceding convolutional or fully connected layer (potentially with embedded ReLU/ELU). If the model is used for training, this fusion is impossible, because the backward pass on the convolutional layer needs the output of the convolutional layer before 2x2 pooling, which such a fused forward pass wouldn't produce.
  - If the model is used for training, the batch normalization layer can be partially merged into the preceding convolutional or fully connected layer: specifically, the computation of per-channel mean values can be merged into the store stage of the preceding layer. In inference, the batch normalization layer becomes a static scale + bias layer and can often be folded statically into the preceding convolutional or fully connected layer (see the sketch after this list), so merging the computation of per-channel mean activations makes no sense there.
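To make the inference-time case concrete, here is a minimal numpy sketch of folding a batch normalization layer's static scale + bias into the weights and bias of the preceding convolutional layer. This is not NNPACK code; the function and parameter names are hypothetical.

```python
import numpy as np

def fold_batchnorm_into_conv(weight, bias, bn_mean, bn_var, bn_gamma, bn_beta, eps=1e-5):
    """Fold an inference-time batch normalization (static scale + bias) into
    the preceding convolution.
    weight: [out_channels, in_channels, kh, kw], bias: [out_channels];
    bn_* are per-output-channel running statistics and learned parameters."""
    scale = bn_gamma / np.sqrt(bn_var + eps)              # per-channel multiplier
    folded_weight = weight * scale[:, None, None, None]   # scale each output-channel filter
    folded_bias = (bias - bn_mean) * scale + bn_beta      # absorb the mean shift and beta
    return folded_weight, folded_bias
```

After folding, the convolution alone produces the same outputs as convolution followed by batch normalization, which is why computing per-channel means at inference time would be wasted work.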
The reason why `nnp_convolution_inference` supports strides while `nnp_convolution_output` does not is that NNPACK is in active development, and the `implicit_gemm` algorithm and strided convolutions were implemented for the `inference` function first. Support for the `implicit_gemm` algorithm and strided convolutions in the training functions is in the short-term plans, but for now you'd need to fall back to an implementation outside NNPACK.
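As an illustration of such a fallback (a minimal numpy sketch, not NNPACK's implementation; the function name and NCHW layout are assumptions), a naive direct convolution with stride could look like this:

```python
import numpy as np

def conv2d_strided_fallback(x, w, stride=2):
    """Naive direct convolution with output subsampling (stride), NCHW layout,
    no padding. x: [n, c_in, h, w_in], w: [c_out, c_in, kh, kw]."""
    n, c_in, h, w_in = x.shape
    c_out, _, kh, kw = w.shape
    out_h = (h - kh) // stride + 1
    out_w = (w_in - kw) // stride + 1
    y = np.zeros((n, c_out, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, :, i * stride:i * stride + kh, j * stride:j * stride + kw]
            # Contract over input channels and the kernel window.
            y[:, :, i, j] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3]))
    return y
```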
One more clarification about the `implicit_gemm` algorithm: it does not do `patch2cols` + `sgemm`, but rather does something smarter. High-performance implementations of SGEMM internally repack the matrix into a cache-friendly form. Thus, a typical SGEMM-based implementation of a convolutional layer involves two repacking operations: one inside `patch2cols` and another inside `sgemm`. NNPACK's `implicit_gemm` algorithm combines these two repacking operations into one; the main motivation is to operate with low memory overhead (NNPACK doesn't allocate memory for the whole `patch2cols` matrix, just for an L3-sized block of it), but you'd likely find it performing better than traditional `patch2cols` + `sgemm` implementations due to fewer memory repacking operations.
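For reference, here is roughly what the traditional `patch2cols` + `sgemm` formulation looks like, as a minimal numpy sketch (not NNPACK code). The full patch matrix materialized below is exactly the buffer that `implicit_gemm` avoids allocating:

```python
import numpy as np

def conv2d_patch2cols_sgemm(x, w, stride=1):
    """Reference patch2cols (im2col) + GEMM convolution for a single image.
    x: [c_in, h, w_in], w: [c_out, c_in, kh, kw], no padding."""
    c_in, h, w_in = x.shape
    c_out, _, kh, kw = w.shape
    out_h = (h - kh) // stride + 1
    out_w = (w_in - kw) // stride + 1

    # patch2cols: materialize every receptive field as a column.
    # This [c_in*kh*kw, out_h*out_w] matrix is the memory overhead that
    # implicit_gemm avoids by repacking only L3-sized blocks on the fly.
    cols = np.empty((c_in * kh * kw, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i * stride:i * stride + kh, j * stride:j * stride + kw]
            cols[:, i * out_w + j] = patch.ravel()

    # sgemm: one big matrix multiply against the reshaped kernel.
    y = w.reshape(c_out, -1) @ cols
    return y.reshape(c_out, out_h, out_w)
```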
Any update on the short-term implementation for stride sizes bigger than 1?