Comments (16)
Here is what a deep learning system stack looks like today:
- 1. Operator-level graph description language
  - Name whatever DL framework you care about
- 2. Tensor-primitive-level graph description language
  - NNVM, HLO, NGraph
  - This is close enough to the first one that you can also build graph optimizations on the first layer and bypass this layer
- 3. DSL for kernel description and codegen
- 4. Hardcoded, optimized kernel libraries like NNPACK, cuDNN, libdnn
- 5. Device-dependent libraries
Most libraries go with 1 -> 4. An easy but restrictive path for compilation and fusion is going from 2 -> 4/5, by manually coding up fused kernels or having rules to generate certain fused kernels. TVM sits at level 3, to make the jump from level 2 to level 5 easier and give the user more control.
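To make the 2 -> 4/5 path concrete, here is a minimal NumPy sketch (not TVM code) of what "manually coding up a fused kernel" means: instead of running two tensor primitives back to back and materializing an intermediate tensor, a hand-fused kernel does both operations in a single pass. The add+relu pair here is just an illustrative choice.

```python
import numpy as np

def add_kernel(a, b):
    # standalone tensor primitive: one full pass over memory
    return a + b

def relu_kernel(x):
    # second standalone primitive: another full pass, reading the
    # intermediate tensor produced by add_kernel
    return np.maximum(x, 0)

def fused_add_relu(a, b):
    # hand-fused kernel: one pass, no intermediate tensor materialized
    out = np.empty_like(a)
    for i in range(a.size):
        v = a.flat[i] + b.flat[i]
        out.flat[i] = v if v > 0 else 0.0
    return out

a = np.array([1.0, -2.0, 3.0])
b = np.array([-0.5, -1.0, 0.5])
# the fused kernel must match the composition of the two primitives
assert np.allclose(fused_add_relu(a, b), relu_kernel(add_kernel(a, b)))
```

Writing such fusions by hand (or via pattern rules) is exactly what makes the 2 -> 4/5 path restrictive: every operator pair needs its own fused implementation, which is the combinatorial burden a level-3 DSL is meant to remove.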
In terms of design philosophy, we want it to work together with the existing ecosystem. This includes:
- A friendly frontend that can be used directly for kernel generation
- Give the framework full control of memory allocation, graph execution, data layout, etc.
- Generate DLPack-compatible kernels that every framework can take directly.
- Make use of blackbox calls like cuDNN when the user says so.
I think we can expect all approaches in the stack to continue to exist. We just design a layer at level 3 that can incrementally transition toward automation while still transparently benefiting from things at level 4.
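The "DLPack-compatible kernels" point above can be sketched schematically: frameworks exchange a plain tensor descriptor (buffer, shape, strides, dtype, device) rather than each other's tensor classes, so a generated kernel can accept tensors from any framework. The field names below loosely follow DLPack's `DLTensor`; this is a pure-Python illustration of the idea, not the real C ABI.

```python
from dataclasses import dataclass
from typing import Tuple
import array

@dataclass
class TensorDescriptor:
    # schematic stand-ins for DLTensor-style fields (illustrative names)
    data: memoryview          # raw buffer (DLPack uses a void* data pointer)
    shape: Tuple[int, ...]
    strides: Tuple[int, ...]  # in elements, not bytes, for this sketch
    dtype: str
    device: str               # e.g. "cpu", "cuda:0"

def describe(buf, shape, dtype="float32", device="cpu"):
    # compute row-major strides, as a framework adapter would
    strides = []
    acc = 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return TensorDescriptor(memoryview(buf), tuple(shape),
                            tuple(reversed(strides)), dtype, device)

t = describe(array.array("f", range(6)), (2, 3))
assert t.strides == (3, 1)
```

Because the descriptor carries everything a kernel needs, the framework keeps full control of allocation and layout (the second bullet above) while the generated kernel stays framework-agnostic.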
from tvm.
They are orthogonal.
- XLA is more high-level, like NNVM. Developers of XLA need to define codegen and loop transformation rules (like writing a kernel) for each operator, specifying how to generate kernels, and the system stitches the kernels together for you.
- TVM is one level below: it provides common low-level primitives for describing the computation as well as the loop transformation rules, and lets the user apply them. You can use these to implement something like XLA (by using NNVM or another high-level graph description), or simply bypass the high-level description layer and use TVM directly in a framework.
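To illustrate what a "loop transformation primitive" does, here is a toy Python sketch (deliberately not TVM's API): the same vector-add computation, with its loop split into outer/inner parts the way a schedule primitive such as split would, so a backend could later vectorize or parallelize the inner loop without changing the computation's meaning.

```python
def vadd_naive(a, b):
    # the computation as described: a single flat loop
    n = len(a)
    out = [0.0] * n
    for i in range(n):
        out[i] = a[i] + b[i]
    return out

def vadd_split(a, b, factor=4):
    # the same computation after a split transformation:
    # i -> (i_outer, i_inner), inner loop has fixed trip count
    n = len(a)
    out = [0.0] * n
    for io in range((n + factor - 1) // factor):
        for ii in range(factor):
            i = io * factor + ii
            if i < n:  # tail guard when n is not divisible by factor
                out[i] = a[i] + b[i]
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
# a valid schedule transformation never changes the result
assert vadd_split(a, b) == vadd_naive(a, b)
```

The key property, which TVM enforces, is that schedules only reorganize loops; the result stays identical, so the user can explore transformations freely for performance.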
from tvm.
What will be the role of Fabian's libdnn and the FAIR-sponsored NNPACK in this?
from tvm.
Both libdnn and NNPACK are different; they can maybe be used as blackbox calls. (NNPACK is not FAIR-sponsored, it's just continued research/dev after FAIR.)
from tvm.
What is the goal here? Rewrite new kernels?
from tvm.
Write kernels in a new language that can be retargeted to multiple backends with great performance.
Folks can build languages or collectives to write kernels on top of TVM.
from tvm.
see the matrix-multiply or persistent-rnn examples, maybe?
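The matrix-multiply example referred to here is actual TVM code; the following is only a plain-Python sketch of the underlying idea, a blocked (tiled) matmul, which is the kind of loop structure a TVM schedule generates to improve cache locality. The tile size is an illustrative parameter.

```python
def matmul_tiled(A, B, n, tile=2):
    # blocked n x n matmul: iterate over tile x tile sub-blocks so that
    # the working set of each block fits in cache
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, n, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, n)):
                        acc = C[i][j]
                        for k in range(k0, min(k0 + tile, n)):
                            acc += A[i][k] * B[k][j]
                        C[i][j] = acc
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert matmul_tiled(A, B, 2) == [[19.0, 22.0], [43.0, 50.0]]
```

In TVM, this tiling would be expressed as schedule primitives over a declarative compute description, rather than being written out by hand as above.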
from tvm.
@soumith I thought that investing FAIR work hours in NNPACK was like sponsoring. But it is OK if you meant that it is not officially sponsored by FAIR.
from tvm.
Yes, we did not sponsor a grant and say: give us NNPACK.
from tvm.
Yes, OK. So what I meant is that we would try to supersede libdnn and NNPACK at some point if we share these DSL kernels.
from tvm.
Yes, slowly and incrementally we can try to move the value into the TVM backend. It will happen over time. There's some systems research that needs to be done before we get there as well, so there's a little bit of uncertainty too.
from tvm.
Yes of course I was just talking about the "great design"
from tvm.
So are you trying to do what the TF team didn't want to do?
from tvm.
@soumith by collectives, do you mean different frameworks (like the ones we represent) sharing kernel code?
from tvm.
Can we put some of this info in a file so that we can close it?
from tvm.
Yes, let us have an FAQ file https://github.com/dmlc/tvm/blob/master/docs/faq.md
from tvm.