
Comments (16)

tqchen commented on June 29, 2024

Here is what a deep learning system stack looks like nowadays.

  • 1. Operator-level graph description language
    • Name whatever DL framework you care about
  • 2. Tensor-primitive-level graph description language
    • NNVM, HLO, nGraph
    • It is close enough to the first one that you can also build graph optimizations on the first layer and bypass this layer
  • 3. DSL for computation description and codegen
  • 4. Hard-coded, optimized kernel libraries like NNPACK, cuDNN, libdnn
  • 5. Device-dependent libraries

Most libraries go from 1 -> 4. An easy but restrictive path to compilation and fusion is going from 2 -> 4/5, by manually coding up fused kernels or by having rules that generate certain fused kernels. TVM sits at level 3, to make the jump from level 2 to level 5 easier and to give the user more control.
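
The following is a minimal, hedged sketch of what this level-3 DSL looks like, using TVM's `tvm.te` tensor-expression API (which postdates this discussion; exact names may differ across TVM versions): describe the computation once, apply loop transformations as a schedule, and generate code for a chosen backend.

```python
# Hedged sketch, assuming the classic tvm.te API: describe a computation,
# schedule its loops, then generate code for a target backend.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")   # computation description

s = te.create_schedule(C.op)                    # default loop nest over C
xo, xi = s[C].split(C.op.axis[0], factor=64)    # loop transformation: tile by 64
s[C].parallel(xo)                               # map the outer loop to CPU threads

mod = tvm.build(s, [A, B, C], target="llvm")    # change `target` to retarget
```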

In terms of design philosophy, we want it to work together with the existing ecosystem. This includes:

  • A friendly frontend that can be used directly for kernel generation
  • Giving the framework full control of memory allocation, graph execution, data layout, etc.
  • Generating DLPack-compatible kernels that every framework can take directly (see the sketch after this list)
  • Making use of blackbox calls like cuDNN when the user says so
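
As a hedged illustration of the DLPack point above (assuming PyTorch and TVM's DLPack interop, both of which postdate this thread), a TVM-generated kernel can consume framework tensors zero-copy, leaving memory allocation and layout under the framework's control:

```python
# Hedged sketch: a TVM-built vector-add kernel applied to PyTorch tensors
# through DLPack, without copying data.
import torch
import tvm
from tvm import te
from torch.utils import dlpack

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")
fadd = tvm.build(te.create_schedule(C.op), [A, B, C], target="llvm")

x, y, z = torch.rand(1024), torch.rand(1024), torch.empty(1024)

# Wrap the PyTorch tensors as TVM NDArrays sharing the same memory.
fadd(tvm.nd.from_dlpack(dlpack.to_dlpack(x)),
     tvm.nd.from_dlpack(dlpack.to_dlpack(y)),
     tvm.nd.from_dlpack(dlpack.to_dlpack(z)))

assert torch.allclose(z, x + y)   # the result landed in the framework's tensor
```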

I think we can expect all approaches in the stack to continue to exist. We are just designing a layer at level 3 that can incrementally transition toward automation while still transparently benefiting from things at level 4.

tqchen commented on June 29, 2024

They are orthogonal.

  • XLA is more high level, like NNVM: the developers of XLA need to define codegen and loop-transformation rules for each operator (much like writing a kernel), specifying how kernels are generated, and the system stitches the kernels together for you.
  • TVM is one level below: it provides common low-level primitives for describing the computation as well as the loop-transformation rules, and lets the user apply them. You can use these to implement something like XLA (with NNVM or another high-level graph description), or simply bypass the high-level description layer and use TVM directly in a framework. A sketch of hand-written fusion with these primitives follows.
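
As a hedged sketch of the difference (again assuming the later `tvm.te` API), the kind of fusion that XLA derives from per-operator rules can be expressed by hand with TVM's scheduling primitives:

```python
# Hedged sketch: fuse two elementwise ops by inlining the producer,
# so the intermediate tensor is never materialized.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")   # first elementwise op
C = te.compute((n,), lambda i: B[i] + 1.0, name="C")   # second elementwise op

s = te.create_schedule(C.op)
s[B].compute_inline()   # fusion: compute B inside C's loop instead of storing it

fused = tvm.build(s, [A, C], target="llvm")
```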

bhack commented on June 29, 2024

What will be the role of Fabian's libdnn and FAIR-sponsored NNPACK in this?

soumith commented on June 29, 2024

libdnn and NNPACK are both different; they can maybe be used as blackbox calls. (NNPACK is not FAIR-sponsored, it's just continued research/dev after FAIR.)

bhack commented on June 29, 2024

What is the goal here? Rewrite new kernels?

soumith commented on June 29, 2024

Write kernels in a new language that can be retargeted to multiple backends with great perf.
Folks can build languages or collectives to write kernels on top of TVM.
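
As a hedged sketch of that retargeting (using the later `tvm.te` API; a CUDA toolchain is assumed for the GPU build), the same computation can be compiled for different backends by changing the schedule and the build target:

```python
# Hedged sketch: one computation, two backends.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * A[i], name="B")

# CPU: hand the default loop nest to LLVM.
s_cpu = te.create_schedule(B.op)
cpu_mod = tvm.build(s_cpu, [A, B], target="llvm")

# GPU: same computation, but the loop must be bound to the CUDA thread hierarchy.
s_gpu = te.create_schedule(B.op)
bx, tx = s_gpu[B].split(B.op.axis[0], factor=64)
s_gpu[B].bind(bx, te.thread_axis("blockIdx.x"))
s_gpu[B].bind(tx, te.thread_axis("threadIdx.x"))
gpu_mod = tvm.build(s_gpu, [A, B], target="cuda")
```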

soumith commented on June 29, 2024

See the matrix-multiply or persistent-RNN examples, maybe?

bhack commented on June 29, 2024

@soumith I thought that investing FAIR work hours in NNPACK was like sponsoring it. But it is OK if you meant that it is not officially sponsored by FAIR.

soumith commented on June 29, 2024

Yes, we did not sponsor a grant and say: give us NNPACK.

bhack commented on June 29, 2024

Yes, OK. So what I meant is that we would try to supersede libdnn and NNPACK at some point if we share these DSL kernels.

soumith commented on June 29, 2024

Yes, slowly and incrementally we can try to move the value into the TVM backend. It will happen over time. There's some systems research that needs to be done before we get there as well, so there's a little bit of uncertainty too.

bhack commented on June 29, 2024

Yes, of course. I was just talking about the "great design".

bhack commented on June 29, 2024

So are you trying to do what the TF team didn't want to do?

edgarriba commented on June 29, 2024

@soumith by collectives do you mean different frameworks (like the ones we represent) sharing kernel code?

bhack commented on June 29, 2024

Can we put some of this info in a file so that we can close it?

tqchen commented on June 29, 2024

Yes, let us have an FAQ file https://github.com/dmlc/tvm/blob/master/docs/faq.md
