tvmdbg's Introduction

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes

TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out the TVM stack homepage for more information.

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learnt a lot from the following projects when building TVM.

  • Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learnt from and adapted parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration of symbolic scan operator for recurrence.

tvmdbg's People

Contributors

dayananda-v, joyalbin, pariksheetpinjari909, siju-samuel

tvmdbg's Issues

[DEBUG]TVMDBG -> Support a debug framework for TVM Runtime

OBJECTIVE
Support a debugging tool for TVM's computation graphs that provides access to internal graph structures, ops, and input and output values at TVM runtime.

In TVM's current computation-graph framework, computation after graph construction happens inside a Python function (graph_runtime.run). Basic Python debugging tools such as pdb cannot be used to debug graph_runtime.run because TVM's graph execution happens in the underlying C++ layer. C++ debugging tools such as gdb are not ideal either, because of their inability to recognise and organise stack frames and variables in a way relevant to TVM's operations, tensors and other graph constructs.

The runtime debugger will fulfil the objectives below.

  • Easy to enable: turn on debugging by setting a flag while creating the graph runtime.
  • Inspection of runtime op output values and node connections.

TODOs

  • Show a summary of the fused graph
  • Perform a debug run and show node details, including input & output tensors
  • Provide the flexibility to run without debug
  • Call the graph runtime n times from the UI
  • Check for NaN during computation and break
  • Check for Inf during computation and break
  • Support step debugging (step node by node over the graph)
  • Inject a specific graph node's value as a numpy array through the CLI and explicitly re-run the dependent nodes
  • Inject a graph node's value from a dump file through the CLI
  • Support dumping node outputs to a file
  • Support comparing a node's output with a dumped output
  • Support a profiler for performance debugging
  • Test framework for tvmdbg

Proposed API Changes
tvm.contrib.graph_runtime.create adds a new Boolean flag debug to make the runtime debuggable; this API is exposed to the user to enable or disable the debug functionality.
Two members, debug and dbgobj, are added to class GraphModule. The debug flag stores whether debugging is enabled for this module, and dbgobj holds the debug-runtime object (including the UI framework).
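As a rough sketch of the flag threading through create, here is a minimal, self-contained Python illustration; the DebugRuntime class and the body of create are hypothetical stand-ins for the real tvm.contrib.graph_runtime and its C++ backing, not the actual implementation:

```python
class DebugRuntime:
    """Hypothetical stand-in for the debug runtime (including the UI framework)."""
    def __init__(self, module):
        self.module = module


class GraphModule:
    def __init__(self, debug=False):
        # The two members proposed above: `debug` records whether debugging
        # is enabled, `dbgobj` holds the debug-runtime object (or None).
        self.debug = debug
        self.dbgobj = DebugRuntime(self) if debug else None


def create(debug=False):
    """Sketch of tvm.contrib.graph_runtime.create with the new Boolean flag."""
    return GraphModule(debug=debug)


mod = create(debug=True)
print(mod.debug, mod.dbgobj is not None)  # prints: True True
```

With debug=False (the default), dbgobj stays None and no debug machinery is attached, so existing callers are unaffected.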

tvm.contrib.graph_runtime.set_inputs is modified to pass the input data set from the script to the debug runtime when the debug flag is enabled.

tvm.contrib.graph_runtime.run is modified to invoke _debug_cli_run, which brings up the ncurses framework.
The ncurses framework waits for user input before the run operation. Once the user gives the input, runtime.GraphRuntime.DebugRun() in graph_runtime.cc is invoked if the user selects to run with debug; otherwise the usual runtime.GraphRuntime.Run() in graph_runtime.cc is invoked. DebugRun can execute a specific node only if all of its inputs are ready.
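The dispatch and the input-readiness check described above can be sketched in plain Python; the function names and graph representation here are hypothetical stand-ins, the real calls land in graph_runtime.cc:

```python
def can_debug_run(node, input_edges, ready):
    """DebugRun may execute a specific node only if all of its inputs are ready."""
    return all(ready.get(src, False) for src in input_edges.get(node, []))


def run(debug_enabled, user_chose_debug):
    """Sketch of the run dispatch: debug path vs. the usual path."""
    if debug_enabled and user_chose_debug:
        return "DebugRun"   # runtime.GraphRuntime.DebugRun()
    return "Run"            # runtime.GraphRuntime.Run()


# A node can be stepped only once its producers have run.
edges = {"conv": ["data"], "relu": ["conv"]}
ready = {"data": True, "conv": False}
print(can_debug_run("conv", edges, ready))  # prints: True
print(can_debug_run("relu", edges, ready))  # prints: False
```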
c_runtime_api.h is modified to add a new struct to hold the output information.

/*!
 * \brief Debug tensor holding the output of an operator and its timestamp.
 */
typedef struct {
  /*! \brief DL Tensor to collect the output. */
  DLTensor out_tensor;
  /*! \brief The timestamp of each output */
  int64_t time_stamp;
} TVMDbgTensor;

tvm.contrib.graph_runtime.set_debug_buffers: this new API is introduced to collect the run output of each node. In GraphRuntime, a new field std::vector<TVMDbgTensor*> debug_buffers_; is introduced to store the pointers to the output buffers.
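A Python analogue of the buffer collection may make the shape of the data clearer; DbgTensor mirrors the TVMDbgTensor struct above and DebugBuffers mirrors the debug_buffers_ vector, but both are illustrative stand-ins rather than the real bindings:

```python
import time


class DbgTensor:
    """Python analogue of the proposed TVMDbgTensor struct."""
    def __init__(self, out_tensor):
        self.out_tensor = out_tensor            # output data of one node
        self.time_stamp = time.monotonic_ns()   # when the output was captured


class DebugBuffers:
    """Sketch of set_debug_buffers: one slot appended per node output."""
    def __init__(self):
        self.debug_buffers_ = []                # mirrors std::vector<TVMDbgTensor*>

    def set_debug_buffer(self, out_tensor):
        self.debug_buffers_.append(DbgTensor(out_tensor))


bufs = DebugBuffers()
for out in ([1, 2], [3, 4]):
    bufs.set_debug_buffer(out)
print(len(bufs.debug_buffers_))  # prints: 2
```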

After each operation completes in runtime.GraphRuntime.DebugRun(), the output is copied to the debug buffer, and the outputs are dumped to a temporary directory. The UI framework reads these outputs from the temporary directory and shows them in the display.
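The dump-and-read handshake between the runtime and the UI could look like the following sketch; the file layout (one JSON file per node in a temp directory) is an assumption for illustration, not the format the issue specifies:

```python
import json
import os
import tempfile


def dump_node_output(dump_dir, node_name, output):
    """Runtime side: write one node's output where the UI can pick it up."""
    path = os.path.join(dump_dir, node_name + ".json")
    with open(path, "w") as f:
        json.dump(output, f)
    return path


def read_node_output(dump_dir, node_name):
    """UI side: read a dumped node output back for display."""
    with open(os.path.join(dump_dir, node_name + ".json")) as f:
        return json.load(f)


with tempfile.TemporaryDirectory() as d:
    dump_node_output(d, "fused_conv2d", [0.1, 0.2])
    print(read_node_output(d, "fused_conv2d"))  # prints: [0.1, 0.2]
```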

tvm.contrib.graph_runtime.inject_value is used to inject a node tensor value during execution.
Stepper functionality is supported to run the graph node by node.
The stepper is invoked with 'invoke_stepper' from 'tvm.tools.debug.wrapper.ui_framework' based on the user's run option.
invoke_stepper in tvm.tools.debug.wrapper.ui_wrapper creates a DebugStepper class (in tvm.tools.debug.ui.ui_stepper) for the stepper UI and handlers.
tvm.tools.debug.runtime.debug_runtime uses tvm.contrib.graph_runtime to create the stepper interfaces below:

  • step: perform step-by-step execution from the current node
  • goto: specify the node to be executed next; step will continue from this node
  • inject_value: inject a node tensor value during execution

A wrapper interface layer will be created in tvm.tools.debug.wrapper.ui_wrapper for the above interfaces.
Based on DebugStepper user events, the stepper runtime interfaces will be called through tvm.tools.debug.wrapper.ui_wrapper.
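The three stepper interfaces can be sketched over a list of nodes in execution order; the class and method names follow the issue, but the internals (a position index and a value dictionary) are hypothetical simplifications of the real DebugStepper:

```python
class DebugStepper:
    """Sketch of the stepper interfaces: step, goto, inject_value."""
    def __init__(self, nodes):
        self.nodes = list(nodes)   # graph nodes in execution order
        self.pos = 0               # index of the next node to execute
        self.values = {}           # node name -> output (or injected) value

    def step(self):
        """Execute the next node and advance past it."""
        node = self.nodes[self.pos]
        self.values.setdefault(node, "computed")
        self.pos += 1
        return node

    def goto(self, node):
        """Specify the node to execute next; step continues from there."""
        self.pos = self.nodes.index(node)

    def inject_value(self, node, value):
        """Inject a tensor value for a node during execution."""
        self.values[node] = value


s = DebugStepper(["data", "conv", "relu"])
s.step()                      # runs "data"
s.inject_value("conv", [42])  # override conv's output
s.goto("relu")
print(s.step())  # prints: relu
```

Injecting a value and then jumping with goto is how the "re-run the dependent nodes" TODO above would be exercised from the UI.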

The TVMDBG profiler can be used to profile the model at the level of TVM kernels.
The objective is to provide the execution time of each graph node and map it to its source in the TVM kernels.
This can be used to identify the time-consuming nodes and analyse their kernel source.
This helps identify the areas to analyse further for optimisation.
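A per-node timing pass of this kind can be sketched as follows; the profile_nodes function and the callable-per-node representation are assumptions for illustration, not the profiler's actual API:

```python
import time


def profile_nodes(node_fns):
    """Time each graph node and rank the most expensive ones first."""
    timings = {}
    for name, fn in node_fns:
        start = time.perf_counter()
        fn()                                   # execute the node's kernel
        timings[name] = time.perf_counter() - start
    # Sort so the most time-consuming nodes come first.
    return sorted(timings.items(), key=lambda kv: kv[1], reverse=True)


report = profile_nodes([
    ("fast_node", lambda: sum(range(10))),
    ("slow_node", lambda: sum(range(1_000_000))),
])
print(report[0][0])  # prints: slow_node
```

Mapping each ranked node back to its kernel source, as the objective above describes, would then point the user at the code worth optimising.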
