onnx-modifier

English | 简体中文

Introduction

To edit an ONNX model, one common way is to visualize the model graph and edit it with the ONNX Python API. This works, but we have to write code to edit, then visualize to check. The two steps may iterate many times, which is time-consuming. 👋
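For example, the code-based workflow typically looks like the following minimal sketch using the official onnx package (the file names and the commented-out edit are only placeholders):

import onnx

# Load the model and inspect the graph to find what to edit.
model = onnx.load("model.onnx")
for node in model.graph.node:
    print(node.op_type, node.input, node.output)

# Apply some edit, e.g. rename a (hypothetical) node output:
# model.graph.node[3].output[0] = "new_name"

# Save and re-open the file in a viewer to check the result.
onnx.save(model, "model_modified.onnx")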

What if there were a tool that allowed us to edit the model and preview the effect of the edits in a purely visual fashion?

That is where onnx-modifier comes in. With it, we can focus on editing the model graph in the visualization panel, and all the editing information is summarized and processed by the Python ONNX API automatically at the end. Our time can be saved! 🚀

onnx-modifier is built on the popular network viewer Netron and the lightweight web application framework Flask.

Currently, the following editing operations are supported:

✅ Delete/recover nodes
✅ Rename the node inputs/outputs
✅ Rename the model inputs/outputs
✅ Add new model outputs
✅ Edit the attribute of nodes
✅ Add new nodes
✅ Change batch size
✅ Edit model initializers

Here is the update log and TODO list.

Hope it helps!

Getting started

There are currently three ways to launch onnx-modifier.

launch from command line

Clone the repo and install the required Python packages by

git clone git@github.com:ZhangGe6/onnx-modifier.git
cd onnx-modifier

pip install -r requirements.txt

Then run

python app.py

Click the URL shown in the Flask output (http://127.0.0.1:5000/ by default), and onnx-modifier will be launched in the web browser.

launch from executable file

  • Windows: Download onnx-modifier.exe (27.6 MB) from Google Drive / Baidu NetDisk, double-click it and enjoy.
    • The Edge browser is used as the runtime environment by default.

I recorded how I made the executable file in app_desktop.py. Executable files for other platforms are left for future work.

launch from a docker container


We build the Docker image like this:

git clone git@github.com:ZhangGe6/onnx-modifier.git
cd onnx-modifier
docker build --file Dockerfile . -t onnx-modifier

After building the image, we run onnx-modifier by mapping the Docker port and a local folder modified_onnx:

mkdir -p modified_onnx
docker run -d -t \
  --name onnx-modifier \
  -u $(id -u ${USER}):$(id -g ${USER}) \
  -v $(pwd)/modified_onnx:/modified_onnx \
  -p 5000:5000 \
  onnx-modifier

Then we can access onnx-modifier at http://127.0.0.1:5000. The modified ONNX models can be found inside the local modified_onnx folder.

Click Open Model... to upload the ONNX model to edit. The model will be parsed and shown on the page.

Usage

Graph-level operation elements are placed at the top-left of the page. Currently, there are three buttons: Reset, Download and Add node. They do the following:

  • Reset: Reset the whole model graph to its initial state;
  • Download: Save the modified model to disk. Note the two checkboxes on the right:
    • (experimental) select shape inference to run shape inference when saving the model (see the sketch after this list).
      • The shape inference feature is built on onnx-tool, a powerful third-party ONNX tool.
    • (experimental) select clean up to remove unused nodes and tensors (like ONNX GraphSurgeon).
  • Add node: Add a new node into the model.
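As a point of reference for what the shape inference checkbox does, the official onnx package exposes a similar utility; onnx-modifier itself builds on onnx-tool, so the snippet below is only an illustration of the idea:

import onnx
from onnx import shape_inference

model = onnx.load("model.onnx")
# Annotate the graph with inferred intermediate tensor shapes.
inferred = shape_inference.infer_shapes(model)
onnx.save(inferred, "model_inferred.onnx")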

Node-level-operation elements are all in the sidebar, which can be invoked by clicking a specific node.

Let's take a closer look.

Delete/recover nodes

There are two modes for deleting nodes: Delete With Children and Delete Single Node. Delete Single Node deletes only the clicked node, while Delete With Children also deletes all the nodes rooted at the clicked node, which is convenient and natural if we want to delete a long chain of nodes.

The implementation of Delete With Children is based on the backtracking algorithm.
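Conceptually, finding the nodes to delete amounts to walking the graph downstream from the clicked node. The following is a minimal sketch of that idea using the onnx graph structure (not the project's actual implementation):

import onnx

def collect_children(graph, start_node):
    # Map each tensor name to the nodes that consume it.
    consumers = {}
    for node in graph.node:
        for name in node.input:
            consumers.setdefault(name, []).append(node)

    # Depth-first walk from the clicked node, following its output tensors.
    to_delete, stack, seen = [], [start_node], set()
    while stack:
        node = stack.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        to_delete.append(node)
        for out in node.output:
            stack.extend(consumers.get(out, []))
    return to_delete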

For previewing, the deleted nodes are shown in grey at first. If a node is deleted by mistake, the Recover Node button can bring it back into the graph. Click the Enter button to apply the deletion, and the updated graph will be shown on the page automatically.

The following figure shows a typical deleting process:

Rename the node inputs/outputs

By changing the input/output name of nodes, we can change the model forward path. It can also be helpful if we want to rename the model output(s).

Using onnx-modifier, we can achieve this by simply entering a new name for a node input/output in its corresponding input placeholder. The graph topology is updated automatically and instantly according to the new names.

For example, suppose we want to remove the preprocessing operators (Sub->Mul->Sub->Transpose) shown in the following figure. We can:

  1. Click on the 1st Conv node, rename its input (X) as serving_default_input:0 (the output of node data_0).
  2. The model graph is updated automatically and we can see that the input node now links to the 1st Conv directly. In addition, the preprocessing operators have been detached from the main path. Delete them.
  3. We are done! (click Download, then we can get the modified ONNX model).

Note: To link node $A$ (data_0 in the above example) to node $B$ (the 1st Conv in the above example), it is suggested to edit the input of node $B$ to the output of node $A$, rather than editing the output of node $A$ to the input of node $B$, because the input of $B$ may also be the output of another node (Transpose in the above example), and giving $A$'s output the same name would lead to unexpected results.
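For reference, the same edit expressed directly with the ONNX Python API might look like the following sketch (the tensor name follows the example above; the file names are placeholders):

import onnx

model = onnx.load("model.onnx")

# Point the first Conv node's data input at the model input tensor,
# detaching the Sub->Mul->Sub->Transpose preprocessing chain from the main path.
first_conv = next(n for n in model.graph.node if n.op_type == "Conv")
first_conv.input[0] = "serving_default_input:0"

onnx.save(model, "model_modified.onnx")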

The process is shown in the following figure:

Rename the model inputs/outputs

Click the model input/output node, type a new name in the sidebar, then we are done.

rename_model_io
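Doing the same rename by hand with the ONNX Python API requires updating every reference to the old name, which is what onnx-modifier takes care of for us. A rough sketch for a model input, with hypothetical names:

import onnx

def rename_model_input(model, old_name, new_name):
    # Rename the graph input itself ...
    for graph_input in model.graph.input:
        if graph_input.name == old_name:
            graph_input.name = new_name
    # ... and every node input that refers to it.
    for node in model.graph.node:
        for i, name in enumerate(node.input):
            if name == old_name:
                node.input[i] = new_name

model = onnx.load("model.onnx")
rename_model_input(model, "data_0", "input")  # hypothetical names
onnx.save(model, "model_renamed.onnx")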

Add new model outputs

Sometimes we want to add/extract the output of a certain node as a model output. For example, we may want to add a new model output after the old one was deleted, or extract an intermediate layer's output for fine-grained analysis. In onnx-modifier, we can achieve this by simply clicking the Add Output button in the sidebar of the corresponding node. A new model output node is then attached after that node, with the same name as the node's output.

In the following example, we add 2 new model outputs, which are the outputs of the 1st Conv node and 2nd Conv node, respectively.

add_new_outputs
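For reference, exposing an intermediate tensor as a model output with the plain ONNX API can be done roughly as follows (the tensor name is hypothetical; the dtype/shape can also be filled in via shape inference):

import onnx
from onnx import helper, TensorProto

model = onnx.load("model.onnx")

# Declare the intermediate tensor as an additional graph output.
new_output = helper.make_tensor_value_info("conv1_output", TensorProto.FLOAT, None)
model.graph.output.append(new_output)

onnx.save(model, "model_with_extra_output.onnx")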

Edit the attribute of nodes

Change the original attribute to a new value, then we are done.

By clicking the + on the right side of the placeholder, we can get some helpful reference information.
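At the API level, editing an attribute corresponds to modifying the node's AttributeProto; a minimal sketch (the node type and attribute are hypothetical):

import onnx

model = onnx.load("model.onnx")

# Change the 'alpha' attribute of a (hypothetical) LeakyRelu node.
node = next(n for n in model.graph.node if n.op_type == "LeakyRelu")
for attr in node.attribute:
    if attr.name == "alpha":
        attr.f = 0.2  # float attributes are stored in the .f field

onnx.save(model, "model_modified.onnx")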

Add new node

Sometimes we want to add new nodes into the existing model. onnx-modifier now supports this feature experimentally.

Note the Add node button, followed by a selector element, on the top-left of the index page. Adding a node takes just three steps (an API-level sketch follows the notes below):

  1. Choose a node type in the selector, and click Add node button. Then an empty node of the chosen type will emerge on the graph.

    The selector contains all the supported operator types in domains of ai.onnx(171), ai.onnx.preview.training(4), ai.onnx.ml(18) and com.microsoft(1).

  2. Click the new node and edit it in the invoked sidebar. What we need to fill in are the node Attributes (undefined by default) and its Inputs/Outputs (which decide where the node will be inserted in the graph).

  3. We are done.

The following are some notes for this feature:

  1. By clicking the ? in the NODE PROPERTIES -> type element, or the + in each Attribute element, we can get some reference to help us fill the node information.

  2. It is suggested to fill in all of the Attributes rather than leaving them as undefined. Default values may not be supported well in the current version.

  3. For Attributes of type list, separate items with ',' (comma). Note that brackets ([]) are not needed.

  4. For Inputs/Outputs of type list, at most 8 elements are allowed in the current version. If the actual number of inputs/outputs is less than 8, leave the unused items with names starting with list_custom, and they will be omitted automatically.
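At the API level, adding a node corresponds roughly to the following sketch (the tensor names are hypothetical, and this is not the project's actual implementation):

import onnx
from onnx import helper

model = onnx.load("model.onnx")

# Create a new Relu node between two existing (hypothetical) tensors.
new_node = helper.make_node(
    "Relu",
    inputs=["conv1_output"],
    outputs=["relu1_output"],
    name="custom_relu_1",
)
model.graph.node.append(new_node)
# Note: graph.node is expected to stay topologically sorted,
# so a real insertion may need to place the node at the right index.

onnx.save(model, "model_with_new_node.onnx")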

Change batch size

onnx-modifier now supports editing the batch size. Both Dynamic batch size and Fixed batch size modes are supported.

  • Dynamic batch size: Click the Dynamic batch size button, and we get a model that supports dynamic-batch-size inference;
  • Fixed batch size: Input the fixed batch size we want, then we are done.
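Under the hood, a dynamic batch dimension in ONNX is usually expressed by giving the first dimension a symbolic dim_param instead of a fixed dim_value. A minimal sketch of that idea (not necessarily onnx-modifier's exact implementation, which also needs to handle outputs and shape-sensitive nodes):

import onnx

model = onnx.load("model.onnx")

# Replace the first (batch) dimension of every graph input with a symbolic name.
for graph_input in model.graph.input:
    dim0 = graph_input.type.tensor_type.shape.dim[0]
    dim0.dim_param = "batch_size"  # dim_value and dim_param share a oneof field

onnx.save(model, "model_dynamic_batch.onnx")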

Note the differences between fixed batch size inference and dynamic batch size inference, as this blog illustrates:

  • When running a model with only fixed dimensions, the ONNX Runtime will prepare and optimize the graph for execution when constructing the Inference Session.
  • When the model has dynamic dimensions such as batch size, the ONNX Runtime may instead cache optimized graphs for specific batch sizes when inputs are first encountered for that batch size.

Edit model initializers

Sometimes we want to edit the values which are stored in model initializers, such as the weight/bias of a convolution layer and the shape parameter of a Reshape node. onnx-modifier supports this feature now! Input a new value for the initializer in the invoked sidebar and click Download, then we are done.

Note: For a newly added node, we should also specify the datatype of the initializer. (If we are not sure what the datatype is, clicking NODE PROPERTIES->type->? may give some clues.)
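At the API level, this corresponds to rewriting the initializer tensor; a minimal sketch using onnx.numpy_helper (the initializer name and the new values are hypothetical):

import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")

# Find a (hypothetical) initializer by name and overwrite its values.
for init in model.graph.initializer:
    if init.name == "conv1.bias":
        values = numpy_helper.to_array(init)
        new_values = np.zeros_like(values)  # e.g. zero out the bias
        init.CopyFrom(numpy_helper.from_array(new_values, name=init.name))

onnx.save(model, "model_modified.onnx")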

Sample models

For quick testing, some typical sample models are provided below. Most of them are from the ONNX Model Zoo.

onnx-modifier is under active development 🛠. Welcome to use, create issues and pull requests! 🥰

Credits and referred materials

