
sparta's Introduction

SparTA Getting Started

This repo is under active development. We are working to make it stable and user-friendly, but it is not surprising to run into problems at this stage. Please open an issue or contact the authors when you need help.

SparTA is an end-to-end system for harvesting speedups from model sparsity.

Installation

SparTA depends on the user's local CUDA environment.

You can install SparTA with pip as shown below (installation from PyPI is coming soon):

pip install git+https://github.com/microsoft/SparTA.git

or

git clone git@github.com:microsoft/SparTA.git
pip install ./SparTA

Please make sure that your CUDA version matches the version used to compile the PyTorch binaries. If you run into CUDA or nvcc version issues, the following snippet may help you verify the environment.

import os
import torch
import pycuda.driver

if torch.cuda.is_available():
    os.system('nvcc --version')                 # CUDA toolkit (nvcc) version
    print(torch.version.cuda)                   # CUDA version PyTorch was built with
    print(pycuda.driver.get_version())          # CUDA version PyCUDA was built against
    print(pycuda.driver.get_driver_version())   # installed CUDA driver version

Usage

Tune a sparse operator

import torch
import sparta

batch_size, in_features, out_features = 1024, 1024, 1024
sparsity = 0.9
granularity = (8, 8)

# prepare data
x = torch.rand((batch_size, in_features), device='cuda')
weight = torch.rand((out_features, in_features), device='cuda')
bias = torch.rand((out_features, ), device='cuda')

# generate and apply weight mask
mask = sparta.testing.block_mask(weight.shape, granularity, sparsity, device='cuda')
weight = torch.mul(weight, mask)

# create a dense operator
dense_linear = torch.nn.Linear(in_features, out_features, device='cuda')
dense_linear.load_state_dict({'weight': weight, 'bias': bias})

# create a sparse operator
sparse_linear = sparta.nn.SparseLinear(dense_linear, weight_mask=mask)

# tune the sparse operator
best_config = sparta.nn.tune(sparse_linear, sample_inputs=[x], max_trials=10, algo='rand')

# check if the sparse operator runs correctly
torch.testing.assert_close(sparse_linear(x), dense_linear(x))
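
If you want a rough sense of the gain, a minimal latency comparison against the dense baseline can be done with plain PyTorch CUDA events. This reuses dense_linear, sparse_linear and x from the example above; the warm-up and iteration counts are arbitrary, not an official benchmark.

def measure_latency(op, inputs, warmup=10, iters=100):
    # warm up to exclude one-time compilation and caching costs
    for _ in range(warmup):
        op(inputs)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        op(inputs)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per call

print('dense :', measure_latency(dense_linear, x), 'ms')
print('sparse:', measure_latency(sparse_linear, x), 'ms')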

Build a sparse operator with specified config

# create a sparse operator
sparse_linear = sparta.nn.SparseLinear(dense_linear, weight_mask=mask)

# build the sparse operator with the `best_config` we got before.
sparse_linear.build(best_config['root'], sample_inputs=[x])
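
If you want to reuse a tuned configuration across runs, one option is to persist best_config to disk. This is only a sketch: it assumes the returned config is a plain dict of JSON-serializable values, which may not hold for every operator (JSON also turns tuples into lists), and the file name is hypothetical.

import json

# save the tuned configuration
with open('sparse_linear_config.json', 'w') as f:
    json.dump(best_config, f)

# later: reload it and rebuild the operator without re-tuning
with open('sparse_linear_config.json') as f:
    best_config = json.load(f)
sparse_linear = sparta.nn.SparseLinear(dense_linear, weight_mask=mask)
sparse_linear.build(best_config['root'], sample_inputs=[x])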

Citing SparTA

If SparTA is helpful in your projects, please cite our paper:

@inproceedings {SparTA2022,
    author = {Ningxin Zheng and Bin Lin and Quanlu Zhang and Lingxiao Ma and Yuqing Yang and Fan Yang and Yang Wang and Mao Yang and Lidong Zhou},
    title = {SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute},
    booktitle = {16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)},
    year = {2022},
    isbn = {978-1-939133-28-1},
    address = {Carlsbad, CA},
    pages = {213--232},
    url = {https://www.usenix.org/conference/osdi22/presentation/zheng-ningxin},
    publisher = {USENIX Association},
    month = jul,
}

Contributor License Agreement

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

sparta's People

Contributors

microsoft-github-operations[bot], microsoftopensource, mydmdm, starmys, zheng-ningxin

sparta's Issues

[Error] Request for a docker file

Hi, it appears that I am encountering a GCC error when attempting to install your project. The error message reads: gcc: error trying to exec 'cc1plus': execvp: No such file or directory. It seems that the issue may be caused by pycuda. Do you have any recommended containers that I could use to quickly run the code?

All trials failed!

I keep getting a '[root] All trials failed.' error for every example I try to run. The package installed properly and everything works fine until it reaches best_config = sparta.nn.tune(...); then I get this error.

[design] Specializer design

updated

The SparTA specializer generates customized code for sparse operators.

Highlighted differences from the existing implementation

  1. use jinja2 to handle templates
  2. use the PyTorch JIT C++ extension to compile the generated kernel code, load the kernel function, and test it via numpy (torch) tensors
  3. decouple the TeSA transform (e.g., block coverage) from the computing code emitter
  4. define a TeSA class and make it extensible to quantization formats and diverse storage layouts (e.g., CSR, BCSR)
  5. tune performance at the operator level rather than at the kernel-function level
  6. keep the sparse operator class clean; for example, let an external tuner hold the tuning history

Known limits

  1. the existing base class only supports a single transform and emitter; we may need a nested search space and a decorator to define a pipeline with multiple transform or emitter choices
  2. using a sparse operator as a submodule is not supported yet, but I believe the current interface is compatible (that is why I use op-level tuning)
  3. to simplify the burden on the tuner, we could support only value choice (selecting from a list or set)

Sparse operators based on a single TeSA transform and emitter

SparTA offers some basic sparse operators, e.g., sparse linear and sparse multi-head self-attention. These operators are built on the efficient sparse computing library (templates).

from torch.utils.cpp_extension import load

__kernel_binding__ = None # must run op.specialize() first

class DynamicSparseLinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(...): 
        return __kernel_binding__.forward_func(...)
    @staticmethod
    def backward(...):
        return __kernel_binding__.backward_func(...)

class DynamicSparseLinear(nn.Linear, SparseOpBase):

    def __init_specialization__(self, ...):
        self.tesa_transform = BlockCoverage
        self.emitter = TemplateEmitterBase(templates=['operators/dynamic_sparse_linear.cu', 'kernels/sparta_sparse_matmul.cu'])
        self.search_space = SearchSpace({ # we could support only value choice first
            'transform_params': ValueChoice((32,32), (32,64), (64,32), (64,64)), # a value choice is a list / set of candidate values
            'emitter_params': SearchSpace({
                'BLOCK_SIZE_M_VALUE': ValueChoice(8,16,32,64,128),
                #...
            })
        })

    def forward(self, ...):
        return DynamicSparseLinearFunction.apply(...)
    
    def specialize(self, tesa: dict, sample: TemplateParam):
        transform_params, emitter_params = ..(sample)..
        self.weight_tesa = self.tesa_transform(tesa['weight'], **transform_params)
        # meta information such as BCSR format (block size), number of non-zero block, indexes (even bidirectional) are embedded
        emitter_params.update(...) # update according to tesa
        files = self.emitter.code_gen(emitter_params)
        global __kernel_binding__  # rebind the module-level kernel binding
        __kernel_binding__ = load(name, sources=files)
        return self

    def performance_optimization(self, tesa: dict, tuner_cls: type(Tuner), evaluator: Evaluator):
        tuner = tuner_cls(self.search_space)
        while not tuner.completed():
            samp = tuner.get_param()
            self.specialize(tesa, samp)
            metric = evaluator(self) # forward or backward step latency
            tuner.receive_metric(metric)
        return self.specialize(tesa, tuner.best_param()) 

class SparseWeightLinearEvaluator:

    def __init__(self, in_features, out_features):
        self.ref_op = nn.Linear(in_features, out_features)
        self.inputs = ...

    def evaluate(self, spop: SparseWeightLinear):
        pass

# usage example
weight_mask = ..Tensor..
weight_tesa = TeSAUtils.from_mask(weight_mask)
splinear = SparseWeightLinear(in_dim, out_dim)
splinear.tune_specialize({'weight': weight_tesa}, RandomSearchTuner, SparseWeightLinearEvaluator(in_dim, out_dim).evaluate)
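
For illustration, a minimal random-search tuner that matches the interface assumed by performance_optimization above (get_param, receive_metric, completed, best_param) could look like the sketch below. It is hypothetical, not the actual SparTA/NNI tuner, and it assumes the search space is given as a flat dict mapping parameter names to lists of candidate values.

import random

class SimpleRandomSearchTuner:
    def __init__(self, search_space: dict, max_trials: int = 10):
        self.search_space = search_space   # e.g. {'BLOCK_SIZE_M_VALUE': [8, 16, 32, 64, 128], ...}
        self.max_trials = max_trials
        self.history = []                  # list of (sample, metric) pairs

    def completed(self) -> bool:
        return len(self.history) >= self.max_trials

    def get_param(self) -> dict:
        # draw one candidate value for every parameter in the search space
        self._pending = {name: random.choice(candidates)
                         for name, candidates in self.search_space.items()}
        return self._pending

    def receive_metric(self, metric: float):
        self.history.append((self._pending, metric))

    def best_param(self) -> dict:
        # lower metric (e.g. latency) is better
        return min(self.history, key=lambda pair: pair[1])[0]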

TeSA and transform

import torch
from dataclasses import dataclass
from typing import List

@dataclass
class TeSA:
    mask: torch.Tensor  # int tensor
    formats: List[str | dict]
    layout: dict = None  # or name it as storage

class TeSAUtils:
    @staticmethod
    def from_mask(mask: torch.Tensor, formats: list = None):
        return TeSA(mask=mask, layout={'layout': 'dense', 'meta': None},
                    formats=['float32', 'pruned'] if formats is None else formats)

The mask is an integer tensor that indicates each element's format; its values range from 0 to #formats - 1.
The formats field is the list of registered formats corresponding to the integer values in mask. For example, common settings for pruning and for pruning plus quantization could look like:

pruning_tesa = TeSA(..mask{0,1}.., ['float32', 'pruned'])
pruning_quant_tesa = TeSA(..mask{0,1}.., [{'format': 'int8', 'scale':.., 'zero-point': 0}, 'pruned'])
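
A runnable version of the pruning case, reusing the TeSA dataclass above. The random {0, 1} mask is only for illustration; value 0 selects formats[0] ('float32') and value 1 selects formats[1] ('pruned').

import torch

mask = (torch.rand(64, 64) > 0.5).int()   # integer mask with values in {0, 1}
pruning_tesa = TeSA(mask=mask,
                    formats=['float32', 'pruned'],
                    layout={'layout': 'dense', 'meta': None})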

The layout is {'layout': 'dense', 'meta': None} by default, which means the tensor is stored in dense format. After a transformation (or conversion), it can be changed to a concrete sparse format such as:

{
    'layout': 'BCSR',
    'meta': {
        'block': (32,32),
        'index': ..numpy.array.., # optional
        'transpose_index': ..numpy.array.., # optional
    } 
}
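
To make the transformation concrete, here is a rough sketch of how BCSR-style index metadata could be derived from an element-wise mask. This is not SparTA's actual converter; it assumes the mask dimensions are divisible by the block size.

import torch

def bcsr_meta(mask: torch.Tensor, block=(32, 32)):
    bh, bw = block
    h, w = mask.shape
    # reduce the element-wise mask to a block-wise occupancy map:
    # a block is kept if it contains at least one non-zero element
    occupancy = mask.reshape(h // bh, bh, w // bw, bw).amax(dim=(1, 3)) > 0
    row_ptr = torch.cat([torch.zeros(1, dtype=torch.int64),
                         occupancy.sum(dim=1).cumsum(0)])
    col_idx = occupancy.nonzero()[:, 1]
    return {'block': block,
            'index': {'row_ptr': row_ptr, 'col_idx': col_idx}}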

Emitter

The emitter class generates kernel code for specific functions (e.g., sparse matrix multiplication) from a given template or from scheduling primitives.

Here is an example of a template-based emitter (via jinja2).

from jinja2 import Template

class TemplateEmitterBase:

    def __init_template__(self, template_names: List[str], compile_opts: List[str]):
        self.templates = {t: Template(..path/to/t..) for t in template_names}
        self.compile_opts = compile_opts

    def code_gen(self, params: dict):
        src_code = {k: v.render(**params) for k, v in self.templates.items()}
        files = .. write to disk ..
        return files

In fact we have two kinds of templates: the computing kernels and the operator interfaces. We could separate them into folders like:

templates
    |- kernels
        |- sparta_sparse_matmul.cu
        |- ...
    |- operators
        |- sparse_weight_linear.cu
        |- sparse_msa.cu

We could leverage the template engine to write complicated templates, for example, a matmul with or without a bias input, as below:

__global__ void BLOCK_SPARSE_MATMUL_BIAS(
    float* A, float* W_val, int* W_row, int* W_col,
    float* C, 
{% if USE_BIAS %}
    float *bias, 
{% endif %}
    int M, int K, int N){
        ...
    }

or we could let the template select different computing kernels according to the block size:

void dynamic_forward_function(float* activation, int* row_ptr, int* col_idx,
                    float * val, float* bias, int M, int K, int N, int block_h, int block_w, float* output)
{
    dim3 gridDim(N/{{ BLOCK_SIZE_N }}, M/{{ BLOCK_SIZE_M }});
    dim3 blockDim({{ BLOCK_SIZE_N }} / {{ THREAD_SIZE_N }}, {{ BLOCK_SIZE_M }}/{{ THREAD_SIZE_M }});

    {% if (BLOCK_SIZE_N == 32) and (BLOCK_SIZE_M == 32) %}
    ...efficient 32x32 kernel...
    {% else %}
    ...general kernel...
    {% endif %}
}
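
To make the rendering step concrete, here is a minimal jinja2 sketch. The inline template string stands in for the .cu template files above, and the parameter values are arbitrary.

from jinja2 import Template

kernel_tpl = Template('''
dim3 gridDim(N / {{ BLOCK_SIZE_N }}, M / {{ BLOCK_SIZE_M }});
dim3 blockDim({{ BLOCK_SIZE_N }} / {{ THREAD_SIZE_N }}, {{ BLOCK_SIZE_M }} / {{ THREAD_SIZE_M }});
{% if BLOCK_SIZE_N == 32 and BLOCK_SIZE_M == 32 %}
// ...efficient 32x32 kernel...
{% else %}
// ...general kernel...
{% endif %}
''')

print(kernel_tpl.render(BLOCK_SIZE_N=32, BLOCK_SIZE_M=32,
                        THREAD_SIZE_N=4, THREAD_SIZE_M=4))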

Installation Failed

I followed the README to install SparTA, but it failed even though there were no errors during the installation.
My CUDA, PyTorch, and PyCUDA versions are all matched to 11.8.

But when I run the 'Tune a sparse operator' example from the README, I still get an error: ModuleNotFoundError: No module named 'sparse_moe_cpp'. I don't know what's wrong with that.

Version 1.0-alpha Iteration Plan

  • Estimated release date:
    • public preview (alpha): 9/1
    • public preview (beta): 9/30
    • refactor kernels & TeSA: 10/15

P0

  • Tune more steps to show the speedup gain of the PyTorch sparse modules
  • Support the openai kernel/template
  • code review
  • Usage Interface(8.19) (update one version on 8.26)
  • Fix triton speed(8.19)
  • Sparse Softmax Kernel
  • Biased OpenAI MatMul Kernel
  • finegrained 99% + block size 8x8 95% + block size 32 x 32
  • Documentation (test)
  • package data (test)
  • sparta.tune(): hook, set search space
  • Fix sparse softmax
  • Integration test/example: Linear, Softmax
  • Fix JIT latency
  • Read the docs
  • SparTA DDS MatMul kernel
  • Batch MatMul & Softmax
  • Sparse Attention
  • Add sparse matmul kernel: transpose_A
  • Functional
  • Support backward
  • Add performance test: Compare with Triton 1.1.2 (Upload test scripts)
  • Test current tuner
  • Test Sparse Attention
  • Update kernel pycuda interface
  • Profile Layout converting
  • Construct sparse attention op with linear & softmax ops
  • Beta version: docs, docstrings & examples
  • Test on V100; backward
  • Fix kernel output
  • Module tuner: get combined search space of connected ops automatically
  • Connect to NNI's new tuner

P1

  • Apply roller's rules
  • Support multi-process tuning
  • BCSR kernel: convert(), inverse(), swapaxes(), sum(), rebuild TeSA Converter when set_mask()
  • Auto converter: support value mask in matmul kernels
  • PyCUDA device context register & operator.to() (multiple cards)
  • Support the multiple sparse formats: sdd dsd, dds for linear
  • Support the block quantization kernel/fp16/bf16
  • Compare Sparse Softmax with Triton's Sparse Softmax and keep improving.
  • unit tests
  • Model tuning interface / documents / examples
  • Common mask patterns
  • Refactor TeSA (Meta, linter)
  • Fuse layout converting into kernels

P2

  • Support the offline LUT or the kernel cache/DB

requirements.txt and README may cause build error

The second way to build SparTA in the Getting Started part of the README is wrong.
The correct way to build it from source may be this:

git clone https://github.com/microsoft/SparTA.git
cd SparTA
pip install -r requirements.txt
python setup.py install

and there is a bug in requirements.txt: it is missing the package ninja.
If ninja is not installed, it will cause an undefined symbol: xxxxx error in the compiled .so file.

v0.1 Iteration Plan

Work Items

Propagation

  • Support propagation for SparTA (import from NNI, or move the speedup logic into SparTA?). 6.1 @zheng-ningxin; need to discuss the limitations of the current implementation

Codegen

  • Block Sparse Matmul Code generation
    • Template (with/without fusion); formalize the YAML format
    • Emitter for different frameworks/libraries
    • Transformation policy()
    • Environment & measurement
    • Parallel process pool
  • Finegrained
  • Block quantization
  • Block+finegrained

Test

  • Block-sparse matmul end-to-end code generation
