
0️⃣1️⃣🤗 BitNet-Transformers: A Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch, using the Llama(2) architecture

BitNet Architecture

[Figure: BitNet architecture diagram, from the paper]
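The paper's central building block is BitLinear, a drop-in replacement for nn.Linear that binarizes its weights to -1/+1 in the forward pass. Below is a minimal sketch of that idea (a straight-through estimator over sign-binarized weights with a mean-absolute scale; activation quantization is omitted for brevity). It is an illustration of the technique, not the exact layer in bitnet_llama/modeling_llama.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Linear):
    def forward(self, x):
        w = self.weight
        alpha = w.mean()               # centering term from the paper
        beta = w.abs().mean()          # per-tensor scale beta = mean(|W|)
        w_bin = torch.sign(w - alpha)  # binarize to -1/+1
        # Straight-through estimator: the forward pass uses the binarized
        # weights, while gradients flow into the full-precision weights.
        w_q = w + (w_bin - w).detach()
        return F.linear(x, w_q, self.bias) * beta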

Prepare Dev env

# Clone this repo
git clone https://github.com/beomi/bitnet-transformers
cd bitnet-transformers

# Install requirements
pip install -r clm_requirements.txt

# Clone transformers repo
git clone https://github.com/huggingface/transformers
pip install -e transformers

# Update Llama(2) model
rm ./transformers/src/transformers/models/llama/modeling_llama.py
ln -s $(pwd)/bitnet_llama/modeling_llama.py ./transformers/src/transformers/models/llama/modeling_llama.py

The commands above replace the stock modeling_llama.py in transformers with a symlink to bitnet_llama/modeling_llama.py. Because the file is symlinked, any change made to it is immediately reflected in the transformers repo.

Train Wikitext-103

Training loss graph when training BitLLAMA on Wikitext-103

You can track training metrics via wandb.

./train_wikitext.sh
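If you have not authenticated with Weights & Biases yet, you can log in once from Python before launching the script (this assumes wandb is pulled in by clm_requirements.txt; install it manually otherwise):

import wandb
wandb.login()  # prompts for your API key on first use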

GPU Mem Usage Comparison

Train Config

  • Batch size: 1
  • Gradient accumulation: 1
  • Seq length: 2048
  • Model: LLamaForCausalLM with BitLinear layer
  • Model size: 47,452,672 parameters (≈47.5M); see the footprint estimate below
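As a sanity check on the numbers below, you can count parameters and compute the raw bytes a given storage format needs (hypothetical helper; the measured figures are higher because they also include CUDA context, buffers, and non-weight tensors):

# Hypothetical helper: raw weight storage for a given bits-per-weight.
def weight_footprint(model, bits_per_weight):
    n_params = sum(p.numel() for p in model.parameters())
    return n_params, n_params * bits_per_weight / 8 / 2**20  # MiB

# For 47,452,672 parameters (raw weights only):
#   16 bit -> ~90.5 MiB, 8 bit -> ~45.3 MiB, 1 bit -> ~5.7 MiB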

Original LLAMA - 16bit

  • Uses 250MB of GPU memory for model weights

BitLLAMA - Mixed 16bit

  • Uses 200MB of GPU memory for model weights
  • Uses bf16 (or fp16) to store the master model weights
  • Uses int8 to store the -1/+1 1-bit weights
  • Uses more memory during training than the original LLAMA, since the 1-bit and 16-bit weights are kept side by side (see the sketch below)
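A rough illustration of that overhead, using a single hypothetical 4096×4096 matrix: during mixed-16bit training both copies live in memory at once.

import torch

master = torch.randn(4096, 4096, dtype=torch.bfloat16)  # 32 MiB
sign_copy = torch.sign(master).to(torch.int8)            # +16 MiB, values -1/+1
# Both tensors coexist during training, hence the extra footprint.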

BitLLAMA - 8bit

  • Uses 100MB of GPU memory for model weights
  • Converts to bf16 (or fp16) on the fly when needed (see the sketch below)
  • Uses int8 to store the 1-bit BitLinear weights as well as the other weights
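A minimal sketch of that on-the-fly conversion, under the assumption that each BitLinear weight is kept as an int8 sign matrix plus a scalar scale (names hypothetical):

import torch
import torch.nn.functional as F

def bit_forward(x, w_int8, scale, bias=None):
    w = w_int8.to(torch.bfloat16)  # dequantize only for the matmul
    return F.linear(x, w, bias) * scale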

BitLLAMA - 1bit (TBD)

  • Converts to bf16 (or fp16) on the fly when needed
  • Uses 1 bit to store each 1-bit weight, i.e. eight weights per byte (see the packing sketch below)
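One way this could look (a hypothetical sketch; the Todo below points at a uint8 representation and a custom CUDA kernel instead): pack eight -1/+1 values into one uint8 and unpack before the matmul.

import torch

def pack_signs(w):  # w: float tensor of -1/+1 values, numel divisible by 8
    bits = (w.flatten() > 0).to(torch.uint8).reshape(-1, 8)
    powers = 2 ** torch.arange(8, dtype=torch.uint8)
    return (bits * powers).sum(dim=1, dtype=torch.uint8)

def unpack_signs(packed, shape):
    powers = 2 ** torch.arange(8, dtype=torch.uint8)
    bits = packed.unsqueeze(1).bitwise_and(powers).ne(0)
    return bits.flatten().to(torch.bfloat16).mul_(2).sub_(1).reshape(shape)

# Round-trip check on a random -1/+1 matrix:
w = torch.where(torch.randn(64, 64) > 0, 1.0, -1.0)
assert torch.equal(unpack_signs(pack_signs(w), w.shape).to(w.dtype), w)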

Todo

  • Add BitLinear layer
  • Add LLamaForCausalLM model with BitLinear layer
    • Update .save_pretrained method (for 1-bit weight saving)
  • Add sample code for LM training
  • Update BitLinear layer to use 1-bit weight
    • Use uint8 instead of bfloat16
    • Use custom cuda kernel for 1-bit weight
