
⚡ LitGPT


 

⚡ LitGPT is a hackable implementation of state-of-the-art open-source large language models released under the Apache 2.0 license.

 

LitGPT supports

✅  The latest model weights: Gemma, Mistral, Mixtral, Phi 2, Llama 2, Falcon, CodeLlama, and many more.

✅  Optimized and efficient code: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, optional CPU offloading, and TPU and XLA support.

✅  Pretraining, finetuning, and inference in various precision settings: FP32, FP16, BF16, and FP16/FP32 mixed.

✅  Configuration files for great out-of-the-box performance.

✅  Efficient finetuning: LoRA, QLoRA, Adapter, and Adapter v2.

✅  Quantization: 4-bit floats, 8-bit integers, and double quantization.

✅  Exporting to other popular model weight formats.

✅  Many popular datasets for pretraining and finetuning, and support for custom datasets.

✅  Readable and easy-to-modify code to experiment with the latest research ideas.

 
 

Project templates

The following Lightning Studio templates provide LitGPT tutorials and projects in reproducible environments with multi-GPU and multi-node support:

Prepare the TinyLlama 1T token dataset

Pretrain LLMs - TinyLlama 1.1B

Continued Pretraining with TinyLlama 1.1B

Instruction finetuning - TinyLlama 1.1B LLM

 
 

Installing LitGPT

You can install LitGPT with all dependencies (including CLI, quantization, tokenizers for all models, etc.) using the following pip command:

pip install 'litgpt[all]'

Alternatively, you can install LitGPT from a cloned GitHub repository:

git clone https://github.com/Lightning-AI/litgpt
cd litgpt
pip install -e '.[all]'
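
If the installation succeeded, the litgpt command is available on your path. A quick sanity check (assuming the CLI exposes the standard --help flag) is to list the available subcommands:

# Print the CLI usage and the available subcommands (download, chat, finetune, ...)
litgpt --help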

 

Using LitGPT

Below is a minimal example to get started with the LitGPT command line interface (CLI), illustrating how to download and use a model:

# 1) Download a pretrained model
litgpt download --repo_id mistralai/Mistral-7B-Instruct-v0.2

# 2) Chat with the model
litgpt chat \
  --checkpoint_dir checkpoints/mistralai/Mistral-7B-Instruct-v0.2

>> Prompt: What do Llamas eat?

For more information, refer to the download and inference tutorials.
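
Beyond the interactive chat, you can also run one-off, non-interactive generation. The sketch below assumes the litgpt generate base subcommand and its --prompt option; check the CLI help output if your version differs:

# Generate a completion for a single prompt (subcommand and flag names assumed)
litgpt generate base \
  --checkpoint_dir checkpoints/mistralai/Mistral-7B-Instruct-v0.2 \
  --prompt "What do Llamas eat?"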

 

Note

If you are new to LitGPT, we recommend starting with the Zero to LitGPT: Getting Started with Pretraining, Finetuning, and Using LLMs tutorial.

 

Finetuning and pretraining

LitGPT supports pretraining and finetuning to optimize models on existing or custom datasets. Below is an example showing how to finetune a model with LoRA:

# 1) Download a pretrained model
litgpt download --repo_id microsoft/phi-2

# 2) Finetune the model
litgpt finetune lora \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --data Alpaca2k \
  --out_dir out/phi-2-lora

# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/phi-2-lora/final

 

Configuration files for enhanced performance

Instead of specifying every setting on the command line, LitGPT also accepts configuration files in YAML format and comes with a set of model-specific defaults for good out-of-the-box performance:

litgpt finetune lora \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml

For added convenience, you can also manually override config file settings via the CLI:

litgpt finetune lora \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
  --lora_r 4

You can browse the available configuration files here.

 

Tip

Run large models on smaller consumer devices: We support 4-bit quantization (as in QLoRA) via bnb.nf4, bnb.nf4-dq, bnb.fp4, and bnb.fp4-dq, as well as 8-bit quantization (bnb.int8) for inference by following this guide.
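
For example, QLoRA-style finetuning combines the LoRA command shown earlier with quantization. This is a sketch, assuming the finetune lora subcommand accepts the quantize and precision options that also appear in the configuration file shown below:

# QLoRA-style finetuning: LoRA plus 4-bit NormalFloat quantization of the base weights
litgpt finetune lora \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --data Alpaca2k \
  --quantize bnb.nf4 \
  --precision bf16-true \
  --out_dir out/phi-2-qlora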

 
 

Customization

LitGPT supports rich and customizable config files to tailor the LLM training to your dataset and hardware needs. Shown below is a configuration file for LoRA finetuning:

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:

  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  #   (type: float, default: 0.0003)
  learning_rate: 0.0002

  #   (type: float, default: 0.02)
  weight_decay: 0.0

  #   (type: float, default: 0.9)
  beta1: 0.9

  #   (type: float, default: 0.95)
  beta2: 0.95

  #   (type: Optional[float], default: null)
  max_norm:

  #   (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:

  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337
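
To use a customized configuration like the one above, save it locally (the filename my-lora-config.yaml below is just a placeholder) and pass it via the same --config flag shown earlier; individual settings can still be overridden on the command line:

# Finetune with a local config file, overriding the output directory via the CLI
litgpt finetune lora \
  --config my-lora-config.yaml \
  --out_dir out/my-lora-run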

 

LitGPT design principles

This repository follows the main principle of openness through clarity.

LitGPT is:

  • Simple: Single-file implementation without boilerplate.
  • Correct: Numerically equivalent to the original model.
  • Optimized: Runs fast on consumer hardware or at scale.
  • Open-source: No strings attached.

Avoiding code duplication is not a goal. Readability and hackability are.

 

Get involved!

We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker.

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

 

Tip

Unsure about contributing? Check out our How to Contribute to LitGPT guide.

If you have general questions about building with LitGPT, please join our Discord.

 

Tutorials, how-to guides, and docs

Note

If you are new to LitGPT, we recommend starting with the Zero to LitGPT: Getting Started with Pretraining, Finetuning, and Using LLMs tutorial.

Tutorials and in-depth feature documentation can be found below:

 

XLA

Lightning AI has partnered with Google to add first-class support for Cloud TPUs in Lightning’s frameworks and LitGPT, helping democratize AI for millions of developers and researchers worldwide.

Using TPUs with Lightning is as straightforward as changing one line of code.

We provide scripts fully optimized for TPUs in the XLA directory.

 

Acknowledgements

This implementation builds on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric.

 

Community showcase

Check out the projects below that use and build on LitGPT. If you have a project you'd like to add to this section, please don't hesitate to open a pull request.

 

🏆 NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

The LitGPT repository was the official starter kit for the NeurIPS 2023 LLM Efficiency Challenge, a competition focused on finetuning an existing non-instruction-tuned LLM for 24 hours on a single GPU.

 

🦙 TinyLlama: An Open-Source Small Language Model

LitGPT powered the TinyLlama project and TinyLlama: An Open-Source Small Language Model research paper.

 

Citation

If you use LitGPT in your research, please cite the following work:

@misc{litgpt-2023,
  author       = {Lightning AI},
  title        = {LitGPT},
  howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
  year         = {2023},
}

 

License

LitGPT is released under the Apache 2.0 license.
