openmoss / language-model-saes

For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research.

Python 4.24% Jupyter Notebook 95.29% JavaScript 0.02% HTML 0.01% TypeScript 0.42% CSS 0.01% Dockerfile 0.01% Makefile 0.01% Batchfile 0.01%
interpretability mechanistic-interpretability sparse-autoencoders sparse-dictionary

language-model-saes's Issues

[Proposal] Accelerate Inference in TransformerLens

The main bottleneck of SAE training lies in activation generation. It becomes painful when we try to work with larger models.

Let's try to accelerate TransformerLens inference, especially the attention forward pass. What are some possible options? FlashAttention-2, vLLM, or something else?

Since we usually do not cache Q, K, and V, the attention forward pass can be replaced with faster alternatives.

  • Support FlashAttn-2 in TL
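As a minimal sketch of the idea (not TransformerLens's actual code path), PyTorch's `scaled_dot_product_attention` already dispatches to FlashAttention-style fused kernels where available, and matches a naive implementation numerically:

```python
import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # Reference attention: materializes the full (seq x seq) score matrix
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

# (batch, heads, seq, head_dim); toy sizes for illustration
q, k, v = (torch.randn(2, 4, 16, 8) for _ in range(3))

# Fused path: uses FlashAttention / memory-efficient kernels when available
fast = F.scaled_dot_product_attention(q, k, v)
assert torch.allclose(fast, naive_attention(q, k, v), atol=1e-5)
```

A real swap inside TransformerLens would also need to handle the causal mask (`is_causal=True`) and hook compatibility, which is where the actual work lies.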

[Proposal] Server app & frontend need optimization

I have noticed two problems in the current frontend and backend services.

  • [urgent] byte_decoder does not work with the Llama-3 8B tokenizer. What is a workaround?

AttributeError: 'PreTrainedTokenizerFast' object has no attribute 'byte_decoder'.

  • [low priority] Larger models have longer context windows. Will this make visualization messier?

Simple truncation might not work: it may fail to capture longer-range dependencies.

I propose making some changes to the frontend. What about a preview (shorter local context) that can be expanded to the full context? @dest1n1s
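On the `byte_decoder` error: fast tokenizers (`PreTrainedTokenizerFast`) simply do not expose that attribute. One known workaround is to rebuild the GPT-2-style byte-to-unicode table that byte-level BPE tokenizers (including Llama-3's) use, and invert it:

```python
def bytes_to_unicode():
    # GPT-2's byte <-> printable-unicode mapping, reproduced from the
    # original implementation; byte-level BPE tokenizers rely on it.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\xa1"), ord("\xac") + 1))
          + list(range(ord("\xae"), ord("\xff") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

# Inverse map: token-string characters back to raw byte values,
# a drop-in replacement for the missing tokenizer.byte_decoder.
byte_decoder = {c: b for b, c in bytes_to_unicode().items()}
```

For example, the token string `"Ġhi"` then decodes to the raw bytes `b" hi"` via `bytes(byte_decoder[c] for c in token)`.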

[Proposal] Optimize dataset loading and activation store

The current activation store implementation has some drawbacks. We may need to add new features for a streaming activation store and make some optimizations. Details below.

  1. Text Dataset Collate Config
    We need to support SAE training on both pretraining and SFT data, unlike Anthropic's Scaling Monosemanticity, in which only pretraining data is used to train SAEs on a supervised finetuned model.

IMO pretraining data should be packed, and SFT data should be sorted by length and batched with post-padding. Activations in the residual stream at padding positions should be ignored in SAE training. I believe this fits the real-world distribution better.

We need to add a configuration option for this.
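The packing/padding split could be sketched as two collate helpers (hypothetical names, pure-Python illustration; real code would work on token tensors):

```python
def pack_sequences(token_seqs, ctx_len):
    # Pretraining-style collation: concatenate all sequences, then split
    # into fixed-length contexts (no padding; document boundaries may be cut).
    flat = [t for seq in token_seqs for t in seq]
    return [flat[i:i + ctx_len] for i in range(0, len(flat) - ctx_len + 1, ctx_len)]

def pad_batch(token_seqs, pad_id=0):
    # SFT-style collation: sort by length and right-pad to the batch max.
    # Positions where mask == 0 should be excluded from SAE training.
    seqs = sorted(token_seqs, key=len)
    max_len = len(seqs[-1])
    batch = [s + [pad_id] * (max_len - len(s)) for s in seqs]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in seqs]
    return batch, mask
```

The returned mask is what lets the activation store drop padding positions before they ever reach the SAE.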

  • Support two types of activation generation
  2. Shuffle
    When training SAEs on data from multiple distributions, shuffling should be an option, to add diversity of information within a batch. This can be implemented by filling the activation buffer from random sources.
  • Support buffer filling from multiple sources
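Buffer filling from multiple sources might look like the following sketch (hypothetical `fill_buffer`; real code would draw activation tensors from model runs, not integers from ranges):

```python
import random

def fill_buffer(sources, buffer_size, seed=0):
    # Fill the activation buffer by drawing each entry from a randomly
    # chosen source iterator, then shuffle so batches mix distributions.
    rng = random.Random(seed)
    buffer = []
    while len(buffer) < buffer_size:
        buffer.append(next(rng.choice(sources)))
    rng.shuffle(buffer)
    return buffer

# Two fake activation streams standing in for different datasets
sources = [iter(range(0, 100)), iter(range(1000, 1100))]
buf = fill_buffer(sources, 16)
```

Sampling sources proportionally to dataset size (rather than uniformly) would be a natural refinement.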

[Proposal] Documentation coverage and static documentation site

It is much easier for people (who may be new to mechanistic interpretability) to get started with detailed tutorials and documentation. Currently this project lacks documentation and comments in many modules. We should raise the documentation coverage to ensure a detailed explanation of every part of our library.

Furthermore, we should consider building a static documentation site with a tool like MkDocs. This helps people get an overview of the library's usage without actually downloading it.

[Proposal] Add Automatic (Unit) Testing and CI Workflows

Automatic testing is fundamental to keeping a collaboratively developed project safe from endless bugs corrupting modules that originally worked. For a deep learning library, always running the whole training or analysis process end to end consumes a lot of time and computational resources, and minor bugs may not even be triggered in a fixed training setting. Thus, it's necessary to test at different levels to ensure proper functioning as much as possible.

I propose adding the following 4 categories of testing:

  • Unit testing: test whether every innermost method works with mock data, e.g. a single forward pass in a minimal SAE, or a single generation of activations. Unit tests should cover almost all parts of the library, so every single test is required to run fast.
  • Integration testing: test whether low-level modules work with one another properly, e.g. getting feature activations directly from text input (which requires the transformer and SAEs to work together), a single training pass, or loading pretrained SAEs from HuggingFace. These tests should cover the common usage of the library at a rather high level, run in acceptable time (maybe no more than several seconds), and avoid depending on GPUs where possible.
  • Acceptance testing: test whether modules perform well (loss, memory allocated, time cost), e.g. whether a pretrained SAE gives a reasonable loss. Some of these tests may require GPUs to run, and failures may be acceptable in some situations.
  • Benchmarks: measure the time usage of a complete process and of bottleneck modules.
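As a sketch of the unit-test level, a minimal SAE forward pass can be checked against mock data in a pytest-style test (NumPy stand-in with hypothetical names, not the library's actual API):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec):
    # Encoder: f = ReLU(x @ W_enc + b_enc); decoder: x_hat = f @ W_dec + b_dec
    f = np.maximum(x @ W_enc + b_enc, 0.0)
    return f @ W_dec + b_dec, f

def test_sae_forward_shapes():
    rng = np.random.default_rng(0)
    batch, d_model, d_sae = 4, 8, 32
    x = rng.normal(size=(batch, d_model))
    W_enc = rng.normal(size=(d_model, d_sae))
    W_dec = rng.normal(size=(d_sae, d_model))
    x_hat, f = sae_forward(x, W_enc, np.zeros(d_sae), W_dec, np.zeros(d_model))
    assert x_hat.shape == x.shape      # reconstruction matches input shape
    assert f.shape == (batch, d_sae)   # one activation per dictionary feature
    assert (f >= 0).all()              # ReLU features are nonnegative
```

Tests of this shape need no GPU, run in milliseconds, and can be collected by pytest in CI.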

Continuous Integration (CI) with GitHub workflows should also be added to run the tests on every push/PR. PRs should not be merged unless all tests pass.

[Proposal] Support from_pretrained

We aim to build genuinely useful infrastructure for SAE research.

Maybe we should open-source our SAEs behind a HuggingFace-style interface. This may require some sort of cloud service for storage, or something like that? @dest1n1s

This has lower priority, since the feature is only useful once we have trained reasonably good SAEs on larger language models.
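A HuggingFace-style interface could look like this local-path sketch (hypothetical class and method names; a hub-backed version would resolve repo ids with `huggingface_hub.snapshot_download` before loading, and would also save/load weight tensors, not just the config):

```python
import json
import os
import tempfile

class SparseAutoencoder:
    """Minimal stand-in for the real SAE class (config only)."""

    def __init__(self, cfg):
        self.cfg = cfg

    def save_pretrained(self, path):
        # Write the config alongside (in real code) the weight files
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "config.json"), "w") as f:
            json.dump(self.cfg, f)

    @classmethod
    def from_pretrained(cls, path):
        # For hub support, `path` could first be resolved from a repo id
        # to a local snapshot directory via huggingface_hub.
        with open(os.path.join(path, "config.json")) as f:
            return cls(json.load(f))

# Round-trip: save, then load back
with tempfile.TemporaryDirectory() as tmp:
    SparseAutoencoder({"d_model": 768}).save_pretrained(tmp)
    cfg = SparseAutoencoder.from_pretrained(tmp).cfg
```

Matching the HuggingFace naming convention (`from_pretrained`/`save_pretrained`) keeps the interface familiar to downstream users.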

[Proposal] Publish on PyPI

We can publish this library on PyPI so that people can use this package simply via pip install lm-saes! But before that, we should first get the library well tested and well documented:

Besides, even after publishing, people may still need to clone this repository to use the visualization tools. Perhaps we can publish Docker images of the visualization backend and frontend.

[Proposal] Support early stop & partial loading model weights

  1. Early stop @dest1n1s

TransformerLens already supports stopping at a given layer. We can use this feature to skip unused computation. It can be applied in any case that uses run_with_cache.

  2. Partial loading of model weights @Frankstein73

In the same spirit as early stop: what if we do not load the unused weights at all? This may save GPU memory when training SAEs on early and middle layers.

  • Early stop
  • Partial loading model weights
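TransformerLens's `stop_at_layer` semantics (run layers [0, stop_at_layer) and return early) can be illustrated with a toy layer stack:

```python
def run_with_early_stop(layers, x, stop_at_layer=None):
    # Run layers in order; with stop_at_layer=n, only layers 0..n-1 execute,
    # mirroring TransformerLens's stop_at_layer argument.
    cache = []
    for i, layer in enumerate(layers):
        if stop_at_layer is not None and i >= stop_at_layer:
            break
        x = layer(x)
        cache.append(x)
    return x, cache

layers = [lambda x: x + 1 for _ in range(12)]  # toy 12-layer "model"
out, cache = run_with_early_stop(layers, 0, stop_at_layer=3)
```

When training an SAE on layer 3's residual stream, the remaining 9 layers above contribute nothing, so skipping them (and, per the second proposal, never loading their weights) is pure savings.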

[Proposal] Support DDP for activation generation and SAE training.

A natural approach to faster SAE training is data parallelism. Maybe we can simply use DDP to make 8 copies of the TransformerLens model to yield activations and synchronize SAE gradients. This may help accelerate activation generation, which is the speed bottleneck for larger LMs.

This may not work for larger models, say 70B. Maybe the ultimate solution is a producer-consumer design pattern. Let's leave that for later.

  • Support DDP
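The producer-consumer design mentioned above can be sketched with a thread-safe queue (pure-Python illustration; a real implementation would put activation tensors on the queue, run producers on separate GPUs, and use torch.distributed for gradient synchronization):

```python
import queue
import threading

def producer(q, n_batches):
    # Stand-in for a model worker generating activation batches
    for i in range(n_batches):
        q.put([float(i)] * 4)  # fake activation batch
    q.put(None)  # sentinel: no more data

def consumer(q, out):
    # Stand-in for the SAE trainer consuming activations
    while True:
        batch = q.get()
        if batch is None:
            break
        out.append(sum(batch))

q = queue.Queue(maxsize=8)  # bounded queue applies backpressure to the producer
results = []
t1 = threading.Thread(target=producer, args=(q, 5))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
```

Decoupling generation from training this way lets the activation producers scale independently of the SAE trainer, which DDP alone cannot do.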
