
Analysis360: Analyze LLMs in 360 degrees




Badges: license · HF [Amber] · W&B dashboard [Amber] · HF [CrystalCoder] · W&B dashboard [CrystalCoder] · Publication

Welcome to Analysis360!

This repo contains all of the code we used for model evaluation and analysis. It serves as the single source of truth for all evaluation metrics and provides in-depth analysis from many different angles. Feel free to follow the links above for a quick look at the LLM360 project and its experiment data.

Our Approach

We run evaluations on a variety of benchmarks: conventional benchmarks such as MMLU, HellaSwag, and ARC; user-preference-aligned benchmarks such as MT-Bench; long-context evaluations such as LongEval; and additional safety studies covering truthfulness, toxicity, and bias. Moreover, we report results on a preselected set of model checkpoints that were all trained on the same data, seen in exactly the same order, to better observe and understand how our models develop and evolve over the course of training. We also provide public access to all checkpoints, all code, and all W&B dashboards with detailed training and evaluation curves.
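As a quick illustration of working with those public checkpoints, the sketch below loads one intermediate Amber checkpoint from the Hugging Face Hub. The `ckpt_100` revision tag is an assumption based on the `ckpt_*` naming pattern on the model card; check the model page for the exact tags.

```python
# A minimal sketch: load one intermediate Amber checkpoint for analysis.
# The revision tag below is a hypothetical example of the ckpt_* naming;
# verify the actual tags on the LLM360/Amber model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

revision = "ckpt_100"  # hypothetical intermediate-checkpoint tag
tokenizer = AutoTokenizer.from_pretrained("LLM360/Amber", revision=revision)
model = AutoModelForCausalLM.from_pretrained("LLM360/Amber", revision=revision)
```

Running the same evaluation across several revisions traces how a capability develops over training.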

W&B Dashboards

Every model has one W&B project/dashboard; each project contains multiple runs, and all projects share the same base structure. For example, the Amber project has the runs train, downstream_eval, and perplexity_eval. The train run collects data from the training process, such as loss and learning rate, while the other runs collect evaluation data. Additionally, we added a resources section to the Amber project that records resource-related information for anyone who is interested. To quickly find the metric you are looking for, use the search bar at the top and/or the filter at the top right.
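If you would rather pull the logged metrics programmatically than browse the dashboard, the W&B public API exposes the same runs. The `llm360/amber` project path and the `loss` key below are assumptions inferred from the structure described above; substitute the entity/project from the actual dashboard URL.

```python
# A minimal sketch of reading logged metrics through the W&B public API.
# The project path, run name, and metric key are assumptions; take the
# real ones from the dashboard URL and the run's logged keys.
import wandb

api = wandb.Api()
runs = list(api.runs("llm360/amber"))       # all runs in the project
for run in runs:
    print(run.name, run.state)

train_run = next(r for r in runs if r.name == "train")
history = train_run.history(keys=["loss"])  # pandas DataFrame of logged loss
print(history.tail())
```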

List of Analysis and Metrics

Here is the full list of analyses/metrics we have collected so far. For each model released at this point (Amber and CrystalCoder), we link to the specific W&B report once the evaluation is done. Amber and CrystalCoder currently use their own evaluation scripts; we are working on consolidating them, and more details can be found in the later sections. Please refer to the model cards (Amber, CrystalCoder) for any terms or technology you find unfamiliar. We will keep updating and expanding the list as our study proceeds, so please stay tuned for upcoming changes!

| Metrics/Analysis | Description | Amber | CrystalCoder |
|---|---|---|---|
| mmlu | A test to measure a text model's multitask accuracy; covers 57 tasks including elementary mathematics, US history, computer science, law, and more | 5 shot | 0 shot, 5 shot |
| race | A test to measure reading comprehension ability | 0 shot | 0 shot |
| arc_challenge | A set of grade-school science questions | 25 shot | 0 shot, 25 shot |
| boolq | A question answering dataset for yes/no questions containing 15,942 examples | 0 shot | 0 shot |
| hellaswag | A test of commonsense inference | 10 shot | 0 shot, 10 shot |
| openbookqa | A question-answering dataset modeled after open book exams for assessing human understanding of a subject | 0 shot | 0 shot |
| piqa | A test to measure physical commonsense and reasoning | 0 shot | 0 shot |
| siqa | A test to measure commonsense reasoning about social interactions | 0 shot | |
| winogrande | An adversarial and difficult Winograd benchmark at scale, for commonsense reasoning | 0 shot | 0 shot, 5 shot |
| crowspairs | A challenge set for evaluating language models' tendency to generate biased outputs | 0 shot | |
| truthfulqa | A test to measure a model's propensity to reproduce falsehoods commonly found online | 0 shot | 0 shot |
| pile | A perplexity evaluation covering 18 of the 22 sub-datasets (see the sketch after this table) | perplexity | |
| drop | A reading comprehension benchmark requiring discrete reasoning over paragraphs | | 3 shot |
| mbpp | Around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers | | pass 1, pass 10 |
| humaneval | A test to measure functional correctness for synthesizing programs from docstrings | | pass 1, pass 10 |
| gsm8k | Diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems | | 5 shot |
| copa | A test to assess progress in open-domain commonsense causal reasoning | | 0 shot |
| toxigen | A test to measure a model's toxicity in text generation | toxigen | |
| toxicity identification | A test to measure a model's capability to identify toxic text | toxicity identification | |
| bold | A test to evaluate fairness in open-ended language generation in English | bold | |
| memorization and token orders analysis | An analysis to understand a model's memorization abilities | memorization | |
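For the perplexity entry above, the standard recipe with transformers is to score text with the model's own cross-entropy loss. The sketch below is a simplified, single-chunk illustration (no striding, batching, or per-sub-dataset looping), not the exact script we release; the model name is the only identifier taken from this repo.

```python
# A simplified perplexity sketch for one text chunk; a full evaluation
# would stride over long documents and loop over the Pile sub-datasets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LLM360/Amber")
model = AutoModelForCausalLM.from_pretrained("LLM360/Amber")
model.eval()

text = "An example passage from one of the evaluation sub-datasets."
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    # With labels == inputs, the model returns mean token cross-entropy;
    # perplexity is its exponential.
    loss = model(ids, labels=ids).loss
print("perplexity:", torch.exp(loss).item())
```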

How to reproduce our results

Most of our evaluations are built on top of lm-evaluation-harness's core lm_eval module. We reuse the metrics the harness already supports and add our own to cover more. Please follow the instructions here to get started. For any metric that is not included in the harness folder, you should find a dedicated folder for that metric at the root level of the repo and follow the instructions there. Note that we are still consolidating and uploading code, so please wait for future releases to fill in the missing gaps.
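As a sketch of what a harness-based evaluation call looks like: the Python API below is from recent lm-evaluation-harness releases (v0.4+), so the entry point in this repo's pinned version may differ (older versions expose the same function as lm_eval.evaluator.simple_evaluate, or a main.py CLI).

```python
# A minimal sketch of one evaluation from the table via the harness's
# Python API (lm-evaluation-harness v0.4+ style); the task name and
# few-shot setting mirror the hellaswag row above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                            # Hugging Face causal-LM backend
    model_args="pretrained=LLM360/Amber",  # model to evaluate
    tasks=["hellaswag"],
    num_fewshot=10,                        # the 10-shot setting used for Amber
)
print(results["results"]["hellaswag"])
```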

Contributors

yukiontheiceberg, willieneis, aurickq, mylibrar

