
Loong

 Loong: Benchmarking Long-Context LLMs with Extended Multi-Doc QA

This repository contains the code for our paper Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA. We propose a novel long-context benchmark, 🐉 Loong, which aligns with realistic scenarios through extended multi-document question answering (QA). Each Loong test instance contains 11 documents on average, spanning three real-world scenarios in English and Chinese: (1) Financial Reports, (2) Legal Cases, and (3) Academic Papers. Loong introduces new evaluation tasks from the perspectives of Spotlight Locating, Comparison, Clustering, and Chain of Reasoning, to facilitate a more realistic and comprehensive evaluation of long-context understanding. Furthermore, Loong features inputs of varying lengths (e.g., 10K-50K, 50K-100K, 100K-200K, beyond 200K tokens) and evaluation tasks of diverse difficulty, enabling fine-grained assessment of LLMs across different context lengths and task complexities.
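
For illustration only, a single test instance pairs a multi-document context with a task-specific question. The sketch below is hypothetical; the field names are illustrative and do not reflect the actual Loong data schema (see step1_load_data.py for the real format):

# Hypothetical shape of a Loong-style test instance (illustrative field names only).
instance = {
    "task": "comparison",          # spotlight locating / comparison / clustering / chain of reasoning
    "length_set": "50K-100K",      # one of the four input-length buckets
    "documents": ["<d1>...</d1>", "<d2>...</d2>"],  # ~11 documents per instance on average
    "question": "Which company reported the higher net profit in 2023?",
    "golden_answer": "...",        # reference answer used by the GPT-4 judge
}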

Please find more details of this work in our paper.

Overview of Loong

Showcase of the four evaluation tasks in Loong (<di>...</di> marks the content of the i-th document). (a) Spotlight Locating: locate the evidence. (b) Comparison: locate and compare the evidence. (c) Clustering: locate and cluster the evidence into groups. (d) Chain of Reasoning: locate and reason along a logical chain.

📰News

[2024-07-03] 🔥The code and benchmark have been released. If you encounter any issues, please feel free to contact us.

[2024-06-25] 👨‍💻The code is currently being refined, and we plan to release the evaluation code and benchmark within the next one or two weeks. If you encounter any issues, please feel free to contact me at [email protected].

🏆Leaderboard

| Models | Claimed Length | Spotlight Locating | Comparison | Clustering | Chain of Reasoning | Overall |
|---|---|---|---|---|---|---|
| Gemini-1.5-pro | 1000K | 75.02 / 0.56 | 49.94 / 0.27 | 44.10 / 0.09 | 64.97 / 0.37 | 55.37 / 0.27 |
| GPT-4o | 128K | 73.95 / 0.62 | 50.50 / 0.28 | 44.29 / 0.09 | 57.95 / 0.28 | 53.47 / 0.26 |
| Claude3.5-Sonnet | 200K | 58.45 / 0.49 | 54.21 / 0.35 | 45.77 / 0.07 | 43.92 / 0.25 | 48.85 / 0.23 |
| Claude3-Haiku | 200K | 68.68 / 0.59 | 42.10 / 0.21 | 35.04 / 0.02 | 47.59 / 0.17 | 44.88 / 0.19 |
| Qwen2-72B-Instruct (72B) | 128K | 54.17 / 0.36 | 42.38 / 0.20 | 36.71 / 0.04 | 47.76 / 0.18 | 43.29 / 0.15 |
| GLM4-Chat (9B) | 1000K | 57.35 / 0.47 | 40.38 / 0.20 | 28.52 / 0.02 | 39.94 / 0.16 | 38.31 / 0.16 |
| Kimi-Chat | 200K | 60.98 / 0.50 | 34.74 / 0.13 | 28.76 / 0.04 | 38.52 / 0.15 | 37.49 / 0.16 |

Overall results on the four evaluation tasks. Each cell shows the Avg Score (0-100) on the left and the Perfect Rate (0-1) on the right.

| Model | Claimed Length | Spotlight Locating | Comparison | Clustering | Chain of Reasoning | Overall |
|---|---|---|---|---|---|---|
| Set1 (10K-50K) | | | | | | |
| GPT-4o | 128K | 85.67 / 0.81 | 64.27 / 0.33 | 57.01 / 0.24 | 81.58 / 0.55 | 70.40 / 0.44 |
| Claude3.5-Sonnet | 200K | 60.85 / 0.55 | 69.07 / 0.47 | 58.63 / 0.13 | 68.57 / 0.50 | 63.69 / 0.37 |
| Gemini-1.5-pro | 1000K | 75.00 / 0.60 | 54.88 / 0.28 | 56.15 / 0.23 | 70.64 / 0.37 | 63.36 / 0.34 |
| Qwen2-72B-Instruct | 200K | 68.49 / 0.55 | 60.60 / 0.37 | 47.08 / 0.08 | 70.39 / 0.36 | 60.11 / 0.29 |
| Claude3-Haiku | 200K | 60.94 / 0.55 | 59.97 / 0.40 | 45.53 / 0.04 | 66.85 / 0.34 | 57.14 / 0.28 |
| Kimi-Chat | 200K | 81.11 / 0.74 | 46.70 / 0.20 | 47.84 / 0.07 | 53.77 / 0.17 | 55.02 / 0.24 |
| GLM4-9B-Chat | 1000K | 63.11 / 0.53 | 54.10 / 0.27 | 39.50 / 0.08 | 56.32 / 0.28 | 51.43 / 0.25 |
| Set2 (50K-100K) | | | | | | |
| GPT-4o | 128K | 86.76 / 0.72 | 59.81 / 0.40 | 47.83 / 0.11 | 62.09 / 0.34 | 58.38 / 0.29 |
| Gemini-1.5-pro | 1000K | 76.50 / 0.57 | 54.51 / 0.34 | 44.58 / 0.09 | 64.87 / 0.34 | 55.56 / 0.26 |
| Claude3.5-Sonnet | 200K | 63.83 / 0.53 | 58.90 / 0.39 | 50.96 / 0.10 | 46.09 / 0.26 | 52.73 / 0.24 |
| Qwen2-72B-Instruct | 128K | 64.53 / 0.43 | 42.60 / 0.21 | 38.52 / 0.05 | 51.18 / 0.20 | 45.71 / 0.17 |
| Claude3-Haiku | 200K | 73.71 / 0.66 | 41.90 / 0.22 | 36.18 / 0.02 | 50.20 / 0.15 | 45.45 / 0.17 |
| Kimi-Chat | 200K | 72.82 / 0.52 | 46.77 / 0.21 | 33.46 / 0.06 | 40.51 / 0.15 | 42.40 / 0.16 |
| GLM4-9B-Chat | 1000K | 65.04 / 0.54 | 41.80 / 0.23 | 30.72 / 0.02 | 42.34 / 0.17 | 40.19 / 0.17 |
| Set3 (100K-200K) | | | | | | |
| Gemini-1.5-pro | 1000K | 81.25 / 0.56 | 44.66 / 0.20 | 39.90 / 0.05 | 58.38 / 0.36 | 52.05 / 0.24 |
| GPT-4o | 128K | 74.84 / 0.65 | 42.40 / 0.21 | 38.70 / 0.04 | 45.06 / 0.09 | 46.95 / 0.19 |
| Claude3.5-Sonnet | 200K | 65.36 / 0.56 | 50.32 / 0.34 | 37.79 / 0.03 | 25.95 / 0.11 | 42.06 / 0.19 |
| Claude3-Haiku | 200K | 77.81 / 0.67 | 37.07 / 0.17 | 30.94 / 0.01 | 36.87 / 0.12 | 41.41 / 0.18 |
| Qwen2-72B-Instruct | 128K | 46.99 / 0.27 | 37.06 / 0.13 | 31.50 / 0.02 | 35.01 / 0.07 | 35.94 / 0.09 |
| GLM4-9B-Chat | 1000K | 69.19 / 0.56 | 37.99 / 0.18 | 26.63 / 0.01 | 32.30 / 0.09 | 37.36 / 0.16 |
| Kimi-Chat | 200K | 62.13 / 0.54 | 24.20 / 0.05 | 21.98 / 0.01 | 31.02 / 0.14 | 31.37 / 0.14 |
| Set4 (200K-250K) | | | | | | |
| Gemini-1.5-pro | 1000K | 62.23 / 0.49 | 43.08 / 0.20 | 36.48 / 0.00 | 68.51 / 0.49 | 50.70 / 0.25 |
| Claude3-Haiku | 200K | 53.26 / 0.40 | 27.00 / 0.03 | 25.36 / 0.00 | 28.11 / 0.05 | 32.15 / 0.10 |
| GPT-4o | 128K | 36.79 / 0.19 | 23.97 / 0.08 | 30.40 / 0.00 | 32.89 / 0.07 | 31.11 / 0.07 |
| Claude3.5-Sonnet | 200K | 36.91 / 0.24 | 28.82 / 0.05 | 28.68 / 0.00 | 28.77 / 0.08 | 30.51 / 0.08 |
| Qwen2-72B-Instruct | 128K | 33.18 / 0.16 | 26.59 / 0.08 | 29.84 / 0.01 | 25.81 / 0.04 | 28.92 / 0.06 |
| GLM4-9B-Chat | 1000K | 15.67 / 0.12 | 21.33 / 0.05 | 12.35 / 0.00 | 21.04 / 0.05 | 16.84 / 0.05 |
| Kimi-Chat | 200K | 20.17 / 0.12 | 9.17 / 0.00 | 5.65 / 0.00 | 22.61 / 0.11 | 13.50 / 0.05 |

Performance of LLMs on the four evaluation tasks across different length sets. Each cell shows the Avg Score (0-100) on the left and the Perfect Rate (0-1) on the right.

  • Following previous work, we prompt GPT-4 as a judge to evaluate the model's output against the golden answer and the question's requirements from three aspects: Accuracy, Hallucinations, and Completeness, scoring from 0 to 100. For the detailed prompt, please refer to our paper.
  • We design two indicators: (1) Avg Score: the average of the scores given by GPT-4 over all questions; (2) Perfect Rate: the proportion of cases scoring 100 out of all cases. The latter is a more stringent metric than the former (a small computation sketch follows this list).
  • We set temperature = 0 to eliminate randomness and keep the other hyper-parameters at their defaults. For API-based LLMs, we directly use the official APIs for testing. Since Kimi-Chat-200K currently does not provide an API, we manually input content through the web interface. For open-source models, we run experiments on a server with 8 × A100 80GB GPUs.
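
As a minimal sketch of how the two indicators are computed (illustrative only, not the actual code in step4_cal_metric.py; `scores` is assumed to be the list of per-question judge scores on a 0-100 scale):

# Illustrative only: Avg Score and Perfect Rate from a list of judge scores.
def avg_score(scores):
    # mean of the 0-100 scores given by the GPT-4 judge
    return sum(scores) / len(scores)

def perfect_rate(scores):
    # fraction of questions that received a perfect score of 100
    return sum(1 for s in scores if s == 100) / len(scores)

scores = [100, 85, 100, 40]
print(avg_score(scores))     # 81.25
print(perfect_rate(scores))  # 0.5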

🔧Evaluate long-context LLMs

Step1 Download the Loong benchmark

git clone https://github.com/MozerWang/Loong.git
cd Loong

Step2 Create a conda environment and install dependencies

conda create --name loong python=3.9 -y
conda activate loong
pip install -r requirements.txt

Step3 Prepare the model

  1. (Required) Set your OpenAI API key in config/models/gpt4.yaml (GPT-4 is used as the evaluation judge)
api_key: "Your OPENAI key"
  2. If you are using an API-based LLM
# First, set your API key in config/models/*.yaml
api_key: "Your API key"
  3. If you are using an open-source LLM
# We recommend serving the model with vLLM, using an HTTP server that implements OpenAI's Completions and Chat APIs.
# We provide usage examples for Qwen2 and GLM4. See details in Loong/src/vllm_example.sh
cd src
sh vllm_example.sh
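
Once the vLLM server is running, requests go through its OpenAI-compatible endpoint. A rough client-side sketch is shown below; the base URL, port, and model name are assumptions for illustration and are not taken from vllm_example.sh:

# Illustrative sketch of querying a vLLM OpenAI-compatible server.
# base_url, port, and model name are assumptions, not values from vllm_example.sh.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen2-72B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0,
)
print(response.choices[0].message.content)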

Step4 Evaluate

cd src
sh run.sh

Things To Know

  • We provide a complete evaluation process:
    step1_load_data.py: data loading
    step2_model_generate.py: model generation
    step3_model_evaluate.py: GPT-4 evaluation
    step4_cal_metric.py: result statistics

  • For step2_model_generate.py, you can design the model generation part yourself, modifying it to use your own model's inference method. Just make sure the input and output interfaces in src/utils/generate.py remain consistent:

# Input: generate() receives the prompts, the model config, the output path,
# the number of worker processes, and a tag naming the field that stores the model's response.
generate(prompts, config, output_path, process_num, tag)

# Output: for each prompt, append one JSON line containing the original fields
# plus the model's response stored under `tag`.
import json

for prompt in prompts:
    result = prompt.copy()
    result[tag] = response_content  # your LLM's response for this prompt
    with open(output_path, 'a', encoding='utf-8') as fw:
        fw.write(json.dumps(result, ensure_ascii=False) + '\n')
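
If you plug in your own inference backend, a minimal single-process sketch of a replacement generate() that keeps this contract could look like the following. The call_my_model helper and the "prompt" key are hypothetical placeholders, not names from the repository:

import json

def call_my_model(prompt_text):
    # Hypothetical stand-in for your own inference call
    # (e.g. a local model or an OpenAI-compatible client request).
    raise NotImplementedError

def generate(prompts, config, output_path, process_num, tag):
    # Minimal sketch: ignores process_num and runs sequentially.
    for prompt in prompts:
        response_content = call_my_model(prompt["prompt"])  # "prompt" key is an assumption
        result = prompt.copy()
        result[tag] = response_content
        with open(output_path, 'a', encoding='utf-8') as fw:
            fw.write(json.dumps(result, ensure_ascii=False) + '\n')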

Citation

@article{wang2024loong,
  title={Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA},
  author={Minzheng Wang and Longze Chen and Cheng Fu and Shengyi Liao and Xinghua Zhang and Bingli Wu and Haiyang Yu and Nan Xu and Lei Zhang and Run Luo and Yunshui Li and Min Yang and Fei Huang and Yongbin Li},
  journal={arXiv preprint arXiv:2406.17419},
  year={2024}
}
