
📊 CharXiv

🏠 Home | 🤗 Data | 🥇 Leaderboard | 📄 Paper | Current Version: Beta

This repository contains the code to evaluate models on CharXiv from the paper CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs.

🤗 We are first-time evaluation-suite builders, and this is the first release of the codebase. We are committed to improving it. If you have any questions, feel free to raise issues and/or submit pull requests for new features or bug fixes.

🔊 An 80-second video teaser of CharXiv

CharXiv_MV_H265.mp4

👋 Introduction

Chart understanding plays a pivotal role when applying Multimodal Large Language Models (MLLMs) to real-world tasks such as analyzing scientific papers or financial reports. However, existing datasets often focus on oversimplified and homogeneous charts with template-based questions, leading to an over-optimistic measure of progress. In this work, we propose CharXiv, a comprehensive evaluation suite involving 2,323 natural, challenging, and diverse charts from scientific papers. CharXiv includes two types of questions: (1) descriptive questions about examining basic chart elements and (2) reasoning questions that require synthesizing information across complex visual elements in the chart. To ensure quality, all charts and questions are handpicked, curated, and verified by human experts. Our results reveal a substantial, previously underestimated gap between the reasoning skills of the strongest proprietary model (i.e., GPT-4o), which achieves 47.1% accuracy, and the strongest open-source model (i.e., InternVL Chat V1.5), which achieves 29.2%. All models lag far behind human performance of 80.5%, underscoring weaknesses in the chart understanding capabilities of existing MLLMs. We hope CharXiv facilitates future research on MLLM chart understanding by providing a more realistic and faithful measure of progress.

Comparison of model performance

🛠️ Evaluate Your Models

Setup

Download the images:

git clone https://github.com/princeton-nlp/CharXiv.git
cd CharXiv/images
wget https://huggingface.co/datasets/princeton-nlp/CharXiv/resolve/main/images.zip
unzip images.zip && rm images.zip
(Optional) A short tour of the codebase
.
├── data/
│   ├── descriptive_test.json
│   ├── descriptive_val.json
│   ├── image_metadata_test.json
│   ├── image_metadata_val.json
│   ├── reasoning_test.json
│   ├── reasoning_val.json
│   ├── README.md
│   └── LICENSE
├── images/
│   ├── 0.jpg
│   ├── ...
│   ├── 2399.jpg
│   └── README.md
├── results/
│   └── README.md
├── src/
│   ├── constants.py
│   ├── descriptive_utils.py
│   ├── reasoning_utils.py
│   ├── evaluate.py
│   ├── generate.py
│   └── get_score.py
├── run.sh
├── README.md
├── LICENSE
└── .gitignore
  • data folder contains all QAs and metadata for images, descriptive questions, and reasoning questions. Answers for the test split are intentionally set to null to prevent the test data from leaking to the public. (A minimal loading sketch follows this list.)
  • images folder contains all images, whose identifiers range from 0 to 2399. Note that there are only 2,333 images in total and the numbering is not consecutive.
  • results folder contains all response generation and scoring results.
  • src folder contains all Python code for CharXiv:
    • constants.py stores all the prompts and mappings from question IDs to actual questions.
    • descriptive_utils.py contains all code to build queries for response generation and grading, as well as to save all artifacts for descriptive questions.
    • reasoning_utils.py contains all code to build queries for response generation and grading, as well as to save all artifacts for reasoning questions.
    • evaluate.py is the main function to evaluate model responses against the answers with GPT API calls.
    • generate.py is the main function to loop over QAs for the model to generate responses.
    • get_score.py is the main function to print the reasoning and descriptive question scores.
  • run.sh is the script to evaluate models.
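
For reference, here is a minimal sketch for loading the validation QAs and metadata from the data folder and checking what they contain. It assumes only that the files are JSON and that you run it from the repository root; no particular field names are assumed:

import json
import os

DATA_DIR = "data"
IMAGE_DIR = "images"

# Load the validation-split reasoning QAs and the image metadata.
with open(os.path.join(DATA_DIR, "reasoning_val.json")) as f:
    reasoning_val = json.load(f)
with open(os.path.join(DATA_DIR, "image_metadata_val.json")) as f:
    image_metadata_val = json.load(f)

# Inspect the size and the fields of one entry without assuming a schema.
entries = list(reasoning_val.values()) if isinstance(reasoning_val, dict) else reasoning_val
print(len(entries), "reasoning QAs in the validation split")
print("fields of one entry:", sorted(entries[0].keys()) if isinstance(entries[0], dict) else entries[0])

# Count the chart images shipped in the images folder.
num_images = len([name for name in os.listdir(IMAGE_DIR) if name.endswith(".jpg")])
print(num_images, "chart images found")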

Response generation

CharXiv doesn't require any third-party Python library when prompting your models to generate responses to the chart-question pairs. Therefore, to set up your model, you should implement the custom_evaluate function in src/generate.py. Specifically, this function takes queries as input, which contains all the charts and questions CharXiv uses to evaluate models. It has the following structure:

{
    figure_id:{
        'question': ...<str>
        'figure_path': ...<str>
    },
    ...
    figure_id:{
        'question': ...<str>
        'figure_path': ...<str>
    },
}

Once you have loaded your model and all preprocessing functions, simply complete the custom_evaluate function in src/generate.py:

for k in tqdm(queries):
    query = queries[k]['question']
    image = queries[k]['figure_path']
    query, image = preprocess(query, image)  # TODO: your model-specific preprocessing
    response = model.chat(query, image)      # TODO: your model's inference call
    queries[k]['response'] = response
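
As a concrete illustration, below is a minimal sketch of a completed custom_evaluate that queries a hosted vision model through the openai client. Everything model-specific here (the client, the model name, and the base64 image encoding) is an assumption for illustration only; replace it with your own model's preprocessing and inference code:

import base64
from openai import OpenAI
from tqdm import tqdm

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def encode_image(path):
    # Read a chart from disk and base64-encode it for the vision API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def custom_evaluate(queries):
    for k in tqdm(queries):
        question = queries[k]['question']
        image_b64 = encode_image(queries[k]['figure_path'])
        completion = client.chat.completions.create(
            model="gpt-4o",  # assumed model name; any vision-capable model works
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        queries[k]['response'] = completion.choices[0].message.content

For a local open-source model, you would instead load the model and processor once outside the loop and swap the API call for your model's own chat or generate interface.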

To generate model responses:

python src/generate.py \
    --model_name $model_name \
    --split $split \
    --mode $mode \
    --model_path $model_path # custom arg
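
For example, with a model registered under the hypothetical identifier my_model, generating responses to the descriptive questions on the validation split looks like this:

python src/generate.py \
    --model_name my_model \
    --split val \
    --mode descriptive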

⚠️The first three arguments are required and you should not delete them. It is your responsibility to ensure the correctness and integrity of the evaluation pipeline if you change them. In particular,

  • --model_name is used as a string identifier to name the resulting file. You can use any string that does not contain "-", which we reserve for parsing file names to extract meta information.

  • You should choose either val or test for the --split argument. To perform end-to-end in-house evaluations, use val, since we do not release answers for test.

  • Finally, you should choose either descriptive or reasoning for the --mode argument.

✅ The last argument, --model_path, is a custom argument; feel free to delete it, modify it, or add more arguments as you see fit.

🗄️ Once the process ends, you will find a file in the results folder named gen-<model_name>-<mode>_<split>.json. This file stores your model's responses.

Evaluation

⚠️ Make sure the openai Python library is installed in your evaluation environment. If not, run pip install openai first.

python src/evaluate.py \
    --model_name $model_name \
    --split $split \
    --mode $mode \
    --api_key $openai_key

The first three arguments are the same as above (response generation), except that you also need to provide an OpenAI API key to evaluate your model's responses.
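
For example, continuing the hypothetical my_model run on the descriptive validation split:

python src/evaluate.py \
    --model_name my_model \
    --split val \
    --mode descriptive \
    --api_key $openai_key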

This Python script will automatically match the gen-<model_name>-<mode>_<split>.json file in the results folder with the <mode>_<split>.json file in the data folder.

🗄️ Once the process ends, you will find a file in the results folder named scores-<model_name>-<mode>_<split>.json. This file stores your model's evaluation results.

Finally, run this:

python src/get_score.py \
    --model_name $model_name \
    --split $split \
    --mode $mode

This Python script will automatically match the scores-<model_name>-<mode>_<split>.json file in the results folder, calculate the score, and print it to your console.
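
For example, to print the score for the hypothetical my_model run above:

python src/get_score.py \
    --model_name my_model \
    --split val \
    --mode descriptive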

Note: we will update the get_score.py file soon to include more metrics (i.e., the metrics we reported in our paper).

📜 License

Our original data contributions (all data except the charts) are distributed under the CC BY-SA 4.0 license. Our code is licensed under the Apache 2.0 license. The copyright of the charts belongs to the original authors; you can find their sources in image_metadata_val.json and image_metadata_test.json under the data folder.

🥺 Cite

If you use or are inspired by our work, please consider citing us:

@article{wang2024charxiv,
  title={CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs},
  author={Wang, Zirui and Xia, Mengzhou and He, Luxi and Chen, Howard and Liu, Yitao and Zhu, Richard and Liang, Kaiqu and Wu, Xindi and Liu, Haotian and Malladi, Sadhika and Chevalier, Alexis and Arora, Sanjeev and Chen, Danqi},
  journal={arXiv preprint arXiv:2406.18521},
  year={2024}
}

🙌 Contributors and Acknowledgement

📊 CharXiv is built by a team consisting of:
Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, Danqi Chen

Princeton Language and Intelligence, Princeton University
University of Wisconsin, Madison
The University of Hong Kong.

🤗 We adapted part of MathVista's codebase in building our evaluation framework, and we greatly appreciate their contributions to the MLLM community.
🤗 The lyrics in the teaser video were created by GPT-4o from our abstract, and the music was created by Suno. The video was manually edited using CapCut.
