
This project is forked from eosphoros-ai/db-gpt-hub.


A repository of models, datasets, and fine-tuning techniques for DB-GPT, built to enhance model performance on Text-to-SQL. Using this project, a fine-tuned 13B LLM achieved higher execution accuracy than GPT-4 on the Spider evaluation set.

License: MIT License


DB-GPT-Hub: Text-to-SQL parsing with LLMs


1. What is DB-GPT-Hub

DB-GPT-Hub is an experimental project utilizing LLMs (Large Language Models) to achieve Text-to-SQL parsing. The project primarily encompasses data collection, data preprocessing, model selection and building, and fine-tuning of weights. Through this series of processes, we aim to enhance Text-to-SQL capabilities while reducing the model training costs, allowing more developers to contribute to the improvement of Text-to-SQL accuracy. Our ultimate goal is to realize automated question-answering capabilities based on databases, enabling users to execute complex database queries through natural language descriptions.

So far, we have successfully integrated multiple large models and established a complete workflow, including data processing, model SFT (Supervised Fine-Tuning) training, prediction output, and evaluation. The code is readily reusable within this project.

As of October 10, 2023, by fine-tuning an open-source model of 13 billion parameters using this project, the execution accuracy on the Spider evaluation dataset has surpassed that of GPT-4!

Part of the experimental results have been compiled into a document in this project. By utilizing this project and combining it with more related data, the execution accuracy on the Spider evaluation set has already reached 0.825.

2. Fine-tuning Text-to-SQL

We enhance the Text-to-SQL performance by applying Supervised Fine-Tuning (SFT) on large language models.

2.1. Dataset

The primary dataset for this project's examples is the Spider dataset:

  • SPIDER: A complex, cross-domain text2sql dataset containing 10,181 natural language questions and 5,693 unique SQL queries distributed across 200 separate databases covering 138 different domains. (download link)

Other text2sql datasets available:

  • WikiSQL: A large semantic parsing dataset consisting of 80,654 natural language expressions and SQL annotations over 24,241 tables. Each query in WikiSQL is limited to a single table and does not include complex operations such as sorting, grouping, subqueries, etc.

  • CHASE: A cross-domain, multi-round interactive Chinese text2sql dataset containing 5,459 multi-round question sequences composed of 17,940 <query, SQL> pairs across 280 databases from different domains.

  • BIRD-SQL: A large-scale cross-domain English text-to-SQL benchmark with a particular focus on large database content. The dataset contains 12,751 text-to-SQL data pairs and 95 databases with a total size of 33.4 GB across 37 occupational domains. BIRD-SQL bridges the gap between text-to-SQL research and real-world applications by exploring three additional challenges: dealing with large and messy database values, external knowledge reasoning, and optimizing SQL execution efficiency.

  • CoSQL: A corpus for building cross-domain conversational text-to-SQL systems; it is the conversational version of the Spider and SParC tasks. CoSQL consists of 30k+ turns and 10k+ annotated SQL queries collected in a Wizard-of-Oz setting from 3k conversations querying 200 complex databases across 138 domains. Each conversation simulates a realistic DB query scenario in which a staff member explores the database as a user and a SQL expert uses SQL to retrieve answers, clarify ambiguous questions, or otherwise inform.

  • Following the processing template of NSQL, the data underwent basic processing, yielding a dataset of approximately 20K examples.

2.2. Model

DB-GPT-Hub currently supports the following base models:

  • CodeLlama
  • Baichuan2
  • LLaMa/LLaMa2
  • Falcon
  • Qwen
  • XVERSE
  • ChatGLM2
  • internlm

The model is fine-tuned with 4-bit quantization using QLoRA (Quantized Low-Rank Adaptation). The minimum hardware requirements are as follows:

Model Parameters | GPU RAM | CPU RAM | DISK
7b               | 6GB     | 3.6GB   | 36.4GB
13b              | 13.4GB  | 5.9GB   | 60.2GB

All the related parameters are set to the minimum, with a batch size of 1 and max length of 512. Based on experience, for better performance, it is recommended to set the related length values to 1024 or 2048.
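
For readers unfamiliar with QLoRA, the sketch below shows what 4-bit quantized LoRA fine-tuning looks like with the transformers and peft libraries. It is a minimal illustration, not the project's actual training code (that lives in dbgpt_hub/train/sft_train.py); the model name and LoRA hyperparameters are placeholder assumptions.

# Minimal QLoRA sketch (illustrative; the real entry point is dbgpt_hub/train/sft_train.py).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-13b-hf"  # placeholder: any supported base model

# Load the base model in 4-bit NF4 quantization (what --quantization_bit 4 selects).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)

# Attach small trainable low-rank adapters to the attention projections;
# the frozen 4-bit base plus tiny adapters is what keeps GPU RAM this low.
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,   # placeholder hyperparameters
    target_modules=["q_proj", "v_proj"],      # the lora_target value for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable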

3. Usage

3.1. Environment preparation

git clone https://github.com/eosphoros-ai/DB-GPT-Hub.git
cd DB-GPT-Hub
conda create -n dbgpt_hub python=3.10 
conda activate dbgpt_hub
pip install -r requirements.txt 
mkdir model  # directory for the base model weights you download

3.2. Data preparation

DB-GPT-Hub uses an information-matching generation method for data preparation, i.e., SQL generation combined with table information (the SQL + Repository format). Pairing each question with the schema of the relevant tables helps the model understand the structure and relationships of the data tables, which is what it needs to generate SQL statements that meet the requirements.

Download the Spider dataset from the Spider dataset link. By default, after downloading and extracting the data, place it in the dbgpt_hub/data directory, i.e., the path should be dbgpt_hub/data/spider.

For the data preprocessing part, simply run the following script:

## generate train and dev(eval) data
sh dbgpt_hub/scripts/gen_train_eval_data.sh

In the directory dbgpt_hub/data/, you will find the newly generated training file example_text2sql_train.json and testing file example_text2sql_dev.json, containing 8659 and 1034 entries respectively.
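
As a quick sanity check, you can load the generated files and confirm the entry counts (file names and expected counts as above):

# Sanity-check the generated train/dev files.
import json

for split, expected in (("train", 8659), ("dev", 1034)):
    with open(f"dbgpt_hub/data/example_text2sql_{split}.json") as f:
        print(split, len(json.load(f)), "entries, expected", expected)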

The data in the generated JSON looks something like this:

    {
        "db_id": "department_management",
        "instruction": "I want you to act as a SQL terminal in front of an example database, you need only to return the sql command to me.Below is an instruction that describes a task, Write a response that appropriately completes the request.\n\"\n##Instruction:\ndepartment_management contains tables such as department, head, management. Table department has columns such as Department_ID, Name, Creation, Ranking, Budget_in_Billions, Num_Employees. Department_ID is the primary key.\nTable head has columns such as head_ID, name, born_state, age. head_ID is the primary key.\nTable management has columns such as department_ID, head_ID, temporary_acting. department_ID is the primary key.\nThe head_ID of management is the foreign key of head_ID of head.\nThe department_ID of management is the foreign key of Department_ID of department.\n\n",
        "input": "###Input:\nHow many heads of the departments are older than 56 ?\n\n###Response:",
        "output": "SELECT count(*) FROM head WHERE age  >  56",
        "history": []
    }, 
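
The ##Instruction block in the sample above is a textual serialization of the database schema. The hypothetical helper below sketches how such a string can be built from table metadata; it mirrors the sample's wording but is not the project's actual preprocessing code.

# Hypothetical helper: serialize a schema into prompt text like the sample above.
def serialize_schema(db_id, tables, primary_keys):
    """tables: {table: [columns]}; primary_keys: {table: pk_column}."""
    lines = [f"{db_id} contains tables such as {', '.join(tables)}."]
    for table, columns in tables.items():
        lines.append(
            f"Table {table} has columns such as {', '.join(columns)}. "
            f"{primary_keys[table]} is the primary key."
        )
    return "\n".join(lines)

print(serialize_schema(
    "department_management",
    {"department": ["Department_ID", "Name", "Budget_in_Billions"],
     "head": ["head_ID", "name", "age"]},
    {"department": "Department_ID", "head": "head_ID"},
))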

3.3. Model fine-tuning

Model fine-tuning supports both the LoRA and QLoRA methods. By default the script passes the --quantization_bit parameter, which selects QLoRA; to switch to LoRA, simply remove that parameter from the script. Run the command:

sh dbgpt_hub/scripts/train_sft.sh

After fine-tuning, the model weights will be saved by default in the adapter folder, specifically in the dbgpt_hub/output/adapter directory.

If you're using multi-GPU training and want to utilize DeepSpeed, you should modify the default content in train_sft.sh. Change:

CUDA_VISIBLE_DEVICES=0 python dbgpt_hub/train/sft_train.py \
    --quantization_bit 4 \
    ...

to:

deepspeed --num_gpus 2  dbgpt_hub/train/sft_train.py \
    --deepspeed dbgpt_hub/configs/ds_config.json \
    --quantization_bit 4 \
    ...

The omitted parts (…) can be kept consistent. If you want to change the default DeepSpeed configuration, go into the dbgpt_hub/configs directory and modify ds_config.json as needed.
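
For reference, a minimal ZeRO stage-2 configuration sketch is shown below. This is an assumption about typical DeepSpeed settings used with the HuggingFace Trainer, not necessarily the contents of the shipped ds_config.json:

    {
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
        "fp16": {
            "enabled": true
        },
        "zero_optimization": {
            "stage": 2
        }
    }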

During fine-tuning, different models use different values for the key parameters lora_target and template, as shown in the following table:

model name  | lora_target     | template
LLaMA-2     | q_proj,v_proj   | llama2
CodeLlama-2 | q_proj,v_proj   | llama2
Baichuan2   | W_pack          | baichuan2
InternLM    | q_proj,v_proj   | intern
Qwen        | c_attn          | chatml
XVERSE      | q_proj,v_proj   | xverse
ChatGLM2    | query_key_value | chatglm2
LLaMA       | q_proj,v_proj   | -
BLOOM       | query_key_value | -
BLOOMZ      | query_key_value | -
Baichuan    | W_pack          | baichuan
Falcon      | query_key_value | -

In train_sft.sh, other key parameters are as follows:

quantization_bit: Indicates whether quantization is applied; valid values are 4 and 8.
model_name_or_path: The path of the LLM (Large Language Model).
dataset: Specifies the name of the training dataset configuration, corresponding to the outer key value in dbgpt_hub/data/dataset_info.json, such as example_text2sql.
max_source_length: The length of the text input into the model. If computing resources allow, it can be set as large as possible, like 1024 or 2048.
max_target_length: The length of the SQL content output by the model; 512 is generally sufficient.
output_dir: The output path of the PEFT module during SFT (Supervised Fine-Tuning), set by default to dbgpt_hub/output/adapter/ .
per_device_train_batch_size: The size of the batch. If computing resources allow, it can be set larger; the default is 1.
gradient_accumulation_steps: The number of steps for accumulating gradients before an update.
save_steps: The number of steps at which model checkpoints are saved; it can be set to 100 by default.
num_train_epochs: The number of epochs for training the dataset.
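
Putting these together, the core invocation inside train_sft.sh looks roughly like the following. All paths and the numeric values not discussed above are illustrative placeholders; adapt them to your setup:

# Illustrative invocation; paths and tuning values are placeholders.
CUDA_VISIBLE_DEVICES=0 python dbgpt_hub/train/sft_train.py \
    --model_name_or_path /path/to/your/base/model \
    --quantization_bit 4 \
    --dataset example_text2sql \
    --template llama2 \
    --lora_target q_proj,v_proj \
    --max_source_length 1024 \
    --max_target_length 512 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --save_steps 100 \
    --num_train_epochs 8 \
    --output_dir dbgpt_hub/output/adapter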

3.4. Model Prediction

The directory ./dbgpt_hub/output/pred/ under the project root is the default output location for model predictions (if it does not exist, create it with mkdir). To predict, run:

sh ./dbgpt_hub/scripts/predict_sft.sh

In the script, the --quantization_bit parameter is present by default, so prediction uses QLoRA; removing it switches to the LoRA prediction method. The value of the --predicted_out_filename parameter is the file name of the model's predicted results, which can be found in the dbgpt_hub/output/pred directory.

3.5. Model Weights

You can find the corresponding model weights on Hugging Face at hg-eosphoros-ai. We uploaded the LoRA weights in October; their execution accuracy on the Spider evaluation set reached 0.789.

3.5.1. Model and fine-tuned weight merging

If you need to merge the weights of the trained base model and the fine-tuned Peft module to export a complete model, execute the following model export script:

sh ./dbgpt_hub/scripts/export_merge.sh

Be sure to replace the parameter path values in the script with the paths corresponding to your project.
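
Conceptually, the merge folds the trained LoRA deltas back into the base model. The sketch below shows the idea with the peft library; the paths are placeholders, and the project's actual logic lives in the export script:

# Minimal merge sketch (placeholder paths; the real flow is in export_merge.sh).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("/path/to/base/model")
model = PeftModel.from_pretrained(base, "dbgpt_hub/output/adapter")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("/path/to/merged/model")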

3.6. Model Evaluation

To evaluate model performance on a dataset (the Spider dev set by default), run the following command:

python dbgpt_hub/eval/evaluation.py --plug_value --input Your_model_pred_file

You can find our latest evaluation results and part of the experiment results here.

4. RoadMap

We divide the whole process into three stages:

  • Stage 1:

    • Set up the basic framework, enabling an end-to-end workflow from data processing, model SFT training, and prediction output to evaluation, based on multiple large models. As of 20230804, the entire pipeline has been established. We now support the following models:
    • CodeLlama
    • Baichuan2
    • LLaMa/LLaMa2
    • Falcon
    • Qwen
    • XVERSE
    • ChatGLM2
    • internlm
  • Stage 2:

    • Optimize model performance and support fine-tuning more models in various ways before 20231010
    • Optimize prompts
    • Release evaluation results and open optimized models to peers.
  • Stage 3:

    • Optimize and improve inference speed
    • Targeted optimization and improvement for business scenarios and Chinese-language performance
    • Optimize based on more papers, such as RESDSQL and others, combined with our community's sibling project Awesome-Text2SQL for further enhancements.

If our work helps you even a little, please give us a star to let us know; it gives us more motivation to release further related work.

5. Contributions

We welcome more folks to participate and provide feedback in areas like datasets, model fine-tuning, performance evaluation, paper recommendations, code reproduction, etc. Feel free to open issues or PRs, and we'll actively respond. Before submitting code, please format it using the black style.

6. Acknowledgements

Our work is primarily built on the foundation of numerous open-source contributions. Thanks to the following open-source projects.

7. License

The MIT License (MIT)

8. Contact Information

We are working together as a community; if you have any ideas about our community work, feel free to contact us. And if you're interested in in-depth experiments and optimization of the DB-GPT-Hub subproject, you can reach out to 'wangzai' in the WeChat group. We welcome everyone to join in making it better together.

Star History Chart

