Comments (5)
Firstly, thank you for your interest in our work. Calculating IFD scores requires inference on LLMs, so it is naturally time-consuming. However, we also proposed Superfiltering (ACL'24), which uses small language models such as GPT-2 instead of LLMs to select the data, tremendously lowering the time and cost of the selection process. If efficiency is important to you, please try it.
Secondly, you did not provide enough information about your observation:
- Since this method was originally designed for single-round data, how did you apply it to multi-round samples? Did you compute the IFD score once per turn? Did you condition on the whole preceding conversation, or only on the question at each turn?
- How large are your 50k multi-turn samples? 50k is not a small number: even on the 50k relatively simple Alpaca samples, scoring takes several hours. If the questions and answers in your samples are long and each sample contains many rounds, it will definitely take many more hours. It may help to first estimate the token count and the number of inference passes.
- What base LLM did you use?
- What GPU did you use?
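As a back-of-the-envelope sketch, the token/inference estimate suggested above might look like the following. The field names (`conversations`, `question`, `answer`) are assumptions about your data layout, and the "~4 characters per token" rule of thumb replaces a real tokenizer, so treat the numbers as order-of-magnitude only:

```python
# Rough estimate of token and inference counts before running per-turn IFD
# selection on multi-turn data. Assumes samples shaped like
# [{"conversations": [{"question": ..., "answer": ...}, ...]}, ...];
# adapt the field names to your dataset.

def estimate_cost(samples, chars_per_token=4):
    total_tokens = 0
    total_inferences = 0
    for sample in samples:
        for turn in sample["conversations"]:
            q_tokens = len(turn["question"]) // chars_per_token
            a_tokens = len(turn["answer"]) // chars_per_token
            # Each turn needs two forward passes: the answer conditioned on the
            # question, and the answer alone -- so answer tokens count twice.
            total_tokens += (q_tokens + a_tokens) + a_tokens
            total_inferences += 2
    return total_tokens, total_inferences

# Example: two samples, one with two turns.
data = [
    {"conversations": [{"question": "a" * 40, "answer": "b" * 80},
                       {"question": "c" * 20, "answer": "d" * 20}]},
    {"conversations": [{"question": "e" * 40, "answer": "f" * 40}]},
]
tokens, inferences = estimate_cost(data)
print(tokens, inferences)  # 95 6
```

If the total token count is already large here, a slow run on the real model is expected, not a bug.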
Again, thank you for your interest! If efficiency matters to you, we highly recommend trying Superfiltering (ACL'24)!
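For reference, the per-turn scoring discussed above (no history; one conditioned and one unconditioned pass per turn) can be sketched as follows. Note that `get_token_losses` is a hypothetical stand-in for a forward pass of your base model returning per-token cross-entropy over the answer tokens; it is not part of this repo's API:

```python
# Per-turn IFD scoring for multi-turn data. IFD is the perplexity ratio
# PPL(A | Q) / PPL(A): how hard the answer is to predict even when the
# instruction is given, relative to predicting it cold.
import math

def ifd_score(cond_losses, direct_losses):
    """IFD = PPL(answer | question) / PPL(answer)."""
    ppl_cond = math.exp(sum(cond_losses) / len(cond_losses))
    ppl_direct = math.exp(sum(direct_losses) / len(direct_losses))
    return ppl_cond / ppl_direct

def score_conversation(turns, get_token_losses):
    """Score each turn independently, without conversation history.

    get_token_losses(question, answer) is a placeholder for a model forward
    pass that returns the per-token losses over the answer tokens.
    """
    scores = []
    for turn in turns:
        cond = get_token_losses(turn["question"], turn["answer"])  # loss of A given Q
        direct = get_token_losses("", turn["answer"])              # loss of A alone
        scores.append(ifd_score(cond, direct))
    return scores
```

Because every turn costs two forward passes, runtime scales with the total number of turns, not the number of samples.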
from cherry_llm.
Thank you for your answer!
The dataset is indeed very large: 458 MB. I use only the question and answer at each turn, without the conversation history; the model is Qwen1.5-7B-Chat on a single A800 GPU, and I compute the loss once per turn during data analysis. Do you have any ideas for accelerating inference?
Thank you again; I will also try the Superfiltering method.
Also, does this project support selecting Chinese datasets?
Thank you for your interest!
Based on your data, it is quite reasonable that it takes many hours: although there are only 50k samples, the dataset is almost 20 times the size of the Alpaca data. Unfortunately, I am no expert in accelerating inference, sorry about that.
As for whether this method supports Chinese datasets, the answer should be yes. Our method is language-agnostic: it computes and compares the losses/perplexities produced by base models, so as long as the base model itself supports the language, our method should be useful.
If you are interested in our method or have further questions, we can also connect on WeChat for easier communication. Please send me an email if you are interested!
Thank you!
Related Issues (20)
- a confusion about Instruction-Following Difficulty (IFD) scores
- a confusion about data_by_IFD
- Logic behind IFD score
- I plan to apply this method on Llama2, which part of this project needs to be changed to adapt to Llama2?
- May I ask if this project is suitable for other large models, such as the Baichuan model, to filter high-quality datasets from other fields
- about the paper
- Multi-round conversation data set
- GPT-4/ChatGPT Evaluation Code
- How to filter code SFT data?
- Questions related to training
- Could the Pre-Experienced Model be used in other different dataset?
- Any report of time consuming?
- Chinese SFT data cannot be displayed.
- 'The training of pre-experienced models is discarded for more efficient usage': that means we can only use base model to do cherry analysis and selection?
- batch?
- About the Direct Answer Score sθ(A)
- Evaluation reproducibility on benchmarks
- how many epochs to train on cherry data?
- Question about the effect of labels[0, :start_token] = -100