Comments (3)
Hi, thanks for asking~
I think your question is actually two separate questions.
1. How about using cross_entropy loss?
It is totally fine. As you mentioned, log_softmax + nll_loss is the same as cross_entropy, so using it directly causes no problems; I wrote it the other way purely out of personal habit. You can use cross_entropy directly, or even just take loss = outputs.loss rather than calculating the loss for each token. In our later implementation, we also use loss = outputs.loss directly for simplicity.
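For concreteness, here is a minimal sketch (not the repository's code) checking the equivalence on random logits; the shapes and values are made up for illustration. The same idea applies to taking loss = outputs.loss from a Hugging Face causal-LM forward pass when labels are passed in.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)            # dummy (batch, vocab) logits
targets = torch.randint(0, 10, (4,))   # dummy target token ids

# Two-step version: log_softmax followed by nll_loss ...
loss_two_step = F.nll_loss(F.log_softmax(logits, dim=-1), targets)
# ... matches cross_entropy applied to the raw logits.
loss_direct = F.cross_entropy(logits, targets)
print(torch.allclose(loss_two_step, loss_direct))  # True
```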
2. How about using perplexity?
It is also fine. The overall philosophy is the same, but the equation changes slightly: since perplexity is the exponential of the loss, the ratio between perplexities becomes the subtraction (difference) of the losses. We compared both methods and the resulting performances are almost the same; we will include these experiments in the next version of the paper.
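As a quick sanity check (a sketch with made-up loss values, not the paper's code): because perplexity is exp(loss), the ratio of perplexities equals the exponential of the loss difference, and since exp is monotonic, ranking samples by either quantity gives the same order.

```python
import math

# Hypothetical per-token losses for illustration:
# conditioned loss L(A|Q) and unconditioned loss L(A).
loss_cond, loss_uncond = 1.2, 2.0

ppl_ratio = math.exp(loss_cond) / math.exp(loss_uncond)  # PPL(A|Q) / PPL(A)
loss_diff = loss_cond - loss_uncond                      # L(A|Q) - L(A)

# The perplexity ratio is exactly exp of the loss difference,
# so both quantities rank samples identically.
print(math.isclose(ppl_ratio, math.exp(loss_diff)))  # True
```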
Let me know if you have other questions~
from cherry_llm.
Thank you for your detailed reply, which completely solved my problem.
I would like to ask another question. In the experimental configuration described in the paper, the batch size is 128. Is it also 128 when conducting the pre-experiment? If so, will there be only one iteration per epoch when the sample size is less than 128? Looking forward to your answer~
from cherry_llm.
Yes, the batch size is also 128, following the Alpaca setting.
We train the pre-experienced model for only 1 epoch to keep the extra resources required as low as possible.
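For reference, a hedged sketch of what that setup could look like with Hugging Face TrainingArguments: the effective batch size of 128 and the single epoch come from this thread, while the output directory and the per-device/accumulation split are assumptions, not the repository's actual config.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pre_experienced_model",  # hypothetical path
    per_device_train_batch_size=4,       # assumed split; 4 * 32 = 128
    gradient_accumulation_steps=32,      # effective batch size 128 (Alpaca setting)
    num_train_epochs=1,                  # single epoch, as stated above
)
```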
from cherry_llm.