Comments (2)
Dear author, after tuning the parameters, I found that better performance may be achievable on my device (a GeForce RTX 4090) when selecting layer 24 and compressing at an 80% ratio (rate parameter set to 8.0). I suspect the difference stems from variations in hardware configuration.
Attachment: GPTJ-log-24-fc_in-8.0.txt
Hi @ZY123-GOOD. Apologies for the late response; I just noticed this issue.
If I understand correctly, you are running the following three experiments:
python3 intervention_gptj_fever.py --lname dont
python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 26
python3 intervention_gptj_fever.py --lname fc_in --rate 9.0 --lnum 26
Right? The first one runs the base model, and I can see that it recovers the 50.2% accuracy and 1.244 mean log-prob loss, matching the result in Table 1 of the paper.
The next two perform a single LASER intervention on the first MLP layer (fc_in) of layer 26, reducing the rank down to 0.01 and 0.1 fractions of the maximum rank (which is the minimum of the two dimensions of the weight matrix). The rate parameter relates to the ρ in the paper as ρ = 1 - 0.1 × rate.
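To make the mapping concrete, here is a small sketch of how rate translates into ρ and a retained rank (my own illustration of the formula above, not code from the repo):

def retained_rank(weight_shape, rate):
    # rho = 1 - 0.1 * rate is the fraction of the maximum rank that is kept.
    max_rank = min(weight_shape)
    rho = 1.0 - 0.1 * rate
    return max(1, int(rho * max_rank))

# For GPT-J's roughly 4096 x 16384 fc_in matrix:
# rate 9.9 -> rho = 0.01 -> rank 40; rate 9.0 -> rho = 0.1 -> rank 409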
Now, the best hyperparameters for this setting are listed in Table 3 of the paper and correspond to [Uin, 24, 0.01], where Uin is fc_in in the code, 24 is the layer number (lnum), and ρ = 0.01, i.e., a rate of 9.9. So I would recommend running:
python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 24
Can you try this setting for me? I noticed that in your second comment you eventually tried layer 24 but with a rate of 8.0, and that it still gave you improvements over the base model. Is my understanding correct?
Lastly, we have observed that some variation is possible due to the inherent stochasticity of PyTorch's SVD call. Our experiments on one domain suggested that the gap is small but noticeable.
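For intuition, the intervention itself is essentially a truncated SVD of the chosen weight matrix. Below is a minimal sketch of that step (an illustration assuming torch.linalg.svd; the repository's actual code may differ in its details):

import torch

def laser_truncate(weight, rho):
    # Low-rank approximation: keep only the top rho-fraction of singular directions.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(rho * min(weight.shape)))  # retained rank
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

On a GPU, repeated calls on the same matrix can return slightly different factors, which is where the variation mentioned above comes from.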
Related Issues (20)
- Excellent work, looking forward to following up with further research! HOT 3
- What is the ETA on the code HOT 2
- License HOT 6
- Mistral Support HOT 16
- Where to Get the Dataset HOT 5
- Question HOT 2
- Do you think it could work for MoE models like Mixtral? HOT 2
- Rank-reduced models? HOT 4
- Feature Request for Upcoming Refactoring
- Rank reduction using random matrix theory HOT 1
- what does the 'rate' parameters actually mean in code? HOT 2
- Potential improvements for evaluation HOT 1
- Application to three-dimensional tensors HOT 1
- Llama2-7B + TruthfulQA reproduce issue HOT 8
- method of composing reductions across layers HOT 2
- Generic model? HOT 3
- how to get base model accuracy HOT 2
- How to reproduce Figure 5 analysis in this paper? HOT 2
- Reproducing LLAMA-2 metrics HOT 2