Comments (13)
GRACE primarily focuses on sequence editing, and its performance on editing individual data is mediocre. Moreover, it is evaluated solely based on the log likelihood of correct outputs, rather than on decoding and inspection, which leads to subpar performance in direct text generation.
from easyedit.
Perhaps increasing n_iter could improve the editing effectiveness:)
from easyedit.
Hi, have you solved your issue? Do you have any further questions?
from easyedit.
Thanks for the reply. I tried to increase n_iter but it does not help. Besides, I am a little confused about this:
GRACE primarily focuses on sequence editing, and its performance on editing individual data is mediocre. Moreover, it is evaluated solely based on the log likelihood of correct outputs, rather than on decoding and inspection, which leads to subpar performance in direct text generation.
I looked at the code: when I evaluate the results, it calls the function compute_rewrite_or_rephrase_quality() in evaluate.py, which compares the exact output token IDs with the ground-truth IDs. It seems that the code never calls any functions in models/grace/metrics.py. According to Table 2 of the GRACE paper, they use F1 as the evaluation score on the ZsRE dataset. Could you explain the evaluation of GRACE in more detail? Is there any difference between your codebase and the original one?
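For reference on the difference being discussed: token-level F1 (as reported in Table 2 of the GRACE paper) gives partial credit for overlapping tokens, while exact-match comparison of token IDs counts any deviation as a total failure. A minimal sketch of the two metrics (function names are mine, not from either codebase):

```python
def token_f1(pred_ids, gold_ids):
    """Token-level F1 between predicted and gold token-id sequences."""
    gold_counts = {}
    for t in gold_ids:
        gold_counts[t] = gold_counts.get(t, 0) + 1
    common = 0
    for t in pred_ids:
        if gold_counts.get(t, 0) > 0:  # count each gold token at most once
            common += 1
            gold_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_ids)
    recall = common / len(gold_ids)
    return 2 * precision * recall / (precision + recall)

def exact_match(pred_ids, gold_ids):
    """Exact-match score: 1.0 only if the sequences are identical."""
    return float(pred_ids == gold_ids)

# A nearly correct prediction fails exact match but earns partial F1 credit:
pred, gold = [12, 34, 56], [12, 34, 78]
# exact_match(pred, gold) -> 0.0; token_f1(pred, gold) -> 0.666...
```

So an edit that gets most of the answer right can look like a complete failure under exact-match evaluation while still scoring well under F1.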
Besides, in my opinion, focusing on sequential editing does not imply worse performance on individual editing. If GRACE fails to edit individual data, how can it work well on sequential editing? By the way, my experimental setting is also sequential editing, but there is no improvement in the evaluation score after editing.
from easyedit.
We will update the evaluation criteria for GRACE to match the original paper in the next few days.
The original GRACE source code was only written for the gpt2-xl model; I did not optimize it for the Llama models, which might result in subpar evaluation performance.
from easyedit.
Thanks! I will also take a look at the original GRACE codebase and we can discuss later. Besides, are you going to implement MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA (https://arxiv.org/abs/2312.11795)?
from easyedit.
I am working on supporting MELO and will update the code once I have finished debugging it. Thank you for your interest. :)
from easyedit.
We have now updated the PPL and F1 evaluation criteria for GRACE. Could you please update the code accordingly?
from easyedit.
Thanks for your updates. I am going to try again, but I am still a little concerned about the evaluation results. In my experiments, ROME (and other methods) performs well on the original evaluation metrics of your codebase, but that is not the case for GRACE. I wonder: setting aside the sequential editing scenario, does this mean that GRACE is worse than ROME? If so, GRACE is less applicable than I thought.
from easyedit.
I have asked my senior colleagues and also conducted experiments myself, and it is confirmed that GRACE's performance is indeed not as good as ROME's.
from easyedit.
OK that makes sense. Thanks for your discussion and help!
from easyedit.
Thanks for your excellent work! I am trying to run the GRACE method on the ZsRE dataset (10 randomly selected samples) using Llama-2-7b checkpoints. However, I found that the rewrite accuracy is very low no matter what hyperparameters I choose.
Here are my experiment settings:
edit_lr: 1.0
n_iter: 100
dist_fn: euc
val_init: cold
val_train: sgd
val_reg: None
reg: early_stop
replacement: replace_all
eps_expand: coverage
num_pert: 8
I tried different eps values (1.0, 1.5, 2.0, 10, 20) and different edit layers (5, 10, 15, 20, 25, 30, 31), but nearly all the results show that the rewrite accuracy remains the same as before editing.
Could you provide some experiment results and hyperparameters? Besides, do you have some suggestions to explain or figure out this phenomenon? Thanks!
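For intuition on why eps and the edit layer matter here: GRACE maintains a codebook of (key, value) pairs at a single layer, and an edit only fires when the incoming hidden state falls inside a key's epsilon ball; otherwise the activation passes through unchanged. A toy sketch of that lookup (pure Python with simplified vectors and distances; not the actual GRACE code):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GraceCodebook:
    """Toy epsilon-ball key-value lookup at a single edit layer."""

    def __init__(self):
        self.entries = []  # list of (key, value, eps)

    def add_edit(self, key, value, eps):
        self.entries.append((key, value, eps))

    def __call__(self, hidden):
        # If the hidden state falls inside some key's epsilon ball,
        # return the stored (edited) value; otherwise pass it through.
        for key, value, eps in self.entries:
            if euclidean(hidden, key) < eps:
                return value
        return hidden

cb = GraceCodebook()
cb.add_edit(key=[1.0, 0.0], value=[9.0, 9.0], eps=0.5)
cb([1.1, 0.0])  # inside the ball -> edited value [9.0, 9.0]
cb([3.0, 0.0])  # outside the ball -> passes through unchanged
```

If eps is too small relative to the activation geometry at the chosen layer, the query representations of the edit prompts never land inside any ball and the model behaves exactly as before editing, which would match the "accuracy unchanged" symptom described above.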
Hi, I hope everything is going well, and thank you for exploring GRACE. Recently, I tried to edit llama2-7B-chat with GRACE and was able to reproduce an editing success rate of more than 90%. You can use GRACE with the latest version of EasyEdit. 😊
from easyedit.
Thanks for your excellent updates! I successfully reproduced the high accuracy of GRACE.
Hope to see your implementation of MELO soon :) Thanks again for your help!
from easyedit.
Related Issues (20)
- Is the opencompass mentioned in the paper needed for evaluation? HOT 2
- T-Patcher support HOT 2
- Question about E-VQA HOT 8
- Could there be a bug in the FT implementation HOT 6
- MELO fails to edit gpt2-xl HOT 7
- it seems there is an import bug HOT 5
- error when running knowedit HOT 8
- Result of running MEND on VQA dataset using minigpt4 model HOT 11
- about evaluation HOT 26
- Where is the dataset `MMEDIT`? HOT 4
- error training LoRa HOT 3
- Test Method IKE on multimodal editing HOT 2
- out of memory HOT 5
- KN out of memory HOT 3
- question about sequential edit using SERAC HOT 2
- How to reproduce the multi-modal results using MEND? HOT 18
- LORA edit does not change model's answers HOT 8
- MEND on KnowEdit dataset ?
- About the implement of MEMIT HOT 3