Comments (13)

XeeKee commented on September 22, 2024

GRACE primarily focuses on sequential editing, and its performance on editing individual examples is mediocre. Moreover, it is evaluated solely on the log-likelihood of the correct outputs rather than by decoding and inspecting the generations, which leads to subpar performance in direct text generation.
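For readers unfamiliar with the distinction, here is a minimal sketch of the two evaluation styles, assuming a generic Hugging Face causal LM (`model`) and tokenizer (`tok`); this is illustrative, not EasyEdit's actual evaluation code:

```python
import torch

def loglikelihood_score(model, tok, prompt, target):
    """Teacher-forced: sum the log-probability of the gold target tokens."""
    ids = tok(prompt + " " + target, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # predicts ids[0, 1:]
    target_ids = ids[0, 1:]
    idx = torch.arange(prompt_len - 1, ids.shape[1] - 1)   # target positions only
    return log_probs[idx, target_ids[idx]].sum().item()

def decode_and_inspect(model, tok, prompt, target, max_new_tokens=20):
    """Free-running: greedily decode, then inspect the generated text itself."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    text = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
    # A model can assign high likelihood to the target yet still fail here.
    return target.strip() in text
```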

XeeKee commented on September 22, 2024

Perhaps increasing n_iter could improve the editing effectiveness :)
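For example, assuming EasyEdit's usual load-hparams-from-YAML pattern, the override could look like this (the config path and value are illustrative):

```python
# Hedged sketch: raising GRACE's n_iter through EasyEdit's hparams object.
from easyeditor import GraceHyperParams

hparams = GraceHyperParams.from_hparams('./hparams/GRACE/llama-7b.yaml')
hparams.n_iter = 200  # more optimization steps per edit than the default
```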

zxlzr commented on September 22, 2024

Hi, have you solved your issue? Do you have any further questions?

ZihaoLin0123 commented on September 22, 2024

Thanks for the reply. I tried increasing n_iter, but it does not help. Besides, I am a little confused about this:

GRACE primarily focuses on sequence editing, and its performance on editing individual data is mediocre. Moreover, it is evaluated solely based on the log likelihood of correct outputs, rather than on decoding and inspection, which leads to subpar performance in direct text generation.

I saw in the code that when I evaluate the results, it calls the function compute_rewrite_or_rephrase_quality() in evaluate.py, which compares the exact output token IDs with the ground-truth IDs, and it seems that the code never calls any function in models/grace/metrics.py. According to Table 2 of the GRACE paper, they use F1 as the evaluation score on the ZsRE dataset. Could you explain the evaluation of GRACE in more detail? Is there any difference between your codebase and the original codebase?
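To make the gap concrete, a toy comparison of the two metrics might look like this (hypothetical helpers operating on token-ID lists, not the actual functions in either file; the F1 here is a set-based simplification):

```python
def exact_match(pred_ids, gold_ids):
    """Rewrite accuracy in the exact-match style: all token IDs must agree."""
    return float(pred_ids == gold_ids)

def token_f1(pred_ids, gold_ids):
    """Token-level F1 in the style of the GRACE paper's Table 2: partial credit."""
    common = set(pred_ids) & set(gold_ids)
    if not common:
        return 0.0
    precision = len(common) / len(set(pred_ids))
    recall = len(common) / len(set(gold_ids))
    return 2 * precision * recall / (precision + recall)

# A near-miss generation scores 0 under exact match but high under F1:
gold = [101, 202, 303, 404]
pred = [101, 202, 303, 505]
print(exact_match(pred, gold))  # 0.0
print(token_f1(pred, gold))     # 0.75
```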

Besides, in my opinion, focusing on sequential editing does not imply worse performance on individual edits. If GRACE fails to edit individual examples, how can it work well on sequential editing? By the way, my experimental setting is also sequential editing, but there is no improvement in the evaluation score after editing.

XeeKee commented on September 22, 2024

We will update the evaluation criteria to match the original GRACE paper in the next few days.
The original GRACE source code only targets the gpt2-xl model; I have not optimized it for the LLaMA models, which might result in subpar evaluation performance.

ZihaoLin0123 commented on September 22, 2024

Thanks! I will also take a look at the original GRACE codebase, and we can discuss later. Besides, are you going to implement MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA (https://arxiv.org/abs/2312.11795)?

XeeKee commented on September 22, 2024

I am working on supporting MELO and will update the code once I have finished debugging it. Thank you for your interest :)

XeeKee commented on September 22, 2024

We have now added the PPL and F1 evaluation metrics for GRACE. Could you please pull the latest code and try again?
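For reference, sentence-level perplexity can be sketched as follows with a Hugging Face causal LM; this is a minimal illustration, not necessarily how EasyEdit implements its PPL metric:

```python
import torch

def perplexity(model, tok, text):
    """Sentence-level PPL: exp of the mean token negative log-likelihood."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, HF causal LMs return the mean cross-entropy
        # over the shifted tokens as `loss`.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()
```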

ZihaoLin0123 commented on September 22, 2024

Thanks for your updates. I am going to try again, but I am still a little concerned about the evaluation results. In my experiments, ROME (and other methods) perform well on the original evaluation metrics of your codebase, but that is not the case for GRACE. Setting the sequential-editing scenario aside, does this mean that GRACE is worse than ROME? If so, GRACE is less applicable than I thought.

XeeKee commented on September 22, 2024

I have asked my senior colleagues and conducted experiments myself, and it is confirmed that GRACE's performance is indeed not as good as ROME's.

ZihaoLin0123 commented on September 22, 2024

OK that makes sense. Thanks for your discussion and help!

pengzju commented on September 22, 2024

Thanks for your excellent work! I am trying to run the GRACE method on the ZsRE dataset (randomly selecting 10 samples) using Llama-2-7b checkpoints. However, I found that the rewrite accuracy is very low no matter what hyperparameters I choose.

Here are my experiment settings:

edit_lr: 1.0
n_iter: 100
dist_fn: euc
val_init: cold
val_train: sgd
val_reg: None
reg: early_stop
replacement: replace_all
eps_expand: coverage
num_pert: 8

I tried different eps values (1.0, 1.5, 2.0, 10, 20) and different edit layers (5, 10, 15, 20, 25, 30, 31), but nearly all the results show that the rewrite accuracy remains the same as before editing.

Could you provide some experimental results and hyperparameters? Also, do you have any suggestions to explain or diagnose this phenomenon? Thanks!

Hi, I hope everything is going well, and thank you for exploring GRACE. I recently tried to edit Llama-2-7B-chat with GRACE and was able to reproduce an editing success rate of more than 90%. You can use GRACE with the latest version of EasyEdit. 😊
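For anyone following along, an end-to-end run could look roughly like the sketch below, following the usage pattern in EasyEdit's README; the paths, prompt, and target are illustrative, and the exact edit() signature may differ across EasyEdit versions:

```python
# Hedged sketch of a GRACE edit plus a decode check with EasyEdit.
from easyeditor import BaseEditor, GraceHyperParams
from transformers import AutoTokenizer

hparams = GraceHyperParams.from_hparams('./hparams/GRACE/llama-7b-chat.yaml')
editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, _ = editor.edit(
    prompts=['Who is the architect of the Eiffel Tower?'],
    target_new=['Gustave Eiffel'],
    keep_original_weight=True,
)
print(metrics)  # per-edit rewrite/rephrase/locality scores

# Decode from the edited model instead of only checking log-likelihood.
tok = AutoTokenizer.from_pretrained(hparams.model_name)  # model name from the YAML
ids = tok('Who is the architect of the Eiffel Tower?', return_tensors='pt')
out = edited_model.generate(**ids.to(edited_model.device), max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```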

ZihaoLin0123 commented on September 22, 2024

Thanks for your excellent updates! I successfully reproduced the high accuracy of GRACE.
Hope to see your implementation of MELO soon :) Thanks again for your help!
