zjunlp / easyedit

An Easy-to-use Knowledge Editing Framework for LLMs.

Home Page: https://zjunlp.github.io/project/KnowEdit

License: MIT License

Python 25.47% Jupyter Notebook 72.19% Dockerfile 0.09% Makefile 0.03% MDX 2.22%
artificial-intelligence efficient knowledge-editing large-language-models model-editing natural-language-processing open-source-project tool easyedit knowlm

easyedit's People

Contributors

ashishkumar90244, beasteryong, blankspaceplus, cheng-siyuan, cuiliyuan121, cx229, diyora13, domenicrosati, dqyzhwk, egojoseph, eltociear, icyclv, littlefive5, macaronlin, mengrusun, n2man, nipelement, oe-heart, pengzju, shengyumao, sidnb13, tbozhong, txiaoxiaofu, venkatasrimannarayana, wangxh-07, xeekee, xpq-tech, xxupiano, xzwyyd, zxlzr


easyedit's Issues

Codes for Reproducing the Results about the Edit Performance

Hi,

Your project provides a useful platform for applying and comparing different editing methods.
However, I could not find any .py or .sh scripts for easily reproducing the results in the 'Edit Performance' table.
Could you provide a complete reproduction pipeline, including data preprocessing (if any), the calls to the different editing methods, the editing runs, and the evaluation?
Detailed scripts for reproduction would be greatly appreciated.

Thanks!

torch.cuda.OutOfMemoryError: CUDA out of memory

Please help. The code is:

from easyeditor import EditTrainer, MENDTrainingHparams, ZsreDataset

hparams = MENDTrainingHparams.from_hparams("./hparams/TRAINING/MEND/llama-7b.yaml")
train_ds = ZsreDataset('./Data/zsre/zsre_mend_train_10000.json', config=hparams)
eval_ds = ZsreDataset('./Data/zsre/zsre_mend_eval.json', config=hparams)
trainer = EditTrainer(
    config=hparams,
    train_set=train_ds,
    val_set=eval_ds
)
trainer.run()

The error when running is:

File "EasyEdit/easyeditor/trainer/algs/MEND.py", line 285, in edit
loss.backward()

OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 7; 79.19 GiB total capacity; 35.57 GiB already allocated; 14.88 MiB free; 35.63
GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for
Memory Management and PYTORCH_CUDA_ALLOC_CONF

The model is llama-7b, loaded from a local path.
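
For reference, the allocator setting that the error message points to can be applied like this before training starts (a minimal sketch; 128 MB is only an example value, and it mitigates fragmentation rather than reducing total memory use):

import os

# Must be set before torch initializes CUDA (i.e. before the first CUDA allocation).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"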

Thank you! This is a great open-source project; I ran into this error while experimenting and would really appreciate your help.

Colab errors when running EasyEdit_Example_ROME_llama.ipynb

The notebook settings for EasyEdit_Example_ROME_llama.ipynb cause Colab to report Unrecognized runtime "easyedit"; defaulting to "python3".
This appears to cause a number of errors (that mostly seem to stem from the wrong Python version getting selected as the default), such as ModuleNotFoundError: No module named 'distutils.cmd' (presumably because it tries to run the default system install of Python 3.9).

How do I run evaluation on the ZsRE dataset?

Hello! There are a few things I don't quite understand and would like to ask about:

When evaluating on the ZsRE dataset, what is the exact procedure?
For methods that can only edit one fact at a time, should all edits be applied sequentially before testing, or should each edit be tested right after it is applied?
For editing methods that require training a model, should the model first be trained on the dataset and then evaluated on the whole dataset?

I hope you can find time to answer. Thank you very much!

Import EasyEdit caused "Segmentation fault (core dumped)"

git clone https://github.com/zjunlp/EasyEdit.git
conda create -n EasyEdit python=3.9.7
...
pip install -r  requirements.txt

Following the instructions above from the README, after the environment was set up I simply tried from easyeditor import MENDHyperParams and it caused a segfault: "Segmentation fault (core dumped)". I also tried import torch and hit the same error. Importing other packages such as transformers does not cause the segfault.

Is there any way to work around this problem? Please advise.

Computation time for MEMIT/ROME editing

Hi, thanks for your great contributions!

I would like to know how long it normally takes to edit a model using MEMIT or ROME. Does the time taken depend on the number of edit samples?

prompts = ['Ray Charles, the',
            'Grant Hill is a professional',
            'The law in Ikaalinen declares the language'
            ]
ground_truth = ['piano',
                'basketball',
                'Finnish'
                ]
target_new = ['violin',
              'soccer',
              'Swedish'
              ]
subject = ['Ray Charles',
            'Grant Hill',
            'Ikaalinen'
            ]

editor=BaseEditor.from_hparams(hparams)

metrics, edited_model_false, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    subject=subject,
    keep_original_weight=False
)

I followed the tutorial to run MEMIT on GPT-J-6B, but after more than five hours the program still hasn't finished. Is something wrong?

Looking forward to your reply!

How to tune MEMIT's hyperparameters under thousands of edits?

Thanks for your contribution; it really impresses me!
I want to use MEMIT to inject thousands of edits into a language model, but in practice I found that injecting thousands of edits with the default hyperparameter settings completely corrupts the model's internal parameters (even when editing only one layer). Do you have any experience tuning MEMIT's hyperparameters for large edit batches? Is it more effective to increase kl_factor or to lower v_lr? (A sketch of the fields I mean follows below.)
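
For clarity, these are the hyperparameter fields I am referring to. The names follow the stock MEMIT YAML shipped with EasyEdit (they may differ by version), and the values are only illustrative guesses, not recommendations:

from easyeditor import MEMITHyperParams

hparams = MEMITHyperParams.from_hparams('./hparams/MEMIT/llama-7b')
hparams.v_lr = 1e-1                # learning rate for the value-vector optimization
hparams.kl_factor = 0.0625         # KL penalty that constrains drift away from the original model
hparams.layers = [4, 5, 6, 7, 8]   # which MLP layers receive the batched updates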
Looking forward to your reply!

Error and Computation Time

Hi, while trying to run the Google Colab notebook:

  1. I am getting the error:

"OSError: Can't load the configuration of '/mnt/peng/EasyEdit/hugging_cache/llama-2-7b'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/mnt/peng/EasyEdit/hugging_cache/llama-2-7b' is the correct path to a directory containing a config.json file"

Please advise what should be done.

  2. Approximately how much time would this editing command take to execute?
    python run_zsre_llama2.py
    --editing_method=ROME
    --hparams_dir=../hparams/ROME/llama-7b
    --data_dir=./data

In my case it is stuck after displaying this line:
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████| 2/2 [00:13<00:00, 6.51s/it]

Does EasyEdit support training on H800?

In order to train on an H800, I changed the torch version to 2.1.0+cu121. The other dependencies are installed with the versions specified in requirements.txt.

After running my demo, which is adapted from the tutorial code, I got this error:

Use device: cuda:0
2023-10-10 10:41:37,603 - easyeditor.editors.editor - INFO - Instantiating model
10/10/2023 10:41:37 - INFO - easyeditor.editors.editor -   Instantiating model
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:46<00:00,  6.67s/it]
Getting coarse neurons for each prompt...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00,  2.49s/it]

484 coarse neurons found - refining
484 neurons remaining after refining

Before modification - groundtruth probability: tensor([2.2667e-07], device='cuda:0')
Argmax completion: `King`
Argmax prob: 0.17146547138690948
Traceback (most recent call last):
  File "/home/workspace/EasyEdit/demo-chatglm2.py", line 31, in <module>
    metrics, edited_model, _ = editor.edit(
  File "/home/workspace/EasyEdit/easyeditor/editors/editor.py", line 241, in edit
    edited_model, weights_copy = self.apply_algo(
  File "/home/workspace/EasyEdit/easyeditor/models/kn/kn_main.py", line 46, in apply_kn_to_model
    results_dict, unpatch_fn = kn.edit_knowledge(
  File "/home/workspace/EasyEdit/easyeditor/models/kn/knowledge_neurons/knowledge_neurons/knowledge_neurons.py", line 906, in edit_knowledge
    return self.modify_weights(
  File "/root/anaconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/workspace/EasyEdit/easyeditor/models/kn/knowledge_neurons/knowledge_neurons/knowledge_neurons.py", line 831, in modify_weights
    output_ff_weights[:, position].detach().clone()
TypeError: 'Linear' object is not subscriptable

Should I change some part of the code to make it work?

The code is as follows:

import os
import logging

from easyeditor import BaseEditor
from easyeditor import KNHyperParams


PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
USE_DEVICE = f"cuda:0"
logging.info(f"Use device: {USE_DEVICE}")

prompts = ['Ray Charles, the',
            'Grant Hill is a professional',
            'The law in Ikaalinen declares the language'
            ]
ground_truth = ['piano',
                'basketball',
                'Finnish'
                ]
target_new = ['violin',
              'soccer',
              'Swedish'
              ]
subject = ['Ray Charles',
            'Grant Hill',
            'Ikaalinen'
            ]

hparams = KNHyperParams.from_hparams(os.path.join(PROJECT_PATH, 'hparams/KN/chatglm2-6b.yaml'))
editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    subject=subject,
    keep_original_weight=True
)

print(metrics)


print('*'*20)

from transformers import AutoTokenizer, AutoModel

# tokenizer = GPT2Tokenizer.from_pretrained('./hugging_cache/gpt2-xl')
tokenizer = AutoTokenizer.from_pretrained('/home/workspace/pretrain-model/chatglm/chatglm2-6b', trust_remote_code=True)
# tokenizer = AutoTokenizer.from_pretrained('/home/workspace/pretrain-model/THUDM/chatglm2-6b', trust_remote_code=True)
# tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side='left'
generation_prompts = [
    "Ray Charles, the",
    "The law in Ikaalinen declares the language"
]

model = AutoModel.from_pretrained('/home/workspace/pretrain-model/chatglm/chatglm2-6b', trust_remote_code=True).to(USE_DEVICE)
# model = AutoModel.from_pretrained('/home/workspace/pretrain-model/THUDM/chatglm2-6b', trust_remote_code=True).to(USE_DEVICE)
batch = tokenizer(generation_prompts, return_tensors='pt', padding=True, max_length=30)

pre_edit_outputs = model.generate(
    input_ids=batch['input_ids'].to(USE_DEVICE),
    attention_mask=batch['attention_mask'].to(USE_DEVICE),
    max_length=10
)

post_edit_outputs = edited_model.generate(
    input_ids=batch['input_ids'].to(USE_DEVICE),
    attention_mask=batch['attention_mask'].to(USE_DEVICE),
    max_length=10
)

[Feature Request] Support InternLM

Dear EasyEdit developer,

I am 尖米, a community developer and volunteer with InternLM. Your open-source work has been very inspiring to me, and I would like to discuss the feasibility of, and a path towards, supporting InternLM in EasyEdit. My WeChat ID is mzm312; I hope we can get in touch for a deeper exchange.

Best regards,
尖米

personality edit code

Thank you for your incredible work. I am very interested in your Personality Edit. May I ask when the relevant code and dataset will be released? I'm really looking forward to it!

Test case for ChatGLM2

Hi, I am trying to use ROME to edit ChatGLM2, but I noticed that the performance (avg prob) after editing is very low. Could you please share some examples of editing ChatGLM2? Here are some of my cases:

  1. Case 1: ENGLISH CASE
    prompts = ['Ray Charles, the']
    ground_truth = ['piano']
    target_new = ['violin']
    subject = ['Ray Charles']

    avg prob: 0.01

  2. Case 2: CHINESE CASE
    prompts = ['**的首都是']
    ground_truth = ['北京']
    target_new = ['上海']
    subject = ['**']

    avg prob: 0.01

In both cases I used the default settings in the yaml file. I also tried raising grad_steps to 100, but the maximum prob reached was 0.1, which is not sufficient for editing. Could you please help me out? Much appreciated.

Can I use EasyEdit to evaluate the local model which is edited?

Dear EasyEdit research team,
Thank you sincerely for your outstanding work in the field of model editing; it has been very helpful and inspiring to me. I would like to ask one question: can I use EasyEdit to evaluate a local model that I have already edited myself? If so, how can I do that? It seems that I do not have a yaml file after editing a model (such as LLaMA-7B). A sketch of what I have in mind is below. Looking forward to your response.
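
This is the usage I am imagining: point an existing config at the locally saved, already-edited checkpoint and evaluate from there. model_name is the field used in the stock YAMLs; whether an evaluation-only run like this is supported is exactly my question, and the paths are placeholders:

from easyeditor import ROMEHyperParams, BaseEditor

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b')
hparams.model_name = '/path/to/my/edited-llama-7b'   # locally saved edited checkpoint
editor = BaseEditor.from_hparams(hparams)
# ...and then run the usual evaluation on this editor without applying further edits?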

Index out of bound error with chatglm2-6b

I tried to follow the demo in /tutorial-notebooks with the model chatglm2-6b using ROME or MEMIT, but both stop with an index-out-of-bounds error. What causes this error, and how can I work around it?

The code I used is as below:

import os

from easyeditor import BaseEditor
from easyeditor import ROMEHyperParams


PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))

prompts = ['Ray Charles, the',
            'Grant Hill is a professional',
            'The law in Ikaalinen declares the language'
            ]
ground_truth = ['piano',
                'basketball',
                'Finnish'
                ]
target_new = ['violin',
              'soccer',
              'Swedish'
              ]
subject = ['Ray Charles',
            'Grant Hill',
            'Ikaalinen'
            ]

hparams = ROMEHyperParams.from_hparams(os.path.join(PROJECT_PATH, 'hparams/ROME/chatglm2-6b.yaml'))
editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    subject=subject,
    keep_original_weight=True
)

print(metrics)


print('*'*20)

from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel

# tokenizer = GPT2Tokenizer.from_pretrained('./hugging_cache/gpt2-xl')
tokenizer = GPT2Tokenizer.from_pretrained('/home/workspace/pretrain-model/THUDM/chatglm2-6b')
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side='left'
generation_prompts = [
    "Ray Charles, the",
    "The law in Ikaalinen declares the language"
]

model = GPT2LMHeadModel.from_pretrained('/home/workspace/pretrain-model/THUDM/chatglm2-6b').to('cuda')
batch = tokenizer(generation_prompts, return_tensors='pt', padding=True, max_length=30)

pre_edit_outputs = model.generate(
    input_ids=batch['input_ids'].to('cuda'),
    attention_mask=batch['attention_mask'].to('cuda'),
    max_length=10
)

post_edit_outputs = edited_model.generate(
    input_ids=batch['input_ids'].to('cuda'),
    attention_mask=batch['attention_mask'].to('cuda'),
    max_length=10
)

The error message for MEMIT is as below:

Traceback (most recent call last):
  File "/home/workspace/knowledge_edit/EasyEdit-main/demo.py", line 28, in <module>
    metrics, edited_model, _ = editor.edit(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/editors/editor.py", line 242, in edit
    edited_model, weights_copy = self.apply_algo(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/memit/memit_main.py", line 46, in apply_memit_to_model
    deltas = execute_memit(model, tok, requests, hparams, cache_template=cache_template)
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/memit/memit_main.py", line 141, in execute_memit
    cur_z = compute_z(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/memit/compute_z.py", line 126, in compute_z
    logits = model(**input_tok).logits
  File "/opt/anaconda3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b/8fd7fba285f7171d3ae7ea3b35c53b6340501ed1/modeling_chatglm.py", line 934, in forward
    transformer_outputs = self.transformer(
  File "/opt/anaconda3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b/8fd7fba285f7171d3ae7ea3b35c53b6340501ed1/modeling_chatglm.py", line 830, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
  File "/opt/anaconda3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b/8fd7fba285f7171d3ae7ea3b35c53b6340501ed1/modeling_chatglm.py", line 640, in forward
    layer_ret = layer(
  File "/opt/anaconda3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1547, in _call_impl
    hook_result = hook(self, args, result)
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/util/nethook.py", line 80, in retain_hook
    output = invoke_with_optional_args(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/util/nethook.py", line 451, in invoke_with_optional_args
    return fn(*pass_args, **pass_kw)
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/memit/compute_z.py", line 103, in edit_output_fn
    cur_out[0][i, idx, :] += delta
IndexError: index 12 is out of bounds for dimension 1 with size 7

The error message for ROME is as below:

Traceback (most recent call last):
  File "/home/workspace/knowledge_edit/EasyEdit-main/demo.py", line 28, in <module>
    metrics, edited_model, _ = editor.edit(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/editors/editor.py", line 242, in edit
    edited_model, weights_copy = self.apply_algo(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/rome_main.py", line 41, in apply_rome_to_model
    deltas = execute_rome(model, tok, request, hparams)
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/rome_main.py", line 104, in execute_rome
    left_vector: torch.Tensor = compute_u(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/compute_u.py", line 85, in compute_u
    cur_repr = repr_tools.get_reprs_at_word_tokens(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/repr_tools.py", line 32, in get_reprs_at_word_tokens
    return get_reprs_at_idxs(
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/repr_tools.py", line 164, in get_reprs_at_idxs
    _process(tr.input, batch_idxs, "in")
  File "/home/workspace/knowledge_edit/EasyEdit-main/easyeditor/models/rome/repr_tools.py", line 147, in _process
    to_return[key].append(cur_repr[i][idx_list].mean(0))
IndexError: index 15 is out of bounds for dimension 0 with size 15

Pre-training for MEND

Hello,

Thanks for putting the different model editing methods together. I have a question about MEND pre-training.

I followed the Trainer tutorial to do pre-training for MEND, but I got stuck at step 4. In step 4, when the program initializes the trainer, it calls:

archive, config.archive = load_archive(str(config.archive))

Based on step 6, it seems that archive should be set to the path of the meta-network. However, at step 4 I am about to train the meta-network and don't have that model yet. What should archive be in the hparams file?

The Trainer tutorial jumps from step 4 to step 6 directly; I am wondering whether step 5 is missing. Also, step 6 says the CHECKPOINT will be saved to the RESULTS_DIR in global.yml. Could you point me to the path of the global.yml file?

It would be very helpful if you can provide a notebook tutorial for MEND, including both pre-training and model editing!

Thanks!

TypeError when using KN to edit ChatGLM2-6B

Hello, EasyEdit is a very useful tool. When using it to perform knowledge editing on ChatGLM2-6B, I got the following error:

File "/path/to/EasyEdit/lib/python3.9/site-packages/easyeditor/models/kn/knowledge_neurons/knowledge_neurons/knowledge_neurons.py", line 836, in modify_weights
    output_ff_weights[:, position].detach().clone()
TypeError: 'Linear' object is not subscriptable

The source code I ran is as follows:
import os

os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3"

from easyeditor import BaseEditor
from easyeditor import KNHyperParams

ENG_TEXTS = [
    "Sarah was visiting the capital of france,",
    "The capital of france is",
    "The eiffel tower is situated in"
]
ground_truth = ['Paris',
                'Paris',
                'Paris'
                ]
target_new = ['Beijing',
              'Beijing',
              'Beijing'
              ]
hparams = KNHyperParams.from_hparams("/path/to/EasyEdit/hparams/KN/chatglm2-6b.yaml")
editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, _ = editor.edit(
    prompts=ENG_TEXTS,
    ground_truth=ground_truth,
    target_new=target_new,
    keep_original_weight=True
)

print(metrics)

print('*'*20)

from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('/path/to/chatglm2-6b')
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
generation_prompts = [
    "The eiffel tower is situated in ",
    "The capital of france is "
]

model = GPT2LMHeadModel.from_pretrained('/path/to/chatglm2-6b').to('cuda')
batch = tokenizer(generation_prompts, return_tensors='pt', padding=True, max_length=30)

pre_edit_outputs = model.generate(
    input_ids=batch['input_ids'].to('cuda'),
    attention_mask=batch['attention_mask'].to('cuda'),
    max_length=10
)

post_edit_outputs = edited_model.generate(
    input_ids=batch['input_ids'].to('cuda'),
    attention_mask=batch['attention_mask'].to('cuda'),
    max_length=10
)
I hope you can help me resolve this problem. Thank you very much!

Some doubts on the dataset tested

Hello! I greatly appreciate your work and have a simple problem.

I would like to know whether the editing performance results shown were tested on the dataset named 'zsre_mend_eval_portability_gpt4.json', which contains 1,037 samples. I did not see any portability inputs in the original ZsRE dataset that you provide on Google Drive, so I think it would not yield the portability metric.

Thank you so much.

ROME + GPT-2XL + zSRE replicate results

Hi,

Thank you so much for your support and wonderful repo :)

I was able to run ROME + GPT2-XL from edit.py and can observe the metrics and the edited model. test_ROME() basically inserts a few manually designed facts/prompts described in the function.

I would like to run the repo on the entire zsRE dataset, i.e., replicate the results in Table 1 of the official ROME paper (https://arxiv.org/pdf/2202.05262.pdf).

Thanks
Srinath

Knowledge Editing with BERT on FEVER Dataset

I skimmed through the code base, and it seems that it is not possible to use EasyEdit to edit an encoder-only model (like BERT) on a classification task (like FEVER), even though many of the original papers, including KnowledgeEditor by De Cao et al. and MEND by Mitchell et al., do show such a possibility.

Do you know if this observation is accurate? If true, would you consider adding support for encoder-only models? If not, please give me a pointer for reproducing the knowledge editing experiment on BERT with the FEVER dataset.

IKE to edit Llama2 on ZsRE and Reproducing Editing Performance

Hello,

  1. How can we edit Llama-2 on ZsRE with the IKE method? I tried:

python run_zsre_llama2.py --editing_method=IKE --hparams_dir=../tutorial-notebooks/hparams/IKE/llama-7b --data_dir=./data

It shows the error: assert 'train_ds' in kwargs.keys() or print('IKE need train_ds(For getting In-Context prompt)')

Where and how should we change the code? (A sketch of my current guess is after this list.)

  2. If we just want to reproduce the results of the Editing Performance table with the four metrics, how can we do that?

  3. I get a CUDA out-of-memory error while running the code in EasyEdit_Example_IKE.ipynb.
     Is there any way other than using the Hugging Face accelerate library, e.g. reducing batch_size?
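
Regarding item 1, my reading of the assertion is that IKE expects a train_ds keyword so it can build the in-context prompt. This is the call I am guessing at (a sketch only; whether edit() accepts train_ds this way is exactly my question, and the data path is a placeholder):

from easyeditor import IKEHyperParams, BaseEditor, ZsreDataset

hparams = IKEHyperParams.from_hparams('../hparams/IKE/llama-7b')
train_ds = ZsreDataset('./data/zsre_mend_train_10000.json', config=hparams)

editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, _ = editor.edit(
    prompts=['What university did Watts Humphrey attend?'],
    ground_truth=['Illinois Institute of Technology'],
    target_new=['University of Michigan'],
    train_ds=train_ds,             # guess: this should satisfy the 'train_ds' assertion
    keep_original_weight=True
)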

Bug about using ROME to edit

Hello, when I tried to use ROME to edit gpt2-xl following the ROME example from the Docs and Colab, I ran into a bug.
The code is:
from easyeditor import ROMEHyperParams
from easyeditor import BaseEditor

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')

prompts = ['Ray Charles, the',
           'Grant Hill is a professional',
           'The law in Ikaalinen declares the language'
           ]
ground_truth = ['piano',
                'basketball',
                'Finnish'
                ]
target_new = ['violin',
              'soccer',
              'Swedish'
              ]
subject = ['Ray Charles',
           'Grant Hill',
           'Ikaalinen'
           ]

editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    subject=subject,
    # locality_inputs=locality_inputs,
    keep_original_weight=False
)

I got this error:
Traceback (most recent call last):
File "/home/yangwanli/EasyEdit/test_rome.py", line 25, in
metrics, edited_model, _ = editor.edit(
File "/home/yangwanli/EasyEdit/easyeditor/editors/editor.py", line 200, in edit
edited_model, weights_copy = self.apply_algo(
File "/home/yangwanli/EasyEdit/easyeditor/models/rome/rome_main.py", line 41, in apply_rome_to_model
deltas = execute_rome(model, tok, request, hparams)
File "/home/yangwanli/EasyEdit/easyeditor/models/rome/rome_main.py", line 104, in execute_rome
left_vector: torch.Tensor = compute_u(
File "/home/yangwanli/EasyEdit/easyeditor/models/rome/compute_u.py", line 112, in compute_u
u = get_inv_cov(
File "/home/yangwanli/EasyEdit/easyeditor/models/rome/compute_u.py", line 42, in get_inv_cov
stat = layer_stats(
File "/home/yangwanli/EasyEdit/easyeditor/models/rome/layer_stats.py", line 148, in layer_stats
loader = tally(
File "/home/yangwanli/EasyEdit/easyeditor/util/runningstats.py", line 104, in tally
cached_state = load_cached_state(cache, args, quiet=quiet)
File "/home/yangwanli/EasyEdit/easyeditor/util/runningstats.py", line 1485, in load_cached_state
print("%s %s changed from %s to %s" % (cachefile, a, dat[a], v))
File "/home/yangwanli/anaconda3/envs/EasyEdit/lib/python3.9/site-packages/numpy/lib/npyio.py", line 249, in getitem
raise KeyError("%s is not a file in the archive" % key)
KeyError: 'sample_size is not a file in the archive'

I found the bug comes from File "/home/yangwanli/EasyEdit/easyeditor/util/runningstats.py", line 1485, in load_cached_state:

if a not in dat or dat[a] != v:
    if not quiet:
        print("%s %s changed from %s to %s" % (cachefile, a, dat[a], v))
    return None

a was not in dat, but dat[a] is still used, so it raises the KeyError.
But I don't know how to fix it. I tried commenting out the print statement, but that raised other errors.
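
For what it's worth, the KeyError itself can be avoided by guarding the dict access inside that print, something like the sketch below (this only silences the message; the cache mismatch that triggers this branch is presumably the real problem):

if a not in dat or dat[a] != v:
    if not quiet:
        print("%s %s changed from %s to %s"
              % (cachefile, a, (dat[a] if a in dat else "<missing>"), v))
    return None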

data issues

Hello, I am confused about some files in EasyEdit.

  1. The file downloaded from the link provided in README.md is called "data", and the file downloaded from examples is called "editing-data". What is the difference between these two files?

  2. Both the README in the repo and the README in examples mention placing data in the "./data" folder and the model in the "./hugging_cache" folder. Do I need to create these folders separately in both the repo and examples directories?

  3. Should the entry file be edit.py or examples/run_zsre_llama2.py, and what is the difference?

  4. It would be helpful if you could provide an overview and explanation of the file structure of the entire repo, similar to the following structure.

editing-data
├── counterfact
│   ├── counterfact-original-edit.json
│   ├── counterfact-original-train.json
│   └── counterfact-original-val.json
├── locality
│   ├── Commonsense Task
│   │   ├── piqa_valid-labels.lst
│   │   └── piqa_valid.jsonl
│   ├── Distracting Neighbor
│   │   └── counterfact_distracting_neighbor.json
│   └── Other Attribution
│       └── counterfact_other_attribution.json
├── portability
│   ├── Inverse Relation
│   │   └── zsre_inverse_relation.json
│   ├── One Hop
│   │   ├── counterfact_portability_gpt4.json
│   │   └── zsre_mend_eval_portability_gpt4.json
│   └── Subject Replace
│       ├── counterfact_subject_replace.json
│       └── zsre_subject_replace.json
└── zsre
    ├── zsre_mend_eval.json
    └── zsre_mend_train_10000.json

Thank you very much~

Any batch size to recommend for MEMIT batch edit?

Hello, I'm using MEMIT to edit llama-7b and llama-13b models with thousands of edits.

If I use batch size = 1 for batch_edit(), then after a few hundred edits the model parameters are completely destroyed.

So, is there any batch size that you recommend?

L1-distance between original and edited model weights is 0!!

Hi,

I am wondering whether the L1 distance between the original and edited model weights can really be 0. If the edited and original weights are identical, how is the editing technique working?

Steps done:

  1. Ran GPT-2XL with ROME + zSRE. The code is basically run_zsre_llama2.py with the change specified in #44
  2. At the end, saved the edited model via edited_model.save_pretrained('gpt2xl_rome_zsre_edited')
  3. Have a small script which takes the original and edited model weight paths and computes the L1 distance.
from transformers import AutoModelForCausalLM
import torch

def compute_l1_difference(original_model_path, edited_model_path):
    # Load the original model and the edited model
    original_model = AutoModelForCausalLM.from_pretrained(original_model_path)
    edited_model = AutoModelForCausalLM.from_pretrained(edited_model_path)

    # Get the model parameters as dictionaries
    original_params = original_model.state_dict()
    edited_params = edited_model.state_dict()

    # Compute the L1 difference between the model weights
    l1_difference = 0
    for name, original_param in original_params.items():
        if name in edited_params:
            edited_param = edited_params[name]
            l1_difference += torch.norm(original_param - edited_param, p=1).item()

    return l1_difference

# Example usage:
original_model_path = "xxx"
edited_model_path = "yyy"

l1_diff = compute_l1_difference(original_model_path, edited_model_path)
print(f"L1 Difference between models: {l1_diff}")

I am getting 0, which I don't think makes sense. So, I am assuming I am missing something!

Possible root causes

  1. In the editing script: I'm not sure whether I am actually saving the edited model, but I assume so.
  2. In the L1-distance computation (the script above), but it looks fine to me. (A narrower single-layer check is sketched below.)
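
To narrow this down, I plan to compare only the weight that ROME actually rewrites instead of summing over every parameter. As far as I know, layer 17 / mlp.c_proj is the stock gpt2-xl target in the ROME hparams, and the paths below are placeholders:

from transformers import AutoModelForCausalLM

LAYER_KEY = "transformer.h.17.mlp.c_proj.weight"   # the single matrix ROME rewrites for gpt2-xl

orig = AutoModelForCausalLM.from_pretrained("gpt2-xl").state_dict()[LAYER_KEY]
edit = AutoModelForCausalLM.from_pretrained("gpt2xl_rome_zsre_edited").state_dict()[LAYER_KEY]
print("max |delta| on the edited layer:", (orig - edit).abs().max().item())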

Please help in case you find something improper or a potential bug! Happy to look into it in detail.

Thanks

Use two GPU

Hi, I'm trying to run this with 2x T4 GPUs on Kaggle but came across this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! I changed self.model = LlamaForCausalLM.from_pretrained(self.model_name) by adding device_map="auto".

What should I do?

How to perform Batch Editing using Easyedit?

Thanks for your contribution !
I have a question about the EasyEdit framework. When I edit multiple facts on the base model using editor.edit(), the run still appears to apply the edits sequentially. How do I perform a batch edit on the base model (is batch_edit(), as sketched below, the intended way)? Does batch editing with MEND and MEMIT take less time than sequential editing?
Looking forward to your reply! It's really a great project.
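
This is what I am currently guessing the batch call looks like (a sketch only; I am assuming batch_edit() takes the same keyword arguments as edit(), which may not be the case, and that batch_size in the MEMIT YAML controls how many facts go into one update):

from easyeditor import MEMITHyperParams, BaseEditor

hparams = MEMITHyperParams.from_hparams('./hparams/MEMIT/llama-7b')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.batch_edit(
    prompts=['Ray Charles, the', 'Grant Hill is a professional'],
    ground_truth=['piano', 'basketball'],
    target_new=['violin', 'soccer'],
    subject=['Ray Charles', 'Grant Hill'],
    keep_original_weight=False
)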

GPT2-XL and GPT-J evaluating with ZsRE

Hello,

I appreciate the detailed and excellent work.

It would be really helpful if you could tell me whether we can evaluate editing with the ZsRE dataset on GPT2-XL, GPT-J, or any model other than Llama-2 (as given in the examples folder), and how to do that.

Thanks in advance!

does not appear to have a file named config.json

The command executed:
python run_zsre_llama2.py --editing_method=IKE --hparams_dir=../hparams/IKE/llama-7b --data_dir=/home/chenhw/llm/EasyEdit-main/Data/zsre

It reports that there is no config.json file, and the llama2-7b model repository indeed does not seem to contain one. How should I resolve this?

Performance of editing operations

Thank you for sharing your excellent work. I would like to know what GPU configuration SERAC, ROME, and MEND each need when editing thousands of samples, and how much time that would take.

Question about pretraining the meta networks of MEND

Hi, thanks for your great contributions!
I'm trying to use MEND for editing. But when I pretrained the meta-networks of MEND following the "Trainer" example in the README, the program exited with the following output:

08/24/2023 12:07:07 - INFO - easyeditor.trainer.EditTrainer - Step 19009/19009 edit: 0.00002 acc_pre: 0.00365 acc_post: 0.00034 acc_delta: 0.00330 it_time: 0.3682
08/24/2023 12:07:07 - INFO - easyeditor.trainer.BaseTrainer - Step 0:
08/24/2023 12:07:07 - INFO - easyeditor.trainer.BaseTrainer - loss/edit_val : nan
loss/loc_val : nan
edit/acc_val : 0.00002
edit/log_prob_val : nan
edit/prob_val : nan
acc/pre_val : 0.00365
acc/post_val : 0.00034
nll/pre_val : 11.29517
perplexity/pre_val : 80432.48438
nll/post_val : nan
perplexity/post_val : nan
n_tokens/pre_val : 4.98143
n_tokens/post_val : 4.98143
time/edit_val : 0.25354
loss/total_val : nan
loss/total_edit_val : nan
memory/alloc_max_val: 14272471495.30265
memory/res_max_val : 14786047237.42732
eval_time/elapsed : 6998.66469
eval_time/average : 0.36818
08/24/2023 12:07:07 - INFO - easyeditor.trainer.BaseTrainer - Wrote results to:
08/24/2023 12:07:07 - INFO - easyeditor.trainer.BaseTrainer - ./results/results.json
GPTTokenizer Detected, Set pad token id and left padding!!!
GPTTokenizer Detected, Set pad token id and left padding!!!

I don't know why the program didn't save the model, or how to fix this. Looking forward to your kind help!

Tutorial on injecting or adding new knowledge

I read your README.md and there is no section about injecting knowledge except the line "2023-9-21 The EasyEdit have supported Parameter-Efficient Fine-Tuning through AdaLoRA to inject knowledge into the LLM." and the LoRA part in the examples folder. Can you provide more details on this?

Thank you

What should be the archive value for MEND?

I'm trying to load an HF model and edit it using MEND, but I'm getting the following error:

No such file or directory: './results/models/MEND/llama-7b'

This is related to the archive value in the yaml file. What should be given here when using HF models? I tried giving the HF model name as well.

Here is my sample code:


from easyeditor import ROMEHyperParams, BaseEditor, MENDHyperParams

# hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b')
hparams = MENDHyperParams.from_hparams('./hparams/MEND/llama-7b')


## edit descriptor: prompt that you want to edit
prompts = [
    'What university did Watts Humphrey attend?',
    'Which family does Ramalinaceae belong to',
    'What role does Denny Herzig play in football?'
]
## You can set `ground_truth` to None !!!(or set to original output)
ground_truth = ['Illinois Institute of Technology', 'Lecanorales', 'defender']
## edit target: expected output
target_new = ['University of Michigan', 'Lamiinae', 'winger']

locality_inputs = {
    'neighborhood':{
        'prompt': ['Joseph Fischhof, the', 'Larry Bird is a professional', 'In Forssa, they understand'],
        'ground_truth': ['piano', 'basketball', 'Finnish']
    },
    'distracting': {
        'prompt': ['Ray Charles, the violin Hauschka plays the instrument', 'Grant Hill is a professional soccer Magic Johnson is a professional', 'The law in Ikaalinen declares the language Swedish In Loviisa, the language spoken is'],
        'ground_truth': ['piano', 'basketball', 'Finnish']
    }
}


editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    ground_truth=ground_truth,
    target_new=target_new,
    locality_inputs=locality_inputs,
    keep_original_weight=True
)

Llama2 ROME index out of bound

Hi,

Thanks for your excellent work! I am trying to run ROME on LLaMA2-7b. My checkpoint was requested via the Meta form and converted to HF format using this code. (I also tried downloading the checkpoint from Hugging Face directly, but the same error occurs.) I followed your instructions and copied zsre_mend_eval_portability_gpt4.json to the right path. When I run the code, I hit this issue:

Traceback (most recent call last):
File "/home/linzihao/memory-editing/EasyEdit/scripts/../examples/run_zsre_llama2.py", line 78, in
metrics, edited_model, _ = editor.edit(
File "/home/linzihao/memory-editing/EasyEdit/scripts/../easyeditor/editors/editor.py", line 199, in edit
"pre": compute_edit_quality(self.model, self.model_name, self.hparams, self.tok, request,
File "/home/linzihao/memory-editing/EasyEdit/scripts/../easyeditor/evaluate/evaluate.py", line 70, in compute_edit_quality
compute_portability_quality(model, model_name, hparams, tok, portability_key,
File "/home/linzihao/memory-editing/EasyEdit/scripts/../easyeditor/evaluate/portability_evaluate.py", line 24, in compute_portability_quality
portability_correct = test_prediction_acc(model, tok, hparams, prompt, ground_truth, device)
File "/home/linzihao/memory-editing/EasyEdit/scripts/../easyeditor/evaluate/evaluate_utils.py", line 109, in test_prediction_acc
if isinstance(answers[0], list):
IndexError: list index out of range

It happens when evaluating the 17th request. The exact item in zsre_mend_eval_portability_gpt4.json is:

{
    "subject": "USS Leedstown (APA-56)",
    "src": "Which corporation created USS Leedstown (APA-56)?",
    "pred": "Lockheed Shipbuilding and Construction Company",
    "rephrase": "Which company was produced by USS Leedstown (APA-56)?",
    "alt": "Arleigh Burke-class aircraft carrier",
    "answers": [
        "Bethlehem Steel"
    ],
    "loc": "nq question: i was a great islamic scholar and mathematician who died in 1131 ce",
    "loc_ans": "Omar Khayyam",
    "cond": "Lockheed Shipbuilding and Construction Company >> Arleigh Burke-class aircraft carrier || Which corporation created USS Leedstown (APA-56)?",
    "portability": {
        "Recalled Relation": "(Arleigh Burke-class, primary user, United States Navy)",
        "New Question": "Which organization is the primary user of the class that USS Leedstown (APA-56) belongs to?",
        "New Answer": "United States Navy"
    }
},

The exact reason is that prompt_len is 30 while the generated output from LLaMA2 also has length 30. After line 105 in easyeditor/evaluate/evaluate_utils.py, answers becomes [], which causes the IndexError when answers[0] is accessed. I added a try ... except block (its effect is sketched below) and found that more than ten data items hit this problem.
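
A minimal reconstruction of the failure mode, plus the kind of guard that is equivalent in effect to my try/except (this is not the EasyEdit code, just an illustration):

prompt_len = 30
output_ids = list(range(prompt_len))   # generation stopped exactly at the prompt length
answers = output_ids[prompt_len:]      # -> [] when nothing was generated beyond the prompt

if not answers:
    acc = 0.0                          # count such items as incorrect instead of crashing
else:
    acc = float(answers[0] == 0)       # placeholder for the real token comparison
print(acc)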

I wonder whether you have faced the same problem, and how you fixed it. Thanks!

Question about the Llama-2 example

Hello! Regarding the Llama-2 example, my understanding is that the original ZsRE dataset is first used for editing, and then locality or portability is tested. However, your code uses only the single file zsre_mend_eval_portability_gpt4.json, and I don't quite understand how the data is organized. How can I reproduce the results in Table 2 of your paper "Editing Large Language Models: Problems, Methods, and Opportunities"? Thank you.

[Feature Request] Support InternLM

Dear EasyEdit developer,

Greetings! I am vansinhu, a community developer and volunteer at InternLM. Your work has been immensely beneficial to me, and I believe it can be effectively utilized in InternLM as well. You are welcome to join our Discord: https://discord.gg/gF9ezcmtM3. I hope to get in touch with you.

Best regards,
vansinhu

Some questions about sequential editing

First of all, thank you very much for your great work. In my opinion, the sequential editing mode of knowledge editing is more valuable for practical applications.

I have a few questions about sequential editing that I'd like answered:
(1) I see that single editing is implemented in the source code. If I want to implement sequential editing, do I only need to assign the edited_model in the following code to self.model? (My guess is sketched below.)
[screenshot of the editor.edit() call in editor.py]
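
This is the loop I have in mind (a sketch only; ROME is just an example, and whether reassigning editor.model is sufficient is exactly question (1)):

from easyeditor import BaseEditor, ROMEHyperParams

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)

requests = [
    ('Ray Charles, the', 'piano', 'violin', 'Ray Charles'),
    ('Grant Hill is a professional', 'basketball', 'soccer', 'Grant Hill'),
]

for prompt, gt, tgt, subj in requests:
    metrics, edited_model, _ = editor.edit(
        prompts=[prompt],
        ground_truth=[gt],
        target_new=[tgt],
        subject=[subj],
        keep_original_weight=False,   # keep each edit so the next one stacks on top of it
    )
    editor.model = edited_model       # question (1): is this assignment all that is needed?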

(2) Should the calculation of metrics for sequential editing be performed after all editing is completed?

Very much looking forward to your reply, thank you!

Issue while running with device='auto'

Hi,

I'm trying to run GPT-J 6B with ROME and here's what I've done.

  1. Changed the model_name in YAML file in hparams/ROME/
  2. In edit.py, uncommented test_ROME_GPTJ()
  3. In easyeditor/editors/editor.py, added self.model = AutoModelForCausalLM.from_pretrained(self.model_name, device_map='auto')

When I run python edit.py, I get the following traceback:

Traceback (most recent call last):
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/EasyEdit/edit.py", line 1860, in <module>
    main()
  File "/data/EasyEdit/edit.py", line 1817, in main
    test_ROME_GPTJ()
  File "/data/EasyEdit/edit.py", line 772, in test_ROME_GPTJ
    editor = BaseEditor.from_hparams(hparams)
  File "/data/EasyEdit/easyeditor/editors/editor.py", line 44, in from_hparams
    return cls(hparams)
  File "/data/EasyEdit/easyeditor/editors/editor.py", line 67, in __init__
    self.model = AutoModelForCausalLM.from_pretrained(self.model_name, device_map='auto')
...
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. 

I have 8 GPUs, and whether I set device: 0 or device: 4, I still face the same issue.

Basically, I want to run on multiple GPUs, as I believe ROME with GPT-J needs around 60-80 GB. Please let me know how this can be done!

Thanks

RuntimeError: Invalid device string: 'cuda:0,1'

When I run the example MEMIT_llama.ipynb with llama-7b.yaml containing:
device: 0,1
I get the error:
RuntimeError: Invalid device string: 'cuda:0,1'
Does EasyEdit support multi-GPU training? I only have 3090 24G cards, so I am really looking forward to this support.
Thank you.

Issues while running the repo

Hi folks,

Thanks for building this amazing repo; it's super concise and easy to set up.

However, when I run python edit.py with test_MEND_T5() after downloading the zsRE and CounterFact datasets and changing the t5-3B model path in the YAML file, I get this error:

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds

Request for a synchronized Gitee mirror

This is the first truly comprehensive, detailed, and well-crafted knowledge-editing framework for large models that I have seen.
Moreover, you have kept updating and optimizing it ever since its release.
Thank you very much for your hard work and outstanding contribution.

Your updates are very timely and diligent, but proxies in mainland China are unstable, which makes syncing very inconvenient.
So I hope you can provide a synchronized mirror on Gitee.
Many thanks.
