
Comments (15)

BiEchi commented on May 24, 2024

Hi @jalammar,
Ecco is actually compatible with the current LLAMA 2 model because it's still causal. The config looks like this:

model_config = {
    'embedding': "model.embed_tokens",
    'type': 'causal',
    'activations': ['down_proj'],  # this is a regex
    'token_prefix': '_',
    'partial_token_prefix': ''
}
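
For reference, the 'embedding' entry is a dotted module path resolved against the Hugging Face model object. A quick sanity check of the paths, as a sketch (the checkpoint name is a placeholder for whichever LLAMA-2 variant you use):

import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint for illustration; any LLAMA-2 variant should work.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             torch_dtype=torch.float16)
embed = model.get_submodule("model.embed_tokens")  # matches 'embedding' above
print(type(embed))  # torch.nn.Embedding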

However, as the model becomes larger and larger, Ecco occupies a significant amount of GPU memory. I'd like to contribute some memory optimization options. Could you point me to where Ecco allocates GPU memory?

BiEchi commented on May 24, 2024

@Dongximing this result makes sense for an LLM, because saliency/integrated-gradients methods perform extremely badly on complex models. The reason is that they were not developed to interpret LLMs: when proposed, these methods were used on small models like CNNs and, at most, LSTMs. They were later applied to GPT-2, which is still not too complex. When it comes to LLAMA, backprop becomes extremely expensive, and the results become unreliable due to the linearity assumption behind saliency methods.
For other algorithms, I'll come back to you in several days.
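
For concreteness, this is the kind of computation these methods perform; a minimal input-times-gradient sketch on a generic causal LM (gpt2 as a small stand-in, not Ecco's internals):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The first president of the US is", return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)

# One backward pass per explained output token: here, the top next-token logit.
logits = model(inputs_embeds=embeds).logits[0, -1]
logits.max().backward()

# Input-times-gradient saliency per input token, summed over the embedding dim.
scores = (embeds * embeds.grad).abs().sum(-1).squeeze(0)
for t, s in zip(tok.convert_ids_to_tokens(ids[0]), scores.tolist()):
    print(f"{t:>12s}  {s:.4f}")

This is exactly where the cost explodes for larger models: the backward pass stores activations for every layer, and it has to be repeated for every generated token.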

verazuo commented on May 24, 2024

Thanks for this comment! I tried this model config with the patched ecco library from @BiEchi's repo, and it works well on Vicuna.

BiEchi commented on May 24, 2024

Hi @Dongximing, yes, large models are significantly more costly than small models because the backward pass is several times more computationally heavy. GPT-2 is 1.5B parameters, while LLAMA-2 is 7B.
When the input sequence grows, each output token has to be attributed over more input tokens, so memory usage increases. For the output sequence, memory and time obviously increase with its length, because one backprop pass is performed for each output token.
For other saliency methods, I'd suggest using LIME or SHAP as provided by Captum. You can write some code in Ecco to support them; I've got this working on my end using Ecco. If you're interested, please leave comments here. If this comment gets lots of likes, I'll try to clean up my code and release it.
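
As a rough illustration of what the Captum route looks like (a sketch under assumptions, not the released code; gpt2 stands in for a larger model, and the forward function here explains the top next-token logit):

import torch
from captum.attr import Lime
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
ids = tok("The first president of the US is", return_tensors="pt").input_ids

def forward_func(token_ids):
    # Score being explained: the maximum next-token logit.
    return model(token_ids).logits[:, -1, :].max(dim=-1).values

# Perturbation-based: masked tokens are replaced by a baseline id, no backprop.
baseline_id = tok.pad_token_id or tok.eos_token_id
lime = Lime(forward_func)
attr = lime.attribute(ids,
                      baselines=torch.full_like(ids, baseline_id),
                      n_samples=50)
print(list(zip(tok.convert_ids_to_tokens(ids[0]), attr[0].tolist())))

The appeal for LLMs is that this needs only forward passes, trading backprop memory for extra sampling time.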

jalammar commented on May 24, 2024

That would be valuable indeed, if you have the bandwidth for it! Sure!

EricPeter commented on May 24, 2024

Hi, I am trying to install ecco in Google Colab but am getting this error:

Collecting ecco

Using cached ecco-0.1.2-py2.py3-none-any.whl (70 kB)
Collecting transformers~=4.2 (from ecco)
Using cached transformers-4.31.0-py3-none-any.whl (7.4 MB)
Requirement already satisfied: seaborn~=0.11 in /usr/local/lib/python3.10/dist-packages (from ecco) (0.12.2)
Collecting scikit-learn~=0.23 (from ecco)
Using cached scikit-learn-0.24.2.tar.gz (7.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.  

I can't seem to find a way to fix it.

Dongximing commented on May 24, 2024

Hi, I was just wondering: should 'embedding' be "model.embed_tokens" or 'model.embed_tokens.weight'? Could you give me some sample code showing how you used it?

Thanks a lot!

BiEchi commented on May 24, 2024

@EricPeter Please open a separate issue for this.

BiEchi commented on May 24, 2024

@Dongximing


text = """The first presient of US is """

print("===== Attribution Method =====")
attribution_method = 'dl'
print(attribution_method)
tokenizer = AutoTokenizer.from_pretrained(model, torch_dtype=dtype)

model_config = {
    'embedding': "model.embed_tokens",
    'type': 'causal',
    'activations': ['down_proj'],
    'token_prefix': '_',
    'partial_token_prefix': ''
}

lm = ecco.from_pretrained(model,
                          activations=False,
                          model_config=model_config,
                          gpu=False
                          )
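
A hypothetical continuation for running the attribution itself, assuming the upstream Ecco API (generate() accepting an attribution list, and the output object exposing primary_attributions()); whether 'dl' (DeepLift) is available depends on the fork:

# Sketch only: generate a few tokens with attribution, then visualize.
output = lm.generate(text, generate=5, attribution=[attribution_method])
output.primary_attributions(attr_method=attribution_method)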

BiEchi commented on May 24, 2024

Hi @jalammar,
I'm closing this issue as I've reached some conclusive results. Gradient-based saliency methods are computationally heavy for LLM generation, because each output token takes a large amount of GPU memory for the backprop pass, and the time cost is also strikingly high.
The quality of gradient-based saliency methods is also extremely poor on LLMs, because these methods assume the model is well approximated by its first-order Taylor expansion, i.e., by a linear (affine) function. Thus, as the model becomes more complex and less linear, the results of gradient-based methods degrade significantly. I've tested the naive gradient, integrated gradients, and input-times-gradient methods, and none of them perform well on LLAMA-2 (7B), though the results are exceptionally good on GPT-2 (1.5B).
I'd therefore suggest that people following this thread give up applying gradient-based methods to LLAMA-2. Perturbation-based methods may make a difference, but I'll open a different thread to discuss them.
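
For reference, the assumption being violated: a gradient saliency map treats the model f near the input x_0 as its first-order Taylor expansion,

f(x) \approx f(x_0) + \nabla f(x_0)^\top (x - x_0),

so the attribution for each input token is just a component of \nabla f(x_0) (optionally multiplied elementwise by the input). The further f is from affine around x_0, as in a 7B-parameter LLAMA-2, the less this local linear picture reflects the model's actual behavior.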

BiEchi commented on May 24, 2024

@verazuo Please correct me if you see any good results applying these methods to LLMs. I can reopen this issue if it still looks promising.

Dongximing commented on May 24, 2024

(quoting @BiEchi's closing comment above)

Hi @BiEchi,
The result is just a normal distribution; for example, for "please tell me whether this sentence is positive or negative: I love you. The answer is [model output]", I got the result below. Is there some benchmark to evaluate the performance of a model explanation, or what do you think of this result?
Thank you very much. BTW, since you said the linear assumption is not good, could you recommend some other algorithms in this ecco library?
[attached screenshot: attribution result]

Dongximing commented on May 24, 2024

Hi @BiEchi,
Is it possible that this causes out-of-memory errors when the output length increases, e.g., when max_new_tokens = 1000 or the input size is 1000? Also, do you know of other open-source tools for model explanation in NLP?
Thanks

Dongximing commented on May 24, 2024

If possible, please share it. Thanks!

jalammar commented on May 24, 2024

A new method that could work better with these models is Contrastive Explanations (https://arxiv.org/abs/2202.10419). You can try an implementation of it in Inseq: https://github.com/inseq-team/inseq
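
A minimal way to get started with Inseq, based on its README (gpt2 as a stand-in; the contrastive method is selected through Inseq's step functions, whose exact arguments vary by version):

import inseq

# Load any Hugging Face causal LM together with an attribution method.
model = inseq.load_model("gpt2", "saliency")

# Attribute a generation and render the result. For contrastive explanations,
# Inseq documents attributed_fn="contrast_prob_diff" with contrast targets.
out = model.attribute("The first president of the US is")
out.show()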
