
Comments (5)

sunyt32 avatar sunyt32 commented on May 21, 2024

What's your extrapolation setting? Is it identical to the one in our paper? You might first try window attention, which is much easier to implement, to check the performance.

from torchscale.

sunyt32 avatar sunyt32 commented on May 21, 2024

If you use it in the LongEval setting, I don't think it will work for retrieving very long topics. Local techniques preserve local modeling, which keeps perplexity (ppl) more stable.
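(For concreteness: window attention just restricts each query to the most recent w keys, so it keeps ppl stable but cannot retrieve content further back than w tokens. A minimal numpy sketch of the mask; `sliding_window_causal_mask` is a hypothetical name, not the torchscale implementation:)

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: query i may attend to key j iff i - window < j <= i.

    Anything older than `window` tokens is invisible to the query, which
    is why retrieval over long distances fails even though ppl stays stable.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

mask = sliding_window_causal_mask(6, 3)
# each row (query) sees at most `window` keys, causally
```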


RulinShao avatar RulinShao commented on May 21, 2024

Thanks for the reply @sunyt32! I was actually using the rotary embedding as implemented in the LLaMA HF code. I only implemented BCA to help it extrapolate to a longer context. I ran a very simple test for debugging:

For example, I set the window size to w and left-padded my prompt to 2w (e.g., w = 16, 32, 128). (Do you think this is a reasonable case for debugging?) The LLaMA model worked well when I turned off BCA. With BCA, I expected it to generate reasonable answers following the prompt, but after three to five new tokens I got gibberish like 6.666666. I think this dummy case indicates there might be a bug in my code. So I'd appreciate any additional information that could help me check the expected outputs and intermediate tensors (like the k, v cache and the rotary positional embedding calculation with BCA) during generation. That would be super helpful!
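(For reference, as I understand BCA from the paper, each query attends causally to its own block and the previous block, so the relative distance fed to the rotary embedding stays below 2w. A minimal numpy sketch of the mask I'm trying to reproduce; `blockwise_causal_mask` is my own hypothetical helper, not the torchscale code:)

```python
import numpy as np

def blockwise_causal_mask(seq_len: int, block: int) -> np.ndarray:
    """Blockwise causal attention (BCA) mask sketch.

    Query i attends to key j iff j <= i and j lies in i's block or the
    immediately preceding block. The relative distance i - j is therefore
    bounded by 2*block - 1, so rotary phases never exceed those seen in
    training even when seq_len greatly exceeds the training length.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    same_or_prev_block = (i // block - j // block) <= 1
    return (j <= i) & same_or_prev_block
```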

Thanks a lot for your time!


sunyt32 avatar sunyt32 commented on May 21, 2024

I see. The reason here is similar: window attention doesn't actually give the model the ability to use a longer context. However, using BCA or window attention should not cause gibberish; a reasonable generation should at least be coherent.

I have to admit that long-context evaluation is much more reasonable nowadays... It's a mistake to concentrate only on ppl. Let's set aside the window-attention-style methods...

NTK extrapolation is a good technique for these tasks, but xPos still has its value. Our experiments show that xPos + NTK performs more stably than RoPE, on both ppl and retrieval.
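(The NTK-aware trick commonly used with RoPE just rescales the rotary base so that low-frequency dimensions get interpolated while high-frequency dimensions are largely preserved. A minimal sketch under that assumption; `ntk_rope_inv_freq` is a hypothetical helper, not an API from torchscale or HF:)

```python
import numpy as np

def ntk_rope_inv_freq(dim: int, scale: float, base: float = 10000.0) -> np.ndarray:
    """NTK-aware RoPE scaling sketch.

    Enlarges the rotary base by scale**(dim / (dim - 2)) so that, at a
    context `scale` times longer than training, the lowest frequency is
    stretched to cover the new length while the highest frequency (local
    positional detail) is barely changed.
    """
    ntk_base = base * scale ** (dim / (dim - 2))
    return 1.0 / ntk_base ** (np.arange(0, dim, 2) / dim)
```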


RulinShao avatar RulinShao commented on May 21, 2024

Gotcha! Thanks for the helpful advice! I'll try the other approach you suggested!
