
Comments (12)

nsarrazin commented on August 25, 2024

If you want a list of templates we've used in the past, check PROMPTS.md.

If you want to see the current HuggingChat prod config, it's .env.template.

Ideally, check whether the model you want has a tokenizer_config.json file on the Hub. If it does, you can just set "tokenizer": "namespace/on-the-hub" in your config and it should pick up the chat template automatically. See .env.template for some examples.


nsarrazin commented on August 25, 2024

At the end of the day, use what works for you 🤗 We support both: you can write a custom prompt template with chatPromptTemplate, but for easy setup it's sometimes nicer to get the chat template directly from the tokenizer.


mlim15 commented on August 25, 2024

This seems to be working-ish, based on what I've seen passed around elsewhere (e.g. ollama's prompt template or the sample provided on the llama.cpp pull request):

{
    "name": "Llama 3",
    "preprompt": "This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.",
    "chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
    "parameters": {
        (...snip...)
        "stop": ["<|end_of_text|>", "<|eot_id|>"] // Verify that this is correct.
    },
    (...snip...)
}

There are still some issues I'm running into with the response not ending (ollama/ollama#3759) and the stop button not working (#890). That's probably related to the specific values I've set as "stop" in the definition above, as well as to the tokenizer config when the model is converted to GGUF (if you do that). Apparently you can edit the tokenizer config JSON to fix some of these issues; see the ongoing discussions about Llama 3's stop tokens: ggerganov/llama.cpp#6770, ggerganov/llama.cpp#6745 (comment), ggerganov/llama.cpp#6751 (comment).
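For what it's worth, the fix that comes up most often in those threads is to make <|eot_id|> the end-of-sequence token in the converted model's tokenizer_config.json. Roughly (verify against your own files, since the exact fields can differ between conversions):

    {
        (...snip...)
        "eos_token": "<|eot_id|>" // originally "<|end_of_text|>", which the instruct model rarely emits
    }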


IAINATDBI commented on August 25, 2024

Thank you @mlim15, that worked just fine. I spun up the 70B Instruct model and it appears to stop when intended. I do see some special tokens (the start and end header tokens) streamed at the beginning, but those are tidied up at the end of streaming. That's maybe the chat-ui code rather than the model.


iChristGit commented on August 25, 2024

> This seems to be working-ish, based on what I've seen passed around elsewhere […]

> Thank you @mlim15, that worked just fine. […]

Hello!
I tried using the recommended template you provided, but the responses never stop, and the LLM won't choose a topic for the conversation (no title, just "New Chat").
Can you link the whole .env.local?


iChristGit commented on August 25, 2024

Also, I am using text-generation-webui, do you use the same?
Edit:
I was using the original Meta fp16 model; now that I'm generating with the GGUF version, it works fine!


BlueskyFR commented on August 25, 2024

Was anyone able to make it work?


nsarrazin commented on August 25, 2024

In prod for HuggingChat this is what we use:

    "tokenizer" : "philschmid/meta-llama-3-tokenizer",
    "parameters": {
      "stop": ["<|eot_id|>"]
    }

chat-ui supports using the template stored in the tokenizer config, so that should work. Let me know if it doesn't; maybe there's some endpoint-specific thing going on.


iChristGit commented on August 25, 2024

> In prod for HuggingChat this is what we use: […]

> chat-ui supports using the template stored in the tokenizer config, so that should work. […]

"name": "Llama-3",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}} {{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}} <|eot_id|>{{/ifAssistant}}{{/each}}",
"preprompt": "This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.",

"stop": ["<|end_of_text|>", "<|eot_id|>"]

Im using this config, if I want to use "tokenizer" : "philschmid/meta-llama-3-tokenizer", should I remove chatPromptTemplate and Preprompt ?


nsarrazin commented on August 25, 2024

You can keep preprompt, but you should get rid of the chatPromptTemplate, yes!
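So the entry would end up looking something like this (a sketch assembled from your snippets above, other fields omitted):

    {
        "name": "Llama-3",
        "tokenizer": "philschmid/meta-llama-3-tokenizer",
        "preprompt": "This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.",
        "parameters": {
            "stop": ["<|eot_id|>"]
        }
    }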


iChristGit commented on August 25, 2024

> You can keep preprompt, but you should get rid of the chatPromptTemplate, yes!

I'll try that, although the current config works flawlessly!
Thank you


BlueskyFR commented on August 25, 2024

@nsarrazin thanks for the answer, I'll try it soon!
Though, is there a place where we could find all the model configs for our .env.local? For instance, could we get the list you use in production? That would be easier IMO.

