InfernalDread commented on July 16, 2024

Also, one last thing: I got a timeout error from the chat UI cmd, but the response still showed up in the gpt-llama.cpp cmd, so this issue is resolved. It is weird that I got a timeout error from the chat UI, though.

EDIT: And now Chat UI is working fine... I am confused, but happily confused.

Thank you for all of your help!

InfernalDread commented on July 16, 2024

Oh, no problem. So, I went to the main llama.cpp repo and clicked on the releases button. Then I downloaded the avx zip file that doesn't have a number attached to it (it should be the first file in the assets). I unzipped it and found all the .exe files that you need (including, of course, the "main.exe" file). Then I moved them to a separate folder named "llama.cpp" and created a "models" folder, where I put my folder with the ".bin" file in it. If you need more clarification or a visual representation, feel free to ask!
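
For reference, the layout described above ends up looking roughly like this (the vicuna_13b folder name is just an example from later in this thread, not a llama.cpp requirement):

  llama.cpp\
    main.exe
    quantize.exe
    (the other release .exe files)
    models\
      vicuna_13b\
        ggml-model-q4_0.bin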

InfernalDread commented on July 16, 2024

are you running on cmd, powershell or git bash or something else?

I am running fully on CMD, using a "venv" Python virtual environment.

InfernalDread commented on July 16, 2024

got it! thank you :) also do you think if i made a discord channel it'd be helpful?

I'd think so; that way more people can either contribute or ask questions without bloating the repo, but it's up to you, of course! I'm just glad I can be of assistance!

keldenl commented on July 16, 2024

@Vitorhsantos i think you're missing a / in your path, it should start with C:// right?

also @InfernalDread and @Vitorhsantos , i just made a discord channel that hopefully avoids a 50 reply long thread on GitHub (but hopefully we'll still have actual issues posted here too): https://discord.gg/aWHBQnJaFC come through!

keldenl commented on July 16, 2024

o crap it must be the front and back slashes i have hardcoded.. let me make a change..

InfernalDread commented on July 16, 2024

Something interesting was that the code would add "/llama.cpp/main" after my path when specifying the .bin file in the llama.cpp folder. I am not sure if it has anything to do with that, though.

InfernalDread commented on July 16, 2024

oh I see, lol, similar thought processes

InfernalDread commented on July 16, 2024

I think the hardcoded values are in the "utils.js" file, near the end of the code. I am assuming that you would have to remove them and include a variable that takes the path the user input?
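
For context, here is a minimal sketch of the kind of change being discussed, using Node's built-in path module instead of hardcoded slashes (illustrative only; the actual utils.js may look different):

  // Illustrative sketch, not the actual gpt-llama.cpp source.
  import path from 'path';

  // On Windows, normalize() rewrites forward slashes to backslashes,
  // so a user-supplied path works no matter which way it was typed:
  const scriptPath = path.normalize('C:/Users/foo/llama.cpp/main');
  // scriptPath === 'C:\\Users\\foo\\llama.cpp\\main' when run on Windows

path.join() and path.sep cover the same ground when a path has to be assembled from pieces.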

keldenl commented on July 16, 2024

sorry i'll push out a fix tonight that supports both windows and mac (for front and backslashes). for now, u can try replacing all the front slashes with backslashes and lmk if that works?

InfernalDread commented on July 16, 2024

oh ok, I can try that, hopefully it works lol

brandonvessel commented on July 16, 2024

Stumbled upon this issue as well. Will also need to change scriptPath on line 80 in chatRoutes.js

InfernalDread commented on July 16, 2024

yup, made note of that just now, I will try to make the changes and see if it works

keldenl commented on July 16, 2024

hey @brandonvessel @InfernalDread , i just pushed a change that should "fix" \ and / in paths for windows and mac in 3055f5e

do a fresh pull and lmk if it works! (i haven't published it to npm yet pending this test)

brandonvessel commented on July 16, 2024

Not sure if this is on my end. Getting the following error:

  const readable = new ReadableStream({
                   ^

ReferenceError: ReadableStream is not defined
    at file:///E:/Projects/_AI/gpt-llama.cpp/routes/chatRoutes.js:164:20
    at Layer.handle [as handle_request] (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\layer.js:95:5)
    at next (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\route.js:144:13)
    at Route.dispatch (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\route.js:114:3)
    at Layer.handle [as handle_request] (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\layer.js:95:5)
    at E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\index.js:284:15
    at Function.process_params (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\index.js:346:12)
    at next (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\index.js:280:10)
    at Function.handle (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\index.js:175:3)
    at router (E:\Projects\_AI\gpt-llama.cpp\node_modules\express\lib\router\index.js:47:12)

keldenl commented on July 16, 2024

hmm could you run npm i again?

brandonvessel commented on July 16, 2024

No updates needed. npm version 9.6.5 if that helps. It did print the --LLAMA.CPP SPAWNED-- log message and the raw query information, but errors out on trying to read the response.

keldenl commented on July 16, 2024

this is actually probably related to ReadableStream only being supported on Node 16+. can you double check your node version? you can check this by doing node -v. i'm on v19.8.1
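
For anyone stuck below 18: ReadableStream only became a global in Node 18, but the class has been exported from the (then experimental) node:stream/web module since 16.5, so a shim like the following sketch can work in an ES module. Upgrading Node is still the cleaner fix:

  // Compatibility shim sketch; assumes ES modules (top-level await).
  if (typeof ReadableStream === 'undefined') {
    const { ReadableStream } = await import('node:stream/web');
    globalThis.ReadableStream = ReadableStream;
  }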

brandonvessel commented on July 16, 2024

v16.14.0

brandonvessel commented on July 16, 2024

Going to upgrade node and see if that fixes it

keldenl commented on July 16, 2024

[Screenshot, 2023-04-19 at 5:54:41 PM]

actually i lied @brandonvessel, it requires v18+. upgrading should do the trick

source

InfernalDread commented on July 16, 2024

I am still getting this error (also, yes, I changed the port to 8080):

(venv) C:\Users\Mike's PC\Documents\transfer_to_external_storage\gpt_llama_cpp\gpt-llama.cpp>npm start

[email protected] start
node index.js

Server is listening on:

  • localhost:8080
  • 192.168.0.33:8080 (for other devices on the same network)

--LLAMA.CPP SPAWNED--
C:\Users\Mike's\llama.cpp\main -m C:\Users\Mike's --temp 0 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p ### Instructions

Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

### Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.

### Response

user: hello
assistant:

--REQUEST--
user: hello
node:events:515
throw er; // Unhandled 'error' event
^

Error: spawn C:\Users\Mike's\llama.cpp\main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: "spawn C:\Users\Mike's\llama.cpp\main",
path: "C:\Users\Mike's\llama.cpp\main",
spawnargs: [
'-m',
"C:\Users\Mike's",
'--temp',
0,
'--n_predict',
1000,
'--top_p',
'0.1',
'--top_k',
'40',
'-b',
'512',
'-c',
'2048',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'##',
'--reverse-prompt',
'\n##',
'--reverse-prompt',
'###',
'-i',
'-p',
'### Instructions\n' +
'Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.\n' +
'\n' +
'### Inputs\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
"system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.\n" +
'\n' +
'### Response\n' +
'user: hello\n' +
'assistant:'
]
}

Node.js v18.4.0

(venv) C:\Users\Mike's PC\Documents\transfer_to_external_storage\gpt_llama_cpp\gpt-llama.cpp>

keldenl commented on July 16, 2024

i'm going to add a minimum node version of 18 if that fixes it. unless something else comes up that forces backwards compatibility with the experimental 16.5-17.5 versions, i'm going to stick with that
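
For reference, the usual way to declare that is the engines field in package.json, sketched below. Note that npm only warns on a mismatch unless engine-strict=true is set in .npmrc:

  {
    "engines": {
      "node": ">=18"
    }
  }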

brandonvessel commented on July 16, 2024

@keldenl That fixed it! Looks like my instance is working as expected now

keldenl commented on July 16, 2024

@InfernalDread does running

C:\Users\Mike's\llama.cpp\main -m C:\Users\Mike's --temp 0 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 -p "the sky is"

work for you?

update: nvm, i don't see your model path at all, C:\Users\Mike's\llama.cpp\main -m C:\Users\Mike's isn't right. it should show your model path

InfernalDread commented on July 16, 2024

it says path cannot be specified. I believe I correctly authorized the model. Let me check again

keldenl commented on July 16, 2024

mine looks like this: ../llama.cpp/main -m ../llama.cpp/models/vicuna/13B/ggml-vicuna-unfiltered-13b-4bit.bin --temp 1 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942

can you paste the api key you provided for your model?

InfernalDread commented on July 16, 2024

C:\Users\Mike's PC\Documents\transfer_to_external_storage\llama_cpp\3\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin

EDIT: do I have to create a specific directory to match something like yours?

keldenl commented on July 16, 2024

ahhhh.. your llama.cpp path looks vastly different from what i expected

i believe the standard setup should be

\llama.cpp\models\ggml-model-q4_0.bin

the problematic part of your path is that i rely on

  • llama.cpp being the project folder name
  • models being stored in llama.cpp\models\<HERE> helps, but not required. it relies on llama.cpp being in the path tho

any chance you could tweak your path and folder naming to fix that?
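
A hedged sketch of that convention (inferred from the description above, not copied from the actual source): the server appears to recover the llama.cpp folder by searching for the literal folder name inside the model path, which is why the project folder must be called exactly llama.cpp:

  // Sketch of the assumed path convention, not the actual gpt-llama.cpp code.
  const modelPath = 'C:\\Users\\you\\llama.cpp\\models\\ggml-model-q4_0.bin';
  const marker = 'llama.cpp';
  const idx = modelPath.indexOf(marker);
  if (idx === -1) throw new Error('expected a llama.cpp folder in the model path');
  // everything up to and including 'llama.cpp' is the project folder
  const llamaDir = modelPath.slice(0, idx + marker.length);
  const mainBinary = llamaDir + '\\main'; // main.exe on the Windows builds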

InfernalDread commented on July 16, 2024

oh ya, of course I can, let me get on that right now!

InfernalDread commented on July 16, 2024

all the other ".exe" files like "main.exe" should also be there, right?

keldenl commented on July 16, 2024

did you have to include .exe when getting it working with llama.cpp? what's an example command you run in cmd to get llama.cpp working (not gpt-llama.cpp)?

InfernalDread commented on July 16, 2024

I actually do not remember if I had to; I haven't used llama.cpp in a while, but I know that it worked.

InfernalDread commented on July 16, 2024

I am now using this as the model path:

C:\Users\Mike's PC\Documents\llama.cpp\models\ggml-model-q4_0.bin

will this work better?

EDIT: I forgot to put the ".bin" in a folder inside the models folder; does that matter?

InfernalDread commented on July 16, 2024

I think it would be best for me to try and reinstall everything over again

InfernalDread commented on July 16, 2024

Interesting. Even after a completely fresh install, using this path:

C:\Users\Mike's PC\Documents\llama.cpp\models\vicuna_13b\ggml-model-q4_0.bin

I still get this error:

(venv) C:\Users\Mike's PC\Documents\transfer_to_external_storage\gpt_llama_cpp\gpt-llama.cpp>npm start

[email protected] start
node index.js

Server is listening on:

  • localhost:8080
  • 192.168.0.33:8080 (for other devices on the same network)

--LLAMA.CPP SPAWNED--
C:\Users\Mike's\llama.cpp\main -m C:\Users\Mike's --temp 0 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p ### Instructions

Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

### Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.

### Response

user: hello
assistant:

--REQUEST--
user: hello
node:events:515
throw er; // Unhandled 'error' event
^

Error: spawn C:\Users\Mike's\llama.cpp\main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: "spawn C:\Users\Mike's\llama.cpp\main",
path: "C:\Users\Mike's\llama.cpp\main",
spawnargs: [
'-m',
"C:\Users\Mike's",
'--temp',
0,
'--n_predict',
1000,
'--top_p',
'0.1',
'--top_k',
'40',
'-b',
'512',
'-c',
'2048',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'##',
'--reverse-prompt',
'\n##',
'--reverse-prompt',
'###',
'-i',
'-p',
'### Instructions\n' +
'Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.\n' +
'\n' +
'### Inputs\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
"system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.\n" +
'\n' +
'### Response\n' +
'user: hello\n' +
'assistant:'
]
}

Node.js v18.4.0

(venv) C:\Users\Mike's PC\Documents\transfer_to_external_storage\gpt_llama_cpp\gpt-llama.cpp>

InfernalDread commented on July 16, 2024

FINALLY. So apparently, this program does NOT like spaces in the path. I had to put it in a path without a single space. I know that this can happen at times with coding.

This is the final path:

C:\Users\Mike's\llama.cpp\models\vicuna_13b\ggml-model-q4_0.bin
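
For what it's worth, that symptom fits a specific failure mode: if a command is built as one big string and then split on spaces before being handed to spawn, every path gets truncated at its first space, which is exactly how C:\Users\Mike's PC became C:\Users\Mike's in the logs above. A hedged sketch of the problem and the usual fix, not the actual gpt-llama.cpp code:

  // Hypothetical sketch, not the actual gpt-llama.cpp source.
  import { spawn } from 'child_process';

  // Fragile: splitting a command string on spaces truncates the path
  // ("C:\Users\Mike's PC\..." becomes "C:\Users\Mike's").
  const cmd = "C:\\Users\\Mike's PC\\llama.cpp\\main -m C:\\Users\\Mike's PC\\model.bin";
  const [file, ...args] = cmd.split(' ');
  // spawn(file, args); // -> Error: spawn C:\Users\Mike's ENOENT

  // Robust: keep the executable and each argument as separate strings;
  // spawn() then handles embedded spaces without manual quoting.
  spawn("C:\\Users\\Mike's PC\\llama.cpp\\main",
        ['-m', "C:\\Users\\Mike's PC\\model.bin"]);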

InfernalDread commented on July 16, 2024

so you probably want to mention in the README file not to have ANY spaces in your path

keldenl commented on July 16, 2024

glad you figured it out! hopefully this thread will be useful for other folks if they run into similar issues – sorry i had to step away to make some dinner :/

regarding the spaces.. let me see if there's a way around it, there may be a bug perhaps

InfernalDread commented on July 16, 2024

glad you figured it out! hopefully this thread will be useful for other folks if they run into similar issues – sorry i had to step away to make some dinner :/

regarding the spaces.. let me see if there's a way around it, there may be a bug perhaps

oh no, don't worry about it, we all gotta eat lol. As for the spaces, it's not the worst thing to deal with, but I am glad that this issue was resolved. Hopefully it is something that can be fixed easily, if not, no big deal!

InfernalDread commented on July 16, 2024

I think I may know why this happened: maybe you have it in your code to always use the %username% of the host machine in the path variable. That would explain why, no matter what previous path I chose, it always defaulted to my username.

keldenl commented on July 16, 2024

hey @InfernalDread, weird question, but how did you get llama.cpp set up? do you have a main file? somebody else is trying to get it working and seems to be stuck on not having the main file

keldenl commented on July 16, 2024

are you running on cmd, powershell or git bash or something else?

keldenl commented on July 16, 2024

got it! thank you :) also do you think if i made a discord channel it'd be helpful?

Vitorhsantos commented on July 16, 2024

Hello guys, how are you doing? Firstly, great job here, keldenl! Your work is amazing! And thanks for all the help you gave him, InfernalDread.

I'm stuck on the same error as you, InfernalDread:

node:events:489
      throw er; // Unhandled 'error' event
      ^

Error: spawn C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main',
  path: 'C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main',
  spawnargs: [
    '-m',
    'C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/models/ggml-vicuna-13b-4bit-rev1.bin',
    '--temp',
    1,
    '--n_predict',
    1000,
    '--top_p',
    '0.1',
    '--top_k',
    '40',
    '-b',
    '512',
    '-c',
    '2048',
    '--repeat_penalty',
    '1.1764705882352942',
    '--reverse-prompt',
    'user:',
    '--reverse-prompt',
    '\nuser',
    '--reverse-prompt',
    'system:',
    '--reverse-prompt',
    '\nsystem',
    '--reverse-prompt',
    '##',
    '--reverse-prompt',
    '\n##',
    '--reverse-prompt',
    '###',
    '-i',
    '-p'

You mentioned the spaces between the folders in the path; I've already checked that and everything is OK. Any thoughts?

InfernalDread commented on July 16, 2024

Hello guys, how are you doing? Firstly, great job here, keldenl! Your work is amazing! And thanks for all the help you gave him, InfernalDread.

I'm stuck on the same error as you, InfernalDread:

node:events:489
      throw er; // Unhandled 'error' event
      ^

Error: spawn C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)
    at onErrorNT (node:internal/child_process:483:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main',
  path: 'C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/main',
  spawnargs: [
    '-m',
    'C:/Users/Vitor/Desktop/Hax_Prog/llama.cpp/models/ggml-vicuna-13b-4bit-rev1.bin',
    '--temp',
    1,
    '--n_predict',
    1000,
    '--top_p',
    '0.1',
    '--top_k',
    '40',
    '-b',
    '512',
    '-c',
    '2048',
    '--repeat_penalty',
    '1.1764705882352942',
    '--reverse-prompt',
    'user:',
    '--reverse-prompt',
    '\nuser',
    '--reverse-prompt',
    'system:',
    '--reverse-prompt',
    '\nsystem',
    '--reverse-prompt',
    '##',
    '--reverse-prompt',
    '\n##',
    '--reverse-prompt',
    '###',
    '-i',
    '-p'

You mentioned the spaces between the folders in the path; I've already checked that and everything is OK. Any thoughts?

What is your node version? It needs to be 18+ to work. Do "node -v" and tell me what the output is.

Vitorhsantos commented on July 16, 2024

@InfernalDread It says Node.js v20.0.0

@keldenl I've tried both options, same error. Any other thoughts?

keldenl commented on July 16, 2024

@Vitorhsantos wanna join the discord? we could try to figure it out there faster

InfernalDread commented on July 16, 2024

@InfernalDread It says Node.js v20.0.0

@keldenl I've tried both options, same error. Any other thoughts?

You made sure to authenticate and use the same model structure and path as in the README?

Vitorhsantos commented on July 16, 2024

Actually, I got lost in that part. How can I authenticate properly? I would really appreciate it if you could explain it to me.

InfernalDread commented on July 16, 2024

Actually, I got lost in that part. How can I authenticate properly? I would really appreciate it if you could explain it to me.

Join the discord, it'll be easier for us to help you there!
