Comments (19)
OK, I have turned my PC back on, let's do this!
can u screenshot what u see? also is anything printed in the terminal? thank u!
interesting.. can you make sure you run npm i again to install dependencies?
also, what do you get if you try to visit localhost:443/v1/models?
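for reference, here's a quick way to poke that endpoint from a script instead of the browser (a minimal sketch, assuming Node 18+ with global fetch; adjust the port if you've changed it):

    // check whether gpt-llama.cpp answers at all
    fetch('http://localhost:443/v1/models')
      .then((res) => res.json())
      .then((models) => console.log('server is up:', models))
      .catch((err) => console.error('server unreachable:', err));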
I will try that now.
Same result for both, sadly.
that is odd.. do you see any issues in the cmd window?
can you try using your ip address instead of localhost (192.168.0.33:443)?
does going to localhost:443 even give you anything? it almost seems like the server isn't running at all..
I can try that. I agree with you though, it's almost like the server isn't running lol
Nothing. No issues in the cmd window either, no errors or odd things showing up at all. Odd indeed.
hmmmmmmm.. maybe try opening a new terminal and trying again? or restarting ur pc? this gotta be the weirdest issue i've seen haha
I guess I can try that too, let's hope for the best! In the meantime, is there any code that you are using that is specific to Apple hardware? Or is everything "universally" supported?
i wouldn't think that the issue you're seeing would be due to windows vs mac. i asked chatgpt and it had a couple of good suggestions:
- Make sure your server is actually running and listening on port 443. You can check this by running

    netstat -an | findstr :443

in your command prompt or terminal. This command lists all the active network connections, and you should see a line with 127.0.0.1:443 or 0.0.0.0:443 indicating that your server is listening on port 443. (i see TCP [::]:443 [::]:0 LISTENING for the server)
- Check your firewall settings to make sure that port 443 is open and not blocked. You can try temporarily disabling your firewall to see if that resolves the issue.

maybe port 443 is blocked. try changing the PORT to something other than 443 in index.js (line 11), maybe try 8000?

    const PORT = 8000;
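for context, PORT is just the port the server binds to. here's a minimal sketch of what that part of index.js does, assuming an Express-style setup (the surrounding code is illustrative, not the actual file):

    import express from 'express';

    const app = express();
    const PORT = 8000; // was 443; any free, unblocked port works

    // the /v1/* routes (models, chat completions, etc.) get registered on app

    app.listen(PORT, () => {
      console.log(`gpt-llama.cpp listening on http://localhost:${PORT}`);
    });

once it's actually listening, the netstat check above should show the new port instead of 443.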
Let's gooo. Thank you for the assistance! Do I need to do anything with the docs? Or are the next steps specific to other programs that utilize this API?
let's gooo!!!! what was the solution? was it because of the :443 port, or did the restart do the job?
docs are a good place to test out the API (you NEED to "AUTHORIZE" for any of the endpoints to work – just throw the path to the model in there), but otherwise you can start by trying out chatbot-ui's guide (the only one i've written so far lol): https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/chatbot-ui-setup-guide.md
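fwiw, "authorize" just means every request carries the path to your model file where an OpenAI API key would normally go. a minimal sketch of what a request looks like once authorized (assumes Node 18+; the model path below is just an example – point it at your own ggml bin):

    // OpenAI-style chat request against gpt-llama.cpp
    fetch('http://localhost:8000/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // the bearer token is the *path to your model*, not a real API key
        Authorization: 'Bearer ../llama.cpp/models/ggml-vicuna-13b.bin',
      },
      body: JSON.stringify({
        messages: [{ role: 'user', content: 'Hello!' }],
      }),
    })
      .then((res) => res.json())
      .then(console.log);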
Ya, the restart did the trick. How do I authorize? I tried to look into the steps, but I am not too tech savvy lol
EDIT: I have a Vicuna 13B ggml bin file ready to go
Ohhhhh, man, please excuse my stupidity LOL
u are good haha
i'm going to go ahead and close this as resolved! tl;dr if anybody else hits this in the future – it doesn't seem to be windows-specific, and a computer restart or cmd window restart should do the trick! thanks!