Comments (16)
I would like to try this as a Docker image, ideally paired with some kind of web interface.
from gpt4all.
The main difference with Serge is that llama-cli doesn't shell out to the llama.cpp binary as Serge does, but rather uses llama.cpp directly from the C++ code and as such keeps the model in memory between requests - that makes it faster for iterating.
Seems like the best of both worlds would be to bundle llama-cli with an optional chat-style web interface.
To slightly extend my metaphor of different models for the app being like different movies for a media player, seems like it would be nice to have these applications not be restricted to just one model per app. Requests could go to the same app for any number of models, specifying which one as part of the request.
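A minimal sketch of that idea (all names here are hypothetical, not the project's actual API): a single service keeps several loaded models in a registry and dispatches each request to whichever model it names.

```python
# Sketch of per-request model selection (all names hypothetical).
# One service holds multiple loaded models and routes each request
# to the model named in the request itself.

class LoadedModel:
    """Stand-in for a model held in memory."""
    def __init__(self, name):
        self.name = name

    def generate(self, prompt):
        # A real backend would run inference here.
        return f"[{self.name}] completion for: {prompt}"

# Models loaded once at startup, shared across requests.
registry = {
    "gpt4all-lora": LoadedModel("gpt4all-lora"),
    "llama-7b": LoadedModel("llama-7b"),
}

def handle_request(request):
    """Route a request like {'model': ..., 'prompt': ...}."""
    model = registry.get(request["model"])
    if model is None:
        return {"error": f"unknown model {request['model']!r}"}
    return {"text": model.generate(request["prompt"])}

print(handle_request({"model": "llama-7b", "prompt": "hello"}))
```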
Re models: correct, that's where I'd like to go too in the long run, although reloading models comes at the price of loading them back into RAM each time they're instantiated
Well I think the only real restriction there would be how much RAM you have. Like maybe given how much RAM you have, you'd have a choice of running two small models or one big one. Maybe different models could be started or stopped with different settings with respect to RAM vs storage.
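A rough back-of-envelope way to think about it (the numbers here are assumptions for illustration, not measured requirements): a quantized model needs roughly its on-disk file size in RAM, plus some overhead for the context.

```python
# Back-of-envelope RAM check (numbers are illustrative assumptions):
# a quantized model needs roughly its file size in RAM, plus some
# overhead for the context / KV cache.

def fits_in_ram(model_file_gb, ram_gb, overhead_gb=1.5):
    """Return whether a model of the given on-disk size plausibly fits."""
    return model_file_gb + overhead_gb <= ram_gb

# e.g. two ~4 GB quantized 7B models vs one ~8 GB 13B model on 16 GB:
print(fits_in_ram(4.0, 16.0))        # one small model
print(fits_in_ram(4.0 + 4.0, 16.0))  # two small models together
print(fits_in_ram(8.0, 16.0))        # one bigger model
```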
We don't have any plans to do this at the moment.
maybe this can help you, I've added support for gpt4all too: https://github.com/go-skynet/llama-cli
@mudler docker: Error response from daemon: unknown: Tag v0.3 was deleted or has expired.
@mudler docker: Error response from daemon: unknown: Tag v0.3 was deleted or has expired.
Use latest, going to tag a new release soon.
Would love this feature; it would allow the project to be run easily on any machine without any hassle.
Upon further research into this, it appears that the llama-cli project is already capable of bundling gpt4all into a docker image with a CLI and that may be why this issue is closed so as to not re-invent the wheel.
However, I'm not seeing a docker-compose for it, nor good instructions for less experienced users to try it out.
I'm also a bit nervous about hardware requirements. It isn't made very clear what you really need to run this in terms of hardware. If I store the model on an HDD, would it be bad for the long term health of the HDD? Lots of questions like that need answering.
Upon further research into this, it appears that the llama-cli project is already capable of bundling gpt4all into a docker image with a CLI and that may be why this issue is closed so as to not re-invent the wheel.
However, I'm not seeing a docker-compose for it, nor good instructions for less experienced users to try it out.
I'm also a bit nervous about hardware requirements. It isn't made very clear what you really need to run this in terms of hardware. If I store the model on an HDD, would it be bad for the long term health of the HDD? Lots of questions like that need answering.
HDDs do not deteriorate with write cycles, unlike SSDs, so even if it does lots of writes there is no problem.
HDDs do not deteriorate with write cycles, unlike SSDs, so even if it does lots of writes there is no problem.
I guess maybe I shouldn't be using this issue as a forum but I am curious as to why the model would be doing lots of writes? Or any writes? If I'm just trying to run the model to generate text and am not working on tuning it, then the program's access to the model data should be read only, at least in theory, right?
HDDs do not deteriorate with write cycles, unlike SSDs, so even if it does lots of writes there is no problem.
I guess maybe I shouldn't be using this issue as a forum but I am curious as to why the model would be doing lots of writes? Or any writes? If I'm just trying to run the model to generate text and am not working on tuning it, then the program's access to the model data should be read only, at least in theory, right?
Yes, it will only be reads. And I mentioned writes because you were asking if the HDD health would deteriorate in the long term. I don't think it will.
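For what it's worth, llama.cpp-style runtimes typically map the model file read-only, so the OS can even share the pages between processes. A minimal illustration of read-only mapping in Python:

```python
# Demonstrates read-only access to a file, the way an inference
# runtime typically maps model weights: reads succeed, writes are refused.
import mmap
import os
import tempfile

def readonly_map_demo():
    # Create a small stand-in "model file" on disk.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"fake model weights")
        path = f.name
    try:
        with open(path, "rb") as f:
            m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
            head = m[:4]           # reading works normally
            try:
                m[0] = 0           # any write attempt is rejected
                write_refused = False
            except TypeError:
                write_refused = True
            m.close()
    finally:
        os.unlink(path)
    return head, write_refused

print(readonly_map_demo())  # (b'fake', True)
```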
Upon further research into this, it appears that the llama-cli project is already capable of bundling gpt4all into a docker image with a CLI and that may be why this issue is closed so as to not re-invent the wheel.
However, I'm not seeing a docker-compose for it, nor good instructions for less experienced users to try it out.
Care to open an issue on llama-cli? We can tackle it from there.
Edit: I'm not bundling the model in the image due to #75
I just had another option recommended to me on Discord: Serge provides a Docker image with a web interface. No offense, but it seems closer to what I had in mind for the specific goofy nonsense I'm just playing around with than llama-cli, but thanks anyway.
Also, it absolutely makes sense to not bundle the actual model with the application now that I think about it. That would be like bundling movies with the Jellyfin Docker image.
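Following that analogy, a compose file would mount the models from the host rather than bake them into the image. Something like this sketch (the image name, port, and paths are illustrative, not the project's actual ones):

```yaml
version: "3"
services:
  llm-api:
    image: example/llama-cli:latest   # illustrative image name
    ports:
      - "8080:8080"
    volumes:
      # models live on the host, like a media library for Jellyfin
      - ./models:/models:ro
    environment:
      MODELS_PATH: /models
```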
I just had another option recommended to me on Discord: Serge provides a Docker image with a web interface. No offense, but it seems closer to what I had in mind for the specific goofy nonsense I'm just playing around with than llama-cli, but thanks anyway.
Sure thing! I'm happy you got your way around it! llama-cli is more suitable if you need to e.g. embed it in some application, as it provides just a raw RESTful API and a simple web page to interact with as a playground - it's by no means UX-friendly, but rather developer-friendly.
The main difference with Serge is that llama-cli doesn't shell out to the llama.cpp binary as Serge does, but rather uses llama.cpp directly from the C++ code and as such keeps the model in memory between requests - that makes it faster for iterating.
Also, it absolutely makes sense to not bundle the actual model with the application now that I think about it. That would be like bundling movies with the Jellyfin Docker image.
The main difference with Serge is that llama-cli doesn't shell out to the llama.cpp binary as Serge does, but rather uses llama.cpp directly from the C++ code and as such keeps the model in memory between requests - that makes it faster for iterating.
Seems like the best of both worlds would be to bundle llama-cli with an optional chat-style web interface.
To slightly extend my metaphor of different models for the app being like different movies for a media player, seems like it would be nice to have these applications not be restricted to just one model per app. Requests could go to the same app for any number of models, specifying which one as part of the request.
Very good points, I'd like to iterate on this. I'm not a frontend developer, but I guess it shouldn't be too hard.
Re models: correct, that's where I'd like to go too in the long run, although reloading models comes at the price of loading them back into RAM each time they're instantiated
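That reload cost is the crux of the trade-off. One common pattern, sketched below with hypothetical names, is to keep recently used models resident (so repeat requests skip the reload) and evict the least recently used one when a RAM budget is hit:

```python
# Sketch of the trade-off above (all names hypothetical): keep
# recently used models resident to skip the expensive reload, and
# evict the least recently used one when the RAM budget is exceeded.
from collections import OrderedDict

def load_model(name):
    """Stand-in for the expensive load-from-disk-into-RAM step."""
    return {"name": name}

class ModelCache:
    def __init__(self, max_resident=2):
        self.max_resident = max_resident
        self.resident = OrderedDict()  # name -> loaded model, LRU order
        self.loads = 0                 # counts expensive loads

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)    # cache hit: no reload
            return self.resident[name]
        if len(self.resident) >= self.max_resident:
            self.resident.popitem(last=False)  # evict LRU, freeing RAM
        self.loads += 1
        self.resident[name] = load_model(name)
        return self.resident[name]

cache = ModelCache(max_resident=2)
cache.get("7b"); cache.get("13b"); cache.get("7b")  # 2 loads, 1 hit
cache.get("30b")                                    # evicts "13b"
print(cache.loads, list(cache.resident))            # 3 ['7b', '30b']
```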