Comments (16)

BenMcLean commented on May 25, 2024

I would like to try this as a Docker image, ideally paired with some kind of web interface.
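A hypothetical docker-compose.yml for that kind of setup might look like this (the image names, ports, and volume paths are purely illustrative, not an actual published configuration):

```yaml
version: "3"
services:
  llm-api:
    image: example/llm-api:latest       # hypothetical inference API container
    volumes:
      - ./models:/models:ro             # models mounted read-only from the host
    ports:
      - "8080:8080"
  web-ui:
    image: example/llm-web-ui:latest    # hypothetical chat-style front end
    ports:
      - "3000:3000"
    depends_on:
      - llm-api
```

Keeping the models in a host-mounted volume rather than baked into the image means one image works with any model you download.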

from gpt4all.

BenMcLean commented on May 25, 2024

> The main difference with serge is that llama-cli doesn't shell out as serge does, but rather uses llama.cpp directly from the C++ code and so keeps the model in memory between requests, which makes it much faster for iterating.

Seems like the best of both worlds would be to bundle llama-cli with an optional chat-style web interface.

To slightly extend my metaphor of different models for the app being like different movies for a media player, it would be nice for these applications not to be restricted to just one model per app. Requests could go to the same app for any number of models, specifying which one as part of the request.
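The per-request model idea could be expressed as a field in the request body; a hypothetical example (the field names and model filename are illustrative, not llama-cli's actual API):

```json
{
  "model": "ggml-gpt4all-j.bin",
  "prompt": "Write a haiku about Docker.",
  "temperature": 0.7
}
```

The server would look up the named model in a registry, loading it on first use and keeping it resident for later requests.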

BenMcLean commented on May 25, 2024

> Re models: correct, that's where I'd like to go too in the long run, although reloading models comes with the price of loading them back into RAM each time they are instantiated

Well, I think the only real restriction there would be how much RAM you have. Given your available RAM, you might have a choice of running two small models or one big one. Maybe different models could be started or stopped with different settings with respect to RAM vs. storage.
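That RAM-vs-model-size check could even be scripted; a minimal Linux-only sketch (the 4 GB figure is a hypothetical model size, roughly what a 4-bit 7B ggml file takes):

```shell
#!/bin/sh
# Read available memory in kB from /proc/meminfo (Linux-specific field).
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)

# Hypothetical model footprint: 4 GB expressed in kB.
model_kb=$((4 * 1024 * 1024))

if [ "$avail_kb" -gt "$model_kb" ]; then
    echo "enough RAM to load the model"
else
    echo "not enough RAM; pick a smaller model"
fi
```

A launcher could run this per model before deciding which ones to instantiate.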

bstadt commented on May 25, 2024

We don't have any plans to do this at the moment.

mudler commented on May 25, 2024

Maybe this can help you; I've added support for gpt4all too: https://github.com/go-skynet/llama-cli

faroukellouze commented on May 25, 2024

@mudler docker: Error response from daemon: unknown: Tag v0.3 was deleted or has expired.

mudler commented on May 25, 2024

> @mudler docker: Error response from daemon: unknown: Tag v0.3 was deleted or has expired.

Use `latest`; going to tag a new release soon.

iQuickDev commented on May 25, 2024

Would love this feature; it would allow the project to be run easily on any machine without any hassle.

BenMcLean commented on May 25, 2024

Upon further research, it appears that the llama-cli project is already capable of bundling gpt4all into a Docker image with a CLI, which may be why this issue is closed: so as not to reinvent the wheel.

However, I'm not seeing a docker-compose file for it, nor good instructions for less experienced users to try it out.

I'm also a bit nervous about hardware requirements. It isn't made very clear what you really need to run this in terms of hardware. If I store the model on an HDD, would it be bad for the long-term health of the HDD? Lots of questions like that need answering.

iQuickDev commented on May 25, 2024

> Upon further research, it appears that the llama-cli project is already capable of bundling gpt4all into a Docker image with a CLI, which may be why this issue is closed: so as not to reinvent the wheel.
>
> However, I'm not seeing a docker-compose file for it, nor good instructions for less experienced users to try it out.
>
> I'm also a bit nervous about hardware requirements. It isn't made very clear what you really need to run this in terms of hardware. If I store the model on an HDD, would it be bad for the long-term health of the HDD? Lots of questions like that need answering.

HDDs, unlike SSDs, do not deteriorate with write cycles, so even if it does lots of writes there is no problem.

BenMcLean commented on May 25, 2024

> HDDs, unlike SSDs, do not deteriorate with write cycles, so even if it does lots of writes there is no problem.

I guess maybe I shouldn't be using this issue as a forum, but I am curious why the model would be doing lots of writes, or any writes at all. If I'm just running the model to generate text and am not working on tuning it, then the program's access to the model data should be read-only, at least in theory, right?

iQuickDev commented on May 25, 2024

> HDDs, unlike SSDs, do not deteriorate with write cycles, so even if it does lots of writes there is no problem.

> I guess maybe I shouldn't be using this issue as a forum, but I am curious why the model would be doing lots of writes, or any writes at all. If I'm just running the model to generate text and am not working on tuning it, then the program's access to the model data should be read-only, at least in theory, right?

Yes, it will only be reads. I mentioned writes because you were asking whether the HDD's health would deteriorate in the long term; I don't think it will.
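Since inference only reads the model, that can even be enforced at the filesystem level; a minimal sketch with a hypothetical model filename (in Docker the same effect comes from mounting the model directory with the `:ro` flag):

```shell
#!/bin/sh
# Stand-in for a downloaded model file (hypothetical name).
touch ggml-model.bin

# Make it read-only for everyone, so any accidental write fails.
chmod 444 ggml-model.bin

ls -l ggml-model.bin
```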

mudler commented on May 25, 2024

> Upon further research, it appears that the llama-cli project is already capable of bundling gpt4all into a Docker image with a CLI, which may be why this issue is closed: so as not to reinvent the wheel.
>
> However, I'm not seeing a docker-compose file for it, nor good instructions for less experienced users to try it out.

Care to open an issue on llama-cli? We can tackle it from there.

Edit: I'm not bundling the model in the image due to #75

BenMcLean commented on May 25, 2024

I just had another option recommended to me on Discord: Serge provides a Docker image with a web interface. No offense, but it seems closer to what I had in mind for the specific goofy nonsense I'm just playing around with than llama-cli, but thanks anyway.

Also, it absolutely makes sense not to bundle the actual model with the application, now that I think about it. That would be like bundling movies with the Jellyfin Docker image.

mudler commented on May 25, 2024

> I just had another option recommended to me on Discord: Serge provides a Docker image with a web interface. No offense, but it seems closer to what I had in mind for the specific goofy nonsense I'm just playing around with than llama-cli, but thanks anyway.

Sure thing! I'm happy you found your way around it! llama-cli is more suitable if you need to, e.g., embed it in some application, as it provides just a raw RESTful API and a simple web page to interact with as a playground. It's by no means UX-friendly, but rather developer-friendly.

The main difference with serge is that llama-cli doesn't shell out as serge does, but rather uses llama.cpp directly from the C++ code and so keeps the model in memory between requests, which makes it much faster for iterating.

> Also, it absolutely makes sense not to bundle the actual model with the application, now that I think about it. That would be like bundling movies with the Jellyfin Docker image.

👍

mudler commented on May 25, 2024

> The main difference with serge is that llama-cli doesn't shell out as serge does, but rather uses llama.cpp directly from the C++ code and so keeps the model in memory between requests, which makes it much faster for iterating.

> Seems like the best of both worlds would be to bundle llama-cli with an optional chat-style web interface.
>
> To slightly extend my metaphor of different models for the app being like different movies for a media player, it would be nice for these applications not to be restricted to just one model per app. Requests could go to the same app for any number of models, specifying which one as part of the request.

Very good points; I'd like to iterate on this. I'm not a frontend developer, but I guess that shouldn't be too hard.

Re models: correct, that's where I'd like to go too in the long run, although reloading models comes with the price of loading them back into RAM each time they are instantiated.
