Comments (7)
That's fair, how about a middle ground - include a direct download link to the model file, or if it's required we accept some terms, direct link to the model landing page?
from llamachat.
Maybe add some more information to the README, like where to download models, what to download, and how to set up the environment to deploy the app, with a section (and maybe some links) for each of the supported models.
+1
I have downloaded 20GB+ of models so far, and none of them works -_-'
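If downloaded models aren't loading, one quick sanity check is to look at the file's leading magic bytes, since llama.cpp-based apps only accept particular ggml-family formats. Below is a minimal sketch; the magic values are the ones used by the ggml/gguf formats as I understand them, and the helper name and labels are my own, not part of LlamaChat:

```python
# Hypothetical helper: peek at a model file's first bytes to guess its format.
# Magic values follow the ggml-family conventions; labels are illustrative.
MAGICS = {
    b"GGUF": "gguf (current llama.cpp format)",
    b"lmgg": "ggml (legacy, unversioned)",
    b"fmgg": "ggmf (legacy, versioned)",
    b"tjgg": "ggjt (legacy, mmap-able)",
    b"PK\x03\x04": "zip archive (likely a PyTorch .pth checkpoint; needs conversion)",
}

def sniff_model_format(path):
    """Return a best-guess format name based on the file's first four bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
    return MAGICS.get(head, "unknown (possibly corrupt or unsupported)")
```

If this reports an unexpected format for a freshly downloaded file, the download may be truncated or the model may need converting before the app can load it.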
@umaar Definitely been thinking about this for the publicly available models. This won't be happening for most though because of their distribution terms 👎
@matthieu-lapeyre If there are specific problems could you open another Issue? I wonder if maybe your problems are related to #3 — I have some ideas for this in the works!
Yeah, I think this will work at least for GPT4All, which publishes a direct download link!
@mignochrono there are some links already in-app but agreed that these could be improved. I think it would be better to improve the UI here because the plan is to add more models, which could get unwieldy if they're all documented in the README.
how to set up the environment to deploy the app
What do you mean by this exactly? Building from source, or something else?
Can someone post short instructions on how to download and install a model that worked for them?
I also had problems getting it working and this would be nice.
Having a quickstart guide on how to get your first chat running would be nice
Related Issues (20)
- Syntax Error in ChatModel.swift HOT 2
- Build error HOT 1
- Add Metal/GPU support for running model inference HOT 1
- Support for Falcon HOT 1
- Support for Open LLaMa? HOT 1
- Add iOS support ? HOT 1
- Support ggmlv3 HOT 2
- windows support HOT 1
- Error using pth format model HOT 4
- installation environment problem HOT 1
- llama2 7B-chat model works poorly.
- Error when converting model HOT 1
- [Feature Request] Support InternLM
- Support for Code Llama – Instruct?
- support for amd gpu (macos)
- Ollama support HOT 1
- Broken link in README
- Llamachat is spouting gibberish HOT 1
- llama3 support
- Feature Request: Apple Silicon Neural Engine - Core ML model package format support HOT 2