Comments (6)
I think what he means is that the pretrained LLM has its own knowledge outside of his document. He probably has not uploaded a document about Captain America; however, the LLM knows who Captain America is.
He is trying to sandbox the LLM from anything other than the document he uploaded. I'm not sure that's possible.
I have only been able to do it by using OpenAI embeddings over the API and ChatGPT 3.5 Turbo.

Yes, this is exactly what I mean: the document I uploaded had nothing to do with Captain America, and it still told me who Captain America is.
The first thing you shouldn't disregard is that a ~4 GB model file is indeed large (the first "L" in "LLM"), because it is trained on a huge amount (as in terabytes) of text data. So it will indeed know a thing or two about pretty much anything, not to mention massively popular pop-culture figures like Captain America. If it were a hyper-focused model built from just a few PDFs, nothing could justify it taking up gigabytes of storage/memory.
And the second thing not to mix up is what this project does (so far): it ingests your documents so that the model has an easier time looking up all that data in the vector database when coming up with slightly more educated answers to your questions than it would have produced otherwise, as an isolated snapshot of a mind. In other words, you aren't training the model. The application is simply helping the model take notes (to look up later).
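That lookup step can be sketched in a few lines. This is a toy illustration, not chatdocs' actual implementation: the hand-made 3-number vectors stand in for real embeddings, and the dict stands in for a real vector store.

```python
import math

# Toy "embeddings": in a real RAG setup an embedding model produces these
# vectors, and they live in a vector database rather than a dict.
doc_chunks = {
    "The invoice total for March is $4,200.": [0.9, 0.1, 0.0],
    "Payment is due within 30 days of receipt.": [0.7, 0.3, 0.1],
    "Captain America first appeared in 1941.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query vector."""
    ranked = sorted(doc_chunks, key=lambda c: cosine(query_vec, doc_chunks[c]),
                    reverse=True)
    return ranked[:k]

# A question about invoices would embed close to the invoice chunks,
# so those chunks (not the Captain America one) get pasted into the prompt.
context = retrieve([1.0, 0.1, 0.0])
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The model still answers from its own weights; retrieval only controls which notes get pasted in front of the question.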
from chatdocs.
This usually comes down to prompting. If you tell it not to answer anything it can't find inside the documents, it is less likely to use its previously learned knowledge. To emphasise: it is LESS LIKELY to. There isn't a generally accepted way to completely prevent an LLM from using the knowledge it was trained on.
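A minimal sketch of that kind of guarded prompt, assuming a hypothetical `build_guarded_prompt` helper (the exact wording and refusal string are illustrative, not chatdocs' built-in template):

```python
REFUSAL = "I don't know based on the provided documents."

def build_guarded_prompt(context: str, question: str) -> str:
    """Wrap the question in instructions that discourage (but cannot
    fully prevent) the model from answering out of its pretrained
    knowledge instead of the retrieved context."""
    return (
        "Answer the question using ONLY the context below. "
        f"If the context does not contain the answer, reply exactly: {REFUSAL}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The final prompt string is what gets sent to the LLM; nothing here changes the model itself, which is why the restriction is probabilistic rather than a hard sandbox.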
from chatdocs.
You can try different prompts like:
- Based on the context provided, who is Captain America?
- Based on the above text, who is Captain America? If the above text doesn't have enough context, just say you don't know.
Here I'm referring to the document text as the "above text", which is passed to the LLM using a prompt template.
But it can be hard to make models answer only from the documents, especially if the documents don't provide enough context.
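Since prompting alone is unreliable, one option is a crude post-hoc check on the model's answer. The sketch below (a hypothetical `looks_grounded` helper, not part of chatdocs) flags answers whose content words barely overlap with the retrieved context, which suggests the model answered from its pretrained knowledge:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and"}

def looks_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Return True if enough of the answer's content words appear in the
    retrieved context. Low overlap suggests the model ignored the documents."""
    words = [w for w in re.findall(r"[a-z']+", answer.lower())
             if w not in STOPWORDS]
    if not words:
        return True  # nothing to check
    ctx = set(re.findall(r"[a-z']+", context.lower()))
    overlap = sum(1 for w in words if w in ctx) / len(words)
    return overlap >= threshold
```

Word overlap is a blunt instrument (paraphrases will be flagged as ungrounded), but it catches the obvious case where the answer is about something the documents never mention.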
from chatdocs.
Is there any way we can sandbox this, or maybe use different prompts to get output based only on the document?
from chatdocs.
Related Issues (20)
- Can't download models anymore, not sure why. Used to work perfectly
- Is chatdocs still being supported?
- how to turn off citations?
- `score_threshold` in db.as_retriever doesn't seem to be enforced
- ModuleNotFoundError: No module named 'langchain.embeddings.base'
- ImportError: cannot import name 'soft_unicode' from 'markupsafe'
- pad_token errors
- Google colab: OSError: libcudart.so.12: cannot open shared object file: No such file or directory
- model DocsGPT-7B
- Error ImportError: cannot import name 'url_quote' from 'werkzeug.urls' after running `chatdocs ui`
- ui color change
- HTTPS not working? Please help.
- In which path the add file will be loaded any how can we delete the loaded file ?
- Works only on one session at a time.
- Error ImportError: cannot import name 'url_quote' from 'werkzeug.urls'
- Turning on GPU gives PTX error
- Tabulate missing as dependency
- Update to Python 3.12 - Remove 'stdlib distutils module' requirement (deprecated)
- French language support
- IndexError when using chatdocs add command for some documents