

pdfGPT

Demo

  1. Demo URL: https://bhaskartripathi-pdfgpt-turbo.hf.space

  2. Demo Video: (thumbnail linking to the demo video)

Version Updates (27 July, 2023):

  1. Improved error handling
  2. PDF GPT now supports Turbo models and GPT-4, including the 16K and 32K token models.
  3. Pre-defined questions for auto-filling the input.
  4. Implemented a Chat History feature.

Note on model performance

If the response to a specific question from the PDF is poor with Turbo models, bear in mind that Turbo models such as gpt-3.5-turbo are chat-completion models and will not give a good response in some cases where the embedding similarity is low. Despite OpenAI's claims, the Turbo models are not the best models for Q&A. In those specific cases, use either the good old text-davinci-003 or GPT-4 and above. These models invariably give you the most relevant output.
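For illustration, a minimal sketch of the two call styles, assuming the pre-1.0 openai Python SDK (the model-prefix check and the helper below are examples, not code from this repo):

import openai  # pre-1.0 SDK style

openai.api_key = "sk-..."  # your OpenAI API key

def answer(prompt: str, model: str) -> str:
    # Chat-completion models (gpt-3.5-turbo, gpt-4, ...) use the chat endpoint.
    if model.startswith(("gpt-3.5", "gpt-4")):
        resp = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    # Older completion models (e.g. text-davinci-003) use the completions endpoint.
    resp = openai.Completion.create(model=model, prompt=prompt, max_tokens=512)
    return resp.choices[0].text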

Upcoming Release Pipeline:

  1. Support for Falcon, Vicuna, and Meta Llama
  2. OCR support
  3. Multiple PDF file support
  4. Node.js based web application - with no trial, no API fees. 100% open source.

Problem Description:

  1. When you pass a large text to OpenAI, it runs into the 4K token limit: it cannot take an entire PDF file as input.
  2. OpenAI sometimes becomes overly chatty and returns irrelevant responses not directly related to your query. This is because it is fed poor embeddings.
  3. ChatGPT cannot directly talk to external data. Some solutions use LangChain, but it is token-hungry if not implemented correctly.
  4. A number of solutions such as https://www.chatpdf.com, https://www.bespacific.com/chat-with-any-pdf, and https://www.filechat.io have poor content quality and are prone to hallucination. One good way to avoid hallucinations and improve truthfulness is to use better embeddings. To solve this problem, I propose improving the embeddings with the Universal Sentence Encoder family of algorithms (read more here: https://tfhub.dev/google/collections/universal-sentence-encoder/1); see the sketch after this list.
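For illustration, a minimal sketch of generating such embeddings with the Universal Sentence Encoder via tensorflow_hub (the example chunks are made up; the repo's actual chunking and encoding live in app.py):

import tensorflow_hub as hub

# Load the Universal Sentence Encoder (a Deep Averaging Network variant).
# The download is large, so caching it locally is advisable
# (see "Running on localhost" below).
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

chunks = [
    "Room rent is subject to a maximum of INR 5,000 per day.",
    "The policy covers pre- and post-hospitalisation expenses.",
]
embeddings = use(chunks).numpy()  # shape: (num_chunks, 512)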

Solution: What is PDF GPT?

  1. PDF GPT allows you to chat with an uploaded PDF file using GPT functionalities.
  2. The application intelligently breaks the document into smaller chunks and employs a powerful Deep Averaging Network encoder to generate embeddings.
  3. A semantic search is first performed on your PDF content, and the most relevant embeddings are passed to OpenAI.
  4. Custom logic generates precise responses. The returned response can even cite, in square brackets ([]), the page number where the information is located, adding credibility to the responses and helping you locate the pertinent information quickly. The responses are much better than naive OpenAI responses.
  5. Andrej Karpathy mentioned in this post that the KNN algorithm is most appropriate for problems like this: https://twitter.com/karpathy/status/1647025230546886658 (a retrieval sketch follows this list).
  6. Enables APIs in production using langchain-serve.
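As a rough sketch of the KNN retrieval in step 5, using scikit-learn and reusing use, chunks, and embeddings from the sketch above (illustrative only, not the repo's actual SemanticSearch class):

from sklearn.neighbors import NearestNeighbors

# Fit a KNN index over the chunk embeddings.
nn = NearestNeighbors(n_neighbors=min(5, len(chunks)))
nn.fit(embeddings)

# Embed the question and retrieve the most similar chunks.
query_emb = use(["What is the cap on room rent?"]).numpy()
_, idx = nn.kneighbors(query_emb)
top_chunks = [chunks[i] for i in idx[0]]  # these go into the OpenAI prompt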

Docker

Run the following to use it with Docker Compose:

docker-compose -f docker-compose.yaml up
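For orientation, a minimal docker-compose.yaml sketch with the two services visible in the docker-compose logs quoted in the Issues section below; the repository's actual file may differ in image names, build configuration, and ports:

version: "3"
services:
  pdf-gpt:
    build: .                       # assumption: built from the repo's Dockerfile
    command: python app.py
    ports:
      - "7860:7860"                # Gradio UI
  langchain-serve:
    build: .
    command: lc-serve deploy local api
    ports:
      - "8080:8080"                # REST API (see the endpoint panel in the logs)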

Use pdfGPT in production using langchain-serve

Local playground

  1. Run lc-serve deploy local api in one terminal to expose the app as an API using langchain-serve.
  2. Run python app.py in another terminal for a local Gradio playground.
  3. Open http://localhost:7860 in your browser and interact with the app.

Cloud deployment

Make pdfGPT production-ready by deploying it on Jina Cloud.

lc-serve deploy jcloud api

Command output:
╭──────────────┬──────────────────────────────────────────────────────────────────────────────────────╮
│ App ID       │                                 langchain-3ff4ab2c9d                                 │
├──────────────┼──────────────────────────────────────────────────────────────────────────────────────┤
│ Phase        │                                       Serving                                        │
├──────────────┼──────────────────────────────────────────────────────────────────────────────────────┤
│ Endpoint     │                      https://langchain-3ff4ab2c9d.wolf.jina.ai                       │
├──────────────┼──────────────────────────────────────────────────────────────────────────────────────┤
│ App logs     │                               dashboards.wolf.jina.ai                                │
├──────────────┼──────────────────────────────────────────────────────────────────────────────────────┤
│ Swagger UI   │                    https://langchain-3ff4ab2c9d.wolf.jina.ai/docs                    │
├──────────────┼──────────────────────────────────────────────────────────────────────────────────────┤
│ OpenAPI JSON │                https://langchain-3ff4ab2c9d.wolf.jina.ai/openapi.json                │
╰──────────────┴──────────────────────────────────────────────────────────────────────────────────────╯

Interact using cURL

(Change the URL to your own endpoint)

PDF URL

curl -X 'POST' \
  'https://langchain-3ff4ab2c9d.wolf.jina.ai/ask_url' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "url": "https://uiic.co.in/sites/default/files/uploads/downloadcenter/Arogya%20Sanjeevani%20Policy%20CIS_2.pdf",
  "question": "What'\''s the cap on room rent?",
  "envs": {
    "OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'"
    }
}'

{"result":" Room rent is subject to a maximum of INR 5,000 per day as specified in the Arogya Sanjeevani Policy [Page no. 1].","error":"","stdout":""}

PDF file

QPARAMS=$(echo -n 'input_data='$(echo -n '{"question": "What'\''s the cap on room rent?", "envs": {"OPENAI_API_KEY": "'"${OPENAI_API_KEY}"'"}}' | jq -s -R -r @uri))
curl -X 'POST' \
  'https://langchain-3ff4ab2c9d.wolf.jina.ai/ask_file?'"${QPARAMS}" \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@Arogya_Sanjeevani_Policy_CIS_2.pdf;type=application/pdf'

{"result":" Room rent is subject to a maximum of INR 5,000 per day as specified in the Arogya Sanjeevani Policy [Page no. 1].","error":"","stdout":""}

Running on localhost

Credits: Adithya S

  1. Pull the image by entering the following command in your terminal or command prompt:

docker pull registry.hf.space/bhaskartripathi-pdfchatter:latest

  2. Download the Universal Sentence Encoder locally to your project's root folder. This is important because otherwise 915 MB will be downloaded at runtime every time you run the app.
  3. Download the encoder using this link.
  4. Extract the downloaded file and place it in your project's root folder as shown below:

Root folder of your project
└───Universal Sentence Encoder
|   ├───assets
|   └───variables
|   └───saved_model.pb
|
└───app.py

  5. If you have downloaded it locally, replace the code on line 68 in the API file:

self.use = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4')

with:

self.use = hub.load('./Universal Sentence Encoder/')

  6. Now, to run PDF-GPT, enter the following command:

docker run -it -p 7860:7860 --platform=linux/amd64 registry.hf.space/bhaskartripathi-pdfchatter:latest python app.py

Original source code with no integrations (for the demo hosted on Hugging Face):

https://huggingface.co/spaces/bhaskartripathi/pdfGPT_Turbo

UML

sequenceDiagram
    participant User
    participant System

    User->>System: Enter API Key
    User->>System: Upload PDF/PDF URL
    User->>System: Ask Question
    User->>System: Submit Call to Action

    System->>System: Blank field Validations
    System->>System: Convert PDF to Text
    System->>System: Decompose Text to Chunks (150 word length)
    System->>System: Check if embeddings file exists
    System->>System: If file exists, load embeddings and set the fitted attribute to True
    System->>System: If file doesn't exist, generate embeddings, fit the recommender, save embeddings to file and set fitted attribute to True
    System->>System: Perform Semantic Search and return Top 5 Chunks with KNN
    System->>System: Load Open AI prompt
    System->>System: Embed Top 5 Chunks in Open AI Prompt
    System->>System: Generate Answer with Davinci

    System-->>User: Return Answer
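For illustration, a sketch of the 150-word chunking step from the diagram, with page numbers tagged for citations (an approximation, not the repo's exact implementation):

def text_to_chunks(texts: list[str], word_length: int = 150) -> list[str]:
    # texts: one string per PDF page, in order.
    chunks = []
    for page_num, page_text in enumerate(texts, start=1):
        words = page_text.split()
        for i in range(0, len(words), word_length):
            chunk = " ".join(words[i : i + word_length])
            # Tag the page number so answers can cite it in square brackets.
            chunks.append(f"[Page no. {page_num}] {chunk}")
    return chunks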

Flowchart

flowchart TB
A[Input] --> B[URL]
A -- Upload File manually --> C[Parse PDF]
B --> D[Parse PDF] -- Preprocess --> E[Dynamic Text Chunks]
C -- Preprocess --> E[Dynamic Text Chunks with citation history]
E --Fit-->F[Generate text embedding with Deep Averaging Network Encoder on each chunk]
F -- Query --> G[Get Top Results]
G -- K-Nearest Neighbour --> K[Get Nearest Neighbour - matching citation references]
K -- Generate Prompt --> H[Generate Answer]
H -- Output --> I[Output]

Star History

I am looking for more contributors from the open-source community who can take up backlog items voluntarily and maintain the application jointly with me.

Also Try PyViralContent:

Have you ever wondered why your social media posts, blog, article, advertising, YouTube video, or other content doesn't go viral? I have published a new Python package: pyviralcontent! 🚀 It predicts the virality of your content along with readability scores. It uses multiple sophisticated algorithms to calculate your content's readability score and predicts its viral probability using Multi-Criteria Decision Analysis. 📈 Make your content strategy data-driven with pyviralcontent. Try it out and take your content's impact to the next level! 💥 https://github.com/bhaskatripathi/pyviralcontent

License

This project is licensed under the MIT License. See the LICENSE.txt file for details.

Citation

If you use PDF-GPT in your research or wish to refer to the examples in this repo, please cite with:

@misc{pdfgpt2023,
  author = {Bhaskar Tripathi},
  title = {PDF-GPT},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/bhaskatripathi/pdfGPT}}
}

pdfgpt's People

Contributors

bhaskatripathi, chenhuihu, danielorozco06, deepankarm, iw4p, jeffrey95, krrishdholakia, mtrevoux, richardscottoz, taherfattahi, weartist


pdfgpt's Issues

docker-compose up error

[+] Running 2/0
 ✔ Container pdfgpt-pdf-gpt-1          Created                                                                                                                                                                  0.0s 
 ✔ Container pdfgpt-langchain-serve-1  Created                                                                                                                                                                  0.0s 
Attaching to pdfgpt-langchain-serve-1, pdfgpt-pdf-gpt-1
pdfgpt-langchain-serve-1  | 
pdfgpt-pdf-gpt-1          | Traceback (most recent call last):
pdfgpt-pdf-gpt-1          |   File "app.py", line 92, in <module>
pdfgpt-pdf-gpt-1          |     demo.app.server.timeout = 60000 # Set the maximum return time for the results of accessing the upstream server
pdfgpt-pdf-gpt-1          | AttributeError: 'App' object has no attribute 'server'
pdfgpt-langchain-serve-1  | ────────────────────────── 🎉 Flow is ready to serve! ──────────────────────────
pdfgpt-langchain-serve-1  | ╭────────────── 🔗 Endpoint ───────────────╮
pdfgpt-langchain-serve-1  | │  ⛓        Protocol                 HTTP  │
pdfgpt-langchain-serve-1  | │  🏠          Local         0.0.0.0:8080  │
pdfgpt-langchain-serve-1  | │  🔒        Private      172.25.0.3:8080  │
pdfgpt-langchain-serve-1  | ╰──────────────────────────────────────────╯
pdfgpt-langchain-serve-1  | ╭─────────── 💎 HTTP extension ────────────╮
pdfgpt-langchain-serve-1  | │  💬          Swagger UI        .../docs  │
pdfgpt-langchain-serve-1  | │  📚               Redoc       .../redoc  │
pdfgpt-langchain-serve-1  | ╰──────────────────────────────────────────╯
pdfgpt-langchain-serve-1  | Do you love open source? Help us improve Jina in just 1 minute and 30 seconds by
pdfgpt-langchain-serve-1  | taking our survey: 
pdfgpt-langchain-serve-1  | https://10sw1tcpld4.typeform.com/jinasurveyfeb23?utm_source=jina(Set environment
pdfgpt-langchain-serve-1  | variable JINA_HIDE_SURVEY=1 to hide this message.)
pdfgpt-pdf-gpt-1 exited with code 1

Save the API key in .env so it only has to be entered once

I have tried other GPT apps for PDFs. Your app is great, but it needs one option, I think (a suggestion):

If we could save the API key permanently, there would be no need to add it every time.

Can you implement this?

May need a better way to tokenize characters...

Hello,

I recently encountered an issue while using your open source project. When I tried to use the project with Chinese characters, I received the following error message:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 5803 tokens (1707 in your prompt; 4096 for the completion). Please reduce your prompt; or completion length.

I believe this issue might be due to a possible miscalculation in the token count for Chinese characters. I understand that the GPT model tokenizes text differently based on the language, and it is possible that the algorithm isn't accurately calculating the token count for Chinese text. This leads to an incorrect total token count and subsequently the InvalidRequestError.

To better diagnose and resolve this issue, I kindly request you to look into the algorithm's handling of Chinese characters, specifically in the tokenization process. It would be greatly appreciated if you could provide any guidance or potential fixes for this issue.

Thank you for your time and effort in maintaining this project. I'm looking forward to your response.
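For what it's worth, one way to count tokens accurately per language is OpenAI's tiktoken library (a suggestion on my part; this repo may count tokens differently). Chinese text typically costs several tokens per character:

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

english = "Room rent is capped at INR 5,000 per day."
chinese = "每日房租上限为五千卢比。"
print(len(enc.encode(english)))  # token count, not word or character count
print(len(enc.encode(chinese)))  # often far higher per character than English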

Deploy `pdfGPT` as APIs locally/on cloud using `langchain-serve`

Repo - langchain-serve.

  • Exposes APIs from function definitions locally as well as on the cloud.
  • Very few code changes are needed, and development remains as easy as it is locally.
  • Supports both REST & WebSocket endpoints.
  • Serverless/autoscaling endpoints with automatic TLS certs on the cloud.
  • Real-time streaming and human-in-the-loop support - crucial for chatbots.

We can extend the simple existing app pdf-qna on langchain-serve.

Disclaimer: I'm the primary author of langchain-serve.

Attribute error

I'm getting this error message when I run the code. It happens after putting in the API key and uploading a PDF:
AttributeError: 'SemanticSearch' object has no attribute 'nn'
I can't work out why it's happening.

Can't run lc-serve

'lc-serve' is not recognized as an internal or external command,
operable program or batch file.

Docker?

I would like to run this in Docker for Unraid. Is this something you could try?

LICENSE

Please add a license.

Thank you,
Best regards.

Upload PDF Display Error

This seems to be an issue with the PDF file being too large? What file size is supported?

No module named 'gradio'

Just a suggestion. Maybe gradio should be included in the requirements.
I had to install it manually.

KeyError: 'name'


What does this error mean?
Is it because I'm using the wrong API?
If so, which API key should I use?

More features: formatting, actual chat, and show PDF

I wanted to see if we could improve the current feature set. The following is what I have in mind:

  • Add formatting to the answer (a new line between each citation; make the citations go from [4] to [Page 4] and give them a different color so they stand out, making them easy to reference)
  • Have an actual chat-like interface (think Humata, ChatPDF)
  • Show the actual PDF side-by-side (and maybe the citations in the answers can link to the page in the PDF viewer, making them easy to reference)

These features would be dope. I'm looking into working on it myself, but I know there are more talented people who could look into this.

Error using app webpage

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 414, in run_predict
output = await app.get_blocks().process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1320, in process_api
result = await self.call_function(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1048, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/Users/taoruifu/work/projects/llm/pdfGPT/app.py", line 49, in ask_api
raise ValueError(f'[ERROR]: {r.text}')

so where is the index data?

When preparing questions and answers, we typically follow these steps:

  1. Extract text from various data sources, such as websites, PDFs, CSVs, or plain text files. The extracted text can be saved in different locations.
  2. Create embeddings, which produce an index file as output.
  3. Answer questions by referencing the index created in the previous step.

However, when I run the API and app locally for this project, I cannot see the data generated in steps (1) and (2). I am curious where the data is stored and how it works.
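For reference, a minimal sketch of the embed-and-cache pattern described in steps (2) and (3), purely illustrative (the file name and encoder argument are assumptions, not this repo's actual layout):

import os
import numpy as np

EMB_FILE = "embeddings.npy"  # hypothetical cache location

def get_embeddings(chunks, encoder):
    # encoder: a callable like the Universal Sentence Encoder.
    # Load cached embeddings if present; otherwise compute and save them.
    if os.path.exists(EMB_FILE):
        return np.load(EMB_FILE)
    emb = encoder(chunks).numpy()
    np.save(EMB_FILE, emb)
    return emb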

Getting error when running

using python3:

File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/app.py", line 2, in
import fitz
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/fitz/init.py", line 1, in
from frontend import *
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/init.py", line 1, in
from .events import *
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/events/init.py", line 1, in
from .clipboard import *
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/events/clipboard.py", line 2, in
from ..dom import Event
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/dom.py", line 439, in
from . import dispatcher
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/dispatcher.py", line 15, in
from . import config, server
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/frontend/server.py", line 24, in
app.mount(config.STATIC_ROUTE, StaticFiles(directory=config.STATIC_DIRECTORY), name=config.STATIC_NAME)
File "/Users/m3kwong/PythonCode/LLM/pdfGPT-main/new/lib/python3.10/site-packages/starlette/staticfiles.py", line 57, in init
raise RuntimeError(f"Directory '{directory}' does not exist")
RuntimeError: Directory 'static/' does not exist

Waiting gateway...


As you can see, the server never deploys. What could be the issue? I installed all the requirements in requirements.txt.

M1 Mac Tensorflow

Has anyone been able to get this to work on macOS? I am trying to get it working in a virtual environment with Python 3.10 and have to use tensorflow-macos and tensorflow-metal.

I'm unsure what to change in the requirements, but I keep getting errors when trying to install all the dependencies.

I'm on mobile right now but will update with the full errors when I get back to my laptop.

Unable to pull a particular Docker layer of pdfchatter

Hi, I ran the docker pull command as suggested in the README, but I get the following output.

docker pull registry.hf.space/bhaskartripathi-pdfchatter:latest
latest: Pulling from bhaskartripathi-pdfchatter
bd8f6a7501cc: Pull complete 
44718e6d535d: Pull complete 
efe9738af0cb: Pull complete 
f37aabde37b8: Pull complete 
3923d444ed05: Pull complete 
1ecef690e281: Pull complete 
48673bbfd34d: Pull complete 
b761c288f4b0: Pull complete 
4ea6ac43d369: Pull complete 
aa9e20aea25a: Extracting [==================================================>]  99.49MB/99.49MB
63248b4e37e2: Download complete 
5806ef4fec33: Download complete 
ec89491cf0cd: Download complete 
e662a12eee66: Download complete 
46995db4b389: Download complete 
7d67ad956d91: Download complete 
b025d72cdd42: Download complete 
0bbbfa67eeab: Download complete 
66aa17d0dc7e: Download complete 
failed to register layer: Error processing tar file(exit status 1): archive/tar: invalid tar header

Is there maybe something wrong with the aa9e20aea25a layer?

Multiple PDF Files

What would it take to support importing multiple PDF files and searching across them?

Got error when using Chinese

I really like your app; however, I uploaded a PDF that is in Chinese and also asked a question in Chinese, but when I ran it, I got an error.
When will it support Chinese PDFs? Really looking forward to it.

Using same file/url results in having to reload entire document and reload chunks etc.

The use case: we want to ask multiple questions about the same file (i.e. the file is an FAQ and we have a number of different questions to ask about it; the questions are unrelated, so this isn't a conversation, just plain questions).

Using the same file/URL currently means reloading the entire document, regenerating the chunks, and so on. It would be nice if, when the URL or file is unchanged from the last submission, this step were skipped, since the recommender has already been generated.

more errors

It works for a while, then:

AttributeError: 'SemanticSearch' object has no attribute 'nn'

just errors

I uploaded a test document, background.pdf, asked it to give feedback, and it returned an error.

Error loading model

Error: Trying to load a model of incompatible/unknown type. 'C:\Users\User\AppData\Local\Temp\tfhub_modules\063d866c066fd46003be952409c' contains neither 'saved_model.pb' nor 'saved_model.pbtxt'.

I thought this used the OpenAI API. Why is it trying to load a local model?

Prompt optimization: up to 50% more token input

By removing whitespace and trimming the text down to the most discernible sentences possible, we can fit up to 50% more tokens into the prompt, and hence support even larger PDFs. The algorithm to be used will be taken from a reputable source, linked once the issue is completed.


Please allow me to be assigned to this issue, and I will make a stable release sometime in June. Thank you.
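As a rough illustration of the idea (not the algorithm from the reputable source mentioned above), collapsing whitespace alone already shrinks many PDF extractions noticeably:

import re

def squeeze(text: str) -> str:
    # Collapse runs of spaces/newlines/tabs into single spaces so fewer
    # tokens are wasted on layout artifacts from PDF extraction.
    return re.sub(r"\s+", " ", text).strip()

prompt_text = squeeze(raw_pdf_text)  # raw_pdf_text: extracted PDF text (assumed)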

How to run this locally?

This is by far the best chat-with-PDF kind of app. I would really appreciate a comprehensive guide on how to set it up locally, since I am getting a lot of errors.

Problem with class "SemanticSearch"

When running the application, I get an error which says:
"AttributeError: 'SemanticSearch' object has no attribute 'nn'"

Any clues as to why, and how to fix the issue?
Anyway, I love the idea. Thank you for the repo!

Can't run the code

Hi.

I have tried running your code on both Windows and an Ubuntu VM. In both cases I had to pip install the fitz and frontend libs in addition to what requirements.txt contains. Again, in both cases I get this error:

File "/home/parallels/Documents/PythonScripts/PDFGPT/app.py", line 2, in
import fitz
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/fitz/init.py", line 1, in
from frontend import *
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/init.py", line 1, in
from .events import *
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/events/init.py", line 1, in
from .clipboard import *
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/events/clipboard.py", line 2, in
from ..dom import Event
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/dom.py", line 439, in
from . import dispatcher
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/dispatcher.py", line 15, in
from . import config, server
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/frontend/server.py", line 24, in
app.mount(config.STATIC_ROUTE, StaticFiles(directory=config.STATIC_DIRECTORY), name=config.STATIC_NAME)
File "/home/parallels/Documents/PythonScripts/PDFGPT/PDFGPT/lib/python3.10/site-packages/starlette/staticfiles.py", line 57, in init
raise RuntimeError(f"Directory '{directory}' does not exist")
RuntimeError: Directory 'static/' does not exist

Any ideas?

Thank you.

Demo not working

Hello,
Great work; I just want to try it out.

Am I the only one who cannot run the demo? It loads the PDF, I paste the key, then ask the question.

The result is "Error", even though I tried several PDF files.

Any suggestions?

Thank you

gradio api not working

The app works fine in the local playground, but when I make an API request with the given example, it doesn't work.

Too slow

After I downloaded and installed all the dependencies, I ran python app.py, but it just sits there and does nothing, even after 30 minutes. What could be going on?

Request: GPT-3.5-turbo

Is there a specific reason (like less hallucination) why we are using text-davinci-003 over gpt-3.5-turbo?

It would be nice if there were an option to switch to GPT-3.5-turbo.
