
chat-llamaindex's People

Contributors

dependabot[bot], ekaone, gappc, himself65, joshuasundance-swca, jwandekoken, marcusschiesser, thucpn, tolgayan, yisding


chat-llamaindex's Issues

[Bug] Addition of Sentry seems to have broken the docker build

Describe the bug
I cloned the main branch and followed the docker build instructions. Instead of getting a working Docker container, I get the following error.

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repository: git clone https://github.com/run-llama/chat-llamaindex
  2. Copy the env template: cp .env.template .env.development.local
  3. Build the image: docker build -t chat-llamaindex .
  4. See the error
 => [build 2/6] WORKDIR /usr/src/app                                                                                                 0.8s
 => [runtime 2/6] WORKDIR /usr/src/app                                                                                               0.8s
 => [build 3/6] COPY package.json pnpm-lock.yaml ./                                                                                  0.0s
 => [build 4/6] RUN npm install -g pnpm &&     pnpm install                                                                         14.6s
 => [build 5/6] COPY . .                                                                                                             0.3s
 => ERROR [build 6/6] RUN pnpm build                                                                                                39.5s
------
 > [build 6/6] RUN pnpm build:
0.674
0.674 > chat-llamaindex@ build /usr/src/app
0.674 > next build
0.674
1.363 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.363 This information is used to shape Next.js' roadmap and prioritize features.
1.363 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.363 https://nextjs.org/telemetry
1.363
1.419   ▲ Next.js 14.2.1
1.419
1.475    Creating an optimized production build ...
1.810 warn  - It seems like you don't have a global error handler set up. It is recommended that you add a global-error.js file with Sentry instrumentation so that React rendering errors are reported to Sentry. Read more: https://docs.sentry.io/platforms/javascript/guides/nextjs/manual-setup/#react-render-errors-in-app-router
38.00 Failed to compile.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00   caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00   caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.01
38.01 > Build failed because of webpack errors
38.22  ELIFECYCLE  Command failed with exit code 1.
------
Dockerfile:18
--------------------
  16 |
  17 |     # Build the application for production
  18 | >>> RUN pnpm build
  19 |
  20 |     # ---- Production Stage ----
--------------------
ERROR: failed to solve: process "/bin/sh -c pnpm build" did not complete successfully: exit code: 1

Expected behavior
The Docker build should finish without errors.
Deployment

  • [ x ] Docker
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: Ubuntu 23.10
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]
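A possible workaround for the failing sentry-cli call, offered only as a minimal sketch (the DISABLE_SENTRY flag name is an assumption, not existing configuration): skip the Sentry webpack plugin when building inside Docker, so the build never tries to reach sentry.io with a broken certificate chain.

// next.config.mjs (sketch) -- not the project's actual config
import { withSentryConfig } from "@sentry/nextjs";

const nextConfig = {
  // ...the project's existing Next.js options...
};

// Only wrap with Sentry when the flag is not set. The Dockerfile could then run
// `DISABLE_SENTRY=1 pnpm build` so the sentry-cli "releases new" request is never made.
export default process.env.DISABLE_SENTRY ? nextConfig : withSentryConfig(nextConfig);

Whether skipping the plugin also disables runtime error reporting depends on the Sentry SDK version in use, so treat this as a local/Docker workaround rather than a fix.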

[Bug] Emoji not loading

Describe the bug
The emojis are not loading.

To Reproduce
Steps to reproduce the behavior:

  1. Open the app, e.g. https://chat-llamaindex.vercel.app/
  2. There are no emojis

Expected behavior
The expectation is that the emojis will be loaded and displayed.


Deployment

  • Vercel (tested)
  • local development (tested)

The issue likely exists everywhere: the CDN used to load the emojis (cdn.staticfile.org) appears to no longer host the emoji files.

A fix is provided in PR #57. That PR updates the CDN to Cloudflare and incorporates the latest emoji version.
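For illustration only, a sketch of the kind of change that PR describes; the helper name, package path, and version below are assumptions, so refer to PR #57 for the actual code.

// Build emoji image URLs from Cloudflare's cdnjs instead of cdn.staticfile.org.
// The emoji-datasource package name and version are illustrative.
export function getEmojiUrl(unified: string, style: string = "apple"): string {
  return `https://cdnjs.cloudflare.com/ajax/libs/emoji-datasource-${style}/15.0.1/img/${style}/64/${unified}.png`;
}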

[Bug] Why do I get BadRequestError: 400 when I run "npm run generate"?

I get this output when I run npm run generate:

BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 23869 tokens (23869 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
    at APIError.generate (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:256:25)
    at OpenAI.makeRequest (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:299:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async OpenAIEmbedding.getOpenAIEmbedding (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:82:26)
    at async OpenAIEmbedding.getTextEmbeddings (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:93:16)
    at async OpenAIEmbedding.getTextEmbeddingsBatch (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/types.js:32:36)
    at async VectorStoreIndex.getNodeEmbeddingResults (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:89:28)
    at async VectorStoreIndex.insertNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:189:34)
    at async VectorStoreIndex.buildIndexFromNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:109:9)
    at async VectorStoreIndex.init (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:55:13)
    at async VectorStoreIndex.fromDocuments (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:132:16)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:37:5
    at async getRuntime (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:22:3)
    at async generateDatasource (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:30:14)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:86:3
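The error means that a single embedding request exceeded the embedding model's 8192-token limit, i.e. at least one chunk sent to OpenAI was far too large. A minimal sketch of the usual remedy, assuming the llamaindex version in use still exposes serviceContextFromDefaults (it appears in other stack traces in this tracker) and treating the chunk sizes as illustrative:

// generate-sketch.mjs -- not the project's actual generate script
import { Document, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";

const serviceContext = serviceContextFromDefaults({
  chunkSize: 512,   // keep each chunk well under the 8192-token embedding limit
  chunkOverlap: 20,
});

const document = new Document({ text: "...the long source text that currently fails..." });
const index = await VectorStoreIndex.fromDocuments([document], { serviceContext });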

[Bug] First attempt to add a URL produces an error

To Reproduce
Steps to reproduce the behavior:

  1. Go to the input field
  2. Enter a URL and press Enter
  3. See the error

The second attempt works without any errors.

Desktop (please complete the following information):

  • OS: Windows 11
  • Browser: Edge
  • Version: latest

[Feature] support for OpenAI-like mock servers & OpenAI proxy servers

Currently, when I want to use OpenAI-like mock servers or proxy servers, there's no apparent way to manually modify openai.api_base and add headers to the openai Completion/ChatCompletion requests.

The mock server requires changing openai.api_base and specifying the model name.
The proxy server requires changing openai.api_base, providing openai.api_key, specifying the model name, and adding custom headers to the request.
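For context, the plain OpenAI Node SDK already accepts these knobs; a minimal sketch follows (this is not how chat-llamaindex is currently wired, and the URL, key, header, and model names are placeholders):

import OpenAI from "openai";

// Point the client at a mock or proxy server instead of api.openai.com.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? "not-needed-for-a-mock",
  baseURL: "http://localhost:8000/v1",              // the mock/proxy endpoint
  defaultHeaders: { "x-custom-header": "value" },   // extra headers the proxy requires
});

const completion = await client.chat.completions.create({
  model: "my-proxied-model",                        // model name the mock/proxy expects
  messages: [{ role: "user", content: "Hello" }],
});

Exposing these three settings (base URL, key, headers) through the app's configuration would cover both the mock and the proxy use case.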

[Bug] - Vercel Blob token required for local usage

Describe the bug
I want to understand if it's possible to use this app without connecting to Vercel when running it locally. When I try to upload an image, I see the following error:
[Upload] BlobError: Vercel Blob: No token found. Either configure the BLOB_READ_WRITE_TOKEN environment variable, or pass a token option to your calls.

To Reproduce
Steps to reproduce the behavior:

  1. Open the vision preview bot
  2. Upload an image

Expected behavior
Not sure if this is supported, but can we use this project locally without requiring Vercel tokens?
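For reference, the error message itself points at the two available options; a minimal sketch (the path and variable names are illustrative, not the project's actual upload code):

import { put } from "@vercel/blob";

// Option 1: add BLOB_READ_WRITE_TOKEN=... to .env.development.local and let the SDK read it.
// Option 2: pass the token explicitly on each call:
const blob = await put("uploads/image.png", fileData, {
  access: "public",
  token: process.env.BLOB_READ_WRITE_TOKEN, // assumed to be set for local runs
});

Either way a Vercel Blob token is still required; a truly token-free local setup would need the upload route to use a different storage backend.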


Deployment

  • Docker
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: macOS
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]


[Bug] Error: Set OpenAI Key in OPENAI_API_KEY env variable

I'm trying to generate a new data source, and when I run pnpm run generate <datasource-name> I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation.

chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470
      throw new Error("Set OpenAI Key in OPENAI_API_KEY env variable");
            ^
Error: Set OpenAI Key in OPENAI_API_KEY env variable
    at new OpenAISession (~\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470:13)
    at getOpenAISession (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:486:15)
    at new OpenAI2 (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:606:81)
    at serviceContextFromDefaults (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:2075:71)
    at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:54:26
    at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:61:3
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:530:24)
    at async loadESM (node:internal/process/esm_loader:91:5)
    at async handleMainPromise (node:internal/modules/run_main:65:12)

Node.js v18.12.1
 ELIFECYCLE  Command failed with exit code 1.
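A possible workaround while this is investigated, sketched under the assumption that the generate script just reads process.env: load the env file explicitly with the dotenv package before anything from llamaindex runs, or export OPENAI_API_KEY in the shell before invoking pnpm run generate.

// load-env.mjs (sketch) -- run before, or import at the very top of, scripts/generate.mjs
import dotenv from "dotenv";

dotenv.config({ path: ".env.development.local" }); // makes OPENAI_API_KEY visible to llamaindex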

[Feature] Local LLM Support

I would like to be able to run this with local LLM stacks like LiteLLM, Ollama, etc.

Could you provide a parameter to specify the LLM and base URL?
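As a rough illustration of what such a parameter could map to, assuming the installed llamaindex version ships an Ollama LLM class (the exact option names may differ between releases):

import { Ollama, serviceContextFromDefaults } from "llamaindex";

// Use a locally served model instead of OpenAI for chat and completions.
const serviceContext = serviceContextFromDefaults({
  llm: new Ollama({ model: "llama2" }),
});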

Supported LLM: Azure OpenAI?

You have indicated that the ChatGPT-Next-Web project was used as a starter template for this project. Can you please confirm whether LlamaIndex Chat supports Azure OpenAI?

If yes, please provide instructions for switching to Azure OpenAI.
If not, will this be treated as a feature enhancement? Is there a quick way to make the switch to Azure OpenAI?

Content of .env.development.local file:

# Your openai api key. (required)
OPENAI_API_KEY=sk-xxxx
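Not something the project documents today, but one commonly used pattern for pointing an OpenAI-compatible client at Azure OpenAI is sketched below; the resource name, deployment, and API version are placeholders:

import OpenAI from "openai";

// Azure routes requests per deployment and authenticates with an "api-key" header.
const resource = "my-resource";       // <resource>.openai.azure.com
const deployment = "gpt-35-turbo";    // your Azure deployment name
const apiKey = process.env.AZURE_OPENAI_API_KEY ?? "";

const client = new OpenAI({
  apiKey,
  baseURL: `https://${resource}.openai.azure.com/openai/deployments/${deployment}`,
  defaultQuery: { "api-version": "2023-07-01-preview" },
  defaultHeaders: { "api-key": apiKey },
});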

[Bug] Deployment on AWS Amplify is not working properly

Describe the bug
In the local setup it works fine, but when I deploy it to AWS Amplify, the API call returns an internal server error (500).

Now it's calling this API: https://develop.d2tnt2s5bwrvl6.amplifyapp.com/api/llm
instead of: http://localhost:3000/api/llm

To Reproduce
Steps to reproduce the behavior:
Deploy it to AWS Amplify.

Expected behavior
Should call api/llm successfully

Deployment

  • AWS Amplify

Desktop (please complete the following information):

  • OS: [e.g. windows]
  • Browser [chrome]


Add a Roadmap on project's README

It would be great to be able to see which features are currently being developed and which are planned for the future.

For example, I'm wondering whether you're aiming for feature parity with OpenAI anytime soon, or when you're actually going to support running open-source models.
I'm sure I'm not alone, so again, it would be great to have access to that information!

[Bug] Hit Token limit when using the generate command

Describe the bug

Ran into this when running the generate <datasource> command.

BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
  error: {
    message: "This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
    type: 'invalid_request_error',
    param: null,
    code: null
  },

I expected it to split the documents for me?
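The splitter is supposed to chunk the documents, so an 18k-token request suggests at least one chunk is never being split (for example a file without the sentence boundaries the splitter recognizes). A small diagnostic sketch, assuming the installed llamaindex version exports SentenceSplitter with these option names, to see how a problematic file actually gets chunked:

import { SentenceSplitter } from "llamaindex";

const problematicFileText = "...contents of the file that triggers the 400...";

const splitter = new SentenceSplitter({ chunkSize: 512, chunkOverlap: 20 });
const chunks = splitter.splitText(problematicFileText);   // string[] of chunk texts

console.log(`${chunks.length} chunks, longest is ${Math.max(...chunks.map((c) => c.length))} characters`);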

[Feature] and [Bug] Project is named llama index - but doesn't support llama

Is your feature request related to a problem? Please describe.

I went to use this project and found that it doesn't seem to actually use or support llama despite the name.

It appears to be locked into only using OpenAI's proprietary SaaS product.

e.g. https://github.com/run-llama/chat-llamaindex/blob/main/.env.template#L1

Describe the solution you'd like

  • Support for local / self-hosted LLMs such as llama.
  • There should be configuration where you provide the API endpoint for your LLM.
    • This could be an OpenAI-style API; if so, I would highly recommend using LiteLLM for this, as it's a quick and easy solution that's being widely adopted.
    • Alternative options include adding support for the Text Generation Web UI native API.

Describe alternatives you've considered

Maybe rename the project to chat-openai-index or similar if it hasn't got anything to do with Llama, as the current name may confuse folks.

Additional context
N/A

[Feature] Connect bot to data source

It seems intuitive that, once a user creates a data source, they should be able to query it somehow. It would be great if there were a field in the 'create bot' window to connect the bot to an existing data source.

It's entirely possible I'm missing something, but I can't see how to make that connection at the moment.

Thank you very much,
Adam


[Bug] Error getting OPENAI_API_KEY from .env.development.local: ENOENT: no such file or directory

On Windows 10. I'm trying to generate a new data source, and when I run pnpm run generate <datasource-name> I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation. Related to #23, which was closed as addressed.

Error getting OPENAI_API_KEY from .env.development.local: ENOENT: no such file or directory, open 'C:\C:\Users\xxx\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\.env.development.local'
 ELIFECYCLE  Command failed with exit code 1.
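The doubled "C:\C:\" prefix usually means an already-absolute Windows path was glued onto another root, for example by using a file URL's pathname directly. An illustrative sketch of the pitfall and the safe alternatives (not the project's actual code):

import path from "node:path";
import { fileURLToPath } from "node:url";

const envFile = ".env.development.local";

// Safe: resolve relative to the working directory the script was started from.
console.log(path.resolve(process.cwd(), envFile));

// Pitfall on Windows: a file URL's pathname looks like "/C:/Users/...",
// and prefixing it with a drive or cwd again produces "C:\C:\Users\...".
console.log(new URL(import.meta.url).pathname);    // "/C:/Users/..."
console.log(fileURLToPath(import.meta.url));       // correct "C:\Users\..." form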

Deactivated GPT4 on chat.llamaindex.ai

Dear friends, I regret to inform you that Chat LlamaIndex's responses now fail with the following error:

{
  "error": true,
  "message": "There was an error calling the OpenAI API. Please try again later."
}

[Feature] Add PDF OCR Support

As a user, I want to be able to upload PDF documents to LlamaIndex Chat for indexing and have the text contents of those PDFs extracted via OCR so that OpenAI can easily process the text data. Many PDF files are scanned copies and not truly searchable PDFs.

[Bug] Generating datasource not working anymore on the latest update

Describe the bug
I've been using this project as a base for my site since last month and have just tried upgrading to the newest update (llamaindex edge).
Everything else seems fine, but generating a datasource (pnpm run generate ) doesn't seem to work anymore (tested on the original repository by cloning the latest update).
The error says: "Cannot find package 'llamaindex' imported from..."

Additional bug:
On https://chat.llamaindex.ai/ there is an error with the bots based on data sources (the Red Hat Linux Expert, Apple Watch Genius, and German Basic Law Expert bots are not working).

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repository
  2. Follow the steps to create a new data source (add a folder to the datasources folder -> add files to the folder)
  3. Run "pnpm run generate " in the terminal
  4. See the error
  5. Go to Chat LlamaIndex
  6. Select the "German Basic Law Expert" bot
  7. Ask any question
  8. See the error

Expected behavior
A new VectorStoreIndex for the data source is created (new data source folder and data in the cache folder).
Normal chat experience at the Chat LlamaIndex site.


Deployment

  • Docker
  • Vercel
  • Server

Desktop (please complete the following information): Not applicable

Smartphone (please complete the following information): Not applicable


Thank you!!

[Bug] Warning: filter "Crypt" not supported yet

Not an error. When I run the script to generate a new data source, I get a whole set of the following warning messages. The source contains PDFs. What is the reason? And how would I know if all of my PDFs got processed?

Warning: filter "Crypt" not supported yet
Warning: Could not find a preferred cmap table.
Warning: Required "glyf" table is not found -- trying to recover.
Warning: TT: undefined function: 32
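These warnings come from the underlying PDF parser. To check whether every PDF was actually picked up, one option is to compare the files on disk with what the reader loads. A sketch under the assumption that the installed llamaindex version exports SimpleDirectoryReader and that the datasource lives under ./datasources/<name>:

import fs from "node:fs";
import { SimpleDirectoryReader } from "llamaindex";

const dir = "./datasources/my-datasource";   // hypothetical datasource folder
const pdfCount = fs.readdirSync(dir).filter((f) => f.toLowerCase().endsWith(".pdf")).length;

const docs = await new SimpleDirectoryReader().loadData({ directoryPath: dir });
// PDF readers may emit more than one document per file (often one per page),
// so expect docs.length >= pdfCount when every file was processed.
console.log(`${pdfCount} PDF files on disk, ${docs.length} documents loaded`);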

[Bug] TypeError: text.match is not a function

Describe the bug
TypeError: text.match is not a function

To Reproduce
Steps to reproduce the behavior:

  1. Create a GPT-4V model
  2. Upload an image
  3. Prompt the model to "explain this picture"
  4. The error is generated

Expected behavior
Proper response from model

Deployment

  • Vercel

Desktop (please complete the following information):

  • OS: Windows 10
  • Browser: Chromium
  • Version: latest

[Feature] Python version

Dear all,
Thanks for this great contribution to the LLM community.
Are you considering a chat-llamaindex implementation based on Python instead of TypeScript?

Unclear where to add datasource for bots created in UI

First of all, thanks for the great solution.

Everything is running fine locally, but I'm not clear on where to edit the bots created from the UI. When I go to apps/bots/bot.data.ts, I do not see the bot I created, and when I edit one of the demo bots in that file, I don't see the changes in the UI.
