run-llama / chat-llamaindex
Home Page: https://chat.llamaindex.ai
License: MIT License
We want to switch this application to use Azure OpenAI for embeddings and inferences.
Please guide us with the steps.
Thanks.
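For orientation, the switch mostly comes down to a different request URL shape and a different auth header. A minimal sketch of the standard Azure OpenAI REST conventions; the resource, deployment, and API version below are placeholders, not values from this repository:

```typescript
// Azure OpenAI addresses a *deployment* rather than a model name, and the
// API version is passed as a query parameter.
function buildAzureChatUrl(
  endpoint: string,   // e.g. "https://<resource>.openai.azure.com" (placeholder)
  deployment: string, // the deployment name configured in the Azure portal
  apiVersion: string, // e.g. "2024-02-01" (placeholder)
): string {
  return `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}

// Azure authenticates with an "api-key" header instead of "Authorization: Bearer".
function buildAzureHeaders(apiKey: string): Record<string, string> {
  return { "api-key": apiKey, "Content-Type": "application/json" };
}
```

Embeddings use the same URL shape with /embeddings in place of /chat/completions, so both the inference and embedding configuration would need the deployment-based endpoint.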
Problems with uploading PDF files after update #76.
Uploading the identical file was working two days ago.
I am working on a Mac with Sonoma 14.2.1 and Chrome version 122.0.6261.69.
Describe the bug
I cloned the main branch and followed the Docker build instructions. Instead of a working Docker container, I get the following error.
To Reproduce
Steps to reproduce the behavior:
git clone https://github.com/run-llama/chat-llamaindex
cp .env.template .env.development.local
docker build -t chat-llamaindex .
=> [build 2/6] WORKDIR /usr/src/app 0.8s
=> [runtime 2/6] WORKDIR /usr/src/app 0.8s
=> [build 3/6] COPY package.json pnpm-lock.yaml ./ 0.0s
=> [build 4/6] RUN npm install -g pnpm && pnpm install 14.6s
=> [build 5/6] COPY . . 0.3s
=> ERROR [build 6/6] RUN pnpm build 39.5s
------
> [build 6/6] RUN pnpm build:
0.674
0.674 > chat-llamaindex@ build /usr/src/app
0.674 > next build
0.674
1.363 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.363 This information is used to shape Next.js' roadmap and prioritize features.
1.363 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.363 https://nextjs.org/telemetry
1.363
1.419 ▲ Next.js 14.2.1
1.419
1.475 Creating an optimized production build ...
1.810 warn - It seems like you don't have a global error handler set up. It is recommended that you add a global-error.js file with Sentry instrumentation so that React rendering errors are reported to Sentry. Read more: https://docs.sentry.io/platforms/javascript/guides/nextjs/manual-setup/#react-render-errors-in-app-router
38.00 Failed to compile.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00 caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00 caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.01
38.01 > Build failed because of webpack errors
38.22 ELIFECYCLE Command failed with exit code 1.
------
Dockerfile:18
--------------------
16 |
17 | # Build the application for production
18 | >>> RUN pnpm build
19 |
20 | # ---- Production Stage ----
--------------------
ERROR: failed to solve: process "/bin/sh -c pnpm build" did not complete successfully: exit code: 1
Expected behavior
The docker build should finish without error
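The compile failure comes from the Sentry CLI release step (an SSL certificate problem reaching sentry.io), not from the app code itself. One possible workaround, assuming the project wires Sentry through @sentry/nextjs's withSentryConfig as the warning in the log suggests, is to disable the Sentry webpack plugins for local Docker builds. The option names below are from @sentry/nextjs v7 and should be checked against the version pinned in package.json:

```typescript
// next.config.js (sketch -- option names assume @sentry/nextjs v7; verify
// against the installed version before relying on them).
const { withSentryConfig } = require("@sentry/nextjs");

const nextConfig = {
  sentry: {
    // Skip the sentry-cli release upload, which fails behind TLS-intercepting
    // proxies with "unable to get local issuer certificate".
    disableServerWebpackPlugin: true,
    disableClientWebpackPlugin: true,
  },
};

module.exports = withSentryConfig(nextConfig, { silent: true });
```

Alternatively, installing the intercepting proxy's CA certificate into the build image (e.g. via the distribution's ca-certificates mechanism) addresses the root cause rather than skipping the release step.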
Describe the bug
The emojis are not loading.
Expected behavior
The expectation is that the emojis will be loaded and displayed.
Deployment
The issue likely exists everywhere, the CDN used to load the emojis (cdn.staticfile.org) appears to no longer host emoji files.
A fix is provided in PR #57. That PR updates the CDN to Cloudflare and incorporates the latest emoji version.
I get this output when I run npm run generate:
BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 23869 tokens (23869 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
    at APIError.generate (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:256:25)
    at OpenAI.makeRequest (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:299:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async OpenAIEmbedding.getOpenAIEmbedding (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:82:26)
    at async OpenAIEmbedding.getTextEmbeddings (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:93:16)
    at async OpenAIEmbedding.getTextEmbeddingsBatch (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/types.js:32:36)
    at async VectorStoreIndex.getNodeEmbeddingResults (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:89:28)
    at async VectorStoreIndex.insertNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:189:34)
    at async VectorStoreIndex.buildIndexFromNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:109:9)
    at async VectorStoreIndex.init (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:55:13)
    at async VectorStoreIndex.fromDocuments (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:132:16)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:37:5
    at async getRuntime (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:22:3)
    at async generateDatasource (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:30:14)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:86:3
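The embedding model caps input at 8192 tokens, so documents larger than that must be split before embedding. A rough sketch of the idea, using the common ~4-characters-per-token approximation rather than a real tokenizer:

```typescript
// Split text into chunks that stay under a token budget. The 4-chars-per-token
// ratio is a coarse heuristic (an assumption), not tiktoken; real splitters
// also try to break on sentence boundaries.
function splitByTokenBudget(text: string, maxTokens = 8192): string[] {
  const maxChars = maxTokens * 4;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}
```

In llamaindex's legacy TS API, a smaller chunk size can reportedly also be configured via serviceContextFromDefaults({ chunkSize: 512 }); check the installed version's documentation.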
I have been trying to integrate PaLM 2 with this project, both for the chat engine and for embeddings, but I'm unable to run it.
Currently, when I want to use OpenAI-like mock servers or proxy servers, there's no apparent way to manually modify openai.api_base or add headers to openai Completion/ChatCompletion requests.
The mock server requires changing openai.api_base and specifying the model name.
The proxy server requires changing openai.api_base, providing openai.api_key, specifying the model name, and adding custom headers to the request.
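At the HTTP level, such an override boils down to swapping the base URL and merging extra headers into each request. A hypothetical sketch; the helper name and defaults are illustrative, not an existing API in this repository:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper: build a chat-completion request against an arbitrary
// OpenAI-compatible server (mock or proxy), with a custom base URL and headers.
function buildChatRequest(
  baseUrl: string,                       // e.g. "http://localhost:8080/v1" (assumed)
  apiKey: string,
  model: string,
  messages: ChatMessage[],
  extraHeaders: Record<string, string> = {},
) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      ...extraHeaders, // proxy-specific headers merged in here
    },
    body: JSON.stringify({ model, messages }),
  };
}
```

For the proxy case specifically, the openai v4 Node SDK already exposes similar constructor options (baseURL and defaultHeaders), which may be enough without a custom helper.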
Embedding the frontend application into a Django web application. This could be really useful when trying to manage files, accounts, and analytics. Thank you.
Describe the bug
I want to understand if it's possible to use this app locally without connecting to Vercel. When I try to upload an image, I see the following error:
[Upload] BlobError: Vercel Blob: No token found. Either configure the BLOB_READ_WRITE_TOKEN environment variable, or pass a token option to your calls.
Expected behavior
Not sure if this is supported, but can we use this project locally without requiring Vercel tokens?
Trying to generate a new data source; when I run pnpm run generate <datasource-name>,
I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation.
chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470
throw new Error("Set OpenAI Key in OPENAI_API_KEY env variable");
^
Error: Set OpenAI Key in OPENAI_API_KEY env variable
at new OpenAISession (~\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470:13)
at getOpenAISession (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:486:15)
at new OpenAI2 (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:606:81)
at serviceContextFromDefaults (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:2075:71)
at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:54:26
at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:61:3
at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
at async Promise.all (index 0)
at async ESMLoader.import (node:internal/modules/esm/loader:530:24)
at async loadESM (node:internal/process/esm_loader:91:5)
at async handleMainPromise (node:internal/modules/run_main:65:12)
Node.js v18.12.1
ELIFECYCLE Command failed with exit code 1.
Would like to be able to run this with local LLM stacks like LiteLLM or Ollama.
Could you provide a parameter to specify the LLM and base URL?
I built a small RAG with a local embedding model in the normal python-based llamaindex. How do I use this react-based chat application with the python-based chat engine? Or what is the idiomatic way to have a GUI chat for the python-based llamaindex?
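A common pattern is to keep a React/Next.js frontend and point its chat calls at a thin HTTP wrapper (e.g. FastAPI) around the Python chat engine. A sketch of the frontend side; the /api/chat path and payload shape are assumptions, not an existing contract in this repository:

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// Hypothetical request builder for a Python (e.g. FastAPI) chat backend.
// The endpoint path and JSON shape are assumptions to adapt to your server.
function buildBackendRequest(history: ChatMessage[], message: string) {
  return {
    url: "/api/chat",
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [...history, { role: "user", content: message }],
    }),
  };
}
```

The create-llama generator takes a similar approach, pairing a Next.js chat frontend with an optional Python FastAPI backend, which may be the more idiomatic route for a Python-based chat engine.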
You have indicated that the ChatGPT-Next-Web project was used as a starter template for this project. Can you please confirm whether LlamaIndex Chat supports Azure OpenAI?
If yes, please provide instructions for switching to Azure OpenAI.
If no, will this be treated as a feature enhancement? Is there a quick way to make the switch to Azure OpenAI?
Content of .env.development.local file
OPENAI_API_KEY=sk-xxxx
Please support Azure OpenAI APIs.
Describe the bug
In the local setup it's working fine, but when I deploy it to AWS Amplify, the API call returns an internal server error (500).
Now it's calling this API: https://develop.d2tnt2s5bwrvl6.amplifyapp.com/api/llm
instead of: http://localhost:3000/api/llm
To Reproduce
Steps to reproduce the behavior:
Deploy it to AWS Amplify
Expected behavior
Should call api/llm successfully
It would be great to be able to see which features are currently being developed and which are planned for the future.
For example, I'm wondering if you're aiming for feature parity with OpenAI anytime soon, or when you are actually going to support running open-source models.
I'm sure I'm not alone, so again, it would be great to have access to this information!
Describe the bug
Ran into this when running the generate <datasource> command.
BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
error: {
message: "This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
type: 'invalid_request_error',
param: null,
code: null
},
I expected it to split the documents for me?
Is your feature request related to a problem? Please describe.
I went to use this project and found that it doesn't seem to actually use or support llama despite the name.
It appears to be locked into only using OpenAI's proprietary SaaS product.
e.g. https://github.com/run-llama/chat-llamaindex/blob/main/.env.template#L1
Describe alternatives you've considered
Maybe rename the project to chat-openai-index or similar if it hasn't got anything to do with Llama as it may confuse folks.
Additional context
N/A
It seems intuitive that, once a user creates a data source, they should be able to query it somehow. It would be great if there were a field in the 'create bot' window to connect a bot to an existing data source.
It's entirely possible I'm missing something, but I can't see how to make that connection at the moment.
Thank you very much,
Adam
On Windows 10, trying to generate a new data source: when I run pnpm run generate <datasource-name>,
I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation. Related to #23, which was closed as addressed.
Error getting OPENAI_API_KEY from .env.development.local: ENOENT: no such file or directory, open 'C:\C:\Users\xxx\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\.env.development.local'
ELIFECYCLE Command failed with exit code 1.
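The doubled drive letter in 'C:\C:\Users\...' is the classic symptom of concatenating one absolute Windows path onto another. A small demonstration of the failure mode and the usual fix, using Node's win32 path helpers (the specific paths are illustrative):

```typescript
import path from "node:path";

// path.join simply concatenates and normalizes, so joining two absolute
// Windows paths leaves the second drive letter embedded mid-path:
const broken = path.win32.join("C:\\base", "C:\\Users\\xxx\\.env.development.local");

// path.resolve treats a later absolute segment as the new root instead,
// which avoids the doubled drive letter:
const fixed = path.win32.resolve("C:\\base", "C:\\Users\\xxx\\.env.development.local");
```

If the generate script builds the env-file path this way, switching the offending join/concatenation to path.resolve (or passing a relative path) would be the likely fix.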
Dear friends, I regret to inform you that Chat LlamaIndex's responses are failing:
{
"error": true,
"message": "There was an error calling the OpenAI API. Please try again later."
}
As a user, I want to be able to upload and train PDF documents to LlamaIndex Chat and have the text contents of those PDFs extracted via OCR so that OpenAI can easily process the text data. Many PDF files are scanned copies and not true searchable PDFs.
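A common heuristic for routing scanned PDFs to OCR is to check how much text plain extraction recovers per page: image-only scans yield almost none. A sketch; the threshold is a tunable assumption, not a specification:

```typescript
// Decide whether a PDF likely needs OCR, based on extracted text volume.
// minCharsPerPage is a rough, tunable threshold (an assumption); real
// scanned documents typically extract to near-zero characters per page.
function needsOcr(
  extractedText: string,
  pageCount: number,
  minCharsPerPage = 50,
): boolean {
  if (pageCount <= 0) return false;
  const charsPerPage = extractedText.replace(/\s+/g, "").length / pageCount;
  return charsPerPage < minCharsPerPage;
}
```

Documents flagged this way would then be sent through an OCR engine before indexing, while true searchable PDFs keep the faster extraction path.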
Describe the bug
I've been using this project as a base for my site since last month and have just tried upgrading to the newest update (llamaindex edge).
Everything else seems fine, but generating a datasource (pnpm run generate <datasource-name>) doesn't seem to be working anymore. (Tested on the original repository by cloning the latest update.)
The error says: "Cannot find package 'llamaindex' imported from..."
Additional bug:
https://chat.llamaindex.ai/
There is an error with the bots based on data sources. (The Red Hat Linux Expert, Apple Watch Genius, and German Basic Law Expert bots are not working.)
To Reproduce
Steps to reproduce the behavior:
Clone repository
Follow the steps to create a new data source (add a folder to the datasources folder -> add files to the folder)
Run "pnpm run generate <datasource-name>" in the terminal
See error
Go to Chat Llamaindex
Select "German Basic Law Expert" bot
Ask any question
See error
Expected behavior
A new VectorStoreIndex for the data source is created (new data source folder and data in the cache folder).
Normal chat experience at the Chat LlamaIndex site.
Deployment / Desktop / Smartphone: not applicable.
Thank you!!
Not an error. When I run the new data source generation script, I get a whole set of the following warning messages. The source contains PDFs. What is the reason? How would I know if all of my PDFs got processed?
Warning: filter "Crypt" not supported yet
Warning: Could not find a preferred cmap table.
Warning: Required "glyf" table is not found -- trying to recover.
Warning: TT: undefined function: 32
Describe the bug
TypeError: text.match is not a function
Expected behavior
Proper response from model
Dear all,
Thanks for this great contribution to the LLM community.
Are you considering a chat-llamaindex implementation based on Python instead of TypeScript?
First of all, thanks for the great solution.
Everything is running fine locally, but I'm not clear on where to edit the bots created from the UI. When I go to apps/bots/bot.data.ts, I do not see the bot I created, and when I edit one of the demo bots in that file, I don't see the changes in the UI.