
chat-ollama's People

Contributors

doggy8088, eltociear, gslin1224, jackxwb, jacky97s, lengweiping1983, lihaowang, nxy666, qitest, satrong, shunyue1320, sugarforever, sylaryip, wusung, xmx0632, yinzhidong


chat-ollama's Issues

Question about using the OpenAI API

The app requires an Ollama server running on http://localhost:11434/.
If I want to use OpenAI GPT-4 as the LLM, how should I change the configuration?

Or can it currently only run open-source models through Ollama?
Thank you! (My own GPU isn't up to it, so I'd like to use the OpenAI API.)

Cannot connect to the installed Ollama to fetch models

Hi, following the tutorial I deployed chatollama on a server with Docker. The server also has the Ollama service installed, with several models downloaded, and it runs fine:
tcp 0 0 127.0.0.1:11434 0.0.0.0:* LISTEN 2216/ollama
But after setting the Ollama server to http://host.docker.internal:11434 as the tutorial describes, refreshing the model list shows none of the downloaded models, and the backend reports [nuxt] [request error] [unhandled] [500] fetch failed:
chatollama-1 | at Object.fetch (node:internal/deps/undici/undici:11576:11)
chatollama-1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5). Any advice?
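Judging from the netstat line above, Ollama is bound to 127.0.0.1 only, so it is unreachable from inside the container even through host.docker.internal. A possible fix (a sketch, assuming your Ollama is started manually rather than by a service manager) is to bind Ollama to all interfaces:

```shell
# Bind Ollama to all interfaces so Docker containers can reach it
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```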

Very attractive project, but too complex to promote

This project is very attractive, especially the custom knowledge bases, but deploying and using it is too hard; ordinary users, and even people with some technical background, may not manage it, which greatly limits the audience.
It would help to modularize the project and lower the barrier to entry, e.g. a distributed Docker deployment with Ollama as one module, the database as another, and the frontend as a third, all connected into a single whole.
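The modular deployment suggested above could be sketched as a Compose file (service names, images, and environment variables here are illustrative assumptions, not the project's actual configuration):

```yaml
# Hypothetical docker-compose.yaml sketch: one service per module
services:
  chatollama:            # frontend + API module
    image: chatollama/chatollama:latest
    ports: ["3000:3000"]
    environment:
      - OLLAMA_HOST=http://ollama:11434
      - CHROMADB_URL=http://chroma:8000
  ollama:                # LLM runtime module
    image: ollama/ollama:latest
    ports: ["11434:11434"]
  chroma:                # vector database module
    image: chromadb/chroma:latest
    ports: ["8000:8000"]
```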

[request] plugins for flexibility/customizability of 3rd-party systems integration

I believe it already supports OpenAI and Anthropic at the moment. For the long run, would it be possible to have a plugin system that lets users add more LLMs, and eventually embeddings/vector DBs/etc., via a public API? For example, one or a few YAML files defining the input/output schema of a third-party system to integrate.
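Such a YAML plugin descriptor might look something like the following (all field names are purely illustrative assumptions):

```yaml
# Hypothetical plugin descriptor for a third-party LLM endpoint
name: my-llm-provider
kind: llm                # llm | embeddings | vectordb
endpoint: https://api.example.com/v1/chat
auth:
  type: bearer
  env_var: MY_LLM_API_KEY
input_schema:
  model: string
  messages: list
output_schema:
  content: string
```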

[Knowledge base upload error] The table main.KnowledgeBase does not exist in the current database

http://localhost:3000/knowledgebases

Simply refreshing the knowledge base page throws an error, and uploading to the knowledge base fails too. docker logs shows the following:

The table main.KnowledgeBase does not exist in the current database.
    at _n.handleRequestError (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:6854)
    at _n.handleAndLogRequestError (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:6188)
    at _n.request (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:5896)
    at async l (/app/.output/server/node_modules/@prisma/client/runtime/library.js:128:10871)
    at async listKnowledgeBases (file:///app/.output/server/chunks/index.get2.mjs:13:12)
    at async Object.handler (file:///app/.output/server/chunks/index.get2.mjs:24:26)
    at async Object.handler (file:///app/.output/server/chunks/nitro/node-server.mjs:2908:19)
    at async toNodeHandle (file:///app/.output/server/chunks/nitro/node-server.mjs:3174:7)
    at async ufetch (file:///app/.output/server/chunks/nitro/node-server.mjs:3777:17)
    at async $fetchRaw2 (file:///app/.output/server/chunks/nitro/node-server.mjs:3650:26) {
  code: 'P2021',
  clientVersion: '5.10.2',
  meta: { modelName: 'KnowledgeBase', table: 'main.KnowledgeBase' }
}

[nuxt] [request error] [unhandled] [500] Cannot read properties of null (reading 'map')
    at ./.output/server/chunks/app/_nuxt/index-kxFoGMUd.mjs:101:40
    at ReactiveEffect.fn (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:927:13)
    at ReactiveEffect.run (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:170:19)
    at get value [as value] (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:938:68)
    at unref (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:1036:29)
    at ./.output/server/chunks/app/_nuxt/index-kxFoGMUd.mjs:340:15
    at renderComponentSubTree (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:430:9)
    at ./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:374:25
    at async unrollBuffer$1 (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:617:16)
    at async unrollBuffer$1 (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:622:16)
Ollama: { host: 'http://localhost:11434', username: null, password: null }
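Prisma's P2021 error code usually means the schema migrations were never applied to the SQLite file the app is pointed at. Assuming the standard Prisma CLI workflow, running the migrations should create the missing table:

```shell
# Apply pending Prisma migrations (run inside the app directory/container)
npx prisma migrate deploy    # or `prisma migrate dev` during development
```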

Knowledge base creation fails when using nomic-embed-text for embeddings

Using nomic-embed-text as the embedding model, creating a knowledge base fails with a 500 error, as follows:

ReadableStream { locked: false, state: 'readable', supportsBYOB: false }
Error in handler LangChainTracer, handleChainError: Error: Failed to batch create run: 401 Unauthorized {"detail":"Need authorization header or api key"}
[nuxt] [request error] [unhandled] [500] Ollama call failed with status code 400: embedding models do not support chat

nomic-embed-text is an embedding model supported by Ollama, and manually calling http://localhost:11434/api/embeddings works fine. Could you take a look at what's wrong?
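For reference, the manual check mentioned above can be done with curl against Ollama's embeddings endpoint (this matches the Ollama REST API of that period; having the model pulled locally is assumed):

```shell
# Verify the embedding model works outside chat-ollama
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```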

Every page except the chat page fails; errors below

Instructions page:
Invalid prisma.instruction.findMany() invocation in
/opt/localLLM/chatdemo/chat-ollama/server/api/instruction/index.get.ts:6:1

Knowledge base page:
Error fetching knowledge bases: PrismaClientInitializationError:
Invalid prisma.knowledgeBase.findMany() invocation in
/opt/localLLM/chatdemo/chat-ollama/server/api/knowledgebases/index.get.ts:6:1

3 const listKnowledgeBases = async (): Promise<KnowledgeBase[] | null> => {
4 const prisma = new PrismaClient();
5 try {
→ 6 return await prisma.knowledgeBase.findMany(
Prisma Client could not locate the Query Engine for runtime "rhel-openssl-1.0.x".

Also, when chatting with mixtral the output keeps repeating and never stops; I had to shut down the whole backend.

Error downloading nomic-embed-text on the Models tab

Code version Git commit ID: 2dd0ddd
Clicking download produces a backend error.

Error message:

Ollama:  { host: 'null', username: 'null', password: 'null' }
Authorization: dW5kZWZpbmVkOnVuZGVmaW5lZA==
[nuxt] [request error] [unhandled] [500] fetch failed
  at node:internal/deps/undici/undici:12345:11  
  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
(the same three log lines repeat twice more)
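The Authorization value in the log is Basic-auth material; decoding it shows the configured credentials were literally the string "undefined", which matches the 'null' host/username/password above:

```typescript
// Decode the Authorization token seen in the log above
const token = "dW5kZWZpbmVkOnVuZGVmaW5lZA==";
const decoded = Buffer.from(token, "base64").toString("utf8");
console.log(decoded); // "undefined:undefined"
```

This suggests the Ollama server settings were never saved in the UI before the download was attempted.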


500 error when creating a knowledge base

Hello,
I've searched the existing issues about 500 errors and found nothing similar.

My error is as follows:

Created knowledge base test3: [object Object]
[nuxt] [request error] [unhandled] [500] fetch failed
  at node:internal/deps/undici/undici:13737:13
  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
  at async OllamaEmbeddings._request (./node_modules/@langchain/community/dist/embeddings/ollama.js:107:26)
  at async RetryOperation._fn (./node_modules/p-retry/index.js:50:12)

As you can see, the error occurs right after knowledge base test3 is created.

Environment info

  • chat-ollama cloned from the version of March 10, 2024
  • chat works fine
  • chromadb on port 8000, reachable
    • visiting it directly in a browser returns "detail": "Not Found"
  • I've tried deleting and rebuilding the sqlite database, and deleting and recreating the chromadb container; the error persists
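Note that a 404 with "detail": "Not Found" on the root path is normal for Chroma; a better liveness check is the heartbeat endpoint (path per Chroma's v1 REST API at the time):

```shell
# Expect a JSON heartbeat response if Chroma is up
curl http://localhost:8000/api/v1/heartbeat
```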

Does the knowledge base support the Windows platform?

Warning: Your current platform windows is not included in your generator's binaryTargets configuration ["linux-arm64-openssl-3.0.x","darwin-arm64","debian-openssl-3.0.x"].

Every model fails with [500] stream.getReader is not a function

Using the qwen:4b model fails with [500] stream.getReader is not a function
[
Document {
pageContent: '\n' +
'49\n' +
'\n' +
\n' +
'
\n' +
'【
\n' +
'
\n' +
'【
***************】',
metadata: { blobType: 'application/pdf', source: 'blob', loc: [Object] }
}
]
<ref *1> IterableReadableStream [ReadableStream] {
_state: 'readable',
_reader: undefined,
_storedError: undefined,
_disturbed: false,
_readableStreamController: ReadableStreamDefaultController {
_controlledReadableStream: [Circular *1],
_queue: SimpleQueue {
_cursor: 0,
_size: 0,
_front: [Object],
_back: [Object]
},
_queueTotalSize: 0,
_started: true,
_closeRequested: false,
_pullAgain: false,
_pulling: true,
_strategySizeAlgorithm: [Function (anonymous)],
_strategyHWM: 1,
_pullAlgorithm: [Function: pullAlgorithm],
_cancelAlgorithm: [Function: cancelAlgorithm]
},
reader: undefined
}
[nuxt] [request error] [unhandled] [500] stream.getReader is not a function
at Function.fromReadableStream (chat-ollama/node_modules/@langchain/core/dist/utils/stream.js:68:31)
at createOllamaStream (chat-ollama/node_modules/@langchain/community/dist/utils/ollama.js:36:43)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async createOllamaChatStream (chat-ollama/node_modules/@langchain/community/dist/utils/ollama.js:57:5)
at async ChatOllama._streamResponseChunks (chat-ollama/node_modules/@langchain/community/dist/chat_models/ollama.js:396:30)
at async ChatOllama._streamIterator (chat-ollama/node_modules/@langchain/core/dist/language_models/chat_models.js:78:34)
at async ChatOllama.transform (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:337:9)
at async wrapInputForTracing (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:231:30)
at async pipeGeneratorWithSetup (chat-ollama/node_modules/@langchain/core/dist/utils/stream.js:223:19)
at async StringOutputParser._transformStreamWithConfig (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:252:26)

Chat works, but the knowledge base errors

Chat works fine.

The knowledge base errors as follows:

Error: Failed to batch create run: 401 Unauthorized {"detail":"Need authorization header or api key"}

Add support for OpenAI-compatible APIs

Thank you for this great app.

I am wondering if you could add support for OpenAI-compatible APIs.

Many open source LLMs provide OpenAI compatible APIs.

For example,

https://console.groq.com/docs/openai,

https://github.com/groq/groq-python,

from openai import OpenAI

client = OpenAI(
    api_key='GROQ_API_KEY',
    base_url="https://api.groq.com/openai/v1",
)

response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[
        {"role": "system", "content": "You are my coding assistant."},
        {"role": "user", "content": "Can you tell me how to write python flask application?"}
    ]
)

print(response.choices[0].message.content)

or

import openai

openai.api_key = 'GROQ_API_KEY'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://api.groq.com/openai/v1/"
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)

These are implemented in Python.

Could you implement them in JS?

For example, add three environment variables:

openai_key='GROQ_API_KEY'
openai_model= "mixtral-8x7b-32768"
openai_base_url="https://api.groq.com/openai/v1"

And in the openai function call: add openai_model and openai_base_url parameters.
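As a sketch of what that could look like on the JS side (the helper name and structure are hypothetical, not the app's actual code), those three environment-style settings are enough to build a request against any OpenAI-compatible endpoint:

```typescript
// Hypothetical helper: build a chat-completions request for an
// OpenAI-compatible endpoint from base URL, API key, and model.
function buildChatRequest(baseUrl: string, apiKey: string, model: string, content: string) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages: [{ role: "user", content }] }),
    },
  };
}

const req = buildChatRequest(
  "https://api.groq.com/openai/v1",
  "GROQ_API_KEY",
  "mixtral-8x7b-32768",
  "How do I output all files in a directory using Python?"
);
console.log(req.url); // https://api.groq.com/openai/v1/chat/completions
```

The actual call would then be fetch(req.url, req.options).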

Thanks.

Hope this app is getting better.

How do I access chat-ollama remotely?

I tried modifying nuxt.config.ts and package.json, but neither seems to work.
nuxt.config.ts:
export default defineNuxtConfig({
  server: {
    host: '0.0.0.0',
    port: 8080
  },
  devtools: { enabled: true },
  modules: [
    '@nuxt/ui'
  ]
})
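One likely cause (an assumption based on Nuxt 3 conventions): Nuxt 3 does not read a top-level server key; the dev server is configured via devServer or the --host CLI flag. A sketch:

```typescript
// nuxt.config.ts — Nuxt 3 uses devServer instead of the old server key
export default defineNuxtConfig({
  devtools: { enabled: true },
  modules: ['@nuxt/ui'],
  devServer: {
    host: '0.0.0.0',
    port: 8080,
  },
})
```

Alternatively, `npm run dev -- --host 0.0.0.0` exposes the dev server without config changes.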

Could you support querying two models at the same time?

Split the answer panel into left and right halves: one for my own fine-tuned model, the other for a reference model to compare against (e.g. OpenAI). That would make it easy to compare the response quality of my model with others.

Thanks.

Discussion: the model-list endpoint errors; not sure whether it's a Node version issue

{
    "url": "/api/models/",
    "statusCode": 500,
    "statusMessage": "",
    "message": "fetch failed",
    "stack": "<pre><span class=\"stack internal\">at Object.fetch (node:internal/deps/undici/undici:11731:11)</span>\n<span class=\"stack internal\">at process.processTicksAndRejections (node:internal/process/task_queues:95:5)</span></pre>"
}

Windows system; the Node version in use is v18.19.1.

Add a data-storage section to the docs

#19

That merge request stores instructions in the database. The docs need a section describing the ChatOllama app's data-storage strategy:

  1. what is stored in local storage
  2. what is stored in the database

Database setup

Vector database

  https://hub.docker.com/r/chromadb/chroma/tags

  docker pull chromadb/chroma
  docker run -d -p 8000:8000 chromadb/chroma
  
  # Accessible at: http://localhost:8000

SQLite database

 # Copy .env.example to .env
 cp .env.example .env

 # Edit the sqlite path in the .env config file:
 DATABASE_URL=file:/Users/xxx/chat-ollama/prisma/.sqlite/chatollama.sqlite

Run the database migrations

  # Install prisma
  npm install -g prisma
  
  # Run the migrations
  cd prisma
  prisma migrate dev

Two suggestions: document grouping, and showing API calls in the UI

  • Document grouping: suppose I handle several areas of work such as finance, HR, and quality control, each with its own set of documents; each group would get its own vector database. This is more practical for two reasons: first, it fits real workflows; second, colleagues don't need to know which files exist, they just visit the web UI to get the relevant knowledge, turning traditional file sharing among coworkers into knowledge sharing. It has very real application scenarios.

  • Showing API call examples in the web UI matters because much of our work is discussed in WeChat and QQ groups, so integrating through QQ or WeChat would make the project easier to promote; with an API, hooking it in becomes much more convenient.

Questions about nomic-embed-text

1. Your other nomic-embed-text tutorials require registering and getting a key, but this project doesn't seem to use one?

2. Is there a limit on the maximum input size for nomic-embed-text, e.g. 8000 tokens? Or does the limit depend on the selected model?

No response after sending a message in the chat UI

After installing and running, my .env configuration looks fine and model download and loading work, but after sending a message in the chat UI there is no reply at all; Send just keeps spinning. What could be the cause, and how should I debug it?

Error when saving an uploaded file to the knowledge base

After clicking Save, the terminal prints:
Created knowledge base DataBot: [object Object]
[nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: [object Response]

The sqlite database is connected; chroma runs in a Docker image on another server, and visiting ip:8000 returns: {"detail":"Not Found"}
CHROMADB_URL in .env is set to the ip:8000 address.
Is something misconfigured?

Starting with docker-compose on Mac ARM

On a Mac M1 system, you can try starting it with docker compose. 😀

  1. First make sure docker and docker compose are installed and working locally.
  2. Then start the ollama service.
  3. Download this file: https://raw.githubusercontent.com/sugarforever/chat-ollama/48d3e80c88014b78a6b0bac0a17cc6f7145ea33f/docker-compose.yaml
  4. In docker-compose.yaml, change the 192.168.3.3 address to the LAN IP of your ollama service.
  5. Start the containers:
    docker-compose up
  6. Visit http://127.0.0.1:3000 and watch the backend logs.

RAG queries keep failing

Every RAG query fails with the error below:

[nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: TypeError: fetch failed
at Chroma.ensureCollection (/D:/working/opensource/chat-ollama/node_modules/@langchain/community/dist/vectorstores/chroma.js:99:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Chroma.similaritySearchVectorWithScore (/D:/working/opensource/chat-ollama/node_modules/@langchain/community/dist/vectorstores/chroma.js:187:28)
at async Chroma.similaritySearch (/D:/working/opensource/chat-ollama/node_modules/@langchain/core/dist/vectorstores.js:104:25)
at async VectorStoreRetriever.getRelevantDocuments (/D:/working/opensource/chat-ollama/node_modules/@langchain/core/dist/retrievers.js:67:29)
at Object.handler (D:\working\opensource\chat-ollama\server\api\models\chat\index.post.ts:67:1)

The chroma configuration in use:

  chroma:
    image: ghcr.io/chroma-core/chroma:latest
    container_name: chroma
    environment:
      - IS_PERSISTENT=TRUE
    volumes:
      # Default configuration for persist_directory in chromadb/config.py
      # Currently it's located in "/chroma/chroma/"
      - ./chroma-data:/chroma/chroma/
    ports:
      - 8011:8000
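Note that the 8011:8000 mapping above publishes Chroma on host port 8011, so chat-ollama's CHROMADB_URL must point at 8011 rather than the default 8000 (the host IP below is a placeholder):

```shell
# .env — match the published host port from the compose file
CHROMADB_URL=http://<chroma-host-ip>:8011
```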

Custom OpenAI URL

Currently the OpenAI API is accessed through the default URL, right? How can I point OpenAI at a relay URL inside China? Thanks.

Non-Docker install: opening localhost:3000, configuring the host, clicking Save does nothing

Not using Docker.
After a local install, I open localhost:3000, configure the host, and clicking Save does nothing.

The error output:

yarn run v1.22.22
$ nuxt dev
Nuxt 3.10.3 with Nitro 2.9.2 13:33:45
13:33:45
➜ Local: http://localhost:3000/
➜ Network: use --host to expose

i Using default Tailwind CSS file nuxt:tailwindcss 13:33:47
➜ DevTools: press Shift + Alt + D in the browser (v1.0.8) 13:33:48

i Tailwind Viewer: http://localhost:3000/_tailwind/ nuxt:tailwindcss 13:33:48
i Vite client warmed up in 5117ms 13:33:54
i Vite server warmed up in 4784ms 13:33:54
√ Nuxt Nitro server built in 1463 ms nitro 13:33:55
(node:34616) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
(Use node --trace-deprecation ... to show where the warning was created)
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
(the same Ollama config block repeats twice more)

Cannot upload files

Output:
Ollama: {
host: 'http://192.168.99.12:28537',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://192.168.99.12:28537',
username: 'null',
password: 'null'
}
Created knowledge base activity: [object Object]
ingestDocument
Warning: TT: undefined function: 32

The problem is around line 32 of server\api\knowledgebases\index.post.ts:
if (existingCollection) {
  // error here
  // splits can be printed normally
  await existingCollection.addDocuments(splits);
  console.log(`Chroma collection ${collectionName} updated`);
} else {
  await Chroma.fromDocuments(splits, embeddings, dbConfig);
  console.log(`Chroma collection ${collectionName} created`);
}

The symptom is that a file of about 500 KB cannot be uploaded; Save stays stuck in a loading state.

What is causing this 500 error?

E:\chat-ollama\chat-ollama-main>npm run dev

dev
nuxt dev

Nuxt 3.10.3 with Nitro 2.9.1 20:26:23
20:26:23
➜ Local: http://localhost:3000/
➜ Network: use --host to expose

i Using default Tailwind CSS file nuxt:tailwindcss 20:26:24
➜ DevTools: press Shift + Alt + D in the browser (v1.0.8) 20:26:25

i Tailwind Viewer: http://localhost:3000/_tailwind/ nuxt:tailwindcss 20:26:25
i Vite client warmed up in 3218ms 20:26:29
i Vite server warmed up in 3004ms 20:26:29
√ Nuxt Nitro server built in 425 ms nitro 20:26:29
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Error fetching knowledge bases: PrismaClientInitializationError:
Invalid prisma.knowledgeBase.findMany() invocation in
E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:6:1

3 const listKnowledgeBases = async (): Promise<KnowledgeBase[] | null> => {
4 const prisma = new PrismaClient();
5 try {
→ 6 return await prisma.knowledgeBase.findMany(
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:3
|
2 | provider = "sqlite"
3 | url = env("DATABASE_URL")
|

Validation Error Count: 1
at _n.handleRequestError (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:7154)
at _n.handleAndLogRequestError (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:6188)
at _n.request (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:5896)
at async l (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:128:10871)
at listKnowledgeBases (E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:6:1)
at Object.handler (E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:18:1)
at async Object.handler (file:///E:/chat-ollama/chat-ollama-main/node_modules/h3/dist/index.mjs:1962:19)
at async toNodeHandle (file:///E:/chat-ollama/chat-ollama-main/node_modules/h3/dist/index.mjs:2249:7)
at async ufetch (file:///E:/chat-ollama/chat-ollama-main/node_modules/unenv/runtime/fetch/index.mjs:9:17)
at async $fetchRaw2 (file:///E:/chat-ollama/chat-ollama-main/node_modules/ofetch/dist/shared/ofetch.00501375.mjs:219:26) {
clientVersion: '5.10.2',
errorCode: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
[nuxt] [request error] [unhandled] [500] Cannot read properties of null (reading 'map')
at E:\chat-ollama\chat-ollama-main\pages\knowledgebases\index.vue:122:36
at ReactiveEffect.fn (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:998:13)
at ReactiveEffect.run (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:181:19)
at get value [as value] (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1010:109)
at unref (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1132:29)
at Object.get (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1138:35)
at _sfc_ssrRender (E:\chat-ollama\chat-ollama-main\pages\knowledgebases\index.vue:357:18)
at renderComponentSubTree (E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:695:9)
at E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:639:25
at async unrollBuffer$1 (E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:882:16)

Deployed in Google Colab: chat with the model works, but uploading documents when creating a knowledge base hangs, fails to save, and errors out

The error: Ollama: { host: 'null', username: 'null', password: 'null' }
Created knowledge base 平安: [object Object]
[nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: TypeError: fetch failed
at Chroma.ensureCollection (./node_modules/@langchain/community/dist/vectorstores/chroma.js:99:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Chroma.fromExistingCollection (./node_modules/@langchain/community/dist/vectorstores/chroma.js:274:9)
at ingestDocument (./server/api/knowledgebases/index.post.ts:25:1)
at Object.handler (./server/api/knowledgebases/index.post.ts:92:1)
at async Object.handler (./node_modules/h3/dist/index.mjs:1962:19)
at async Server.toNodeHandle (./node_modules/h3/dist/index.mjs:2249:7)

I mapped port 3000 with ngrok in Colab; chatting with the model directly works fine, but creating a knowledge base gets no response.
