nczkevin / chatglm-web
A ChatGLM web UI built with FastAPI and Vue3 (frontend styling modeled on chatgpt-web; supports ChatGLM streaming output, adjusting parameters from the frontend, context selection, saving chats as images, knowledge-base Q&A, and more)
Home Page: http://chatglm.nczkevin.com
License: MIT License
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "/root/chat/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/connectionpool.py", line 536, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/connection.py", line 454, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/python3.11/lib/python3.11/http/client.py", line 1374, in getresponse
response.begin()
File "/usr/python3.11/lib/python3.11/http/client.py", line 318, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/usr/python3.11/lib/python3.11/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/chat/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/root/chat/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/connectionpool.py", line 536, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/urllib3/connection.py", line 454, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/python3.11/lib/python3.11/http/client.py", line 1374, in getresponse
response.begin()
File "/usr/python3.11/lib/python3.11/http/client.py", line 318, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/usr/python3.11/lib/python3.11/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/chatglm-web/service/main.py", line 178, in
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 663, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 388, in get_class_from_dynamic_module
final_module = get_cached_module_file(
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 286, in get_cached_module_file
commit_hash = model_info(pretrained_model_name_or_path, revision=revision, token=use_auth_token).sha
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1675, in model_info
r = get_session().get(path, headers=headers, timeout=timeout, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/chat/lib/python3.11/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
How do I change the pt model path?
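A minimal sketch of pointing the backend at a locally stored copy of the model instead of the Hugging Face Hub, assuming the files from THUDM/chatglm-6b (config, tokenizer, weights) have already been downloaded into a local directory; the path below is only an example, and the variable name mirrors the model_path seen in the traceback above:

from transformers import AutoModel, AutoTokenizer

# Example local directory containing the chatglm-6b files; adjust to your own path.
model_path = "/root/models/chatglm-6b"

# Loading from a local folder avoids the network round-trip to huggingface.co,
# which is what fails with the ConnectionError above.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
model = model.eval()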
After my client stops receiving data, the server throws a flood of errors:
07/18/2023 16:14:17 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:17 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:17 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:18 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:19 - WARNING - asyncio - socket.send() raised exception.
07/18/2023 16:14:19 - WARNING - asyncio - socket.send() raised exception.
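These warnings indicate the backend keeps writing to a socket the client has already closed. One way to stop generating once the client disconnects, sketched for a FastAPI streaming endpoint (the route name and request fields here are assumptions, not necessarily what service/main.py actually uses):

import json
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()  # in the real project this would be the app from service/main.py

@app.post("/chat-process")
async def chat_process(request: Request):
    body = await request.json()
    prompt = body.get("prompt", "")
    history = body.get("history", [])

    async def generate():
        # model and tokenizer are assumed to be loaded globally, as in main.py.
        for response, _ in model.stream_chat(tokenizer, prompt, history=history):
            # Stop streaming as soon as the client goes away instead of
            # continuing to write to a dead socket (the asyncio warnings above).
            if await request.is_disconnected():
                break
            yield json.dumps({"text": response}, ensure_ascii=False) + "\n"

    return StreamingResponse(generate(), media_type="application/octet-stream")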
I am using the backend api from https://github.com/THUDM/ChatGLM-6B together with this project's frontend.
The backend log shows the following:
INFO: Started server process [7348]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3002 (Press CTRL+C to quit)
INFO: 127.0.0.1:50608 - "POST /chat-process HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:50609 - "POST /chat-process HTTP/1.1" 404 Not Found
The frontend error is as follows:
2023/4/16 15:03:40: Hello
2023/4/16 15:03:40: Request failed with status code 404
How can this be fixed?
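The 404 most likely means the backend never registers the route this frontend calls: as far as I recall, the api.py shipped with THUDM/ChatGLM-6B exposes its only handler on POST /, while this project's frontend posts to /chat-process. A hedged sketch of a workaround inside that api.py:

# THUDM/ChatGLM-6B's api.py (roughly) defines:
#     @app.post("/")
#     async def create_item(request: Request): ...
# so a request to /chat-process has nothing to match and returns 404.
# One workaround is an alias route that delegates to the existing handler:

@app.post("/chat-process")
async def chat_process_alias(request: Request):
    return await create_item(request)

Even with such an alias the payload shapes still differ (api.py returns one JSON object after generation finishes, whereas this frontend expects a streamed, chunked reply), so running this repository's own service/main.py as the backend is the more direct fix.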
Unable to download or extract
How to use a local knowledge base
How do I set up a local knowledge base so that questions can be answered from my own custom knowledge base?
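For reference, the knowledge-base feature in this repository is backed by a Whoosh full-text index stored in a knowdata directory (see the knowledge.py traceback further down this page). A minimal sketch of building such an index from your own documents and retrieving relevant passages; the field names and directory are illustrative, not necessarily the exact schema knowledge.py uses:

import os
from whoosh import index
from whoosh.fields import Schema, TEXT, ID
from whoosh.qparser import QueryParser

schema = Schema(path=ID(stored=True), content=TEXT(stored=True))

# Create the on-disk index on first run, reopen it afterwards.
os.makedirs("knowdata", exist_ok=True)
ix = index.open_dir("knowdata") if index.exists_in("knowdata") else index.create_in("knowdata", schema)

# Index your own documents; the matched passages can then be prepended to the prompt.
writer = ix.writer()
writer.add_document(path="faq.txt", content="Office hours are 9:00-18:00, Monday to Friday.")
writer.commit()

# Look up the passages most relevant to an incoming question.
with ix.searcher() as searcher:
    query = QueryParser("content", ix.schema).parse("office hours")
    for hit in searcher.search(query, limit=3):
        print(hit["content"])

Note that Whoosh's default analyzer splits on whitespace, so for Chinese documents a word-segmenting analyzer (for example jieba's ChineseAnalyzer passed to the TEXT field) is usually needed for useful retrieval.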
Do both the frontend and the backend support streaming output?
Could you tell me whether the frontend supports being mounted under a sub-path?
For example, changing the home page from http://chatglm.nczkevin.com/
to http://nczkevin.com/chatglm/
If the frontend is simply reverse-proxied, the main problem is that main.js and src/client.js end up with broken paths.
Both the frontend and the backend are deployed locally on Windows, with no nginx installed, but the frontend has no typewriter effect.
Frontend:
➜ Local: http://localhost:3000/
➜ Network: http://192.168.1.9:3000/
➜ Network: http://172.28.80.1:3000/
➜ press h to show help
11:08:46 [vite] http proxy error at /chat-process:
Error: connect ECONNREFUSED ::1:3002
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
Backend:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [00:15<00:00, 1.89s/it]
INFO: Started server process [17736]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3002 (Press CTRL+C to quit)
After some analysis, the cause is the memory limit of 20 entries: by the 11th question there are 21 entries in total, so trimming the data cuts off the first question and leaves an incomplete question/answer pair. The memory limit of 20 in main.py needs to be changed to 21.
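A small sketch of the off-by-one described above; the names are illustrative rather than the exact ones in main.py:

MAX_MEMORY = 20  # the current limit in main.py, per the analysis above

def trim(messages, max_memory=MAX_MEMORY):
    # messages alternates question, answer, question, answer, ...
    return messages[-max_memory:]

# 10 completed question/answer pairs = 20 remembered entries.
history = [m for i in range(1, 11) for m in (f"Q{i}", f"A{i}")]

# The 11th question makes 21 entries; trimming to 20 drops Q1 and leaves the
# orphaned answer A1 at the front, i.e. an incomplete question/answer pair.
print(trim(history + ["Q11"])[0])                  # -> A1
# Raising the limit to 21 (or trimming in whole pairs) keeps the pairs intact.
print(trim(history + ["Q11"], max_memory=21)[0])   # -> Q1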
Suggestions are welcome; current ideas for follow-up work include:
CUDA is already installed correctly.
Traceback (most recent call last):
File "D:\chatglm-web-main\service\main.py", line 186, in
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().quantize(quantize).cuda()
File "C:\Users\Administrator/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\modeling_chatglm.py", line 1434, in quantize
self.transformer = quantize(self.transformer, bits, empty_init=empty_init, **kwargs)
File "C:\Users\Administrator/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\quantization.py", line 159, in quantize
weight_tensor=layer.attention.query_key_value.weight.to(torch.cuda.current_device()),
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda_init_.py", line 674, in current_device
lazy_init()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda_init.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
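The assertion means the installed torch wheel was built without CUDA support (a CPU-only build), so .cuda() and GPU quantization fail even though the CUDA toolkit itself is installed. A quick check, plus the CPU fallback documented for ChatGLM-6B, using the same model_path/quantize names as in the traceback:

import torch

print(torch.__version__)           # a version such as "2.0.1+cpu" typically indicates a CPU-only wheel
print(torch.cuda.is_available())   # must be True before calling .cuda() or quantize(...).cuda()

# If it prints False, install a CUDA-enabled torch wheel matching your driver
# (for example from the cu118 index), or switch main.py to CPU inference:
#     model = AutoModel.from_pretrained(model_path, trust_remote_code=True).float()
# instead of
#     model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().quantize(quantize).cuda()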
First of all, thanks for all the hard work.
I have already installed ChatGLM on my local machine and it works, but I don't know how to make chatglm-web pick it up. Could you explain the concrete steps and which files need to be modified?
python main.py --device='cpu' --host='127.0.0.1' --port='7680'
Traceback (most recent call last):
File "/Users/weicheng/Documents/Dev/llm/chatglm-web/service/main.py", line 16, in
import knowledge
File "/Users/weicheng/Documents/Dev/llm/chatglm-web/service/knowledge.py", line 4, in
ix = storage.open_index()
File "/opt/homebrew/lib/python3.10/site-packages/whoosh/filedb/filestore.py", line 176, in open_index
return indexclass(self, schema=schema, indexname=indexname)
File "/opt/homebrew/lib/python3.10/site-packages/whoosh/index.py", line 421, in init
TOC.read(self.storage, self.indexname, schema=self._schema)
File "/opt/homebrew/lib/python3.10/site-packages/whoosh/index.py", line 618, in read
raise EmptyIndexError("Index %r does not exist in %r"
whoosh.index.EmptyIndexError: Index 'MAIN' does not exist in FileStorage('knowdata')
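The EmptyIndexError means knowledge.py calls storage.open_index() on a knowdata directory that has never had an index written into it. A hedged sketch of guarding that call so an empty index is created on the first run; the schema fields are placeholders, the real ones live in knowledge.py:

import os
from whoosh import index
from whoosh.fields import Schema, TEXT, ID

schema = Schema(path=ID(stored=True), content=TEXT(stored=True))

os.makedirs("knowdata", exist_ok=True)
if index.exists_in("knowdata"):
    ix = index.open_dir("knowdata")            # reuse the existing index
else:
    ix = index.create_in("knowdata", schema)   # first run: create an empty one

Documents still have to be added afterwards for retrieval to return anything, as in the knowledge-base sketch earlier on this page.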
How to change the model path
First, thanks for building this and for the beautifully crafted UI.
I don't have much experience with software development or code, and I'd like to ask how exactly to combine chatglm-6b with this UI. I can already run the model on my local machine, but I don't know how to get chatglm-web to detect it and talk to my GLM instance. Please explain the steps in detail and which files need to be changed.
When accessing it via http://127.0.0.1:3000/#/chat/1002:
First, it is extremely slow, taking about 30 seconds per reply.
Second, the frontend has no typewriter effect.
Hello, after deploying the repo the following error occurs:
python main.py is run with default parameters, nothing changed; entering a question in the frontend returns a 500 error. How can I debug this?
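A 500 means the request did reach the backend but an unhandled exception was raised while serving it, so the first place to look is the terminal running python main.py, where the traceback is normally printed. If nothing useful shows up there, a catch-all handler like the sketch below (an addition for debugging, not something main.py ships with) logs the full traceback for every failing request:

import logging
import traceback

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()  # in practice, register the handler on the app from service/main.py

@app.exception_handler(Exception)
async def log_unhandled_errors(request: Request, exc: Exception):
    # Log the full stack trace of whatever failed inside the endpoint.
    logging.error("Unhandled error on %s:\n%s", request.url.path, traceback.format_exc())
    return JSONResponse(status_code=500, content={"message": str(exc)})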