
openai-forward's Introduction

Simplified Chinese | English


Important

From v0.7.0 onward, the configuration scheme changes significantly and is incompatible with earlier versions. Configuring through the UI is more convenient and offers more powerful options.

OpenAI-Forward is an efficient forwarding service for large language models. Its core features include user request rate control, token rate limiting, smart predictive caching, log management, and API key management, aiming to provide an efficient and convenient model forwarding service. Whether proxying local language models or cloud language models such as LocalAI or OpenAI, OpenAI-Forward handles it with ease. Thanks to libraries such as uvicorn, aiohttp, and asyncio, OpenAI-Forward achieves excellent asynchronous performance.

News

  • 🎉🎉🎉 Since v0.7.0, configuration can be managed through a WebUI
  • The gpt-1106 models are supported
  • The cache backend has been switched to a high-performance database backend: 🗲 FlaxKV

Key Features

  • Universal forwarding: forwards almost all types of requests
  • Performance first: excellent asynchronous performance
  • Cached AI predictions: caches AI responses to speed up access and save cost
  • User traffic control: customize request rate and token rate
  • Real-time response logs: improve the observability of LLMs
  • Custom keys: replace the original API keys
  • Multi-target routing: forward multiple upstream services to different routes under the same service
  • Black/white lists: restrict specified IPs via blacklists and whitelists
  • Automatic retries: ensure service stability; failed requests are retried automatically
  • Quick deployment: deploy locally or in the cloud via pip or docker

Proxy service addresses hosted with this project:

Note: the proxy services deployed here are for personal study and research purposes only; do not use them for any commercial purpose.

Deployment Guide

👉 Deployment documentation

Usage Guide

Quick Start

Installation

pip install openai-forward

# or install the webui version:
pip install openai-forward[webui]

Start the service

aifd run
# or start the service with the webui
aifd run --webui

If the .env configuration at the root path has been loaded, you will see the following startup information:

❯ aifd run
╭────── 🤗 openai-forward is ready to serve!  ───────╮
│                                                    │
│  base url         https://api.openai.com           │
│  route prefix     /                                │
│  api keys         False                            │
│  forward keys     False                            │
│  cache_backend    MEMORY                           │
╰────────────────────────────────────────────────────╯
╭──────────── ⏱️ Rate Limit configuration ───────────╮
│                                                    │
│  backend                memory                     │
│  strategy               moving-window              │
│  global rate limit      100/minute (req)           │
│  /v1/chat/completions   100/2minutes (req)         │
│  /v1/completions        60/minute;600/hour (req)   │
│  /v1/chat/completions   60/second (token)          │
│  /v1/completions        60/second (token)          │
╰────────────────────────────────────────────────────╯
INFO:     Started server process [191471]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Proxying OpenAI models:

The default behavior of aifd run is to proxy https://api.openai.com.

The examples below use the hosted service address https://api.openai-forward.com.

Python

  from openai import OpenAI  # pip install openai>=1.0.0
  client = OpenAI(
+     base_url="https://api.openai-forward.com/v1", 
      api_key="sk-******"
  )

Proxying local models

  • Use case: works together with projects such as LocalAI and api-for-open-llm.

  • How to: taking LocalAI as an example, if a LocalAI service is already deployed at http://localhost:8080, simply set FORWARD_CONFIG=[{"base_url":"http://localhost:8080","route":"/localai","type":"openai"}] in the environment variables or the .env file. LocalAI can then be used via http://localhost:8000/localai.

(More)
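Since FORWARD_CONFIG must be valid single-line JSON, a small helper can generate the value safely. This is just a convenience sketch, not part of openai-forward:

```python
# Generate a valid single-line FORWARD_CONFIG value for the .env file.
import json

def make_forward_config(entries):
    """Serialize forwarding entries into a compact one-line JSON string."""
    return json.dumps(entries, separators=(",", ":"))

line = make_forward_config(
    [{"base_url": "http://localhost:8080", "route": "/localai", "type": "openai"}]
)
print(f"FORWARD_CONFIG={line}")
```

Writing the value through json.dumps avoids quoting mistakes that would make the variable unparseable.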

Proxying arbitrary cloud models

Configure the environment variables or the .env file as follows:

FORWARD_CONFIG=[{"base_url":"https://generativelanguage.googleapis.com","route":"/gemini","type":"general"}]

Note: after aifd run starts, gemini pro can be used via http://localhost:8000/gemini.

  • Scenario 1: general forwarding can forward services from any source and provides request rate control and token rate control; however, general forwarding does not support custom keys.

  • Scenario 2: LiteLLM can convert the API formats of many cloud models into the OpenAI API format, which can then be forwarded in openai style.

(More)
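The forwarding rule at work here can be illustrated with a toy function (an illustration only, not openai-forward's actual code; the listen address http://localhost:8000 is the default assumed above): a request to http://localhost:8000/<route>/<path> is proxied to <base_url>/<path>.

```python
# Toy model of the route-to-upstream mapping performed by the forwarder.
def map_to_upstream(url: str, route: str, base_url: str) -> str:
    """Rewrite a forwarded URL to the upstream URL it would be proxied to."""
    prefix = "http://localhost:8000" + route
    if not url.startswith(prefix):
        raise ValueError("url does not match the configured route")
    return base_url + url[len(prefix):]

print(map_to_upstream(
    "http://localhost:8000/gemini/v1beta/models/gemini-pro:generateContent",
    "/gemini",
    "https://generativelanguage.googleapis.com",
))
```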

Configuration

Run aifd run --webui to open the configuration page (the default service address is http://localhost:8001).

You can create a .env file in the project's working directory to customize the configuration. See the .env.example file in the root directory for reference.

Smart Caching

When caching is enabled, content on the designated routes is cached. The two forwarding types, openai and general, behave slightly differently: with general forwarding, identical requests are by default always served from the cache; with openai forwarding, after enabling caching, the cache behavior can be controlled through OpenAI's extra_body parameter, e.g.

Python

  from openai import OpenAI 
  client = OpenAI(
+     base_url="https://smart.openai-forward.com/v1", 
      api_key="sk-******"
  )
  completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
      {"role": "user", "content": "Hello!"}
    ],
+   extra_body={"caching": True}
)

Curl

curl https://smart.openai-forward.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-******" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "caching": true
  }'

Custom Keys

Click for more details

See the .env file.

Example:

  import openai
+ openai.api_base = "https://api.openai-forward.com/v1"
- openai.api_key = "sk-******"
+ openai.api_key = "fk-******"

Multi-Target Service Forwarding

Supports forwarding services at different addresses to different routes under the same port. See .env.example for a use case.
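As a sketch (the routes and upstreams below are illustrative, not a recommended setup), a multi-target .env entry could look like:

```env
# Two upstreams served on one port under different route prefixes (illustrative)
FORWARD_CONFIG=[{"base_url":"https://api.openai.com","route":"/","type":"openai"},{"base_url":"https://generativelanguage.googleapis.com","route":"/gemini","type":"general"}]
```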

Conversation Logs

Click for more details

Logs are saved to the Log/openai/chat/chat.log path under the current directory.
The record format is:

{'messages': [{'role': 'user', 'content': 'hi'}], 'model': 'gpt-3.5-turbo', 'stream': True, 'max_tokens': None, 'n': 1, 'temperature': 1, 'top_p': 1, 'logit_bias': None, 'frequency_penalty': 0, 'presence_penalty': 0, 'stop': None, 'user': None, 'ip': '127.0.0.1', 'uid': '2155fe1580e6aed626aa1ad74c1ce54e', 'datetime': '2023-10-17 15:27:12'}
{'assistant': 'Hello! How can I assist you today?', 'is_tool_calls': False, 'uid': '2155fe1580e6aed626aa1ad74c1ce54e'}

Convert to JSON format:

aifd convert

This produces chat_openai.json:

[
    {
        "datetime": "2023-10-17 15:27:12",
        "ip": "127.0.0.1",
        "model": "gpt-3.5-turbo",
        "temperature": 1,
        "messages": [
            {
                "user": "hi"
            }
        ],
        "tools": null,
        "is_tool_calls": false,
        "assistant": "Hello! How can I assist you today?"
    }
]
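For illustration, the merge that this conversion performs can be sketched roughly as follows. This is a hypothetical re-implementation, not the project's actual code: each log line is a Python dict literal, and request/reply records are paired by their shared uid.

```python
# Sketch: pair request and reply records from chat.log by uid and merge them.
import ast
import json

def merge_chat_log(lines):
    requests, replies = {}, {}
    for raw in lines:
        record = ast.literal_eval(raw)  # each line is a Python dict literal
        (replies if "assistant" in record else requests)[record["uid"]] = record
    merged = []
    for uid, req in requests.items():
        rep = replies.get(uid, {})
        merged.append({
            "datetime": req.get("datetime"),
            "ip": req.get("ip"),
            "model": req.get("model"),
            "messages": req.get("messages"),
            "is_tool_calls": rep.get("is_tool_calls"),
            "assistant": rep.get("assistant"),
        })
    return merged

log = [
    "{'messages': [{'role': 'user', 'content': 'hi'}], 'model': 'gpt-3.5-turbo', 'uid': 'u1', 'ip': '127.0.0.1', 'datetime': '2023-10-17 15:27:12'}",
    "{'assistant': 'Hello!', 'is_tool_calls': False, 'uid': 'u1'}",
]
print(json.dumps(merge_chat_log(log), indent=2))
```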

Contributing

Contributions via pull requests or issues in this repository are welcome.

License

OpenAI-Forward is released under the MIT license.

openai-forward's People

Contributors

kenyony, limboinf

openai-forward's Issues

How can the final endpoint be served over HTTPS?

Checklist

  • I believe this idea is great and will make the framework better
  • I have searched the issues, including closed ones, for similar requests

Detailed Description

As the title says: some Android apps do not allow HTTP access, only HTTPS.

Request: automatically disable invalid keys

Hello! It would be great if invalid keys could be disabled automatically, for example keys returning 429 (no balance) or 401 (banned account). One more question: how many keys can be entered at most?

Render deployment fails

Initial Checks

  • My openai-forward version is not lower than v0.7.0

Issue Description

One-click deployment with Render fails. Is this a problem with my own account, or something else?

Configuration/Code Example and Output

No response

My Insight

No response

Environment

Deployed using Render

Final Step

  • I believe my description above is detailed enough for developers to reproduce the issue.

http error: connection closed before message completed

Hi,
when I use your openai api forward as a service, I find it frequently reports http errors like this:

http error: error sending request for url (http://xxx/v1/chat/completions): connection closed before message
completed

Caused by:
   0: error sending request for url (http://xxx/v1/chat/completions): connection closed before message completed
   1: connection closed before message completed

I found a solution for this issue at: https://help.mulesoft.com/s/article/Getting-Error-sending-HTTP-request-when-sending-request-using-http-requester

This error indicates that the request has been successfully sent from the Mule Application, but the http response timeout occurred before the http response is received from the called service. By default, the http response timeout will inherit it's value from defaultResponseTimeout under global settings which is 10 Seconds, thus it will timeout if the service provider doesn't serve the request and send the response back to the client in 10 seconds.

Can you fix that issue? Thanks.

Failure to deploy on Render with the newest version

Initial Checks

  • The openai-forward version I am running is not lower than v0.6.0

Issue Description

Deploy failed for 94df783: ✨ Log module keeps up with gpt-1106
Exited with status 1 while running your code. Check your deploy logs for more information.

==> Deploying...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
==> Deploying...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/openai-forward/openai_forward/__main__.py", line 74, in <module>
    main()
  File "/home/openai-forward/openai_forward/__main__.py", line 70, in main
    fire.Fire(Cli)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/openai-forward/openai_forward/__main__.py", line 27, in run
    uvicorn.run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/openai-forward/openai_forward/app.py", line 7, in <module>
    from .forward.extra import generic_objs
  File "/home/openai-forward/openai_forward/forward/extra.py", line 1, in <module>
    from .base import GenericForward
  File "/home/openai-forward/openai_forward/forward/base.py", line 11, in <module>
    from flaxkv.pack import encode
  File "/usr/local/lib/python3.10/site-packages/flaxkv/__init__.py", line 16, in <module>
    from .serve.client import RemoteDictDB
  File "/usr/local/lib/python3.10/site-packages/flaxkv/serve/client.py", line 1, in <module>
    import httpx
ModuleNotFoundError: No module named 'httpx'

Configuration / Code Example and Output

No response

My Insights

No response

Environment

N/A

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Hello, what is this problem?

Initial Checks

  • I am running openai-forward version v0.6.0 or later

Problem Description

File "C:\Users\GXH\.conda\envs\openai-forward-main\lib\site-packages\openai_forward\app.py", line 7, in <module>
from .forward.extra import generic_objs
File "C:\Users\GXH\.conda\envs\openai-forward-main\lib\site-packages\openai_forward\forward\extra.py", line 1, in <module>
from .base import GenericForward
File "C:\Users\GXH\.conda\envs\openai-forward-main\lib\site-packages\openai_forward\forward\base.py", line 15, in <module>
from ..cache.chat_completions import generate, stream_generate_efficient
File "C:\Users\GXH\.conda\envs\openai-forward-main\lib\site-packages\openai_forward\cache\chat_completions.py", line 121, in <module>
model: str, content: str | None, tool_calls: list | None, request: Request
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
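This traceback is a Python-version issue rather than a configuration one: `str | None` uses PEP 604 union syntax, which exists only from Python 3.10 onward, while the environment below shows Python 3.9.18. A minimal illustration of the 3.9-compatible spelling (assumed example, not project code):

```python
from typing import Optional, Union

# On Python < 3.10, evaluating the annotation `str | None` raises
#   TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
# at import time, which matches the traceback above.
# `Optional[str]` is the 3.9-compatible spelling of `str | None`.
ContentAnnotation = Optional[str]  # equivalent to Union[str, None]
```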

Configuration / Code Example and Output

No response

My Insights

No response

Environment

(openai-forward-main) PS D:\apython\openai-forward-main> python -c "import sys,platform; print('python: {}\nOS: {}'.format(sys.version, platform.system()));"
python: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:40:31) [MSC v.1929 64 bit (AMD64)]
OS: Windows
(openai-forward-main) PS D:\apython\openai-forward-main> python -c "import openai_forward; print('openai_forward: {}'.format(openai_forward.__version__));"
openai_forward: 0.6.7

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

CF / Render / Vercel deployments now all return 404 when accessed

Initial Checks

  • I am running openai-forward version v0.7.0 or later

Problem Description

The code has been synced to the current version.

Configuration / Code Example and Output

No response

My Insights

No response

Environment

The cloud platform cannot run that command.

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Could requests be routed by model?

Checklist

  • I believe this idea is great and will make the framework better
  • I am willing to submit a PR to implement this idea

Detailed Description

When deploying with the api-for-open-llm project, we found that only one model can be started per instance.

Suppose the model deployment is as follows:

Suppose I use two api-for-open-llm instances to complete the deployments below (format: model name: access URL)

Suppose openai-forward is deployed as follows:

Now I'd like that, once clients connect through base_url, requests are dispatched by model:

client = OpenAI(
    base_url='http://127.0.0.1:8000/v1',
    api_key="EMPTY",
)
# Hoping this would call gpt3.5
completion = client.chat.completions.create(
        model='gpt3.5',
        messages=messages,
        temperature=0.85,
        top_p=0.8,
    )

# Hoping this would call gpt4
completion = client.chat.completions.create(
        model='gpt4',
        messages=messages,
        temperature=0.85,
        top_p=0.8,
    )
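A hypothetical sketch of the requested behavior, not part of openai-forward: a forwarding layer could pick the backend base_url from the request's `model` field. The names `MODEL_ROUTES`, `resolve_base_url`, and the backend URLs are all illustrative:

```python
# Map each model name to the backend that actually serves it.
MODEL_ROUTES = {
    "gpt3.5": "http://192.168.0.1:8000/v1",  # first api-for-open-llm instance
    "gpt4": "http://192.168.0.2:8000/v1",    # second api-for-open-llm instance
}

def resolve_base_url(payload: dict) -> str:
    """Return the backend base_url for the model named in the request payload."""
    model = payload.get("model")
    if model not in MODEL_ROUTES:
        raise ValueError(f"no backend configured for model {model!r}")
    return MODEL_ROUTES[model]
```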

Feat: Azure OpenAI support

Checklist

  • I believe this idea is great and will make the framework better
  • I have searched issues for similar requests, including closed ones

Detailed Description

I hope to increase support for Azure OpenAI, so that it can be a simple implementation of CloudFlare AI Gateway!

Where should I configure a second forwarding hop?

Checklist

  • I believe this idea is great and will make the framework better
  • I have searched issues for similar requests, including closed ones

Detailed Description

If I don't want to connect directly to https://api.openai.com (for example, I want to chain through api.openai-forward.com as a second hop), what should I modify? There seem to be many places to change; I changed them all, but it didn't take effect.
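For what it's worth, the `OPENAI_BASE_URL` setting shown in the .env examples elsewhere in this repo accepts any OpenAI-style service address, so chaining through another forward might look like the sketch below (an assumption; not verified against the v0.7.0 WebUI configuration):

```
# .env — point openai-forward at another forward instead of api.openai.com
OPENAI_BASE_URL='https://api.openai-forward.com'
OPENAI_ROUTE_PREFIX='/'
```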

After enabling .env, docker fails to start and runs out of memory

Initial Checks

  • I am running openai-forward version v0.6.0 or later

Problem Description

As the title says.

docker run -d -p 8000:8000 beidongjiedeguang/openai-forward:latest
// enter docker /home/openai-forward
touch .env
// using the modified .env file, the container stops automatically and runs out of memory

Configuration / Code Example and Output

# LOG_CHAT: whether to enable chat logging
LOG_CHAT=true

PRINT_CHAT=true

CACHE_CHAT_COMPLETION=true
# `CACHE_BACKEND`: MEMORY, LMDB, ROCKSDB
CACHE_BACKEND=MEMORY

BENCHMARK_MODE=true

# OPENAI_BASE_URL: forward any OpenAI-style service address; multiple allowed, comma-separated.
# If more than one is specified, no OPENAI_ROUTE_PREFIX/EXTRA_ROUTE_PREFIX may be the root route /
OPENAI_BASE_URL='https://api.openai.com, http://localhost:8080'

# OPENAI_ROUTE_PREFIX: route prefixes for all OpenAI-style (logged) forwarded services
OPENAI_ROUTE_PREFIX='/openai, /localai'

CHAT_COMPLETION_ROUTE=/openai/v1/chat/completions
COMPLETION_ROUTE=/v1/completions

# OPENAI_API_KEY: multiple API keys allowed, comma-separated, forming a round-robin pool
OPENAI_API_KEY='sk-fdfdfdfdfdf'

# FORWARD_KEY: once OPENAI_API_KEY above is set, FORWARD_KEY can be set here; clients can then use FORWARD_KEY as their api key
FORWARD_KEY=fk-dffdfd

# EXTRA_BASE_URL: forward arbitrary services
EXTRA_BASE_URL='http://localhost:8882, http://localhost:8881'

# EXTRA_ROUTE_PREFIX: route prefixes matching EXTRA_BASE_URL
EXTRA_ROUTE_PREFIX='/tts, /translate'

# `REQ_RATE_LIMIT`: per-route request rate limits (applied per user)
# format: {route: ratelimit-string}
# ratelimit-string format [count] [per|/] [n (optional)] [second|minute|hour|day|month|year] :ref:`ratelimit-string`: https://limits.readthedocs.io/en/stable/quickstart.html#rate-limit-string-notation
REQ_RATE_LIMIT='{
"/openai/v1/chat/completions": "1/10seconds",
"/localai/v1/chat/completions": "2/second"
}'

# rate limit backend: [memory, redis, Memcached, ...] :ref: https://limits.readthedocs.io/en/stable/storage.html#
REQ_RATE_LIMIT_BACKEND="redis://localhost:6379"

# `GLOBAL_RATE_LIMIT`: limits all routes not covered by `REQ_RATE_LIMIT`; unlimited if unset
GLOBAL_RATE_LIMIT=inf

#`RATE_LIMIT_STRATEGY` Options: (fixed-window, fixed-window-elastic-expiry, moving-window) ref: https://limits.readthedocs.io/en/latest/strategies.html
# `fixed-window`: most memory efficient strategy; `moving-window`:most effective for preventing bursts but higher memory cost.
RATE_LIMIT_STRATEGY=fixed-window

# `PROXY`: HTTP proxy
PROXY=http://localhost:7890

# `TOKEN_RATE_LIMIT`: token rate limit for each streamed response (note: "token" here is not strictly a GPT token but an SSE chunk)
TOKEN_RATE_LIMIT={"/v1/chat/completions":"20/second", "/benchmark/v1/chat/completions":"500/second"}


TIMEOUT=10

ITER_CHUNK_TYPE=efficiency

# Set timezone
TZ=Asia/Shanghai

My Insights

No response

Environment

/home/openai-forward # python -c "import sys,platform; print('python: {}\nOS: {}'.format(sys.version, platform.system()));"
python: 3.10.13 (main, Sep 29 2023, 00:51:50) [GCC 12.2.1 20220924]
OS: Linux
/home/openai-forward # python -c "import openai_forward; print('openai_forward: {}'.format(openai_forward.__version__));"
openai_forward: 0.6.2

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Using gpt-4 without streaming, submitting long text returns 504 Gateway Time-out; setting both the httpx timeout and the forwarding timeout to 120 in code doesn't help

Initial Checks

  • I am running openai-forward version v0.7.0 or later

Problem Description

The prompt is roughly 2700 tokens; I'm using the langchain package.

from langchain_openai import ChatOpenAI
# from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
try:
    chat_model = ChatOpenAI(
        model='gpt-4-1106-preview',
        openai_api_key='',
        openai_api_base='',
        # streaming=True,
        # callbacks=[StreamingStdOutCallbackHandler()],
        # temperature=0.2
    )

[screenshot: the timeout set in code]

[screenshot: the timeout set on the forwarding server]

Configuration / Code Example and Output

<title>504 Gateway Time-out</title>

504 Gateway Time-out


nginx/1.18.0 (Ubuntu)

My Insights

I don't know where the problem lies. Searching the project I didn't find nginx anywhere; the readme mentions proxy_buffering off;
but I couldn't find a place to configure it in env, and searching the project code didn't turn it up either.

Environment

python: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
OS: Windows

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Question about using an accessToken as a Token

#82 (comment) mentions that the accessToken from https://chat.openai.com/api/auth/session can be used.

I tested this. If the OpenAI account's quota has expired, the call fails. I tried accessTokens from an account with expired quota and from one with remaining quota: the former returns "502" or "You exceeded your current quota, please check your plan and billing details.", while the latter works normally but consumes quota. I'm not sure whether this is a bug or by design; my expectation was that using an accessToken should be independent of whether the OpenAI account has quota.

Launch Successfully, But Fail After input Url in Browser

It launches successfully with no problems, but accessing http://0.0.0.0:9999/ or http://0.0.0.0:9999/v1/chat/completions in a browser produces the error below.
What is going on? Help, please! Surely this isn't limited to machines with a public IP?

Traceback (most recent call last):

File "/home/user/anaconda3/envs/py39/bin/openai_forward", line 8, in <module>
sys.exit(main())
│ │ └ <function main at 0x7f8353162a60>
│ └
└ <module 'sys' (built-in)>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/openai_forward/__main__.py", line 21, in main
fire.Fire(Cli)
│ │ └ <class 'openai_forward.__main__.Cli'>
│ └ <function Fire at 0x7f8352fb25e0>
└ <module 'fire' from '/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/fire/__init__.py'>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
│ │ │ │ │ └ 'openai_forward'
│ │ │ │ └ {}
│ │ │ └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
│ │ └ ['run', '--port=9999', '--workers=1']
│ └ <class 'openai_forward.__main__.Cli'>
└ <function _Fire at 0x7f8352814a60>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
│ └ <function _CallAndUpdateTrace at 0x7f8352814b80>
└ <function Cli.run at 0x7f8352fb2550>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
│ │ └ {}
│ └ [9999, 1]
└ <function Cli.run at 0x7f8352fb2550>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/openai_forward/__main__.py", line 9, in run
uvicorn.run(
│ └ <function run at 0x7f83524bac10>
└ <module 'uvicorn' from '/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/uvicorn/__init__.py'>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/uvicorn/main.py", line 578, in run
server.run()
│ └ <function Server.run at 0x7f83524ba670>
└ <uvicorn.server.Server object at 0x7f83524159a0>
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
│ │ │ │ └ None
│ │ │ └ <function Server.serve at 0x7f83524ba700>
│ │ └ <uvicorn.server.Server object at 0x7f83524159a0>
│ └ <function run at 0x7f835292adc0>
└ <module 'asyncio' from '/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/__init__.py'>
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
│ │ └ <coroutine object Server.serve at 0x7f8352481b40>
│ └ <function BaseEventLoop.run_until_complete at 0x7f835292c820>
└ <_UnixSelectorEventLoop running=True closed=False debug=False>
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
self.run_forever()
│ └ <function BaseEventLoop.run_forever at 0x7f835292c790>
└ <_UnixSelectorEventLoop running=True closed=False debug=False>
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
│ └ <function BaseEventLoop._run_once at 0x7f835292e310>
└ <_UnixSelectorEventLoop running=True closed=False debug=False>
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
handle._run()
│ └ <function Handle._run at 0x7f83529f5790>
└ <Handle <TaskStepMethWrapper object at 0x7f83507da130>()>
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
│ │ │ │ │ └ <member '_args' of 'Handle' objects>
│ │ │ │ └ <Handle <TaskStepMethWrapper object at 0x7f83507da130>()>
│ │ │ └ <member '_callback' of 'Handle' objects>
│ │ └ <Handle <TaskStepMethWrapper object at 0x7f83507da130>()>
│ └ <member '_context' of 'Handle' objects>
└ <Handle <TaskStepMethWrapper object at 0x7f83507da130>()>

Assistants API support

Detailed Description

ref: https://platform.openai.com/docs/api-reference/assistants

curl "https://api.openai.com/v1/assistants" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v1" \
  -d '{
    "instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
    "name": "Math Tutor",
    "tools": [{"type": "code_interpreter"}],
    "model": "gpt-4"
  }'

[bug] With an NGINX reverse proxy, streaming output arrives in chunks instead of character by character

[screenshot: chunked streaming output]

Using nginx as the reverse proxy:

server {
        listen 80;
        listen [::]:80;

        server_name ****.com;

        client_max_body_size 10M;

        location / {

        #       add_header "Access-Control-Allow-Origin"  *;
        add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
        add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
        # no caching, to support streaming output
        proxy_cache off;  # disable caching
        proxy_buffering off;  # disable proxy buffering
        chunked_transfer_encoding on;  # enable chunked transfer encoding
        tcp_nopush on;  # enable TCP_NOPUSH (TCP_CORK)
        tcp_nodelay on;  # enable TCP_NODELAY, disabling Nagle's algorithm
        keepalive_timeout 300;  # set the keep-alive timeout to 300 seconds
                proxy_pass http://localhost:9000;
        }
}
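Besides the nginx-side settings above, an upstream application can also opt a single response out of nginx's proxy buffering via the `X-Accel-Buffering` response header. A minimal sketch of the headers an SSE endpoint could send (illustrative helper name, not openai-forward's code):

```python
def streaming_headers() -> dict:
    # `X-Accel-Buffering: no` tells nginx to disable proxy buffering for this
    # one response, so SSE chunks are flushed to the client immediately
    # instead of arriving in large buffered pieces.
    return {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "X-Accel-Buffering": "no",
    }
```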

After deploying on CloudFlare Pages, the Forward_key is invalid, but forwarding works.

I deployed with CloudFlare Pages; the environment variables are configured as follows:
[screenshot: environment variable configuration]

Response after sending a request with the Forward key:

{
  "error": {
    "message": "",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}

Using OpenAI's official key, however, returns normally.

In the CF configuration guide you mention:

This deployment method is light and simple and supports streaming forwarding. It is still highly recommended for users without a VPS. However, the simple _worker.js script currently only provides forwarding, with no extra features.

I'd like to confirm: does a Pages deployment likewise only provide forwarding?

Author, a question: roughly how many keys are supported right now? I added quite a few, and after running for less than an hour there are massive timeouts (it was fine at first). The error is as follows:

| ERROR | openai_forward.forwarding.base:try_send:112 - <class 'httpx.PoolTimeout'>: traceback=Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/httpcore/_synchronization.py", line 125, in wait
await self._anyio_event.wait()
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1778, in wait
if await self._event.wait():
File "/usr/local/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/usr/local/lib/python3.10/site-packages/httpcore/_synchronization.py", line 124, in wait
with anyio.fail_after(timeout):
File "/usr/local/lib/python3.10/site-packages/anyio/_core/_tasks.py", line 119, in __exit__
raise TimeoutError
TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 242, in handle_async_request
raise exc
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 233, in handle_async_request
connection = await status.wait_for_connection(timeout=timeout)
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 35, in wait_for_connection
await self._connection_acquired.wait(timeout=timeout)
File "/usr/local/lib/python3.10/site-packages/httpcore/_synchronization.py", line 123, in wait
with map_exceptions(anyio_exc_map):
File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.PoolTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/openai-forward/openai_forward/forwarding/base.py", line 88, in try_send
return await self.client.send(req, stream=True)
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1617, in send
response = await self._send_handling_auth(
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
response = await self._send_handling_redirects(
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
response = await self._send_single_request(request)
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1719, in _send_single_request
response = await transport.handle_async_request(request)
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
with map_httpcore_exceptions():
File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.PoolTimeout

Originally posted by @gougui in #71 (comment)

Restricting model access by Forward key does not take effect

Initial Checks

  • I am running openai-forward version v0.7.0 or later

Problem Description

After setting the corresponding levels for forward_key in .env, the restriction does not take effect; restricted-level users can still access all models.

Configuration / Code Example and Output

FORWARD_CONFIG=[{"base_url": "http://111.11.111.11:8888", "route": "/", "type": "general"}, {"base_url": "http://111.11.111.12:8888", "route": "/xxx", "type": "general"}]
OPENAI_API_KEY={"KEY": [0], "EMPTY": [1]}
FORWARD_KEY={"KEY": 0, "EMPTY": 1}
LEVEL_MODELS={"1": ["Code-Llama-7B-Instruct"]}
CACHE_OPENAI=False
CACHE_GENERAL=False
CACHE_BACKEND=MEMORY
CACHE_ROOT_PATH_OR_URL=.
DEFAULT_REQUEST_CACHING_VALUE=False
CACHE_ROUTES=["/v1/chat/completions"]
GLOBAL_RATE_LIMIT=inf
RATE_LIMIT_STRATEGY=fixed-window
TOKEN_RATE_LIMIT={"/v1/chat/completions": "60/second", "/v1/completions": "60/second"}
REQ_RATE_LIMIT={"/v1/chat/completions": "100/2minutes", "/v1/completions": "60/minute", "/v1/embeddings": "100/2minutes"}
ITER_CHUNK_TYPE=efficiency
LOG_GENERAL=True
LOG_OPENAI=False
TZ=Asia/Shanghai
TIMEOUT=10
BENCHMARK_MODE=False

My Insights

Whether the .env syntax above is correct needs to be confirmed.
Whether API-key level restrictions take effect with multi-route forwarding needs to be confirmed.
Whether API-key level restrictions apply to general-type forwarded APIs needs to be confirmed.

Environment

  • OS: CentOS 7.4
  • Python version: 3.8
  • OpenAI-Forward version: 0.7.1

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Enable rate limiting

Have you considered adding a rate limiter? I have made good experiences with:
https://pypi.org/project/fastapi-limiter/

In this case,

import redis.asyncio as redis
from fastapi import Depends, FastAPI
from fastapi_limiter import FastAPILimiter
from fastapi_limiter.depends import RateLimiter

@app.on_event("startup")
async def startup():
    # Avoid shadowing the `redis` module with the connection object
    redis_connection = redis.from_url("redis://localhost", encoding="utf-8", decode_responses=True)
    await FastAPILimiter.init(redis_connection)

and

app = FastAPI(dependencies=[Depends(RateLimiter(times=2, seconds=5))])

If you want, this might also be made optional, or skipped when redis is offline:

# for forward compatibility:
if os.environ.get("RATE_LIMIT", False):
    ...

Saving streaming logs fails

Longer streaming answers produce an error:

  File "\openai-forward1\openai_forward\content\chat.py", line 27, in parse_chat_completions
    line0 = txt_lines[0]
            └ []

IndexError: list index out of range
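The crash is an unguarded `txt_lines[0]` on an empty list; a defensive guard along these lines (a hypothetical helper, not the project's actual fix) would skip empty chunks instead of crashing:

```python
def first_line(txt_lines):
    # Long streamed answers can yield an empty chunk, making txt_lines == []
    # so that an unconditional txt_lines[0] raises IndexError (as reported).
    # Returning None for the empty case lets the caller skip the chunk.
    if not txt_lines:
        return None
    return txt_lines[0]
```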

Error when deployed with Docker and reverse-proxied with Nginx

The Nginx configuration is as follows:

    location ^~ / {
        proxy_pass http://openai-api-forward/;
        proxy_cache off;
        proxy_buffering off;
    }

Testing with the following command:

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-*******" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Error message:

{"detail":"<class 'httpx.ConnectError'>: [Errno -3] Try again | Please check if host=172.22.0.1 can access [https://api.openai.com] successfully?"}

I have confirmed the deployment server can connect directly to https://api.openai.com/. How can this be solved? Thanks!

429 Too Many Requests

Initial Checks

  • I am running openai-forward version v0.7.0 or later

Problem Description

Even when no program is using the OpenAI API, the log still shows a large number of 429 Too Many Requests responses.
[screenshot: log output, 2024-02-01 13:53:05]

Configuration / Code Example and Output

LOG_CHAT=false

CACHE_CHAT_COMPLETION=false

CACHE_BACKEND=LevelDB

OPENAI_BASE_URL='https://api.openai.com/v1'
OPENAI_ROUTE_PREFIX='/v1'

CHAT_COMPLETION_ROUTE=/chat/completions
COMPLETION_ROUTE=/completions

EXTRA_BASE_URL='http://chatgpt:7999'
EXTRA_ROUTE_PREFIX='/'

#OPENAI_API_KEY=
#FORWARD_KEY=

# PROXY: HTTP proxy
PROXY=http://localhost:6292
PROXY=http://[placeholder]@localhost:6292

#TCP connection timeout duration (in seconds)
TIMEOUT=120

#Set timezone
TZ=Asia/Shanghai

My Insights

No response

Environment

python: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
OS: Linux
openai_forward: 0.7.1

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Need to add: 'OpenAI-Beta: assistants=v1'

Initial Checks

  • I am running openai-forward version v0.6.0 or later

Problem Description

BadRequestError: Error code: 400 - {'error': {'message': "You must provide the 'OpenAI-Beta' header to access the Assistants API. Please try again by setting the header 'OpenAI-Beta: assistants=v1'.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_beta'}}

Configuration / Code Example and Output

BadRequestError: Error code: 400 - {'error': {'message': "You must provide the 'OpenAI-Beta' header to access the Assistants API. Please try again by setting the header 'OpenAI-Beta: assistants=v1'.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_beta'}}

My Insights

No response

Environment

BadRequestError: Error code: 400 - {'error': {'message': "You must provide the 'OpenAI-Beta' header to access the Assistants API. Please try again by setting the header 'OpenAI-Beta: assistants=v1'.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_beta'}}

Final Step

  • I believe the description above is detailed enough for developers to reproduce the issue

Does openai_forward not support the whisper feature?

@beidongjiedeguang

python: 3.10.9
openai_forward: 0.2.4
Command line: nohup openai_forward run --port=36001 > openai_forward.log 2>&1 &

Example:

curl https://api.openai.com/v1/audio/translations \
  -H "Authorization: Bearer sk-v9xxxx......" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/root/audio.m4a" \
  -F model="whisper-1"

Output: {"text":"This is a test. Thank you!"}

If I switch to going through openai_forward:

curl http://127.0.0.1:36001/v1/audio/translations \
  -H "Authorization: Bearer sk-v9xxxxx....." \
  -H "Content-Type: multipart/form-data" \
  -F file="@/root/audio.m4a" \
  -F model="whisper-1"

Output:

{
  "error": {
    "message": "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Log output:

```
nohup: ignoring input
╭─────────────────── 🤗 openai-forward is ready to serve! ────────────────────╮
│                                                                              │
│  base-url                https://api.openai.com                              │
│  route-prefix            /                                                   │
│  api-key-polling-pool    False                                               │
╰──────────────────────────────────────────────────────────────────────────────╯
INFO:     Started server process [3193]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:36001 (Press CTRL+C to quit)
INFO:     127.0.0.1:59014 - "POST /v1/audio/translations HTTP/1.1" 400 Bad Request
```

Question: is the /v1/audio/translations endpoint unsupported? Looking at the source code, there doesn't seem to be any handling for /audio.
Thanks!
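For context, this class of error appears whenever a proxy re-encodes the request body: a multipart/form-data upload must be forwarded byte-for-byte together with its original Content-Type header, since that header carries the boundary string. A minimal sketch of that decision (an illustration, not the project's actual forwarding code):

```python
def forward_body_kwargs(content_type, raw_body: bytes) -> dict:
    """Decide how to pass a request body downstream without corrupting it.

    Re-serializing a multipart/form-data body as JSON, or dropping the
    boundary from its Content-Type, produces exactly the
    "could not parse the JSON body" 400 shown above.
    """
    headers = {}
    if content_type:
        # Preserve the client's header verbatim, boundary included.
        headers["Content-Type"] = content_type
    # Always forward the raw bytes untouched.
    return {"data": raw_body, "headers": headers}

kwargs = forward_body_kwargs(
    "multipart/form-data; boundary=x123",
    b"--x123\r\n...\r\n--x123--\r\n",
)
```

Any HTTP client used for the upstream call (e.g. aiohttp's `session.post(..., data=..., headers=...)`) can then send these kwargs through unchanged.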

Update docs

Checklist

  • I believe the idea is awesome and would benefit the framework
  • I am willing to submit a PR to implement this idea.

Detailed Description

Update the latest usage documentation.

Looking forward to an integrated accessToken-to-token conversion service

Checklist

  • I believe this idea is great and will make the framework better
  • I have searched the issues, including closed ones, for similar requests

Detailed Description

Integrating an accessToken-to-token conversion service would make this perfect!

Automatic retry mechanism

Automatic retry mechanism: requests are retried automatically on failure.
Author, may I ask under which circumstances a retry is currently triggered? Could a configuration option be added so that retries happen on user-specified error codes? For example, with 429 configured and multiple keys, when one key exceeds its rate limit (429), the request would automatically be retried with another key.
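The requested behavior could be sketched as follows. This is a hypothetical illustration of retry-with-key-rotation, not openai-forward's actual implementation; `send(key)` stands in for whatever function performs the upstream request and returns a status code:

```python
import itertools

def call_with_key_rotation(send, keys, retry_codes=frozenset({429}), max_attempts=None):
    """Hypothetical sketch: on a retryable status (e.g. 429), retry the
    call with the next key in the pool until one succeeds or the pool
    is exhausted. Returns the (key, status) of the last attempt.
    """
    max_attempts = max_attempts or len(keys)
    pool = itertools.cycle(keys)
    key, status = None, None
    for _ in range(max_attempts):
        key = next(pool)
        status = send(key)
        if status not in retry_codes:
            return key, status
    return key, status  # every attempt hit a retryable code

# Simulated backend: the first key is rate-limited, the second succeeds.
responses = {"sk-a": 429, "sk-b": 200}
key, status = call_with_key_rotation(lambda k: responses[k], ["sk-a", "sk-b"])
```

Making `retry_codes` configurable (as the issue asks) would then be a matter of reading the set of status codes from the service's configuration.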
