ollama_proxy_server's Introduction

Ollama Proxy Server

Ollama Proxy Server is a lightweight reverse proxy server designed for load balancing and rate limiting. It is licensed under the Apache 2.0 license and can be installed using pip. This README covers setting up, installing, and using the Ollama Proxy Server.

Prerequisites

Make sure you have Python (>=3.8) and Apache installed on your system before proceeding.

Installation

  1. Clone or download the ollama_proxy_server repository from GitHub: https://github.com/ParisNeo/ollama_proxy_server
  2. Navigate to the cloned directory in the terminal and run pip install -e .
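
For example, a typical installation looks like this (an illustrative sketch; adjust paths to your setup):

git clone https://github.com/ParisNeo/ollama_proxy_server.git
cd ollama_proxy_server
pip install -e .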

Installation using Dockerfile

  1. Clone this repository as described above.
  2. Build your container image with the Dockerfile provided by this repository.

Podman

cd ollama_proxy_server
podman build -t ollama_proxy_server:latest .

Docker

cd ollama_proxy_server
docker build -t ollama_proxy_server:latest .

Configuration

Servers configuration (config.ini)

Create a file named config.ini in the same directory as your script, containing server configurations:

[DefaultServer]
url = http://localhost:11434
queue_size = 5

[SecondaryServer]
url = http://localhost:3002
queue_size = 3

# Add as many servers as needed, in the same format as [DefaultServer] and [SecondaryServer].

Replace http://localhost:11434 with the URL and port of each Ollama server. The queue_size value sets the maximum number of requests that can be queued for that server at any given time.

Authorized users (authorized_users.txt)

Create a file named authorized_users.txt in the same directory as your script, containing one user:key pair per line:

user1:key1
user2:key2

Replace user1, key1, user2, and key2 with the desired username and API key for each user. You can also use the ollama_proxy_add_user utility to add a user and generate a key automatically:

ollama_proxy_add_user --users_list [path to the authorized_users.txt file]
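
For example, assuming authorized_users.txt sits in the current directory:

ollama_proxy_add_user --users_list ./authorized_users.txt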

Usage

Starting the server

Start the Ollama Proxy Server by running the following command in your terminal:

python3 ollama_proxy_server/main.py --config [configuration file path] --users_list [users list file path] --port [port number to access the proxy]

The server listens on a port in the 808x range (8080, 8081, and so on); if a port is already taken by another instance, the first available one is selected automatically.
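
For example, using the files created above (paths and port are illustrative):

python3 ollama_proxy_server/main.py --config ./config.ini --users_list ./authorized_users.txt --port 8080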

Client requests

To send a request to the server, use the following command:

curl -X <METHOD> -H "Authorization: Bearer <USER_KEY>" http://localhost:<PORT>/<PATH> [--data <POST_DATA>]

Replace <METHOD> with the HTTP method (GET or POST), <USER_KEY> with a valid user:key pair from your authorized_users.txt, <PORT> with the port number of your running Ollama Proxy Server, and <PATH> with the target endpoint URL (e.g., "/api/generate"). If you are making a POST request, include the --data <POST_DATA> option to send data in the body.

For example:

curl -X POST -H "Authorization: Bearer user1:key1" http://localhost:8080/api/generate --data '{"model": "mixtral:latest", "prompt": "Once upon a time,", "stream": false, "temperature": 0.3, "max_tokens": 1024}'
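
A GET request works the same way, only without a body. For example, listing the models available behind the proxy (assuming it listens on port 8080):

curl -X GET -H "Authorization: Bearer user1:key1" http://localhost:8080/api/tags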

Starting the server using the created Container-Image

To start the proxy in the background with the image created above, you can use either:

  1. docker: docker run -d --name ollama-proxy-server -p 8080:8080 ollama_proxy_server:latest
  2. podman: podman run -d --name ollama-proxy-server -p 8080:8080 ollama_proxy_server:latest
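
The container still needs the config.ini and authorized_users.txt files. One way to provide them is to bind-mount them into the container; the sketch below is hypothetical and assumes the proxy reads them from /app inside the container (check the Dockerfile for the actual working directory and entrypoint):

docker run -d --name ollama-proxy-server -p 8080:8080 \
  -v $(pwd)/config.ini:/app/config.ini \
  -v $(pwd)/authorized_users.txt:/app/authorized_users.txt \
  ollama_proxy_server:latest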

ollama_proxy_server's People

Contributors

dependabot[bot], jammsen, parisneo, petritavd, vfbfoerst

ollama_proxy_server's Issues

qsize and not stream

Hi ParisNeo.

First of all, thank you for sharing this awesome project. I just checked it for parallel requests with Ollama; I found this repo from an Ollama issue.

I've run two Ollama instances behind ollama_proxy_server and found two problems. One is that streaming responses don't work; only the final data is sent. The other is that qsize isn't reported correctly. I attached a screenshot.

This is just a first try. I'll test it more and give more feedback.

Thanks.

[Bug?] TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'const'

Hi :),

I tried the new version with the "-d" option but got the following error message when starting the proxy via

ollama_proxy_server -d --config ./config.ini --port 8080

Traceback (most recent call last):
  File "/usr/local/bin/ollama_proxy_server", line 33, in <module>
    sys.exit(load_entry_point('ollama-proxy-server', 'console_scripts', 'ollama_proxy_server')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ollama_proxy_server/ollama_proxy_server/main.py", line 50, in main
    parser.add_argument('-d', '--deactivate_security', action='store_true', const=True, default=False, help='Deactivates security')
  File "/usr/local/lib/python3.11/argparse.py", line 1450, in add_argument
    action = action_class(**kwargs)

TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'const'

Environment: Debian GNU/Linux 12 (bookworm), Python 3.11

I was able to fix it via:

sed -i "s/action='store_true'/action='store_const'/g" ollama_proxy_server/main.py

(-> changed action='store_true' to action='store_const' in main.py.)
The proxy then starts as expected.
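
For reference, the TypeError comes from passing const=True to a store_true action, which does not accept that keyword. Since store_true already stores True by itself, an equivalent fix is simply to drop const=True from the call (a sketch, not tested against this repository):

parser.add_argument('-d', '--deactivate_security', action='store_true', default=False, help='Deactivates security')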

[Bug] Streaming doesn't work

Using this proxy with continuedev, responses are not streamed. The server answers correctly, but the response is only output once the generation is completely finished.

Unexpected non-whitespace character after JSON at position 214

So close to having this working.

The proxy and Ollama are on different machines. I have gone through a number of issues; for instance, it seems to want a real hostname rather than just an IP address.

But now I am down to this error. I am using Chaxbox on a third machine for the connection; it connects to LLM studio fine.

Unexpected non-whitespace character after JSON at position 214

Thanks

AttributeError: 'RequestHandler' object has no attribute 'user'

config.ini

[DefaultServer]
url = http://localhost:11434
queue_size = 3

Command

ollama_proxy_server --port 8080 -d

Error Log

Starting server
Running server on port 8080
192.168.110.5 - - [20/Feb/2024 17:02:12] "GET /api/tags HTTP/1.1" - -
192.168.110.5 - - [20/Feb/2024 17:02:12] "GET /api/tags HTTP/1.1" 200 -
192.168.110.5 - - [20/Feb/2024 17:02:15] "GET /api/version HTTP/1.1" - -
192.168.110.5 - - [20/Feb/2024 17:02:15] "GET /api/version HTTP/1.1" 200 -
192.168.110.5 - - [20/Feb/2024 17:02:18] "POST /api/chat HTTP/1.1" - -
----------------------------------------
Exception occurred during processing of request from ('192.168.110.5', 62516)
Traceback (most recent call last):
  File "/home/fvt/miniconda3/envs/llm/lib/python3.11/socketserver.py", line 691, in process_request_thread
    self.finish_request(request, client_address)
  File "/home/fvt/miniconda3/envs/llm/lib/python3.11/socketserver.py", line 361, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/home/fvt/miniconda3/envs/llm/lib/python3.11/socketserver.py", line 755, in __init__
    self.handle()
  File "/home/fvt/miniconda3/envs/llm/lib/python3.11/http/server.py", line 431, in handle
    self.handle_one_request()
  File "/home/fvt/miniconda3/envs/llm/lib/python3.11/http/server.py", line 419, in handle_one_request
    method()
  File "/home/fvt/Desktop/Code/ollama_proxy_server/ollama_proxy_server/main.py", line 86, in do_POST
    self.proxy()
  File "/home/fvt/Desktop/Code/ollama_proxy_server/ollama_proxy_server/main.py", line 145, in proxy
    self.add_access_log_entry(event="gen_request", user=self.user, ip_address=client_ip, access="Authorized", server=min_queued_server[0], nb_queued_requests_on_server=que.qsize())
                                                        ^^^^^^^^^
AttributeError: 'RequestHandler' object has no attribute 'user'

My Research

if not deactivate_security and not self._validate_user_and_key():

The if-logic above is problematic: because of short-circuit evaluation, self._validate_user_and_key() is never executed when the first expression, not deactivate_security, is False, so self.user never gets set.

I fixed the problem by assigning an initial value to self.user:

def proxy(self):
+ self.user = "unknown"
  if not deactivate_security and not self._validate_user_and_key():
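
For illustration, a minimal standalone sketch of the short-circuit behaviour described above (the names come from the issue; the function body is hypothetical):

deactivate_security = True   # security disabled, e.g. via the -d flag

def _validate_user_and_key():
    # presumably the real handler also assigns self.user in here
    return True

if not deactivate_security and not _validate_user_and_key():
    pass

# With deactivate_security = True the left operand is already False, so Python
# never calls _validate_user_and_key(); self.user is therefore never assigned,
# and the later access in add_access_log_entry raises the AttributeError.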

Support for real_ip_header

Not sure if this is something the proxy can support, but I noticed when running it inside Docker that all requests appear to come from the same IP (not sure which one; I guess it's the Docker host IP. The proxy's internal IP is 172.17.0.18 and the requests come from 172.17.0.1).
I have an nginx reverse proxy in front of it, configured to forward the real IP header, but ollama_proxy_server apparently doesn't recognize it or doesn't receive it properly. I am not sure; maybe I am missing something and some additional configuration is needed.

More verbose errors/logs?

Hi! Thanks for making this available!

I'm currently debugging this issue: curl: (52) Empty reply from server, which is the response from running the example:
curl -X POST -H "Authorization: Bearer user1:key1" http://my_hostname:8080/api/generate --data '{'model':'gemma2,'prompt': "Once apon a time,","stream":false,"temperature": 0.3,"max_tokens": 1024}'

This is what's logged by the ollama-proxy-server:

[23/Jul/2024 16:47:19] "POST /api/generate HTTP/1.1" - -

When checking the logs of the ollama container, there is no incoming traffic.
Originally I had this working, and then it stopped. Sometimes it only worked from the same server and not from any other machine on the WAN, so I fiddled around with binding the IP addresses of the ollama container and adapting the proxy's config.ini.

How can I get more verbose logs from the ollama-proxy-server to gather more information for debugging this?

[Feature-Request] Add the possibility to use it without bearer token

Hi :)
Just found your project and it seems to be exactly the project I was looking for. Cool!
It would be nice if there were an option to opt out of the authorization feature (or is it already there?), as many Ollama projects, e.g. ollama-webui, don't support the bearer token yet. Opting out would make it possible to use your project with Ollama projects that don't support it yet.

Thank you! :)
