An easy-to-use resource usage tracker (can also serve as a rate limiter). Define a config file and use it right away! Use the production-ready Docker image or run it locally.
- Define a config file (YAML) in the format below:

  ```yaml
  <resource_name>:
    <time>: <limit>
  ```

  As an example:

  ```yaml
  email:
    1m: 2
    1h: 5
    1d: 6
  ```

  Allowed time units are `s`, `m`, `h`, `d`, `w`, `M`, and `y`. The time units should be in ascending order. For example, `1m: 2, 1h: 5` is valid, but `1h: 5, 1m: 2` is not.
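The ascending-order rule can be checked mechanically. A minimal sketch, assuming the unit ordering listed above (`validate_resource` is a hypothetical helper for illustration, not part of Cap-em):

```python
# Ranks for the allowed time units, smallest to largest.
UNIT_RANK = {"s": 0, "m": 1, "h": 2, "d": 3, "w": 4, "M": 5, "y": 6}

def validate_resource(limits):
    """Check that the time keys of one resource (e.g. {"1m": 2, "1h": 5})
    appear in ascending unit order. Returns True if the config is valid."""
    ranks = []
    for time_key in limits:
        unit = time_key[-1]          # the last character is the unit
        if unit not in UNIT_RANK:
            return False             # unknown time unit
        ranks.append(UNIT_RANK[unit])
    # Each unit must be strictly larger than the previous one.
    return all(a < b for a, b in zip(ranks, ranks[1:]))
```

This compares units only (as the rule above states), so it does not reason about total durations like `90s` versus `2m`.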
- Then just use the API to check the limit and track the usage.

  Get usage availability:

  ```shell
  curl -H "Access-ID: 123" http://localhost:8000/api/v1/usage/email
  ```

  ```json
  { "access_in_ms": 0 }
  ```

  This means the user can access the resource right now.
  Register a usage:

  ```shell
  curl -X POST -H "Access-ID: 123" http://localhost:8000/api/v1/usage/email
  ```
  Notice that the `Access-ID` header is required. This is the unique identifier for the user, or any other entity, that is using the resource; it is used to track usage for that particular entity.
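In application code, the two calls above are typically paired: check availability, wait if throttled, then register the usage. A standard-library sketch of that flow (the base URL and helper names are illustrative assumptions, not part of Cap-em):

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local deployment

def usage_request(resource, access_id, register=False):
    """Build the GET (check) or POST (register) request for a resource."""
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/usage/{resource}",
        headers={"Access-ID": access_id},   # required header
        method="POST" if register else "GET",
    )

def wait_seconds(body):
    """Convert the {"access_in_ms": ...} response body to a sleep duration."""
    return json.loads(body)["access_in_ms"] / 1000

def use_resource(resource, access_id):
    """Check availability, wait if throttled, then register one usage."""
    with urllib.request.urlopen(usage_request(resource, access_id)) as resp:
        delay = wait_seconds(resp.read())
    if delay > 0:
        time.sleep(delay)
    urllib.request.urlopen(usage_request(resource, access_id, register=True))
```

A production client would also handle connection errors and HTTP error codes; this only sketches the happy path.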
Say you have a resource called Email, and you want to limit how often users can change their email address: twice per day. You happily code against that, but then the requirements change: twice per day, but not more than once per hour, and no more than 5 per month 🤬 Now Cap-em comes to the rescue! It's an independent service, so you can deploy it alongside your microservices or SOA.
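Following the config format described earlier, that revised requirement would map to something like (hypothetical resource name and values):

```yaml
email:
  1h: 1
  1d: 2
  1M: 5
```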
You can have as many resources and configurations like the above as you like. All you need is a config file, and you can use the service right away!
```shell
docker pull ananto30/cap-em
docker run -p 8000:8000 -e DB_URI=<database_connection_url> -e CONFIG=<base64_of_config_file> ananto30/cap-em
```
Two environment variables are required to run the service: `DB_URI` is the database connection URL, and `CONFIG` is the base64-encoded config file. Check the Makefile#L6 to see how to encode the config file.
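The exact command lives in the Makefile, but encoding the config by hand might look like the following sketch (assumes GNU `base64`; the sample config contents are hypothetical, and the `docker run` line is left commented so the sketch runs without Docker):

```shell
# A sample config file (hypothetical limits).
printf 'email:\n  1m: 2\n' > config.yaml

# Base64-encode the config into CONFIG (-w0 disables line wrapping).
CONFIG=$(base64 -w0 config.yaml)

# Then pass it to the container:
# docker run -p 8000:8000 -e DB_URI="$DB_URI" -e CONFIG="$CONFIG" ananto30/cap-em
```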
`DB_URI` uses the SQLAlchemy URL format. For example:

```
postgresql://user:password@<HOST>:5432/capem
```

Also note that the database must be created before running the service. The service will not create the database for you; it will only create the tables.
| Endpoint | Method | Description |
| --- | --- | --- |
| `/api/v1/help` | GET | Get the help doc |
| `/api/v1/configs` | GET | Check the loaded configs (YAML file) |
| `/api/v1/usage/{resource_name}` | GET | Get the usage availability in milliseconds |
| `/api/v1/usage/{resource_name}` | POST | Register a usage |
- Setup project: `make setup`
- Run the service: `make run`
- Build the image: `make docker-build`
- Run the image: `make docker-run`
The Makefile tries to detect your IP and set it in the `DB_URI` environment variable. If that fails, you need to set it manually.
Tests are best run with an SQLite database, because tests create entries in the DB that must be cleared after each run. If you use anything other than SQLite, make sure to delete those entries for the tests to pass. To use SQLite, set the `DB_URI` environment variable.

```shell
make test
```

Custom DB:

```shell
DB_URI=sqlite:///capem-ut.db make test
```
Priority
- A persistent way to store configs? Like Redis, so that multiple workers can get the same config
- Local caching is good, but how do we share configs across workers when a config changes?
- Endpoint(s) to load/update configs, in bulk or single
Less priority
- gRPC
- Messaging (for event-driven services)
- Non-relational DB support