perdy / starlette-prometheus
Prometheus integration for Starlette.
License: GNU General Public License v3.0
Hi! Is there any official Grafana dashboard? Or any Grafana dashboard at all?
Hi, and thanks for your middleware. Very useful, and it saves so much time!
I would like to open a discussion about the current implementation, which can be a little confusing when it comes to error handling. In my use case I run your middleware with FastAPI, and it works great. However, I would expect the response-related metrics to reflect errors as well, but they don't when an exception leads to an HTTP 500, because of the current flow:
except Exception as e:
    EXCEPTIONS.labels(method=method, path_template=path_template, exception_type=type(e).__name__).inc()
    raise e from None
else:
    REQUESTS_PROCESSING_TIME.labels(method=method, path_template=path_template).observe(after_time - before_time)
    RESPONSES.labels(method=method, path_template=path_template, status_code=response.status_code).inc()
finally:
    REQUESTS_IN_PROGRESS.labels(method=method, path_template=path_template).dec()
Would it be relevant to also record exceptions in the responses metric, inferring a 500 status code?
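The proposal could look roughly like the following standalone sketch. This is not the library's actual code: observe(), the registry, and the metric names are illustrative stand-ins for the middleware's dispatch flow.

```python
from prometheus_client import CollectorRegistry, Counter

# Standalone registry so the sketch doesn't pollute the default one.
registry = CollectorRegistry()
EXCEPTIONS = Counter(
    "starlette_exceptions", "Count of exceptions raised by handlers.",
    ["method", "path_template", "exception_type"], registry=registry)
RESPONSES = Counter(
    "starlette_responses", "Count of responses by status code.",
    ["method", "path_template", "status_code"], registry=registry)

def observe(method, path_template, handler):
    """Run a handler; an exception is counted both as an exception
    and, per the proposal, as a response with an inferred 500 status."""
    try:
        status_code = handler()
    except Exception as e:
        EXCEPTIONS.labels(method=method, path_template=path_template,
                          exception_type=type(e).__name__).inc()
        # The proposed addition: infer a 500 response for the exception.
        RESPONSES.labels(method=method, path_template=path_template,
                         status_code=500).inc()
        raise
    else:
        RESPONSES.labels(method=method, path_template=path_template,
                         status_code=status_code).inc()
```

With this change, a dashboard that only charts the responses metric would still surface handler crashes as 500s, instead of those requests disappearing from the response counts.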
It would be nice to have a sample/reference Grafana dashboard in this project.
Hello,
Just letting you know that your middleware is impacted by this issue in Starlette.
I did not investigate much, but it seems you could fix it without waiting for a fix in Starlette (if you feel like it, of course).
Best Regards
In Starlette 0.12.0, ASGIInstance was removed from types.py, so now there is an exception:
File "/home/max/work/indigo-backend/main.py", line 8, in <module>
from starlette_prometheus import metrics, PrometheusMiddleware
File "/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette_prometheus/__init__.py", line 1, in <module>
from starlette_prometheus.middleware import PrometheusMiddleware
File "/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette_prometheus/middleware.py", line 6, in <module>
from starlette.types import ASGIInstance
ImportError: cannot import name 'ASGIInstance' from 'starlette.types' (/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette/types.py)
Is it possible to fix this?
Thanks
Is there any function for that?
It seems all the PRs are stuck and issues not even acknowledged. @perdy
Is there a way to show the endpoint in openapi.json when using the middleware with FastAPI?
I want a way to inject custom labels like app_id/tenant_id for a SaaS application, so that it is easier to visualize the number of requests and other metrics for a specific tenant.
Note: I can do this by creating a custom metric, but I'm wondering if there's a way to add a custom label (tenant_id) and tell starlette-prometheus to pick it up from the route handlers, or whether it could be used as a dependency or something.
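The middleware does not appear to support injecting extra labels into its built-in metrics, but the custom-metric route the author mentions can be sketched with prometheus_client directly. The metric name, labels, and handle_request() here are all hypothetical; in a real app the increment would live inside a route handler or a FastAPI dependency.

```python
from prometheus_client import CollectorRegistry, Counter

# Separate registry for the sketch; a real app would usually use the default.
registry = CollectorRegistry()

# Hypothetical per-tenant metric, chosen for illustration.
TENANT_REQUESTS = Counter(
    "app_tenant_requests", "Requests handled, partitioned by tenant.",
    ["tenant_id", "path_template"], registry=registry)

def handle_request(tenant_id: str, path_template: str) -> None:
    # Inside a route handler (or a shared dependency), record the tenant.
    TENANT_REQUESTS.labels(tenant_id=tenant_id, path_template=path_template).inc()

handle_request("acme", "/items")
handle_request("acme", "/items")
handle_request("globex", "/items")
```

Note the cardinality caveat from the next issue applies doubly here: tenant_id multiplies the number of time series, so this pattern only scales to a bounded set of tenants.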
As mentioned in #5 (comment), keeping the path in the metric labels can produce very high cardinality if a project has many different paths, which goes against the Prometheus best practices: https://prometheus.io/docs/practices/instrumentation/#do-not-overuse-labels.
I propose removing the path from the labels, or at least making it configurable.
Cheers
Hi, there!
Thanks for a great middleware! I've been using it for a while, and now I want to show response time by URL in Grafana. It works well with regular paths like /users, but not with templated paths like /users/{id}, because in /metrics they appear as actual paths (/users/1, /users/2, etc.).
I've made a quick pull request, #6, for this. Let me know what you think of the idea, and feel free to decline it.
The moment prometheus_multiproc_dir joins the game, the Prometheus client stops working and no metrics are returned. The request to the endpoint is successful (200), but the body is completely empty.
pyenv virtualenv venv_test_prometheus_multiproc
pyenv activate venv_test_prometheus_multiproc
pip install starlette-prometheus fastapi requests
Here are the exact requirements:
certifi==2020.6.20
chardet==3.0.4
fastapi==0.59.0
idna==2.10
prometheus-client==0.7.1
pydantic==1.6
requests==2.24.0
starlette==0.13.4
starlette-prometheus==0.7.0
urllib3==1.25.9
from fastapi import FastAPI
from starlette.testclient import TestClient
from starlette_prometheus import metrics, PrometheusMiddleware
import tempfile
import os

with tempfile.TemporaryDirectory() as tmpdir:
    print('created temporary directory', tmpdir)
    os.environ["prometheus_multiproc_dir"] = tmpdir

    app = FastAPI()

    @app.get("/")
    def read_root():
        return {"Hello": "World"}

    @app.get("/something")
    def read_something():
        return {"Hello": "something"}

    app.add_middleware(PrometheusMiddleware)
    app.add_route("/metrics/", metrics)

    client = TestClient(app)
    print(client.get("/").content)
    print(client.get("/").content)
    print(client.get("/something").content)
    print(client.get("/metrics/").content)
You will see the following output:
$ python main.py
created temporary directory /tmp/tmp8kcanvsd
b'{"Hello":"World"}'
b'{"Hello":"World"}'
b'{"Hello":"something"}'
b''
The metrics endpoint returns nothing.
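A possible workaround, sketched from the prometheus_client multiprocess documentation rather than from this library: in multiprocess mode the default registry cannot be scraped directly, and metrics must instead be aggregated from the per-process sample files with MultiProcessCollector. The function name here is illustrative.

```python
import os
import tempfile

# The directory must exist and be set before prometheus_client creates any
# metrics; the env var is spelled prometheus_multiproc_dir in older client
# versions and PROMETHEUS_MULTIPROC_DIR in newer ones.
os.environ.setdefault("prometheus_multiproc_dir", tempfile.mkdtemp())

from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client import multiprocess

def render_multiprocess_metrics(path=None) -> bytes:
    """Aggregate the per-process sample files and render text-format metrics."""
    registry = CollectorRegistry()
    # Reads the env var when path is None.
    multiprocess.MultiProcessCollector(registry, path=path)
    return generate_latest(registry)
```

The resulting bytes could be served from a custom route (with media_type set to prometheus_client's CONTENT_TYPE_LATEST) in place of the bundled metrics view, which presumably explains the empty body above: the bundled view reads a registry that has no data in multiprocess mode.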
Is there provision in this plugin to add a custom metric? We are using FastAPI to host a custom application, and I need the ability to add a custom metric.
Any info on how to do this would be appreciated.
Thanks
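Custom metrics are normally created with prometheus_client directly rather than through this middleware. A minimal sketch, assuming the bundled /metrics view serves prometheus_client's default registry (the metric name and label below are illustrative):

```python
from prometheus_client import REGISTRY, Histogram

# Hypothetical application metric; registering it against the default
# registry means any exporter serving that registry will include it.
JOB_DURATION = Histogram("myapp_job_duration_seconds",
                         "Time spent processing a job.", ["job_type"])

def process_job(job_type: str) -> None:
    # time() is a context manager that observes the elapsed seconds.
    with JOB_DURATION.labels(job_type=job_type).time():
        pass  # application work goes here

process_job("batch")
```

After a few calls, the histogram's _count, _sum, and _bucket samples appear alongside the middleware's own starlette_* metrics on the same scrape endpoint.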
Can you release a new version of starlette-prometheus that includes the feature of filtering unhandled paths (merged in #13) ?
I'm a new user of Starlette, looking to monitor some Gunicorn processes for my Starlette server. This library looks promising, and I've successfully integrated it and viewed the plain-text stats at /metrics.
However, I'd like a better visualization of these performance metrics. I've looked at integrating Grafana, but am having difficulty (https://prometheus.io/docs/visualization/grafana/ looks promising).
I'm looking for the most basic level of monitoring; the console templates at https://prometheus.io/docs/visualization/consoles/ look promising.
It'd be really nice to have a companion to /metrics with a simple HTML page. I think I'd like to see an interface like this:

from starlette.applications import Starlette
from starlette_prometheus import metrics, metric_viz, PrometheusMiddleware
app = Starlette()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics/", metrics)
app.add_route("/metric-viz/", metric_viz)
Hi, there.
In short, I created a new project over the weekend and started using the latest prometheus_client==0.8.0. After also adding starlette-prometheus, there's a version conflict:

ERROR: starlette-prometheus 0.7.0 has requirement prometheus_client<0.8,>=0.7, but you'll have prometheus-client 0.8.0 which is incompatible.
Can you consider bumping requirement versions so we can use the latest versions?
Hi @perdy! Would you mind releasing a new version of starlette-prometheus with the updated dependency on the new version of prometheus-client?
Thanks!
Hi, I could not find any way to add custom metrics. Is that not supported?
When submounting routes, rather than the full path template being used, only the mount prefix is used.
Running the following app:
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.responses import Response
from starlette.routing import Mount, Route
from starlette_prometheus import PrometheusMiddleware, metrics
async def foo(request):
    return Response()

async def bar_baz(request):
    return Response()

routes = [
    Route("/foo", foo),
    Mount("/bar", routes=[Route("/baz", bar_baz)]),
    Route("/metrics", metrics),
]

middleware = [Middleware(PrometheusMiddleware)]
app = Starlette(routes=routes, middleware=middleware)
Then making the following requests:
$ curl localhost:8000/foo
$ curl localhost:8000/bar/baz
$ curl localhost:8000/metrics
Gives the following output (I only include one metric as an example, but it's the same for all of them). Note that the request to localhost:8000/bar/baz gets a path label of /bar.
starlette_requests_total{method="GET",path_template="/foo"} 1.0
starlette_requests_total{method="GET",path_template="/bar"} 1.0
starlette_requests_total{method="GET",path_template="/metrics"} 1.0
Hello! I have a dependency conflict: the current version from PyPI, 0.7.0, still lists the prometheus-client dependency as <0.8 in Poetry, but another package needs prometheus-client >=0.8.
I see that the updated dependency has already been merged since August; is it possible to release something like v0.7.1 with it?
Hello!
I've tried to use starlette_prometheus with prometheus-client==0.16.0, but there is a dependency conflict:
ERROR: Cannot install -r requirements.txt (line 20) and prometheus-client==0.16.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested prometheus-client==0.16.0
starlette-prometheus 0.9.0 depends on prometheus_client<0.13 and >=0.12
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
Is there any reason starlette_prometheus must be installed with prometheus-client 0.12.x only?
I am using PrometheusMiddleware from starlette_prometheus. Every second or so it generates a log line, which grows the log file. How do I disable this specific log?
INFO: 127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO: 127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
............several million times .............................
INFO: 127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
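These lines come from the server's access log rather than from the middleware. One stdlib-only way to silence just the scrape requests, assuming uvicorn is the server (the logger name uvicorn.access is uvicorn's access logger; the filter class itself is mine), is a logging filter:

```python
import logging

class EndpointFilter(logging.Filter):
    """Drop access-log records whose message mentions the /metrics endpoint."""
    def filter(self, record: logging.LogRecord) -> bool:
        return "/metrics" not in record.getMessage()

# Attach the filter to uvicorn's access logger; scrape requests stop being
# written to the log while all other access lines are kept.
logging.getLogger("uvicorn.access").addFilter(EndpointFilter())
```

The same idea works for other servers by attaching the filter to whichever logger emits their access lines.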
Hi,
I'm seeing an issue with FastAPI, where I am raising an exception in a route handler. I've created a small reproducer:
from fastapi import FastAPI
from starlette.middleware import Middleware
from starlette_prometheus import PrometheusMiddleware
middleware = [
Middleware(PrometheusMiddleware)
]
app = FastAPI(middleware=middleware)
@app.get("/")
def read_root():
raise ValueError("Test error")
# return {"Hello": "World"}
Here's the output from running the reproducer and calling it with curl localhost:8000/:
$ uvicorn example:app
INFO: Started server process [5099]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 208, in __call__
await super().__call__(scope, receive, send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 57, in __call__
task_group.cancel_scope.cancel()
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 572, in __aexit__
raise ExceptionGroup(exceptions)
anyio._backends._asyncio.ExceptionGroup: 2 exceptions were raised in the task group:
----------------------------
Traceback (most recent call last):
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 30, in coro
await self.app(scope, request.receive, send_stream.send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 259, in handle
await self.app(scope, receive, send)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 61, in app
response = await func(request)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 226, in app
raw_response = await run_endpoint_function(
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/to_thread.py", line 28, in run_sync
return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread
return await future
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 754, in run
result = context.run(func, *args)
File "./example.py", line 14, in read_root
raise ValueError("Test error")
ValueError: Test error
----------------------------
Traceback (most recent call last):
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 53, in dispatch
response = await call_next(request)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 35, in call_next
message = await recv_stream.receive()
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/streams/memory.py", line 89, in receive
await receive_event.wait()
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1655, in wait
await checkpoint()
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 440, in checkpoint
await sleep(0)
File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 644, in sleep
await __sleep0()
File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 638, in __sleep0
yield
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 55, in __call__
response = await self.dispatch_func(request, call_next)
File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 65, in dispatch
RESPONSES.labels(method=method, path_template=path_template, status_code=status_code).inc()
UnboundLocalError: local variable 'status_code' referenced before assignment
anyio==3.4.0
asgiref==3.4.1
click==8.0.3
fastapi==0.70.1
h11==0.12.0
idna==3.3
prometheus-client==0.11.0
pydantic==1.9.0
sniffio==1.2.0
starlette==0.16.0
starlette-prometheus==0.8.0
typing-extensions==4.0.1
uvicorn==0.16.0
It seems like Starlette has started raising asyncio.exceptions.CancelledError, which is not based on Exception (caught here) but rather BaseException.
I believe this was introduced in version 0.15.0 of Starlette, in PR encode/starlette#1157.
I've tried changing the exception catching to include both, i.e. except (Exception, asyncio.exceptions.CancelledError), and this seems to restore the expected behavior.
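The class-hierarchy claim is easy to verify; this snippet only demonstrates why a bare except Exception in the middleware misses the cancellation:

```python
import asyncio
import sys

# Since Python 3.8, asyncio.CancelledError inherits from BaseException
# rather than Exception, so `except Exception` no longer catches it and
# the middleware's status_code is never assigned before the finally path.
assert sys.version_info >= (3, 8)
print(issubclass(asyncio.CancelledError, Exception))      # False
print(issubclass(asyncio.CancelledError, BaseException))  # True
```

Catching CancelledError explicitly, as the reporter tried, is therefore a plausible fix, though the cancellation should still be re-raised so task groups can unwind correctly.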