
starlette_exporter

Prometheus exporter for Starlette and FastAPI

starlette_exporter collects basic metrics for Starlette and FastAPI based applications:

  • starlette_requests_total: a counter representing the total requests
  • starlette_request_duration_seconds: a histogram representing the distribution of request response times
  • starlette_requests_in_progress: a gauge that keeps track of how many concurrent requests are being processed

Metrics include labels for the HTTP method, the path, and the response status code.

starlette_requests_total{method="GET",path="/",status_code="200"} 1.0
starlette_request_duration_seconds_bucket{le="0.01",method="GET",path="/",status_code="200"} 1.0

Use the HTTP handler handle_metrics at path /metrics to expose a metrics endpoint to Prometheus.

Usage

pip install starlette_exporter

Starlette

from starlette.applications import Starlette
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = Starlette()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", handle_metrics)

...

FastAPI

from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = FastAPI()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", handle_metrics)

...

Options

app_name: Sets the value of the app_name label for exported metrics (default: starlette).

prefix: Sets the prefix of the exported metric names (default: starlette).

labels: Optional dict containing default labels that will be added to all metrics. The values can be either a static value or a callback function that retrieves a value from the Request object. See below for examples.

exemplars: Optional dict containing label/value pairs. The "value" should be a callback function that returns the desired value at runtime.

group_paths: Populate the path label using named parameters (if any) in the router path, e.g. /api/v1/items/{item_id}. This will group requests together by endpoint (regardless of the value of item_id). As of v0.18.0, the default is True, and changing to False is highly discouraged (see warnings about cardinality).

filter_unhandled_paths: setting this to True will cause the middleware to ignore requests with unhandled paths (in other words, 404 errors). This helps prevent filling up the metrics with 404 errors and/or intentionally bad requests. Default is True.

buckets: accepts an optional list of numbers to use as histogram buckets. The default value is None, which will cause the library to fall back on the Prometheus defaults (currently [0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0]).

skip_paths: accepts an optional list of paths, or regular expressions for paths, that will not collect metrics. The default value is None, which will cause the library to collect metrics on every requested path. This option is useful to avoid collecting metrics on health check, readiness or liveness probe endpoints.

skip_methods: accepts an optional list of methods that will not collect metrics. The default value is None, which will cause the library to collect request metrics for every method. This option is useful to avoid collecting metrics for methods such as OPTIONS, which only describe the communication options for an endpoint.
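The path-skipping behavior described above can be sketched in isolation. This is not the library's internal code, just an illustration of how exact paths and regular expressions can both be honored:

```python
import re
from typing import Iterable, Optional


def should_skip(path: str, skip_paths: Optional[Iterable[str]]) -> bool:
    """Return True if metric collection should be skipped for this path.

    Each entry may be an exact path or a regular expression; an exact
    path with no regex metacharacters simply matches itself.
    """
    if not skip_paths:
        return False
    return any(re.fullmatch(pattern, path) for pattern in skip_paths)
```

For example, skip_paths=["/health", "/internal/.*"] would skip both the health check endpoint and everything under /internal/.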

always_use_int_status: accepts a boolean. The default value is False. If set to True the library will attempt to convert the status_code value to an integer (e.g. if you are using HTTPStatus, HTTPStatus.OK will become 200 for all metrics).
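The conversion that always_use_int_status performs can be illustrated with a small standalone helper (a sketch, not the library's actual code):

```python
from http import HTTPStatus


def to_int_status(status_code):
    """Coerce a status code (e.g. an HTTPStatus member) to a plain int.

    Returns the original value unchanged if conversion fails.
    """
    try:
        return int(status_code)
    except (TypeError, ValueError):
        return status_code
```

Without this conversion, an HTTPStatus member renders as "HTTPStatus.OK" in the status_code label; with it, the label value is "200".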

optional_metrics: a list of pre-defined metrics that can be optionally added to the default metrics. The following optional metrics are available:

  • response_body_size: a counter that tracks the size of response bodies for each endpoint
  • request_body_size: a counter that tracks the size of request bodies for each endpoint

For optional metric examples, see below.

Full example:

app.add_middleware(
  PrometheusMiddleware,
  app_name="hello_world",
  prefix="myapp",
  labels={
      "server_name": os.getenv("HOSTNAME"),
  },
  buckets=[0.1, 0.25, 0.5],
  skip_paths=["/health"],
  skip_methods=["OPTIONS"],
  always_use_int_status=False,
  exemplars=lambda: {"trace_id": get_trace_id()},  # get_trace_id: your function that returns a trace id
)

Labels

The included metrics have built-in default labels such as app_name, method, path, and status_code. Additional default labels can be added by passing a dictionary to the labels arg to PrometheusMiddleware. Each label's value can be either a static value or, optionally, a callback function. The built-in default label names are reserved and cannot be reused.

If a callback function is used, it will receive the Request instance as its argument.

app.add_middleware(
  PrometheusMiddleware,
  labels={
      "service": "api",
      "env": os.getenv("ENV"),
  },
)

Ensure that label names follow Prometheus naming conventions and that label values are constrained (see this writeup from Grafana on cardinality).
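Prometheus label names must match [a-zA-Z_][a-zA-Z0-9_]*, and names beginning with a double underscore are reserved. A quick way to check candidate names before using them (a convenience sketch, not part of starlette_exporter):

```python
import re

# Valid Prometheus label names: [a-zA-Z_][a-zA-Z0-9_]*
# (double-underscore prefixes are reserved for internal use)
_LABEL_NAME_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*")


def is_valid_label_name(name: str) -> bool:
    return bool(_LABEL_NAME_RE.fullmatch(name)) and not name.startswith("__")
```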

Label helpers

from_header(key: str, allowed_values: Optional[Iterable]): a convenience function for using a header value as a label.

allowed_values allows you to supply a list of allowed values. If supplied, header values not in the list will result in an empty string being returned. This allows you to constrain the label values, reducing the risk of excessive cardinality.

Do not use headers that could contain unconstrained values (e.g. user id) or user-supplied values.

from starlette_exporter import PrometheusMiddleware, from_header

app.add_middleware(
  PrometheusMiddleware,
  labels={
      "host": from_header("X-Internal-Org", allowed_values=("accounting", "marketing", "product")),
  },
)
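The behavior of from_header can be sketched as follows. This is an illustrative reimplementation of the helper, not the library's actual source:

```python
from typing import Iterable, Optional


def from_header(key: str, allowed_values: Optional[Iterable[str]] = None):
    """Return a label callback that reads a header value from the request.

    Header values outside allowed_values collapse to "", constraining
    label cardinality.
    """
    def label_callback(request) -> str:
        value = request.headers.get(key, "")
        if allowed_values is not None and value not in allowed_values:
            return ""
        return value

    return label_callback
```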

Exemplars

Exemplars are used for labeling histogram observations or counter increments with a trace id. This allows adding trace ids to your charts (for example, latency graphs could include traces corresponding to various latency buckets).

To add exemplars to starlette_exporter metrics, pass the exemplars argument to PrometheusMiddleware: a callback function that returns a dict mapping a label to a string value (typically the current trace id).

Example:

# must use `handle_openmetrics` instead of `handle_metrics` for exemplars to appear in /metrics output.
from starlette_exporter import PrometheusMiddleware, handle_openmetrics

app.add_middleware(
  PrometheusMiddleware,
  exemplars=lambda: {"trace_id": get_trace_id()}  # supply your own callback function
)

app.add_route("/metrics", handle_openmetrics)

Exemplars are only supported by the openmetrics-text exposition format. A new handle_openmetrics handler function is provided (see above example).

For more information, see the Grafana exemplar documentation.

Optional metrics

Optional metrics are pre-defined metrics that can be added to the default metrics.

  • response_body_size: the size of response bodies returned, in bytes
  • request_body_size: the size of request bodies received, in bytes

Example:

from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics
from starlette_exporter.optional_metrics import response_body_size, request_body_size

app = FastAPI()
app.add_middleware(PrometheusMiddleware, optional_metrics=[response_body_size, request_body_size])

Custom Metrics

starlette_exporter will export all the Prometheus metrics from the process, so custom metrics can be created by using the prometheus_client API.

Example:

from prometheus_client import Counter
from starlette.responses import RedirectResponse

REDIRECT_COUNT = Counter("redirect_total", "Count of redirects", ["redirected_from"])

async def some_view(request):
    REDIRECT_COUNT.labels("some_view").inc()
    return RedirectResponse(url="https://example.com", status_code=302)

The new metric will now be included in the /metrics endpoint output:

...
redirect_total{redirected_from="some_view"} 2.0
...

Multiprocess mode (gunicorn deployments)

Running starlette_exporter in a multiprocess deployment (e.g. with gunicorn) requires the PROMETHEUS_MULTIPROC_DIR environment variable to be set, as well as extra gunicorn configuration.

For more information, see the Prometheus Python client documentation.
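As a sketch of the extra gunicorn configuration (following the prometheus_client multiprocess documentation; adapt to your deployment), a gunicorn config file typically adds a child_exit hook so that metrics from dead workers are cleaned up:

```python
# gunicorn_conf.py -- assumes PROMETHEUS_MULTIPROC_DIR is set to a
# writable directory before gunicorn starts its workers
from prometheus_client import multiprocess


def child_exit(server, worker):
    # Mark the exiting worker's metric files as dead so the
    # multiprocess collector can aggregate and clean them up.
    multiprocess.mark_process_dead(worker.pid)
```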

Developing

This package supports Python 3.6+.

git clone https://github.com/stephenhillier/starlette_exporter
cd starlette_exporter
pytest tests

License

Code released under the Apache License, Version 2.0.

Dependencies

https://github.com/prometheus/client_python (>= 0.12)

https://github.com/encode/starlette

Credits

Starlette - https://github.com/encode/starlette

FastAPI - https://github.com/tiangolo/fastapi

Flask exporter - https://github.com/rycus86/prometheus_flask_exporter

Alternate Starlette exporter - https://github.com/perdy/starlette-prometheus


starlette_exporter's Issues

Feature request: skip_paths take patterns

The flask-prometheus-exporter uses excluded_paths which can take regular expressions like "/management/.*" which then excludes all paths under that path. I would like to do the same (I am trying to replace flask-prometheus-exporter with starlette-exporter without affecting metrics), but "skip_paths" just takes a list of strings. To get the same functionality I would have to list every endpoint under "/management/" - and if ever another endpoint is added to management (in another module), I would have to remember to add that as well.

Therefore, I request that skip_paths accepts regular expressions, or some kind of pattern, or at least a leading base path. I believe this won't break any backwards compatibility since an exact string can be seen as a regular expression that just matches that string (especially given the limited character set of URLs). If backwards compatibility is a concern, one could simply pass re.Pattern objects or similar to skip_paths and treat instances of str as before.

How to see python_*** metrics

I can use starlette_exporter to generate starlette_request_*** metrics now, but I don't see python_*** metrics.
Is there a flag to enable it?
(Sorry, I cannot find related information in issues and code... If I miss anything, please let me know, thanks.)

Library missing py.typed marker

mypy error:

error: Skipping analyzing "starlette_exporter": module is installed, but missing library stubs or py.typed marker
  • starlette-exporter = "0.14.0"
  • mypy = "0.971"
  • python = "3.10"

add hostname to labels

The hostname is part of the URL, and an API may be configured for multiple domains.
So, adding it to the labels of the request count/time metrics could enrich them.

status_code displays enum name in stats

When using code like the example below (with fastapi in this case)

from starlette_exporter import PrometheusMiddleware, handle_metrics
from http import HTTPStatus
from fastapi import FastAPI, HTTPException, Response
import uvicorn

app = FastAPI()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", handle_metrics)


@app.get("/")
async def root():
    return Response(status_code=HTTPStatus.OK)


@app.get("/200")
async def root():
    return {"I am returning 200"}


@app.get("/500")
async def root():
    raise HTTPException(status_code=500)
    return {"I am returning 200"}


@app.get("/500v2")
async def root():
    raise HTTPException(status_code=HTTPStatus.INTERNAL_SERVER_ERROR)
    return {"I am returning 200"}


if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=5000, log_level="info")

The status code that is displayed in the metrics is the enum name, not the numeric code.

starlette_request_duration_seconds_bucket{app_name="starlette",le="10.0",method="GET",path="/",status_code="HTTPStatus.OK"} 2.0

Is this desired behaviour?

Perhaps it might be better to attempt to convert the status code to an int?

I think all we would have to do is maybe do a type check and convert to int right here
https://github.com/stephenhillier/starlette_exporter/blob/master/starlette_exporter/middleware.py#L174

docker host problem

I deployed my code in Docker.
I get an error when I curl localhost:8000/metrics from outside the container.
Log output:
curl: (52) Empty reply from server

Executing the same curl command inside the container shows the metrics normally.

`from_header` but for `Response`?

Is there any way to create a label based on response in starlette_exporter?
My use-case is adding a hit-or-miss={hit,miss} label based on the x-fastapi-cache header.

Notice: group_paths and filter_unhandled_paths will both default to True

The default values for group_paths and filter_unhandled_paths will be changing from False to True. Most users will already want to set these both to True, but when these options were added they defaulted to False to avoid changes to existing behavior. See #78

The changes will be made on or after Nov 17, 2023 (a week from now). A deprecation notice is now part of v0.17.0.

Feedback welcome!

How to add custom metrics ?

Hi again! Last issue for the period, I believe ;-)

Would you mind adding a quick doc about how to extend the metrics? The FastAPI / Starlette ones are a very good basis, but I'd like to add some related to my app.

For example I have a Postgres database, I want to add the number of active subscription in the metrics, aka the result of a SELECT * FROM subscription WHERE active == 1 that would show up as starlette_subscription_active{app_name="starlette",query="SELECT * FROM subscription WHERE active == 1"} 1382

Type error on version 0.8.0

My pytest tests fails since I upgraded starlette_exporter to 0.8.0 (from 0.7.0).
I use FastAPI.

Here the error:

            labels = [method, path, status_code, self.app_name]
    
            self.request_count.labels(*labels).inc()
>           self.request_time.labels(*labels).observe(end - begin)
E           TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

venv/lib/python3.7/site-packages/starlette_exporter/middleware.py:105: TypeError

It seems that there are cases where end is never set.

The full trace:

venv/lib/python3.7/site-packages/requests/sessions.py:590: in post
    return self.request('POST', url, data=data, json=json, **kwargs)
venv/lib/python3.7/site-packages/starlette/testclient.py:431: in request
    json=json,
venv/lib/python3.7/site-packages/requests/sessions.py:542: in request
    resp = self.send(prep, **send_kwargs)
venv/lib/python3.7/site-packages/requests/sessions.py:655: in send
    r = adapter.send(request, **kwargs)
venv/lib/python3.7/site-packages/starlette/testclient.py:243: in send
    raise exc from None
venv/lib/python3.7/site-packages/starlette/testclient.py:240: in send
    loop.run_until_complete(self.app(scope, receive, send))
venv/lib/python3.7/site-packages/nest_asyncio.py:70: in run_until_complete
    return f.result()
/usr/lib/python3.7/asyncio/futures.py:178: in result
    raise self._exception
/usr/lib/python3.7/asyncio/tasks.py:223: in __step
    result = coro.send(None)
venv/lib/python3.7/site-packages/fastapi/applications.py:199: in __call__
    await super().__call__(scope, receive, send)
venv/lib/python3.7/site-packages/starlette/applications.py:112: in __call__
    await self.middleware_stack(scope, receive, send)
venv/lib/python3.7/site-packages/starlette/middleware/errors.py:181: in __call__
    raise exc from None
venv/lib/python3.7/site-packages/starlette/middleware/errors.py:159: in __call__
    await self.app(scope, receive, _send)

`filter_unhandled_paths` filters out `Mount`ed routes

Here's a minimal reproduction:

from starlette.applications import Starlette
from starlette_exporter import PrometheusMiddleware, handle_metrics
from starlette.routing import Mount, Route
from starlette.responses import PlainTextResponse


async def hello_world(request):
    return PlainTextResponse("hi")


hello_routes = [Route("/world", hello_world)]

app = Starlette(
    debug=True,
    routes=[
        Mount("/hello", routes=hello_routes, name="hello"),
    ],
)
app.add_middleware(PrometheusMiddleware, filter_unhandled_paths=True)
app.add_route("/metrics", handle_metrics)

/metrics shows requests to /hello/world when filter_unhandled_paths is False, but not when it's set to True. I'm on starlette_exporter v0.8.2, starlette v0.14.2, python v3.8.10.

[Feature Request] Group unhandled paths instead of filter

Hi,
I just ran into issue #79 and I'm glad to see the new defaults. It definitely makes sense to me.

I'm just a bit worried about filter_unhandled_paths=True throwing away information about some 404 requests. I think we are missing out on some potentially key information about what kinds of requests are being sent to the server. Instead of filtering, wouldn't it make sense to group these requests under some others/unhandled value?

If this sounds interesting, there could be a new flag group_unhandled_paths created. If set to True, it could group all unhandled paths into some common value for these unhandled requests. The other flag name filter_unhandled_paths does not make much sense to implement this kind of logic, so it makes more sense to me to create a new one.

If there is agreement, I would be happy to create a PR.

filter_unhandled_paths broken with root_path set via Uvicorn

I noticed that setting the root_path via Uvicorn breaks all metrics having a path label when filter_unhandled_paths is enabled.

The FastAPI docs suggest that setting root_path via uvicorn is equivalent to the FastAPI attribute of the same name

This exporter however seems to make the assumption that it's set via FastAPI. I believe this is the problematic bit of code.

if hasattr(app, "root_path"):

In my case, I simply switched to setting root_path via the FastAPI attribute.

I'd be happy to submit a fix, but I'm not sure what the best way to inspect the uvicorn settings parameters in this context. Any tips would be appreciated.

Thanks!

Exemplar with data from header?

Hi,

I would like to add a trace id from a request as an exemplar, if that makes sense.

The exemplar callback however seems to be called w/o arguments in

extra["exemplar"] = self.exemplars()

Would it be possible, to pass the request to the exemplar callback, allowing for

def my_exemplars(request: Request) -> dict[str, str]:
    return {"trace_id": request.headers.get("Trace-Id", "")}

...
exemplars=my_exemplars
...

or even have individual fields be callbacks like it is done with labels in

async def _default_label_values(self, request: Request):

exemplars=lambda: {"trace_id": from_header("Trace-Id")}

?

Any help would be greatly appreciated.

PS: I'm new to exemplars and might be misinterpreting something. :)

Duplicated timeseries in Collector Registry

Hi. I'm trying to use this package to store metrics for the model that I am serving using FastAPI. While I was able to reproduce the default metrics, I wanted to explore making my own custom metrics. But I encountered this error when I tried.

ValueError: Duplicated timeseries in CollectorRegistry: {'predict_created', 'predict_total'}

I've scoured the net for a solution, but none was able to solve my problem. Please do help. The following is my source code.

from starlette_exporter import PrometheusMiddleware, handle_metrics
from prometheus_client import Counter

TOTAL_PREDICT_REQUEST = Counter("predict", "Count of predicts", ("from",))

app = FastAPI()
image_classifier = ImageClassifier()

"""Add instrumentation"""
app.add_middleware(
    PrometheusMiddleware,
    app_name = "fastapi",
    prefix = "fastapi",
    filter_unhandled_paths = True
    )
app.add_route("/metrics", handle_metrics)

@app.get("/")
def home():
    return "Hello!"


@app.post("/predict", response_model=ResponseDataModel)
async def predict(file: UploadFile = File(...)):

    TOTAL_PREDICT_REQUEST.labels(endpoint="/predict").inc()

    if file.content_type.startswith("image/") is False:
        raise HTTPException(
            status_code=400, detail=f"File '{file.filename}' is not an image."
        )

    try:
        contents = await file.read()
        image = Image.open(io.BytesIO(contents)).convert("RGB")

        predicted_class = image_classifier.predict(image)

        logging.info(f"Predicted Class: {predicted_class}")

        return {
            "filename": file.filename,
            "content_type": file.content_type,
            "likely_class": predicted_class,
        }

    except Exception as error:
        logging.exception(error)
        e = sys.exc_info()[1]
        raise HTTPException(status_code=500, detail=str(e))


if __name__ == "__main__":
    uvicorn.run("app.main:app", host="127.0.0.1", port=8000, log_level="info")

Full traceback:

Traceback (most recent call last):
  File "d:/CertifAI/deployment-course-labs/day_4/model_monitoring/app/main.py", line 71, in <module>
    uvicorn.run("app.main:app", host="127.0.0.1", port=8000, log_level="info")
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\uvicorn\main.py", line 386, in run
    server.run()
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\uvicorn\server.py", line 49, in run
    loop.run_until_complete(self.serve(sockets=sockets))
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\uvicorn\server.py", line 56, in serve
    config.load()
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\uvicorn\config.py", line 308, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\uvicorn\importer.py", line 20, in import_from_string      
    module = importlib.import_module(module_str)
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "d:\certifai\deployment-course-labs\day_4\model_monitoring\app\main.py", line 19, in <module>
    TOTAL_PREDICT_REQUEST = Counter("predict", "Count of predicts", ("from",))
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\prometheus_client\metrics.py", line 107, in __init__      
    registry.register(self)
  File "C:\Users\USER\miniconda3\envs\day4-demo\lib\site-packages\prometheus_client\registry.py", line 27, in register      
    raise ValueError(
ValueError: Duplicated timeseries in CollectorRegistry: {'predict_created', 'predict_total'}

filter_unhandled_paths does not work when root_path is set with fastapi

Our team has a common ingress, https://base.ingess/
My service is named my-service: https://base.ingess/my-service

I need my openapi.json to work normally,
so I set this option for FastAPI:

app = FastAPI(root_path="/my-service")

With this, openapi works normally.

However, when I turn on the filter_unhandled_paths option,

I find that there are no metrics at https://base.ingess/my-service/metrics .
After printing some logs in the matches method in usr/local/lib/python3.10/site-packages/starlette/routing.py (print(scope["path"], self.path_regex)):

...
/my-service/some-endpoint/ re.compile('^/some-endpoint$')
...

This mismatch causes the filter to drop the request.
Does anyone have a solution or fix for this?

I think it can be made to work by replacing

"path": scope.get("root_path", "") + scope.get("path"),

with

"path": scope.get("path"),
at line 171 in starlette_exporter/middleware.py

Wrong data using starlette_exporter with FastAPI BackgroundTasks on starlette_request_duration_seconds_bucket and sum

I added a custom endpoint with a single background task, which just inserts a log record into the database.

The average response time is about 100-150 ms.

But the exporter shows that this request took more than 1.8 seconds.

My import is: from starlette_exporter import PrometheusMiddleware, handle_metrics

It seems to me that the exporter additionally waits for the background task, instead of measuring the time of the request only.

No info on Python Version

Hi there. I would like to contribute to this repo but am unclear on what python version is used for development.

Starlette AuthenticationBackend exceptions disappear

Looks like PrometheusMiddleware might misbehave when Starlette AuthenticationMiddleware raises an exception -- for example because a database or some other required resource is down.

It looks like the finally block has a return statement, and the effect is that:

  • other middleware won't see the exception
  • the Starlette default error handler does not see the error
  • the ASGI server (e.g. uvicorn) needs to catch the invalid behavior ("ASGI callable returned without starting response")

This happens because there is a return statement in finally block:

finally:
    # Decrement 'requests_in_progress' gauge after response sent
    self.requests_in_progress.labels(
        method, self.app_name, *default_labels
    ).dec()
    if self.filter_unhandled_paths or self.group_paths:
        grouped_path = self._get_router_path(scope)
        # filter_unhandled_paths removes any requests without mapped endpoint from the metrics.
        if self.filter_unhandled_paths and grouped_path is None:
            return

POC, more or less the same style as Starlette docs + unit tests:

from starlette.applications import Starlette
from starlette.authentication import (
    AuthCredentials,
    AuthenticationBackend,
    AuthenticationError,
    SimpleUser,
    UnauthenticatedUser,
    requires,
)
from starlette.middleware import Middleware
from starlette.middleware.authentication import AuthenticationMiddleware
from starlette.responses import JSONResponse
from starlette.routing import Route
from starlette_exporter import PrometheusMiddleware


class PocAuthBackend(AuthenticationBackend):
    async def authenticate(self, request):
        if "Authorization" not in request.headers:
            return None

        auth_scheme, _, auth_token = request.headers["Authorization"].partition(" ")
        if auth_scheme != "token":
            raise AuthenticationError("Invalid authorization")

        scopes: list[str] = []
        if auth_token == "beef":
            user = SimpleUser(username="bobby")
            scopes = ["authenticated"]
        elif "raise" in auth_token:
            # Pretend that actual token check failed (e.g. DB connection error)
            raise ValueError("Failed")
        else:
            user = UnauthenticatedUser()
            scopes = []

        return AuthCredentials(scopes), user


@requires("authenticated")
async def hello(request):
    return JSONResponse(
        {
            "authenticated": request.user.is_authenticated,
            "user": request.user.display_name,
        },
    )


app = Starlette(
    routes=[
        Route("/hello", hello),
    ],
    middleware=[
        Middleware(
            PrometheusMiddleware,
            app_name="poc",
            prefix="poc",
            group_paths=True,
            filter_unhandled_paths=True,
        ),
        Middleware(
            AuthenticationMiddleware,
            backend=PocAuthBackend(),
        ),
    ],
)

Running the server:

$ uvicorn expoc:app

Sequence of requests:

$ curl localhost:8000/hello
Forbidden
$ curl -H "Authorization: token beef" localhost:8000/hello
{"authenticated":true,"user":"bobby"}
$ curl -H "Authorization: token dead" localhost:8000/hello       
Forbidden
$ curl -H "Authorization: token raise" localhost:8000/hello
Internal Server Error

Server logs:

INFO:     127.0.0.1:48628 - "GET /hello HTTP/1.1" 403 Forbidden
INFO:     127.0.0.1:55530 - "GET /hello HTTP/1.1" 200 OK
INFO:     127.0.0.1:42388 - "GET /hello HTTP/1.1" 403 Forbidden
ERROR:    ASGI callable returned without starting response.
INFO:     127.0.0.1:42392 - "GET /hello HTTP/1.1" 500 Internal Server Error

The error logging comes from uvicorn, meaning that Starlette error handling did not see the exception. Also any other middleware like Sentry would not be able to see it.

group_paths does not work for method `OPTIONS`

I think something goes wrong when the request method is OPTIONS, as shown below

starlette_requests_total{app_name="starlette",method="GET",path="/api/v1/datasets",status_code="200"} 92.0
starlette_requests_total{app_name="starlette",method="GET",path="/api/v1/tasks/nm",status_code="200"} 53.0
starlette_requests_total{app_name="starlette",method="GET",path="/api/v1/datasets/{did}",status_code="200"} 22.0
starlette_requests_total{app_name="starlette",method="GET",path="/api/v1/datasets/{did}/stats",status_code="200"} 7.0
starlette_requests_total{app_name="starlette",method="DELETE",path="/api/v1/datasets/{did}",status_code="200"} 4.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/567738610",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/2003501303",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/921436406",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/799719666",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/1602879743",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/150110457",status_code="200"} 1.0
starlette_requests_total{app_name="starlette",method="OPTIONS",path="/api/v1/datasets/1256292570",status_code="200"} 1.0

My version is starlette-exporter==0.11.0 and I add prometheus middleware

app.add_middleware(PrometheusMiddleware, group_paths=True)
app.add_route("/metrics", handle_metrics)

Is there any problem? Please take a look

multiprocess using the wrong registry

Hi, I believe that when using prometheus_multiproc_dir, the results are not generated correctly because it uses the wrong registry.

In your __init__.py:

return Response(generate_latest(), status_code=200, headers=headers)

should be

return Response(generate_latest(registry), status_code=200, headers=headers)

Otherwise the generated metrics use the default registry rather than the multiprocess one, resulting in metrics being generated per process instead of accumulated for the whole app.

Mount has no attribute endpoint

If you are using starlette Mounts (e.g. via FastAPI StaticFiles):

app.mount("/static", StaticFiles(directory="some/local/path"), name="static")

starlette_exporter results in the following error log:

ERROR:    'Mount' object has no attribute 'endpoint'

due to this code:

try:
    path = [
        route
        for route in request.scope["router"].routes
        if route.endpoint == request.scope["endpoint"]
    ][0].path
except Exception as e:
    logger.error(e)
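A defensive lookup could skip Mount-like routes that lack an endpoint attribute. A sketch; match_path is a hypothetical helper, not the library's code:

```python
def match_path(routes, endpoint):
    """Return the path of the route whose endpoint matches, skipping
    routes (e.g. starlette Mounts) that have no `endpoint` attribute."""
    for route in routes:
        if getattr(route, "endpoint", None) is endpoint:
            return route.path
    return None
```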

starlette_request_duration_seconds_bucket le bug

It seems there is a bug in starlette-exporter.

Here is a piece of my code:

app = fastapi.FastAPI(docs_url="/swagger", redoc_url="/swagger2")
app.add_middleware(PrometheusMiddleware, app_name="listener", skip_paths=["/metrics"])
app.add_route("/metrics", handle_metrics)
.....
server_config = uvicorn.Config(
    app=app,
    host=props["webserver.host"],
    port=int(props["webserver.port"]),
    loop="asyncio",
)
server = uvicorn.Server(server_config)
await server.serve()

This works, metrics are exposed in /metrics path, but after each request to any endpoint all these metrics are increased by +1:

starlette_request_duration_seconds_bucket{app_name="listener",le="0.01",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.025",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.05",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.075",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.1",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.25",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.5",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="0.75",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="1.0",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="2.5",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="5.0",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="7.5",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="10.0",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0
starlette_request_duration_seconds_bucket{app_name="listener",le="+Inf",method="DELETE",path="/remove_all_connections",status_code="401"} 1.0

I believe that for each request only one metric should be increased. Maybe I'm doing something wrong....

FastAPI version: 0.110.0
starlette-exporter: v0.21.0
I'm using routers in FastAPI: app.include_router(private_router.router)

Options to set basename

Hi again,

Would you mind adding an option to change the base metric name, e.g. replacing starlette_requests_total with my_app_requests_total? Otherwise, if I keep using this lib, I'll mix up data between different services.

Best regards,
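For reference, the prefix option now documented in the README could cover this; a sketch of the wiring (my_app is a placeholder name):

```python
from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = FastAPI()
# `prefix` renames the exported metric families, e.g. my_app_requests_total
app.add_middleware(PrometheusMiddleware, app_name="my_app", prefix="my_app")
app.add_route("/metrics", handle_metrics)
```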

PrometheusMiddleware raises wrong exception

Hi
If a route handler fails, PrometheusMiddleware raises this kind of exception:

...
  File "/app/common/server.py", line 55, in middleware
    response = await call_next(request)
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 45, in call_next
    task.result()
  File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.9/site-packages/starlette_exporter/middleware.py", line 105, in __call__
    self.request_time.labels(*labels).observe(end - begin)
TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

It is the result of end not being initialized on this line
https://github.com/stephenhillier/starlette_exporter/blob/master/starlette_exporter/middleware.py#L110
when the wrapped_send function is never called, due to an exception on this line
https://github.com/stephenhillier/starlette_exporter/blob/master/starlette_exporter/middleware.py#L86
Apparently wrapped_send is not called because starlette raises the exception from the route handler function before calling the sender:
https://github.com/encode/starlette/blob/master/starlette/exceptions.py#L82
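One way to avoid the NoneType error is to take the end timestamp in a finally block instead of relying on the send wrapper. A minimal sketch; timed_call and observe are illustrative names, not the library's API:

```python
import time


async def timed_call(app, scope, receive, send, observe):
    """Time the wrapped ASGI app, recording a duration even when the
    app raises before the send wrapper ever runs."""
    begin = time.perf_counter()
    try:
        await app(scope, receive, send)
    finally:
        # runs on success *and* on exception, so the end time is always taken
        observe(time.perf_counter() - begin)
```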

Need requests_inprogress metric as a default metric

This is one of the basic metrics for monitoring requests made to a server. It gives additional insight into the kind of load the server is handling at any given moment.

This is the feature that I plan to work on for this package.

Add option to use custom buckets

It would be nice if we could specify the default buckets for the histogram like this:

buckets = (.5, 1, 3, 5, 10, 15, 20, 30, 45, 60, 80, 100, 125, 150, 175, 200, 250, 300, 400)
app.add_middleware(PrometheusMiddleware, group_paths=True, buckets=buckets)
app.add_route("/metrics", handle_metrics)

prometheus_flask_exporter provides something similar: https://github.com/rycus86/prometheus_flask_exporter/blob/master/prometheus_flask_exporter/__init__.py

Is this something I could put a pull request in for?

Support for multiple gunicorn workers.

I just discovered this problem when stress testing my FastAPI app. When I send 1000 requests, I see them all in the logs, and the stress-test tool also shows 1000 successful 200 responses, but Prometheus shows only around 25% of the requests in the metrics. It looks like it scrapes from only the first (or a random) worker. Is it possible to solve this problem with starlette_exporter? Do you have any ideas for a workaround?

Linking the original issue: issue

Feature request: Support async label callback

It would be nice if we could define async callback functions to produce metric labels. If the request body needs to be parsed in the callback, we need to await it, since starlette's body() is a coroutine. I inspected the code and it seems trivial to add. Thanks!

Counter decreasing

This might be a deployment issue but any info could help.

I'm using starlette_exporter with a FastAPI app in k8s. Currently I have just a single pod running, but the metrics that get scraped keep flickering between two values.

For example:

starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 19.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 11.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 11.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 11.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 19.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 19.0
starlette_requests_total{method="GET", path="/hello/{hello_id}", status_code="200"} 11.0

Where would these multiple states be stored? Could this be a problem with multithreading in FastAPI?

/metrics and /docs (again?)

I am running FastAPI (0.79.1), starlette (0.19.1), starlette_exporter (0.14.0), uvicorn (0.17.6), and prometheus_client (0.14.1). When I set root_path to fix my /docs (it is behind nginx), most of my application metrics disappeared from /metrics. I see this was supposed to be fixed in v0.12.0, so perhaps I am missing something? Thanks for any help or insight.

Consider changing `group_paths` default value to `True`

group_paths: setting this to True will populate the path label using named parameters (if any) in the router path, e.g. /api/v1/items/{item_id}. This will group requests together by endpoint (regardless of the value of item_id). This option may come with a performance hit for larger routers. Default is False, which will result in separate metrics for different URLs (e.g., /api/v1/items/42, /api/v1/items/43, etc.).

This seems like a really bad default. 99% of the time this is not what you want, and the consequence is really bad: label explosion.

gunicorn doesn't work

I can generate correct Prometheus metrics with gunicorn when it has only one worker (I checked that the gauge is correct).
But when I increase the workers to 2, the gauge is no longer correct.

When I add this environment variable, the metrics page comes back empty:
export PROMETHEUS_MULTIPROC_DIR=/home/ubuntu/ap/tmp

I also tried adding the code below, but it still shows empty. Any suggestions?

from prometheus_client import multiprocess

def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
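For what it's worth, the usual multiprocess checklist (per prometheus_client's documentation) is: export PROMETHEUS_MULTIPROC_DIR to an existing, empty directory before gunicorn starts, add the child_exit hook above to gunicorn.conf.py, and build the scrape response from a MultiProcessCollector rather than the default registry. A sketch of that last part (assumes the env var is set; multiproc_metrics is an illustrative name):

```python
from prometheus_client import CONTENT_TYPE_LATEST, CollectorRegistry, generate_latest, multiprocess
from starlette.responses import Response


def multiproc_metrics(request):
    # build a fresh registry that aggregates all workers' mmap'd files
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return Response(generate_latest(registry), media_type=CONTENT_TYPE_LATEST)

# app.add_route("/metrics", multiproc_metrics)
```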

Feature request: ignoring paths not mapped in the app (`filter_unhandled_paths`)

Hey! I would like to add a feature to filter out paths that are not mapped by the application. Currently, a malicious actor could DoS the underlying Prometheus DB by generating many requests to unmapped paths that result in 404s.

My suggestion for the parameter name is filter_unhandled_paths -- this will make the starlette_exporter API-level compatible with starlette-prometheus.
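If added, the option might be wired up like this (a sketch of the proposed API, not a shipped feature):

```python
from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics

app = FastAPI()
# proposed flag: drop metrics for requests that match no route (404s)
app.add_middleware(PrometheusMiddleware, filter_unhandled_paths=True)
app.add_route("/metrics", handle_metrics)
```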

Better naming

Awaiting proposals for naming the optional_metrics.
We currently have two optional metrics:
request_response_body_size => how many bytes the server sends back to the client.
client_receive_body_size => how much data the server receives from the client (POST/PUT).

It would be great to have better names.

Custom labels

How do I define custom labels that will be applied to every starlette_exporter metric?
For example, adding the user_id (email) extracted from a JWT token or from any part of the request/session.

It could be achieved with a callback or by monkey-patching.

Does that make sense?
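The labels option described in the README accepts per-request callbacks, which could cover this. A sketch; the x-user-id header is a stand-in for whatever the JWT or session actually carries:

```python
from fastapi import FastAPI
from starlette_exporter import PrometheusMiddleware, handle_metrics


def user_id(request):
    # stand-in: a real app would decode the JWT or read the session here
    return request.headers.get("x-user-id", "unknown")


app = FastAPI()
# static values and per-request callbacks can be mixed in one dict
app.add_middleware(PrometheusMiddleware, labels={"env": "prod", "user": user_id})
app.add_route("/metrics", handle_metrics)
```

Note that a per-user label can explode cardinality, so it may be worth restricting it to a bounded set of values.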

Exposing metrics not hosted via starlette

Often in a production application one might want to hide the /metrics endpoint from public traffic. Using the methods listed in the README, one would have to explicitly mask the /metrics route within a reverse proxy and bind to two different ports, which can be non-trivial (see encode/uvicorn#571 for context).

In my experience I've found it easier to just expose the metrics on a separate port (e.g. 9090) via prometheus_client's start_http_server function, but I'm not sure whether this is supported by starlette-exporter. That way the metrics requests are served completely internally (and, for example, can only be exposed inside a kubernetes cluster). While probably not necessary, to be clean I also integrated the server used by start_http_server with starlette's lifespan events (otherwise I'm worried, for example, that the socket won't unbind for some period of time when hot reloading).
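A sketch of what that could look like, assuming prometheus_client's start_http_server and the lifespan context manager supported by newer FastAPI versions (port 9090 is arbitrary):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from prometheus_client import start_http_server
from starlette_exporter import PrometheusMiddleware


@asynccontextmanager
async def lifespan(app: FastAPI):
    # serve metrics on an internal port, separate from app traffic
    start_http_server(9090)
    yield


app = FastAPI(lifespan=lifespan)
app.add_middleware(PrometheusMiddleware)
```

Recent prometheus_client versions return the server and its thread from start_http_server, which could be used to shut it down cleanly on exit.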

My questions are (edit: updated questions):

  1. Is this supported/possible?
  2. Would an example that uses start_http_server be accepted into the README?
  3. Would code that calls start_http_server handling lifespan hooks be accepted as a contribution?

cc @NargiT

Blocking Calls when in Multiprocess mode

I am still getting a handle on how Python async works, but I have a question.

Since starlette_exporter depends on client_python, which is not generally async, am I blocking the event loop when I enable client_python's multiprocess mode?

I can see it uses the file system here to store/share its metrics between workers, and in doing so makes regular file open calls.

Do these open calls block the event loop?
