fastapi-redis-cache


Features

  • Cache response data for async and non-async path operation functions.
  • Lifetime of cached data is configured separately for each API endpoint.
  • Requests with Cache-Control header containing no-cache or no-store are handled correctly (all caching behavior is disabled).
  • Requests with If-None-Match header will receive a response with status 304 NOT MODIFIED if ETag for requested resource matches header value.

Installation

pip install fastapi-redis-cache

Usage

Initialize Redis

Create a FastApiRedisCache instance when your application starts by defining an event handler for the "startup" event as shown below:

import os

from fastapi import FastAPI, Request, Response
from fastapi_redis_cache import FastApiRedisCache, cache
from sqlalchemy.orm import Session

LOCAL_REDIS_URL = "redis://127.0.0.1:6379"

app = FastAPI(title="FastAPI Redis Cache Example")

@app.on_event("startup")
def startup():
    redis_cache = FastApiRedisCache()
    redis_cache.init(
        host_url=os.environ.get("REDIS_URL", LOCAL_REDIS_URL),
        prefix="myapi-cache",
        response_header="X-MyAPI-Cache",
        ignore_arg_types=[Request, Response, Session]
    )

After creating the instance, you must call the init method. The only required argument for this method is the URL for the Redis database (host_url). All other arguments are optional:

  • host_url (str) — Redis database URL. (Required)
  • prefix (str) — Prefix to add to every cache key stored in the Redis database. (Optional, defaults to None)
  • response_header (str) — Name of the custom header field used to identify cache hits/misses. (Optional, defaults to X-FastAPI-Cache)
  • ignore_arg_types (List[Type[object]]) — Cache keys are created (in part) by combining the name and value of each argument used to invoke a path operation function. If any of the arguments have no effect on the response (such as a Request or Response object), including their type in this list will ignore those arguments when the key is created. (Optional, defaults to [Request, Response])
    • The example shown here includes the sqlalchemy.orm.Session type. If your project uses SQLAlchemy as a dependency (as demonstrated in the FastAPI docs), you should include Session in ignore_arg_types so that cache keys are created correctly (More info).

@cache Decorator

Decorating a path function with @cache enables caching for the endpoint. Response data is only cached for GET operations; decorating path functions for other HTTP method types will have no effect. If no arguments are provided, responses are set to expire after one year, which, historically, is the correct way to mark data that "never expires".

# WILL NOT be cached
@app.get("/data_no_cache")
def get_data():
    return {"success": True, "message": "this data is not cacheable, for... you know, reasons"}

# Will be cached for one year
@app.get("/immutable_data")
@cache()
async def get_immutable_data():
    return {"success": True, "message": "this data can be cached indefinitely"}

Response data for the API endpoint at /immutable_data will be cached by the Redis server. Log messages are written to standard output whenever a response is added to or retrieved from the cache:

INFO:fastapi_redis_cache:| 04/21/2021 12:26:26 AM | CONNECT_BEGIN: Attempting to connect to Redis server...
INFO:fastapi_redis_cache:| 04/21/2021 12:26:26 AM | CONNECT_SUCCESS: Redis client is connected to server.
INFO:fastapi_redis_cache:| 04/21/2021 12:26:34 AM | KEY_ADDED_TO_CACHE: key=api.get_immutable_data()
INFO:     127.0.0.1:61779 - "GET /immutable_data HTTP/1.1" 200 OK
INFO:fastapi_redis_cache:| 04/21/2021 12:26:45 AM | KEY_FOUND_IN_CACHE: key=api.get_immutable_data()
INFO:     127.0.0.1:61779 - "GET /immutable_data HTTP/1.1" 200 OK

The log messages show two successful (200 OK) responses to the same request (GET /immutable_data). The first request executed the get_immutable_data function and stored the result in Redis under key api.get_immutable_data(). The second request did not execute the get_immutable_data function, instead the cached result was retrieved and sent as the response.

In most situations, response data must expire in a much shorter period of time than one year. Using the expire parameter, you can specify the number of seconds before the cached data is deleted:

# Will be cached for thirty seconds
@app.get("/dynamic_data")
@cache(expire=30)
def get_dynamic_data(request: Request, response: Response):
    return {"success": True, "message": "this data should only be cached temporarily"}

NOTE! expire can be either an int value or timedelta object. When the TTL is very short (like the example above) this results in a decorator that is expressive and requires minimal effort to parse visually. For durations an hour or longer (e.g., @cache(expire=86400)), IMHO, using a timedelta object is much easier to grok (@cache(expire=timedelta(days=1))).
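As a quick illustration of the note above (plain Python, no library required), an int TTL and a timedelta TTL describe the same duration:

```python
from datetime import timedelta

# 86400 seconds and timedelta(days=1) are the same TTL; the timedelta
# form is easier to read at a glance for longer durations.
ONE_DAY_IN_SECONDS = 86400
assert timedelta(days=1).total_seconds() == ONE_DAY_IN_SECONDS

# Equivalent decorator forms (shown as comments; per the note above,
# @cache accepts either an int or a timedelta):
# @cache(expire=86400)
# @cache(expire=timedelta(days=1))
```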

Response Headers

A response from the /dynamic_data endpoint showing all header values is given below:

$ http "http://127.0.0.1:8000/dynamic_data"
  HTTP/1.1 200 OK
  cache-control: max-age=29
  content-length: 72
  content-type: application/json
  date: Wed, 21 Apr 2021 07:54:33 GMT
  etag: W/-5480454928453453778
  expires: Wed, 21 Apr 2021 07:55:03 GMT
  server: uvicorn
  x-fastapi-cache: Hit

  {
      "message": "this data should only be cached temporarily",
      "success": true
  }
  • The x-fastapi-cache header field indicates that this response was found in the Redis cache (a.k.a. a Hit). The only other possible value for this field is Miss.
  • The expires field and max-age value in the cache-control field indicate that this response will be considered fresh for 29 seconds. This is expected since expire=30 was specified in the @cache decorator.
  • The etag field is an identifier that is created by converting the response data to a string and applying a hash function. If a request containing the if-none-match header is received, any etag value(s) included in the request will be used to determine if the data requested is the same as the data stored in the cache. If they are the same, a 304 NOT MODIFIED response will be sent. If they are not the same, the cached data will be sent with a 200 OK response.

These header fields are used by your web browser's cache to avoid sending unnecessary requests. After receiving the response shown above, if a user requested the same resource before the expires time, the browser wouldn't send a request to the FastAPI server. Instead, the cached response would be served directly from disk.

Of course, this assumes that the browser is configured to perform caching. If the browser sends a request with the cache-control header containing no-cache or no-store, the cache-control, etag, expires, and x-fastapi-cache response header fields will not be included and the response data will not be stored in Redis.
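As a rough sketch of the ETag scheme described above (the library's actual hash function and formatting may differ; make_etag is a hypothetical name, not the library's API):

```python
import hashlib
import json

def make_etag(response_data) -> str:
    # Convert the response data to a string and apply a hash function,
    # as described above. The exact algorithm here is an assumption.
    serialized = json.dumps(response_data, sort_keys=True)
    return 'W/"' + hashlib.sha1(serialized.encode()).hexdigest() + '"'

data = {"success": True, "message": "this data should only be cached temporarily"}
etag = make_etag(data)

# Identical data always hashes to the same ETag, so comparing it with an
# if-none-match header value detects whether the client's copy is current.
assert make_etag(data) == etag
assert make_etag({"success": False}) != etag
```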

Pre-defined Lifetimes

The decorators listed below define several common durations and can be used in place of the @cache decorator:

  • @cache_one_minute
  • @cache_one_hour
  • @cache_one_day
  • @cache_one_week
  • @cache_one_month
  • @cache_one_year

For example, instead of @cache(expire=timedelta(days=1)), you could use:

from fastapi_redis_cache import cache_one_day

@app.get("/cache_one_day")
@cache_one_day()
def partial_cache_one_day(response: Response):
    return {"success": True, "message": "this data should be cached for 24 hours"}

If a duration that you would like to use throughout your project is missing from the list, you can easily create your own:

from functools import partial, update_wrapper
from fastapi_redis_cache import cache

ONE_HOUR_IN_SECONDS = 3600

cache_two_hours = partial(cache, expire=ONE_HOUR_IN_SECONDS * 2)
update_wrapper(cache_two_hours, cache)

Then, simply import cache_two_hours and use it to decorate your API endpoint path functions:

@app.get("/cache_two_hours")
@cache_two_hours()
def partial_cache_two_hours(response: Response):
    return {"success": True, "message": "this data should be cached for two hours"}

Cache Keys

Consider the /get_user API route defined below. This is the first path function we have seen where the response depends on the value of an argument (id: int). This is a typical CRUD operation where id is used to retrieve a User record from a database. The API route also includes a dependency that injects a Session object (db) into the function, per the instructions from the FastAPI docs:

@app.get("/get_user", response_model=schemas.User)
@cache(expire=3600)
def get_user(id: int, db: Session = Depends(get_db)):
    return db.query(models.User).filter(models.User.id == id).first()

In the Initialize Redis section of this document, the FastApiRedisCache.init method was called with ignore_arg_types=[Request, Response, Session]. Why is it necessary to include Session in this list?

Before we can answer that question, we must understand how a cache key is created. If the following request was received: GET /get_user?id=1, the cache key generated would be myapi-cache:api.get_user(id=1).

The source of each value used to construct this cache key is given below:

  1. The optional prefix value provided as an argument to the FastApiRedisCache.init method => "myapi-cache".
  2. The module containing the path function => "api".
  3. The name of the path function => "get_user".
  4. The name and value of all arguments to the path function EXCEPT for arguments with a type that exists in ignore_arg_types => "id=1".

Since Session is included in ignore_arg_types, the db argument was not included in the cache key when Step 4 was performed.
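The four steps above can be sketched as a small helper function (purely illustrative; build_cache_key is a hypothetical name, not the library's API):

```python
def build_cache_key(prefix, func, ignore_arg_types, **kwargs):
    # Step 4: drop any argument whose type is listed in ignore_arg_types.
    kept = {
        name: value
        for name, value in kwargs.items()
        if type(value) not in ignore_arg_types
    }
    arg_string = ",".join(f"{name}={value}" for name, value in kept.items())
    # Steps 1-3: prefix, module name, and function name, then the arguments.
    return f"{prefix}:{func.__module__}.{func.__name__}({arg_string})"

class Session:  # stand-in for sqlalchemy.orm.Session
    pass

def get_user(id, db):
    ...

# With Session in ignore_arg_types, only id=1 appears in the key, e.g.
# "myapi-cache:api.get_user(id=1)" when get_user lives in module "api".
key = build_cache_key("myapi-cache", get_user, [Session], id=1, db=Session())
assert key.endswith(".get_user(id=1)")
```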

If Session had not been included in ignore_arg_types, caching would be completely broken. To understand why this is the case, see if you can figure out what is happening in the log messages below:

INFO:uvicorn.error:Application startup complete.
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:12 PM | KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11b9fe550>)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:15 PM | KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11c7f73a0>)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:17 PM | KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11c7e35e0>)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK

The log messages indicate that three requests were received for the same endpoint, with the same arguments (GET /get_user?id=1). However, the cache key that is created is different for each request:

KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11b9fe550>
KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11c7f73a0>
KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1,db=<sqlalchemy.orm.session.Session object at 0x11c7e35e0>

The value of each argument is added to the cache key by calling str(arg). The db object includes the memory location when converted to a string, causing the same response data to be cached under three different keys! This is obviously not what we want.

The correct behavior (with Session included in ignore_arg_types) is shown below:

INFO:uvicorn.error:Application startup complete.
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:12 PM | KEY_ADDED_TO_CACHE: key=myapi-cache:api.get_user(id=1)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:12 PM | KEY_FOUND_IN_CACHE: key=myapi-cache:api.get_user(id=1)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK
INFO:fastapi_redis_cache.client: 04/23/2021 07:04:12 PM | KEY_FOUND_IN_CACHE: key=myapi-cache:api.get_user(id=1)
INFO:     127.0.0.1:50761 - "GET /get_user?id=1 HTTP/1.1" 200 OK

Now, every request for the same id generates the same key value (myapi-cache:api.get_user(id=1)). As expected, the first request adds the key/value pair to the cache, and each subsequent request retrieves the value from the cache based on the key.

Cache Keys, Pt. 2

What about this situation? You create a custom dependency for your API that performs input validation, but you can't ignore it because it does have an effect on the response data. There's a simple solution for that, too.

Here is an endpoint from one of my projects:

@router.get("/scoreboard", response_model=ScoreboardSchema)
@cache()
def get_scoreboard_for_date(
    game_date: MLBGameDate = Depends(), db: Session = Depends(get_db)
):
    return get_scoreboard_data_for_date(db, game_date.date)

The game_date argument is an MLBGameDate type. This custom type parses the value from the query string to a date and determines whether the parsed date is valid by checking if it falls within a certain range. The implementation of MLBGameDate is given below:

class MLBGameDate:
    def __init__(
        self,
        game_date: str = Query(..., description="Date as a string in YYYYMMDD format"),
        db: Session = Depends(get_db),
    ):
        try:
            parsed_date = parse_date(game_date)
        except ValueError as ex:
            raise HTTPException(status_code=400, detail=ex.message)
        result = Season.is_date_in_season(db, parsed_date)
        if result.failure:
            raise HTTPException(status_code=400, detail=result.error)
        self.date = parsed_date
        self.season = convert_season_to_dict(result.value)

    def __str__(self):
        return self.date.strftime("%Y-%m-%d")

Please note the __str__ method that overrides the default behavior. This way, instead of <MLBGameDate object at 0x11c7e35e0>, the value will be formatted as, for example, 2019-05-09. You can use this strategy whenever an argument has an effect on the response data but converting it to a string produces a value containing the object's memory location.
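A minimal standalone demonstration of why the __str__ override matters (the class names here are hypothetical stand-ins):

```python
from datetime import date

class GameDateWithoutStr:
    def __init__(self, game_date):
        self.date = game_date

class GameDateWithStr:
    def __init__(self, game_date):
        self.date = game_date

    def __str__(self):
        return self.date.strftime("%Y-%m-%d")

game_date = date(2019, 5, 9)

# Without __str__, str() falls back to the default repr, which embeds the
# memory address -- producing a different cache key on every request.
assert "object at 0x" in str(GameDateWithoutStr(game_date))

# With __str__ overridden, the value is stable across requests.
assert str(GameDateWithStr(game_date)) == "2019-05-09"
```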

Questions/Contributions

If you have any questions, please open an issue. Any suggestions and contributions are absolutely welcome. This is still a very small and young project; I plan on adding a feature roadmap and further documentation in the near future.

fastapi-redis-cache's People

Contributors: a-luna

fastapi-redis-cache's Issues

@cache decorator is getting skipped

Hello,

I am facing an error where the @cache decorator does not seem to be producing any output or writing any keys.

I started the Redis client according to the docs:

def startup():
    redis_cache = FastApiRedisCache()
    redis_cache.init(
        host_url = os.environ.get("REDIS_URL", LOCAL_REDIS_URL),
        prefix="multitenant_cache"
    )

and applied the decorator as follows:

@cache()
def GetNewOrder(request: Request):
    ...
    return Response(status_code=resp.status_code, content=resp.content, headers=cleanupRespHeaders(resp.headers))

When I check my server output, I see where redis_cache successfully connects to my Redis server, but I'm not getting any output that indicates that it's caching, and I'm not getting any new keys in my Redis database. Any idea if I'm missing some setup step or doing something else wrong here?

Object of type <class '...'> is not JSON-serializable

Hello!

Just tried to use your package to cache frequent GET requests in my FastApi app.

Endpoint looks like this:

@router.get("/{id}", response_model=Malt)
@cache_one_hour()
def get_malt_by_id(*, db: Session = Depends(get_db), id: UUID4, ):
    return actions.malt.get(db=db, id=id)   # actions returns SQLAlchemy 'Malt' model which is converted to Pydantic 'Malt' schema after that, also tries to return Malt.from_orm(actions...) - same result

When I try to make requests I can see following in the logs:

FAILED_TO_CACHE_KEY: Object of type <class '....db.models.dictionaries.malt.Malt'> is not JSON-serializable: key=cache:mypkg.api.routes.dictionaries.malts.get_malt_by_id(id=7af9090c-dcb5-4778-b090-5b6890618566)

Pydantic schema looks like the following (basically it is much more complex, but I removed everything but name and still get the same result):

class Malt(BaseModel):
    name: str

    class Config:
        orm_mode = True

My example seems to be quite similar to your example with the User model from the documentation. So why doesn't it work?

add support for if-modified-since directive

the project which led me to create this plugin doesn't serve any assets that make sense to validate with the last-mod time (e.g., files), since the response data in my project is always retrieved from a database.

However, it would be pretty silly to assume that all websites are the same as mine, so supporting the if-modified-since directive is definitely required.

handling dirty cache entries

Hi,

Do you have any recommendations for how to handle cache entries that may have become dirty?
e.g., if you have a PUT endpoint modifying a resource that may be in my cache, I guess the caching mechanism in fastapi-redis-cache's code will not become aware by pure magic that the cache entry is dirty. Do I have to handle (update or delete) the cache entry explicitly within the PUT code? This doesn't look very elegant to me, compared to the clean way the rest of the caching is hidden from the developer.

cheers
j.

Can't Cache List of serialized data

Hi
thank you for this great module. I just have an issue: I can't cache a list of serialized dict data. I am using Tortoise ORM with a Pydantic model. My query looks like:

@app.get("/forcasts")
@cache(expire=120)
async def forcasts():
    today = datetime.today().strftime("%Y-%m-%d")
    return await Forcast_pydantic.from_queryset(Forcast.filter(date=today))

and I get this:
INFO:fastapi_redis_cache.client: 12/12/2022 07:13:18 PM | FAILED_TO_CACHE_KEY: Object of type <class 'list'> is not JSON-serializable: key=myapi-cache:main.forcasts()

my response that is not cached correctly:
[ { "id": 1, "date": "2022-12-12", "company_ar": "الرياض", "company_en": "RIBL", "code": 1010, "market_cap": 931500, "negative_positive": "negative", "duration": 5, "profit": null, "capital_protection": "-10.2601156069375" }, { "id": 2, "date": "2022-12-12", "company_ar": "الجزيرة", "company_en": "BJAZ", "code": 1020, "market_cap": 15957, "negative_positive": "negative", "duration": 7, "profit": null, "capital_protection": "-15.9395248380128" }]

Can you help me with that?
Thanks

ttl expired -> request contains eTag/last-mod -> current behavior is wrong

when a browser sends a request where the time indicated by the value of the max-age directive or expires header is in the past, the correct behavior is to:

  • revalidate the client's copy
  • if the eTag has changed or last-mod time is different than the client's copy:
    • send a 200 response, including the new version of the content
  • if the eTag has not changed or last-mod time is the same as the client's copy:
    • send a 304 response with no body content, and header values that indicate max-age/expires time

currently, when the ttl has elapsed and a request is received for the expired data, no revalidation is performed and a 200 response is always sent that includes the entire response data.
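The revalidation logic requested in this issue could be sketched roughly as follows (a framework-agnostic illustration; the function name is hypothetical):

```python
def revalidate(client_etag, current_etag, body):
    # Client's copy matches the current content: 304 with no body.
    if client_etag == current_etag:
        return 304, None
    # Content has changed: 200 with the new version of the content.
    return 200, body

# Unchanged content -> 304 NOT MODIFIED, empty body.
assert revalidate('W/"abc"', 'W/"abc"', b"payload") == (304, None)

# Changed content -> 200 OK with the full response data.
assert revalidate('W/"abc"', 'W/"xyz"', b"payload") == (200, b"payload")
```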

A Rebuild Maybe ?! ....

Feedback on Your Package

I came across your package on the internet, and at first I was excited about it; it looked like it was going to be great, a new feature that hadn't been released yet.

However, my enthusiasm turned into disappointment when I realized that it didn't meet my requirements, and I wondered why it wasn't working, so I checked your source.

It looked like a simple caching system to me that just caches data. I remembered something someone said on a blog about choosing between React and Svelte: why don't you go and build a package that, in the future, someone sees and says "wow, this is so bad" and builds a better one from it.

I have to ask: why didn't you use __init__ instead of init, requiring an extra call?

I just wanted to say I rebuilt your package and it's better now.

Cyrus-Kit on PyPI

Of course, it still has some bugs, but I hope someone finds this in the future and makes a better version of it XD

Feature request - async Redis client

Hi,
Thanks for the great library.

I was looking at the code and I saw that it supports caching async endpoints, but not yet support the use of a Redis async client.

redis-py (https://github.com/redis/redis-py/) has supported async connections to Redis since v4.2.x, so it should be possible to take advantage of that and add an async client as an option alongside the current sync client.
I believe this would be a great addition and would make this library perfect.

P.S.: If you want, I could make a PR for it once I have the time.

coroutine '<method that is cached>' was never awaited

Hey,
I am using your library and everything works fine for "normal" requests.

The problem arises when I want to call a cached method from within the same service.

My route:

@router.get('/get-fiat-currency-choices')
@cache()
def get_fiat_currency_choices():
    response_data = requests.get(f'{CURRENCY_EXCHANGE_API_URL}/currencies').json()

    choices = []
    for symbol, name in response_data.items():
        choices.append(
            {
                'symbol': symbol,
                'name': name,
            }
        )

    return choices

I am trying to call it with:

    @validator('currency')
    def validate_currency(cls, val):
        allowed_currencies = [currency['symbol'] for currency in get_fiat_currency_choices()]
        if val not in allowed_currencies:
            raise ValidationError('Currency is not supported.')
        return val

Error I am getting:

RuntimeWarning: coroutine 'get_fiat_currency_choices' was never awaited
2021-08-14T17:24:37.540579531Z   allowed_currencies = [currency['symbol'] for currency in get_fiat_currency_choices()]
2021-08-14T17:24:37.540585575Z RuntimeWarning: Enable tracemalloc to get the object allocation traceback

Would appreciate any help, thanks!

P.S. The same thing occurs when I am using similar library 'fastapi-cache2' (https://github.com/long2ice/fastapi-cache)

Nothing being cached

Description

Sorry for the very apt title. I'm working on this codebase: https://github.com/joeflack4/ccdh-terminology-service/tree/feature_cache

I have an app.py file where I'm importing FastApiRedisCache() and setting it up in startup().

I'm adding @cache() to several routers, one of which is in models.py. I'm using this endpoint as a test case to make sure things are working.

There's not much there at the moment, but it returns something. Here's what you can get from that endpoint as seen on our production server:
https://terminology.ccdh.io/models/
https://terminology.ccdh.io/docs#/CRDC-H%20and%20CRDC%20Node%20Models/get_models

However when I check server logs or the Redis monitor, I'm not seeing anything being cached at this or any other endpoint.

To see all the changes I've made to my codebase to implement this feature, perhaps looking at this diff in my draft pull request might help: https://github.com/cancerDHC/ccdh-terminology-service/pull/53/files

Note that this setup uses docker.

Where I've checked

1. Redis monitor

docker exec -it docker_ccdh-redis_1 sh

I don't see anything coming up as I'm checking my routes on localhost.

/data # redis-cli FLUSHALL
OK
/data # redis-cli MONITOR
OK

2. Server logs

I can't see any of the tell-tale signs of caching as per the fastapi-redis-cache documentation. I do see that the endpoints are getting hit. But they're not caching.

INFO: Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

INFO: 172.19.0.1:57602 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: 172.19.0.1:57602 - "GET /docs HTTP/1.1" 200 OK
INFO: 172.19.0.1:57602 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 172.19.0.1:57864 - "GET /models/ HTTP/1.1" 200 OK
INFO: 172.19.0.1:57864 - "GET /models/ HTTP/1.1" 200 OK
INFO: 172.19.0.1:57880 - "GET /conceptreferences?key=uri&value=1&modifier=equals HTTP/1.1" 404 Not Found
INFO: 172.19.0.1:57880 - "GET /conceptreferences?key=uri&value=1&modifier=equals HTTP/1.1" 404 Not Found
INFO: 172.19.0.1:57890 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: 172.19.0.1:57890 - "GET /docs HTTP/1.1" 200 OK
INFO: 172.19.0.1:57896 - "GET /models/ HTTP/1.1" 200 OK
INFO: 172.19.0.1:57940 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: 172.19.0.1:57940 - "GET /docs HTTP/1.1" 200 OK
INFO: 172.19.0.1:57940 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 172.19.0.1:57944 - "GET /models HTTP/1.1" 307 Temporary Redirect
INFO: 172.19.0.1:57944 - "GET /models/ HTTP/1.1" 200 OK
INFO: 172.19.0.1:57944 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 172.19.0.1:57948 - "GET /models/GDC HTTP/1.1" 200 OK
INFO: 172.19.0.1:57948 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: 172.19.0.1:57948 - "GET /docs HTTP/1.1" 200 OK
INFO: 172.19.0.1:57948 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 172.19.0.1:57954 - "GET /models/GDC/entities HTTP/1.1" 200 OK

What I've tried

I tried checking that 'redis_url' is actually getting set.
In app.py, I set up FastApiRedisCache() as follows:

    redis_cache = FastApiRedisCache()
    redis_cache.init(host_url=get_settings().redis_url)

I then entered the docker container and ran python and just checked to make sure that get_settings().redis_url existed and was correct.

# ls
Pipfile  Pipfile.lock  README.md  ccdh	crdc-nodes  data  docker  docs	env  output  pytest.ini  tests
# python
Python 3.8.11 (default, Jul 22 2021, 15:32:17)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from ccdh.config import get_settings
>>> get_settings()
Settings(app_name='TCCM API', neo4j_username='neo4j', neo4j_password='nFDqqgkNqzt', neo4j_host='ccdh-neo4j', neo4j_bolt_port='7687', redis_url='redis://127.0.0.1:6379', ccdhmodel_branch='main')

Possible solutions

This actually may be a problem on my docker config end. I'm investigating that now, trying out at least these 3 things:

  1. Make sure my redis url is correct, given docker setup. (currently FastAPIRedisCache says it can't connect!)
  2. Add 'link' to redis container
  3. Add depends_on health check to redis container

I'm not sure what else to try. Maybe there is something wrong with my setup in my Python code? Maybe something wrong with my docker setup?

"Too much data for declared Content-Length" error when activating caching

The same exact code is raising exceptions when using the @cache decorator:

import logging

from fastapi import FastAPI, Request, Response
from fastapi_redis_cache import FastApiRedisCache, cache

import base_server_info
import prometheus
import settings

logger = logging.getLogger(__name__)


app = FastAPI(title="Base Server Info", version="0.1.0")


@app.on_event("startup")
def startup_event():
    logger.info("starting up")
    redis_cache = FastApiRedisCache()
    redis_cache.init(
        host_url=f"redis://{settings.REDIS_HOST}:{settings.REDIS_PORT}",
        prefix="base-server-info",
        response_header="X-base-server-info-Cache",
        ignore_arg_types=[Request, Response],
    )


@app.get("/servers")
@cache(expire=60)
async def get_servers():
    logger.info("getting servers")

    servers = base_server_info.get_base_servers_list()
    k8sNodes = base_server_info.get_kubernetes_nodes()

    all = base_server_info.consolidate_servers_info(servers, k8sNodes)

    return_value = prometheus.prepare_for_prometheus(all)

    return return_value

This code raises the exception: "Too much data for declared Content-Length"

The same code runs well when I remove the @cache decorator (no surprise here), but also works well when the redis server is unavailable (forcing the caching mechanism to evaluate the code).

Thanks in advance for any insight...

The stack trace is as below:

poc_prometheus_http_service_discovery-app-1    | WARNING:  StatReload detected changes in 'main.py'. Reloading...
poc_prometheus_http_service_discovery-app-1    | INFO:     Shutting down
poc_prometheus_http_service_discovery-app-1    | INFO:     Waiting for application shutdown.
poc_prometheus_http_service_discovery-app-1    | INFO:     Application shutdown complete.
poc_prometheus_http_service_discovery-app-1    | INFO:     Finished server process [8]
poc_prometheus_http_service_discovery-app-1    | INFO:     Started server process [9]
poc_prometheus_http_service_discovery-app-1    | INFO:     Waiting for application startup.
poc_prometheus_http_service_discovery-app-1    | INFO:fastapi_redis_cache.client: 10/25/2022 10:53:57 AM | CONNECT_BEGIN: Attempting to connect to Redis server...
poc_prometheus_http_service_discovery-app-1    | INFO:fastapi_redis_cache.client: 10/25/2022 10:53:57 AM | CONNECT_SUCCESS: Redis client is connected to server.
poc_prometheus_http_service_discovery-app-1    | INFO:     Application startup complete.
poc_prometheus_http_service_discovery-app-1    | INFO:     172.19.0.1:42710 - "GET /servers HTTP/1.1" 200 OK
poc_prometheus_http_service_discovery-app-1    | WARNING:  StatReload detected changes in 'main.py'. Reloading...
poc_prometheus_http_service_discovery-app-1    | INFO:     Shutting down
poc_prometheus_http_service_discovery-app-1    | INFO:     Waiting for application shutdown.
poc_prometheus_http_service_discovery-app-1    | INFO:     Application shutdown complete.
poc_prometheus_http_service_discovery-app-1    | INFO:     Finished server process [9]
poc_prometheus_http_service_discovery-app-1    | INFO:     Started server process [19]
poc_prometheus_http_service_discovery-app-1    | INFO:     Waiting for application startup.
poc_prometheus_http_service_discovery-app-1    | INFO:fastapi_redis_cache.client: 10/25/2022 10:57:05 AM | CONNECT_BEGIN: Attempting to connect to Redis server...
poc_prometheus_http_service_discovery-app-1    | INFO:fastapi_redis_cache.client: 10/25/2022 10:57:05 AM | CONNECT_SUCCESS: Redis client is connected to server.
poc_prometheus_http_service_discovery-app-1    | INFO:     Application startup complete.
poc_prometheus_http_service_discovery-app-1    | INFO:fastapi_redis_cache.client: 10/25/2022 10:57:11 AM | KEY_ADDED_TO_CACHE: key=base-server-info:main.get_servers()
poc_prometheus_http_service_discovery-app-1    | INFO:     172.19.0.1:54082 - "GET /servers HTTP/1.1" 200 OK
poc_prometheus_http_service_discovery-app-1    | ERROR:    Exception in ASGI application
poc_prometheus_http_service_discovery-app-1    | Traceback (most recent call last):
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
poc_prometheus_http_service_discovery-app-1    |     result = await app(  # type: ignore[func-returns-value]
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
poc_prometheus_http_service_discovery-app-1    |     return await self.app(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
poc_prometheus_http_service_discovery-app-1    |     await super().__call__(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
poc_prometheus_http_service_discovery-app-1    |     await self.middleware_stack(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
poc_prometheus_http_service_discovery-app-1    |     raise exc
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
poc_prometheus_http_service_discovery-app-1    |     await self.app(scope, receive, _send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 75, in __call__
poc_prometheus_http_service_discovery-app-1    |     raise exc
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 64, in __call__
poc_prometheus_http_service_discovery-app-1    |     await self.app(scope, receive, sender)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
poc_prometheus_http_service_discovery-app-1    |     raise e
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
poc_prometheus_http_service_discovery-app-1    |     await self.app(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 680, in __call__
poc_prometheus_http_service_discovery-app-1    |     await route.handle(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 275, in handle
poc_prometheus_http_service_discovery-app-1    |     await self.app(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 68, in app
poc_prometheus_http_service_discovery-app-1    |     await response(scope, receive, send)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 167, in __call__
poc_prometheus_http_service_discovery-app-1    |     await send({"type": "http.response.body", "body": self.body})
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 61, in sender
poc_prometheus_http_service_discovery-app-1    |     await send(message)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 159, in _send
poc_prometheus_http_service_discovery-app-1    |     await send(message)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 512, in send
poc_prometheus_http_service_discovery-app-1    |     output = self.conn.send(event)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/h11/_connection.py", line 512, in send
poc_prometheus_http_service_discovery-app-1    |     data_list = self.send_with_data_passthrough(event)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/h11/_connection.py", line 545, in send_with_data_passthrough
poc_prometheus_http_service_discovery-app-1    |     writer(event, data_list.append)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/h11/_writers.py", line 65, in __call__
poc_prometheus_http_service_discovery-app-1    |     self.send_data(event.data, write)
poc_prometheus_http_service_discovery-app-1    |   File "/usr/local/lib/python3.10/site-packages/h11/_writers.py", line 91, in send_data
poc_prometheus_http_service_discovery-app-1    |     raise LocalProtocolError("Too much data for declared Content-Length")
poc_prometheus_http_service_discovery-app-1    | h11._util.LocalProtocolError: Too much data for declared Content-Length

Use cache() decorator outside of fastapi

Hi, I'm trying to use the @cache() decorator from a worker like Celery/Dramatiq.
Is there a way to use the decorator outside of FastAPI?

@cache()
def long_function_query(word):
    pass  # do long-running stuff
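One possible workaround is a standalone decorator built directly on redis-py (a sketch, not this library's @cache, which is tied to FastAPI's Request/Response handling). The decorator name `redis_cache` and the key prefix are illustrative; `client` can be any object with `get`/`set`, such as `redis.Redis.from_url(...)`:

```python
# Standalone caching decorator sketch for worker processes (Celery, Dramatiq, etc.).
# Assumes arguments and return values are JSON-serializable.
import functools
import hashlib
import json

def redis_cache(client, expire: int = 300):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Build a deterministic key from the function name and its arguments.
            raw = json.dumps([func.__name__, args, kwargs], sort_keys=True, default=str)
            key = "worker-cache:" + hashlib.sha256(raw.encode()).hexdigest()
            hit = client.get(key)
            if hit is not None:
                return json.loads(hit)
            result = func(*args, **kwargs)
            client.set(key, json.dumps(result), ex=expire)
            return result
        return wrapper
    return decorator
```

Usage would look like `@redis_cache(redis.Redis.from_url("redis://127.0.0.1:6379"), expire=600)` above the task function.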

How can I cache POST requests?

Hi, thank you for the library. Is there any way to cache POST requests as well? In my use case we do not change any resource on the server; we use POST only so the request body can carry some parameters, and there is a high chance of repeated requests with the same combination. Can you suggest how I can proceed with this using your library?

I also don't mind caching manually if the above is not possible. If manual caching is possible with your library, please let me know how to do that.
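One way to cache POST responses manually is with redis-py directly, outside this library's decorator (a sketch under that assumption; the names `make_cache_key` and `cached_post_handler` are illustrative). The key is derived from the endpoint path plus a hash of the request body, so repeated POSTs with the same payload hit the cache:

```python
# Manual caching sketch for POST-style lookups with a JSON-serializable body.
import hashlib
import json

def make_cache_key(endpoint: str, body: dict) -> str:
    # Serialize with sorted keys so logically equal bodies hash identically.
    payload = json.dumps(body, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return f"myapi-cache:{endpoint}:{digest}"

def cached_post_handler(redis_client, endpoint: str, body: dict, compute, expire: int = 60):
    """Return the cached response for (endpoint, body); compute and store it on a miss."""
    key = make_cache_key(endpoint, body)
    hit = redis_client.get(key)
    if hit is not None:
        return json.loads(hit)
    result = compute(body)
    redis_client.set(key, json.dumps(result), ex=expire)
    return result
```

Inside a path operation function, `redis_client` would be a `redis.Redis` instance and `compute` the expensive query.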

cache-control: no-cache behavior is wrong

Per MDN, the HTTP caching spec, and every other authoritative resource on this subject, the Cache-Control no-cache directive is NOT used to indicate that a response should not be cached. It means a stored response must be revalidated with the origin server before reuse; no-store is the directive that forbids caching entirely.

This is rather unintuitive, but I probably should have known this before I went and built a caching plugin for FastAPI. I need to figure out exactly how to fix this, which will probably reveal even more places where I have violated the most basic of caching rules.

@cache Decorator Not Working

Hi Team,
I have followed the steps given in https://pypi.org/project/fastapi-redis-cache/.

I have added the following methods in main.py; my observations are as follows. (Please note I am using the "Try it out" option of the FastAPI docs to test the endpoint.) Please help to resolve this issue. I am using Python 3.11.
Observation:

INFO: 127.0.0.1:62749 - "GET /dynamic_data HTTP/1.1" 200 OK
INFO:fastapi_redis_cache.client: 05/18/2023 06:53:13 PM | KEY_ADDED_TO_CACHE: key=myapi-cache:main.get_dynamic_data()

Response Header:

cache-control: max-age=30
content-length: 72
content-type: application/json
date: Thu, 18 May 2023 10:53:13 GMT
etag: W/2847227004069749289
expires: Thu, 18 May 2023 10:53:43 GMT
server: uvicorn
x-myapi-cache: Miss

Note:

When the endpoint is executed a second time, the value of x-myapi-cache should be Hit, as follows, but that is not happening:

x-myapi-cache: Hit

{
  "message": "this data should only be cached temporarily",
  "success": true
}

Method:

# Will be cached for thirty seconds
@app.get("/dynamic_data")
@cache(expire=30)
def get_dynamic_data(request: Request, response: Response):
    return {"success": True, "message": "this data should only be cached temporarily"}

main.py

import os

from fastapi import FastAPI, Request, Response
from fastapi_redis_cache import FastApiRedisCache, cache

LOCAL_REDIS_URL = "redis://143.42.77.29:6379"

app = FastAPI(title="FastAPI Redis Cache Example")

@app.on_event("startup")
def startup():
    redis_cache = FastApiRedisCache()
    redis_cache.init(
        host_url=os.environ.get("REDIS_URL", LOCAL_REDIS_URL),
        prefix="myapi-cache",
        response_header="X-MyAPI-Cache",
        ignore_arg_types=[Request, Response]
    )

# Will be cached for thirty seconds
@app.get("/dynamic_data")
@cache(expire=30)
def get_dynamic_data(request: Request, response: Response):
    return {"success": True, "message": "this data should only be cached temporarily"}

add async support for redis

Since redis-py supports async natively, is there any plan to add async support? That would integrate more powerfully with FastAPI.

Add support for authentication to a remote redis

Hello,

I am trying to connect my FastAPI application to a Redis server instance hosted on Redis Labs, and it requires me to provide my credentials (username and password) before I can connect to the server. Unfortunately, I can't find a way to provide my credentials when instantiating the FastApiRedisCache() class.

Is there a workaround for this?
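One possible workaround: redis-py's `from_url` accepts credentials embedded in the URL (`redis://username:password@host:port`), and fastapi-redis-cache takes `host_url` as a plain URL string, so passing such a URL may work (an assumption; verify against the library's Redis client setup). A sketch validating the URL shape with the standard library — the hostname and credentials below are placeholders:

```python
# Embedding credentials in the Redis URL; urlparse shows how redis-py would see it.
from urllib.parse import urlparse

# Placeholder credentials/host for a Redis Labs-style instance.
REDIS_URL = "redis://myuser:mypassword@redis-12345.cloud.redislabs.com:12345"

parsed = urlparse(REDIS_URL)
print(parsed.username)  # myuser
print(parsed.password)  # mypassword
print(parsed.hostname, parsed.port)

# The URL would then be passed unchanged, e.g.:
#   redis_cache.init(host_url=REDIS_URL, prefix="myapi-cache")
```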
