
python-redis-cache's Issues

LUA Cache fn purpose

Hey there, not sure if this is the right place to ask, but I was curious why the library attaches a Lua script to the Redis client (I'm not sure what it's supposed to do).

Support for jsonable_encoder for pydantic responses

For pydantic classes, the cache fails with an "object is not JSON serializable" error. This would need jsonable_encoder, the encoder recommended by FastAPI, to be set as the serializer.

Since jsonable_encoder returns a dict instead of a string, we would also need to handle this case:

        if isinstance(serialized_data, dict):
            serialized_data = compact_dump(serialized_data).encode('utf-8')
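
For what it's worth, a minimal workaround sketch along these lines (the pydantic_dumps helper and the connection settings are mine, not the library's):

from json import dumps, loads
from fastapi.encoders import jsonable_encoder
from redis import Redis
from redis_cache import RedisCache

client = Redis(host="localhost", port=6379, decode_responses=True)

# Run every value through jsonable_encoder first, so pydantic models
# become plain dicts before JSON serialization.
def pydantic_dumps(value):
    return dumps(jsonable_encoder(value), sort_keys=True)

cache = RedisCache(redis_client=client, serializer=pydantic_dumps, deserializer=loads)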

Error "EVALSHA - all keys must map to the same key slot" when work with redis cluster

When using a Redis cluster as the client, CacheDecorator can raise this error:

  File "/usr/local/lib/python3.8/site-packages/redis_cache/__init__.py", line 144, in inner
    get_cache_lua_fn(self.client)(keys=[key, self.keys_key], args=[result_serialized, self.ttl, self.limit])
  File "/usr/local/lib/python3.8/site-packages/redis/commands/core.py", line 5807, in __call__
    return client.evalsha(self.sha, len(keys), *args)
  File "/usr/local/lib/python3.8/site-packages/redis/commands/core.py", line 5194, in evalsha
    return self._evalsha("EVALSHA", sha, numkeys, *keys_and_args)
  File "/usr/local/lib/python3.8/site-packages/redis/commands/core.py", line 5178, in _evalsha
    return self.execute_command(command, sha, numkeys, *keys_and_args)
  File "/usr/local/lib/python3.8/site-packages/redis/cluster.py", line 1074, in execute_command
    raise e
  File "/usr/local/lib/python3.8/site-packages/redis/cluster.py", line 1047, in execute_command
    target_nodes = self._determine_nodes(
  File "/usr/local/lib/python3.8/site-packages/redis/cluster.py", line 875, in _determine_nodes
    slot = self.determine_slot(*args)
  File "/usr/local/lib/python3.8/site-packages/redis/cluster.py", line 965, in determine_slot
    raise RedisClusterException(
redis.exceptions.RedisClusterException: EVALSHA - all keys must map to the same key slot

I think the problem is in the limit handling in the Lua script:

def get_cache_lua_fn(client):
    if not hasattr(client, '_lua_cache_fn'):
        client._lua_cache_fn = client.register_script("""
local ttl = tonumber(ARGV[2])
local value
if ttl > 0 then
  value = redis.call('SETEX', KEYS[1], ttl, ARGV[1])
else
  value = redis.call('SET', KEYS[1], ARGV[1])
end
local limit = tonumber(ARGV[3])
if limit > 0 then
  local time_parts = redis.call('TIME')
  local time = tonumber(time_parts[1] .. '.' .. time_parts[2])
  redis.call('ZADD', KEYS[2], time, KEYS[1])
  local count = tonumber(redis.call('ZCOUNT', KEYS[2], '-inf', '+inf'))
  local over = count - limit
  if over > 0 then
    local stale_keys_and_scores = redis.call('ZPOPMIN', KEYS[2], over)
    -- Remove the scores and just leave the keys
    local stale_keys = {}
    for i = 1, #stale_keys_and_scores, 2 do
      stale_keys[#stale_keys+1] = stale_keys_and_scores[i]
    end
    redis.call('ZREM', KEYS[2], unpack(stale_keys))
    redis.call('DEL', unpack(stale_keys))
  end
end
return value
""")
    return client._lua_cache_fn

In this limit handling, there are operations involving both the current key and the keys_key, which tracks the count of cached keys. On a Redis cluster, the current key must be in the same slot as keys_key; however, there is no guarantee that these two keys will hash to the same slot.

I worked around this problem (not fully verified; it needs more testing) by applying a Redis hash tag to the namespace ("namespace" -> "{namespace}"):

class CacheDecorator:
    def __init__(self, redis_client, prefix="rc", serializer=dumps, deserializer=loads, key_serializer=None, ttl=0, limit=0, namespace=None):
        self.client = redis_client
        self.prefix = prefix
        self.serializer = serializer
        self.key_serializer = key_serializer
        self.deserializer = deserializer
        self.ttl = ttl
        self.limit = limit
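        # Hash tag: the cluster hashes only the part inside {}, so all keys sharing this namespace land in one slot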
        self.namespace = f"{{{namespace}}}"
        self.keys_key = None

Then I give my data function a wrapper interface, and at runtime apply the decorator to that interface with different namespaces, like this:

        @redis_cache.cache(namespace=user_id)
        def data_func(...)

This makes the keys_key and the current key use the same slot and avoids the error.
However, I have to devise my own strategy for choosing the namespace (i.e. the hash tag), so this can only be considered a workaround for my own use case. In addition, non-decorator operations may run into the same situation.
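
A quick way to check the effect of the hash tag (a sketch assuming redis-py's RedisCluster and a local node on port 7000; the key names are made up):

from redis.cluster import RedisCluster

rc = RedisCluster(host="localhost", port=7000)

# With a hash tag, only the text inside {} is hashed, so the cache key
# and the keys_key map to the same slot.
assert rc.keyslot("rc:{user42}:deadbeef") == rc.keyslot("rc:{user42}:keys")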

Upgrading to v3 breaks when using redis cluster

Upgrading to v3 triggered the "EVALSHA - all keys must map to the same key slot" error with a Redis cluster for all insertions; this was not happening on v2. I cannot really track down why, but it works with v2 and not with v3.

If it makes any difference, I am using json.dumps(jsonable_encoder(obj), sort_keys=True) as the cache serializer, where jsonable_encoder comes from fastapi.

Library tries to import nonexistent "django.utils.six" module

Hi,

This library tries to import the "django.utils.six" module, which only existed before Django 3, so it's not currently compatible with Django 3 unless some trickery is involved.

My suggestion is that either this dependency gets dropped in favor of the native Python 3 libraries, or at least that the code falls back to importing six from the standalone six package if the user has it installed.
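
For the fallback option, something like this (a sketch; whether the standalone six covers every use in the library is untested):

# Prefer django.utils.six (removed in Django 3.0); fall back to the
# standalone six package if it is installed.
try:
    from django.utils import six
except ImportError:
    import six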

Cheers,
Diogo

Use __qualname__ instead of __name__ for the namespace

If you have two classes in a module with the same method name and you cache both, the keys will be the same, because __name__ only returns the name of the method, while __qualname__ returns the class name together with the method name.
So, can you use that instead?
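
A quick illustration (class names are hypothetical):

class A:
    def get(self): ...

class B:
    def get(self): ...

print(A.get.__name__)      # "get"   -- collides with B.get
print(B.get.__name__)      # "get"
print(A.get.__qualname__)  # "A.get" -- unambiguous
print(B.get.__qualname__)  # "B.get"
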
In any case, thanks for your work!

Support custom cache key generation

Currently the cache key is generated by serializing all args and kwargs:

serialized_data = self.serializer([args, kwargs])

In some cases, selecting only a subset of the arguments or providing a completely custom key-generation function is needed, because some of the arguments are not relevant or not serializable (e.g. a DB session object). See e.g. signature_generator in the redis_cache_decorator package for a similar feature.

I've solved it for now by monkey-patching the get_key method in CacheDecorator, but it would be nice to configure this via decorator parameters.
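
For reference, roughly the monkey patch I mean (the key layout below is illustrative, not the library's exact format):

from json import dumps
from redis_cache import CacheDecorator

# Build the key from kwargs only, skipping irrelevant or
# unserializable positional args such as a DB session.
def get_key_from_kwargs(self, args, kwargs):
    serialized = dumps(kwargs, sort_keys=True)
    return f"{self.prefix}:{self.namespace}:{serialized}"

CacheDecorator.get_key = get_key_from_kwargs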

Decorating async functions?

Hi:
This may be simple, but I can't find it in the docs or the code: how do I cache the results of an async function? I'm getting an error that a coroutine isn't JSON serializable, which is true, but is there a way to tell redis-cache to await the coroutine instead of caching the coro itself? Or would that be a simple change to submit a PR for?
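
In case it helps the discussion, a minimal sketch of what an async-aware decorator could look like (standalone, not the library's API; it assumes a sync redis client, and the key format is simplified):

from functools import wraps
from json import dumps, loads

def async_cached(client, ttl=300):
    def decorator(fn):
        @wraps(fn)
        async def inner(*args, **kwargs):
            key = f"{fn.__qualname__}:{dumps([args, kwargs], sort_keys=True)}"
            cached = client.get(key)
            if cached is not None:
                return loads(cached)
            result = await fn(*args, **kwargs)  # await here, so the result, not the coroutine, gets cached
            client.setex(key, ttl, dumps(result))
            return result
        return inner
    return decorator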

Thanks much!

How to use cache decorator inside class method

How can we use cache decorator inside a class without providing a custom serializer for every class?

I have following class.

class Myclass:
    def my_def2(self, value):
        return value + 2

    @cache.cache()
    def my_def(self, value):
        value += 5
        return self.my_def2(value)

This throws a TypeError because the JSON serializer doesn't recognize the type of the self argument.

I am dealing with it like this.

def compact_dump(value):
  self_arg = value.get('self')
  # We don't have access to func here, so we can't check func.__qualname__ != func.__name__.
  # Maybe wrap @cache inside another decorator and check func.__qualname__ != func.__name__ there.
  # Maybe also require that self_arg is not an int, float, tuple, dict, etc., since those inherit from object too.
  if self_arg is not None and isinstance(self_arg, object):
    value.pop('self')
  return dumps(value, separators=(",", ":"), sort_keys=True)

cache = RedisCache(redis_client=client, serializer=compact_dump)

But this looks hackish. Is there a better/recommended way to deal with it?

I just want to throw decorator @cache without worrying about the method type.
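
One dependable workaround is to cache a module-level helper so that self never reaches the serializer (a sketch; the helper name is mine):

@cache.cache()
def _my_def_impl(value):
    value += 5
    return value + 2  # inlines my_def2

class Myclass:
    def my_def(self, value):
        return _my_def_impl(value)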

Option to disable the cache & Default TTL

Hi,
Thanks for the repo, great work!
I want to suggest a couple of features

  1. Add an option to disable the cache.
    I need the ability to disable the cache based on an environment variable (e.g. in lower environments or while troubleshooting) without changing the code, and preferably without writing my own wrapper.
    Maybe some sort of "active" flag when instantiating the cache, so that if it's inactive the decorator just returns the wrapped function itself? (A sketch of what I mean follows after this list.)
    Flask-Caching (which I'm trying to migrate from) does this, if it helps, by setting:
    cache_config = {
        "CACHE_TYPE": "null",
    }

  2. Add the ability to set a default TTL when instantiating the cache.
    For now my code in all the cached parts (there are a lot) looks like this, which is cumbersome:
    @cache.cache(ttl=int(os.getenv("CACHE_TTL", "3600")))
    It would be cleaner to set a default when creating the object.
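
The sketch referenced in item 1 (the env variable name and the maybe_cache helper are hypothetical; cache is an existing RedisCache instance):

import os

CACHE_ENABLED = os.getenv("CACHE_ENABLED", "true").lower() == "true"

def maybe_cache(**cache_kwargs):
    if not CACHE_ENABLED:
        return lambda fn: fn  # disabled: return the function undecorated
    return cache.cache(**cache_kwargs)

@maybe_cache(ttl=3600)
def expensive_lookup(key):
    ...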

Thanks!

Unable to use the package

Could you please help me with an example of how I can use it?
I am unable to work out how to set up and use this package.
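
For anyone else landing here, a minimal usage sketch based on the decorator shown in the other issues (host/port are assumptions):

from redis import Redis
from redis_cache import RedisCache

client = Redis(host="localhost", port=6379, decode_responses=True)
cache = RedisCache(redis_client=client)

@cache.cache(ttl=60)
def slow_add(a, b):
    return a + b  # computed once per distinct (a, b) within the TTL

slow_add(1, 2)  # executes the function and stores the result
slow_add(1, 2)  # served from Redis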

Ability to ignore provided function results

Hello. A simple case: your function fetches some data from a remote resource. This resource can return an error, for example "Service Unavailable", so the response from my backend is similarly an error.
Naturally, I don't want to cache this result for any amount of time.

I think something like

@cache.cache(..., ignore_results=(None,))
def func(...):
    ...
    if response.status_code == 503:
        return None
    ...

So if you call it again with the same arguments, it won't use the cache.
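
Roughly the behavior I have in mind (a standalone sketch, not the library's implementation; the names are the proposal's):

from functools import wraps
from json import dumps, loads

def cached(client, ttl=60, ignore_results=(None,)):
    def decorator(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            key = f"{fn.__qualname__}:{dumps([args, kwargs], sort_keys=True)}"
            hit = client.get(key)
            if hit is not None:
                return loads(hit)
            result = fn(*args, **kwargs)
            if result not in ignore_results:  # error sentinels are returned but never cached
                client.setex(key, ttl, dumps(result))
            return result
        return inner
    return decorator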

If you're okay with this idea, I can try to implement it and make a PR.

Pass caching when redis server goes down during execution

It seems that CacheDecorator has no mechanism to stop caching and fall back to running the function directly when the Redis server goes down. Maybe add something like this?

        # assumes: from functools import wraps; from redis.exceptions import ConnectionError, RedisError
        @wraps(fn)
        def inner(*args, **kwargs):
            nonlocal self
            key = self.get_key(args, kwargs)

            try:
                result = self.client.get(key)
            except (RedisError, ConnectionError):    # may need more, e.g. BusyLoadingError
                result = fn(*args, **kwargs)
                return result

            if not result:
                result = fn(*args, **kwargs)
                result_serialized = self.serializer(result)
                get_cache_lua_fn(self.client)(keys=[key, self.keys_key], args=[result_serialized, self.ttl, self.limit])
            else:
                result = self.deserializer(result)
            return result

Perhaps similar error-ignoring mechanisms could be added to invalidate() and invalidate_all(), so that the program can keep running when the Redis server goes down.
