
cached's Issues

Revive TTL Option

Hey y'all, I have been using this crate for a few days now and noticed there is a time option which can be passed to give cached values a certain time to live before they get removed.
Now my feature request: add an option (maybe called reset_time) which accepts a bool; if it's true, the TTL is reset whenever the same key is accessed while the value is still in the cache.
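
A minimal sketch of how the suggested option might look (reset_time is the hypothetical name from the suggestion above, not an existing attribute):

use cached::proc_macro::cached;

// Hypothetical usage: `reset_time` is the option proposed above, not an existing attribute.
#[cached(time = 60, reset_time = true)]
fn lookup(key: String) -> usize {
    // expensive work; with `reset_time = true` every cache hit would push
    // the entry's expiry out another 60 seconds
    key.len()
}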

adaptive replacement cache

An adaptive replacement cache keeps both the most recently used (L1) and the most frequently used (L2) entries in cache. The cache also keeps lists of recently evicted cache entries (G1, G2). On a cache miss it checks whether the entry is in G1 or G2 and adapts the target ratio of L1/L2 entries. When evicting, it decides whether to evict from L1 or L2 by comparing the current ratio to the target ratio.

https://en.wikipedia.org/wiki/Adaptive_replacement_cache
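
For illustration, a rough sketch of the bookkeeping described above (all names are made up for this sketch and are not taken from this crate):

use std::collections::{HashSet, VecDeque};
use std::hash::Hash;

// Rough ARC sketch: L1/L2 hold live entries, G1/G2 are "ghost" lists of
// recently evicted keys, and target_l1 is the adaptive split point.
struct ArcCache<K: Eq + Hash> {
    l1: VecDeque<K>,  // recently used once
    l2: VecDeque<K>,  // frequently used
    g1: HashSet<K>,   // ghosts evicted from l1
    g2: HashSet<K>,   // ghosts evicted from l2
    target_l1: usize, // adaptive target size for l1
    capacity: usize,
}

impl<K: Eq + Hash> ArcCache<K> {
    // On a miss, a hit in a ghost list shifts the target ratio toward the
    // list the entry was evicted from.
    fn adapt(&mut self, key: &K) {
        if self.g1.contains(key) {
            self.target_l1 = (self.target_l1 + 1).min(self.capacity);
        } else if self.g2.contains(key) {
            self.target_l1 = self.target_l1.saturating_sub(1);
        }
    }

    // When evicting, compare the current split to the target and evict from
    // whichever list is over budget, remembering the key as a ghost.
    fn evict(&mut self) {
        if self.l1.len() > self.target_l1 {
            if let Some(k) = self.l1.pop_back() {
                self.g1.insert(k);
            }
        } else if let Some(k) = self.l2.pop_back() {
            self.g2.insert(k);
        }
    }
}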

Support Redis

I suggest supporting network-based cache software such as Redis, Memcached, etc.

In a k8s environment, where you may have several small pods working at the same time, caching locally is not helpful enough and increases memory usage, which may end in an OOM kill; having a shared place to cache the data is helpful.

If you agree, I can start trying to do it by adding a new store type, RedisCache, which I expect will be similar to TimedCache and UnboundCache.

To support this, I think the return type of the function must implement Serialize and Deserialize.
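
A rough sketch of what such a store could look like on top of the redis crate, with values serialized via serde_json (method names and error handling here are simplified and are not the Cached trait's exact API):

use redis::Commands;
use serde::{de::DeserializeOwned, Serialize};

// Sketch only: a Redis-backed store. A real RedisCache would implement the
// crate's Cached trait and handle connection pooling and TTLs properly.
struct RedisCache {
    conn: redis::Connection,
}

impl RedisCache {
    fn get<V: DeserializeOwned>(&mut self, key: &str) -> Option<V> {
        let raw: String = self.conn.get(key).ok()?;
        serde_json::from_str(&raw).ok()
    }

    fn set<V: Serialize>(&mut self, key: &str, value: &V) {
        if let Ok(raw) = serde_json::to_string(value) {
            // An expiry could be added here with SETEX instead of SET.
            let _: Result<(), _> = self.conn.set(key, raw);
        }
    }
}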

lifetime `'static` required?

I'm getting the following error:

 #[cached::proc_macro::cached(time = 240, result = true)]
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime `'static` required
72 | pub fn get_data(data1: &str, data2: &str) -> Result<Vec<MyStruct>, reqwest::Error> {
   |                                  ---- help: add explicit lifetime `'static` to the type of `data1`: `&'static str`
   |

Why is it suggesting a static lifetime? Adding a static lifetime propagates the static requirement up the call chain, until I finally get
error[E0759]: `data1` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement

How can I fix this?
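
A sketch of one possible workaround, under the assumption that the generated static cache has to own its keys: take owned Strings instead of &str (the return types below are stand-ins; whether this is the intended fix, I'm not sure):

use cached::proc_macro::cached;

// Sketch of a possible workaround: take owned Strings so the cached key and
// value don't borrow from the caller. The body and return type are stand-ins.
#[cached(time = 240, result = true)]
pub fn get_data(data1: String, data2: String) -> Result<Vec<String>, std::io::Error> {
    Ok(vec![data1, data2])
}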

Cache without key

What's the idiomatic way to express a cache on a fn with parameters but without a "key"?

#[cached(time = 5, key = "&str", convert = r#"{ "" }"#)]
async fn cached_health(db_pool: DBPool) -> Result<(), Error> {

The above works but it looks a bit dirty.

Could there be a "locked" version?

The function-cache is not locked for the duration of the function's execution, so initial (on an empty cache) concurrent calls of long-running functions with the same arguments will each execute fully and each overwrite the memoized value as they complete. This mirrors the behavior of Python's functools.lru_cache.

I'm guessing it's out of scope of this crate but I'm asking just in case.

I read The Benefits of Microcaching with NGINX and it seems using proxy_cache_lock has some benefit.

And I really like the idea of using #[cached] with functions.
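
For illustration only, here is a coarse sketch of the "locked" behavior in question, using a plain Mutex-guarded map (assuming the once_cell crate) so a second caller waits for the first computation instead of recomputing it. Note this serializes all callers, not just callers with the same key, which is exactly the trade-off proxy_cache_lock-style behavior has to manage:

use std::collections::HashMap;
use std::sync::Mutex;
use once_cell::sync::Lazy;

static SLOW_CACHE: Lazy<Mutex<HashMap<u64, u64>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

// Holding the lock across the computation means a concurrent caller waits
// for the first result instead of recomputing it; the trade-off is that
// *all* callers are serialized, not just callers with the same key.
fn slow_locked(n: u64) -> u64 {
    let mut cache = SLOW_CACHE.lock().unwrap();
    if let Some(v) = cache.get(&n) {
        return *v;
    }
    let result = n * 2; // stand-in for the long-running computation
    cache.insert(n, result);
    result
}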

Ability to clear/defeat cache stores

I was wondering if there are plans to add functionality to clear caches programmatically, or to be able to force recomputation / defeat the cache on certain calls?

I'd love to contribute as I can, but I wanted to make sure this fit within the scope of what this crate intends to do. :)
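
For what it's worth, the proc macro already exposes the cache as an all-caps static named after the function, so something like the following should be possible today (a sketch; the exact generated name and locking differ between sync and async functions):

use cached::proc_macro::cached;
use cached::Cached;

#[cached(size = 50)]
fn fib(n: u64) -> u64 {
    if n == 0 || n == 1 { return n; }
    fib(n - 1) + fib(n - 2)
}

fn reset_fib_cache() {
    // FIB is the static generated by the macro for `fib` (sync functions
    // are wrapped in a std::sync::Mutex).
    FIB.lock().unwrap().cache_clear();
}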

Feature suggestion: Auto-refresh when the remaining TTL drops below a certain threshold

Use case: Imagine your hot call always takes around 1-2s and you want to cache it with a TTL of 60s. Since you never want your calling functions to wait 1-2s, the cache should auto-refresh when the remaining TTL drops below 10s.

Such a feature would make the lib complete for me. Please bear with me if that's already possible. I'm happy to hear any hints for implementing this on top of the lib, even if I might not have the resources to do so myself.

Examples: cache invalidation

I just skimmed through the docs, so bear with me:

I have two functions, one is cached, and another one might invalidate the cache or some key in it. There's no example of how to get ahold of the cache in the second function, in order to be able to call methods like cache_remove or cache_clear.

The second function is not cached, it just invalidates the cache, and there can be an infinite number of functions which need to invalidate a cache.

Other than this missing documentation, looks like an awesome crate. Thank you!
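
A hedged sketch of what such an example could look like, assuming the proc macro with an explicit key type so that a second, uncached function can remove entries:

use cached::proc_macro::cached;
use cached::Cached;

#[cached(size = 100, key = "String", convert = r#"{ name.clone() }"#)]
fn lookup_user(name: String) -> u64 {
    name.len() as u64 // stand-in for the real lookup
}

// A second, uncached function can invalidate individual entries or everything.
fn invalidate_user(name: &str) {
    // LOOKUP_USER is the cache static generated by the macro for `lookup_user`.
    let mut cache = LOOKUP_USER.lock().unwrap();
    cache.cache_remove(&name.to_string());
}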

Ignore some arguments

Is there any way of ignoring some of the arguments?

Imagine I have a method to get one user from the database. And I would like to cache the user.
So this method may need two params, DB connection pool and the user ID.
In my case, I do not care about connection pool uniqueness and would like to ignore it!

pub fn get_user(conn: &PgConnection, user_id: &Uuid) -> MyResult<User>

My suggestion is to have something like:

#[cached(time = 600, ignore = "conn,another_param")]
pub fn get_user(conn: &PgConnection, user_id: &Uuid, another_param: String) -> MyResult<User>
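
For reference, something close to this already seems achievable with the key/convert attributes, which build the cache key from user_id alone so conn never participates (a sketch with stand-in types; exact attribute syntax may vary by version):

use cached::proc_macro::cached;

// Sketch: build the cache key from `user_id` only, so `conn` is effectively
// ignored. The connection type here is a stand-in.
struct Conn;

#[cached(time = 600, key = "u64", convert = r#"{ user_id }"#)]
fn get_user(conn: &Conn, user_id: u64) -> String {
    let _ = conn; // the connection is used for the real query, not for the key
    format!("user-{user_id}")
}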

SizedCache cache_get is O(n)

    fn cache_get(&mut self, key: &K) -> Option<&V> {
        let val = self.store.get(key);
        match val {
            Some(slot) => {
                // if there's something in `self.store`, then `self.order`
                // cannot be empty, and `key` must be present
                let index = self.order.iter().enumerate()
                                .find(|&(_, e)| { key == e })
                                .expect("SizedCache::cache_get key not found in ordering").0;
                let mut tail = self.order.split_off(index);
                let used = tail.pop_front().expect("SizedCache::cache_get ordering is empty");
                self.order.push_front(used);
                self.order.append(&mut tail);
                self.hits += 1;
                Some(slot.get().expect("SizedCache::cache_get slots should never be empty"))
            }
            None => {
                self.misses += 1;
                None
            }
        }
    }

This linear scan of self.order makes SizedCache insanely slow with a large cache size.

Variables used inside cached functions lint as unused?

use anyhow::Result;
use cached::{cached_key_result, SizedCache};

const MAX_CACHE_SIZE: usize = 5;
const A_CONSTANT: u8 = 10;

cached_key_result! {
	MY_FUNCTION: SizedCache<String, u8> = SizedCache::with_size(
		MAX_CACHE_SIZE
	);
	Key = {format!("{}{}", a, b)};

	fn my_function(a: &str, b: &str) -> Result<u8> = {
		Ok(A_CONSTANT)
	}
}

This is a small repro of the issue I'm having, although it seems to be easily replicable. When testing this code, the compiler warns `constant is never used: MAX_CACHE_SIZE` and `constant is never used: A_CONSTANT`.

Is this expected behavior? Should I be taking extra steps to prevent this? Is this the linter's fault? Is it a problem with the library? Could it be fixed?

Add ability to store cache to disk

I saw #11 and I think it would be good to add this functionality.

Some prior art that I'm aware of is https://github.com/brmscheiner/memorize.py, a Python library for memoization that supports storing the cache on disk. Some details about that library:

  • the filename of the cache for a given function is <filename-of-function>_<function-name>.cache
  • the folder of the cache file is either the current directory or the directory of the source file of the memoized function, and library users can specify which option
  • the file is written to on each function call
  • the file is in the pickle format

IMO, I think some better options would be to:

  • store the cache files in the platform-specific application cache folder, so on Linux that would be $XDG_CACHE_HOME/<appname> or ~/.cache/<appname> by default
  • write to the cache file on program exit if possible. The issue to consider with this approach is that it probably wouldn't be possible to write to the file if the program crashes/panics
  • there is a Rust pickle library, but I don't know if that would be the best choice :D. Using JSON might be difficult since you can't use tuples as object keys. The full list of formats serde supports is here for some alternatives: https://serde.rs/#data-formats

Let me know what you think!

edit: So I noticed you can specify a key format, so that would solve the json tuple issue actually. Definitely a nice feature!

edit2: It would probably be good to store the caches in a subdirectory of ~/.cache/<appname>, like in a directory called memoized or something.
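
To make the idea concrete, a very small sketch of the persistence side, assuming serde_json plus the dirs crate for the platform cache directory (file names and layout here are purely illustrative):

use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;

// Illustrative sketch: persist a simple HashMap-backed cache as JSON in the
// platform cache directory, e.g. ~/.cache/<appname>/memoized/fib.cache.
fn cache_file(app: &str, func: &str) -> Option<PathBuf> {
    let dir = dirs::cache_dir()?.join(app).join("memoized");
    fs::create_dir_all(&dir).ok()?;
    Some(dir.join(format!("{func}.cache")))
}

fn save_cache(app: &str, func: &str, cache: &HashMap<String, u64>) {
    if let Some(path) = cache_file(app, func) {
        if let Ok(json) = serde_json::to_string(cache) {
            let _ = fs::write(path, json);
        }
    }
}

fn load_cache(app: &str, func: &str) -> HashMap<String, u64> {
    cache_file(app, func)
        .and_then(|p| fs::read_to_string(p).ok())
        .and_then(|s| serde_json::from_str(&s).ok())
        .unwrap_or_default()
}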

Support TTL for the cache

Is there any plan to support TTL for the cached value in the macro?

#[cached(size=100, ttl=10)]
fn keyed(a: String, b: String) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}

PS: Or do we already have this feature?!
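
For what it's worth, the proc macro accepts a time attribute (TTL in seconds) rather than ttl; a sketch based on the example above (whether it can be combined with size depends on the crate version, see the last issue below):

use cached::proc_macro::cached;
use std::thread::sleep;
use std::time::Duration;

// `time` is the TTL in seconds for each cached value.
#[cached(time = 10)]
fn keyed(a: String, b: String) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}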

lighter async mutex dependency

I was surprised to see this not only depends on async-std, but also exposes it as public API. It looks like you're only using its async Mutex type, though? Maybe you could switch to async-mutex, which is way smaller and by the same author.

Document how this should work on floats?

Perhaps it's just a part of the language, but if I want to use

#[cached(time=60)]
async fn get_alerts(config: &cmdline::Config) -> Result<Vec<Alert>, reqwest::Error> {

I would expect that to work as is. The problem I'm having is that Config contains an f64, and f64 doesn't implement Hash. I think I get how to do this, but perhaps more could be done to help the user in this case? It's kind of a pain to have to write all this code just to say NaN == NaN.
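
One possible approach, sketched here and not necessarily the crate's recommendation, is to wrap the f64 in a newtype that implements Hash/Eq via the bit pattern, so NaN compares equal to NaN for caching purposes (the ordered-float crate packages the same idea):

use std::hash::{Hash, Hasher};

// Sketch: hash/compare an f64 by its bit pattern so it can be used in a
// cache key. This makes NaN equal to NaN, which is usually what you want
// for memoization.
#[derive(Clone, Copy, Debug)]
struct HashableF64(f64);

impl PartialEq for HashableF64 {
    fn eq(&self, other: &Self) -> bool {
        self.0.to_bits() == other.0.to_bits()
    }
}
impl Eq for HashableF64 {}

impl Hash for HashableF64 {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.to_bits().hash(state);
    }
}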

Cache member functions?

Hi.

I don't know why this isn't possible yet, and while searching the issues I did not find anything that answered my question: why is there no caching for member functions?

Consider:

pub struct Foo(i32, i32);

impl Foo {
    #[cached]
    pub fn get_first(&self) -> i32 { /* ... */ }
    #[cached]
    pub fn get_snd(&self) -> i32 { /* ... */ }
}

That's actually not that hard to cache, is it? Maybe I'm missing some important pieces here, I don't know...


That being said, one could easily craft a type which just calls (cached) free private functions, like so:

pub struct Foo(i32, i32);

impl Foo {
    pub fn get_first(&self) -> i32 { foo_get_first(self) }
    pub fn get_snd(&self) -> i32 { foo_get_snd(self) }
}

#[cached]
fn foo_get_first(f: &Foo) -> i32 { /* ... */ }
#[cached]
fn foo_get_snd(f: &Foo) -> i32 { /* ... */ }

right? That'd yield the same, AFAICS?

Proc macro based version

Do you have a plan to rebuild the macros based on the proc_macro?

Those are sweeter.

#[cached(Fib)]
fn fib(n: u64) -> u64 {
    if n == 0 || n == 1 {
        return n;
    }
    fib(n - 1) + fib(n - 2)
}

error[E0282]: type annotations needed -> this method call resolves to `std::option::Option<&V>`

I'm trying to implement a file-based cacher. This is my cache type MyCache. I'm trying to follow the README.


pub struct MyCache {
    pub dir: String,
}

type Key = (String, String);

impl MyCache {
    pub fn new() -> MyCache {
        MyCache { dir: "./my_cache".to_string() }
    }

    fn get_key_hash(&self, key: &Key) -> String {
        format!("{}__{}", key.0, key.1)
    }
}

impl<V: DeserializeOwned + Serialize> Cached<Key, V> for MyCache {
    fn cache_get(&mut self, k: &(String, String)) -> Option<&V> {
        let output = cacache::read_sync(self.dir, self.get_key_hash(k)).ok()?;

        let as_str = String::from_utf8(output).unwrap();
        let o: V = serde_json::from_str(&as_str).unwrap();

        return Some(&o);
    }

    fn cache_set(&mut self, k: Key, v: V) -> Option<V> {
        let vec = serde_json::to_vec(&v).unwrap();
        cacache::write_sync(self.dir, self.get_key_hash(&k), vec).unwrap();

        Some(v)
    }

    fn cache_get_mut(&mut self, k: &Key) -> Option<&mut V> {
        self.cache_get(k).map(|x: &V| (*x).borrow_mut())
    }

    fn cache_get_or_set_with<F: FnOnce() -> V>(&mut self, k: Key, f: F) -> &mut V {
        let gotten: Option<&V> = self.cache_get(&k);
        match gotten {
            Some(res) => res.borrow_mut(),
            None => {
                let new = f();
                self.cache_set(k, new).unwrap();
                new.borrow_mut()
            }
        }
    }

    fn cache_remove(&mut self, k: &Key) -> Option<V> {
        let v: Option<&V> = self.cache_get(k);
        cacache::remove_sync(self.dir, self.get_key_hash(k));
        let v = *v.unwrap();
        Some(v)
    }

    fn cache_clear(&mut self) {
        cacache::clear_sync(self.dir);
    }

    fn cache_reset(&mut self) {
        cacache::clear_sync(self.dir);
    }

    fn cache_size(&self) -> usize {
        todo!()
    }
}

When trying to create a cached function like this:


#[cached(
    type = "MyCache",
    create = "{ MyCache::new() }",
    convert = r#"{("ymdh".to_string(), bucket_name.to_string())}"#
)]
pub fn cached_get_folder_names_from_bucket_containing_ymdh_subfolders(
    g_cloud: Option<GoogleCloudInterface>,
    a_cloud: Option<AwsCloudInterface>,
    bucket_name: &str,
) -> Vec<String> {
    vec![]
}

I get


error[E0282]: type annotations needed
   --> qwe.rs:100:1
    |
100 | / #[cached(
101 | |     type = "MyCache",
102 | |     create = "{ MyCache::new() }",
103 | |     convert = r#"{("ymdh".to_string(), bucket_name.to_string())}"#
104 | | )]
    | |__^ this method call resolves to `std::option::Option<&V>`
    |
    = note: type must be known at this point
    = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

error: aborting due to previous error

For more information about this error, try `rustc --explain E0282`.

Compile error if function args are `mut`

This is a really cool macro!
I noticed that it doesn't work if we declare mut function args:

use cached::proc_macro::cached;

#[cached]
fn foo(mut x: i32) -> i32 {
    x += 1;
    x
}

This gives:

error: expected expression, found keyword `mut`
 --> src/bin/foo.rs:4:8
  |
4 | fn foo(mut x: i32) -> i32 {
  |        ^^^ expected expression

This works:

use cached::proc_macro::cached;

#[cached]
fn foo(x: i32) -> i32 {
    let mut x = x;
    x += 1;
    x
}

It's easy to work around but it feels like a parsing bug that the former doesn't work.

How to work with bigint?

I want to implement a big integer version of the Fibonacci sequence.

But I encountered the following error:

use cached::proc_macro::cached;
use num::BigUint;

#[cached(size = 255)]
fn fibonacci_u(n: &BigUint) -> BigUint {
    match n {
        BigUint::from(0) | BigUint::from(1) => n.clone(),
        _ => fibonacci_u(n - 1) + fibonacci_u(n - 2)
    }
}

fn main() {
    println!("{}", fibonacci_u(&BigUint::from(100usize)))
}
error[E0308]: mismatched types
 --> src\main.rs:4:1
  |
4 | #[cached(size = 255)]
  | ^^^^^^^^^^^^^^^^^^^^^ expected `&num::BigUint`, found struct `num::BigUint`
  |
  = note: expected reference `&&'static num::BigUint`
             found reference `&num::BigUint`
  = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0308]: mismatched types
 --> src\main.rs:4:1
  |
4 | #[cached(size = 255)]
  | ^^^^^^^^^^^^^^^^^^^^^
  | |
  | expected `&num::BigUint`, found struct `num::BigUint`
  | help: consider borrowing here: `&#[cached(size = 255)]`
  |
  = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)

I think that the cached size must always be usize; how can I solve this?

[dependencies]
num = "0.2"
cached = "0.18"
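
A sketch of one way this might be made to compile, assuming the macro wants an owned, hashable argument and noting that BigUint::from(0) is not a valid match pattern anyway (untested against this exact crate version):

use cached::proc_macro::cached;
use num::BigUint;
use num::One;

// Sketch: take an owned BigUint (cloning at the call sites) and compare with
// an if/else instead of matching on BigUint::from(..).
#[cached(size = 255)]
fn fibonacci_u(n: BigUint) -> BigUint {
    let one = BigUint::one();
    if n <= one {
        n
    } else {
        fibonacci_u(n.clone() - 1u32) + fibonacci_u(n - 2u32)
    }
}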

Would using RwLock instead of Mutex make sense?

hi,

I am working on a high-performance Smart IVR use case where we need to process massive amounts of audio data. The app is built on tokio.rs. We are also calling some REST APIs from the application which require a bearer token. Since this is an expiring token, we need to cache it. The original solution used an actor for token management, see https://tokio.rs/tokio/tutorial/shared-state#tasks-threads-and-contention

Basically we have a spawned task running a loop that accepts GetTokenRequest from a channel. Internally this component holds a simple cache and refreshes the token as needed. The overall design is rather awkward, so I was thinking about replacing the whole actor/token state manager with a cache that simply memoizes our get_token function (returning Result<String, Err>).

I have one doubt: you are using Mutex, not RwLock, which probably has some performance penalty.

Is there any specific reason why cached does not use RwLock instead, so that only updates would need to lock?

While using cached instead of a custom actor/token manager greatly simplifies our design, I am a little bit afraid of using a Mutex in a situation where we run thousands of transactions in parallel (each requiring retrieval of the bearer token). Would this not lead to contention (too many tasks trying to lock a single Mutex -> additional latency introduced in tokio.rs task processing)? Any advice here?

One important note: because of the nature of our app (smart IVR/voice chatbot), low latency is extremely important for us.

cached_result! fails to compile if no error is emitted from function with 'cannot infer type'

Hi,

I've been banging my head against the wall for a few hours trying to compile my initial test of the cached crate. I defined a cached_result!, but since I was just testing this out, I left out either the Err or the Ok case (basically an empty function). This kept breaking the compile with 'cannot infer type'. Now, a more seasoned Rust developer might figure this out quickly, but alas... I did not.

After finally compiling with -Z macro-backtrace, I was able to gain a better understanding of the problem:

64  | / macro_rules! cached_result {
65  | |     // Unfortunately it's impossible to infer the cache type because it's not the function return type
66  | |     ($cachename:ident : $cachetype:ty = $cacheinstance:expr ;
67  | |      fn $name:ident ($($arg:ident : $argtype:ty),*) -> $ret:ty = $body:expr) => {
...   |
79  | |             let val = (||$body)()?;
    | |                       ^^^^^^^^^^^^ cannot infer type
...   |
84  | |     };
85  | | }
    | |_- in this expansion of `cached_result!` (#1)
    | 
   ::: src/quakes.rs:173:1
    |
173 | / cached_result! {
174 | |     HISTORICAL: UnboundCache<i64, String> = UnboundCache::new();
175 | |     fn get_historical_quake_week(week: i64) -> Result<String, String> = {
176 | |         Ok("Ok".to_owned())
177 | |     }
178 | | }
    | |_- in this macro invocation (#1)

I mostly wanted to get this out there in case others stumble upon the same error. It might warrant a mention in the documentation that the _result macro requires both arms of the Result to be implemented in the function. Perhaps this could be fixed in the macro since the return type is supplied, but I'm not there yet myself.

Lifetimes not added to inner fn in proc macro

Given this example function

use cached::proc_macro::cached;
use cached::UnboundCache;

#[cached(
    type = "UnboundCache<usize, CreateMessage>",
    create = "{ UnboundCache::with_capacity(1) }",
    convert = r#"{ cmds.len() }"#
)]
fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
    CreateMessage::default()
}

Expands to

use cached::proc_macro::cached;
use cached::UnboundCache;
static GENERATE_HELP: ::cached::once_cell::sync::Lazy<
    std::sync::Mutex<UnboundCache<usize, CreateMessage>>,
> = ::cached::once_cell::sync::Lazy::new(|| {
    std::sync::Mutex::new({ UnboundCache::with_capacity(1) })
});
fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
    use cached::Cached;
    let key = { cmds.len() };
    {
        let mut cache = GENERATE_HELP.lock().unwrap();
        if let Some(result) = cache.cache_get(&key) {
            return result.clone();
        }
    }
    fn inner(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
        CreateMessage::default()
    }
    let result = inner(cmds);
    let mut cache = GENERATE_HELP.lock().unwrap();
    cache.cache_set(key, result.clone());
    result
}

Resulting in

error[E0261]: use of undeclared lifetime name `'a`
  --> src/<file>.rs:67:66
   |
66 | )]
   |   - help: consider introducing lifetime `'a` here: `<'a>`
67 | fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
   |                                                                  ^^ undeclared lifetime

Since the lifetime specifiers aren't copied into the inner func, we're not able to return types that require specifiers. The same problem exists with the cached! macro, except that it completely fails to parse the lifetimes instead of translating them wrong.

Share running futures

Hello,
Thanks for this awesome crate!

My team uses this crate to cache the results of async functions to prevent expensive IO/computations. We run processing in parallel, but caching only works for sequential requests. For example:

static COUNTER: AtomicI64 = AtomicI64::new(0); 

#[tokio::main]
async fn main() {
    let result: Vec<i64> = (0..=20).into_iter() // run 20 iterations
        .map(|_| process_input("test".to_string()))  // async processing
        .collect::<FuturesUnordered<_>>()
        .collect::<Vec<_>>().await;
    println!("actual: {:?}", result);
    println!("expected: {:?}", vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
}

#[cached]
async fn process_input(input: String) -> i64 {
    let data = COUNTER.fetch_add(1, Ordering::SeqCst);
    sleep(Duration::from_secs(2)).await; // simulate io/processing
    return data;
}

prints:

actual: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
expected: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

Is this desired behavior?

To workaround this problem we added wrapper function:

static COUNTER: AtomicI64 = AtomicI64::new(0);

#[tokio::main]
async fn main() {
    let result: Vec<i64> = (0..=20).into_iter() // run 20 iterations
        .map(|_| process_input_wrapper("test".to_string())) // async processing
        .collect::<FuturesUnordered<_>>()
        .collect::<Vec<_>>().await;
    println!("actual: {:?}", result);
    println!("expected: {:?}", vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
}

#[cached]
fn process_input_wrapper(input: String) -> Shared<
    Pin<Box<dyn Future<Output=i64> + std::marker::Send>>,
> {
    return process_input(input).boxed().shared();
}

async fn process_input(input: String) -> i64 {
    // heavy input processing
    let data = COUNTER.fetch_add(1, Ordering::SeqCst);
    sleep(Duration::from_secs(2)).await; // simulate io/processing
    return data;
}

prints:

actual: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
expected: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

What is the correct way to solve it? Is it possible to implement support for sharing running futures in this crate?

Cache impl functions

Is this even possible? Would love to be able to put cached functions inline with an impl rather than having to rely on an external implementation.

Feature request: When caching a Result or Option: soft and hard timeouts, to have the ability to return the expired cache value if the operation fails

In some cases it may be worthwhile to fall back to the cached but expired value if the function is not successful in computing the current value of the thing to be cached.

For example, a function retrieves some rarely changing value over the network and caches it for, say, 1 day. If the thing is requested after 24 hours and 1 minute, the cache is expired, so we attempt to retrieve it again. If the retrieval fails (because the network is down, the server is down, etc.), one may assume that the old cached value is probably still valid, and I'd like to have it returned. This would basically make the function infallible, though I'm not sure we can express that using the type system, since the function itself needs to be able to return a failure to communicate to the cache that it should return the cached value even though it is expired.

One step further, adding more flexibility, would be to add a "soft" and a "hard" timeout for the cached values. After the soft timeout the operation to calculate the value is attempted again, but if it fails the cached value is returned. After the hard timeout the cached value is not returned at all.
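
A sketch of the soft/hard-timeout control flow described above, written as a standalone wrapper to illustrate the idea (all names are made up; this is not an existing feature of the crate):

use std::time::{Duration, Instant};

struct Entry<V> {
    value: V,
    stored_at: Instant,
}

struct SoftHardCache<V> {
    entry: Option<Entry<V>>,
    soft: Duration,
    hard: Duration,
}

impl<V: Clone> SoftHardCache<V> {
    // Before the soft timeout: serve the cached value.
    // Between soft and hard: try to recompute, fall back to the stale value on error.
    // After the hard timeout: the recomputation must succeed.
    fn get_or_refresh<E>(&mut self, refresh: impl FnOnce() -> Result<V, E>) -> Result<V, E> {
        let age = self.entry.as_ref().map(|e| e.stored_at.elapsed());
        match age {
            Some(age) if age < self.soft => Ok(self.entry.as_ref().unwrap().value.clone()),
            Some(age) if age < self.hard => match refresh() {
                Ok(v) => {
                    self.entry = Some(Entry { value: v.clone(), stored_at: Instant::now() });
                    Ok(v)
                }
                // stale but within the hard timeout: fall back to the old value
                Err(_) => Ok(self.entry.as_ref().unwrap().value.clone()),
            },
            _ => {
                let v = refresh()?;
                self.entry = Some(Entry { value: v.clone(), stored_at: Instant::now() });
                Ok(v)
            }
        }
    }
}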

macro: Not working inside impl blocks

The cached! macro doesn't work inside a struct's impl block. (I haven't tried other impl blocks.)

Repro:

#[macro_use]
extern crate cached;

#[macro_use]
extern crate lazy_static;

cached! {
    FIB;
    fn fib(n: u64) -> u64 = {
        if n == 0 || n == 1 { return n }
        fib(n-1) + fib(n-2)
    }
}

struct Foo {}

impl Foo {
    cached! {
        ANOTHER_FIB;
        fn another_fib(n: u64) -> u64 = {
            if n == 0 || n == 1 { return n }
            another_fib(n-1) + another_fib(n-2)
        }
    }
}

fn main() {
    println!("fib(10): {}", fib(10));

    let _ = Foo {};
    println!("another_fib(10): {}", another_fib(10));
}

Error:

error: expected one of `async`, `const`, `crate`, `default`, `existential`, `extern`, `fn`, `pub`, `type`, or `unsafe`, found `struct`
  --> src/main.rs:18:5
   |
18 |        cached! {
   |   _____^
   |  |_____|
   | ||
19 | ||         ANOTHER_FIB;
20 | ||         fn another_fib(n: u64) -> u64 = {
21 | ||             if n == 0 || n == 1 { return n }
22 | ||             another_fib(n-1) + another_fib(n-2)
23 | ||         }
24 | ||     }
   | ||     ^
   | ||_____|
   | |______expected one of 10 possible tokens here
   |        unexpected token
   |
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

error: aborting due to previous error

Supporting references

Hi! I recently picked up cached and it enabled me to very quickly set up some caches for hot functions. I think this project is awesome.

As I began to optimize the rest of my codebase, it quickly became clear that cloning cached values was a bottleneck (mostly just the memory allocation, I was storing Vec<String>). After some thought, I realized that my use case could live off of references to the cached values.

I believe cached doesn't support this sort of behavior; is it something you would consider adding? I am happy to help with the implementation / flesh out the proposal a bit if it seems in scope for this project.

Thanks for building this in the first place!

Option::unwrap() on a None value in cache_set

We're caching some results of looking up data in S3, and every few days we get a panic in cached that poisons the internal mutex.

Our cached definition is this:

cached_key_result! {
    QUERY: SizedCache<String, Vec<Inventory>> = SizedCache::with_size(100);
    Key = { format!("{}/{}/{}", region, bucket, recording_id) };
    fn cached_query(region: &str, bucket: &str, recording_id: &str) -> Result<Vec<Inventory>> = {
        match do_query(region, bucket, recording_id) {
            Ok(v) => if v.is_empty() {
                Err(InventoryError::new(400, "No match"))
            } else {
                Ok(v)
            },
            Err(e) => Err(e)
        }
    }
}

The crash is this, where cached is at frame 9:

Sep 18 21:29:44: thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
Sep 18 21:29:44: note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Sep 18 21:29:44: stack backtrace:
Sep 18 21:29:44:    0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
Sep 18 21:29:44:              at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
Sep 18 21:29:44:    1: std::sys_common::backtrace::print
Sep 18 21:29:44:              at libstd/sys_common/backtrace.rs:71
Sep 18 21:29:44:              at libstd/sys_common/backtrace.rs:59
Sep 18 21:29:44:    2: std::panicking::default_hook::{{closure}}
Sep 18 21:29:44:              at libstd/panicking.rs:211
Sep 18 21:29:44:    3: std::panicking::default_hook
Sep 18 21:29:44:              at libstd/panicking.rs:227
Sep 18 21:29:44:    4: std::panicking::rust_panic_with_hook
Sep 18 21:29:44:              at libstd/panicking.rs:511
Sep 18 21:29:44:    5: std::panicking::continue_panic_fmt
Sep 18 21:29:44:              at libstd/panicking.rs:426
Sep 18 21:29:44:    6: rust_begin_unwind
Sep 18 21:29:44:              at libstd/panicking.rs:337
Sep 18 21:29:44:    7: core::panicking::panic_fmt
Sep 18 21:29:44:              at libcore/panicking.rs:92
Sep 18 21:29:44:    8: core::panicking::panic
Sep 18 21:29:44:              at libcore/panicking.rs:53
Sep 18 21:29:44:    9: <cached::stores::SizedCache<K, V> as cached::Cached<K, V>>::cache_set
Sep 18 21:29:44:   10: recoordinator::inventory::s3::query
Sep 18 21:29:44:   11: recoordinator::inventory::s3::query_one
Sep 18 21:29:44:   12: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &'a mut F>::call_once
Sep 18 21:29:44:   13: <&'a mut I as core::iter::iterator::Iterator>::next
Sep 18 21:29:44:   14: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T, I>>::from_iter
Sep 18 21:29:44:   15: recoordinator::reel::to_desc::reel_to_desc
Sep 18 21:29:44:   16: recoordinator::dispatch
Sep 18 21:29:44:   17: std::panicking::try::do_call
Sep 18 21:29:44:   18: __rust_maybe_catch_panic
Sep 18 21:29:44:              at libpanic_unwind/lib.rs:105

This is probably from one of these two lines:

cached/src/stores.rs

Lines 124 to 125 in 0ed46ce

let lru_key = self.order.pop_back().unwrap();
self.store.remove(&lru_key).unwrap();

I guess one of these unwrap() assumptions doesn't hold true for us?

Can we get rid of the std::mem::replace warning?

My compiler keeps complaining about:

std::mem::replace(self, HashMap::new());

warning: unused return value of `std::mem::replace` that must be used
   --> src/stores.rs:630:9
    |
630 |         std::mem::replace(self, HashMap::new());
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: `#[warn(unused_must_use)]` on by default
    = note: if you don't need the old value, you can just assign the new value directly

warning: 1 warning emitted

Is there a reason for using std::mem::replace instead of just *self = HashMap::new();?

For convenience, incoming PR

Need to serialize/deserialize the cache

I need to serialize the cache:

  1. There is currently no way to iterate with the Cached trait.
  2. Without GATs I don't think we could have an associated Iter type on Cached; maybe it's possible, I don't know.
  3. We could implement Serialize/Deserialize from serde for SizedCache & Co. This requires some work and has some cons, but it would allow keeping time (not very useful for the timed cache) and order information.
  4. We could add a method on Cached that returns a HashMap of the cache, consuming self; not all implementations could do it without allocating a new HashMap, and we would lose information about time or order.

Naming on Cached trait

All methods on the Cached trait are named cache_*, which is very redundant. I think we should remove this noise.

Dynamic TimedCache

First off, thank you for an awesome crate!
I'm trying to do something like the following:

pub fn fn_name(some_parameters, time: u64) -> Result<> {
 cached_key_result! {
            FN_NAME: TimedCache<> = TimedCache::with_lifespan_and_capacity(time, 10);
           fn inner(same_paramters_as_above) -> Result<> {
           ...code...
           }
  }
  inner(original_parameters)
}

But when I do, I get compiler errors that time isn't a const. I tried writing code to turn time into a const, but that never seemed to work out. So I'm wondering if it's possible to have a more dynamic timer for TimedCache.

Thank you in advance.

Cached key result async not working

Is it possible to have an async function with the cached_key_result! macro?

I cannot get it to work... here's a basic example:

cached_key_result! {
    CACHE: SizedCache<String, String> = SizedCache::with_size(100);
    Key = { id.to_owned() };
    async fn foo(id: String) -> Result<String, &'static dyn std::error::Error> = {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok("Hey".to_string())
    }
}

It doesn't work with the async keyword there, but when you remove the async keyword (and the use of tokio sleep), it works.

How to approach generics?

Since apparently I can't really use cached with generic functions (let's say fn add_one<T: One + Add<Output=T>>(n: T) -> T { n + T::one() }), how would I proceed to make a cache for a specific type that's valid as said generic (let's use usize)?
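
One straightforward option, sketched here rather than taken from the crate's docs, is to keep the generic function uncached and cache a thin monomorphic wrapper for the concrete type:

use cached::proc_macro::cached;
use num::One;
use std::ops::Add;

// The generic function stays uncached.
fn add_one<T: One + Add<Output = T>>(n: T) -> T {
    n + T::one()
}

// Cache only the usize instantiation via a monomorphic wrapper.
#[cached]
fn add_one_usize(n: usize) -> usize {
    add_one(n)
}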

macro: `pub fn` is not supported

The macro doesn't let me assign privacy to the function.

error: no rules expected the token `pub`
  --> src/...
   |
24 |         pub fn get(a_keyword: &str) -> Option<Self> = {
   |         ^^^

cache_proc_macro version was not increased together with 0.21.0 release

It seems the cache_proc_macro module version was not increased and therefore does not contain the necessary changes regarding timed caches.

#[cached(size=10, time=10)]

does not compile, printing the following error:

error: custom attribute panicked
   --> src/repo/mod.rs:135:1
    |
135 | #[cached(size=10, time=10)]
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = help: message: cache types (unbound, size, time, or type and create) are mutually exclusive
