jaemk / cached
Rust cache structures and easy function memoization
License: MIT License
Hey y'all, I have been using this crate for a few days now and noticed that there is a time property which can be passed to give the cached values a certain time to live before they are removed. Now my feature request: add an option (maybe called reset_time, idk) which accepts a bool and, if it's true, resets the time whenever the same key is accessed while the value is still in the cache.
An adaptive replacement cache keeps both the most recently used entries (L1) and the most frequently used entries (L2) in cache. The cache also keeps lists of recently evicted entries (the ghost lists G1 and G2). On a cache miss it checks whether the entry is in G1 or G2 and adapts the target ratio of L1/L2 entries accordingly. When evicting, it decides between L1 and L2 by comparing the current ratio to the target ratio.
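A minimal sketch of that adaptation and eviction step (illustrative only, deliberately simplified from the real ARC algorithm; `ArcSketch` is a made-up name):

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative sketch of the adaptation described above, not full ARC:
// `p` is the target size of L1; hits in the ghost lists G1/G2 nudge `p`,
// and eviction picks L1 or L2 by comparing against `p`.
struct ArcSketch {
    l1: VecDeque<u64>, // recently used once
    l2: VecDeque<u64>, // used at least twice
    g1: HashSet<u64>,  // recently evicted from L1
    g2: HashSet<u64>,  // recently evicted from L2
    p: usize,          // target L1 size
    cap: usize,
}

impl ArcSketch {
    fn new(cap: usize) -> Self {
        Self {
            l1: VecDeque::new(),
            l2: VecDeque::new(),
            g1: HashSet::new(),
            g2: HashSet::new(),
            p: cap / 2,
            cap,
        }
    }

    fn on_miss(&mut self, key: u64) {
        // Adapt: a ghost hit means this key was evicted too early from that side.
        if self.g1.remove(&key) {
            self.p = (self.p + 1).min(self.cap); // favor L1
        } else if self.g2.remove(&key) {
            self.p = self.p.saturating_sub(1); // favor L2
        }
        if self.l1.len() + self.l2.len() >= self.cap {
            self.evict();
        }
        self.l1.push_back(key);
    }

    fn evict(&mut self) {
        // Evict from whichever list exceeds its target share.
        if self.l1.len() > self.p {
            if let Some(k) = self.l1.pop_front() {
                self.g1.insert(k);
            }
        } else if let Some(k) = self.l2.pop_front() {
            self.g2.insert(k);
        }
    }
}
```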
I suggest supporting network-based cache software like Redis, Memcached, and so on.
In a k8s environment, where you may have several small pods working at the same time, caching locally is not helpful enough and will increase memory usage, which may end in an OOM kill; having a shared place to cache the data is helpful.
If you agree, I can start trying to do it.
By adding a new store type, RedisCache, which I hope will be similar to TimedCache and UnboundCache. To support this, I think the return type of the function must implement Serialize and Deserialize.
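To see why those bounds are needed: a network-backed store only holds strings/bytes, so every value must round-trip through an encoder, and a lookup rebuilds an owned value rather than handing out a reference. A dependency-free sketch of that shape (plain function pointers stand in for serde's Serialize/DeserializeOwned, a HashMap stands in for the Redis connection, and `NetworkCacheSketch` is a made-up name):

```rust
use std::collections::HashMap;

// Sketch of why a network-backed store forces (de)serialization bounds.
// A real RedisCache would bound V by serde's Serialize + DeserializeOwned;
// here explicit codec functions play that role.
struct NetworkCacheSketch<V> {
    backend: HashMap<String, String>, // stand-in for the Redis connection
    encode: fn(&V) -> String,
    decode: fn(&str) -> V,
}

impl<V> NetworkCacheSketch<V> {
    fn set(&mut self, key: &str, val: &V) {
        self.backend.insert(key.to_string(), (self.encode)(val));
    }

    // Note: returns an owned V, not &V. The value is rebuilt from the wire
    // format, which is also why a trait centered on `&V` getters is awkward
    // for network stores.
    fn get(&self, key: &str) -> Option<V> {
        self.backend.get(key).map(|s| (self.decode)(s))
    }
}
```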
I'm getting the following error:
#[cached::proc_macro::cached(time = 240, result = true)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime `'static` required
72 | pub fn get_data(data1: &str, data2: &str) -> Result<Vec<MyStruct>, reqwest::Error> {
| ---- help: add explicit lifetime `'static` to the type of `data1`: `&'static str`
|
Why is it suggesting a static lifetime? Adding a static lifetime propagates the static requirement up the call chain, until I finally get
error[E0759]: `data1` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
How can I fix this?
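The underlying reason for the suggestion is that the generated cache is a global that outlives every call, so it must own its keys; a `&str` key would have to live forever. Building an owned `String` key (which the macro's `key`/`convert` options let you do) removes the requirement. A stdlib-only sketch of that constraint, with a hypothetical `get_data_len` standing in for the real function:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// The cache is a static, so it outlives every call and must own its keys.
static CACHE: OnceLock<Mutex<HashMap<String, usize>>> = OnceLock::new();

fn get_data_len(data1: &str, data2: &str) -> usize {
    let cache = CACHE.get_or_init(|| Mutex::new(HashMap::new()));
    // Owned key built from the borrowed args: no 'static requirement needed.
    let key = format!("{}{}", data1, data2);
    let mut guard = cache.lock().unwrap();
    *guard.entry(key).or_insert_with(|| data1.len() + data2.len())
}
```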
What's the idiomatic way to express a cache on a fn with parameters but without a "key"?
#[cached(time = 5, key = "&str", convert = r#"{ "" }"#)]
async fn cached_health(db_pool: DBPool) -> Result<(), Error> {
The above works but it looks a bit dirty.
The function-cache is not locked for the duration of the function's execution, so initial (on an empty cache) concurrent calls of long-running functions with the same arguments will each execute fully and each overwrite the memoized value as they complete. This mirrors the behavior of Python's functools.lru_cache.
I'm guessing it's out of scope of this crate, but I'm asking just in case.
I read The Benefits of Microcaching with NGINX and it seems using proxy_cache_lock has some benefit. And I really like the idea of using #[cached] with functions.
Do I understand correctly that this library is in-memory only, and caching to disk (i.e. to persist things between program launches) is not supported? If so, could you please note that in the project description? Thanks!
I was wondering if there are plans to add functionality to clear caches programmatically, or to be able to force recomputation/defeat the cache on certain calls?
I'd love to contribute as I can, but I wanted to make sure this fit within the scope of what this crate intends to do. :)
Use case: Imagine your hot call always takes around 1-2s and you want to cache it with a TTL of 60s. As you never want your calling functions to wait 1-2s the cache should auto-refresh when the remaining TTL drops below 10s.
Such a feature would make the lib complete for me. Please bear with me if that's already possible. I'm happy to hear any hints for implementing that on top of the lib, even if I might not have the resources to do so.
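The refresh-ahead check itself is simple to sketch (hypothetical API, not something the crate offers today): serve the cached value immediately and signal that a background refresh should start once the remaining TTL drops below a margin.

```rust
use std::time::{Duration, Instant};

struct Entry<V> {
    value: V,
    expires_at: Instant,
}

// Returns (value, needs_refresh): callers always get the cached value
// instantly, and kick off a recompute when `needs_refresh` is true.
fn get_refresh_ahead<V: Clone>(
    entry: &Entry<V>,
    now: Instant,
    refresh_margin: Duration,
) -> (V, bool) {
    let remaining = entry.expires_at.saturating_duration_since(now);
    (entry.value.clone(), remaining < refresh_margin)
}
```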
I just skimmed through the docs, so bear with me:
I have two functions: one is cached, and another one might invalidate the cache or some key in it. There's no example of how to get hold of the cache in the second function, in order to be able to call methods like cache_remove or cache_clear. The second function is not cached; it just invalidates the cache, and there can be any number of functions which need to invalidate a cache.
Other than this missing documentation, this looks like an awesome crate. Thank you!
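For reference, #[cached] generates a static named after the function (uppercased, e.g. GENERATE_HELP for generate_help), and any function in scope can lock it and call cache_remove or cache_clear on it. A stdlib-only sketch of the same pattern, with a plain HashMap standing in for the generated cache:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Stand-in for the static that #[cached] would generate for `expensive`.
static EXPENSIVE: OnceLock<Mutex<HashMap<String, u64>>> = OnceLock::new();

fn cache() -> &'static Mutex<HashMap<String, u64>> {
    EXPENSIVE.get_or_init(|| Mutex::new(HashMap::new()))
}

// The "cached" function: computes on a miss, serves the map on a hit.
fn expensive(key: &str) -> u64 {
    let mut guard = cache().lock().unwrap();
    *guard.entry(key.to_string()).or_insert_with(|| key.len() as u64)
}

// Not cached itself; it only invalidates entries, analogous to locking
// the generated static and calling cache_remove on it.
fn invalidate(key: &str) {
    cache().lock().unwrap().remove(key);
}
```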
Is there any way of ignoring some of the arguments?
Imagine I have a method to get one user from the database. And I would like to cache the user.
So this method may need two params, DB connection pool and the user ID.
In my case, I do not care about connection pool uniqueness and would like to ignore it!
pub fn get_user(conn: &PgConnection, user_id: &Uuid) -> MyResult<User>
My suggestion is to have something like:
#[cached(time = 600, ignore = "conn,another_param")]
pub fn get_user(conn: &PgConnection, user_id: &Uuid, another_param: String) -> MyResult<User>
fn cache_get(&mut self, key: &K) -> Option<&V> {
    let val = self.store.get(key);
    match val {
        Some(slot) => {
            // if there's something in `self.store`, then `self.order`
            // cannot be empty, and `key` must be present
            let index = self.order.iter().enumerate()
                .find(|&(_, e)| { key == e })
                .expect("SizedCache::cache_get key not found in ordering").0;
            let mut tail = self.order.split_off(index);
            let used = tail.pop_front().expect("SizedCache::cache_get ordering is empty");
            self.order.push_front(used);
            self.order.append(&mut tail);
            self.hits += 1;
            Some(slot.get().expect("SizedCache::cache_get slots should never be empty"))
        }
        None => {
            self.misses += 1;
            None
        }
    }
}
This is terrible: SizedCache is insanely slow with a large cache size, because every hit does a linear scan of the ordering list.
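One cheap improvement over that O(n)-per-hit scan (a sketch, not the crate's code) is to stamp entries with a recency counter: hits become O(1), and only eviction scans the map, and only when inserting at capacity.

```rust
use std::collections::HashMap;

// Recency stamps instead of an ordering list: `get` is O(1); the O(cap)
// scan happens only on eviction, i.e. on an insert at capacity.
struct StampedLru<K, V> {
    map: HashMap<K, (V, u64)>,
    tick: u64,
    cap: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> StampedLru<K, V> {
    fn new(cap: usize) -> Self {
        Self { map: HashMap::new(), tick: 0, cap }
    }

    fn get(&mut self, k: &K) -> Option<&V> {
        self.tick += 1;
        let tick = self.tick;
        self.map.get_mut(k).map(|(v, t)| {
            *t = tick; // O(1) recency update, no list manipulation
            &*v
        })
    }

    fn put(&mut self, k: K, v: V) {
        self.tick += 1;
        if self.map.len() >= self.cap && !self.map.contains_key(&k) {
            // Evict the least recently stamped entry.
            if let Some(oldest) = self
                .map
                .iter()
                .min_by_key(|(_, (_, t))| *t)
                .map(|(key, _)| key.clone())
            {
                self.map.remove(&oldest);
            }
        }
        self.map.insert(k, (v, self.tick));
    }
}
```

A fully O(1) design would use a HashMap plus an intrusive linked list (the classic LRU layout), but this already removes the per-hit scan shown above.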
use anyhow::Result;
use cached::{cached_key_result, SizedCache};

const MAX_CACHE_SIZE: usize = 5;
const A_CONSTANT: u8 = 10;

cached_key_result! {
    MY_FUNCTION: SizedCache<String, u8> = SizedCache::with_size(
        MAX_CACHE_SIZE
    );
    Key = { format!("{}{}", a, b) };
    fn my_function(a: &str, b: &str) -> Result<u8> = {
        Ok(A_CONSTANT)
    }
}
This is a small repro of the issue I'm having, although it seems to be easily replicable. When testing this code, it will say constant is never used: MAX_CACHE_SIZE and constant is never used: A_CONSTANT.
Is this expected behavior? Should I be taking extra steps to prevent this? Is this the linter's fault? Is it a problem with the library? Could it be fixed?
I saw #11 and I think it would be good to add this functionality.
Some prior art that I'm aware of includes https://github.com/brmscheiner/memorize.py, a Python library for memoization that supports storing the cache on disk. That library stores each cache in a file named:
<filename-of-function>_<function-name>.cache
I think a better option would be to store caches under $XDG_CACHE_HOME/<appname> or ~/.cache/<appname> by default. Let me know what you think!
edit: So I noticed you can specify a key format, so that would solve the json tuple issue actually. Definitely a nice feature!
edit2: It would probably be good to store the caches in a subdirectory of ~/.cache/<appname>, like in a directory called memoized or something.
Is there any plan to support TTL for the cached value in the macro?
#[cached(size=100, ttl=10)]
fn keyed(a: String, b: String) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}
PS: Or do we already have this feature?!
The documentation should probably include information about the behavior of the cache under concurrent access to it.
I was surprised to see this not only depends on async-std, but also exposes it as a public API. It looks like you're only using its async Mutex type, though? Maybe you could switch to async-mutex, which is much smaller and by the same author.
Perhaps it's just a part of the language, but if I want to use
#[cached(time=60)]
async fn get_alerts(config: &cmdline::Config) -> Result<Vec<Alert>, reqwest::Error> {
I would expect that to work as is. The problem I'm having is that Config there contains an f64, and f64 doesn't implement Hash. I think I get how to do this, but perhaps more could be done to help the user in this case? It's kind of a pain to have to write all this code to say NaN = NaN.
And a TimedSizedCache combining the functionality of the Sized and Timed caches.
Hi.
I don't know why this isn't possible yet, and while searching the issues I did not find something that answered my questions: Why is there no caching for member functions?
Consider:
pub struct Foo(i32, i32);

impl Foo {
    #[cached]
    pub fn get_first(&self) -> i32 { /* ... */ }

    #[cached]
    pub fn get_snd(&self) -> i32 { /* ... */ }
}
That's actually not that hard to cache, is it? Maybe I'm missing some important pieces here, I don't know...
That being said, one could easily craft a type which just calls (cached) free private functions, like so:
pub struct Foo(i32, i32);

impl Foo {
    pub fn get_first(&self) -> i32 { foo_get_first(self) }
    pub fn get_snd(&self) -> i32 { foo_get_snd(self) }
}

#[cached]
fn foo_get_first(f: &Foo) -> i32 { /* ... */ }

#[cached]
fn foo_get_snd(f: &Foo) -> i32 { /* ... */ }
right? That'd yield the same, AFAICS?
or even a tuple of references to the key and value.
I'm currently doing a set and a get, computing the hash and searching the map twice for something that could take only one step.
Do you have a plan to rebuild the macros based on the proc_macro?
Those are sweeter.
#[cached(Fib)]
fn fib(n: u64) -> u64 {
    if n == 0 || n == 1 {
        return n;
    }
    fib(n-1) + fib(n-2)
}
First off, thanks for this library. The payoff was immediate, as it doubled the performance of the geocoder I was using.
As my keys are relatively short (lat/long coordinate), I was wondering if cached can work with smartstring (https://docs.rs/smartstring/0.2.9/smartstring/) to further increase performance.
I'm trying to implement a file-based cacher. This is my cache type, MyCache; I'm trying to follow the README.
pub struct MyCache {
    pub dir: String,
}

type Key = (String, String);

impl MyCache {
    pub fn new() -> MyCache {
        MyCache { dir: "./my_cache".to_string() }
    }

    fn get_key_hash(&self, key: &Key) -> String {
        format!("{}__{}", key.0, key.1)
    }
}

impl<V: DeserializeOwned + Serialize> Cached<Key, V> for MyCache {
    fn cache_get(&mut self, k: &(String, String)) -> Option<&V> {
        let output = cacache::read_sync(self.dir, self.get_key_hash(k)).ok()?;
        let as_str = String::from_utf8(output).unwrap();
        let o: V = serde_json::from_str(&as_str).unwrap();
        return Some(&o);
    }

    fn cache_set(&mut self, k: Key, v: V) -> Option<V> {
        let vec = serde_json::to_vec(&v).unwrap();
        cacache::write_sync(self.dir, self.get_key_hash(&k), vec).unwrap();
        Some(v)
    }

    fn cache_get_mut(&mut self, k: &Key) -> Option<&mut V> {
        self.cache_get(k).map(|x: &V| (*x).borrow_mut())
    }

    fn cache_get_or_set_with<F: FnOnce() -> V>(&mut self, k: Key, f: F) -> &mut V {
        let gotten: Option<&V> = self.cache_get(&k);
        match gotten {
            Some(res) => res.borrow_mut(),
            None => {
                let new = f();
                self.cache_set(k, new).unwrap();
                new.borrow_mut()
            }
        }
    }

    fn cache_remove(&mut self, k: &Key) -> Option<V> {
        let v: Option<&V> = self.cache_get(k);
        cacache::remove_sync(self.dir, self.get_key_hash(k));
        let v = *v.unwrap();
        Some(v)
    }

    fn cache_clear(&mut self) {
        cacache::clear_sync(self.dir);
    }

    fn cache_reset(&mut self) {
        cacache::clear_sync(self.dir);
    }

    fn cache_size(&self) -> usize {
        todo!()
    }
}
When trying to create a cached function like this:
#[cached(
    type = "MyCache",
    create = "{ MyCache::new() }",
    convert = r#"{("ymdh".to_string(), bucket_name.to_string())}"#
)]
pub fn cached_get_folder_names_from_bucket_containing_ymdh_subfolders(
    g_cloud: Option<GoogleCloudInterface>,
    a_cloud: Option<AwsCloudInterface>,
    bucket_name: &str,
) -> Vec<String> {
    vec![]
}
I get
error[E0282]: type annotations needed
--> qwe.rs:100:1
|
100 | / #[cached(
101 | | type = "MyCache",
102 | | create = "{ MyCache::new() }",
103 | | convert = r#"{("ymdh".to_string(), bucket_name.to_string())}"#
104 | | )]
| |__^ this method call resolves to `std::option::Option<&V>`
|
= note: type must be known at this point
= note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: aborting due to previous error
For more information about this error, try `rustc --explain E0282`.
This is a really cool macro!
I noticed that it doesn't work if we declare mut function args:
use cached::proc_macro::cached;

#[cached]
fn foo(mut x: i32) -> i32 {
    x += 1;
    x
}
This gives:
error: expected expression, found keyword `mut`
--> src/bin/foo.rs:4:8
|
4 | fn foo(mut x: i32) -> i32 {
| ^^^ expected expression
This works:
use cached::proc_macro::cached;

#[cached]
fn foo(x: i32) -> i32 {
    let mut x = x;
    x += 1;
    x
}
It's easy to work around but it feels like a parsing bug that the former doesn't work.
Basically an unbound cache. By the way, any plan to add an entry API like std's?
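For comparison, this is what std's entry API buys on a plain HashMap: one hash/lookup for a combined get-or-insert instead of a get followed by an insert (`get_or_compute` is a hypothetical helper):

```rust
use std::collections::HashMap;

// One hash computation and one map traversal for the whole operation.
fn get_or_compute<'a>(
    cache: &'a mut HashMap<String, usize>,
    key: &str,
) -> &'a usize {
    cache
        .entry(key.to_string())
        .or_insert_with(|| key.len()) // the closure runs only on a miss
}
```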
I want to implement a big integer version of the Fibonacci sequence.
But I encountered the following error:
use cached::proc_macro::cached;
use num::BigUint;

#[cached(size = 255)]
fn fibonacci_u(n: &BigUint) -> BigUint {
    match n {
        BigUint::from(0) | BigUint::from(1) => n.clone(),
        _ => fibonacci_u(n - 1) + fibonacci_u(n - 2)
    }
}

fn main() {
    println!("{}", fibonacci_u(&BigUint::from(100usize)))
}
error[E0308]: mismatched types
--> src\main.rs:4:1
|
4 | #[cached(size = 255)]
| ^^^^^^^^^^^^^^^^^^^^^ expected `&num::BigUint`, found struct `num::BigUint`
|
= note: expected reference `&&'static num::BigUint`
found reference `&num::BigUint`
= note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0308]: mismatched types
--> src\main.rs:4:1
|
4 | #[cached(size = 255)]
| ^^^^^^^^^^^^^^^^^^^^^
| |
| expected `&num::BigUint`, found struct `num::BigUint`
| help: consider borrowing here: `&#[cached(size = 255)]`
|
= note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info)
I think that the cached size always has to be a usize; how can I solve this?
[dependencies]
num = "0.2"
cached = "0.18"
Hi,
I am working on a high-performance smart IVR use case where we need to process massive amounts of audio data. The app is built on tokio.rs. We are also calling some REST APIs from the application which require a bearer token. Since this is an expiring token, we need to cache it. The original solution used an actor for token management, see https://tokio.rs/tokio/tutorial/shared-state#tasks-threads-and-contention
Basically we have a spawned task running a loop that accepts GetTokenRequest from a channel. Internally this component holds a simple cache and refreshes the token as needed. The overall design is rather awkward, so I was thinking about replacing the whole actor/token state manager with a cache that simply memoizes our get_token function (returning Result<String, Err>).
I have one doubt: you are using Mutex, not RwLock, which probably has some performance penalty. Is there a specific reason cached does not use RwLock instead, so that only updates would lock?
While using cached instead of a custom actor/token manager greatly simplifies our design, I am a little bit afraid of using a Mutex in a situation where we run thousands of transactions in parallel (each requiring the bearer token). Would this not lead to contention (too many tasks trying to lock a single Mutex, introducing additional latency in tokio.rs task processing)? Any advice here?
One important note: because of the nature of our app (smart IVR/voice chatbot), low latency is extremely important for us.
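A sketch of the read-mostly layout the question points at (an assumption, not how cached works internally): readers share an RwLock read guard and only a refresh takes the write lock. Note this only works because the read path mutates nothing; cached's stores update hit counters and LRU order on every get, which is one reason they sit behind a Mutex.

```rust
use std::sync::RwLock;
use std::time::{Duration, Instant};

// Hypothetical token cache: many concurrent readers, rare writers.
struct TokenCache {
    inner: RwLock<Option<(String, Instant)>>, // (token, expires_at)
    ttl: Duration,
}

impl TokenCache {
    fn new(ttl: Duration) -> Self {
        Self { inner: RwLock::new(None), ttl }
    }

    fn get(&self, now: Instant, fetch: impl FnOnce() -> String) -> String {
        // Fast path: shared read lock, readers do not contend with each other.
        if let Some((tok, exp)) = &*self.inner.read().unwrap() {
            if *exp > now {
                return tok.clone();
            }
        }
        // Slow path: exclusive write lock, taken only to refresh the token.
        let mut guard = self.inner.write().unwrap();
        let tok = fetch();
        *guard = Some((tok.clone(), now + self.ttl));
        tok
    }
}
```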
Hi,
I've been banging my head against the wall for a few hours trying to compile my initial test of the cached crate. I defined a cached_result!, but since I was just testing this out, I left out either the Err or the Ok case (a basically empty function). This kept breaking the compile with 'cannot infer type'. Now, a more seasoned Rust developer might figure this out quickly, but alas... I did not.
After finally compiling with -Z macro-backtrace, I was able to gain a better understanding of the problem:
64 | / macro_rules! cached_result {
65 | | // Unfortunately it's impossible to infer the cache type because it's not the function return type
66 | | ($cachename:ident : $cachetype:ty = $cacheinstance:expr ;
67 | | fn $name:ident ($($arg:ident : $argtype:ty),*) -> $ret:ty = $body:expr) => {
... |
79 | | let val = (||$body)()?;
| | ^^^^^^^^^^^^ cannot infer type
... |
84 | | };
85 | | }
| |_- in this expansion of `cached_result!` (#1)
|
::: src/quakes.rs:173:1
|
173 | / cached_result! {
174 | | HISTORICAL: UnboundCache<i64, String> = UnboundCache::new();
175 | | fn get_historical_quake_week(week: i64) -> Result<String, String> = {
176 | | Ok("Ok".to_owned())
177 | | }
178 | | }
| |_- in this macro invocation (#1)
I mostly wanted to get this out there in case others stumble upon the same error; it might warrant a mention in the documentation that the _result macro requires both arms of the Result to be implemented in the function. Perhaps this could be fixed in the macro since the return type is supplied, but I'm not there yet myself.
Given this example function
use cached::proc_macro::cached;
use cached::UnboundCache;

#[cached(
    type = "UnboundCache<usize, CreateMessage>",
    create = "{ UnboundCache::with_capacity(1) }",
    convert = r#"{ cmds.len() }"#
)]
fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
    CreateMessage::default()
}
Expands to
use cached::proc_macro::cached;
use cached::UnboundCache;

static GENERATE_HELP: ::cached::once_cell::sync::Lazy<
    std::sync::Mutex<UnboundCache<usize, CreateMessage>>,
> = ::cached::once_cell::sync::Lazy::new(|| {
    std::sync::Mutex::new({ UnboundCache::with_capacity(1) })
});

fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
    use cached::Cached;
    let key = { cmds.len() };
    {
        let mut cache = GENERATE_HELP.lock().unwrap();
        if let Some(result) = cache.cache_get(&key) {
            return result.clone();
        }
    }
    fn inner(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
        CreateMessage::default()
    }
    let result = inner(cmds);
    let mut cache = GENERATE_HELP.lock().unwrap();
    cache.cache_set(key, result.clone());
    result
}
Resulting in
error[E0261]: use of undeclared lifetime name `'a`
--> src/<file>.rs:67:66
|
66 | )]
| - help: consider introducing lifetime `'a` here: `<'a>`
67 | fn generate_help<'a>(cmds: &[Arc<dyn Command>]) -> CreateMessage<'a> {
| ^^ undeclared lifetime
Since the lifetime specifiers aren't copied into the inner func, we're not able to return types that require specifiers. The same problem exists with the cached! macro, except that it completely fails to parse the lifetimes instead of translating them wrongly.
Hello,
Thanks for this awesome crate!
My team uses this crate to cache the results of async functions to avoid expensive io/computations. We run processing in parallel, but caching works only for sequential requests. For example:
static COUNTER: AtomicI64 = AtomicI64::new(0);

#[tokio::main]
async fn main() {
    let result: Vec<i64> = (0..=20).into_iter() // run 20 iterations
        .map(|_| process_input("test".to_string())) // async processing
        .collect::<FuturesUnordered<_>>()
        .collect::<Vec<_>>().await;
    println!("actual: {:?}", result);
    println!("expected: {:?}", vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
}

#[cached]
async fn process_input(input: String) -> i64 {
    let data = COUNTER.fetch_add(1, Ordering::SeqCst);
    sleep(Duration::from_secs(2)).await; // simulate io/processing
    return data;
}
prints:
actual: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
expected: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Is this desired behavior?
To work around this problem we added a wrapper function:
static COUNTER: AtomicI64 = AtomicI64::new(0);

#[tokio::main]
async fn main() {
    let result: Vec<i64> = (0..=20).into_iter() // run 20 iterations
        .map(|_| process_input_wrapper("test".to_string())) // async processing
        .collect::<FuturesUnordered<_>>()
        .collect::<Vec<_>>().await;
    println!("actual: {:?}", result);
    println!("expected: {:?}", vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
}

#[cached]
fn process_input_wrapper(input: String) -> Shared<
    Pin<Box<dyn Future<Output=i64> + std::marker::Send>>,
> {
    return process_input(input).boxed().shared();
}

async fn process_input(input: String) -> i64 {
    // heavy input processing
    let data = COUNTER.fetch_add(1, Ordering::SeqCst);
    sleep(Duration::from_secs(2)).await; // simulate io/processing
    return data;
}
prints:
actual: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
expected: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
What is the correct way to solve it? Is it possible to implement support for sharing running futures in this crate?
Is this even possible? Would love to be able to put cached functions inline with an impl rather than having to rely on an external implementation.
In some cases it may be worthwhile to fall back to the cached but expired value if the function does not succeed in computing the current value of the thing to be cached.
For example, a function retrieves some rarely changing value over the network and caches it for, say, 1 day. If the thing is requested after 24 hours and 1 minute, the cache is expired, so we attempt to retrieve it again. If the retrieval fails (because the network is down, the server is down, etc.), one may assume that the old cached value is probably still valid, and I'd like it to be returned. This would basically make the function infallible, though I'm not sure we can express that in the type system, as the function itself needs to be able to return a failure to tell the cache to return the cached value even though it is expired.
One step further and adding more flexibility would be to add a "soft" and a "hard" timeout for the cached values. After the soft timeout the operation to calculate the value is attempted again, but if it fails the cached value is returned. After the hard timeout the cached value is not returned at all.
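The soft/hard timeout idea can be sketched like this (hypothetical names, not crate API): before the soft deadline the value is served as-is; between soft and hard, a failed recompute falls back to the stale value; after hard, only a successful recompute will do.

```rust
use std::time::{Duration, Instant};

struct Entry<V> {
    value: V,
    stored_at: Instant,
}

// `now` is passed in explicitly to keep the logic deterministic.
fn get_with_fallback<V: Clone, E>(
    entry: &mut Entry<V>,
    now: Instant,
    soft: Duration,
    hard: Duration,
    recompute: impl FnOnce() -> Result<V, E>,
) -> Option<V> {
    let age = now.duration_since(entry.stored_at);
    if age < soft {
        return Some(entry.value.clone()); // still fresh: no recompute attempted
    }
    match recompute() {
        Ok(v) => {
            entry.value = v.clone();
            entry.stored_at = now;
            Some(v)
        }
        // Stale fallback is allowed only until the hard deadline.
        Err(_) if age < hard => Some(entry.value.clone()),
        Err(_) => None,
    }
}
```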
The cached! macro doesn't work inside a struct's impl block. (Haven't tried other impl blocks.)
Repro:
#[macro_use]
extern crate cached;
#[macro_use]
extern crate lazy_static;

cached! {
    FIB;
    fn fib(n: u64) -> u64 = {
        if n == 0 || n == 1 { return n }
        fib(n-1) + fib(n-2)
    }
}

struct Foo {}

impl Foo {
    cached! {
        ANOTHER_FIB;
        fn another_fib(n: u64) -> u64 = {
            if n == 0 || n == 1 { return n }
            another_fib(n-1) + another_fib(n-2)
        }
    }
}

fn main() {
    println!("fib(10): {}", fib(10));
    let _ = Foo {};
    println!("another_fib(10): {}", another_fib(10));
}
Error:
error: expected one of `async`, `const`, `crate`, `default`, `existential`, `extern`, `fn`, `pub`, `type`, or `unsafe`, found `struct`
--> src/main.rs:18:5
|
18 | cached! {
| _____^
| |_____|
| ||
19 | || ANOTHER_FIB;
20 | || fn another_fib(n: u64) -> u64 = {
21 | || if n == 0 || n == 1 { return n }
22 | || another_fib(n-1) + another_fib(n-2)
23 | || }
24 | || }
| || ^
| ||_____|
| |______expected one of 10 possible tokens here
| unexpected token
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
error: aborting due to previous error
Hi! I recently picked up cached and it enabled me to very quickly set up some caches for hot functions. I think this project is awesome.
As I began to optimize the rest of my codebase, it quickly became clear that cloning cached values was a bottleneck (mostly just the memory allocation; I was storing Vec<String>). After some thought, I realized that my use case could live off references to the cached values.
I believe that cached doesn't support this sort of behavior; is it something you would consider adding? I am happy to help with the implementation / flesh out the proposal a bit if it seems in scope for this project.
Thanks for building this in the first place!
Hey, is there a way to see if something comes from the cache or not? Like just a macro or an option in the macro?
We're caching some results of looking up data in S3, and every few days we get a panic in cached that poisons the internal mutex.
Our cached definition is this:
cached_key_result! {
    QUERY: SizedCache<String, Vec<Inventory>> = SizedCache::with_size(100);
    Key = { format!("{}/{}/{}", region, bucket, recording_id) };
    fn cached_query(region: &str, bucket: &str, recording_id: &str) -> Result<Vec<Inventory>> = {
        match do_query(region, bucket, recording_id) {
            Ok(v) => if v.is_empty() {
                Err(InventoryError::new(400, "No match"))
            } else {
                Ok(v)
            },
            Err(e) => Err(e)
        }
    }
}
The crash is this, where cached is at frame 9:
Sep 18 21:29:44: thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
Sep 18 21:29:44: note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Sep 18 21:29:44: stack backtrace:
Sep 18 21:29:44: 0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
Sep 18 21:29:44: at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
Sep 18 21:29:44: 1: std::sys_common::backtrace::print
Sep 18 21:29:44: at libstd/sys_common/backtrace.rs:71
Sep 18 21:29:44: at libstd/sys_common/backtrace.rs:59
Sep 18 21:29:44: 2: std::panicking::default_hook::{{closure}}
Sep 18 21:29:44: at libstd/panicking.rs:211
Sep 18 21:29:44: 3: std::panicking::default_hook
Sep 18 21:29:44: at libstd/panicking.rs:227
Sep 18 21:29:44: 4: std::panicking::rust_panic_with_hook
Sep 18 21:29:44: at libstd/panicking.rs:511
Sep 18 21:29:44: 5: std::panicking::continue_panic_fmt
Sep 18 21:29:44: at libstd/panicking.rs:426
Sep 18 21:29:44: 6: rust_begin_unwind
Sep 18 21:29:44: at libstd/panicking.rs:337
Sep 18 21:29:44: 7: core::panicking::panic_fmt
Sep 18 21:29:44: at libcore/panicking.rs:92
Sep 18 21:29:44: 8: core::panicking::panic
Sep 18 21:29:44: at libcore/panicking.rs:53
Sep 18 21:29:44: 9: <cached::stores::SizedCache<K, V> as cached::Cached<K, V>>::cache_set
Sep 18 21:29:44: 10: recoordinator::inventory::s3::query
Sep 18 21:29:44: 11: recoordinator::inventory::s3::query_one
Sep 18 21:29:44: 12: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &'a mut F>::call_once
Sep 18 21:29:44: 13: <&'a mut I as core::iter::iterator::Iterator>::next
Sep 18 21:29:44: 14: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T, I>>::from_iter
Sep 18 21:29:44: 15: recoordinator::reel::to_desc::reel_to_desc
Sep 18 21:29:44: 16: recoordinator::dispatch
Sep 18 21:29:44: 17: std::panicking::try::do_call
Sep 18 21:29:44: 18: __rust_maybe_catch_panic
Sep 18 21:29:44: at libpanic_unwind/lib.rs:105
This is probably from one of these two lines:
Lines 124 to 125 in 0ed46ce
I guess one of these unwrap() assumptions doesn't hold true for us?
I'd like to be able to use cached with an async function and not use block_on. For example:
cached!{
    MY_ASYNC_MATH;
    async fn(id: i32) -> i32 = {
        let result = some_async_math_call(id).await;
        result
    }
}
My compiler keeps complaining about:
Line 630 in 21bd5c6
warning: unused return value of `std::mem::replace` that must be used
--> src/stores.rs:630:9
|
630 | std::mem::replace(self, HashMap::new());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_must_use)]` on by default
= note: if you don't need the old value, you can just assign the new value directly
warning: 1 warning emitted
Is there a reason for using std::mem::replace instead of just *self = HashMap::new();?
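The two forms behave the same once the old map is thrown away; mem::replace additionally hands the old map back, which is the only reason to prefer it. A minimal comparison:

```rust
use std::collections::HashMap;

// mem::replace swaps in a fresh map and returns the old one.
fn reset_replace(map: &mut HashMap<u32, u32>) -> HashMap<u32, u32> {
    std::mem::replace(map, HashMap::new())
}

// Plain assignment drops the old map in place, no unused_must_use warning.
fn reset_assign(map: &mut HashMap<u32, u32>) {
    *map = HashMap::new();
}
```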
For convenience, incoming PR
Would be much more type safe that way.
I need to serialize the cache:
All methods on the Cached trait are named cache_*; it's very redundant. I think we should remove this noise.
First off, thank you for an awesome crate!
I'm trying to do something like the following:
pub fn fn_name(some_parameters, time: u64) -> Result<> {
    cached_key_result! {
        FN_NAME: TimedCache<> = TimedCache::with_lifespan_and_capacity(time, 10);
        fn inner(same_parameters_as_above) -> Result<> {
            ...code...
        }
    }
    inner(original_parameters)
}
But when I do, I get compiler errors that time isn't a const. I tried writing code to turn time into a const, but that never seemed to work out. So I'm wondering if it's possible to have a more dynamic timer for TimedCache.
Thank you in advance.
Is it possible to have an async function with the cached_key_result! macro?
I cannot get it to work... here's a basic example:
cached_key_result! {
    CACHE: SizedCache<String, String> = SizedCache::with_size(100);
    Key = { id.to_owned() };
    async fn foo(id: String) -> Result<String, &'static dyn std::error::Error> = {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok("Hey".to_string())
    }
}
It doesn't work with the async keyword there... but when you remove the async keyword (and the use of tokio sleep), it works.
Since apparently I can't really use cached with generic functions (say, fn add_one<T: One + Add<Output=T>>(n: T) -> T { n + T::one() }), how would I proceed to make a cache for a specific type that's valid for said generic (let's use usize)?
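One workaround is to keep the generic function uncached and cache a concretely typed wrapper. A stdlib-only sketch (the manual static map stands in for what #[cached] on the wrapper would generate, and From<u8> stands in for num::One to avoid the dependency):

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// The generic function stays generic and uncached.
fn add_one<T: std::ops::Add<Output = T> + From<u8>>(n: T) -> T {
    n + T::from(1u8)
}

// Concrete wrapper with its own cache; #[cached] on a `fn add_one_usize`
// would produce an equivalent static.
static ADD_ONE_USIZE: OnceLock<Mutex<HashMap<usize, usize>>> = OnceLock::new();

fn add_one_usize(n: usize) -> usize {
    let cache = ADD_ONE_USIZE.get_or_init(|| Mutex::new(HashMap::new()));
    let mut guard = cache.lock().unwrap();
    *guard.entry(n).or_insert_with(|| add_one(n))
}
```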
The macro doesn't let me assign privacy to the function.
error: no rules expected the token `pub`
--> src/...
|
24 | pub fn get(a_keyword: &str) -> Option<Self> = {
| ^^^
It seems the cache_proc_macro module version was not increased, and therefore it does not contain the necessary changes regarding timed caches.
#[cached(size=10, time=10)] does not compile, printing the following error:
error: custom attribute panicked
--> src/repo/mod.rs:135:1
|
135 | #[cached(size=10, time=10)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: message: cache types (unbound, size, time, or type and create) are mutually exclusive