go-redis / cache
Cache library with Redis backend for Golang
Home Page: https://redis.uptrace.dev/guide/go-redis-cache.html
License: BSD 2-Clause "Simplified" License
The encoding/binary package provides no way to unmarshal bytes into google protobuf generated messages.
Error: cannot unmarshal array into Go struct field Attribute.attributes.values of type structpb.Value
( golang/protobuf discussion: golang/protobuf#1019)
Could you please provide a way to get the raw bytes from the cache, i.e. make the getBytes method public? (https://github.com/go-redis/cache/blob/master/cache.go#L195)
https://pkg.go.dev/github.com/go-redis/cache?tab=doc#Stats
I can infer it from context, I suppose, but actual documentation would be better.
Hi,
I think the nil comparison on the cache.Item.Value field, used in several places in the package, is not correct:
Line 263 in 6382f51
Here is an example reproducing the nil comparison:
package main

import "fmt"

// Item (like cache.Item)
type Item struct {
	Name  string
	Value interface{}
}

// SampleValue as the value for the Value field
type SampleValue struct {
	port string
	host string
}

func main() {
	var sampleValue *SampleValue // typed nil pointer
	item := &Item{Name: "item1", Value: sampleValue}
	// This mirrors the nil comparison used in the package; it prints false,
	// because the interface holds a typed nil, which is not == nil.
	fmt.Println(item.Value == nil)
}
A fix could be to use an isNil helper like this:
func isNilFixed(i interface{}) bool {
	if i == nil {
		return true
	}
	switch reflect.TypeOf(i).Kind() {
	case reflect.Ptr, reflect.Map, reflect.Chan, reflect.Slice, reflect.Func:
		// Only nilable kinds may be listed here: reflect.Value.IsNil
		// panics on other kinds (for example reflect.Array).
		return reflect.ValueOf(i).IsNil()
	}
	return false
}
First of all, thanks for the great library. I have a use case where the cache client must work in either cluster or non-cluster mode. My production environment has a Redis cluster, but development uses a single-node Redis. It would be really helpful if you could expose one interface that handles both cases.
Sometimes we need to batch-load or batch-save cache objects, so MGET/MSET support would make life easier.
Should I expect that I can include time.Time values in objects being cached? I'm seeing strange behavior
where the retrieved value is not the same as the cached value.
thanks!
package cache_test

import (
	"context"
	"testing"
	"time"

	"github.com/go-redis/cache/v8"
	"github.com/stretchr/testify/assert"
)

func TestLocalCache(t *testing.T) {
	type Object struct {
		Num    int
		Expiry time.Time
	}
	mycache := cache.New(&cache.Options{
		LocalCache: cache.NewTinyLFU(1000, time.Minute),
	})
	ctx := context.TODO()
	key := "mykey"
	myTime := time.Now()
	obj := &Object{
		Expiry: myTime,
		Num:    42,
	}
	if err := mycache.Set(&cache.Item{
		Ctx:   ctx,
		Key:   key,
		Value: obj,
		TTL:   time.Hour,
	}); err != nil {
		panic(err)
	}
	var wanted Object
	if err := mycache.Get(ctx, key, &wanted); err != nil {
		panic(err)
	}
	assert.Equal(t, myTime, wanted.Expiry)
}
=== RUN TestLocalCache
TestLocalCache: ...
Error Trace: ...
Error: Not equal:
expected: time.Time{wall:0xbfee6e9501f40bb8, ext:11320675, loc:(*time.Location)(0x2cd1e00)}
actual : time.Time{wall:0x1f40bb8, ext:63743670740, loc:(*time.Location)(0x2cd1e00)}
Diff:
--- Expected
+++ Actual
@@ -1,4 +1,4 @@
(time.Time) {
- wall: (uint64) 13830113091963325368,
- ext: (int64) 11320675,
+ wall: (uint64) 32771000,
+ ext: (int64) 63743670740,
loc: (*time.Location)({
Test: TestLocalCache
--- FAIL: TestLocalCache (0.00s)
Hi there, I'm using both https://github.com/go-redis/cache/v8 and https://github.com/go-redis/redismock/v8 for testing. However, even a quite rudimentary test case doesn't seem to work. Here's my code:
package connectors_test

import (
	"testing"
	"time"

	"github.com/go-redis/cache/v8"
	"github.com/go-redis/redismock/v8"
)

type Object struct {
	Str string
}

func TestRedisMock(t *testing.T) {
	db, mock := redismock.NewClientMock()
	mycache := cache.New(&cache.Options{
		Redis: db,
	})
	key := "sometestingtoken"
	rawval := &Object{Str: "mystring"}
	val, _ := mycache.Marshal(rawval)
	mock.ExpectGet(key).RedisNil()
	mock.ExpectSet(key, val, 5*time.Minute)
	var result Object
	if err := mycache.Get(db.Context(), key, &result); err != nil {
		if err := mycache.Set(&cache.Item{
			Ctx:   db.Context(),
			Key:   key,
			Value: rawval,
			TTL:   5 * time.Minute,
		}); err != nil {
			t.Error(err)
		}
	}
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Error(err)
	}
	mock.ClearExpect()
}
Which results in either:
=== RUN TestRedisMock
connectors_test.go:39: cmd(set), return value is required
--- FAIL: TestRedisMock (0.00s)
or sometimes multiple calls to set?:
{"level":"error","time":"2022-12-30T09:57:51Z","message":"redis cache error all expectations were already fulfilled, call to cmd '[set sometestingtoken [...] ex 300]' was not expected"}
{"level":"error","time":"2022-12-30T09:57:51Z","message":"redis cache error all expectations were already fulfilled, call to cmd '[set sometestingtoken [...] ex 300]' was not expected"}
What do you think about adding an option to change the default msgpack struct tag? https://msgpack.uptrace.dev/#using-json-struct-tags
That would make it possible to change the struct tag without duplicating the whole marshal/unmarshal logic. Maybe a configuration option named CodecStructTag, MarshalStructTag, or simply MsgpackStructTag.
Thanks!
We have a Go program consisting of hundreds of goroutines that all need to access the same Redis cache key. The bytes in the local cache are then unmarshalled over and over again.
How about storing the unmarshalled object in the local cache?
type MetricFunc func(key string)

type Options struct {
	Redis        rediser
	LocalCache   LocalCache
	StatsEnabled bool
	Marshal      MarshalFunc
	Unmarshal    UnmarshalFunc
	CacheHit     MetricFunc
	CacheMiss    MetricFunc
}
Hi,
I am not sure whether this package supports an LRU cache. I am asking because I can see some old commits and files in the history, as well as an old doc (v3). If it is supported, could someone please add an example to the README or somewhere?
Thank you
Is it possible to remove all cached objects with a certain prefix?
Hi guys,
In my scenario, I have set a TTL on the Redis key and want to reuse it next time. Could these commands be supported for more flexible TTL handling?
ttl, _ := redis.TTL(ctx, key).Result()
redis.Set(ctx, key, val, ttl).Result()
Hi - I am new to Redis and am trying to understand the utility of this package compared to the go-redis/v9 client library. Is there a Slack channel, Google Group, or other discussion forum where this question is better posed?
My understanding so far is that this is a higher-level wrapper around the go-redis/v9 client library (or many other Go Redis client libraries, actually) which provides cache-specific helper functions, whereas the general go-redis/v9 client is lower-level and can do much more. Is my understanding so far correct?
Follow-up question: does the local cache referenced above implement client-side, server-assisted caching as described in the Redis blog? That is, is this local cache kept in sync with the Redis data? Reading the source code, it definitely does not seem to be the case. Can you confirm? Could you also describe, from your perspective, the utility of this local cache if its invalidation options are limited to TTL (no server/event-based invalidation)?
Thanks so much!
As a user, I want to batch delete many keys at once, as Redis does.
func (cd *Cache) Delete(ctx context.Context, keys ...string) error
I expected that using TinyLFU would overwrite a value when setting it, just like Redis. However, overwriting a value doesn't seem to work. From what I can gather of the TinyLFU internals, it doesn't evict the old value.
Example test:
package cache_test

import (
	"context"
	"testing"
	"time"

	"github.com/go-redis/cache/v9"
	"github.com/stretchr/testify/assert"
)

func Test_TinyLFU(t *testing.T) {
	cacheClient := cache.New(&cache.Options{
		LocalCache: cache.NewTinyLFU(10, time.Minute*30),
	})
	key := "key-1"
	ctx := context.Background()

	// pos1
	pos1 := 1
	err := cacheClient.Set(&cache.Item{
		Key:   key,
		Value: pos1,
	})
	assert.NoError(t, err)

	var result int
	err = cacheClient.Get(ctx, key, &result)
	assert.NoError(t, err)
	assert.Equal(t, pos1, result)

	// pos2
	pos2 := 2
	err = cacheClient.Set(&cache.Item{
		Key:   key,
		Value: pos2,
	})
	assert.NoError(t, err)

	var result2 int
	err = cacheClient.Get(ctx, key, &result2)
	assert.NoError(t, err)

	// The following assertion fails:
	//   Error: Not equal:
	//   expected: 2
	//   actual  : 1
	assert.Equal(t, pos2, result2)
}
When using github.com/go-redis/cache/v8 on a server, the following panic sometimes occurs. Since it happens with untouchable customer data, I don't know the exact conditions to reproduce it.
It seems to be an error occurring in the bufpool package, so I've created the issue in this repository instead of the msgpack package.
I searched for a similar error and found the issue above. I believe the error in that issue is the same as the one I encountered.
Error message:
runtime error: makeslice: cap out of range
Stacktrace:
github.com/vmihailenco/bufpool.(*bufPool).Get(0x424a2a0, 0x2005ef1, 0x9)
external/com_github_vmihailenco_bufpool/buf_pool.go:45 +0x25b
github.com/vmihailenco/bufpool.Get(...)
external/com_github_vmihailenco_bufpool/buf_pool.go:17
github.com/vmihailenco/bufpool.(*Buffer).grow(0xc00637bbf0, 0x5ef1, 0x0)
external/com_github_vmihailenco_bufpool/buffer_ext.go:57 +0x185
github.com/vmihailenco/bufpool.(*Buffer).Write(0xc00637bbf0, 0xc003884000, 0x5ef1, 0x6000, 0x6000, 0x0, 0x0)
external/com_github_vmihailenco_bufpool/buffer.go:121 +0xd0
github.com/vmihailenco/msgpack/v5.(*Encoder).write(...)
external/com_github_vmihailenco_msgpack_v5/encode.go:251
github.com/vmihailenco/msgpack/v5.(*Encoder).EncodeBytes(0xc000be2ae0, 0xc003884000, 0x5ef1, 0x6000, 0x0, 0x0)
external/com_github_vmihailenco_msgpack_v5/encode_slice.go:88 +0xa2
github.com/vmihailenco/msgpack/v5.marshalBinaryValue(0xc000be2ae0, 0x2b14b80, 0xc0000709a0, 0x196, 0x2b14b80, 0x0)
external/com_github_vmihailenco_msgpack_v5/encode_value.go:221 +0xf1
github.com/vmihailenco/msgpack/v5.(*Encoder).EncodeValue(0xc000be2ae0, 0x2b14b80, 0xc0000709a0, 0x196, 0x2b14b80, 0xc0000709a0)
external/com_github_vmihailenco_msgpack_v5/encode.go:228 +0x8e
github.com/vmihailenco/msgpack/v5.encodeArrayValue(0xc000be2ae0, 0x24a1d60, 0xc006415b78, 0x97, 0x2c3f410, 0x2e4b401)
external/com_github_vmihailenco_msgpack_v5/encode_slice.go:134 +0xf1
github.com/vmihailenco/msgpack/v5.encodeSliceValue(0xc000be2ae0, 0x24a1d60, 0xc006415b78, 0x97, 0x24a1d60, 0xc00006e5c0)
external/com_github_vmihailenco_msgpack_v5/encode_slice.go:125 +0x78
github.com/vmihailenco/msgpack/v5.(*Encoder).EncodeValue(0xc000be2ae0, 0x24a1d60, 0xc006415b78, 0x97, 0xc00637bbf0, 0x1)
external/com_github_vmihailenco_msgpack_v5/encode.go:228 +0x8e
github.com/vmihailenco/msgpack/v5.(*Encoder).Encode(0xc000be2ae0, 0x24a1d60, 0xc006415b78, 0x0, 0x1cd)
external/com_github_vmihailenco_msgpack_v5/encode.go:214 +0x126
github.com/go-redis/cache/v8.(*Cache)._marshal(0xc0000fe6e0, 0x24a1d60, 0xc006415b78, 0x0, 0x0, 0x0, 0x0, 0x0)
external/com_github_go_redis_cache_v8/cache.go:348 +0x1a5
github.com/go-redis/cache/v8.(*Cache).Marshal(...)
external/com_github_go_redis_cache_v8/cache.go:328
github.com/go-redis/cache/v8.(*Cache).set(0xc0000fe6e0, 0xc00643b5e0, 0xc0083f2bb0, 0x2, 0x2, 0xc000db7620, 0x59, 0xc000db71a0)
external/com_github_go_redis_cache_v8/cache.go:156 +0x87
github.com/go-redis/cache/v8.(*Cache).Set(...)
external/com_github_go_redis_cache_v8/cache.go:146
...(below this are frames from our service code)...
2019/08/13 12:01:55 cache: Get key="33.690543:-84.430147" failed: dial tcp: lookup tcp/ᣫ: nodename nor servname provided, or not known
2019/08/13 12:01:55 cache: Set key="33.690543:-84.430147" failed: dial tcp: lookup tcp/ᣫ: nodename nor servname provided, or not known
When I set the item's TTL field to 0, the cache.ttl() function resets the TTL to the default of 1 hour.
cache.Codec doesn't exist
Does the current Redis cache implementation support a tree hierarchy for keys?
In my point of view, item.Do() returns the up-to-date value of the item every time it is called,
so I think maybe we could get the up-to-date value when item.Get() misses the cache, and put it into Redis/LRU with an expiration time of TTL.
package main

import (
	"context"
	"fmt"

	cv7 "github.com/go-redis/cache/v7"
	cv8 "github.com/go-redis/cache/v8"
	rdsv7 "github.com/go-redis/redis/v7"
	rdsv8 "github.com/go-redis/redis/v8"
	msgpackv4 "github.com/vmihailenco/msgpack/v4"
)

func main() {
	d := map[string]interface{}{
		"hello": 1,
		"world": "ok",
	}

	// set by v7
	v7 := &cv7.Codec{
		Redis:     rdsv7.NewClient(&rdsv7.Options{}),
		Marshal:   msgpackv4.Marshal,
		Unmarshal: msgpackv4.Unmarshal,
	}
	err := v7.Set(&cv7.Item{
		Key:    "aaa",
		Object: d,
	})
	fmt.Println("set err:", err)

	// get by v8
	v8 := cv8.New(&cv8.Options{
		Redis: rdsv8.NewClient(&rdsv8.Options{}),
	})
	var result map[string]interface{}
	err = v8.Get(context.TODO(), "aaa", &result)
	fmt.Println("get err:", err)
	fmt.Println(result)
}
Then run:
$ go run upgrade_get_set.go
set err: <nil>
get err: unknown compression method: 6b
map[]
This error comes from the code here: https://github.com/go-redis/cache/blob/v8/cache.go#L353
So we can't upgrade go-redis/cache from v7 to v8 directly? If the answer is yes, maybe we should add a changelog entry for this.
As per the local cache interface, GET, SET and DELETE are supported, which works for key-value pairs.
However, I am getting data from Redis using HSET and HGET, i.e. hash key-values. Is there any support for that? If so, how can I achieve it?
Please let me know if there is another place to ask such questions, as this is not a bug. Thanks.
I want to use this cache in a concurrent system. Should I declare a Mutex to manage simultaneous read/write operations, or can I use it safely without one?
type Object struct {
	ID   int
	Name string
}

var objs []Object // populated with real values

err = repo.cache.Set(&cache.Item{
	Key:   cacheKey,
	Ctx:   ctx,
	Value: &objs,
	TTL:   cacheExpirationDuration,
})

err := repo.cache.Get(ctx, cacheKey, &objs)
The err I got was: msgpack: unexpected code=c5 decoding map length
Am I missing anything?
As the title says, I'm having problems connecting to Redis on Docker.
My Go code is as follows:
func NewRedis() (*Redis, error) {
	db, err := strconv.Atoi(os.Getenv("REDIS_DB"))
	if err != nil {
		return nil, fmt.Errorf("error connecting to redis %s:%s:%s: %v",
			os.Getenv("REDIS_ADDR"), os.Getenv("REDIS_PORT"), os.Getenv("REDIS_DB"), err)
	}

	options := redis.RingOptions{
		Addrs: map[string]string{
			os.Getenv("REDIS_ADDR"): fmt.Sprintf(":%s", os.Getenv("REDIS_PORT")),
		},
		DB: db,
	}
	ring := redis.NewRing(&options)

	return &Redis{cache: cache.New(&cache.Options{
		Redis:      ring,
		LocalCache: cache.NewTinyLFU(1000, time.Minute),
	})}, nil
}
my docker-compose responsible for redis:
version: '3.7'

networks:
  calliope:
    name: calliope
    driver: bridge

services:
  redis:
    container_name: redis-calliope
    image: redis:latest
    command: [ "redis-server", "--bind", "redis", "--port", "6379" ]
    ports:
      - 6379:6379
    networks:
      calliope:
        aliases:
          - redis
My docker-compose for the go service:
version: '3.7'

networks:
  calliope:
    name: calliope
    driver: bridge

services:
  calliope_text_analysis:
    container_name: calliope_text_analysis_local
    build:
      context: .             # use an image built from the Dockerfile in the current directory
      dockerfile: Dockerfile
      args:
        - ENV_FILE=env-local
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /tmp/keys/keyfile.json
    volumes:
      - ${GOOGLE_APPLICATION_CREDENTIALS}:/tmp/keys/keyfile.json:ro
    ports:
      - 8888:8888
    networks:
      - calliope
I stumbled across this through a bug in my application where the local cache had a longer TTL than the remote cache, which led to unexpected behaviour. However, using GetSkippingLocalCache fixed it.
While skimming through the source code I noticed that Item has the SkipLocalCache field, but the Set method does not use it. Wouldn't it be better to not use the local cache when the field is true, or is there some reason for this particular behaviour?
I think it would be quick to implement:
func (cd *Cache) set(item *Item) ([]byte, bool, error) {
	// ...
	if cd.opt.LocalCache != nil && !item.SkipLocalCache {
		cd.opt.LocalCache.Set(item.Key, b)
	}
	// ...
}
Best regards,
Frido
I get a var data []struct from the database; how do I “set” the array into the cache?
Hi,
I am trying to cache a Go struct, setting the expiry to 1 day.
Let's say my struct is {a:"aa", b:"bb"}, the key is K1, and its expiry is 1 day. If I then change the struct to add one field, say {a:"aa", b:"bb", c:"cc"}, the cached value will not get updated while the key is unexpired, since the key is the same.
I want an option to detect a struct version change before getting or setting the cache.
Please let me know if there is any way.
Maybe we just had a strange way of using your library (with gobuffalo).
When we tried to call cache.Get(), we ended up getting a panic (invalid memory address or nil pointer dereference) down in pool.(*ConnPool).waitTurn, line 273.
Apparently the context we were passing in to cache.Get() didn't have a proper Done() method.
Changing our code to something like the following and passing in that context seemed to fix it:
ctb := context.Background()
ctx, cancel := context.WithCancel(ctb)
defer cancel() // release the context's resources when done
In a distributed system where stateless services use a central Redis for caching, is there a way to protect against concurrent writes?
For example, first in wins?
So the flow would be:
Pre-req: Key is the same between Service 1 and Service 2.
Service 1 attempts caching X
Service 2 attempts caching X1
Service 2 should, in this specific case, fail and, instead of setting X1, issue a fetch of X.
Ideally, Service 1 issues a lock request; if successful, it issues a set and unlocks. Service 2 attempts the lock but fails, so it issues a Get.
Any thoughts?
I wonder whether you will implement a client-side cache with the server publishing invalidations to evict stale cached data.
ref: https://redis.io/topics/client-side-caching
Hello,
I'm trying to get a key using this code:
import (
	"context"
	"time"

	"github.com/go-redis/cache/v8"
	"github.com/go-redis/redis/v8"
)

timeout := 15 * time.Second
ring := redis.NewRing(&redis.RingOptions{
	Addrs: map[string]string{
		"shard1": ":6379",
	},
	DialTimeout:  timeout,
	ReadTimeout:  timeout,
	WriteTimeout: timeout,
})
c := cache.New(&cache.Options{
	Redis: ring,
})

var wanted interface{}
err := c.GetSkippingLocalCache(context.TODO(), "key", &wanted)
if err != nil {
	panic(err)
}
And I got this error: unknown compression method: 3d
Does anyone know how to fix this?
Environment:
How can we clear the entire cache with a function call?
Hi,
Thank you for this amazing library!
When using it with Redis, I set a TTL via cache.Item.TTL and it was used to set the expiration time in Redis.
However, when setting up a local-only cache with NewTinyLFU(size int, ttl time.Duration), I am not sure what the difference is between the ttl parameter and cache.Item.TTL.
The ttl parameter seems to be a global setting for the cache, while the Item TTL seems to do nothing.
Can you possibly clarify the difference?
Thanks!
I found I can set ttl = -1 in go-redis, but in cache, when TTL < 0, it is reset to 0 by the following code in Item.ttl():
if item.TTL < 0 {
	return 0
}
Because of this, some special caches can't be set permanently.
When writing time.Time dates to Redis, a timezone of time.UTC is read back as time.Local by Get().
Is there a way to prevent this?
Redis Cache version: v8 (last available).
Redis version: v8 (last available).
OS: Mac OS.
When you call Once after the Redis connection is lost, the returned err is nil. After some debugging, it seems that the "redis: client is closed" error is somehow being masked by the internal caller.
The same calls for Get, Set, etc. do return an error describing the client-is-closed situation.
Please check this gist (with a main program), which demonstrates the bug:
https://gist.github.com/datoga/cb31c8ee1ae540dcd64cb956dfecc9a1
In using this library we discovered, when using the recommended TinyLFU local cache, that we were seeing corruption once items in the cache started expiring: the cache returned values for the wrong key when retrieving them.
I have raised #71 in order to demonstrate this issue.
Does Cache work with go-redis v9? We are upgrading our go-redis v8 because we want to use Redis 7, and you need go-redis v9 for Redis 7, but we are afraid we might lose the Cache functionality, as it seems to be tied to v8.
Hi, I noticed that this project has no license.
Line 217 in 1cdfea0
GetSkippingLocalCache reads values from Redis but does not store them in the local cache. Is that a bug or a feature?
A possible fix:
-if !skipLocalCache && cd.opt.LocalCache != nil {
+if cd.opt.LocalCache != nil {
Firstly, thanks for building this. I like the design of the library as a wrapper around the redis client.
I'm curious whether you have any plans to support other Redis data types (e.g. hash/set)? If there aren't fundamental issues supporting other data types, I might be interested in implementing them.
I was wondering if you could export the default _marshal and _unmarshal methods. I wanted to implement custom Marshal and Unmarshal methods for specific types, but fall back on the defaults for everything else.
Happy to submit a PR to help implement this. I appreciate the great work on this and the core library.
Hello!
Thanks for supporting this library.
I started using codec.Once() today and noticed that it does not obtain a distributed lock before executing Func, which results in Func being executed more than once at a time.
Is this behaviour by design?
I think it would be cool to add support for redis/rueidis.
To use redis/rueidis, we only need to add a simple adapter; I made a simple example.
What do you think?
My code goes like:
type WrapCache struct {
	internalCache *cache.Cache
}

func NewWrapCache(
	client redis.UniversalClient,
	size int,
	localExpired time.Duration,
) *WrapCache {
	internalCache := cache.New(&cache.Options{
		Redis:      client,
		LocalCache: cache.NewTinyLFU(size, localExpired),
		Marshal:    json.Marshal,
		Unmarshal:  json.Unmarshal,
	})
	return &WrapCache{
		internalCache: internalCache,
	}
}

func (c *WrapCache) GetJSON(ctx context.Context, key string, value any) error {
	if err := c.internalCache.Get(ctx, key, value); err != nil {
		// Treat a cache miss as redis.Nil, because the higher layers only
		// care about redis errors.
		if errors.Is(err, cache.ErrCacheMiss) {
			return errors.Join(err, redis.Nil)
		}
		return fmt.Errorf("internal cache: failed to get: %w", err)
	}
	return nil
}
When I try to use TinyLFU with size 1, it panics when I call Get:
panic: runtime error: index out of range [0] with length 0
goroutine 1 [running]:
github.com/vmihailenco/go-tinylfu.nvec.inc(...)
/Users/anon/go/pkg/mod/github.com/vmihailenco/[email protected]/cm4.go:72
github.com/vmihailenco/go-tinylfu.(*cm4).add(...)
/Users/anon/go/pkg/mod/github.com/vmihailenco/[email protected]/cm4.go:33
github.com/vmihailenco/go-tinylfu.(*T).Get(0x1400043e240, {0x100adfcd1, 0x16})
/Users/anon/go/pkg/mod/github.com/vmihailenco/[email protected]/tinylfu.go:89 +0x36c
github.com/go-redis/cache/v9.(*TinyLFU).Get(0x14000190390?, {0x100adfcd1?, 0x28?})
/Users/anon/go/pkg/mod/github.com/go-redis/cache/[email protected]/local.go:67 +0xc8
github.com/go-redis/cache/v9.(*Cache).getBytes(0x1400043e2c0, {0x100d7fcb0, 0x10131f0e0}, {0x100adfcd1, 0x16}, 0x0)
/Users/anon/go/pkg/mod/github.com/go-redis/cache/[email protected]/cache.go:219 +0x5c
github.com/go-redis/cache/v9.(*Cache).get(0x1400043e2c0, {0x100d7fcb0?, 0x10131f0e0?}, {0x100adfcd1?, 0x8?}, {0x100ca4d80, 0x101321a40}, 0x0?)
/Users/anon/go/pkg/mod/github.com/go-redis/cache/[email protected]/cache.go:210 +0x38
github.com/go-redis/cache/v9.(*Cache).Get(...)
/Users/anon/go/pkg/mod/github.com/go-redis/cache/[email protected]/cache.go:194
It does not panic if the size is >= 3. Is there any documentation about this behaviour?