
go-cache's People

Contributors

alexedwards, darrenmcc, databus23, dustin, inf-rno, patrickmn, temoto


go-cache's Issues

What if the onEvicted func is very slow?

If the onEvicted func executes very slowly, the cleanup func DeleteExpired will be blocked:

 func (c *cache) DeleteExpired() {
	var evictedItems []keyAndValue
	now := time.Now().UnixNano()
	c.mu.Lock()
	for k, v := range c.items {
		// "Inlining" of expired
		if v.Expiration > 0 && now > v.Expiration {
			ov, evicted := c.delete(k)
			if evicted {
				evictedItems = append(evictedItems, keyAndValue{k, ov})
			}
		}
	}
	c.mu.Unlock()
	for _, v := range evictedItems {
		c.onEvicted(v.key, v.value)
	}
}  

Should a separate goroutine be used to run the onEvicted callbacks?
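One way to decouple the two, sketched below under the assumption that callbacks may take arbitrarily long: have OnEvicted only enqueue the evicted item onto a channel and let a single worker goroutine do the slow work, so the janitor's DeleteExpired pass returns quickly. The channel size and the worker loop are illustrative, not part of go-cache.

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	c := cache.New(50*time.Millisecond, 10*time.Millisecond)

	// Hand evicted items to one worker goroutine so a slow handler never
	// blocks the janitor's DeleteExpired pass.
	type evicted struct {
		key   string
		value interface{}
	}
	work := make(chan evicted, 128)
	go func() {
		for e := range work {
			time.Sleep(time.Second) // simulate a slow handler
			fmt.Println("handled eviction of", e.key, e.value)
		}
	}()

	c.OnEvicted(func(k string, v interface{}) {
		work <- evicted{k, v} // return immediately; the real work happens elsewhere
	})

	c.SetDefault("foo", "bar")
	time.Sleep(2 * time.Second) // give the janitor and the worker time to run
}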

code is simple - memory leak?

I'm chasing after my own memory leak... I need some APIs to tell me the number of objects in the cache and their aggregate size. Maybe a debug callback when the garbage collector runs.
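For the object count at least, the exported ItemCount method already exists; a minimal sketch of periodic debug logging follows (the 30-second interval is arbitrary, and aggregate byte size would have to be estimated by the caller, since the library does not track it):

package main

import (
	"log"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	c := cache.New(5*time.Minute, 10*time.Minute)
	c.SetDefault("foo", "bar")

	// ItemCount may include expired items the janitor has not yet removed.
	go func() {
		for range time.Tick(30 * time.Second) {
			log.Printf("cache items: %d", c.ItemCount())
		}
	}()

	time.Sleep(time.Minute) // keep the demo alive long enough to see a log line
}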

Accidental dereference causes janitor to shut down

A colleague of mine fell into the following trap:

var myCache cache.Cache
...
myCache = *cache.New(...)

This of course triggers the finalizer, which shuts down the janitor, and results in a memory leak because expired items are never cleaned up.
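For contrast, a minimal sketch of the safe pattern, which simply keeps the *cache.Cache pointer that New returns so the value (and its janitor) is never garbage-collected away:

package main

import (
	"time"

	"github.com/patrickmn/go-cache"
)

var myCache *cache.Cache // keep the pointer, do not dereference and copy the struct

func main() {
	myCache = cache.New(5*time.Minute, 10*time.Minute)
	myCache.SetDefault("foo", "bar")
}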

Ever make a typed cache?

Hi there, hearing about the project for the first time. I can see how powerful an un-typed cache would be. I'm just curious if folks have ever looked into making the cache typed -- you pass the type in as an argument at the beginning, and there's some pro-flection (anti-reflection) / metaprogramming under the hood.

Or maybe wrapper methods on look-up that enforce type-safety? Either way, something that reduces the burden on the caller to cast their looked-up values. Maybe if a use-case wanted type-safety, they'd implement their own wrapper method.
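For what it's worth, a caller-side typed wrapper is straightforward with Go 1.18+ generics; the TypedCache type below is purely a sketch of that idea, not something the library provides:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

// TypedCache pushes the type assertion into one place so callers never cast.
type TypedCache[T any] struct {
	c *cache.Cache
}

func NewTyped[T any](defaultExpiration, cleanupInterval time.Duration) *TypedCache[T] {
	return &TypedCache[T]{c: cache.New(defaultExpiration, cleanupInterval)}
}

func (t *TypedCache[T]) Set(k string, v T, d time.Duration) { t.c.Set(k, v, d) }

func (t *TypedCache[T]) Get(k string) (T, bool) {
	var zero T
	v, ok := t.c.Get(k)
	if !ok {
		return zero, false
	}
	typed, ok := v.(T)
	if !ok {
		return zero, false // stored under a different type; treat as a miss
	}
	return typed, true
}

func main() {
	tc := NewTyped[int](5*time.Minute, 10*time.Minute)
	tc.Set("answer", 42, cache.DefaultExpiration)
	if n, ok := tc.Get("answer"); ok {
		fmt.Println(n + 1) // no cast needed at the call site
	}
}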

Feature Request: Expose Keys?

I'd love to be able to expose the list of keys that are currently in my cache.

This functionality basically already exists in go-cache, it's just unexported/not wrapped in a nice method name.
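Indeed, a keys helper can be built today on the exported Items() method, which returns a copy of the unexpired items; only the Keys function name below is made up:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

// Keys lists the keys currently in the cache.
func Keys(c *cache.Cache) []string {
	items := c.Items()
	keys := make([]string, 0, len(items))
	for k := range items {
		keys = append(keys, k)
	}
	return keys
}

func main() {
	c := cache.New(5*time.Minute, 10*time.Minute)
	c.SetDefault("a", 1)
	c.SetDefault("b", 2)
	fmt.Println(Keys(c)) // order is not deterministic
}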

no values returned

Hello,

this is my code:

var C = cache.New(50*time.Minute, 50*time.Minute)

func getFromMemory() {
	foo, found := C.Get("foo")
	if !found {
		fmt.Println("Nil")
	} else {
		fmt.Println(foo, found)
	}
}

func insertToMem() {
	ip := []string{"192.168.1.1", "192.168.1.2", "192.168.1.3", "192.168.1.5", "192.168.1.15"}
	Time := []string{time.Now().String(), time.Now().String(), time.Now().String(), time.Now().String(), time.Now().String()}
	tsVal := []string{"123", "123", "123", "123", "123"}

	saif := test{Ip: ip, Time: Time, TsVal: tsVal}

	Get := New{}

	for _, val := range saif.Ip {
		Get.Foo[0] = val

		for _, getTime := range saif.Time {
			Get.Foo[1] = getTime
		}

		for _, tsVal := range saif.TsVal {
			Get.Foo[2] = tsVal
		}

		C.Set("foo", Get.Foo, cache.DefaultExpiration)
	}
}

func main() {
	//insertToMem()
	getFromMemory()
}

No values are returned when the insertToMem() call is commented out.

Feature request: max size and/or max objects

To make go-cache as useful as memcache it needs an upper limit on the amount of memory that it can use otherwise it is easy to cache too many things and explode the memory usage.

Ideally it would have a memory limit like memcache does. It could be somewhat approximate as I'm sure it isn't particularly easy to account for memory used in the cache.

A max objects limit as well would probably be useful.
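The library has no built-in limit, but a rough sketch of a max-objects guard layered on top of ItemCount is shown below; boundedSet and errCacheFull are made-up names, and the guard is approximate (ItemCount can include expired-but-uncleaned items, and the check is not atomic with the Set):

package main

import (
	"errors"
	"time"

	"github.com/patrickmn/go-cache"
)

var errCacheFull = errors.New("cache full")

// boundedSet refuses brand-new keys once the cache holds maxItems entries;
// overwriting an existing key is always allowed.
func boundedSet(c *cache.Cache, maxItems int, k string, v interface{}, d time.Duration) error {
	if _, exists := c.Get(k); !exists && c.ItemCount() >= maxItems {
		return errCacheFull
	}
	c.Set(k, v, d)
	return nil
}

func main() {
	c := cache.New(time.Minute, 5*time.Minute)
	_ = boundedSet(c, 1000, "foo", "bar", cache.DefaultExpiration)
}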

multilevel cache feature

Hello,

A multilevel cache (go-cache + Redis) would be quite useful in a server-cluster setup. Is this in the plan?

Is there any good solution for a multilevel cache?

Thoughts if you make a v2

Hi; thanks for a quick-to-use in-memory cache. If you make a v2, one thought on API design: since we already set the cache's default timeout in the constructor, we should not have to repeat it every time we set a cache item. I would think a better interface would be:

c.Set(key, val)

and allow it to be overridden with:

c.SetWithTimeout(key, val, duration)
// or
c.WithTimeout(duration).Set(key,val)

Another thought would be to add a max size parameter that could error if you attempt to set a value into cache after a given threshold is hit. This would provide some good back pressure for any system that becomes too aggressive.

Obviously, no such change should happen in this version, so as not to break API backwards compatibility.
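For reference, the proposed surface can already be approximated with a thin wrapper over the current API; defaultedCache and its methods below are just a sketch of the suggestion, not library code:

package main

import (
	"time"

	"github.com/patrickmn/go-cache"
)

// defaultedCache: Set uses the default the constructor was given,
// SetWithTimeout overrides it per call.
type defaultedCache struct {
	c *cache.Cache
}

func newDefaulted(defaultTTL, cleanup time.Duration) *defaultedCache {
	return &defaultedCache{c: cache.New(defaultTTL, cleanup)}
}

func (d *defaultedCache) Set(k string, v interface{}) {
	d.c.Set(k, v, cache.DefaultExpiration)
}

func (d *defaultedCache) SetWithTimeout(k string, v interface{}, ttl time.Duration) {
	d.c.Set(k, v, ttl)
}

func main() {
	c := newDefaulted(5*time.Minute, 10*time.Minute)
	c.Set("foo", "bar")
	c.SetWithTimeout("baz", "qux", time.Minute)
}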

fatal error: concurrent map read and map write

I'm using go-cache in a gin-gonic web service. When I started to load-test the system with over 200 threads, the whole application crashed with the following error:

fatal error: concurrent map read and map write

When the cache is disabled (always pull from redis), I do not see the errors anymore. Is there possibly a non-thread-safe portion of the code or perhaps something wrong with my code?

var cachedJSONTokens *cache.Cache
var cacheInit sync.Once
var cacheTimeout time.Duration

func GetAllowedRoles(name string, skipLocalCache bool) (bool, bool) {
	cacheInit.Do(func () {
		cacheTimeout = // pulled from environment variable
		cachedJSONTokens = cache.New(cacheTimeout, cacheTimeout*10)
	})


	token_query := fmt.Sprintf("token:%s", name)

	var dat string
	var err error
	if x, found := cachedJSONTokens.Get(token_query); !skipLocalCache && found && cacheTimeout > 0 {
		dat = x.(string)
		if dat == "0" {
			return false, false
		}
	} else {
		dat, err = redis.RedisClient().Get(token_query).Result()
		if err != nil {
			if err.Error() == "redis: nil" {
				cachedJSONTokens.Set(token_query,"0",cache.DefaultExpiration)
				return false, false
			} else {
				logger.Error(err)
				return false, false
			}
		}
		cachedJSONTokens.Set(token_query,dat,cache.DefaultExpiration)
	}

	var red_dat model.Token
	if err := json.Unmarshal([]byte(dat), &red_dat); err != nil {
		logger.Panic(err.Error())
	}

	return red_dat.BoolField1, red_dat.BoolField2
} 

Close objects that implement io.Closer on delete

I think it would be useful if, on delete, objects stored in the cache that implement io.Closer were closed.

This is especially useful when they are evicted from the cache; in that case the user has no way of closing them manually.
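Until then, something close to this can be assembled from the existing OnEvicted hook; a sketch follows (note that, as far as I can tell, the hook fires on Delete and on janitor cleanup, but not when a key is simply overwritten by Set):

package main

import (
	"io"
	"os"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	c := cache.New(time.Minute, 5*time.Minute)

	// Close any stored value that happens to implement io.Closer when it is
	// deleted or evicted by the janitor.
	c.OnEvicted(func(k string, v interface{}) {
		if closer, ok := v.(io.Closer); ok {
			_ = closer.Close()
		}
	})

	f, _ := os.Open("/etc/hostname") // any io.Closer will do
	c.SetDefault("file", f)
	c.Delete("file") // triggers OnEvicted, which closes the file
}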

Feature Request: access expire time

It would be nice to be able to access the expiration time. For example, something that needs to be cached now might not need to be later, so the scenario would play out as follows:

  • Cache an object with an expiry of 10 minutes
  • On request for that object, check how much time is left to expire
  • If time is within threshold, refresh the data for that key in the background such that we do not refresh during a live request

Let me know what you think
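For reference, the existing GetWithExpiration already exposes the expiration time; below is a sketch of the background-refresh idea built on it, where refreshThreshold, getOrRefresh, and refresh are illustrative names:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

const refreshThreshold = 2 * time.Minute

// getOrRefresh returns the cached value and, if it is close to expiring,
// kicks off a refresh in the background instead of during the live request.
func getOrRefresh(c *cache.Cache, key string) (interface{}, bool) {
	v, exp, found := c.GetWithExpiration(key)
	if !found {
		return nil, false
	}
	if !exp.IsZero() && time.Until(exp) < refreshThreshold {
		go refresh(c, key)
	}
	return v, true
}

func refresh(c *cache.Cache, key string) {
	// Re-fetch the data from its source and re-Set it; omitted here.
	c.Set(key, "fresh value", cache.DefaultExpiration)
}

func main() {
	c := cache.New(10*time.Minute, 15*time.Minute)
	c.SetDefault("obj", "stale value")
	fmt.Println(getOrRefresh(c, "obj"))
}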

Feature request: Ensure single OnEvicted call

Hi!
OnEvicted is an awesome idea, but it lacks one key feature: synchronization.

The callback is called outside any locks, so it's possible that a couple of goroutines will try to populate the cache at the same time.

There is no issue if only the janitor calls onEvicted.

Why are the keys strings?

Hey there,

A coworker and I were wondering about the reasoning behind making the cache keys strings. Using strings as keys in maps is slower and often less convenient than using structs (as Ashish Gandhi details towards the end of this talk). Is it because allowing a cache key of interface{} introduces the risk of a runtime type error if a user tried to store, say, a slice?

Thanks for writing and maintaining the package!

T

Provide a way to distinguish "not found" from "wrong type" for incr/decr methods

These are two very different error cases. In my use case, I don't really care if the item's not found (that just means it has expired) but I do care if it's not the type I expect.

It'd be nice if there was a way to distinguish the two. I think the only way to do that without serious API breakage would be to return two different error types. I'd be up for working on a PR if you'd be interested in changing this.
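A sketch of what the two distinct error types might look like (none of this exists in go-cache today); callers would then use errors.As to tell the cases apart:

package cacheerrors

import "fmt"

// NotFoundError would mean the key is absent or has expired.
type NotFoundError struct{ Key string }

func (e NotFoundError) Error() string { return fmt.Sprintf("item %q not found", e.Key) }

// WrongTypeError would mean the stored value is not a numeric type that
// Increment/Decrement can operate on.
type WrongTypeError struct{ Key, Got string }

func (e WrongTypeError) Error() string {
	return fmt.Sprintf("item %q is not an incrementable type (got %s)", e.Key, e.Got)
}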

Decoding of gob data may fail when decoding []*T before []T

More details in Go issue #2995: http://code.google.com/p/go/issues/detail?id=2995

If the cache includes a []*T where T.T = []*T, and the cache does not include a []T which is serialized first (which is impossible to predict, since the underlying data structure for the cache is a map), loading cache data will fail.

The only impact of this is that, in certain cases, you will be unable to load cache data using Load or LoadFile. All other cache functionality is unaffected, and you can store []*T in a cache just fine.

LoadFile is null

I have some cached data. I used SaveFile() to save it to a local file, then LoadFile() to load that file, but I get map[]. Where is my data?
Sorry, my mistake: I was using the wrong expiration time!

Data Race

2017/09/25 14:24:31 WARNING: DATA RACE
2017/09/25 14:24:31 Write at 0x00c420063c50 by goroutine 27:
2017/09/25 14:24:31 runtime.mapdelete_faststr()
2017/09/25 14:24:31 /usr/local/go/src/runtime/hashmap_fast.go:801 +0x0
2017/09/25 14:24:31 ...s/vendor/github.com/patrickmn/go-cache.(*cache).DeleteExpired()
2017/09/25 14:24:31 ...s/vendor/github.com/patrickmn/go-cache/cache.go:885 +0x318
2017/09/25 14:24:31 github.homedepot.com/ose-platform/nos/vendor/github.com/patrickmn/go-cache.(*janitor).Run()
2017/09/25 14:24:31 ...s/vendor/github.com/patrickmn/go-cache/cache.go:1039 +0xdd

2017/09/25 14:53:13 Goroutine 27 (running) created at:
2017/09/25 14:53:13 ...s/vendor/github.com/patrickmn/go-cache.runJanitor()
2017/09/25 14:53:13 ...s/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0xf2
2017/09/25 14:53:13 ...s/vendor/github.com/patrickmn/go-cache.newCacheWithJanitor()
2017/09/25 14:53:13 ..s/vendor/github.com/patrickmn/go-cache/cache.go:1079 +0x167
2017/09/25 14:53:13 ...s/vendor/github.com/patrickmn/go-cache.New()
2017/09/25 14:53:13 ...s/vendor/github.com/patrickmn/go-cache/cache.go:1092 +0x7b

Return value on increment

It would be extremely useful to return the new value on increment and decrement.

Happy to implement it and send a pull request if you're interested.

Time Expiration Overwrite

I have this piece of code

c := cache.New(5*time.Minute, 10*time.Minute)
c.Set("foo", "bar", cache.DefaultExpiration)
c.Set("foo", "baz", 10*time.Minute)

When "foo" become expired?

Semantic Versions (tags)

It would be great to be able to pin to a particular version of go-cache using gopkg.in for versioning.

Basically I'm just asking to have something tagged as version 1.0 :-)

Details: http://gopkg.in/

Efficient deletion

Have you thought about introducing (as an option, maybe) a binary tree into the cache structure, with items arranged in sorted order according to their expiration? This could avoid going through all the items when deleting expired ones. I might give it a try if anyone is interested.
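A standalone sketch of the idea using the standard library's container/heap (this is not go-cache's internal structure; the cache would still need its map for lookups and would have to keep the heap in sync on Set/Delete):

package main

import (
	"container/heap"
	"fmt"
	"time"
)

// expEntry pairs a key with its expiration in nanoseconds.
type expEntry struct {
	key        string
	expiration int64
}

// expHeap is a min-heap ordered by expiration, so expired items cluster at
// the front and DeleteExpired need not scan every item.
type expHeap []expEntry

func (h expHeap) Len() int            { return len(h) }
func (h expHeap) Less(i, j int) bool  { return h[i].expiration < h[j].expiration }
func (h expHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *expHeap) Push(x interface{}) { *h = append(*h, x.(expEntry)) }
func (h *expHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

func main() {
	h := &expHeap{}
	heap.Init(h)
	now := time.Now().UnixNano()
	heap.Push(h, expEntry{"a", now - 1})                // already expired
	heap.Push(h, expEntry{"b", now + int64(time.Hour)}) // far future
	heap.Push(h, expEntry{"c", now + int64(10*time.Millisecond)})

	// Pop only while the earliest expiration is in the past.
	for h.Len() > 0 && (*h)[0].expiration <= now {
		fmt.Println("expired:", heap.Pop(h).(expEntry).key)
	}
}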

About time to tag a new release?

The dep vendoring tool uses the latest tag instead of master by default.

Right now the latest tag (v2.0.0) for this package is over a year old.

Unlock Mutex

Out of curiosity, is there any special reason not to use defer mu.Unlock()?

It is not possible to use Items() race-free if a janitor is running

Since the lock on the map is released once Items() returns, the janitor can come along and concurrently modify the map while user code is accessing it. Adding a synchronisation wrapper, as suggested by the documentation, is useless.

The solution is to explicitly copy the map and return the copy.
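A sketch of that explicit-copy approach from the caller's side; as far as I can tell, newer releases of go-cache take the same approach inside Items() itself, copying the unexpired items into a fresh map:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

// snapshot copies the map returned by Items() before iterating, so later
// janitor runs cannot interfere with the copy the caller holds.
func snapshot(c *cache.Cache) map[string]cache.Item {
	src := c.Items()
	dst := make(map[string]cache.Item, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	c := cache.New(time.Minute, time.Second)
	c.SetDefault("foo", "bar")
	for k, item := range snapshot(c) {
		fmt.Println(k, item.Object)
	}
}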

less than one purging time not respected

Summary

When a cache is set up with a cleanup interval of less than 1, I am no longer able to Get an item after it expires.

Steps to reproduce

The following is a minimal working example illustrating the problem:

package main

import (
	"fmt"
	"github.com/patrickmn/go-cache"
	"time"
)

func main() {
	duration := time.Duration(-1) * time.Nanosecond
	fmt.Println(duration < 0)
	cache := cache.New(5*time.Second, duration)
	cache.SetDefault("foo", "bar")
	for {
		item, found := cache.Get("foo")
		if found {
			fmt.Println("Found it!" + item.(string))
			time.Sleep(time.Second)
		} else {
			fmt.Println("Key not found!")
			return
		}
	}
}

Experienced behaviour

After the item expires, Get returns a found boolean of false, indicating that the item cannot be found.

Expected behaviour

The item should still be found, since the janitor should be disabled when a cleanup interval of less than 1 is specified on creation of the cache.

Am I doing something wrong, or misunderstanding something?

Cache items gone when re-running script

Here is the code:

c := cache.New(0, 10*time.Minute)
session := &Session{}

// Not set? Set it
if _, found := c.Get(fmt.Sprintf("mykey-%d", iteration)); found == false {
	log.Println("DEBUG: SESSION: NOT FOUND")
	session = session.Create(somevarshere)
	c.Set(fmt.Sprintf("mykey-%d", iteration), session, cache.NoExpiration)
}

if sess, found := c.Get(fmt.Sprintf("mykey-%d", iteration)); found {
	log.Println("DEBUG: FOUND!")
	session = sess.(*Session)
}

Results:

2019/02/08 00:30:29 DEBUG: STARTING ITERATION  1
2019/02/08 00:30:29 DEBUG: SESSION: NOT FOUND

No matter how many times I run the script, it does not find the cached items. Should they be stored in memory so that when my cronjob runs the script again it can pull them? Is cache.New() overriding them?

panic: RUnlock of unlocked RWMutex when replacing expired item

Hi, "Inlining" of get and Expired has introduced a panic in Add() when replacing an expired object.

// Add an item to the cache only if an item doesn't already exist for the given
// key, or if the existing item has expired. Returns an error otherwise.
func (c *cache) Add(k string, x interface{}, d time.Duration) error {
    c.mu.Lock()
    _, found := c.get(k)

Then we see the implementation of get():

func (c *cache) get(k string) (interface{}, bool) {
    item, found := c.items[k]
    if !found {
        return nil, false
    }
    // "Inlining" of Expired
    if item.Expiration > 0 {
        if time.Now().UnixNano() > item.Expiration {
            c.mu.RUnlock()
            return nil, false

The Add() function acquired a write lock (RWMutex.Lock()), but if there's an expired object with the same key, the "inlined" code calls RWMutex.RUnlock() - but it was the write-lock that was held, not a read-lock.

how to share cache between multiple different sessions of program.

Hi, I'm trying to fetch an external JSON response and store it in go-cache, but it's not available the next time I call the method, even before its expiration time. Sharing the code snippet:

func GetRate() map[string]interface{} {
	c := cache.New(5*time.Minute, 4*time.Minute)
	getrate, found := c.Get("mortrate")
	if found {
		grate := getrate.(map[string]interface{})
		fmt.Printf("This is coming from cache \n")
		return grate
	} else {
		fmt.Printf("We are going to fetch from mysite.com")
		resp, err := http.Get("mysite.com/getrate.htm?output=json")
		c.Set("resp", today, cache.DefaultExpiration)
		return resp
	}
}

Replace time.Now() by runtime.nanotime()

  • time.Time is 24 bytes. int64 returned by nanotime() is 8 bytes. This one is not relevant for the code - item.Expiration is int64 already.
  • runtime.nanotime() is 2x faster
import	_ "unsafe" // required to use //go:linkname

//go:noescape
//go:linkname nanotime runtime.nanotime
func nanotime() int64

// time.Now() is 45ns, runtime.nanotime is 20ns
// I can not create an exported symbol with //go:linkname
// I need a wrapper
// Go does not inline functions? https://lemire.me/blog/2017/09/05/go-does-not-inline-functions-when-it-should/
// The wrapper costs 5ns per call
func Nanotime() int64 {
	return nanotime()
}

Using 1ms resolution we can potentially save 4 bytes more.

Time to IDLE and Time to Live

The defaultExpiration given when creating a new cache is the time to live. Will go-cache support time to idle?

E.g., for now defaultExpiration = 10 seconds, so an item will be removed after 10 seconds no matter how many times it was accessed.
If time to idle were supported, an item's expiration would be reset whenever it is accessed.
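For what it's worth, time to idle can be approximated on top of the current API by re-Setting a value on every successful read; getSliding below is a made-up helper name, and this of course pays an extra write per cache hit:

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

// getSliding resets an item's expiration to the default on every hit,
// giving sliding (idle-based) expiration.
func getSliding(c *cache.Cache, k string) (interface{}, bool) {
	v, found := c.Get(k)
	if found {
		c.SetDefault(k, v) // refresh the idle timer
	}
	return v, found
}

func main() {
	c := cache.New(10*time.Second, time.Minute)
	c.SetDefault("foo", "bar")
	fmt.Println(getSliding(c, "foo"))
}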

Thanks.
