Otter

High-performance in-memory cache

💡 Motivation

I once came across the fact that none of the Go cache libraries are truly contention-free. Most of them are a map with a mutex and an eviction policy. Unfortunately, these are not able to reach the speed of caches in other languages (such as Caffeine). For example, the fastest cache from Dgraph Labs, Ristretto, was at best 30% faster than its competitors (Otter is many times faster), but it had a poor hit ratio, even though its README claims otherwise. This can be a problem in real-world applications, because no one wants to run into a performance bottleneck caused by a cache library 🙂. As a result, I wanted to build the fastest, easiest-to-use cache with an excellent hit ratio.

Please leave a ⭐ as motivation if you liked the idea 😄

🗃 Related works

Otter is based on the following papers

✨ Features

  • Simple API: Just set the parameters you want in the builder and enjoy
  • Autoconfiguration: Otter is automatically configured based on the parallelism of your application
  • Generics: You can safely use any comparable types as keys and any types as values
  • TTL: Expired values will be automatically deleted from the cache
  • Cost-based eviction: Otter supports eviction based on the cost of each item (see the sketch after this list)
  • Excellent throughput: Otter is currently the fastest cache library with a huge lead over the competition
  • Great hit ratio: The new S3-FIFO algorithm is used, which shows excellent results
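
Cost-based eviction is most useful when entries differ in size. The sketch below is a minimal illustration that weighs each entry by the byte length of its value; it only uses the builder calls shown in the examples further down, and it assumes that the builder capacity acts as the total cost budget once a cost function is set, so verify that against the package documentation.

package main

import (
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // Sketch: weigh each entry by the size of its value in bytes.
    // Assumption: with a cost function set, the builder capacity
    // (64 MB here) is treated as the total cost budget rather than
    // as a number of entries.
    cache, err := otter.MustBuilder[string, []byte](64 * 1024 * 1024).
        Cost(func(key string, value []byte) uint32 {
            return uint32(len(value))
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    cache.Set("page:1", make([]byte, 4096)) // costs 4096
    cache.Set("page:2", make([]byte, 512))  // costs 512
}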

📚 Usage

📋 Requirements

  • Go 1.19+

🛠️ Installation

go get -u github.com/maypok86/otter

✏️ Examples

Otter uses a builder pattern that allows you to conveniently create a cache instance with different parameters.

Cache with const TTL

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour) 
    cache.Set("key", "value")

    // get value from cache
    value, ok := cache.Get("key")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key")

    // delete data and stop goroutines
    cache.Close()
}

Cache with variable TTL

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithVariableTTL().
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour)
    cache.Set("key1", "value1", time.Hour)
    // set item with ttl (1 minute)
    cache.Set("key2", "value2", time.Minute)

    // get value from cache
    value, ok := cache.Get("key1")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key1")

    // delete data and stop goroutines
    cache.Close()
}
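
Both examples enable CollectStats(), so the collected hit and miss counters can be read back at runtime. The snippet below is only a sketch: the Stats() accessor and its Hits, Misses and Ratio methods are assumptions about the package API, so check the Go Reference documentation for the exact names before relying on them.

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    cache.Set("key", "value")
    cache.Get("key")     // hit
    cache.Get("missing") // miss

    // Assumed accessors: Stats(), Hits(), Misses(), Ratio().
    stats := cache.Stats()
    fmt.Printf("hits=%d misses=%d ratio=%.2f\n", stats.Hits(), stats.Misses(), stats.Ratio())
}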

📊 Performance

The benchmark code can be found here.

🚀 Throughput

The throughput benchmarks are a Go port of the Caffeine benchmarks.

Read (100%)

In this benchmark 8 threads concurrently read from a cache configured with a maximum size.
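
For orientation, here is a minimal sketch of how such a parallel read workload can be driven with Go's standard testing package. This is not the project's benchmark harness (linked above): it uses a simple round-robin key pattern instead of a Zipf distribution, and testing.B decides the degree of parallelism (GOMAXPROCS goroutines by default) rather than a fixed 8 threads. The cache construction mirrors the examples above.

package bench

import (
    "strconv"
    "testing"
    "time"

    "github.com/maypok86/otter"
)

// BenchmarkRead drives a 100% read workload from parallel goroutines.
func BenchmarkRead(b *testing.B) {
    cache, err := otter.MustBuilder[string, string](10_000).
        Cost(func(key string, value string) uint32 { return 1 }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        b.Fatal(err)
    }
    defer cache.Close()

    // Pre-populate the cache so that reads mostly hit.
    for i := 0; i < 10_000; i++ {
        cache.Set(strconv.Itoa(i), "value")
    }

    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        i := 0
        for pb.Next() {
            cache.Get(strconv.Itoa(i % 10_000))
            i++
        }
    })
}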

Read (75%) / Write (25%)

In this benchmark 6 threads concurrently read from and 2 threads write to a cache configured with a maximum size.

Read (50%) / Write (50%)

In this benchmark 4 threads concurrently read from and 4 threads write to a cache configured with a maximum size.

Read (25%) / Write (75%)

In this benchmark 2 threads concurrently read from and 6 threads write to a cache configured with a maximum size.

Write (100%)

In this benchmark 8 threads concurrently write to a cache configured with a maximum size.

Otter shows excellent speed under all workloads except the extreme write-heavy one, but such a workload is very rare for caches and usually indicates a very low hit ratio.

🎯 Hit ratio

Zipf

S3

This trace is described as "disk read accesses initiated by a large commercial search engine in response to various web search requests."

DS1

This trace is described as "a database server running at a commercial site running an ERP application on top of a commercial database."

P3

The trace P3 was collected from workstations running Windows NT by using Vtrace, which captures disk operations through the use of device filters.

P8

The trace P8 was collected from workstations running Windows NT by using Vtrace, which captures disk operations through the use of device filters.

LOOP

This trace demonstrates a looping access pattern.

OLTP

This trace is described as "references to a CODASYL database for a one hour period."

In summary, S3-FIFO (otter) falls behind W-TinyLFU (theine) on LFU-friendly traces (databases, search, analytics), but achieves an equal or better hit ratio on web traces.

๐Ÿ‘ Contribute

Contributions are welcome as always. Before submitting a new PR, please open a new issue first so community members can discuss it. For more information, please see the contribution guidelines.

Additionally, you might find existing open issues which can help with improvements.

This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated.

📄 License

This project is Apache 2.0 licensed, as found in the LICENSE file.
