sfcache

package module
v1.5.0
Published: Dec 15, 2025 License: Apache-2.0 Imports: 11 Imported by: 0

README

sfcache - Stupid Fast Cache

sfcache is the fastest in-memory cache for Go. Need multi-tier persistence? We have it. Need thundering herd protection? We've got that too.

Designed for persistently caching API requests in an unreliable environment, this cache has an abundance of production-ready features:

Features

  • Faster than a bat out of hell - Best-in-class latency and throughput
  • S3-FIFO eviction - Better hit-rates than LRU (learn more)
  • Multi-tier persistent cache (optional) - Bring your own database or use built-in backends such as localfs, cloudrun, and null
  • Optional compression - S2 or Zstd for all persistence backends via pkg/store/compress
  • Per-item TTL - Optional expiration
  • Thundering herd prevention - GetSet deduplicates concurrent loads for the same key
  • Graceful degradation - Cache works even if persistence fails
  • Zero-allocation updates - keeps GC pressure low

Usage

As a stupid-fast in-memory cache:

import "github.com/codeGROOVE-dev/sfcache"

// strings as keys, ints as values
cache := sfcache.New[string, int]()
cache.Set("answer", 42)
val, found := cache.Get("answer")

Or as a multi-tier cache with local persistence to survive restarts:

import (
  "github.com/codeGROOVE-dev/sfcache"
  "github.com/codeGROOVE-dev/sfcache/pkg/store/localfs"
)

p, _ := localfs.New[string, User]("myapp", "")
cache, _ := sfcache.NewTiered(p)

cache.SetAsync(ctx, "user:123", user) // Don't wait for the key to persist
cache.Store.Len(ctx)                  // Access persistence layer directly

With S2 compression (fast, good ratio):

import "github.com/codeGROOVE-dev/sfcache/pkg/store/compress"

p, _ := localfs.New[string, User]("myapp", "", compress.S2())
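
Zstd trades some speed for a better ratio; assuming its constructor mirrors the S2 one (the compress.Zstd name here is a guess; check pkg/store/compress):

p, _ := localfs.New[string, User]("myapp", "", compress.Zstd()) // hypothetical Zstd constructor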

How about a persistent cache suitable for Cloud Run or local development? This uses Cloud Datastore when available and local files when not:

import "github.com/codeGROOVE-dev/sfcache/pkg/store/cloudrun"

p, _ := cloudrun.New[string, User](ctx, "myapp")
cache, _ := sfcache.NewTiered(p)

Performance against the Competition

sfcache prioritizes high hit-rates and low read latency. It ships a built-in make bench target that asserts cache dominance:

>>> TestLatencyNoEviction: Latency - No Evictions (Set cycles within cache size) (go test -run=TestLatencyNoEviction -v)
| Cache         | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
|---------------|-----------|----------|------------|-----------|----------|------------|
| sfcache       |       7.0 |        0 |          0 |      23.0 |        0 |          0 |
| lru           |      23.0 |        0 |          0 |      23.0 |        0 |          0 |
| ristretto     |      28.0 |       13 |          0 |      77.0 |      118 |          3 |
| otter         |      34.0 |        0 |          0 |     160.0 |       51 |          1 |
| freecache     |      74.0 |        8 |          1 |      53.0 |        0 |          0 |
| tinylfu       |      80.0 |        0 |          0 |     110.0 |      168 |          3 |

- 🔥 Get: 229% better than next best (lru)
- 🔥 Set: tied with next best (lru)

>>> TestLatencyWithEviction: Latency - With Evictions (Set uses 20x unique keys) (go test -run=TestLatencyWithEviction -v)
| Cache         | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
|---------------|-----------|----------|------------|-----------|----------|------------|
| sfcache       |       7.0 |        0 |          0 |      94.0 |        0 |          0 |
| lru           |      24.0 |        0 |          0 |      83.0 |       80 |          1 |
| ristretto     |      31.0 |       14 |          0 |      73.0 |      119 |          3 |
| otter         |      34.0 |        0 |          0 |     176.0 |       61 |          1 |
| freecache     |      69.0 |        8 |          1 |     102.0 |        1 |          0 |
| tinylfu       |      79.0 |        0 |          0 |     115.0 |      168 |          3 |

- 🔥 Get: 243% better than next best (lru)
- 💧 Set: 29% worse than best (ristretto)

>>> TestZipfThroughput1: Zipf Throughput (1 thread) (go test -run=TestZipfThroughput1 -v)

### Zipf Throughput (alpha=0.99, 75% read / 25% write): 1 thread

| Cache         | QPS        |
|---------------|------------|
| sfcache       |  100.26M   |
| lru           |   44.58M   |
| tinylfu       |   18.42M   |
| freecache     |   14.07M   |
| otter         |   13.52M   |
| ristretto     |   11.32M   |

- 🔥 Throughput: 125% faster than next best (lru)

>>> TestZipfThroughput16: Zipf Throughput (16 threads) (go test -run=TestZipfThroughput16 -v)

### Zipf Throughput (alpha=0.99, 75% read / 25% write): 16 threads

| Cache         | QPS        |
|---------------|------------|
| sfcache       |   36.46M   |
| freecache     |   15.00M   |
| ristretto     |   13.47M   |
| otter         |   10.75M   |
| lru           |    5.87M   |
| tinylfu       |    4.19M   |

- 🔥 Throughput: 143% faster than next best (freecache)

>>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)

### Meta Trace Hit Rate (10M ops from Meta KVCache)

| Cache         | 50K cache | 100K cache |
|---------------|-----------|------------|
| sfcache       |   71.16%  |   78.30%   |
| otter         |   41.12%  |   56.34%   |
| ristretto     |   40.35%  |   48.99%   |
| tinylfu       |   53.70%  |   54.79%   |
| freecache     |   56.86%  |   65.52%   |
| lru           |   65.21%  |   74.22%   |

- 🔥 Meta trace: 5.5% better than next best (lru)

>>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)

### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)

| Cache         | Size=1% | Size=2.5% | Size=5% |
|---------------|---------|-----------|---------|
| sfcache       |  63.80% |    68.71% |  71.84% |
| otter         |  61.77% |    67.67% |  71.33% |
| ristretto     |  34.91% |    41.23% |  46.58% |
| tinylfu       |  63.83% |    68.25% |  71.56% |
| freecache     |  56.65% |    57.84% |  63.39% |
| lru           |  57.33% |    64.55% |  69.92% |

- 🔥 Hit rate: 0.34% better than next best (tinylfu)

Want even more comprehensive benchmarks? See https://github.com/tstromberg/gocachemark, where sfcache takes the top score.

Implementation Notes

Differences from the S3-FIFO paper

sfcache implements the core S3-FIFO algorithm (Small/Main/Ghost queues with frequency-based promotion) with these optimizations:

  1. Dynamic Sharding - 1-2048 independent S3-FIFO shards (vs single-threaded) for concurrent workloads
  2. Bloom Filter Ghosts - Two rotating Bloom filters track evicted keys (vs storing actual keys), reducing memory 10-100x (see the sketch after this list)
  3. Lazy Ghost Checks - Only check ghosts when evicting, saving 5-9% latency when the cache isn't full
  4. Intrusive Lists - Embed pointers in entries (vs separate nodes) for zero-allocation queue ops
  5. Fast-path Hashing - Specialized for int/string keys using wyhash and bit mixing
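
As a hedged illustration of item 2 (a minimal sketch, not sfcache's actual internals): evicted keys are inserted into an active Bloom filter, membership checks probe both filters, and once the active filter has absorbed roughly its capacity it swaps roles with the stale one, forgetting the oldest generation of ghosts.

import "hash/fnv"

// ghostFilter approximately tracks recently evicted keys.
type ghostFilter struct {
	active, stale []uint64 // bit sets for the two generations
	count, limit  int      // insertions into the active filter
}

func newGhostFilter(capacity int) *ghostFilter {
	words := capacity/8 + 1 // rough sizing; real sizing targets a false-positive rate
	return &ghostFilter{
		active: make([]uint64, words),
		stale:  make([]uint64, words),
		limit:  capacity,
	}
}

// hashes derives two hashes from one FNV pass for double hashing.
func (g *ghostFilter) hashes(key string) (uint64, uint64) {
	h := fnv.New64a()
	h.Write([]byte(key))
	h1 := h.Sum64()
	return h1, h1>>33 | h1<<31
}

// Add records an evicted key, rotating generations when the active filter fills.
func (g *ghostFilter) Add(key string) {
	if g.count >= g.limit { // rotate: drop the oldest generation of ghosts
		g.active, g.stale = g.stale, g.active
		clear(g.active)
		g.count = 0
	}
	h1, h2 := g.hashes(key)
	bits := uint64(len(g.active)) * 64
	for i := uint64(0); i < 4; i++ { // k=4 probes
		b := (h1 + i*h2) % bits
		g.active[b/64] |= 1 << (b % 64)
	}
	g.count++
}

// Contains reports whether key was probably evicted recently (false positives possible).
func (g *ghostFilter) Contains(key string) bool {
	h1, h2 := g.hashes(key)
	probe := func(set []uint64) bool {
		bits := uint64(len(set)) * 64
		for i := uint64(0); i < 4; i++ {
			b := (h1 + i*h2) % bits
			if set[b/64]&(1<<(b%64)) == 0 {
				return false
			}
		}
		return true
	}
	return probe(g.active) || probe(g.stale)
}

Rotating two fixed-size filters yields an approximate sliding window of recent evictions without storing any keys, which is where the 10-100x memory saving comes from.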

License

Apache 2.0

Documentation

Overview

Package sfcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type MemoryCache

type MemoryCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MemoryCache is a fast in-memory cache without persistence. All operations are context-free; only GetSet can return an error, and only from the loader.

func New added in v1.1.0

func New[K comparable, V any](opts ...Option) *MemoryCache[K, V]

New creates a new in-memory cache.

Example:

cache := sfcache.New[string, User](
    sfcache.Size(10000),
    sfcache.TTL(time.Hour),
)
defer cache.Close()

cache.Set("user:123", user)              // uses default TTL
cache.Set("user:123", user, time.Hour)   // explicit TTL
user, ok := cache.Get("user:123")

func (*MemoryCache[K, V]) Close

func (*MemoryCache[K, V]) Close()

Close releases resources held by the cache. For MemoryCache this is a no-op, but provided for API consistency.

func (*MemoryCache[K, V]) Delete

func (c *MemoryCache[K, V]) Delete(key K)

Delete removes a value from the cache.

func (*MemoryCache[K, V]) Flush

func (c *MemoryCache[K, V]) Flush() int

Flush removes all entries from the cache. Returns the number of entries removed.

func (*MemoryCache[K, V]) Get

func (c *MemoryCache[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found, or the zero value and false if not found.

func (*MemoryCache[K, V]) GetSet added in v1.2.0

func (c *MemoryCache[K, V]) GetSet(key K, loader func() (V, error), ttl ...time.Duration) (V, error)

GetSet retrieves a value from the cache, or calls loader to compute and store it. This prevents thundering herd: if multiple goroutines request the same missing key concurrently, only one loader runs while others wait for its result.

Example:

user, err := cache.GetSet("user:123", func() (User, error) {
    return fetchUserFromDB("123")
})

// Or with explicit TTL:
user, err := cache.GetSet("user:123", func() (User, error) {
    return fetchUserFromDB("123")
}, time.Hour)

func (*MemoryCache[K, V]) Len

func (c *MemoryCache[K, V]) Len() int

Len returns the number of entries in the cache.

func (*MemoryCache[K, V]) Set

func (c *MemoryCache[K, V]) Set(key K, value V, ttl ...time.Duration)

Set stores a value in the cache. If no TTL is provided, the default TTL is used. If no default TTL is configured, the entry never expires.

type Option

type Option func(*config)

Option configures a MemoryCache or TieredCache.

func Size added in v1.1.0

func Size(n int) Option

Size sets the maximum number of entries in the memory cache. Default is 16384.

func TTL added in v1.1.0

func TTL(d time.Duration) Option

TTL sets the default TTL for cache entries. Entries without an explicit TTL will use this value. Default is 0 (no expiration).

type Store added in v1.2.3

type Store[K comparable, V any] interface {
	// ValidateKey checks if a key is valid for this persistence store.
	ValidateKey(key K) error

	// Get retrieves a value from persistent storage.
	Get(ctx context.Context, key K) (V, time.Time, bool, error)

	// Set saves a value to persistent storage with an expiry time.
	Set(ctx context.Context, key K, value V, expiry time.Time) error

	// Delete removes a value from persistent storage.
	Delete(ctx context.Context, key K) error

	// Cleanup removes expired entries older than maxAge.
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)

	// Location returns the storage location for a given key.
	Location(key K) string

	// Flush removes all entries from persistent storage.
	Flush(ctx context.Context) (int, error)

	// Len returns the number of entries in persistent storage.
	Len(ctx context.Context) (int, error)

	// Close releases any resources held by the persistence store.
	Close() error
}

Store defines the interface for cache persistence backends. Uses only standard library types, so implementations can satisfy it without importing this package.
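
As a hedged illustration of how little it takes to satisfy Store, here is a toy map-backed implementation (illustrative only: no durability, coarse locking, and memStore is an invented name):

import (
	"context"
	"sync"
	"time"
)

// memStore is a toy Store backed by a map. It satisfies Store[K, V]
// without importing sfcache, since the interface uses only stdlib types.
type memStore[K comparable, V any] struct {
	mu   sync.Mutex
	data map[K]memEntry[V]
}

type memEntry[V any] struct {
	val    V
	expiry time.Time
}

func newMemStore[K comparable, V any]() *memStore[K, V] {
	return &memStore[K, V]{data: make(map[K]memEntry[V])}
}

func (s *memStore[K, V]) ValidateKey(K) error { return nil } // accept any key

func (s *memStore[K, V]) Get(_ context.Context, key K) (V, time.Time, bool, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	e, ok := s.data[key]
	return e.val, e.expiry, ok, nil // expiry enforcement is left to the cache
}

func (s *memStore[K, V]) Set(_ context.Context, key K, value V, expiry time.Time) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = memEntry[V]{val: value, expiry: expiry}
	return nil
}

func (s *memStore[K, V]) Delete(_ context.Context, key K) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.data, key)
	return nil
}

func (s *memStore[K, V]) Cleanup(_ context.Context, maxAge time.Duration) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	cutoff := time.Now().Add(-maxAge)
	n := 0
	for k, e := range s.data {
		if e.expiry.Before(cutoff) {
			delete(s.data, k)
			n++
		}
	}
	return n, nil
}

func (s *memStore[K, V]) Location(K) string { return "memory" }

func (s *memStore[K, V]) Flush(_ context.Context) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	n := len(s.data)
	s.data = make(map[K]memEntry[V])
	return n, nil
}

func (s *memStore[K, V]) Len(_ context.Context) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return len(s.data), nil
}

func (s *memStore[K, V]) Close() error { return nil }

A tiered cache over it is then just: cache, _ := sfcache.NewTiered(newMemStore[string, User]())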

type TieredCache added in v1.1.0

type TieredCache[K comparable, V any] struct {
	// Store provides direct access to the persistence layer.
	// Use this for persistence-specific operations:
	//   cache.Store.Len(ctx)
	//   cache.Store.Flush(ctx)
	//   cache.Store.Cleanup(ctx, maxAge)
	Store Store[K, V]
	// contains filtered or unexported fields
}

TieredCache is a cache with an in-memory layer backed by persistent storage. The memory layer provides fast access, while the store provides durability. Core operations require context for I/O, while memory operations like Len() do not.

func NewTiered added in v1.1.0

func NewTiered[K comparable, V any](store Store[K, V], opts ...Option) (*TieredCache[K, V], error)

NewTiered creates a cache with an in-memory layer backed by persistent storage.

Example:

store, _ := localfs.New[string, User]("myapp", "")
cache, err := sfcache.NewTiered[string, User](store,
    sfcache.Size(10000),
    sfcache.TTL(time.Hour),
)
if err != nil {
    return err
}
defer cache.Close()

cache.Set(ctx, "user:123", user)              // uses default TTL
cache.Set(ctx, "user:123", user, time.Hour)   // explicit TTL
user, ok, err := cache.Get(ctx, "user:123")
storeCount, _ := cache.Store.Len(ctx)

func (*TieredCache[K, V]) Close added in v1.1.0

func (c *TieredCache[K, V]) Close() error

Close releases resources held by the cache.

func (*TieredCache[K, V]) Delete added in v1.1.0

func (c *TieredCache[K, V]) Delete(ctx context.Context, key K) error

Delete removes a value from the cache. The value is always removed from memory. Returns an error if persistence deletion fails.

func (*TieredCache[K, V]) Flush added in v1.1.0

func (c *TieredCache[K, V]) Flush(ctx context.Context) (int, error)

Flush removes all entries from the cache, including persistent storage. Returns the total number of entries removed from memory and persistence.

func (*TieredCache[K, V]) Get added in v1.1.0

func (c *TieredCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence.

func (*TieredCache[K, V]) GetSet added in v1.2.0

func (c *TieredCache[K, V]) GetSet(ctx context.Context, key K, loader func(context.Context) (V, error), ttl ...time.Duration) (V, error)

GetSet retrieves a value from the cache (memory or persistence), or calls loader to compute and store it. This prevents thundering herd: if multiple goroutines request the same missing key concurrently, only one loader runs while others wait for its result.

Example:

user, err := cache.GetSet(ctx, "user:123", func(ctx context.Context) (User, error) {
    return fetchUserFromAPI(ctx, "123")
})

// Or with explicit TTL:
user, err := cache.GetSet(ctx, "user:123", func(ctx context.Context) (User, error) {
    return fetchUserFromAPI(ctx, "123")
}, time.Hour)

func (*TieredCache[K, V]) Len added in v1.1.0

func (c *TieredCache[K, V]) Len() int

Len returns the number of entries in the memory cache. For persistence entry count, use cache.Store.Len(ctx).

func (*TieredCache[K, V]) Set added in v1.1.0

func (c *TieredCache[K, V]) Set(ctx context.Context, key K, value V, ttl ...time.Duration) error

Set stores a value in the cache. If no TTL is provided, the default TTL is used. The value is ALWAYS stored in memory, even if persistence fails. Returns an error if the key violates persistence constraints or if persistence fails.
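
For example, a persistence outage degrades durability rather than availability (a sketch; assumes a log package in scope):

// The memory write has already happened by the time Set returns.
if err := cache.Set(ctx, "user:123", user); err != nil {
    log.Printf("persist failed, value still served from memory: %v", err)
}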

func (*TieredCache[K, V]) SetAsync added in v1.1.0

func (c *TieredCache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl ...time.Duration) error

SetAsync stores a value in the cache, handling persistence asynchronously. If no TTL is provided, the default TTL is used. Key validation and in-memory caching happen synchronously. Persistence errors are logged but not returned (fire-and-forget). Returns an error only for validation failures (e.g., invalid key format).
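
A sketch matching the semantics above (the only error surfaced synchronously is key validation):

if err := cache.SetAsync(ctx, "user:123", user); err != nil {
    // Key failed persistence-layer validation; errors from the
    // background persistence write are logged, never returned here.
}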

Directories

Path Synopsis
pkg
    persist (module)
    store/localfs (module)
    store/null (module)
