Cachetastic

Overview

Cachetastic is a powerful and user-friendly caching library for Elixir. It provides a unified interface for various caching mechanisms like ETS and Redis, with built-in fault tolerance, telemetry, and more.

Features

- Unified API over pluggable backends (ETS, Redis, pooled Redis)
- Built-in fault tolerance with automatic fallback to a backup backend
- Fetch-with-fallback, including thundering herd protection
- Multiple named, isolated caches
- Pattern-based key invalidation (Redis backends)
- Key namespacing via a configurable prefix
- Telemetry events and hit/miss statistics
- Configurable serialization (JSON, Erlang term format, or custom)
- Distributed cache invalidation via :pg or Redis Pub/Sub
- Ecto integration for caching query results

Installation

Add cachetastic to your list of dependencies in mix.exs:

def deps do
  [
    {:cachetastic, "~> 1.0"}
  ]
end

Run mix deps.get to fetch the dependencies.

Usage

Configuration

Define the backends and fault tolerance configuration in config/config.exs:

import Config

# Use the pooled Redis backend for production workloads
config :cachetastic, :backends,
  primary: :redis_pool,
  redis_pool: [host: "localhost", port: 6379, pool_size: 10, ttl: 3600],
  ets: [ttl: 600],
  fault_tolerance: [primary: :redis_pool, backup: :ets]

# Optional: prefix all keys (useful when sharing a Redis instance)
config :cachetastic, key_prefix: "myapp"

Cachetastic starts automatically as an OTP application, so no manual supervision setup is needed.

Basic Operations

# Put a value in the cache
Cachetastic.put("key", "value")

# Put with a custom TTL (in seconds)
Cachetastic.put("key", "value", 120)

# Get a value
{:ok, value} = Cachetastic.get("key")

# Delete a value
Cachetastic.delete("key")

# Clear the entire cache
Cachetastic.clear()

Fetch with Fallback

Compute and cache a value on a miss. Fetching includes thundering herd protection: only one process runs the fallback for a given key, while concurrent callers wait for its result:

{:ok, users} = Cachetastic.fetch("active_users", fn ->
  Repo.all(from u in User, where: u.active == true)
end)

# With custom TTL
{:ok, data} = Cachetastic.fetch("expensive_query", fn ->
  compute_expensive_data()
end, ttl: 300)

Named Caches

Run multiple isolated caches:

Cachetastic.put(:sessions, "user:123", session_data, 1800)
{:ok, session} = Cachetastic.get(:sessions, "user:123")

Cachetastic.put(:api_cache, "endpoint:/users", response, 60)
{:ok, cached} = Cachetastic.get(:api_cache, "endpoint:/users")

# Each cache is independent
Cachetastic.clear(:sessions)  # does not affect :api_cache

Pattern-Based Invalidation

Delete groups of keys by pattern (requires the Redis or pooled Redis backend):

# Delete all user-related cache entries
Cachetastic.delete_pattern("user:*")

# Scoped to a named cache
Cachetastic.delete_pattern(:api_cache, "v1:*")
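
For example, grouping a user's entries under one prefix lets you drop them all together; the key layout here is just an illustration:

```elixir
Cachetastic.put("user:123:profile", profile)
Cachetastic.put("user:123:settings", settings)

# Remove every entry for this user in one call
Cachetastic.delete_pattern("user:123:*")
```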

Key Namespacing

Avoid key collisions when sharing a Redis instance between multiple apps:

config :cachetastic, key_prefix: "myapp"

# All keys are automatically prefixed: "myapp:user:123"
Cachetastic.put("user:123", data)

Telemetry Events

Cachetastic emits telemetry events for all operations:

require Logger

:telemetry.attach("my-handler", [:cachetastic, :cache, :get], fn _event, measurements, metadata, _config ->
  Logger.info("Cache #{metadata.result}: #{metadata.key} (#{measurements.duration}ns)")
end, nil)

Events follow the [:cachetastic, :cache, operation] naming pattern shown above, each carrying a duration measurement plus key and result metadata.
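
To observe several operations with one handler, you can use :telemetry.attach_many/4. The :put and :delete event names below are assumptions that follow the same pattern as the :get event above:

```elixir
require Logger

events = [
  [:cachetastic, :cache, :get],
  [:cachetastic, :cache, :put],
  [:cachetastic, :cache, :delete]
]

:telemetry.attach_many("cachetastic-logger", events, fn event, measurements, metadata, _config ->
  # The operation name is the last element of the event path
  op = List.last(event)
  Logger.debug("cachetastic #{op}: #{inspect(metadata)} in #{measurements.duration}ns")
end, nil)
```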

Stats

Cachetastic.Stats.get()
# => %{hits: 42, misses: 5, puts: 20, deletes: 3, clears: 1, errors: 0, fallbacks: 0, hit_rate: 0.894}

Cachetastic.Stats.get(:sessions)
Cachetastic.Stats.reset()

Configurable Serialization

By default, values stored in Redis are serialized as JSON, which is interoperable but only round-trips JSON-compatible data (atom keys, for example, come back as strings). You can swap in a different serializer:

# Use Erlang term format (supports any Elixir term)
config :cachetastic, serializer: Cachetastic.Serializers.ErlangTerm

# Or implement your own
defmodule MyApp.MsgpackSerializer do
  @behaviour Cachetastic.Serializer

  @impl true
  def encode(term), do: Msgpax.pack(term)

  @impl true
  def decode(binary), do: Msgpax.unpack(binary)
end

config :cachetastic, serializer: MyApp.MsgpackSerializer

Distributed Cache Invalidation

When a pub/sub adapter is configured, invalidations are broadcast to the other nodes so each can drop its local copy.

Via Erlang :pg (BEAM clusters)

config :cachetastic, pubsub: [adapter: Cachetastic.PubSub.PG]

Via Redis Pub/Sub (non-BEAM deployments)

config :cachetastic, pubsub: [
  adapter: Cachetastic.PubSub.RedisPubSub,
  redis: [host: "localhost", port: 6379]
]
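
With either adapter in place, an invalidation issued on one node should reach the rest of the cluster. A sketch of the expected behavior (the propagation itself is handled by the configured adapter):

```elixir
# Node A: the underlying data changed, drop the cached copy
Cachetastic.delete("user:123")

# Node B: the broadcast clears the local (e.g. ETS) copy as well,
# so the next read misses instead of serving stale data
Cachetastic.get("user:123")
```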

Ecto Integration

Cache Ecto query results automatically:

defmodule MyApp.Repo do
  use Ecto.Repo,
    otp_app: :my_app,
    adapter: Ecto.Adapters.Postgres

  use Cachetastic.Ecto, repo: MyApp.Repo
end

import Ecto.Query

query = from u in User, where: u.active == true

# First call hits the DB and caches the result
{:ok, users} = Repo.get_with_cache(query)

# Subsequent calls return from cache
{:ok, users} = Repo.get_with_cache(query)

# Invalidate when data changes
Repo.invalidate_cache(query)
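
A common pattern is to invalidate right after a write. A sketch, where MyApp.Accounts, deactivate_user/1, and User.changeset/2 are illustrative names, not part of Cachetastic:

```elixir
defmodule MyApp.Accounts do
  import Ecto.Query
  alias MyApp.{Repo, User}

  def deactivate_user(%User{} = user) do
    active_users = from u in User, where: u.active == true

    with {:ok, user} <- Repo.update(User.changeset(user, %{active: false})) do
      # The cached result for this query is now stale; drop it
      Repo.invalidate_cache(active_users)
      {:ok, user}
    end
  end
end
```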

See Ecto Integration Guide for more details.

Contributing

Feel free to open issues and pull requests. We appreciate your contributions!

License

This project is licensed under the MIT License.