Group

Eventually Consistent distributed process registry, process groups, lifecycle monitoring, and isolated subclusters for Elixir. No external dependencies.

Features

  - Eventually consistent distributed process registry with per-key metadata
  - Process groups: many processes per key, with "/"-prefix queries
  - Lifecycle monitoring with prefix, exact, or :all subscriptions
  - Named subclusters with opt-in membership and local TTL leases
  - Reads served straight from ETS; writes sharded to reduce contention
  - No external dependencies

Installation

def deps do
  [{:group, "~> 0.1.0"}]
end

Quick Start

Start a Group instance under your supervision tree:

children = [
  {Group, name: :my_app}
]
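In a typical project this child spec lives in your application callback module; a sketch (the module names `MyApp.Application` and `MyApp.Supervisor` are illustrative):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # One Group instance; every Group call below references it by this name
      {Group, name: :my_app}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```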

Registry

# Register the calling process under a unique key
:ok = Group.register(:my_app, "user/123", %{name: "Alice"})

# Look up by key — returns {pid, meta} or nil
{pid, %{name: "Alice"}} = Group.lookup(:my_app, "user/123")

# Unregister (also happens automatically on process death)
:ok = Group.unregister(:my_app, "user/123")

Process Groups

# Join a group (many processes can join the same key)
:ok = Group.join(:my_app, "chat/room/42", %{role: :member})

# List all members — returns [{pid, meta}, ...]
members = Group.members(:my_app, "chat/room/42")

# Leave
:ok = Group.leave(:my_app, "chat/room/42")

members/2 returns joined processes for a key. Registered processes are not included — use lookup/2 for those. Keys ending with "/" perform a prefix query across all shards:

# All members in rooms under "chat/"
Group.members(:my_app, "chat/")

Monitoring

Subscribe to lifecycle events matching a pattern:

# Prefix match — all keys starting with "user/"
:ok = Group.monitor(:my_app, "user/")

# Exact match
:ok = Group.monitor(:my_app, "user/123")

# Everything
:ok = Group.monitor(:my_app, :all)

Events arrive as {:group, events, info} tuples in the monitoring process's mailbox:

def handle_info({:group, events, _info}, state) do
  Enum.each(events, fn
    %Group.Event{type: :registered, key: key, pid: pid, meta: meta} ->
      # a process registered at `key`
      :ok
    %Group.Event{type: :unregistered, key: key, meta: meta, reason: reason} ->
      # a registered process died or unregistered
      :ok
    %Group.Event{type: :joined, key: key, pid: pid, meta: meta} ->
      # a process joined the group at `key`
      :ok
    %Group.Event{type: :left, key: key, pid: pid, meta: meta, reason: reason} ->
      # a process left or died
      :ok
  end)
  {:noreply, state}
end

Single operations (register, join) produce one event per tuple. Bulk operations (nodedown, process death) batch all events from that operation into one tuple.
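The subscription and the `handle_info/2` clause above naturally live in one GenServer; a sketch (the module name `MyApp.UserWatcher` and the state shape are illustrative):

```elixir
defmodule MyApp.UserWatcher do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Subscribe from inside the process that should receive the
    # {:group, events, info} tuples in its mailbox.
    :ok = Group.monitor(:my_app, "user/")
    {:ok, %{}}
  end

  @impl true
  def handle_info({:group, events, _info}, state) do
    # A single tuple may carry many events (e.g. after a nodedown),
    # so fold over all of them.
    state =
      Enum.reduce(events, state, fn
        %Group.Event{type: :registered, key: key, pid: pid}, acc ->
          Map.put(acc, key, pid)

        %Group.Event{type: :unregistered, key: key}, acc ->
          Map.delete(acc, key)

        _other, acc ->
          acc
      end)

    {:noreply, state}
  end
end
```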

Dispatch

Send a message to all members of a key:

:ok = Group.dispatch(:my_app, "chat/room/42", {:new_message, "hello"})
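On the receiving side, each member gets the dispatched term in its mailbox like any other message; a sketch of a room member process (module name illustrative):

```elixir
defmodule MyApp.RoomMember do
  use GenServer

  def start_link(room), do: GenServer.start_link(__MODULE__, room)

  @impl true
  def init(room) do
    # Join from inside the member process, so membership is tied to
    # this process's lifetime and cleaned up automatically on death.
    :ok = Group.join(:my_app, "chat/room/" <> room, %{role: :member})
    {:ok, room}
  end

  @impl true
  def handle_info({:new_message, text}, room) do
    # Delivered by Group.dispatch/3 to every member of the key
    IO.puts("[room #{room}] #{text}")
    {:noreply, room}
  end
end
```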

Named Clusters

Isolate groups and registries into named subclusters. Only nodes that have called connect/2 for a cluster participate in that cluster's replication.

# Connect this node to a named cluster
:ok = Group.connect(:my_app, "game_servers_123")

# Or lease the connection with a TTL while this node still has
# local interest in the cluster
:ok = Group.connect(:my_app, "game_servers_123", ttl: 30_000)

# All operations accept a :cluster option
:ok = Group.join(:my_app, "room/1", %{}, cluster: "game_servers_123")
members = Group.members(:my_app, "room/1", cluster: "game_servers_123")
:ok = Group.monitor(:my_app, :all, cluster: "game_servers_123")

TTL leases are local policy only: when a lease expires without renewal, this node disconnects itself from the named cluster and stops participating in its replication. Other nodes' membership in the cluster is unaffected.
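A node that wants to stay connected can periodically re-issue the leased connect while it has live interest. This sketch assumes that repeating connect/3 refreshes the local lease (the interval and cluster name are illustrative):

```elixir
defmodule MyApp.ClusterKeepalive do
  use GenServer

  @cluster "game_servers_123"
  @ttl 30_000

  def start_link(_), do: GenServer.start_link(__MODULE__, nil)

  @impl true
  def init(_) do
    renew()
    {:ok, nil}
  end

  @impl true
  def handle_info(:renew, state) do
    renew()
    {:noreply, state}
  end

  defp renew do
    # Assumption: calling connect/3 again refreshes the lease
    # before the previous TTL elapses.
    :ok = Group.connect(:my_app, @cluster, ttl: @ttl)
    Process.send_after(self(), :renew, div(@ttl, 2))
  end
end
```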

Nodes

# All Group peers (nodes that completed peer discovery), excluding self
Group.nodes(:my_app)

# All nodes in a named cluster
Group.nodes(:my_app, "game_servers_123")

Runtime Log Level

Toggle verbose logging at runtime without restart:

Group.log_level(:my_app, :verbose)  # turn on verbose
Group.log_level(:my_app, :info)     # back to normal
Group.log_level(:my_app, false)     # silence all Group logs

Group.log_level/2 updates :persistent_term, so it should be used as an occasional admin control, not from a hot path.

Events

Events are delivered as {:group, events, %{name: name}} tuples containing %Group.Event{} structs:

%Group.Event{
  type: :registered | :unregistered | :joined | :left,
  supervisor: :my_app,
  cluster: nil | "cluster_name",
  key: "user/123",
  pid: #PID<0.150.0>,
  meta: %{},
  previous_meta: nil | %{},    # old meta on re-register/re-join
  reason: nil | term()          # exit reason on unregistered/left
}
Event          Trigger
:registered    register/4 — new or re-register (updates meta)
:unregistered  Process died or unregister/3 called
:joined        join/4 — new or re-join (updates meta)
:left          Process died or leave/3 called

Re-registering or re-joining an existing key updates the metadata in place and delivers an event with previous_meta set to the old value.
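For example, a monitor watching "user/123" would see two :registered events, the second carrying the old metadata (the event comments reflect the table above, not captured output):

```elixir
:ok = Group.monitor(:my_app, "user/123")

:ok = Group.register(:my_app, "user/123", %{name: "Alice"})
# event: %Group.Event{type: :registered, meta: %{name: "Alice"},
#                     previous_meta: nil}

:ok = Group.register(:my_app, "user/123", %{name: "Alice", admin: true})
# event: %Group.Event{type: :registered,
#                     meta: %{name: "Alice", admin: true},
#                     previous_meta: %{name: "Alice"}}
```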

Consistency Model

All operations are eventually consistent:

  - Writes apply to the local shard's ETS tables first, then replicate asynchronously to the matching shard on peer nodes.
  - Reads (lookup/2, members/2) are served from local ETS and may briefly lag writes made on other nodes.
  - Conflicting registrations created on both sides of a partition are resolved on heal via resolve_registry_conflict.

Configuration

{Group,
  name: :my_app,
  shards: 8,                                   # number of write shards (default)
  log: :info,                                  # :info | :verbose | false
  resolve_registry_conflict: {MyResolver, :resolve, []},  # partition conflict resolver
  extract_meta: {MyApp, :extract_meta, []},    # transform meta on read
  replicated_pg_receiver_buffer_size: 64,      # buffered remote PG join/leave ops per shard
  replicated_pg_receiver_flush_interval: 5     # max buffer age in ms before flush
}
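The resolver MFA is invoked when a partition heal leaves two nodes holding conflicting registrations for the same key. The exact callback arguments are not documented here, so this sketch assumes the resolver receives the two {pid, meta} candidates and returns the winner; the :registered_at field is something your own app would have to put in meta:

```elixir
defmodule MyResolver do
  # Hypothetical resolver shape: prefer the registration with the
  # newer :registered_at timestamp in its metadata, falling back to
  # the first candidate when neither has one.
  def resolve({_pid_a, meta_a} = a, {_pid_b, meta_b} = b) do
    if Map.get(meta_b, :registered_at, 0) > Map.get(meta_a, :registered_at, 0) do
      b
    else
      a
    end
  end
end
```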

Options

  - name — instance name referenced by every Group call (required)
  - shards — number of write shards (default: 8)
  - log — :info | :verbose | false
  - resolve_registry_conflict — {module, function, args} used to pick a winner when a partition heal finds conflicting registrations for a key
  - extract_meta — {module, function, args} applied to transform meta on read
  - replicated_pg_receiver_buffer_size — buffered remote PG join/leave ops per shard (default: 64)
  - replicated_pg_receiver_flush_interval — max buffer age in ms before flush (default: 5)

Architecture

Group.Supervisor (:"my_app_group_sup")
├── Group.Replica.Data        — owns all ETS tables, survives shard crashes
├── Group.Replica.Supervisor  — supervises N shard GenServers
│   ├── Group.Replica (shard 0)
│   ├── Group.Replica (shard 1)
│   └── ...
├── Registry                  — local monitor subscriptions (:"my_app_group_registry")
└── Group.ClusterLease        — local named-cluster TTL sweeper

Sharding

Keys are routed to shards via :erlang.phash2({cluster, key}, num_shards). Including the cluster in the hash avoids false contention between the default cluster and named clusters.
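The routing itself is a one-liner; for illustration:

```elixir
shards = 8

shard_for = fn cluster, key ->
  # Hashing the {cluster, key} pair means the same key in different
  # clusters can land on different shards, so a hot key in one named
  # cluster doesn't contend with the default cluster's writes.
  :erlang.phash2({cluster, key}, shards)
end

shard_for.(nil, "user/123")                 # default cluster
shard_for.("game_servers_123", "user/123")  # named cluster — may differ
```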

Reads (lookup, members) go directly to ETS — no GenServer hop. This is the hot path and runs at millions of ops/sec.

Writes (register, join, etc.) go through the shard's GenServer, which updates ETS and broadcasts replication messages. Multiple shards reduce write contention for unrelated keys.

ETS Tables

Each shard owns 4 ETS tables:

Table       Type           Key                   Purpose
reg_by_key  :set           {cluster, key}        Registry lookup — O(1)
reg_by_pid  :ordered_set   {pid, cluster, key}   Reverse index for death cleanup
pg_by_key   :ordered_set   {cluster, key, pid}   Group membership lookup
pg_by_pid   :ordered_set   {pid, cluster, key}   Reverse index for death cleanup
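The :ordered_set choice for pg_by_key is what makes the "chat/" prefix queries cheap: binary keys sort byte-wise, so all keys sharing a prefix form one contiguous range. A standalone sketch of that kind of range scan — not Group's actual code, with the table layout assumed from the description above:

```elixir
# Demo table shaped like pg_by_key: {{cluster, key, pid}, meta}
t = :ets.new(:pg_by_key_demo, [:ordered_set])
:ets.insert(t, {{nil, "chat/room/1", :pid_a}, %{role: :member}})
:ets.insert(t, {{nil, "chat/room/2", :pid_b}, %{role: :member}})
:ets.insert(t, {{nil, "mail/inbox", :pid_c}, %{}})

prefix = "chat/"

# Keys starting with `prefix` sit between `prefix` and
# `prefix <> <<255>>` in byte order, so one range select finds them.
members =
  :ets.select(t, [
    {{{nil, :"$1", :"$2"}, :"$3"},
     [{:>=, :"$1", prefix}, {:<, :"$1", prefix <> <<255>>}],
     [{{:"$2", :"$3"}}]}
  ])
# members is a [{pid, meta}, ...] list for keys under "chat/"
```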

Plus 3 shared tables:

  - cluster_nodes / node_clusters — the authoritative cluster-membership tables
  - cluster_leases — local lease metadata, used only by the TTL sweeper

Group.Replica.Data owns all tables and is supervised with rest_for_one so tables survive shard crashes.

Peer Discovery

When Group starts (or a new Erlang node connects), shards exchange peer_connect / peer_connect_ack messages with their counterparts on other nodes. This handshake:

  1. Validates that shard counts match (raises on mismatch).
  2. Exchanges cluster membership lists.
  3. Sends cluster_state snapshots for shared clusters — the full registry and group data, delivered in a single message per cluster.

This is how a new node catches up to the existing cluster state.

Replication

After the initial sync, steady-state changes propagate via broadcast messages (replicate_register, replicate_join, etc.) sent from the writing shard to the corresponding shard on all peer nodes in the relevant cluster.

Named Cluster TTL Leases

Named-cluster TTLs are a local way to reduce replication fanout to nodes that no longer care about a cluster.

Process Death Cleanup

Shards monitor all registered/joined processes. On DOWN, the shard:

  1. Removes entries from both the primary and reverse-index ETS tables.
  2. Broadcasts replicate_unregister / replicate_leave to peer nodes.
  3. Fires :unregistered / :left events to local monitors.

Node Disconnect

On nodedown, each shard purges all entries owned by the disconnected node from its ETS tables and fires events for each removed entry.

Testing

mix test

See test/README.md for details on the distributed test infrastructure.

Benchmarks

cd priv/bench

# Local (single-node)
./run_local.sh

# Distributed (3 separate BEAM VMs)
./run_distributed.sh
./run_distributed.sh --shards 4

See priv/bench/README.md for scenario descriptions.

License

MIT