# 🦑 Squid Mesh

Workflow automation platform for Elixir applications.

CI · Hex · HexDocs · Elixir Forum · License: Apache 2.0

Squid Mesh lets Phoenix and OTP applications define, run, inspect, replay, and recover durable workflows in code.

*The name blends a squid’s coordinated arms with a mesh of connected workflow steps, capturing the idea of orchestrating many moving parts without rebuilding the coordination layer in every app.*

> [!WARNING]
> Squid Mesh is still in early development. The runtime is suitable for evaluation, local development, and integration work, but it is not yet positioned as production-ready. See Production Readiness for the current checklist and what remains.

## What You Get

## Runtime Shape

## Quick Start

Requirements:

### 1. Install from Hex.pm

```elixir
defp deps do
  [
    {:squid_mesh, "~> 0.1.0-alpha.2"}
  ]
end
```

If the host app defines custom steps with `use Jido.Action`, add `:jido` explicitly as well:

```elixir
defp deps do
  [
    {:jido, "~> 2.0"},
    {:squid_mesh, "~> 0.1.0-alpha.2"}
  ]
end
```
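For orientation, a custom step built on `Jido.Action` might be shaped roughly like this; the `use` options and the exact callback contract shown here are assumptions for illustration, so check the Jido docs for the authoritative API:

```elixir
defmodule MyApp.Steps.FetchFeed do
  # Hypothetical custom step. The `use Jido.Action` options and the
  # run/2 return contract are illustrative assumptions, not verified API.
  use Jido.Action,
    name: "fetch_feed",
    description: "Fetches and parses an RSS feed"

  @impl true
  def run(%{feed_url: url}, _context) do
    # A real step would fetch `url` over HTTP and parse the XML here.
    {:ok, %{entries: [], source: url}}
  end
end
```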

### 2. Configure Squid Mesh and Oban

```elixir
config :squid_mesh,
  repo: MyApp.Repo,
  execution: [
    name: Oban,
    queue: :squid_mesh
  ]

config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [squid_mesh: 10]
```

The host app's Oban config must include the `:squid_mesh` queue when Squid Mesh is using that queue name.
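Oban also has to be started by the host application. The standard pattern from the Oban docs is to add it to the supervision tree with the config read back from the application environment (module names here are placeholders for the host app's own):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      MyApp.Repo,
      # Starts Oban with the `config :my_app, Oban, ...` settings above.
      {Oban, Application.fetch_env!(:my_app, Oban)}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```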

### 3. Install migrations

```shell
mix deps.get
mix squid_mesh.install
mix ecto.migrate
```

`mix squid_mesh.install` copies only Squid Mesh tables into the host app's `priv/repo/migrations`. The host app still owns its Oban setup and `oban_jobs` migration.

## Example: Daily RSS To Discord

This kind of workflow is where Squid Mesh gets interesting: one cron trigger, typed payload defaults, built-in steps, custom steps, explicit failure routing, and step-level retry on the side effect that actually needs it.

```elixir
defmodule Content.Workflows.PostDailyDigest do
  use SquidMesh.Workflow

  workflow do
    trigger :daily_digest do
      cron("0 9 * * 1-5", timezone: "Etc/UTC")

      payload do
        field(:feed_url, :string, default: "https://example.com/feed.xml")
        field(:discord_webhook_url, :string)
        field(:posted_on, :string, default: {:today, :iso8601})
      end
    end

    step(:fetch_feed, Content.Steps.FetchFeed, output: :feed)
    step(:build_digest, Content.Steps.BuildDigest,
      input: [:feed, :posted_on],
      output: :digest
    )
    step(:announce_post, :log, message: "Posting digest to Discord", level: :info)
    step(:record_failed_delivery, Content.Steps.RecordFailedDelivery)

    step(:post_to_discord, Content.Steps.PostToDiscord,
      input: [:digest, :discord_webhook_url],
      retry: [max_attempts: 5, backoff: [type: :exponential, min: 1_000, max: 30_000]]
    )

    transition(:fetch_feed, on: :ok, to: :build_digest)
    transition(:build_digest, on: :ok, to: :announce_post)
    transition(:announce_post, on: :ok, to: :post_to_discord)
    transition(:post_to_discord, on: :ok, to: :complete)
    transition(:post_to_discord, on: :error, to: :record_failed_delivery)
    transition(:record_failed_delivery, on: :ok, to: :complete)
  end
end
```

The step modules can stay small and domain-focused, while Squid Mesh handles durable state, scheduling through Oban, retries, failure routing after retry exhaustion, and run inspection.

When a step needs a narrower contract than the whole payload plus accumulated context, use `input: [...]` to select keys and `output: :key` to namespace the returned map for downstream steps.
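As an illustration of that narrowed contract, a hypothetical `Content.Steps.BuildDigest` body would see only the selected keys; the `run/2` shape (`{:ok, map}` on success) is assumed here for the sketch, not taken from the Squid Mesh docs:

```elixir
defmodule Content.Steps.BuildDigest do
  # Hypothetical step body; with input: [:feed, :posted_on] it receives only
  # those keys, and with output: :digest the returned map is namespaced
  # under :digest for downstream steps. The run/2 contract is an assumption.
  def run(%{feed: feed, posted_on: posted_on}, _context) do
    titles = Enum.map(feed.entries, & &1.title)
    {:ok, %{date: posted_on, items: titles}}
  end
end
```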

Start the workflow through the public API and inspect the result with history:

```elixir
{:ok, run} =
  SquidMesh.start_run(Content.Workflows.PostDailyDigest, %{
    discord_webhook_url: webhook_url
  })

SquidMesh.inspect_run(run.id, include_history: true)
```

With history enabled, the inspected run includes both chronological `step_runs` and a graph-aware `steps` view, so host apps can render the workflow's dependency graph in a useful order.
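A host app might walk the historical view along these lines; everything below other than `step_runs` itself (the return shape and the per-step fields) is a guessed shape for illustration, not documented API:

```elixir
# Hypothetical: the result shape and per-step field names are assumptions.
{:ok, run} = SquidMesh.inspect_run(run_id, include_history: true)

for step_run <- run.step_runs do
  IO.puts("#{step_run.step} -> #{step_run.status}")
end
```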

## Documentation

Use the docs index for setup, workflow authoring, operations, and architecture:

## Contributing