Legion


Legion is an Elixir-native framework for building AI agents. Unlike traditional function-calling approaches, Legion agents generate and execute actual Elixir code, giving them the full power of the language while staying safely sandboxed.

Quick Start

1. Define your tools

Tools are regular Elixir modules that expose functions to your agents:

defmodule MyApp.Tools.ScraperTool do
  use Legion.Tool

  @doc "Fetches recent posts from HackerNews"
  def fetch_posts do
    Req.get!("https://hn.algolia.com/api/v1/search_by_date").body["hits"]
  end
end

defmodule MyApp.Tools.DatabaseTool do
  use Legion.Tool

  @doc "Saves a post title to the database"
  def insert_post(title), do: Repo.insert!(%Post{title: title})
end

2. Define an Agent

Agents are long- or short-lived Elixir processes that maintain state and can be messaged.

defmodule MyApp.ResearchAgent do
  @moduledoc """
  Fetch posts, evaluate their relevance and quality, and save the good ones.
  """
  use Legion.Agent

  def tools, do: [MyApp.Tools.ScraperTool, MyApp.Tools.DatabaseTool]
end

3. Run the Agent

{:ok, result} = Legion.execute(MyApp.ResearchAgent, "Find cool Elixir posts about Advent of Code and save them")
# => {:ok, "Found 3 relevant posts and saved 2 that met quality criteria."}

Installation

Add legion to your list of dependencies in mix.exs:

def deps do
  [
    {:legion, "~> 0.2"}
  ]
end

Configure your LLM API key (see req_llm configuration for all options):

# config/runtime.exs
config :req_llm, openai_api_key: System.get_env("OPENAI_API_KEY")

How It Works

When you ask an agent: “Find cool Elixir posts about Advent of Code and save them”

The agent first fetches and filters relevant posts:

ScraperTool.fetch_posts()
|> Enum.filter(fn post ->
  title = String.downcase(post["title"] || "")
  String.contains?(title, "elixir") and String.contains?(title, "advent")
end)

The LLM reviews the results, decides which posts are actually “cool”, then saves them:

["Elixir Advent of Code 2024 - Day 5 walkthrough", "My first AoC in Elixir!"]
|> Enum.each(&DatabaseTool.insert_post/1)

Traditional function-calling would need dozens of round-trips. Legion lets the LLM write expressive pipelines and make subjective judgments at the same time.
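To make the contrast concrete, here is a sketch of the kind of single snippet an agent could generate to do both steps at once (using the tool modules from the Quick Start; in practice the LLM would insert its subjective relevance judgment between the filter and the save):

```elixir
# Hypothetical single-pass pipeline an agent might generate:
# fetch posts, filter mechanically, then save the selected titles.
ScraperTool.fetch_posts()
|> Enum.filter(fn post ->
  title = String.downcase(post["title"] || "")
  String.contains?(title, "elixir") and String.contains?(title, "advent")
end)
|> Enum.map(& &1["title"])
|> Enum.each(&DatabaseTool.insert_post/1)
```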

Long-lived Agents

For multi-turn conversations or persistent agents:

# Start an agent that maintains context
{:ok, pid} = Legion.start_link(MyApp.AssistantAgent, "Help me analyze this data")

# Send follow-up messages
{:ok, response} = Legion.call(pid, "Now filter for items over $100")

# Or fire-and-forget
Legion.cast(pid, "Also check the reviews")

Configuration

Configure Legion in your config/config.exs:

config :legion, :config, %{
  model: "openai:gpt-4o-mini",
  max_iterations: 10,
  max_retries: 3,
  sandbox_timeout: 60_000,
  share_bindings: false
}

Agents can override global settings:

defmodule MyApp.DataAgent do
  use Legion.Agent

  def tools, do: [MyApp.HTTPTool]
  def config, do: %{model: "anthropic:claude-sonnet-4-20250514", max_iterations: 5}
end

Agent Callbacks

All callbacks are optional with sensible defaults:

Callback          Default                 Description
tools/0           []                      Tool modules available to the agent
description/0     @moduledoc              Agent description for the system prompt
output_schema/0   %{"type" => "string"}   JSON Schema for structured output
tool_config/1     []                      Per-tool keyword config
system_prompt/0   auto-generated          Override the entire system prompt
config/0          %{}                     Model, timeouts, limits

defmodule MyApp.DataAgent do
  use Legion.Agent

  def tools, do: [MyApp.HTTPTool]

  # Structured output schema
  def output_schema do
    [
      summary: [type: :string, required: true],
      count: [type: :integer, required: true]
    ]
  end

  # Additional instructions for the LLM
  def system_prompt do
    "Always validate URLs before fetching. Prefer JSON responses."
  end

  # Pass options to specific tools (accessible via Vault)
  def tool_config(MyApp.HTTPTool), do: [timeout: 10_000]
end

Authorization

To authorize tool calls for a specific user, put auth data into Vault before starting the agent and read it inside the tool. LLM-generated code has no access to Vault.

# Before starting the agent
Vault.init(:current_user, %{id: user.id})

{:ok, result} = Legion.execute(MyApp.PostsAgent, "Find my posts from today and summarize them")

# Inside your tool
defmodule MyApp.Tools.PostsTool do
  use Legion.Tool

  import Ecto.Query

  def get_my_posts do
    %{id: user_id} = Vault.get(:current_user)
    Repo.all(from p in Post, where: p.user_id == ^user_id)
  end
end

Human-in-the-Loop Tool

Request human input during agent execution:

# Agent can use built-in HumanTool (if you allow it to)
HumanTool.ask("Should I proceed with this operation?")

# Your application responds
Legion.call(agent_pid, {:respond, "Yes, proceed"})

Multi-Agent Systems

Agents can spawn and communicate with other agents using the built-in AgentTool:

defmodule MyApp.OrchestratorAgent do
  use Legion.Agent

  def tools, do: [Legion.Tools.AgentTool, MyApp.Tools.DatabaseTool]
  def tool_config(Legion.Tools.AgentTool), do: [agents: [MyApp.ResearchAgent, MyApp.WriterAgent]]
end

The orchestrator agent can then delegate tasks:

# One-off task delegation
{:ok, research} = AgentTool.call(MyApp.ResearchAgent, "Find info about Elixir 1.18")

# Start a long-lived sub-agent
{:ok, pid} = AgentTool.start_link(MyApp.WriterAgent, "Write a blog post")
AgentTool.cast(pid, "Add a section about pattern matching")
{:ok, draft} = AgentTool.call(pid, "Show me what you have so far")

Agent Pools

Since agents are regular BEAM processes, you can use Erlang’s :pg (process groups) to create agent pools with no external infrastructure:

# Spawn a pool of support agents
for _ <- 1..5 do
  {:ok, pid} = Legion.start_link(SupportAgent)
  :pg.join(:support_pool, pid)
end

# Route each incoming ticket to a random agent in the pool
defp handle_ticket(ticket) do
  pool = :pg.get_members(:support_pool)
  agent = Enum.random(pool)
  Legion.cast(agent, "Handle this support ticket: #{ticket}")
end

Hot Code Reloading

Since tools and agents are regular Elixir modules, the BEAM’s hot code reloading works out of the box. You can update tool implementations, swap agent behaviors, or add entirely new capabilities to running agents — without restarting the VM, without dropping conversations, without losing state.
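For example, after editing a tool module on disk you can recompile it in a running IEx session; agent processes pick up the new code on their next call. This is the standard IEx workflow, nothing Legion-specific:

```elixir
# In a running `iex -S mix` session:
# recompile a single module after editing it...
r MyApp.Tools.ScraperTool

# ...or recompile and reload the whole project.
recompile()
```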

Telemetry

Legion emits telemetry events for observability. Attach the default logger to log them:

Legion.Telemetry.attach_default_logger()

Legion also emits Req telemetry events for the HTTP requests it makes.
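If Legion's events follow standard :telemetry conventions, you can also attach your own handler. The event name below is an illustrative guess, not a documented Legion event:

```elixir
# Hypothetical handler -- [:legion, :agent, :stop] is an assumed
# event name, not taken from Legion's documentation.
:telemetry.attach(
  "log-agent-runs",
  [:legion, :agent, :stop],
  fn _event, measurements, metadata, _config ->
    IO.inspect({measurements, metadata}, label: "agent finished")
  end,
  nil
)
```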

Limitations

Sandboxing

Legion’s sandbox restricts what LLM-generated code can do — but it is not a full process isolation sandbox yet. Generated code runs inside the same BEAM VM as your application.

What the sandbox does:

What it does not do:

The practical implication: Legion is designed for trusted code generators (your own LLM-backed agents with controlled tool access), not for running arbitrary untrusted code from unknown sources. If your threat model requires full process isolation, run Legion agents in a separate, isolated BEAM instance.

License

MIT License - see LICENSE for details.