# Alloy
Minimal, OTP-native agent loop for Elixir.
Alloy is the completion-tool-call loop and nothing else. Send messages to any LLM, execute tool calls, loop until done. Swap providers with one line. Run agents as supervised GenServers. No opinions on sessions, persistence, memory, scheduling, or UI — those belong in your application.
{:ok, result} = Alloy.run("Read mix.exs and tell me the version",
provider: {Alloy.Provider.OpenAI, api_key: System.get_env("OPENAI_API_KEY"), model: "gpt-5.4"},
tools: [Alloy.Tool.Core.Read]
)
result.text #=> "The version is 0.7.4"Why Alloy?
Most agent frameworks try to be everything — sessions, memory, RAG, multi-agent orchestration, scheduling, UI. Alloy does one thing well: the agent loop. Inspired by Pi Agent's minimalism, Alloy brings the same philosophy to the BEAM with OTP's natural advantages: supervision, fault isolation, parallel tool execution, and real concurrency.
- 3 providers — Anthropic, OpenAI, and OpenAICompat (works with any OpenAI-compatible API: Ollama, OpenRouter, xAI, DeepSeek, Mistral, Groq, Together, etc.)
- 4 built-in tools — read, write, edit, bash
- GenServer agents — supervised, stateful, message-passing
- Streaming — token-by-token from any provider, unified interface
- Async dispatch — `send_message/2` fires non-blocking, result arrives via PubSub
- Middleware — custom hooks, tool blocking
- Context compaction — automatic summarization when approaching token limits
- OTP-native — supervision trees, hot code reloading, real parallel tool execution
- ~5,000 lines — small enough to read, understand, and extend
## Installation

Add `alloy` to your dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:alloy, "~> 0.7"}
  ]
end
```

## Quick Start
### Simple completion

```elixir
{:ok, result} = Alloy.run("What is 2+2?",
  provider: {Alloy.Provider.Anthropic, api_key: "sk-ant-...", model: "claude-sonnet-4-6"}
)

result.text #=> "4"
```

### Agent with tools
{:ok, result} = Alloy.run("Read mix.exs and summarize the dependencies",
provider: {Alloy.Provider.OpenAICompat,
api_url: "https://generativelanguage.googleapis.com",
chat_path: "/v1beta/openai/chat/completions",
api_key: "...", model: "gemini-2.5-flash-lite"},
tools: [Alloy.Tool.Core.Read, Alloy.Tool.Core.Bash],
max_turns: 10
)
Gemini model IDs that Alloy now budgets for include `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-3-pro-preview`, and `gemini-3-flash-preview`.
### Swap providers in one line

```elixir
# The same tools and conversation work with any provider
opts = [tools: [Alloy.Tool.Core.Read], max_turns: 10]

# Anthropic
Alloy.run("Read mix.exs", [{:provider, {Alloy.Provider.Anthropic, api_key: "...", model: "claude-sonnet-4-6"}} | opts])

# OpenAI
Alloy.run("Read mix.exs", [{:provider, {Alloy.Provider.OpenAI, api_key: "...", model: "gpt-5.4"}} | opts])

# xAI via Responses-compatible API
Alloy.run("Read mix.exs", [{:provider, {Alloy.Provider.OpenAI, api_key: "...", api_url: "https://api.x.ai", model: "grok-4"}} | opts])

# Any OpenAI-compatible API (Ollama, OpenRouter, xAI, DeepSeek, Mistral, Groq, etc.)
Alloy.run("Read mix.exs", [{:provider, {Alloy.Provider.OpenAICompat, api_url: "http://localhost:11434", model: "llama4"}} | opts])
```

## Streaming
Stream tokens as they arrive — works with every provider:
```elixir
{:ok, agent} = Alloy.Agent.Server.start_link(
  provider: {Alloy.Provider.OpenAI, api_key: "...", model: "gpt-5.4"},
  tools: [Alloy.Tool.Core.Read]
)

{:ok, result} = Alloy.Agent.Server.stream_chat(agent, "Explain OTP", fn chunk ->
  IO.write(chunk) # Print each token as it arrives
end)
```
All providers support streaming. If a custom provider doesn't implement `stream/4`, the turn loop falls back to `complete/3` automatically.
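A minimal sketch of a custom provider relying on that fallback. The `complete/3` and `stream/4` arities come from the note above; the argument names and return shape here are assumptions, not the documented signature:

```elixir
defmodule MyApp.Provider.Stub do
  # Sketch only: implements complete/3 and omits the optional stream/4.
  # Argument names and the return shape are assumptions.
  @behaviour Alloy.Provider

  @impl true
  def complete(_messages, _tools, _opts) do
    # Translate the conversation to your wire format, call the API,
    # then translate the response back into an Alloy message. Stubbed here.
    {:error, :not_implemented}
  end

  # No stream/4 here; the turn loop falls back to complete/3 automatically.
end
```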
## Overriding model metadata
Alloy derives the compaction budget from the configured provider model when it knows that model's context window. If you need to support a just-released model before Alloy ships a catalog update, override it in config:
{:ok, result} = Alloy.run("Summarise this repository",
provider: {Alloy.Provider.OpenAI, api_key: "...", model: "gpt-5.4-2026-03-05"},
model_metadata_overrides: %{
"gpt-5.4" => 900_000,
"acme-reasoner" => %{limit: 640_000, suffix_patterns: ["", ~r/^-\d{4}\.\d{2}$/]}
}
)
Set `max_tokens` explicitly when you want a fixed compaction budget. Otherwise Alloy derives it from the current model, including after `Alloy.Agent.Server.set_model/2` switches to a different provider model.
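For example, to pin the budget regardless of model. A sketch assuming `max_tokens` is passed like any other `Alloy.run/2` option; the value is illustrative:

```elixir
{:ok, result} = Alloy.run("Summarise this repository",
  provider: {Alloy.Provider.OpenAI, api_key: "...", model: "gpt-5.4"},
  max_tokens: 200_000 # fixed compaction budget, illustrative value
)
```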
## Supervised GenServer agent

```elixir
{:ok, agent} = Alloy.Agent.Server.start_link(
  provider: {Alloy.Provider.Anthropic, api_key: "...", model: "claude-sonnet-4-6"},
  tools: [Alloy.Tool.Core.Read, Alloy.Tool.Core.Edit, Alloy.Tool.Core.Bash],
  system_prompt: "You are a senior Elixir developer."
)

{:ok, response} = Alloy.Agent.Server.chat(agent, "What does this project do?")
{:ok, response} = Alloy.Agent.Server.chat(agent, "Now refactor the main module")
```
## Async dispatch (Phoenix LiveView)

Fire a message without blocking the caller — ideal for LiveView and background jobs:
```elixir
# Subscribe to receive the result
Phoenix.PubSub.subscribe(MyApp.PubSub, "agent:#{session_id}:responses")

# Returns {:ok, request_id} immediately — agent works in the background
{:ok, req_id} = Alloy.Agent.Server.send_message(agent, "Summarise this report",
  request_id: "req-123"
)

# Handle the result whenever it arrives
def handle_info({:agent_response, %{text: text, request_id: "req-123"}}, socket) do
  {:noreply, assign(socket, :response, text)}
end
```

## Providers
| Vendor | Recommended Module | Example Models |
|---|---|---|
| Anthropic | `Alloy.Provider.Anthropic` | `claude-opus-4-6`, `claude-sonnet-4-6`, `claude-haiku-4-5` |
| OpenAI | `Alloy.Provider.OpenAI` | `gpt-5.4` |
| xAI | `Alloy.Provider.OpenAI` with `api_url: "https://api.x.ai"` | `grok-4`, `grok-4-fast-reasoning`, `grok-code-fast-1` |
| Gemini | `Alloy.Provider.OpenAICompat` | `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-3-pro-preview` |
| Other OpenAI-compatible APIs | `Alloy.Provider.OpenAICompat` | Ollama, OpenRouter, DeepSeek, Mistral, Groq, Together |
Use `Alloy.Provider.OpenAI` for native Responses APIs like OpenAI and xAI. Use `Alloy.Provider.OpenAICompat` for chat-completions-compatible APIs and local runtimes. `OpenAICompat` works with any API that implements the OpenAI chat completions format: just set `api_url`, `model`, and optionally `api_key` and `chat_path`.
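For instance, a hosted OpenAI-compatible endpoint needs only those options. A sketch; the DeepSeek URL and model name below are illustrative, not tested configuration:

```elixir
# Illustrative: DeepSeek via its OpenAI-compatible chat completions API
{:ok, result} = Alloy.run("Read mix.exs",
  provider: {Alloy.Provider.OpenAICompat,
    api_url: "https://api.deepseek.com",
    api_key: System.get_env("DEEPSEEK_API_KEY"),
    model: "deepseek-chat"},
  tools: [Alloy.Tool.Core.Read]
)
```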
## Built-in Tools

| Tool | Module | Description |
|---|---|---|
| read | `Alloy.Tool.Core.Read` | Read files from disk |
| write | `Alloy.Tool.Core.Write` | Write files to disk |
| edit | `Alloy.Tool.Core.Edit` | Search-and-replace editing |
| bash | `Alloy.Tool.Core.Bash` | Execute shell commands (restricted shell by default) |
## Custom tools

```elixir
defmodule MyApp.Tools.WebSearch do
  @behaviour Alloy.Tool

  @impl true
  def name, do: "web_search"

  @impl true
  def description, do: "Search the web for information"

  @impl true
  def input_schema do
    %{
      type: "object",
      properties: %{query: %{type: "string", description: "Search query"}},
      required: ["query"]
    }
  end

  @impl true
  def execute(%{"query" => query}, _context) do
    # Your implementation here
    {:ok, "Results for: #{query}"}
  end
end
```
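Custom tools go in the same `tools` list as the built-ins (the prompt below is illustrative):

```elixir
{:ok, result} = Alloy.run("Search for the latest Elixir release",
  provider: {Alloy.Provider.Anthropic, api_key: "...", model: "claude-sonnet-4-6"},
  tools: [MyApp.Tools.WebSearch, Alloy.Tool.Core.Read]
)
```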
## Code execution (Anthropic)

Enable Anthropic's server-side code execution sandbox:

```elixir
{:ok, result} = Alloy.run("Calculate the first 20 Fibonacci numbers",
  provider: {Alloy.Provider.Anthropic, api_key: "...", model: "claude-sonnet-4-6"},
  code_execution: true
)
```

## Architecture
| Module | Responsibility |
|---|---|
| `Alloy.run/2` | One-shot agent loop (pure function) |
| `Alloy.Agent.Server` | GenServer wrapper (stateful, supervisable) |
| `Alloy.Agent.Turn` | Single turn: call provider → execute tools → return |
| `Alloy.Provider` | Behaviour: translate wire format ↔ `Alloy.Message` |
| `Alloy.Tool` | Behaviour: name, description, input_schema, execute |
| `Alloy.Middleware` | Pipeline: custom hooks, tool blocking |
| `Alloy.Context.Compactor` | Automatic conversation summarization |

Sessions, persistence, multi-agent coordination, scheduling, skills, and UI belong in your application layer. See Anvil for a reference Phoenix application built on Alloy.
## License
MIT — see LICENSE.
## Releases
Hex.pm publishing is handled by GitHub Actions on `v*` tags.
Successful publishes also dispatch the landing-site version sync workflow.