Altar.AI

Unified AI adapter foundation for Elixir: protocol-based abstractions for multiple AI providers

Features

Supported Providers

All SDK dependencies are optional - Altar.AI works with whatever you have installed.

Installation

Add altar_ai to your list of dependencies in mix.exs:

def deps do
  [
    {:altar_ai, "~> 0.1.0"},
    # Optional: Add the AI SDKs you want to use
    # {:gemini, "~> 0.1.0"},
    # {:claude_agent_sdk, "~> 0.1.0"},
    # {:codex_sdk, "~> 0.1.0"}
  ]
end

Quick Start

Basic Usage

# Create an adapter
adapter = Altar.AI.Adapters.Gemini.new(api_key: "your-api-key")

# Generate text
{:ok, response} = Altar.AI.generate(adapter, "Explain Elixir protocols")
IO.puts(response.content)

# Check what the adapter can do
Altar.AI.capabilities(adapter)
#=> %{generate: true, stream: true, embed: true, batch_embed: true, ...}

Composite Adapters with Fallbacks

# Create a composite that tries multiple providers
composite = Altar.AI.Adapters.Composite.new([
  Altar.AI.Adapters.Gemini.new(),
  Altar.AI.Adapters.Claude.new(),
  Altar.AI.Adapters.Fallback.new()  # Always succeeds
])

# Or use the default chain (auto-detects available SDKs)
composite = Altar.AI.Adapters.Composite.default()

# Now generate with automatic fallback
{:ok, response} = Altar.AI.generate(composite, "Hello, world!")
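How Composite.default/0 auto-detects SDKs is not specified here, but optional-dependency detection in Elixir is conventionally done with Code.ensure_loaded?/1; a minimal sketch (module names illustrative, not Altar.AI internals):

```elixir
# Sketch of optional-SDK detection (an assumption, not Altar.AI's actual
# implementation). Code.ensure_loaded?/1 returns true only when the
# module's BEAM file can be loaded, i.e. the dependency is installed.
candidate_sdks = [Gemini, ClaudeAgentSDK, CodexSDK]

available_sdks = Enum.filter(candidate_sdks, &Code.ensure_loaded?/1)
# With none of the optional SDKs installed, this is [].
```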

Embeddings

adapter = Altar.AI.Adapters.Gemini.new()

# Single embedding
{:ok, vector} = Altar.AI.embed(adapter, "semantic search query")
length(vector)  #=> 768 (or model-specific dimension)

# Batch embeddings
{:ok, vectors} = Altar.AI.batch_embed(adapter, ["query 1", "query 2", "query 3"])
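Because embeddings come back as plain lists of floats, similarity math can be done directly in Elixir. For example, cosine similarity (the helper module below is illustrative, not part of Altar.AI):

```elixir
# Illustrative helper, not part of Altar.AI: cosine similarity between
# two embedding vectors such as those returned by Altar.AI.embed/2.
defmodule VectorMath do
  def cosine_similarity(a, b) do
    dot = Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    dot / (norm(a) * norm(b))
  end

  defp norm(v), do: :math.sqrt(Enum.reduce(v, 0.0, fn x, acc -> acc + x * x end))
end

VectorMath.cosine_similarity([1.0, 0.0], [1.0, 0.0])
#=> 1.0
```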

Classification

# Use fallback adapter for simple keyword-based classification
fallback = Altar.AI.Adapters.Fallback.new()

{:ok, classification} = Altar.AI.classify(
  fallback,
  "I love this product!",
  ["positive", "negative", "neutral"]
)

classification.label       #=> "positive"
classification.confidence  #=> 0.8
classification.all_scores  #=> %{"positive" => 0.8, "negative" => 0.2, "neutral" => 0.2}
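As a rough illustration of what keyword-based classification can look like, here is a standalone sketch. The keyword lists and the 0.8/0.2 scores are made up for the example; the Fallback adapter's actual heuristics may differ:

```elixir
# Standalone sketch, not Altar.AI source: a naive keyword classifier.
defmodule NaiveClassifier do
  @keywords %{
    "positive" => ["love", "great", "excellent"],
    "negative" => ["hate", "terrible", "awful"]
  }

  def classify(text, labels) do
    text = String.downcase(text)

    all_scores =
      Map.new(labels, fn label ->
        hits = Enum.count(Map.get(@keywords, label, []), &String.contains?(text, &1))
        {label, if(hits > 0, do: 0.8, else: 0.2)}
      end)

    {label, confidence} = Enum.max_by(all_scores, fn {_label, score} -> score end)
    %{label: label, confidence: confidence, all_scores: all_scores}
  end
end

result = NaiveClassifier.classify("I love this product!", ["positive", "negative", "neutral"])
result.label  #=> "positive"
```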

Code Generation

adapter = Altar.AI.Adapters.Codex.new()

# Generate code
{:ok, code_result} = Altar.AI.generate_code(
  adapter,
  "Create a fibonacci function in Elixir",
  language: "elixir"
)

IO.puts(code_result.code)

# Explain code
{:ok, explanation} = Altar.AI.explain_code(
  adapter,
  "def fib(0), do: 0\ndef fib(1), do: 1\ndef fib(n), do: fib(n-1) + fib(n-2)"
)

IO.puts(explanation)

Architecture

Altar.AI uses protocols instead of behaviours, providing several advantages:

  1. Runtime Dispatch - Protocols dispatch on adapter structs, allowing cleaner composite implementations
  2. Capability Detection - Easy runtime introspection of what each adapter supports
  3. Flexibility - Adapters only implement the protocols they support
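The protocol-based design can be sketched in plain Elixir. The protocol and struct below are illustrative, not Altar.AI's actual definitions:

```elixir
# Illustrative sketch of protocol-based dispatch, not Altar.AI source.
defprotocol TextGenerator do
  @doc "Generates text for a prompt, returning {:ok, content} or {:error, reason}."
  def generate(adapter, prompt)
end

defmodule EchoAdapter do
  defstruct []
end

defimpl TextGenerator, for: EchoAdapter do
  def generate(_adapter, prompt), do: {:ok, "Echo: " <> prompt}
end

TextGenerator.generate(%EchoAdapter{}, "hello")
#=> {:ok, "Echo: hello"}
```

Because dispatch happens on the adapter struct, a composite adapter can hold a list of such structs and call the same protocol function on each in turn until one succeeds.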

Core Protocols

Capability Detection

adapter = Altar.AI.Adapters.Gemini.new()

# Check specific capability
Altar.AI.supports?(adapter, :embed)  #=> true
Altar.AI.supports?(adapter, :classify)  #=> false

# Get all capabilities
Altar.AI.capabilities(adapter)
#=> %{
#=>   generate: true,
#=>   stream: true,
#=>   embed: true,
#=>   batch_embed: true,
#=>   classify: false,
#=>   generate_code: false,
#=>   explain_code: false
#=> }

# Human-readable description
Altar.AI.Capabilities.describe(adapter)
#=> "Gemini: text generation, streaming, embeddings, batch embeddings"

Testing

Altar.AI provides a Mock adapter for testing:

# Create a mock adapter
mock = Altar.AI.Adapters.Mock.new()

# Configure responses
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  {:ok, %Altar.AI.Response{content: "Test response", provider: :mock, model: "test"}}
)

# Use in tests
{:ok, response} = Altar.AI.generate(mock, "any prompt")
assert response.content == "Test response"

# Or use custom functions
mock = Altar.AI.Adapters.Mock.with_response(
  mock,
  :generate,
  fn prompt -> {:ok, %Altar.AI.Response{content: "Echo: #{prompt}"}} end
)

Telemetry

All operations emit telemetry events under [:altar, :ai]:

:telemetry.attach(
  "my-handler",
  [:altar, :ai, :generate, :stop],
  fn event, measurements, metadata, _config ->
    IO.inspect({event, measurements, metadata})
  end,
  nil
)

# Events:
# [:altar, :ai, :generate, :start]
# [:altar, :ai, :generate, :stop]
# [:altar, :ai, :generate, :exception]
# [:altar, :ai, :embed, :start]
# [:altar, :ai, :embed, :stop]
# ... and more
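If these events follow the standard :telemetry.span/3 convention (an assumption, not stated above), the :stop measurements carry a :duration in native time units, which System.convert_time_unit/3 can convert for logging:

```elixir
# Assumes the :telemetry.span/3 convention: :stop measurements include
# :duration in native time units. The handler below is illustrative.
log_duration = fn _event, measurements, _metadata, _config ->
  ms = System.convert_time_unit(measurements.duration, :native, :millisecond)
  IO.puts("AI call completed in #{ms} ms")
end

# Exercising the handler directly with a synthetic measurement:
fake_duration = System.convert_time_unit(42, :millisecond, :native)
log_duration.([:altar, :ai, :generate, :stop], %{duration: fake_duration}, %{}, nil)
# prints "AI call completed in 42 ms"
```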

Hexagonal Architecture

Altar.AI follows the Hexagonal (Ports & Adapters) architecture: the core protocols act as ports, and each provider integration (Gemini, Claude, Codex, Mock, Fallback) is an adapter plugged in behind them.

This makes it easy to swap providers, compose fallback chains, and test application code against the Mock adapter without touching business logic.

License

MIT License - see LICENSE for details

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments