LLM Models

LLM model metadata catalog with fast, capability-aware lookups. Use simple "provider:model" or "model@provider" specs, get validated Provider/Model structs, and select models by capabilities. Ships with a packaged snapshot; no network required by default.

Installation

Model metadata is refreshed regularly, so versions follow a date-based format (YYYY.MM.DD):

def deps do
  [
    {:llm_db, "~> 2025.11.0"}
  ]
end

model_spec (the main interface)

A model_spec is a string in one of two formats:

- "provider:model" (e.g. "openai:gpt-4o-mini")
- "model@provider" (e.g. "gpt-4o-mini@openai")

Both formats are recognized automatically and work interchangeably. Use the @ format when model specs appear in filenames, CI artifact names, or other filesystem contexts where colons are problematic.

Tuples {:provider_atom, "id"} also work (see the tuple example below), but prefer the string spec.

{:ok, model} = LLMDb.model("openai:gpt-4o-mini")
#=> %LLMDb.Model{id: "gpt-4o-mini", provider: :openai, ...}

{:ok, model} = LLMDb.model("gpt-4o-mini@openai")
#=> %LLMDb.Model{id: "gpt-4o-mini", provider: :openai, ...}
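
The tuple form mentioned above resolves to the same struct:

{:ok, model} = LLMDb.model({:openai, "gpt-4o-mini"})
#=> %LLMDb.Model{id: "gpt-4o-mini", provider: :openai, ...}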

Quick Start

# Get a model and read metadata
{:ok, model} = LLMDb.model("openai:gpt-4o-mini")
model.capabilities.tools.enabled  #=> true
model.cost.input                  #=> 0.15  (per 1M tokens)
model.limits.context              #=> 128_000

# Select a model by capabilities (returns {provider, id})
{:ok, {provider, id}} = LLMDb.select(
  require: [chat: true, tools: true, json_native: true],
  prefer:  [:openai, :anthropic]
)
{:ok, model} = LLMDb.model({provider, id})

# List providers
LLMDb.providers()
#=> [%LLMDb.Provider{id: :anthropic, ...}, %LLMDb.Provider{id: :openai, ...}]

# Check availability (allow/deny filters)
LLMDb.allowed?("openai:gpt-4o-mini") #=> true
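
Cost fields are expressed per 1M tokens, so estimating the price of a single request is simple arithmetic; a small sketch using the values shown above (assuming USD):

# Rough per-request cost estimate from the per-1M-token prices
{:ok, model} = LLMDb.model("openai:gpt-4o-mini")
input_tokens = 1_200
output_tokens = 300

input_tokens / 1_000_000 * model.cost.input +
  output_tokens / 1_000_000 * model.cost.output
#=> ~0.00036 (with input at 0.15 and output at 0.60 per 1M tokens)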

API Cheatsheet

See the full function docs on HexDocs.
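
As a quick reference, these are the calls used throughout this README (see HexDocs for full signatures and options):

LLMDb.model(spec)      # {:ok, %LLMDb.Model{}} for a "provider:model", "model@provider", or tuple spec
LLMDb.select(opts)     # {:ok, {provider, id}} for require:/prefer: options
LLMDb.providers()      # list of %LLMDb.Provider{} structs
LLMDb.allowed?(spec)   # true/false under the configured allow/deny filters
LLMDb.load(opts)       # {:ok, snapshot} with runtime allow/deny overrides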

Data Structures

Provider

%LLMDb.Provider{
  id: :openai,
  name: "OpenAI",
  base_url: "https://api.openai.com",
  env: ["OPENAI_API_KEY"],
  doc: "https://platform.openai.com/docs",
  extra: %{}
}
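
Providers come back from providers/0 as a plain list, so looking one up is ordinary Enum work; a minimal sketch:

# Find a single provider by id in the catalog list
openai = Enum.find(LLMDb.providers(), &(&1.id == :openai))
openai.base_url  #=> "https://api.openai.com"
openai.env       #=> ["OPENAI_API_KEY"]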

Model

%LLMDb.Model{
  id: "gpt-4o-mini",
  provider: :openai,
  name: "GPT-4o mini",
  family: "gpt-4o",
  limits: %{context: 128_000, output: 16_384},
  cost: %{input: 0.15, output: 0.60},
  capabilities: %{
    chat: true,
    tools: %{enabled: true, streaming: true},
    json: %{native: true, schema: true},
    streaming: %{text: true, tool_calls: true}
  },
  tags: [],
  deprecated?: false,
  aliases: [],
  extra: %{}
}
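
Because capabilities are nested maps, feature gating can be done with pattern matching; a minimal sketch using the struct above:

{:ok, model} = LLMDb.model("openai:gpt-4o-mini")

case model.capabilities do
  %{tools: %{enabled: true}, json: %{schema: true}} ->
    # safe to send tool definitions and request schema-constrained JSON
    :tools_with_json_schema

  %{chat: true} ->
    :plain_chat
end
#=> :tools_with_json_schema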

Configuration

The packaged snapshot loads automatically at app start. Optional runtime filters, preferences, and custom providers:

# config/runtime.exs
config :llm_db,
  filter: %{
    allow: :all,                     # :all or %{provider => [patterns]}
    deny: %{openai: ["*-preview"]}   # deny patterns override allow
  },
  prefer: [:openai, :anthropic],     # provider preference order
  custom: %{
    local: [
      name: "Local Provider",
      base_url: "http://localhost:8080",
      models: %{
        "llama-3" => %{capabilities: %{chat: true}},
        "mistral-7b" => %{capabilities: %{chat: true, tools: %{enabled: true}}}
      }
    ]
  }

Filter Examples

# Allow all, deny preview/beta models
config :llm_db,
  filter: %{
    allow: :all,
    deny: %{openai: ["*-preview", "*-beta"]}
  }

# Allow only specific model families
config :llm_db,
  filter: %{
    allow: %{
      anthropic: ["claude-3-haiku-*", "claude-3.5-sonnet-*"],
      openrouter: ["anthropic/claude-*"]
    },
    deny: %{}
  }

# Runtime override (widen/narrow filters without rebuild)
{:ok, _snapshot} = LLMDb.load(
  allow: %{openai: ["gpt-4o-*"]},
  deny: %{}
)
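
Assuming glob-style matching as the patterns above suggest, allowed?/1 reflects whichever filter is currently in effect; a sketch under the runtime override just shown:

# With allow: %{openai: ["gpt-4o-*"]} loaded
LLMDb.allowed?("openai:gpt-4o-mini")   #=> true, matches "gpt-4o-*"
LLMDb.allowed?("openai:gpt-3.5-turbo") #=> false, no allow pattern matches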

Custom Providers

Add local or private models to the catalog:

# config/runtime.exs
config :llm_db,
  custom: %{
    # Provider ID as key
    local: [
      name: "Local LLM Provider",
      base_url: "http://localhost:8080",
      env: ["LOCAL_API_KEY"],
      doc: "http://localhost:8080/docs",
      models: %{
        "llama-3-8b" => %{
          name: "Llama 3 8B",
          family: "llama-3",
          capabilities: %{chat: true, tools: %{enabled: true}},
          limits: %{context: 8192, output: 2048},
          cost: %{input: 0.0, output: 0.0}
        },
        "mistral-7b" => %{
          capabilities: %{chat: true}
        }
      }
    ],
    myprovider: [
      name: "My Custom Provider",
      models: %{
        "custom-model" => %{capabilities: %{chat: true}}
      }
    ]
  }

# Use custom models like any other
{:ok, model} = LLMDb.model("local:llama-3-8b")
{:ok, {provider, id}} = LLMDb.select(require: [chat: true], prefer: [:local, :openai])

Filter rules: deny patterns always override allow patterns, and an allow map restricts the catalog to the providers and patterns it lists. See the Runtime Filters guide for details and troubleshooting.

Updating Model Data

A snapshot is shipped with the library. To rebuild it with fresh data:

# Fetch upstream data (optional)
mix llm_db.pull

# Run ETL and write snapshot.json
mix llm_db.build

See the Sources & Engine guide for details.

Using with ReqLLM

LLMDb is designed to power ReqLLM but is fully standalone: pass a model_spec to model/1 to retrieve the metadata you need for API calls, as sketched below.
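
A sketch of that pattern; build_request below is a hypothetical application helper, not part of LLMDb or ReqLLM:

# Collect what an HTTP client needs from catalog metadata
build_request = fn spec ->
  {:ok, model} = LLMDb.model(spec)
  provider = Enum.find(LLMDb.providers(), &(&1.id == model.provider))

  %{
    url: provider.base_url,
    api_key: System.get_env(hd(provider.env)),
    model: model.id,
    max_tokens: model.limits.output
  }
end

build_request.("openai:gpt-4o-mini")
#=> %{url: "https://api.openai.com", model: "gpt-4o-mini", max_tokens: 16_384, ...}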

Docs & Guides

Guides referenced in this README:

- Runtime Filters
- Sources & Engine

License

MIT License - see LICENSE file for details.