# DotPrompt

A high-performance, native Elixir compiler for the DotPrompt language.



## What This Is

DotPrompt compiles `.prompt` files — a domain-specific language in which branching, fragments, and contracts resolve at compile time. Your LLM receives only the final, flat prompt string, with zero logic residue.

Unlike the TypeScript and Python clients (which talk to a runtime container), this is the native Elixir compiler — it runs in-process with no network overhead.

- Website: dotprompt.run
- Documentation: dotprompt.run/docs
- GitHub: github.com/dot-prompt/dot-prompt



## Installation

Add `dot_prompt` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:dot_prompt, "~> 1.0.0"}
  ]
end
```

### Prerequisites

Place your `.prompt` files in a `prompts/` directory at your project root, or configure a custom path:

```elixir
config :dot_prompt,
  prompts_dir: "priv/prompts"
```

## Quick Start

```elixir
# Compile a prompt with parameters
{:ok, result} = DotPrompt.compile("my_prompt", %{name: "World"})
IO.puts(result.prompt)
```

### Full Example: Compile + Render + Validate

```elixir
# Extract schema (parameters, types, contracts)
{:ok, schema} = DotPrompt.schema("teaching_prompt")
IO.inspect(schema.params)

# Compile: resolve branching, expand fragments, select variations
{:ok, compiled} = DotPrompt.compile("teaching_prompt", %{
  pattern_step: 2,
  variation: "recognition",
  answer_depth: "medium",
  skill_names: ["Milton Model"]
}, seed: 42) # deterministic variation selection

IO.puts("Template: #{compiled.prompt}")

# Render: inject runtime variables into the compiled template
{:ok, rendered} = DotPrompt.render("teaching_prompt",
  %{
    pattern_step: 2,
    variation: "recognition",
    answer_depth: "medium",
    skill_names: ["Milton Model"]
  },
  %{
    user_input: "Can you give me an example?",
    user_level: "intermediate"
  }
)

IO.puts("Final prompt: #{rendered.prompt}")

# Validate LLM response against the contract
response = Jason.encode!(%{score: 8, explanation: "Good answer"})
case DotPrompt.validate_output(response, compiled.response_contract) do
  :ok -> IO.puts("Valid response")
  {:error, reason} -> IO.puts("Invalid: #{reason}")
end
```

## API Reference

### Core Functions

| Function | Returns | Description |
|----------|---------|-------------|
| `list_prompts()` | `[String.t()]` | All available prompts, including fragments |
| `list_root_prompts()` | `[String.t()]` | Root-level prompts only (excludes fragments) |
| `list_fragment_prompts()` | `[String.t()]` | Fragment prompts only |
| `list_collections()` | `[String.t()]` | Root-level collections |
| `schema(prompt, major?)` | `{:ok, schema_info()} \| {:error, map()}` | Prompt metadata, params, fragments, contracts |
| `compile(prompt, params, opts?)` | `{:ok, Result.t()} \| {:error, map()}` | Resolve branching, expand fragments |
| `render(prompt, params, runtime, opts?)` | `{:ok, Result.t()} \| {:error, map()}` | Full compile plus runtime variable injection |
| `inject(template, runtime)` | `String.t()` | Inject runtime vars into a raw template |
| `compile_string(content, params, opts?)` | `{:ok, Result.t()} \| {:error, map()}` | Compile inline prompt content |
| `validate_output(response_json, contract, opts?)` | `:ok \| {:error, String.t()}` | Validate an LLM response against its contract |
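The listing functions above pair naturally with `schema/2` for discovering what is available at runtime. A minimal sketch (the prompt name `"my_prompt"` is a placeholder for one of your own files):

```elixir
# List every root-level prompt, then fetch the schema for one of them.
prompts = DotPrompt.list_root_prompts()
IO.inspect(prompts, label: "root prompts")

with {:ok, schema} <- DotPrompt.schema("my_prompt") do
  # schema.params describes the declared parameters and their types.
  IO.inspect(schema.params, label: "declared params")
end
```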

### Compile Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `seed` | `integer()` | | Deterministic variation selection |
| `indent` | `integer()` | `0` | Indentation level |
| `annotated` | `boolean()` | `false` | Keep section annotations in output |
| `major` | `integer()` | `nil` | Request a specific major version |
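The options above are passed as a keyword list in the third argument to `compile/3`. A sketch combining several of them (the prompt name is hypothetical):

```elixir
# seed pins variation selection across runs; annotated keeps section
# markers in the output, which is useful when debugging a prompt.
{:ok, result} =
  DotPrompt.compile("my_prompt", %{name: "World"},
    seed: 42,
    indent: 2,
    annotated: true
  )
```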

### DotPrompt.Result Struct

| Field | Type | Description |
|-------|------|-------------|
| `prompt` | `String.t()` | The compiled/rendered prompt string |
| `response_contract` | `map() \| nil` | JSON schema for the expected LLM output |
| `vary_selections` | `map()` | Which variation branches were selected |
| `compiled_tokens` | `integer()` | Estimated token count of the compiled prompt |
| `injected_tokens` | `integer()` | Estimated token count after runtime injection |
| `cache_hit` | `boolean()` | Whether the result came from cache |
| `major` | `integer() \| nil` | Major version of the compiled prompt |
| `version` | `integer() \| String.t() \| nil` | Full version |
| `metadata` | `map()` | Additional metadata (used vars, files, warnings, params) |
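Because results come back as a struct, the fields above are convenient to pattern-match and log. A minimal sketch (the prompt name is hypothetical):

```elixir
case DotPrompt.compile("my_prompt", %{name: "World"}) do
  {:ok, %DotPrompt.Result{} = result} ->
    IO.puts(result.prompt)
    IO.puts("tokens: #{result.compiled_tokens}, cache hit: #{result.cache_hit}")

  {:error, reason} ->
    IO.inspect(reason, label: "compile failed")
end
```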

### Cache Management

| Function | Returns | Description |
|----------|---------|-------------|
| `invalidate_cache(prompt)` | `:ok` | Invalidate the cache for a specific prompt |
| `invalidate_all_cache()` | `:ok` | Clear all caches |
| `cache_stats()` | `map()` | Current cache usage statistics |

## Configuration

```elixir
config :dot_prompt,
  prompts_dir: "priv/prompts"
```

| Option | Default | Description |
|--------|---------|-------------|
| `prompts_dir` | `"prompts"` | Directory containing `.prompt` files |

## Telemetry

DotPrompt emits `:telemetry` events you can attach to:

```elixir
:telemetry.attach(
  "dot-prompt-render",
  [:dot_prompt, :render, :stop],
  fn _event, measurements, metadata, _config ->
    IO.puts("Rendered #{metadata.prompt_name} in #{measurements.duration}ms")
    IO.puts("Tokens: #{measurements.compiled_tokens}")
    IO.puts("Cache hit: #{measurements.cache_hit}")
  end,
  []
)
```

## Example .prompt File

The compiler turns a source file like this:

```text
init do
  @major: 1

  params:
    @name: str -> user's name
    @mode: enum[formal, casual] = casual -> communication style

  fragments:
    {greeting}: static from: greetings
      match: @mode

end init

case @mode do
formal: Dear @name, welcome to our service.
casual: Hey @name! Great to see you.
end @mode
```

Into this clean string for your LLM:

```text
Dear Sarah, welcome to our service.
```

No branching logic. No fragment references. Just the instruction.
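Assuming the file above is saved as `prompts/welcome.prompt` (the file name is hypothetical), compiling it might look like:

```elixir
# @name is required; @mode defaults to "casual", so we pass "formal"
# explicitly to select the first branch of the case block.
{:ok, result} = DotPrompt.compile("welcome", %{name: "Sarah", mode: "formal"})
IO.puts(result.prompt)
```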


## Related Packages

| Package | Description |
|---------|-------------|
| `@dotprompt/client` | TypeScript client for the dotprompt runtime container |
| `dotprompt-client` | Python client for the dotprompt runtime container |
| `dot_prompt_server` | Phoenix web server and DevUI for the dotprompt runtime |

## License

Apache 2.0 — see LICENSE