# GEPA for Elixir
An Elixir implementation of GEPA (Genetic-Pareto), a framework for optimizing text-based system components using LLM-based reflection and Pareto-efficient evolutionary search.
## Installation

Add `gepa_ex` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:gepa_ex, "~> 0.1.2"}
  ]
end
```

## About GEPA
GEPA optimizes arbitrary systems composed of text components, like AI prompts, code snippets, or textual specs, against any evaluation metric. It employs LLMs to reflect on system behavior, using feedback from execution traces to drive targeted improvements.
This is an Elixir port of the Python GEPA library, designed to leverage:

- BEAM concurrency for 5-10x evaluation speedup (coming in Phase 4)
- OTP supervision for fault-tolerant external service integration
- Functional programming for clean, testable code
- Telemetry event schema for lifecycle, iteration, proposal, and evaluation metrics
- Production LLMs: OpenAI GPT-4o-mini and Google Gemini Flash Lite (`gemini-flash-lite-latest`)
## Production Ready

### Core Features
Optimization System:

- ✅ `GEPA.optimize/1` - Public API (working!)
- ✅ `GEPA.Engine` - Full optimization loop with stop conditions
- ✅ `GEPA.Proposer.Reflective` - Mutation strategy
- ✅ LLM-based instruction proposal via `reflection_llm` and custom templates
- ✅ `GEPA.State` - State management with automatic Pareto updates (96.5% coverage)
- ✅ `GEPA.Utils.Pareto` - Multi-objective optimization (93.5% coverage, property-verified)
- ✅ `GEPA.Result` - Result analysis (100% coverage)
- ✅ `GEPA.Adapters.Basic` - Q&A adapter (92.1% coverage)
- ✅ Stop conditions with budget control
- ✅ State persistence (save/load)
- ✅ Telemetry event emitters for runs, iterations, proposals, and evaluation batches
- ✅ End-to-end integration tested
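The telemetry emitters can be consumed with a standard `:telemetry` handler. A minimal sketch, assuming events are emitted under a `[:gepa, ...]` prefix; the event name and measurement/metadata keys shown here are illustrative assumptions, so consult the library's telemetry schema for the exact names:

```elixir
# Attach a handler for iteration events. `:telemetry.attach/4` is the
# standard BEAM telemetry API; the event name and keys are assumptions.
:telemetry.attach(
  "gepa-iteration-logger",
  [:gepa, :iteration, :stop],
  fn _event, measurements, metadata, _config ->
    IO.puts(
      "iteration #{inspect(metadata[:iteration])} " <>
        "took #{inspect(measurements[:duration])}"
    )
  end,
  nil
)
```

Detaching is `:telemetry.detach("gepa-iteration-logger")` when the handler is no longer needed.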
### Phase 1 Additions - NEW!

Production LLM Integration:

- ✅ `GEPA.LLM` - Unified LLM behavior
- ✅ `GEPA.LLM.ReqLLM` - Production implementation via ReqLLM
  - OpenAI support (GPT-4o-mini default)
  - Google Gemini support (`gemini-flash-lite-latest`)
  - Error handling, retries, timeouts
  - Configurable via environment or runtime
- ✅ `GEPA.LLM.Mock` - Testing implementation with flexible responses
Advanced Batch Sampling:

- ✅ `GEPA.Strategies.BatchSampler.EpochShuffled` - Epoch-based training with shuffling
- ✅ Reproducible with seed control
- ✅ Better training dynamics than simple sampling
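As a sketch of how seeded, epoch-based sampling might be wired into a run (the `seed:` and `batch_size:` option names and the `batch_sampler:` key are illustrative assumptions, not confirmed API):

```elixir
# Hypothetical construction; option names are illustrative assumptions.
# A fixed seed makes the epoch shuffling order reproducible across runs.
sampler = GEPA.Strategies.BatchSampler.EpochShuffled.new(seed: 42, batch_size: 4)

{:ok, result} =
  GEPA.optimize(
    seed_candidate: %{"instruction" => "You are a helpful assistant."},
    trainset: trainset,
    valset: valset,
    adapter: adapter,
    max_metric_calls: 50,
    batch_sampler: sampler
  )
```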
Working Examples:

- ✅ 4 `.exs` script examples (quick start, math, custom adapter, persistence)
- ✅ 3 Livebook notebooks (interactive learning)
- ✅ Comprehensive examples/README.md guide
- ✅ Livebook guide with visualizations
### Phase 2 Additions - NEW!

Merge Proposer:

- ✅ `GEPA.Proposer.Merge` - Genealogy-based candidate merging
- ✅ `GEPA.Utils` - Pareto dominator detection (93.3% coverage)
- ✅ `GEPA.Proposer.MergeUtils` - Ancestry tracking (92.3% coverage)
- ✅ Engine integration with merge scheduling
- ✅ 44 comprehensive tests (34 unit + 10 properties)
Incremental Evaluation:

- ✅ `GEPA.Strategies.EvaluationPolicy.Incremental` - Progressive validation
- ✅ Configurable sample sizes and thresholds
- ✅ Reduces computation on large validation sets
- ✅ 12 tests
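A sketch of how progressive validation might be configured (the `sample_size:` and `promotion_threshold:` options and the `evaluation_policy:` key are illustrative assumptions):

```elixir
# Evaluate new candidates on a small validation sample first, and only
# promote promising ones to the full set. Option names are assumptions.
policy =
  GEPA.Strategies.EvaluationPolicy.Incremental.new(
    sample_size: 10,
    promotion_threshold: 0.8
  )

{:ok, result} =
  GEPA.optimize(
    seed_candidate: %{"instruction" => "You are a helpful assistant."},
    trainset: trainset,
    valset: valset,
    adapter: adapter,
    max_metric_calls: 100,
    evaluation_policy: policy
  )
```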
Advanced Stop Conditions:

- ✅ `GEPA.StopCondition.Timeout` - Time-based stopping
- ✅ `GEPA.StopCondition.NoImprovement` - Early stopping
- ✅ Flexible time units and patience settings
- ✅ 9 tests
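These conditions could be combined with the metric-call budget along these lines (the constructor options and the `stop_conditions:` key are illustrative assumptions, not confirmed API):

```elixir
# Stop after 30 minutes, or after 5 iterations without improvement,
# whichever comes first. Option names here are illustrative assumptions.
stop_conditions = [
  GEPA.StopCondition.Timeout.new(minutes: 30),
  GEPA.StopCondition.NoImprovement.new(patience: 5)
]

{:ok, result} =
  GEPA.optimize(
    seed_candidate: %{"instruction" => "You are a helpful assistant."},
    trainset: trainset,
    valset: valset,
    adapter: adapter,
    max_metric_calls: 100,
    stop_conditions: stop_conditions
  )
```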
Test Quality:
- 201 tests (185 unit + 16 properties + 1 doctest)
- 100% passing ✅
- 75.4% coverage (excellent!)
- Property tests with 1,600+ runs
- Zero Dialyzer errors
- TDD methodology throughout
## What's Next?

✅ Phase 1: Production Viability - COMPLETE!

- ✅ Real LLM integration (OpenAI, Gemini)
- ✅ Quick start examples (4 scripts + 3 livebooks)
- ✅ EpochShuffledBatchSampler

✅ Phase 2: Core Completeness - COMPLETE!

- ✅ Merge proposer (genealogy-based recombination)
- ✅ IncrementalEvaluationPolicy (progressive validation)
- ✅ Additional stop conditions (Timeout, NoImprovement)
- ✅ Engine integration for merge proposer

Phase 3: Production Hardening - in progress

- ✅ Telemetry event schema and helpers
- Progress tracking (planned)
- Robust error handling (planned)

Phase 4: Ecosystem Expansion - 12-14 weeks

- Additional adapters (Generic, RAG)
- Performance optimization (parallel evaluation)
- Community infrastructure
## Quick Start

### With Mock LLM (No API Key Required)

```elixir
# Define training data
trainset = [
  %{input: "What is 2+2?", answer: "4"},
  %{input: "What is 3+3?", answer: "6"}
]

valset = [%{input: "What is 5+5?", answer: "10"}]

# Create adapter with mock LLM (for testing)
adapter = GEPA.Adapters.Basic.new(llm: GEPA.LLM.Mock.new())

# Run optimization
{:ok, result} = GEPA.optimize(
  seed_candidate: %{"instruction" => "You are a helpful assistant."},
  trainset: trainset,
  valset: valset,
  adapter: adapter,
  max_metric_calls: 50
)

# Access results
best_program = GEPA.Result.best_candidate(result)
best_score = GEPA.Result.best_score(result)

IO.puts("Best score: #{best_score}")
IO.puts("Iterations: #{result.i}")
```

### With Production LLMs (NEW!)
```elixir
# OpenAI (GPT-4o-mini) - Requires OPENAI_API_KEY
llm = GEPA.LLM.ReqLLM.new(provider: :openai)
adapter = GEPA.Adapters.Basic.new(llm: llm)

# Or Gemini (gemini-flash-lite-latest) - Requires GEMINI_API_KEY
llm = GEPA.LLM.ReqLLM.new(provider: :gemini)
adapter = GEPA.Adapters.Basic.new(llm: llm)

# Then run optimization as above
{:ok, result} = GEPA.optimize(
  seed_candidate: %{"instruction" => "..."},
  trainset: trainset,
  valset: valset,
  adapter: adapter,
  max_metric_calls: 50
)
```

See the examples overview for complete working examples!
## Candidate Selection Strategies (NEW)

GEPA includes multiple candidate selectors to balance exploration vs. exploitation:

- `GEPA.Strategies.CandidateSelector.Pareto` (default): frequency-weighted sampling from the Pareto front
- `GEPA.Strategies.CandidateSelector.CurrentBest`: always pick the best-scoring program
- `GEPA.Strategies.CandidateSelector.EpsilonGreedy`: configurable exploration with optional epsilon decay

Stateful selectors (like epsilon-greedy) are carried forward automatically, so decay persists across iterations.
To enable epsilon-greedy with decay:

```elixir
selector =
  GEPA.Strategies.CandidateSelector.EpsilonGreedy.new(
    epsilon: 0.3,
    epsilon_decay: 0.95,
    epsilon_min: 0.05
  )

{:ok, result} =
  GEPA.optimize(
    seed_candidate: %{"instruction" => "..."},
    trainset: trainset,
    valset: valset,
    adapter: adapter,
    max_metric_calls: 50,
    candidate_selector: selector
  )
```

## LLM-Based Instruction Proposal (NEW!)
Use an LLM to propose improved component instructions based on reflective feedback. You can also provide a custom proposal template.
```elixir
reflection_llm = GEPA.LLM.ReqLLM.new(provider: :openai, model: "gpt-4o-mini")

custom_template = """
Improve {component_name}:
Current: {current_instruction}
Feedback: {reflective_dataset}
New instruction:
"""

{:ok, result} = GEPA.optimize(
  seed_candidate: %{"instruction" => "You are a concise math tutor."},
  trainset: trainset,
  valset: valset,
  adapter: adapter,
  max_metric_calls: 50,
  reflection_llm: reflection_llm,
  proposal_template: custom_template
)
```
When `reflection_llm` is not provided, GEPA falls back to a simple testing-only improvement marker (`"[Optimized]"`).
## Interactive Livebooks (NEW!)

For interactive learning and experimentation:

```shell
# Install Livebook
mix escript.install hex livebook

# Open a livebook
livebook server livebooks/01_quick_start.livemd
```

Available Livebooks:

- `01_quick_start.livemd` - Interactive introduction
- `02_advanced_optimization.livemd` - Parameter tuning and visualization
- `03_custom_adapter.livemd` - Build adapters interactively

See livebooks/README.md for details!
### With State Persistence

```elixir
{:ok, result} = GEPA.optimize(
  seed_candidate: seed,
  trainset: trainset,
  valset: valset,
  adapter: GEPA.Adapters.Basic.new(),
  max_metric_calls: 100,
  run_dir: "./my_optimization"  # State saved here, can resume
)
```

## Development
```shell
# Get dependencies
mix deps.get

# Run tests
mix test

# Run with coverage
mix test --cover

# Run specific tests
mix test test/gepa/utils/pareto_test.exs

# Format code
mix format

# Type checking
mix dialyzer
```

## Architecture
Based on behavior-driven design with a functional core:

```
GEPA.optimize/1
      ↓
GEPA.Engine → Behaviors → User Implementations
  ├── Adapter (evaluate, reflect, propose)
  ├── Proposer (reflective, merge)
  ├── Strategies (selection, sampling, evaluation)
  └── StopCondition (budget, time, threshold)
```

## Documentation
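Since adapters are a behavior, a custom one is just a module implementing the adapter callbacks. A minimal sketch, assuming the behaviour is named `GEPA.Adapter` and its callbacks mirror the diagram above; the callback names and signatures here are assumptions, so see `GEPA.Adapters.Basic` for the real contract:

```elixir
defmodule MyApp.ExactMatchAdapter do
  # Hypothetical sketch of a custom adapter; the behaviour name and
  # callback signatures are assumptions modeled on the diagram above.
  @behaviour GEPA.Adapter

  @impl true
  def evaluate(candidate, batch, _opts) do
    # Score each example by exact match against the expected answer.
    Enum.map(batch, fn example ->
      output = run_model(candidate["instruction"], example.input)
      score = if output == example.answer, do: 1.0, else: 0.0
      %{score: score, output: output, example: example}
    end)
  end

  @impl true
  def reflect(eval_results, _opts) do
    # Turn failing traces into a reflective dataset for the proposer.
    Enum.filter(eval_results, &(&1.score < 1.0))
  end

  # Stand-in for the actual LLM call.
  defp run_model(_instruction, _input), do: "stub"
end
```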
### Technical Documentation
- Technical Design
- LLM Adapter Design - Design for real LLM integration
- Completing the Port (Plans)
## Changelog

### v0.1.2 (2025-11-29)
- Epsilon-greedy candidate selector with decay/reset and stateful selector support in engine/proposer
- Telemetry event schema and LLM-backed instruction proposal with custom templates
- Reflective proposer consumes instruction proposals with fallback marker when no LLM is provided
- Docs for completing the port and telemetry-first experiment tracking
### v0.1.1 (2025-11-29)
- Documentation cleanup and release tagging
### v0.1.0 (2025-10-29)
- Initial release with Phase 1 & 2 complete
- Production LLM integration (OpenAI GPT-4o-mini, Google Gemini Flash Lite)
- Core optimization engine with reflective and merge proposers
- Incremental evaluation and advanced stop conditions
- 218 tests passing with 75.4% coverage
## Related Projects
- GEPA Python - Original implementation
- GEPA Paper - Research paper