Scout

Production-Ready Hyperparameter Optimization for Elixir

Build Status | Coverage | Hex.pm | Documentation | License: MIT | Docker

Scout is a production-ready hyperparameter optimization framework for Elixir with high feature parity with Optuna. It leverages the BEAM platform for fault tolerance, real-time dashboards, and native distributed computing.

Quick Start

Try it now - no database required!

git clone https://github.com/your-org/scout
cd scout && mix deps.get
cd apps/scout_core && mix run ../../quick_start.exs

See QUICK_START.md for a complete 30-second tutorial.

From code:

# Start Scout (uses ETS - no database needed)
Application.ensure_all_started(:scout_core)

# Optimize like Optuna
result = Scout.Easy.optimize(
  fn params -> train_model(params) end,
  %{learning_rate: {:log_uniform, 1.0e-5, 1.0e-1}, n_layers: {:int, 2, 8}},
  n_trials: 100
)

IO.puts("Best: #{result.best_value} with #{inspect(result.best_params)}")

Why Scout?

High Feature Parity with Optuna

All major Optuna features are implemented and validated on standard benchmarks.

Real-Time Dashboard

BEAM Platform Advantages

Production-Ready Infrastructure

Performance Benchmarks

Scout's sampler implementations are validated on standard optimization benchmarks with 90%+ test coverage.

Scout vs Optuna: Side-by-Side Comparison

RandomSampler on standard test functions (100 trials, 10 runs, mean ± std):

Function         | Scout        | Optuna 3.x   | Difference
Sphere (5D)      | 8.21 ± 2.28  | 8.39 ± 3.37  | -2.1%
Rosenbrock (2D)  | 0.29 ± 0.34  | 0.84 ± 0.57  | +190% (Scout better)
Rastrigin (5D)   | 32.55 ± 9.07 | 34.13 ± 8.59 | -4.6%
Ackley (2D)      | 2.36 ± 1.21  | 2.66 ± 0.84  | -11.3%

Result: Scout's RandomSampler shows statistically equivalent performance to Optuna on 3/4 benchmarks, with better performance on Rosenbrock's narrow valley function.

Methodology: Identical conditions (same bounds, same trial count, same random sampling algorithm). Run python3 benchmark_optuna_comparison.py to reproduce.
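The random-search baseline itself needs nothing from Scout. A self-contained sketch of the Sphere (5D) setup, assuming the conventional bounds of ±5.12 (the exact benchmark bounds are not stated here):

```elixir
# Pure random search on the 5-D Sphere function (global minimum 0.0 at the
# origin). Bounds of [-5.12, 5.12] are the conventional choice, assumed here.
sphere = fn xs -> Enum.reduce(xs, 0.0, fn x, acc -> acc + x * x end) end

# Draw one uniform point in the 5-D box
sample = fn -> for _ <- 1..5, do: -5.12 + :rand.uniform() * 10.24 end

# Best objective value seen across 100 random trials
best =
  1..100
  |> Enum.map(fn _ -> sphere.(sample.()) end)
  |> Enum.min()
```

Repeating runs like this and averaging is how the mean ± std figures in the table are produced.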

v0.3.0 Comprehensive Benchmarks

See BENCHMARK_RESULTS.md for the complete analysis.

Installation

# mix.exs
defp deps do
  [
    # Pulls in Phoenix LiveView, Oban, and Ecto automatically
    {:scout, "~> 0.3"}
  ]
end
# Setup
mix deps.get
cp config.sample.exs config/config.exs  # Configure database
mix ecto.create && mix ecto.migrate

# Run with dashboard
mix scout.server
# Dashboard: http://localhost:4050

Docker Deployment

# Quick start
git clone <scout-repo>
cd scout
docker-compose up -d

# Access services
# Scout Dashboard: http://localhost:4050
# Grafana Monitoring: http://localhost:3000
# Prometheus Metrics: http://localhost:9090

Kubernetes Deployment

# Production deployment
kubectl apply -f k8s/postgres.yaml    # Database
kubectl apply -f k8s/secrets.yaml     # Configuration
kubectl apply -f k8s/deployment.yaml  # 3-replica Scout app

# Auto-scaling, persistence, monitoring included

Real ML Example

# Neural network hyperparameter optimization
result = Scout.Easy.optimize(
  fn params, report_fn ->
    model = build_model(
      layers: params.n_layers,
      neurons: params.neurons,
      dropout: params.dropout
    )

    # Train with early stopping; catch the pruning signal so an
    # uncaught throw does not crash the trial process
    try do
      for epoch <- 1..20 do
        loss = train_epoch(model, params.learning_rate, params.batch_size)

        case report_fn.(loss, epoch) do
          :continue -> :ok
          :prune -> throw(:early_stop)  # Hyperband pruning
        end
      end

      validate_model(model)
    catch
      :early_stop -> validate_model(model)  # score the partially trained model
    end
  end,
  %{
    # Architecture
    n_layers: {:int, 2, 8},
    neurons: {:int, 32, 512},
    dropout: {:uniform, 0.1, 0.5},

    # Training
    learning_rate: {:log_uniform, 1.0e-5, 1.0e-1},
    batch_size: {:choice, [16, 32, 64, 128]},
    optimizer: {:choice, ["adam", "sgd", "rmsprop"]}
  },
  n_trials: 100,
  sampler: :tpe,           # Tree-structured Parzen Estimator
  pruner: :hyperband,      # Aggressive early stopping
  parallelism: 4,          # 4 concurrent trials
  dashboard: true          # Real-time monitoring
)

# Live dashboard shows progress at http://localhost:4050
IO.puts("Best accuracy: #{result.best_value}")
IO.puts("Best params: #{inspect(result.best_params)}")
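Hyperband's building block, successive halving, is easy to see in isolation. A toy sketch, independent of Scout's actual pruner implementation:

```elixir
# Toy successive-halving round: after each budget increase, keep only the
# best-scoring half of the surviving trials (lower loss is better).
halve = fn scored_trials ->
  keep = max(div(length(scored_trials), 2), 1)

  scored_trials
  |> Enum.sort_by(fn {_id, loss} -> loss end)
  |> Enum.take(keep)
end

trials = [a: 0.9, b: 0.3, c: 0.5, d: 0.7]
survivors = halve.(trials)
# => [b: 0.3, c: 0.5]
```

Hyperband applies this idea across several budget brackets rather than a single cut, so promising late bloomers get more than one chance.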

Distributed Optimization

# Multi-node setup
Node.connect(:"worker@node1")
Node.connect(:"worker@node2")

result = Scout.Easy.optimize(
  expensive_ml_objective,
  complex_search_space,
  n_trials: 1000,
  parallelism: 20,      # Distributed across cluster
  executor: :oban,      # Persistent job queue
  dashboard: true       # Monitor from any node
)
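The cluster distribution above rests on ordinary BEAM primitives. A minimal sketch using plain :erpc (no Scout involved), evaluating a hypothetical objective once on every connected node:

```elixir
# Run an objective on each node in the cluster. With no peers connected,
# this simply evaluates on the local node.
objective = fn params -> params.x * params.x end

nodes = [node() | Node.list()]

results =
  Enum.map(nodes, fn n ->
    :erpc.call(n, fn -> objective.(%{x: 2.0}) end)
  end)
```

In a real cluster every node must be running the same code for the closure to execute remotely; Scout's :oban executor instead hands trials to a persistent job queue.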

Migration from Optuna

Scout's API mirrors Optuna's, so most studies translate almost line for line:

Optuna (Python)

import optuna
study = optuna.create_study()
study.optimize(objective, n_trials=100)
print(study.best_params)

Scout (Elixir)

result = Scout.Easy.optimize(objective, search_space, n_trials: 100)
IO.inspect(result.best_params)
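The search-space tuples map closely onto Optuna's suggest_* calls. An illustrative (not exhaustive) correspondence, reusing parameter names from the examples above:

```elixir
# Scout search-space tuples with their assumed Optuna equivalents
search_space = %{
  learning_rate: {:log_uniform, 1.0e-5, 1.0e-1},   # suggest_float(..., log=True)
  n_layers: {:int, 2, 8},                          # suggest_int("n_layers", 2, 8)
  optimizer: {:choice, ["adam", "sgd", "rmsprop"]} # suggest_categorical(...)
}
```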

Migration Benefits

Documentation

Architecture

Scout Production Stack
├── Scout.Easy API          # Optuna-compatible interface
├── Phoenix Dashboard       # Real-time monitoring
├── Advanced Samplers       # TPE, CMA-ES, NSGA-II, QMC
├── Intelligent Pruners     # Hyperband, Successive Halving
├── Oban Execution          # Distributed job processing
├── Ecto Persistence        # PostgreSQL storage
├── Docker Images           # Production containers
├── Kubernetes Manifests    # Auto-scaling deployment
└── Monitoring Stack        # Prometheus + Grafana

What's New in v0.3

Production Features

Contributing

License

MIT License - see LICENSE for details.

Acknowledgments


Scout: Production-ready hyperparameter optimization that scales with your ambitions.

Quick Start | Benchmarks | Deploy | Dashboard | Examples