Macula Neuroevolution

Population-based evolutionary training for neural networks.


Overview

macula_neuroevolution is an Erlang library that provides domain-agnostic population-based evolutionary training for neural networks. It works with macula_tweann to evolve network weights through selection, crossover, and mutation.
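The evaluate/select/breed cycle can be sketched independently of the library. The module below is a minimal, self-contained illustration of one generation — truncation selection of the top fraction, then refilling the population with mutated copies of survivors. All names here (evo_sketch, generation/3, the sum-of-weights fitness) are illustrative stand-ins, not this library's API.

```erlang
-module(evo_sketch).
-export([generation/3]).

%% Toy fitness: maximize the sum of weights (stands in for a real evaluator).
fitness(Weights) -> lists:sum(Weights).

%% Perturb each weight with probability Rate by up to +/- Strength.
mutate(Weights, Rate, Strength) ->
    [case rand:uniform() < Rate of
         true  -> W + (rand:uniform() - 0.5) * 2 * Strength;
         false -> W
     end || W <- Weights].

%% One generation: score everyone, keep the top SelectionRatio fraction
%% unchanged, and refill the population with mutated copies of survivors.
generation(Population, SelectionRatio, {Rate, Strength}) ->
    Scored = lists:sort(fun({A, _}, {B, _}) -> A >= B end,
                        [{fitness(W), W} || W <- Population]),
    NKeep = max(1, round(length(Population) * SelectionRatio)),
    Survivors = [W || {_, W} <- lists:sublist(Scored, NKeep)],
    Children = [mutate(lists:nth(rand:uniform(NKeep), Survivors),
                       Rate, Strength)
                || _ <- lists:seq(1, length(Population) - NKeep)],
    Survivors ++ Children.
```

Because survivors carry over unchanged, the best fitness in the population never decreases from one generation to the next — the same elitist property the library's selection step relies on.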

Architecture Overview

Features

The Liquid Conglomerate Vision

This library implements the first level of a hierarchical meta-learning system called the Liquid Conglomerate:

Liquid Conglomerate

The Liquid Conglomerate is a novel architecture that uses hierarchical Liquid Time-Constant (LTC) neural networks to create a self-optimizing training system. Instead of manually tuning hyperparameters, the system learns how to learn at multiple timescales:

Key effects on training:

  1. Self-tuning hyperparameters - Mutation rate and selection ratio adapt automatically
  2. Automatic stagnation recovery - Detects and escapes local optima
  3. Phase-appropriate strategies - Different strategies for exploration vs exploitation
  4. Transfer of meta-knowledge - Training strategies can transfer across domains
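To make effects 1 and 2 concrete: even the crudest adaptive rule changes training dynamics noticeably. The sketch below is a deliberately trivial stand-in for the LTC meta-controller (which learns this policy rather than hard-coding it) — raise the mutation rate when best fitness stagnates, decay it when progress resumes. The module name, function, and constants are illustrative.

```erlang
-module(adapt_sketch).
-export([adapt_mutation_rate/3]).

%% Trivial hand-written policy, NOT the library's LTC controller:
%% on improvement, cool down toward exploitation (floor at 0.05);
%% on stagnation, heat up toward exploration (ceiling at 0.5).
adapt_mutation_rate(Rate, BestNow, BestPrev) when BestNow > BestPrev ->
    max(0.05, Rate * 0.9);
adapt_mutation_rate(Rate, _BestNow, _BestPrev) ->
    min(0.5, Rate * 1.5).
```

The Liquid Conglomerate replaces this fixed if/else with a learned controller that observes many training signals at once and adjusts several hyperparameters jointly.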

See The Liquid Conglomerate Guide for the full explanation, or LTC Meta-Controller for implementation details.

Evolution Lifecycle

Installation

Add to your rebar.config:

{deps, [
    {macula_neuroevolution, "~> 0.12.0"}
]}.

Quick Start

%% Define your evaluator module (implements neuroevolution_evaluator behaviour)
-module(my_evaluator).
-behaviour(neuroevolution_evaluator).
-export([evaluate/2]).

evaluate(Individual, _Options) ->
    Network = Individual#individual.network,
    %% Run your domain-specific evaluation
    Score = run_simulation(Network),
    UpdatedIndividual = Individual#individual{
        metrics = #{total_score => Score}
    },
    {ok, UpdatedIndividual}.

%% Start training
Config = #neuro_config{
    population_size = 50,
    selection_ratio = 0.20,
    mutation_rate = 0.10,
    mutation_strength = 0.3,
    network_topology = {42, [16, 8], 6},  % 42 inputs, hidden layers of 16 and 8, 6 outputs
    evaluator_module = my_evaluator
},

{ok, Pid} = neuroevolution_server:start_link(Config),
neuroevolution_server:start_training(Pid).

Configuration

Parameter                   Default    Description
population_size             50         Number of individuals
evaluations_per_individual  10         Games/tests per individual per generation
selection_ratio             0.20       Fraction of population that survives (top 20%)
mutation_rate               0.10       Probability of mutating each weight
mutation_strength           0.3        Magnitude of weight perturbation
max_generations             infinity   Maximum generations to run
network_topology            -          {InputSize, HiddenLayers, OutputSize}
evaluator_module            -          Module implementing neuroevolution_evaluator
evaluator_options           #{}        Options passed to evaluator
event_handler               undefined  {Module, InitArg} for event notifications
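All of these parameters are fields of the same #neuro_config record shown in the Quick Start. A fuller sketch with every field from the table spelled out (the field names come from the table above; the concrete values are illustrative):

```erlang
Config = #neuro_config{
    population_size            = 50,
    evaluations_per_individual = 10,
    selection_ratio            = 0.20,
    mutation_rate              = 0.10,
    mutation_strength          = 0.3,
    max_generations            = 500,        % or the default, the atom infinity
    network_topology           = {42, [16, 8], 6},
    evaluator_module           = my_evaluator,
    evaluator_options          = #{games => 10},
    event_handler              = {my_event_handler, undefined}
}.
```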

Event Handling

Subscribe to training events by providing an event handler:

-module(my_event_handler).
-export([handle_event/2]).

handle_event({generation_started, Gen}, _State) ->
    io:format("Generation ~p started~n", [Gen]);
handle_event({generation_complete, Stats}, _State) ->
    io:format("Generation ~p: Best=~.2f, Avg=~.2f~n",
              [Stats#generation_stats.generation,
               Stats#generation_stats.best_fitness,
               Stats#generation_stats.avg_fitness]);
handle_event(_Event, _State) ->
    ok.

%% Configure with event handler
Config = #neuro_config{
    %% ... other options ...
    event_handler = {my_event_handler, undefined}
}.

Custom Evaluators

Implement the neuroevolution_evaluator behaviour:

-module(snake_game_evaluator).
-behaviour(neuroevolution_evaluator).
-export([evaluate/2, calculate_fitness/1]).

%% Required callback
evaluate(Individual, Options) ->
    Network = Individual#individual.network,
    NumGames = maps:get(games, Options, 10),

    %% Play multiple games and aggregate results
    Results = [play_game(Network) || _ <- lists:seq(1, NumGames)],

    TotalScore = lists:sum([R#result.score || R <- Results]),
    TotalTicks = lists:sum([R#result.ticks || R <- Results]),
    Wins = length([R || R <- Results, R#result.won]),

    UpdatedIndividual = Individual#individual{
        metrics = #{
            total_score => TotalScore,
            total_ticks => TotalTicks,
            wins => Wins
        }
    },
    {ok, UpdatedIndividual}.

%% Optional callback for custom fitness calculation
calculate_fitness(Metrics) ->
    Score = maps:get(total_score, Metrics, 0),
    Ticks = maps:get(total_ticks, Metrics, 0),
    Wins = maps:get(wins, Metrics, 0),
    Score * 50.0 + Ticks / 50.0 + Wins * 2.0.
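With these coefficients, score dominates: one point of score is worth 2500 ticks of survival, and ticks mostly break ties between equally scoring individuals. A worked example in a standalone module (same arithmetic as above, over a plain map so it runs without the library's records):

```erlang
-module(fitness_sketch).
-export([calculate_fitness/1]).

%% Same weighting as the snake_game_evaluator example above.
calculate_fitness(Metrics) ->
    Score = maps:get(total_score, Metrics, 0),
    Ticks = maps:get(total_ticks, Metrics, 0),
    Wins  = maps:get(wins, Metrics, 0),
    Score * 50.0 + Ticks / 50.0 + Wins * 2.0.
```

For example, 10 points of score, 500 ticks, and 2 wins give 500.0 + 10.0 + 4.0 = 514.0 — the score term contributes over 97% of the fitness.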

Building

rebar3 compile
rebar3 eunit
rebar3 dialyzer

Academic References

Evolutionary Algorithms

Neuroevolution

Selection & Breeding

Fitness Evaluation

Related Projects

Macula Ecosystem

Inspiration & Related Work

Guides

Getting Started

Advanced Topics

License

Apache License 2.0

Links