Jido.SimpleMem

jido_simplemem is a developer-focused SimpleMem-style memory plugin and runtime for Jido agents.

It is built around the same core ideas as upstream SimpleMem: buffered dialogue ingestion, semantic compression, synthesis, and intent-aware retrieval. This package adapts that model to the Jido plugin/action lifecycle and to an Elixir runtime backed by LanceDB. The default runtime path uses jido_action and ReqLLM.

What This Package Is

Use this package when you want a Jido agent to:

  - buffer dialogue turns into a persisted session buffer,
  - compress windows of dialogue into structured memory units with an LLM, and
  - answer questions against that memory with intent-aware hybrid retrieval.

This package is not a generic multi-backend memory abstraction. It is a single-tier, LanceDB-backed runtime with a SimpleMem-style workflow.

Quick Start

The main integration path is the plugin.

defmodule MyApp.MemoryAgent do
  alias Jido.SimpleMem.Actions.{Finalize, PostTurn, PreTurn}

  use Jido.Agent,
    name: "memory_agent",
    plugins: [
      {Jido.SimpleMem.Plugin,
       %{
         window_size: 4,
         overlap_size: 1
       }}
    ]

  def chat(agent, user_input) do
    {agent, _} =
      cmd(agent, {PreTurn, %{user_input: user_input, context_result_key: :memory_context}})

    response = build_response(user_input, agent.state.memory_context)

    {agent, _} =
      cmd(agent, {PostTurn, %{user_input: user_input, assistant_response: response}})

    {:ok, agent, response}
  end

  def flush(agent) do
    {agent, _} = cmd(agent, {Finalize, %{}})
    {:ok, agent}
  end

  defp build_response(_user_input, memory_context) do
    if is_binary(memory_context) and memory_context != "" do
      "I found relevant memory context."
    else
      "I don't have anything in memory yet."
    end
  end
end

Required environment:

export JIDO_SIMPLEMEM_LLM_MODEL="openai:gpt-5-mini"
export JIDO_SIMPLEMEM_EMBEDDING_MODEL="openai:text-embedding-3-small"
export OPENAI_API_KEY="..."

Embedding dimensions are inferred. Keep one embedding size per Lance store. If you switch to an embedding model with a different vector size, clear the store or re-embed the existing data first.

If you want a runnable example, see examples/simple_memory_agent.ex and examples/simple_memory_demo.exs.

Public Surfaces

Plugin

Jido.SimpleMem.Plugin exposes these actions:

  - pre_turn (Jido.SimpleMem.Actions.PreTurn)
  - post_turn (Jido.SimpleMem.Actions.PostTurn)
  - finalize (Jido.SimpleMem.Actions.Finalize)

Recommended flow:

  1. Call pre_turn before generating a response.
  2. Add the returned memory context to your model prompt.
  3. Generate the assistant response.
  4. Call post_turn with the user input and assistant response.
  5. Call finalize at session end or before switching session_id.
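Step 2 above is where memory actually reaches the model. A minimal sketch of that step, assuming the context returned by pre_turn is a plain string; build_prompt is a hypothetical helper, not part of this package's API:

```elixir
defmodule PromptSketch do
  # Fold the memory context from pre_turn into a chat-style prompt.
  # The message shape here is illustrative; adapt it to whatever your
  # model client expects.
  def build_prompt(memory_context, user_input) do
    system =
      if is_binary(memory_context) and memory_context != "" do
        "Relevant memory:\n" <> memory_context
      else
        "No prior memory for this session."
      end

    [
      %{role: "system", content: system},
      %{role: "user", content: user_input}
    ]
  end
end
```

The same context string is what build_response receives via agent.state.memory_context in the Quick Start example.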

Direct Runtime API

Jido.SimpleMem exposes the same lifecycle without going through a plugin: add_dialogues, finalize, and ask.

Example:

{:ok, _} =
  Jido.SimpleMem.add_dialogues(agent, [
    %{speaker: "user", content: "My name is Alice Chen"},
    %{speaker: "user", content: "I live in Portland"},
    %{speaker: "user", content: "I prefer concise answers"}
  ])

{:ok, _} = Jido.SimpleMem.finalize(agent)
{:ok, result} = Jido.SimpleMem.ask(agent, "Where does Alice Chen live?")

How The Package Works

The default runtime is buffered and LLM-first:

  1. Dialogue is appended to a persisted session buffer.
  2. Once a window fills, the builder sends that window to the configured LLM.
  3. The LLM extracts structured memory candidates.
  4. A synthesis pass consolidates overlapping facts within the current session.
  5. Extracted entries are normalized into MemoryUnit structs.
  6. LanceDB persists those units and indexes them for semantic, lexical, and structured retrieval.
  7. ask/3 plans retrieval with the LLM, executes hybrid search, and can run reflection rounds.
  8. Answerer produces a grounded answer from the selected records.
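The windowing in step 2 can be pictured as a sliding chunk over the buffered dialogues. This is a sketch only, not the builder's actual implementation, but the window_size and overlap_size semantics mirror the plugin options:

```elixir
defmodule WindowSketch do
  # Slide a fixed-size window over the dialogue buffer. Consecutive
  # windows share overlap_size dialogues, so the step between window
  # starts is window_size - overlap_size. Incomplete trailing windows
  # stay in the buffer (here: discarded) until enough dialogue arrives.
  def windows(dialogues, window_size, overlap_size) do
    step = window_size - overlap_size
    Enum.chunk_every(dialogues, window_size, step, :discard)
  end
end
```

With window_size: 4 and overlap_size: 1, consecutive windows share one dialogue, which is what gives the synthesis pass overlapping facts to consolidate.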

Important behavior:

Repository Layout

The repository is organized by subsystem:

Storage And Runtime Notes

LanceDB is the only backend.

Because there is no official Elixir LanceDB SDK, the package runs a supervised Python worker over Port. That worker is responsible for:

  - persisting MemoryUnit records to the Lance store
  - serving semantic, lexical, and structured queries against those records

Default store path:

export JIDO_SIMPLEMEM_LANCE_PATH=".jido/simplemem.lance"

The embedding dimension is pinned per store. You can switch embedding models, but not between models with different vector sizes against the same Lance store without clearing or re-embedding that store.
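The dimension-pinning rule above amounts to a simple invariant, sketched here as a hypothetical guard (the real runtime enforces this inside the Lance worker, not necessarily in this shape):

```elixir
defmodule DimGuard do
  # Refuse to write a vector whose length differs from the dimension
  # already pinned for the store; this is what makes switching to a
  # differently-sized embedding model fail without a clear/re-embed.
  def check(store_dim, vector) when length(vector) == store_dim, do: :ok

  def check(store_dim, vector),
    do: {:error, {:dimension_mismatch, store_dim, length(vector)}}
end
```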

Configuration

Common runtime settings:

  - window_size: number of dialogues per compression window
  - overlap_size: number of dialogues shared between consecutive windows

Required environment variables:

  - JIDO_SIMPLEMEM_LLM_MODEL
  - JIDO_SIMPLEMEM_EMBEDDING_MODEL
  - a provider API key (for example OPENAI_API_KEY)

Optional environment variables:

  - JIDO_SIMPLEMEM_LANCE_PATH
  - JIDO_SIMPLEMEM_BASE_URL
  - JIDO_SIMPLEMEM_API_KEY

If stage-specific models are not set, they fall back to JIDO_SIMPLEMEM_LLM_MODEL.
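That fallback can be pictured as a two-step lookup. The stage variable name below is illustrative only, not a documented setting:

```elixir
defmodule ModelConfigSketch do
  # Resolve the model for a pipeline stage: a stage-specific variable
  # wins if set, otherwise JIDO_SIMPLEMEM_LLM_MODEL applies. `env` is
  # passed in as a map so the sketch stays testable without touching
  # the real environment.
  def model_for(env, stage_var) do
    env[stage_var] || env["JIDO_SIMPLEMEM_LLM_MODEL"]
  end
end
```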

For a custom OpenAI-compatible endpoint:

export JIDO_SIMPLEMEM_BASE_URL="https://your-endpoint.example.com/v1"
export JIDO_SIMPLEMEM_API_KEY="..."
export JIDO_SIMPLEMEM_LLM_MODEL="openai:gpt-5-mini"
export JIDO_SIMPLEMEM_EMBEDDING_MODEL="openai:text-embedding-3-small"

Development And Testing

Run the default suite:

mix test

mix test excludes :integration by default.

Run live integration tests explicitly:

JIDO_SIMPLEMEM_ENABLE_LIVE_LANCE_TESTS=1 mix test --include integration

Useful checks:

mix format
mix test
mix release.check
mix xref callers Jido.SimpleMem.Runtime

Additional Docs