# Jido.SimpleMem
`jido_simplemem` is a developer-focused, SimpleMem-style memory plugin and
runtime for Jido agents.

It is built around the same core ideas as upstream SimpleMem: buffered dialogue ingestion, semantic compression, synthesis, and intent-aware retrieval. This package adapts that model to the Jido plugin/action lifecycle and to an Elixir runtime backed by LanceDB. The default runtime path uses `jido_action` and `ReqLLM`.
At a glance:
- `Jido.SimpleMem.Plugin` gives Jido agents passive memory hooks
- `Jido.SimpleMem` exposes the buffered lifecycle directly
- LanceDB is the only storage and indexing backend
- retrieval is hybrid and LLM-planned
## What This Package Is
Use this package when you want a Jido agent to:
- accumulate dialogue in windows instead of writing every turn immediately
- compress overlapping turns into standalone long-term memories
- ask grounded follow-up questions against stored memory
- inspect, explain, and delete stored memories directly
This package is not a generic multi-backend memory abstraction. It is a single-tier, LanceDB-backed runtime with a SimpleMem-style workflow.
## Quick Start
The main integration path is the plugin.
```elixir
defmodule MyApp.MemoryAgent do
  alias Jido.SimpleMem.Actions.{Finalize, PostTurn, PreTurn}

  use Jido.Agent,
    name: "memory_agent",
    plugins: [
      {Jido.SimpleMem.Plugin,
       %{
         window_size: 4,
         overlap_size: 1
       }}
    ]

  def chat(agent, user_input) do
    {agent, _} =
      cmd(agent, {PreTurn, %{user_input: user_input, context_result_key: :memory_context}})

    response = build_response(user_input, agent.state.memory_context)

    {agent, _} =
      cmd(agent, {PostTurn, %{user_input: user_input, assistant_response: response}})

    {:ok, agent, response}
  end

  def flush(agent) do
    {agent, _} = cmd(agent, {Finalize, %{}})
    {:ok, agent}
  end

  defp build_response(_user_input, memory_context) do
    if is_binary(memory_context) and memory_context != "" do
      "I found relevant memory context."
    else
      "I don't have anything in memory yet."
    end
  end
end
```

Required environment:
```shell
export JIDO_SIMPLEMEM_LLM_MODEL="openai:gpt-5-mini"
export JIDO_SIMPLEMEM_EMBEDDING_MODEL="openai:text-embedding-3-small"
export OPENAI_API_KEY="..."
```

Embedding dimensions are inferred. Keep one embedding size per Lance store. If you switch to an embedding model with a different vector size, clear the store or re-embed the existing data first.
If you want a runnable example, see `examples/simple_memory_agent.ex` and `examples/simple_memory_demo.exs`.
## Public Surfaces
### Plugin
`Jido.SimpleMem.Plugin` exposes these actions:
- `pre_turn`
- `post_turn`
- `finalize`
- `ask`
- `get_all_memories`
- `delete_memory`
Recommended flow:

1. Call `pre_turn` before generating a response.
2. Add the returned memory context to your model prompt.
3. Generate the assistant response.
4. Call `post_turn` with the user input and assistant response.
5. Call `finalize` at session end or before switching `session_id`.
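Step 2 of the flow is plain prompt assembly. A minimal sketch of that step, assuming `pre_turn` wrote the context string under the `:memory_context` key (the `PromptBuilder` module and `build_prompt/2` helper here are hypothetical, not part of this package):

```elixir
defmodule PromptBuilder do
  # Hypothetical helper: prepend retrieved memory context to the model prompt.
  # When no context was retrieved, the prompt is just the user turn.
  def build_prompt(memory_context, user_input) do
    preamble =
      if is_binary(memory_context) and memory_context != "" do
        "Relevant long-term memory:\n" <> memory_context <> "\n\n"
      else
        ""
      end

    preamble <> "User: " <> user_input
  end
end
```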
### Direct Runtime API
`Jido.SimpleMem` exposes the same lifecycle without going through a plugin:
- `add_dialogue/4`
- `add_dialogues/3`
- `finalize/2`
- `ask/3`
- `get_all_memories/2`
- `delete_memory/3`
- `explain/3`
Example:
{:ok, _} =
Jido.SimpleMem.add_dialogues(agent, [
%{speaker: "user", content: "My name is Alice Chen"},
%{speaker: "user", content: "I live in Portland"},
%{speaker: "user", content: "I prefer concise answers"}
])
{:ok, _} = Jido.SimpleMem.finalize(agent)
{:ok, result} = Jido.SimpleMem.ask(agent, "Where does Alice Chen live?")How The Package Works
The default runtime is buffered and LLM-first:

1. Dialogue is appended to a persisted session buffer.
2. Once a window fills, the builder sends that window to the configured LLM.
3. The LLM extracts structured memory candidates.
4. A synthesis pass consolidates overlapping facts within the current session.
5. Extracted entries are normalized into `MemoryUnit` structs.
6. LanceDB persists those units and indexes them for semantic, lexical, and structured retrieval.
7. `ask/3` plans retrieval with the LLM, executes hybrid search, and can run reflection rounds.
8. `Answerer` produces a grounded answer from the selected records.
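The window/overlap mechanics can be illustrated with plain list arithmetic. This is a standalone sketch of the concept, not the package's internal buffering code: with `window_size: 4` and `overlap_size: 1`, each window advances by three turns and shares one turn with its predecessor.

```elixir
defmodule WindowSketch do
  # Illustrative only: split a flat list of turns into overlapping windows,
  # mirroring the window_size/overlap_size buffering described above.
  def windows(turns, window_size, overlap_size) do
    step = window_size - overlap_size

    # :discard drops a trailing partial window; in the buffered runtime the
    # remainder stays in the session buffer until it fills or is finalized.
    Enum.chunk_every(turns, window_size, step, :discard)
  end
end

# Seven turns with window_size 4 and overlap_size 1 produce two windows
# that share turn 4:
WindowSketch.windows(Enum.to_list(1..7), 4, 1)
# => [[1, 2, 3, 4], [4, 5, 6, 7]]
```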
Important behavior:

- writes are buffered, not immediate
- `finalize` is still caller-controlled
- `tokens_before_finalize` is a helper, not a replacement for explicit flushes
- plugin auto-capture uses the same ingestion path as direct writes
- auto-capture failures are logged, emitted via telemetry, and returned as typed plugin errors
## Repository Layout
The repository is organized by subsystem:
- `lib/jido/simple_mem/domain`: core data structures such as `Dialogue`, `MemoryUnit`, and embedding helpers
- `lib/jido/simple_mem/pipeline`: extraction, synthesis, planning, retrieval, explanation, and answer generation
- `lib/jido/simple_mem/runtime`: shared runtime/config resolution, supervision, and job handling
- `lib/jido/simple_mem/plugin`: plugin integration and Jido actions
- `lib/jido/simple_mem/store`: LanceDB storage adapter and Python worker bridge
- `test/support`: test fixtures, fake clients, and target builders
- `examples`: minimal agent/demo flows
## Storage And Runtime Notes
LanceDB is the only backend.
Because there is no official Elixir LanceDB SDK, the package runs a supervised
Python worker over an Erlang `Port`. That worker is responsible for:
- durable memory storage
- durable session buffer storage
- vector search
- Tantivy-backed keyword search
- structured metadata filtering
Default store path:
```shell
export JIDO_SIMPLEMEM_LANCE_PATH=".jido/simplemem.lance"
```

The embedding dimension is pinned per store. You can switch embedding models, but not between models with different vector sizes against the same Lance store without clearing or re-embedding that store.
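The constraint is plain vector arithmetic: similarity search assumes every stored vector has the same length. A guard like the following (illustrative only, not the package's actual code) is effectively what the pinned dimension enforces at write time:

```elixir
defmodule DimGuard do
  # Illustrative: reject an embedding whose length differs from the
  # dimension pinned when the Lance store was created.
  def check(embedding, pinned_dim) when is_list(embedding) do
    case length(embedding) do
      ^pinned_dim -> :ok
      got -> {:error, {:dimension_mismatch, expected: pinned_dim, got: got}}
    end
  end
end
```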
## Configuration
Common runtime settings:
- `window_size`
- `overlap_size`
- `enable_parallel_retrieval`
- `max_retrieval_workers`
- `retrieval_limit`
- `context_token_budget`
- `tokens_before_finalize`
- `reflection_enabled`
- `max_reflection_rounds`
- `session_id`
- `namespace`
- `store` and `store_opts`
- `llm_client` and `llm_client_opts`
- `embedding_client` and `embedding_client_opts`
Required environment variables:
- `JIDO_SIMPLEMEM_LLM_MODEL`
- `JIDO_SIMPLEMEM_EMBEDDING_MODEL`
- provider API key env vars for the configured models, such as `OPENAI_API_KEY`
Optional environment variables:
- `JIDO_SIMPLEMEM_EXTRACTION_MODEL`
- `JIDO_SIMPLEMEM_PLANNING_MODEL`
- `JIDO_SIMPLEMEM_SYNTHESIS_MODEL`
- `JIDO_SIMPLEMEM_ANSWER_MODEL`
- `JIDO_SIMPLEMEM_BASE_URL`
- `JIDO_SIMPLEMEM_API_KEY`
- `JIDO_SIMPLEMEM_RECEIVE_TIMEOUT_MS`
- `JIDO_SIMPLEMEM_POOL_TIMEOUT_MS`
- `JIDO_SIMPLEMEM_LANCE_PATH`
- `JIDO_SIMPLEMEM_PYTHON_EXECUTABLE`
- `JIDO_SIMPLEMEM_UV_EXECUTABLE`
- `JIDO_SIMPLEMEM_WORKER_START_TIMEOUT_MS`
- `JIDO_SIMPLEMEM_ENABLE_LIVE_LANCE_TESTS`
If stage-specific models are not set, they fall back to
`JIDO_SIMPLEMEM_LLM_MODEL`.
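That fallback amounts to a stage-specific lookup with a global default. A sketch of the idea (not the package's actual resolver):

```elixir
# Illustrative fallback: prefer the stage-specific variable, then the
# global JIDO_SIMPLEMEM_LLM_MODEL.
resolve_model = fn stage_var ->
  System.get_env(stage_var) || System.get_env("JIDO_SIMPLEMEM_LLM_MODEL")
end

resolve_model.("JIDO_SIMPLEMEM_EXTRACTION_MODEL")
```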
For a custom OpenAI-compatible endpoint:
```shell
export JIDO_SIMPLEMEM_BASE_URL="https://your-endpoint.example.com/v1"
export JIDO_SIMPLEMEM_API_KEY="..."
export JIDO_SIMPLEMEM_LLM_MODEL="openai/gpt-5-mini"
export JIDO_SIMPLEMEM_EMBEDDING_MODEL="openai/text-embedding-3-small"
```

## Development And Testing
Run the default suite:
```shell
mix test
```

`mix test` excludes `:integration` by default.
Run live integration tests explicitly:
```shell
JIDO_SIMPLEMEM_ENABLE_LIVE_LANCE_TESTS=1 mix test --include integration
```

Useful checks:
```shell
mix format
mix test
mix release.check
mix xref callers Jido.SimpleMem.Runtime
```