Xqlite


Low-level, safe, and fast NIF bindings to SQLite 3 for Elixir, powered by Rust and rusqlite. Bundled SQLite — no native install required.

For Ecto 3.x integration see the planned xqlite_ecto3 library (work in progress).

Installation

def deps do
  [
    {:xqlite, "~> 0.5.2"}
  ]
end

Precompiled NIF binaries ship for 8 targets (macOS, Linux, Windows, including ARM and RISC-V) — no Rust toolchain needed. To force source compilation:

XQLITE_BUILD=true mix deps.compile xqlite

Thread safety

Each rusqlite::Connection is wrapped in Arc<Mutex<_>> via Rustler's ResourceArc. One Elixir process accesses a given connection at a time. Connection pooling belongs in higher layers (DBConnection / Ecto adapter).

SQLite is opened with SQLITE_OPEN_NO_MUTEX (rusqlite's default) — the Rust Mutex replaces SQLite's internal one, not the other way around.

Capabilities

Two modules: Xqlite for high-level helpers, XqliteNIF for direct NIF access. See hexdocs for full API reference.

High-level API

Low-level NIF API (XqliteNIF)

Errors are structured tuples: {:error, {:constraint_violation, :constraint_foreign_key, msg}}, {:error, {:read_only_database, msg}}, etc. 30+ typed reason variants including all 13 SQLite constraint subtypes.
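Because the reasons are plain tuples, callers can pattern-match them directly. A sketch of a hypothetical triage helper (the reason shapes are the ones shown above; the module and its action atoms are illustrative, not part of the library):

```elixir
defmodule ErrorTriage do
  # Classify xqlite-style results into coarse caller actions.
  # The tuple shapes mirror the examples above; this helper is illustrative.
  def classify({:error, {:constraint_violation, _subtype, _msg}}), do: :reject_input
  def classify({:error, {:read_only_database, _msg}}), do: :use_writable_conn
  def classify({:error, :operation_cancelled}), do: :cancelled
  def classify({:error, _other}), do: :raise
  def classify({:ok, _result}), do: :ok
end
```

For example, `ErrorTriage.classify({:error, {:constraint_violation, :constraint_foreign_key, "FOREIGN KEY constraint failed"}})` yields `:reject_input`.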

Usage

# Open and configure
{:ok, conn} = XqliteNIF.open("my_database.db")
:ok = Xqlite.enable_foreign_key_enforcement(conn)

# Query
{:ok, result} = XqliteNIF.query(conn, "SELECT id, name FROM users WHERE id = ?1", [1])
# => %{columns: ["id", "name"], rows: [[1, "Alice"]], num_rows: 1}

# Use with Table.Reader (Explorer, Kino, etc.)
result |> Xqlite.Result.from_map() |> Table.to_rows()
# => [%{"id" => 1, "name" => "Alice"}]

# Stream large result sets
Xqlite.stream(conn, "SELECT * FROM events") |> Enum.take(100)

# Transaction with immediate lock
:ok = XqliteNIF.begin(conn, :immediate)
{:ok, 1} = XqliteNIF.execute(conn, "UPDATE accounts SET balance = 0 WHERE id = 1", [])
:ok = XqliteNIF.commit(conn)

# Cancel a long-running query from another process
{:ok, token} = XqliteNIF.create_cancel_token()
task = Task.async(fn -> XqliteNIF.query_cancellable(conn, slow_sql, [], token) end)
:ok = XqliteNIF.cancel_operation(token)
{:error, :operation_cancelled} = Task.await(task)

# Read-only connection (writes fail with {:error, {:read_only_database, _}})
{:ok, ro_conn} = XqliteNIF.open_readonly("my_database.db")

# Receive SQLite diagnostic events (auto-index warnings, schema changes, etc.)
{:ok, :ok} = XqliteNIF.set_log_hook(self())
# => receive {:xqlite_log, 284, "automatic index on ..."}

# Receive per-connection change notifications
:ok = XqliteNIF.set_update_hook(conn, self())
{:ok, 1} = XqliteNIF.execute(conn, "INSERT INTO users (name) VALUES ('Bob')", [])
# => receive {:xqlite_update, :insert, "main", "users", 2}

# Type extensions: automatic DateTime/Date/Time encoding and decoding
alias Xqlite.TypeExtension

extensions = [TypeExtension.DateTime, TypeExtension.Date, TypeExtension.Time]
params = TypeExtension.encode_params([~U[2024-01-15 10:30:00Z], ~D[2024-06-15]], extensions)
{:ok, 1} = XqliteNIF.execute(conn, "INSERT INTO events (ts, day) VALUES (?1, ?2)", params)

# Stream with automatic type decoding
Xqlite.stream(conn, "SELECT ts, day FROM events", [],
  type_extensions: [TypeExtension.DateTime, TypeExtension.Date])
|> Enum.to_list()
# => [%{"ts" => ~U[2024-01-15 10:30:00Z], "day" => ~D[2024-06-15]}]

# Serialize an in-memory database to a binary snapshot
{:ok, binary} = XqliteNIF.serialize(conn)

# Restore from a snapshot (e.g., transfer between connections, backups)
{:ok, conn2} = XqliteNIF.open_in_memory()
:ok = XqliteNIF.deserialize(conn2, binary)

# Read-only deserialization (writes will fail)
:ok = XqliteNIF.deserialize(conn2, "main", binary, true)

# Load a SQLite extension (e.g., spatialite, sqlean modules)
:ok = XqliteNIF.enable_load_extension(conn, true)
:ok = XqliteNIF.load_extension(conn, "/path/to/extension")
:ok = XqliteNIF.enable_load_extension(conn, false)

# Online backup to file, then restore into a new connection
:ok = XqliteNIF.backup(conn, "/path/to/backup.db")
{:ok, conn3} = XqliteNIF.open_in_memory()
:ok = XqliteNIF.restore(conn3, "/path/to/backup.db")

# Backup with progress reporting and cancellation
{:ok, token} = XqliteNIF.create_cancel_token()
:ok = XqliteNIF.backup_with_progress(conn, "main", "/path/to/backup.db", self(), 10, token)
# Receive {:xqlite_backup_progress, remaining, pagecount} messages
# Cancel from another process: XqliteNIF.cancel_operation(token)

# Track changes with sessions, then replicate to another database
{:ok, session} = XqliteNIF.session_new(conn)
:ok = XqliteNIF.session_attach(session, nil)
{:ok, 1} = XqliteNIF.execute(conn, "INSERT INTO users VALUES (1, 'alice')", [])
{:ok, changeset} = XqliteNIF.session_changeset(session)
:ok = XqliteNIF.session_delete(session)

# Apply changeset to replica (conflict strategies: :omit, :replace, :abort)
:ok = XqliteNIF.changeset_apply(replica_conn, changeset, :replace)

# Incremental blob I/O — read/write large BLOBs in chunks
{:ok, 1} = XqliteNIF.execute(conn, "INSERT INTO files VALUES (1, zeroblob(1048576))", [])
{:ok, blob} = XqliteNIF.blob_open(conn, "main", "files", "data", 1, false)
:ok = XqliteNIF.blob_write(blob, 0, chunk1)
:ok = XqliteNIF.blob_write(blob, byte_size(chunk1), chunk2)
{:ok, header} = XqliteNIF.blob_read(blob, 0, 64)
:ok = XqliteNIF.blob_close(blob)
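The incremental blob calls above lend themselves to a chunked-read loop. A sketch with the looping logic factored out so it stands alone: `read_fun.(offset, len)` is a hypothetical stand-in that, in real use, would close over `XqliteNIF.blob_read(blob, offset, len)`:

```elixir
defmodule BlobChunks do
  # Lazily read `size` bytes in at most `chunk`-byte pieces.
  # `read_fun.(offset, len)` must return {:ok, binary} for that slice.
  def stream(read_fun, size, chunk) do
    Stream.unfold(0, fn
      offset when offset >= size ->
        nil

      offset ->
        len = min(chunk, size - offset)
        {:ok, bin} = read_fun.(offset, len)
        {bin, offset + len}
    end)
  end
end
```

With an in-memory binary as the backing store, `BlobChunks.stream(fn off, len -> {:ok, binary_part(data, off, len)} end, byte_size(data), 4096)` walks the whole blob without ever holding more than one chunk.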

Known limitations

Design notes

Backup API

xqlite provides two backup interfaces: one-shot (backup/2, restore/2) and incremental with progress (backup_with_progress/6).

The incremental variant runs the entire backup inside a single NIF call on a dirty I/O scheduler, sending {:xqlite_backup_progress, remaining, pagecount} messages to a PID after each step. A cancel token (the same one used for query_cancellable/4) allows another process to abort the backup at any time.

We chose this single-call design over exposing a step-by-step Backup resource handle: it keeps the whole operation inside one dirty-scheduler call while still supporting progress reporting and cancellation.

For use cases that genuinely require step-level control from Elixir (e.g., custom retry logic between steps), serialize/1 and deserialize/2 provide atomic database snapshots as binaries that can be chunked and managed in pure Elixir. If demand for a step-by-step backup resource materializes, it can be added in a future release.
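On the Elixir side, the progress messages can be consumed with an ordinary receive loop. A sketch (the message shape is the one documented above; the module, timeout value, and return shapes are illustrative):

```elixir
defmodule BackupWatcher do
  # Collect {:xqlite_backup_progress, remaining, pagecount} messages until
  # the backup reports 0 pages remaining. Returns {:done, history} or
  # {:timeout, history}, where history is a list of {remaining, pagecount}.
  def await(timeout \\ 5_000, acc \\ []) do
    receive do
      {:xqlite_backup_progress, 0, pagecount} ->
        {:done, Enum.reverse([{0, pagecount} | acc])}

      {:xqlite_backup_progress, remaining, pagecount} ->
        await(timeout, [{remaining, pagecount} | acc])
    after
      timeout -> {:timeout, Enum.reverse(acc)}
    end
  end
end
```

A caller would typically run `backup_with_progress/6` with `self()` as the target PID and then call `BackupWatcher.await/0` to block until completion or timeout.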

Affected row counts (changes/1)

query/3 returns %{columns, rows, num_rows} where num_rows is the count of result rows — not SQLite's sqlite3_changes(). For SELECT statements these are the same thing. For DML (INSERT/UPDATE/DELETE without RETURNING), query/3 returns num_rows: 0 because there are no result rows, even though rows were affected.

To get the actual affected row count after DML, call changes/1 immediately after the statement — or use query_with_changes/3 which captures the count atomically.

Important SQLite behavior: sqlite3_changes() is sticky — per the official docs, "executing any other type of SQL statement does not modify the value returned by these functions." This means changes/1 after a SELECT returns the previous DML's count, not 0. It never resets on its own.

query_with_changes/3 solves this by reading sqlite3_changes() inside the same Mutex hold as the query execution. It returns the affected count for statements that produce no result columns (treated as DML) and 0 otherwise, so a SELECT never reports a stale count. This is the recommended function for callers who need reliable affected row counts — including the xqlite_ecto3 adapter.
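The detection rule reduces to a small pure function. A sketch of the decision logic only — not the library's internals, and the module name is hypothetical:

```elixir
defmodule AffectedRows do
  # Given a query result map (%{columns: ..., rows: ..., num_rows: ...})
  # and the raw sqlite3_changes() value read under the same lock, return
  # the count a caller should trust. A statement with no result columns is
  # treated as DML and reports the raw count; anything that produced
  # columns (e.g. a SELECT) reports 0, never the sticky previous value.
  def affected(%{columns: []}, raw_changes), do: raw_changes
  def affected(%{columns: _cols}, _raw_changes), do: 0
end
```

So an UPDATE without RETURNING (empty column list) passes the raw count through, while a SELECT that follows it yields 0 rather than the leftover value.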

Roadmap

Planned for xqlite core (before Ecto adapter work):

  1. SQLCipher support (optional)
  2. User-Defined Functions (extremely fiddly across NIF boundaries)
  3. Manual statement lifecycle (prepare/bind/step/reset/release)

Then: xqlite_ecto3 — full Ecto 3.x adapter with DBConnection, migrations, type handling.

Contributing

Contributions are welcome. Please open issues or submit pull requests.

License

MIT — see LICENSE.md.