ObjectStoreX


Unified object storage for Elixir with production-ready features like Compare-And-Swap (CAS), conditional operations, streaming, and comprehensive error handling.

ObjectStoreX provides a consistent API across multiple cloud storage providers (AWS S3, Azure Blob Storage, Google Cloud Storage) and local storage, powered by the battle-tested Rust object_store library via Rustler NIFs for near-native performance.

Features

Installation

Add objectstorex to your list of dependencies in mix.exs:

def deps do
  [
    {:objectstorex, "~> 0.1.0"}
  ]
end

Precompiled NIFs

ObjectStoreX provides precompiled native binaries (NIFs) for the following platforms:

No Rust toolchain required for these platforms. The precompiled binaries are automatically downloaded from GitHub Releases during mix deps.get.

Building from Source

If you're on an unsupported platform or prefer to build from source:

1. Install Rust toolchain:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

2. Force local compilation:

Set the OBJECTSTOREX_BUILD environment variable:

export OBJECTSTOREX_BUILD=1
mix deps.get
mix deps.compile objectstorex

Or configure it in config/config.exs:

config :objectstorex, :force_build, true

Troubleshooting

"NIF not loaded" error:

Precompiled binary download fails:

Compilation errors when building from source:

Quick Start

# Create an in-memory store for testing
{:ok, store} = ObjectStoreX.new(:memory)

# Store some data
:ok = ObjectStoreX.put(store, "test.txt", "Hello, World!")

# Retrieve it
{:ok, data} = ObjectStoreX.get(store, "test.txt")
# => "Hello, World!"

# Get metadata
{:ok, meta} = ObjectStoreX.head(store, "test.txt")
# => %{location: "test.txt", size: 13, etag: "...", ...}

# Delete it
:ok = ObjectStoreX.delete(store, "test.txt")

Provider Configuration

AWS S3

{:ok, store} = ObjectStoreX.new(:s3,
  bucket: "my-bucket",
  region: "us-east-1",
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
)

Azure Blob Storage

{:ok, store} = ObjectStoreX.new(:azure,
  account: "myaccount",
  container: "mycontainer",
  access_key: System.get_env("AZURE_STORAGE_KEY")
)

Google Cloud Storage

{:ok, store} = ObjectStoreX.new(:gcs,
  bucket: "my-gcs-bucket",
  service_account_key: File.read!("credentials.json")
)

Local Filesystem

{:ok, store} = ObjectStoreX.new(:local, path: "/tmp/storage")

Advanced Features

Compare-And-Swap (CAS) Operations

Use CAS for optimistic concurrency control:

# Read current value with metadata
{:ok, data} = ObjectStoreX.get(store, "counter.json")
{:ok, meta} = ObjectStoreX.head(store, "counter.json")

# Update only if version matches (CAS)
new_data = update_value(data)

case ObjectStoreX.put(store, "counter.json", new_data,
       mode: {:update, %{etag: meta.etag, version: meta.version}}) do
  {:ok, _} -> :success
  {:error, :precondition_failed} -> :retry  # Someone else modified it
end
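The `:retry` branch above is typically wrapped in a bounded retry loop. A minimal sketch, assuming nothing beyond the `{:error, :precondition_failed}` convention shown above (`CasRetry` is a hypothetical helper, not part of ObjectStoreX):

```elixir
# Hypothetical helper (not part of ObjectStoreX): retry a CAS-style operation
# a bounded number of times when it reports a precondition conflict.
defmodule CasRetry do
  def run(op, retries \\ 3)
  def run(_op, 0), do: {:error, :too_many_conflicts}

  def run(op, retries) do
    case op.() do
      {:ok, _} = ok -> ok
      {:error, :precondition_failed} -> run(op, retries - 1)
      {:error, _} = other -> other
    end
  end
end
```

The read-modify-write above would then run as `CasRetry.run(fn -> ... end)`, re-reading the object and its etag inside the function on every attempt so each retry sees the latest version.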

Create-Only Writes (Distributed Locks)

Implement distributed locks with atomic create operations:

lock_data = Jason.encode!(%{holder: node(), timestamp: System.system_time()})

case ObjectStoreX.put(store, "locks/resource-123", lock_data, mode: :create) do
  {:ok, _} -> :lock_acquired
  {:error, :already_exists} -> :locked_by_other
end
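Because a crashed holder never deletes its lock object, lock payloads usually carry a timestamp so other nodes can detect abandonment. A minimal staleness check, assuming the `timestamp` field is stored in milliseconds (the snippet above uses `System.system_time()`, whose unit you would need to match; `LockStaleness` is a hypothetical name):

```elixir
# Hypothetical staleness check (not part of ObjectStoreX): a lock whose
# timestamp is older than ttl_ms may be considered abandoned and reclaimed.
defmodule LockStaleness do
  def stale?(%{"timestamp" => ts}, ttl_ms, now \\ System.system_time(:millisecond)) do
    now - ts > ttl_ms
  end
end
```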

Conditional GET (HTTP-Style Caching)

Minimize data transfer with conditional requests:

# First fetch
{:ok, data} = ObjectStoreX.get(store, "data.json")
{:ok, meta} = ObjectStoreX.head(store, "data.json")
cached_etag = meta.etag

# Later fetch - only download if changed
case ObjectStoreX.get(store, "data.json", if_none_match: cached_etag) do
  {:error, :not_modified} -> use_cached_data()
  {:ok, new_data} -> update_cache(new_data)
end

Rich Metadata and Attributes

Upload objects with content metadata:

ObjectStoreX.put(store, "report.pdf", pdf_data,
  content_type: "application/pdf",
  content_disposition: "attachment; filename=report.pdf",
  cache_control: "max-age=3600"
)

Use Case Examples

ObjectStoreX includes complete, production-ready examples for common distributed systems patterns:

1. Distributed Lock (examples/distributed_lock.ex)

Implement distributed locking for coordinating tasks across multiple nodes:

alias ObjectStoreX.Examples.DistributedLock

# Acquire lock
case DistributedLock.acquire(store, "resource-123") do
  {:ok, lock_info} ->
    try do
      # Do exclusive work
      process_resource()
    after
      DistributedLock.release(store, "resource-123")
    end

  {:error, :locked} ->
    IO.puts("Resource is locked by another process")
end

# Acquire with retry and exponential backoff
{:ok, _} = DistributedLock.acquire_with_retry(store, "resource-123",
  max_retries: 5,
  initial_delay_ms: 100
)
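Assuming the common doubling schedule (an assumption; check the example module for the exact policy), `initial_delay_ms: 100` with `max_retries: 5` would wait roughly 100, 200, 400, 800, and 1600 ms between attempts:

```elixir
# Sketch of a doubling backoff schedule; Backoff is a hypothetical name,
# not an ObjectStoreX module.
defmodule Backoff do
  def delays(initial_ms, max_retries) do
    # One delay per retry attempt, doubling each time.
    for attempt <- 0..(max_retries - 1), do: initial_ms * Integer.pow(2, attempt)
  end
end
```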

Features:

2. Optimistic Counter (examples/optimistic_counter.ex)

Implement distributed counters with CAS-based optimistic locking:

alias ObjectStoreX.Examples.OptimisticCounter

# Initialize counter
OptimisticCounter.initialize(store, "page-views", 0)

# Increment (automatically retries on conflict)
{:ok, new_value} = OptimisticCounter.increment(store, "page-views")

# Multiple processes can safely increment concurrently
tasks = for _ <- 1..10 do
  Task.async(fn -> OptimisticCounter.increment(store, "page-views") end)
end
Task.await_many(tasks)

# Decrement with minimum value constraint
{:ok, stock} = OptimisticCounter.decrement(store, "inventory-item-123",
  min_value: 0
)

# Custom update function
{:ok, new_val} = OptimisticCounter.update(store, "counter", fn v -> v * 2 end)

Features:

3. HTTP Cache (examples/http_cache.ex)

Implement efficient caching with ETag-based validation:

alias ObjectStoreX.Examples.HTTPCache

# Start cache
{:ok, cache} = HTTPCache.start_cache("my_cache")

# First fetch - cache miss
{:ok, data, :miss} = HTTPCache.get_cached(store, "data.json", cache)

# Second fetch - cache hit (no data transfer if unchanged)
{:ok, data, :hit} = HTTPCache.get_cached(store, "data.json", cache)

# Get statistics
stats = HTTPCache.stats(cache)
# => %{hits: 1, misses: 1, entries: 1, hit_rate: 50.0}

# Manual invalidation
HTTPCache.invalidate(cache, "data.json")

# Clear all entries
HTTPCache.clear(cache)
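The `hit_rate` in the stats above is simply hits as a percentage of all lookups; a one-line sketch of that arithmetic (`HitRate` is a hypothetical helper, not part of the example module):

```elixir
# Hypothetical helper mirroring the hit_rate field in HTTPCache.stats/1.
defmodule HitRate do
  def percent(%{hits: 0, misses: 0}), do: 0.0
  def percent(%{hits: h, misses: m}), do: h / (h + m) * 100.0
end
```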

Features:

Streaming and Bulk Operations

Streaming Uploads/Downloads

# Stream upload from file
stream = File.stream!("large-file.bin", [], 64 * 1024)
:ok = ObjectStoreX.put_stream(store, "large-file.bin", stream)

# Stream download to file
ObjectStoreX.get_stream(store, "large-file.bin")
|> Stream.into(File.stream!("downloaded.bin"))
|> Stream.run()

Bulk Operations

# Delete multiple objects
paths = ["file1.txt", "file2.txt", "file3.txt"]
:ok = ObjectStoreX.delete_many(store, paths)

# Get multiple byte ranges
ranges = [{0, 1000}, {5000, 6000}]
{:ok, chunks} = ObjectStoreX.get_ranges(store, "file.bin", ranges)

Conditional Copy Operations

# Atomic copy (only if destination doesn't exist)
case ObjectStoreX.copy_if_not_exists(store, "source.txt", "backup.txt") do
  :ok -> :copied
  {:error, :already_exists} -> :destination_exists
  {:error, :not_supported} -> :provider_not_supported
end

# Atomic rename
case ObjectStoreX.rename_if_not_exists(store, "old.txt", "new.txt") do
  :ok -> :renamed
  {:error, :already_exists} -> :destination_exists
end

Testing

Run the test suite:

mix test

Run integration tests:

mix test test/integration/

Run quality checks:

./bin/qa_check.sh

Provider Support Matrix

Feature S3 Azure GCS Local Memory
PutMode::Create
PutMode::Update (ETag)
PutMode::Update (Version)
if_match
if_none_match
if_modified_since
Attributes ⚠️ ⚠️
Tags
copy_if_not_exists

Legend:

Documentation

Guides

API Reference

Full API documentation is available at HexDocs.

Performance

ObjectStoreX uses high-performance Rust NIFs with async I/O for optimal throughput:

| Operation | Expected Performance |
|---|---|
| Basic put/get | ~50ms (network dependent) |
| CAS put (success) | Same as regular put |
| CAS put (conflict) | <30ms (fast fail) |
| Conditional get (not modified) | <20ms (no transfer) |
| Streaming (large files) | ~100MB/s+ |
| Bulk operations | Parallel execution |

License

Copyright 2024-2025. Licensed under the Apache License, Version 2.0.

Credits

Built with: