# ObjectStoreX

Unified object storage for Elixir with production-ready features such as Compare-And-Swap (CAS), conditional operations, streaming, and comprehensive error handling.

ObjectStoreX provides a consistent API across multiple cloud storage providers (AWS S3, Azure Blob Storage, Google Cloud Storage) and local storage, powered by the battle-tested Rust object_store library via Rustler NIFs for near-native performance.
## Features

- **Multi-Provider Support**: AWS S3, Azure Blob Storage, GCS, local filesystem, in-memory storage
- **Advanced Operations**:
  - Compare-And-Swap (CAS) with ETags
  - Conditional GET/PUT operations
  - Create-only writes for distributed locks
  - Rich metadata and attributes
- **Performance**: High-performance Rust NIFs with async I/O
- **Streaming**: Streaming uploads and downloads for large files
- **Bulk Operations**: Efficient batch operations for multiple objects
- **Use Case Examples**: Distributed locks, optimistic counters, HTTP-style caching
## Installation

Add `objectstorex` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:objectstorex, "~> 0.1.0"}
  ]
end
```

### Precompiled NIFs
ObjectStoreX provides precompiled native binaries (NIFs) for the following platforms:

- macOS (Apple Silicon and Intel)
  - `aarch64-apple-darwin` (M1/M2/M3/M4)
  - `x86_64-apple-darwin` (Intel)
- Linux GNU (x86_64 and ARM64)
  - `x86_64-unknown-linux-gnu` (Ubuntu, Debian, RHEL, Fedora, etc.)
  - `aarch64-unknown-linux-gnu` (AWS Graviton, ARM servers)
- Linux musl (x86_64 and ARM64)
  - `x86_64-unknown-linux-musl` (Alpine Linux, containers)
  - `aarch64-unknown-linux-musl` (Alpine Linux ARM)
- Windows (x86_64)
  - `x86_64-pc-windows-msvc` (Visual Studio toolchain)
  - `x86_64-pc-windows-gnu` (MinGW toolchain)

No Rust toolchain is required on these platforms. The precompiled binaries are downloaded automatically from GitHub Releases during `mix deps.get`.
### Building from Source

If you're on an unsupported platform or prefer to build from source:

1. Install the Rust toolchain:

   ```shell
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

2. Force local compilation by setting the `OBJECTSTOREX_BUILD` environment variable:

   ```shell
   export OBJECTSTOREX_BUILD=1
   mix deps.get
   mix deps.compile objectstorex
   ```

   Or configure it in `config/config.exs`:

   ```elixir
   config :objectstorex, :force_build, true
   ```

### Troubleshooting
**"NIF not loaded" error:**

- Verify your platform is in the supported list above
- Try forcing a local build: `OBJECTSTOREX_BUILD=1 mix deps.compile objectstorex --force`
- Check GitHub Issues for platform-specific problems

**Precompiled binary download fails:**

- Ensure you have internet connectivity
- Check whether the release exists on GitHub Releases
- Try building from source as described above

**Compilation errors when building from source:**

- Verify the Rust toolchain version: `rustc --version` (minimum: 1.86.0)
- Update Rust: `rustup update`
- Clean and rebuild: `mix deps.clean objectstorex && mix deps.compile objectstorex`
## Quick Start

```elixir
# Create an in-memory store for testing
{:ok, store} = ObjectStoreX.new(:memory)

# Store some data
:ok = ObjectStoreX.put(store, "test.txt", "Hello, World!")

# Retrieve it
{:ok, data} = ObjectStoreX.get(store, "test.txt")
# => "Hello, World!"

# Get metadata
{:ok, meta} = ObjectStoreX.head(store, "test.txt")
# => %{location: "test.txt", size: 13, etag: "...", ...}

# Delete it
:ok = ObjectStoreX.delete(store, "test.txt")
```

## Provider Configuration
### AWS S3

```elixir
{:ok, store} = ObjectStoreX.new(:s3,
  bucket: "my-bucket",
  region: "us-east-1",
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
)
```

### Azure Blob Storage

```elixir
{:ok, store} = ObjectStoreX.new(:azure,
  account: "myaccount",
  container: "mycontainer",
  access_key: System.get_env("AZURE_STORAGE_KEY")
)
```

### Google Cloud Storage

```elixir
{:ok, store} = ObjectStoreX.new(:gcs,
  bucket: "my-gcs-bucket",
  service_account_key: File.read!("credentials.json")
)
```

### Local Filesystem

```elixir
{:ok, store} = ObjectStoreX.new(:local, path: "/tmp/storage")
```

## Advanced Features
### Compare-And-Swap (CAS) Operations

Use CAS for optimistic concurrency control:

```elixir
# Read the current value and its metadata
{:ok, data} = ObjectStoreX.get(store, "counter.json")
{:ok, meta} = ObjectStoreX.head(store, "counter.json")

# Update only if the version still matches (CAS)
new_data = update_value(data)

case ObjectStoreX.put(store, "counter.json", new_data,
       mode: {:update, %{etag: meta.etag, version: meta.version}}) do
  {:ok, _} -> :success
  {:error, :precondition_failed} -> :retry  # Someone else modified it
end
```

### Create-Only Writes (Distributed Locks)
Implement distributed locks with atomic create operations:

```elixir
lock_data = Jason.encode!(%{holder: node(), timestamp: System.system_time()})

case ObjectStoreX.put(store, "locks/resource-123", lock_data, mode: :create) do
  {:ok, _} -> :lock_acquired
  {:error, :already_exists} -> :locked_by_other
end
```

### Conditional GET (HTTP-Style Caching)
Minimize data transfer with conditional requests:

```elixir
# First fetch
{:ok, data, meta} = ObjectStoreX.get(store, "data.json")
cached_etag = meta.etag

# Later fetch - only download if the object changed
case ObjectStoreX.get(store, "data.json", if_none_match: cached_etag) do
  {:error, :not_modified} -> use_cached_data()
  {:ok, new_data} -> update_cache(new_data)
end
```

### Rich Metadata and Attributes
Upload objects with content metadata:

```elixir
ObjectStoreX.put(store, "report.pdf", pdf_data,
  content_type: "application/pdf",
  content_disposition: "attachment; filename=report.pdf",
  cache_control: "max-age=3600"
)
```

## Use Case Examples
ObjectStoreX includes complete, production-ready examples for common distributed systems patterns:

### 1. Distributed Lock (`examples/distributed_lock.ex`)

Implement distributed locking to coordinate tasks across multiple nodes:
```elixir
alias ObjectStoreX.Examples.DistributedLock

# Acquire lock
case DistributedLock.acquire(store, "resource-123") do
  {:ok, lock_info} ->
    try do
      # Do exclusive work
      process_resource()
    after
      DistributedLock.release(store, "resource-123")
    end

  {:error, :locked} ->
    IO.puts("Resource is locked by another process")
end

# Acquire with retry and exponential backoff
{:ok, _} = DistributedLock.acquire_with_retry(store, "resource-123",
  max_retries: 5,
  initial_delay_ms: 100
)
```

Features:

- Atomic lock acquisition with `:create` mode
- Lock staleness detection and automatic cleanup
- Retry with exponential backoff
- Custom metadata support
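The exponential backoff behind `acquire_with_retry` can be sketched as a plain function. This is a hedged illustration only: the doubling formula and the 5-second ceiling are assumptions, not taken from the example module's source.

```elixir
# Sketch of capped exponential backoff for lock retries.
# The doubling formula and the 5s cap are assumptions for illustration.
defmodule BackoffSketch do
  @max_delay_ms 5_000

  # attempt is zero-based: 100ms, 200ms, 400ms, ... capped at @max_delay_ms
  def delay_ms(attempt, initial_delay_ms \\ 100) do
    min(initial_delay_ms * Integer.pow(2, attempt), @max_delay_ms)
  end
end
```

A caller would sleep for `BackoffSketch.delay_ms(attempt)` between failed `:create` attempts; adding random jitter is a common refinement to avoid retry stampedes.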
### 2. Optimistic Counter (`examples/optimistic_counter.ex`)

Implement distributed counters with CAS-based optimistic locking:
```elixir
alias ObjectStoreX.Examples.OptimisticCounter

# Initialize counter
OptimisticCounter.initialize(store, "page-views", 0)

# Increment (automatically retries on conflict)
{:ok, new_value} = OptimisticCounter.increment(store, "page-views")

# Multiple processes can safely increment concurrently
tasks = for _ <- 1..10 do
  Task.async(fn -> OptimisticCounter.increment(store, "page-views") end)
end
Task.await_many(tasks)

# Decrement with a minimum value constraint
{:ok, stock} = OptimisticCounter.decrement(store, "inventory-item-123",
  min_value: 0
)

# Custom update function
{:ok, new_val} = OptimisticCounter.update(store, "counter", fn v -> v * 2 end)
```

Features:
- CAS-based atomic updates with automatic retry
- Increment/decrement operations
- Custom update functions
- Minimum value constraints
- Exponential backoff on conflict
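The read-modify-write loop these features describe can be sketched against the core API shown earlier. This is a hedged sketch: the recursion, the JSON round-trip, and the retry limit are assumptions about how such a loop might be structured, not the example module's actual source.

```elixir
# Hedged sketch of a CAS update loop: read value and metadata, apply the
# caller's function, and write back only if the ETag/version still match.
# Retries on :precondition_failed; all names besides the ObjectStoreX calls
# documented in this README are illustrative.
defmodule CasLoopSketch do
  def update(store, path, fun, retries \\ 5) do
    {:ok, data} = ObjectStoreX.get(store, path)
    {:ok, meta} = ObjectStoreX.head(store, path)
    new_data = data |> Jason.decode!() |> fun.() |> Jason.encode!()

    case ObjectStoreX.put(store, path, new_data,
           mode: {:update, %{etag: meta.etag, version: meta.version}}) do
      {:ok, _} ->
        {:ok, new_data}

      {:error, :precondition_failed} when retries > 0 ->
        # Lost the race: another writer changed the object; re-read and retry
        update(store, path, fun, retries - 1)

      error ->
        error
    end
  end
end
```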
### 3. HTTP Cache (`examples/http_cache.ex`)

Implement efficient caching with ETag-based validation:
```elixir
alias ObjectStoreX.Examples.HTTPCache

# Start cache
{:ok, cache} = HTTPCache.start_cache("my_cache")

# First fetch - cache miss
{:ok, data, :miss} = HTTPCache.get_cached(store, "data.json", cache)

# Second fetch - cache hit (no data transfer if unchanged)
{:ok, data, :hit} = HTTPCache.get_cached(store, "data.json", cache)

# Get statistics
stats = HTTPCache.stats(cache)
# => %{hits: 1, misses: 1, entries: 1, hit_rate: 50.0}

# Manual invalidation
HTTPCache.invalidate(cache, "data.json")

# Clear all entries
HTTPCache.clear(cache)
```

Features:

- ETag-based conditional GET with `if_none_match`
- ETS-backed in-memory cache
- Automatic cache invalidation on changes
- Hit/miss statistics and hit rate tracking
- Support for `if_modified_since` timestamps
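The `hit_rate` figure in the stats map above is simple to derive from the hit and miss counts. A minimal sketch, with the caveat that rounding to one decimal place is an assumption for illustration:

```elixir
# Computes cache hit rate as a percentage, as reflected in HTTPCache.stats/1.
# One-decimal rounding is an assumption; a zero-request cache reports 0.0.
defmodule HitRateSketch do
  def hit_rate(hits, misses) when hits + misses > 0 do
    Float.round(hits / (hits + misses) * 100, 1)
  end

  def hit_rate(_hits, _misses), do: 0.0
end
```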
## Streaming and Bulk Operations

### Streaming Uploads/Downloads

```elixir
# Stream upload from a file
stream = File.stream!("large-file.bin", [], 64 * 1024)
:ok = ObjectStoreX.put_stream(store, "large-file.bin", stream)

# Stream download to a file
ObjectStoreX.get_stream(store, "large-file.bin")
|> Stream.into(File.stream!("downloaded.bin"))
|> Stream.run()
```

### Bulk Operations
```elixir
# Delete multiple objects
paths = ["file1.txt", "file2.txt", "file3.txt"]
:ok = ObjectStoreX.delete_many(store, paths)

# Get multiple byte ranges
ranges = [{0, 1000}, {5000, 6000}]
{:ok, chunks} = ObjectStoreX.get_ranges(store, "file.bin", ranges)
```

### Conditional Copy Operations
```elixir
# Atomic copy (only if the destination doesn't exist)
case ObjectStoreX.copy_if_not_exists(store, "source.txt", "backup.txt") do
  :ok -> :copied
  {:error, :already_exists} -> :destination_exists
  {:error, :not_supported} -> :provider_not_supported
end

# Atomic rename
case ObjectStoreX.rename_if_not_exists(store, "old.txt", "new.txt") do
  :ok -> :renamed
  {:error, :already_exists} -> :destination_exists
end
```

## Testing
Run the test suite:

```shell
mix test
```

Run integration tests:

```shell
mix test test/integration/
```

Run quality checks:

```shell
./bin/qa_check.sh
```

## Provider Support Matrix
| Feature | S3 | Azure | GCS | Local | Memory |
|---------|----|-------|-----|-------|--------|
| `PutMode::Create` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `PutMode::Update` (ETag) | ✅ | ✅ | ✅ | ✅ | ✅ |
| `PutMode::Update` (Version) | ✅ | ❌ | ✅ | ❌ | ❌ |
| `if_match` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `if_none_match` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `if_modified_since` | ✅ | ✅ | ✅ | ✅ | ✅ |
| Attributes | ✅ | ✅ | ✅ | ⚠️ | ⚠️ |
| Tags | ✅ | ❌ | ✅ | ❌ | ❌ |
| `copy_if_not_exists` | ❌ | ✅ | ✅ | ✅ | ✅ |
Legend:
- ✅ Fully supported
- ⚠️ Partially supported (limited attributes)
- ❌ Not supported
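Per the matrix, `copy_if_not_exists` returns `{:error, :not_supported}` on S3, so callers may want a fallback. A hedged sketch: the get-then-create fallback below is an assumption for illustration, not a library feature, and it is not atomic between the read and the write (though `mode: :create` still prevents overwriting an existing destination).

```elixir
# Hedged sketch: fall back to get + create-mode put when the provider
# does not support atomic copy. Only the ObjectStoreX calls documented
# in this README are used; the wrapper itself is illustrative.
defmodule CopyFallbackSketch do
  def copy_if_not_exists(store, src, dst) do
    case ObjectStoreX.copy_if_not_exists(store, src, dst) do
      {:error, :not_supported} ->
        # Non-atomic fallback: src may change between the get and the put.
        with {:ok, data} <- ObjectStoreX.get(store, src) do
          ObjectStoreX.put(store, dst, data, mode: :create)
        end

      result ->
        result
    end
  end
end
```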
## Documentation

### Guides

- Getting Started - Installation and basic usage
- Configuration - Provider-specific configuration options
- Streaming - Efficient handling of large files
- Distributed Systems - Locks, CAS, and caching patterns
- Error Handling - Comprehensive error handling and retry strategies

### API Reference

Full API documentation is available on HexDocs.
## Performance
ObjectStoreX uses high-performance Rust NIFs with async I/O for optimal throughput:
| Operation | Expected Performance |
|---|---|
| Basic put/get | ~50ms (network dependent) |
| CAS put (success) | Same as regular put |
| CAS put (conflict) | <30ms (fast fail) |
| Conditional get (not modified) | <20ms (no transfer) |
| Streaming (large files) | ~100MB/s+ |
| Bulk operations | Parallel execution |
## License
Copyright 2024-2025. Licensed under the Apache License, Version 2.0.
## Credits
Built with:
- object_store - High-performance Rust object storage abstraction
- Rustler - Safe Rust bridge for Elixir NIFs