OriginChain docs
06 · deploy

Deploy — what the managed tier gives you.

OriginChain is a managed SaaS — you do not run the substrate, we do. This page walks through what the managed tier provisions on your behalf, how to pick a region/tier, which add-ons unlock what, and what the wider topology looks like once it's live.

Note for operators
Most customers can skip ahead to provisioning. The architecture and topology sections below are mainly informational — relevant if you're on an Enterprise BYO-cloud variant where your team operates the box on infrastructure we don't own.

Tenant architecture

One EC2 instance per tenant — writer plus an optional sync follower in the same region — fronted by a per-tenant ACM wildcard cert and a single Argon2-hashed bearer token. No shared load balancer, no shared disk, no shared memory. The control plane (signup, billing, console) is global; the data plane never crosses the region you picked.

Sharding, replicas, and high availability are engine-level concerns, not infrastructure tricks. We do not orchestrate Kubernetes — there is nothing to schedule because there is one substrate per tenant.

Choosing a region + tier

Tier         Use                                                                  RAM      Topology
Whisper      Prototyping & dev environments                                       2 GB     Single AZ. No SLA.
Thunder      Production OLTP for small/mid teams                                  8 GB     Sync replica, 99.9% target.
Storm        Production with strict SLA + analytics blends                        32 GB    Sync replica, 99.95% SLA.
Enterprise   Custom: BYO-cloud, HIPAA BAA, GDPR DPA, dedicated capacity by spec   n/a      Per-contract terms.

Pricing multiplier: Mumbai is the base price; every other region carries a 1.15× multiplier on compute and storage to cover cross-region operational overhead.
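A quick sketch of how the multiplier applies. The base compute and storage prices below are hypothetical placeholders; only the 1.15× rule and the Mumbai base come from this page.

```python
# Hypothetical illustration of the regional pricing multiplier.
# Dollar amounts are made-up placeholders; only the 1.15x rule is real.
REGION_MULTIPLIER = {
    "ap-south-1": 1.0,  # Mumbai: base price
}
DEFAULT_MULTIPLIER = 1.15  # every other region

def monthly_cost(compute_usd: float, storage_usd: float, region: str) -> float:
    """Apply the region multiplier to the compute + storage line items."""
    m = REGION_MULTIPLIER.get(region, DEFAULT_MULTIPLIER)
    return round((compute_usd + storage_usd) * m, 2)

print(monthly_cost(100.0, 20.0, "ap-south-1"))    # 120.0
print(monthly_cost(100.0, 20.0, "eu-central-1"))  # 138.0
```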

Selecting add-ons

Base tiers ship with the core substrate, single-row CAS, boolean FTS, and sealed-segment PITR. Everything else is an opt-in monthly add-on you can add or remove any time. See /pricing#addons for line-item costs.

SQL Pro
Analytics workloads — JOIN, OUTER JOIN, GROUP BY, HAVING.
Vector Search
Embedding workloads — HNSW ANN with SIMD, filtered top-k.
Full-Text Pro
Search workloads — BM25, phrase, Unicode, Snowball stemming.
Graph
Relationship workloads — neighbors, BFS, path, weighted Dijkstra.
Transactions
Multi-row OLTP — snapshot isolation with optimistic conflict detection.
Intra-Segment PITR
Sub-second restore granularity. Continuous tail-shipping.
Multi-Writer Cluster
Active-active cross-region replication. Enterprise — contact sales.

Add-ons attach to any tier and bill on the next invoice. Toggle on or off from the console at any time; line items prorate to the day.
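A minimal sketch of to-the-day proration for the month an add-on is toggled. The to-the-day rule is from this page; the rounding behavior and inclusive enable-day counting are assumptions.

```python
# Illustrative proration of an add-on enabled mid-month.
# Assumption: the enable day itself is billed, and cents are rounded.
from datetime import date
import calendar

def prorated_charge(monthly_usd: float, enabled_on: date) -> float:
    """Prorate an add-on to the day for its first (partial) month."""
    days_in_month = calendar.monthrange(enabled_on.year, enabled_on.month)[1]
    days_active = days_in_month - enabled_on.day + 1  # inclusive of enable day
    return round(monthly_usd * days_active / days_in_month, 2)

# Enabled April 16: 15 of 30 days active on a $30/month add-on.
print(prorated_charge(30.0, date(2025, 4, 16)))  # 15.0
```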

Provisioning

Click-to-running takes ~30 seconds. There are no manual steps; the console drives the whole flow.

1. Pick region + tier
Console → New instance. Mumbai (ap-south-1) is the home region; Frankfurt, Virginia, Tokyo, Sydney carry a 1.15× pricing multiplier on compute + storage.
2. Click create
We provision a dedicated EC2 (writer) and, on paid tiers, a sync follower in the same region — both behind a TLS 1.3 listener with an ACM-issued wildcard cert under <region>.db.originchain.ai.
3. Copy your bearer
Console mints an Argon2-hashed bearer at instance creation. One active token at a time. Rotation is a single click and emits an audit-log entry; the prior token is honored for 60s to cover rolling deploys.
4. Connect
Endpoint is <tenant>.<region>.db.originchain.ai. Median time from click to first 200 OK on /health is ~30 seconds.
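The connect step above can be sketched in a few lines. The hostname pattern and /health path are from this page; the standard `Authorization: Bearer` header shape and the tenant/token values are assumptions (the request is built but not sent here).

```python
# Sketch of a first connection. Hostname pattern and /health are from the
# docs; the Bearer auth header shape and example values are assumptions.
import urllib.request

def endpoint(tenant: str, region: str) -> str:
    return f"https://{tenant}.{region}.db.originchain.ai"

def health_request(tenant: str, region: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GET against the tenant's /health route."""
    return urllib.request.Request(
        endpoint(tenant, region) + "/health",
        headers={"Authorization": f"Bearer {token}"},
    )

req = health_request("acme", "ap-south-1", "oc_example_token")
print(req.full_url)  # https://acme.ap-south-1.db.originchain.ai/health
```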

DNS & TLS

Each tenant gets a Route 53 A record auto-provisioned at <tenant>.<region>.db.originchain.ai pointing at the writer's public IP. The wildcard ACM cert under *.<region>.db.originchain.ai auto-renews; you never see the private key.

On failover the same Route 53 record is UPSERTed to the new writer's IP with a 60-second TTL — propagation is typically sub-minute.
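For context, a Route 53 UPSERT with a 60-second TTL looks roughly like the ChangeBatch payload below (the shape boto3's change_resource_record_sets accepts). The control-plane code that performs this is internal, so treat this as an illustrative reconstruction; the tenant name and IP are placeholders.

```python
# Illustrative Route 53 ChangeBatch for a failover repoint. Mirrors the
# boto3 change_resource_record_sets payload shape; values are placeholders.
def failover_change_batch(tenant: str, region: str, new_writer_ip: str) -> dict:
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{tenant}.{region}.db.originchain.ai",
                "Type": "A",
                "TTL": 60,  # 60 s TTL keeps propagation sub-minute
                "ResourceRecords": [{"Value": new_writer_ip}],
            },
        }]
    }

batch = failover_change_batch("acme", "ap-south-1", "203.0.113.7")
print(batch["Changes"][0]["ResourceRecordSet"]["Name"])
```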

Bearer rotation

Rotate from the console at any time. The new token activates immediately and the prior token stays honored for 60 seconds so a rolling deploy can swap without a 401-storm. Every rotation lands an entry in the per-tenant audit log with actor, timestamp, and source IP.
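The 60-second grace window can be modeled as follows. Only the one-active-token rule and the 60 s overlap come from this page; the class, token values, and injected clock are illustrative.

```python
# Toy model of the rotation grace window: the prior token stays valid
# for 60 s after rotation so rolling deploys can swap without 401s.
GRACE_SECONDS = 60  # from the docs; everything else is illustrative

class BearerStore:
    def __init__(self, token: str):
        self.active = token
        self.prior = None
        self.rotated_at = None

    def rotate(self, new_token: str, now: float) -> None:
        self.prior, self.active, self.rotated_at = self.active, new_token, now

    def is_valid(self, token: str, now: float) -> bool:
        if token == self.active:
            return True
        in_grace = self.rotated_at is not None and now - self.rotated_at < GRACE_SECONDS
        return in_grace and token == self.prior

store = BearerStore("tok_a")
store.rotate("tok_b", now=100.0)
print(store.is_valid("tok_a", now=130.0))  # True: inside the 60 s window
print(store.is_valid("tok_a", now=200.0))  # False: grace expired
```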

Replication topology

Paid tiers run writer + follower in the same region with sync replication (--sync-min-acks=1 default). Commits durably ack only after the follower has the frame on disk: RPO is 0 in steady state, RTO is ~25 seconds via the promote-follower flow (see ops → failover). Both nodes share the same epoch lease in S3, which fences split-brain.
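The ack and fencing rules above can be sketched as a toy model. Only the --sync-min-acks=1 default and the epoch-lease fencing idea come from this page; the class names and the lease representation are illustrative.

```python
# Toy model of sync replication + epoch-lease fencing. A commit acks only
# when (a) this node still holds the epoch lease and (b) enough followers
# have the frame durably on disk. Names are illustrative.
class EpochLease:
    def __init__(self):
        self.epoch = 1
        self.holder = "writer"

class Writer:
    def __init__(self, name: str, lease: EpochLease, sync_min_acks: int = 1):
        self.name, self.lease, self.sync_min_acks = name, lease, sync_min_acks

    def commit(self, follower_acks: int) -> bool:
        if self.lease.holder != self.name:
            return False  # fenced out: a promoted follower owns the epoch
        return follower_acks >= self.sync_min_acks  # RPO 0 in steady state

lease = EpochLease()
writer = Writer("writer", lease)
print(writer.commit(follower_acks=1))  # True: follower has the frame
lease.holder = "promoted-follower"     # failover takes the lease
print(writer.commit(follower_acks=1))  # False: old writer is fenced
```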

Cross-region active-active replication is available on Enterprise via the Multi-Writer Cluster add-on. On other tiers, every byte stays in the region you picked.

Migration from existing data

Importing from Postgres, DynamoDB, MongoDB, or a flat CSV / Parquet dump? See /docs/migrate for the supported source-shapes and the bulk-load endpoint that bypasses per-row validation for the initial backfill.