
OriginChain vs Neon. A serverless Postgres and an AI-native database, side by side.

Neon reshaped Postgres for the cloud — separated compute from storage, made database branching cheap, and pushed scale-to-zero into a category that had been treated as always-on. OriginChain is a managed AI-native database where rows, embeddings, full-text postings, and graph edges live in one substrate and commit atomically. This page is a fair, technical look at where each one is the right call.

01 · choose the right one

The honest split. Pick the one that matches your workload.

Neon is one of the more interesting things to happen to Postgres in years. Separating compute from storage is not just an architecture choice — it makes branching, point-in-time recovery, and scale-to-zero cheap in a way they have never been on a traditional Postgres deployment. The interesting question for your team is whether your workload is Postgres-shaped, where serverless and branching are the killer features, or whether your workload is AI-shaped, where atomic multi-shape writes and native vector / FTS / graph are the killer features. The answer to that question decides the database.

choose Neon if
  • Workload is Postgres-shaped — relational primary, occasional vector via pgvector, predictable SQL access patterns.
  • Database branching for preview environments is a killer feature for your team's review-app workflow.
  • Bursty or low-utilisation traffic where scale-to-zero materially changes the bill.
  • You are comfortable layering pgvector and other extensions, and tracking their compatibility with the platform.
choose OriginChain if
  • AI features are the workload — embeddings, hybrid search, graph context, natural language are equal citizens to rows.
  • You want vector / full-text / graph as native shapes, not as extensions you have to wire and version-pin.
  • Rows, embeddings, full-text postings, and graph edges have to commit atomically in one round-trip.
  • Single-tenant compute matters; you'd rather not share storage and compute pools with other customers.

02 · where neon wins

Postgres, reshaped for the cloud.

Neon's architectural bet is that Postgres's storage layer should live somewhere that scales independently of the query engine. The result is a Postgres that branches like git — you can fork an entire database in seconds, run a destructive migration on the fork, point a preview environment at it, and throw the branch away when the pull request closes. Teams that adopt that workflow tend to keep it forever; the alternative of seeding a fresh database for every preview environment, or sharing a stale staging database across the entire team, is genuinely worse.

Scale-to-zero is the second big win. Most application databases sit idle most of the time, and most managed Postgres vendors charge you for that idle. Neon spins compute down when there are no queries and brings it back fast on the next request, which materially changes the bill for low-utilisation projects, hobby workloads, and test environments. Combined with the consumption-based storage pricing, the cost shape is much closer to "what you actually used" than the traditional always-on instance.

Underneath all of it, you still get Postgres. Every ORM works. Every migration tool works. pgvector is available as an extension, tsvector / GIN are there for full-text, and recursive CTEs handle a fair amount of graph work. If your application is dominantly relational and the AI features are additive, Neon gives you an unusually nice cloud-native Postgres without making you reach outside the ecosystem.

03 · where originchain is different

Different substrate. Atomic multi-shape from day one.

OriginChain is not Postgres-on-object-storage. It is a different substrate — a single hash-keyed key-value store — designed from the start to hold rows, secondary indexes, vector embeddings, HNSW graphs, BM25 full-text postings, and graph edges in the same place under different domain prefixes. The query engine compiles SQL, vector top-k, BM25 search, graph traversal, and natural-language questions to the same plan tree. There is no extension stack to wire and no version drift among pgvector, ParadeDB, and Apache AGE — the AI shapes are not extensions, they are the substrate. HNSW ships with two named operating points: high_recall, at recall@10 = 0.96 with p99 around 109 ms on 100k vectors, and fast, at recall@10 ≈ 0.69 with p99 around 37 ms.
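Choosing between those two operating points is a recall-floor decision. The numbers below come from this page (100k vectors); the selection helper itself is an illustrative sketch, not an OriginChain API:

```python
# The two HNSW operating points named above, as data. The figures are
# from this page (100k vectors); pick() is an illustrative helper, not
# part of any real client library.

OPERATING_POINTS = {
    "high_recall": {"recall_at_10": 0.96, "p99_ms": 109},
    "fast":        {"recall_at_10": 0.69, "p99_ms": 37},
}

def pick(min_recall: float) -> str:
    # Cheapest (lowest p99) point that still clears the recall floor.
    ok = [(v["p99_ms"], name) for name, v in OPERATING_POINTS.items()
          if v["recall_at_10"] >= min_recall]
    if not ok:
        raise ValueError(f"no operating point reaches recall {min_recall}")
    return min(ok)[1]

assert pick(0.9) == "high_recall"   # strict floor forces the slow point
assert pick(0.5) == "fast"          # loose floor takes the 37 ms point
```

If your product can tolerate recall@10 near 0.7, the fast point buys roughly a 3x p99 improvement.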

The consequence is atomicity that crosses shapes. A single insert writes the row, every secondary index entry, every forward and reverse edge, the BM25 postings, and the vector embedding in one batch. That batch lands as one WAL frame, hits one fsync, and broadcasts to the follower as one unit. With Neon's separation of compute and storage, Postgres's transactional guarantees still hold within Postgres, but the ops shape carries new tunables — page-server caching, autosuspend behaviour, branch retention, compute-size scaling. OriginChain has fewer moving parts because there is no separation-of-storage layer to configure; the ops surface is simpler by construction.
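The shape of that write path can be sketched with a toy model: one hash-keyed store, domain prefixes for each shape, and a write_batch that lands as a single frame. The prefixes and the write_batch name follow the description above, but everything else here is illustrative, not OriginChain's real format or API:

```python
# Toy model of a prefix-keyed substrate with atomic multi-shape writes.
# Illustrative only: the domain prefixes and write_batch name mirror the
# text above; the data layout and API are invented for this sketch.

class Substrate:
    def __init__(self):
        self.kv = {}    # one hash-keyed store for every shape
        self.wal = []   # each entry is one frame = one batch

    def write_batch(self, ops):
        """All-or-nothing: one frame, one (simulated) fsync."""
        self.wal.append(list(ops))   # single WAL frame for the whole batch
        for key, value in ops:
            self.kv[key] = value     # apply only after the frame is durable

db = Substrate()
doc_id = "doc:42"
db.write_batch([
    (f"row/{doc_id}", {"title": "hello"}),     # the row itself
    (f"idx/title/hello/{doc_id}", b""),        # secondary index entry
    (f"vec/{doc_id}", [0.1, 0.2, 0.3]),        # embedding
    (f"fts/hello/{doc_id}", 1),                # BM25 posting
    (f"edge/fwd/{doc_id}/user:7", b""),        # forward edge
    (f"edge/rev/user:7/{doc_id}", b""),        # reverse edge
])

assert len(db.wal) == 1   # six logical writes, one frame
```

Six logical writes, one frame, one fsync: that is the whole trick, and it only works because every shape lives in the same keyspace.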

Tenancy is physical. Each customer gets a dedicated single-tenant database in a region of their choice, with its own HTTPS endpoint, its own bearer token, its own write-ahead log, its own encrypted disk. Neon shares compute pools and a shared object-store backing, which is the right trade-off for branching and scale-to-zero; OriginChain's trade-off is the opposite, optimising for predictable per-tenant performance and isolation.

Natural language is part of the same surface. /v1/ask compiles an English question to the same plan AST as a hand-written query — same cost model, same EXPLAIN output, same per-node statistics. The model emits a plan; the executor runs it. There is no LLM on the hot path, no token-priced query layer to budget for, and no second service to deploy alongside the database.
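A call to that surface is an ordinary HTTPS request. The /v1/ask path and bearer-token auth come from this page; the hostname and the request/response field names below are assumptions for illustration, not a documented wire format:

```python
import json

# Sketch of a /v1/ask call. The endpoint path and bearer-token auth are
# described on this page; the host and the "question" field name are
# assumptions made for this example.
ENDPOINT = "https://example-tenant.originchain.example/v1/ask"  # hypothetical host
payload = {"question": "Which suppliers shipped late last quarter?"}
headers = {
    "Authorization": "Bearer <token>",   # per-tenant bearer token
    "Content-Type": "application/json",
}
body = json.dumps(payload)
# e.g. requests.post(ENDPOINT, data=body, headers=headers) — the response
# carries the result plus the same EXPLAIN-style plan a hand-written
# query would produce, since both compile to the same plan AST.
```

Because the model emits a plan rather than answering directly, the request above is budgeted and explained exactly like any other query.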

04 · the atomicity gap

What "one INSERT, one WAL frame" actually buys you.

The standard architecture for an AI feature on Neon is a Postgres transaction that touches a row, a tsvector column, and a pgvector column. Inside one transaction, that is genuinely atomic — Postgres's MVCC keeps the writes consistent. The seam shows up when the AI surface grows. Add a separate full-text engine for typo-tolerant retrieval, a graph store for context that does not fit recursive CTE, or an out-of-process embedding worker, and the consistency story splits across services. Most teams paper over the gap with idempotency keys and reconciliation jobs, all of which work until they don't.

OriginChain folds the entire derived state into the write path. The row, the embedding, the full-text postings, every edge update — all of them are part of the same write_batch, which lands as one WAL frame. A torn frame is dropped on recovery, so there is no half-written state to clean up. That property is verified at runtime: a panic-injection harness deliberately crashes the writer at four boundaries inside the WAL flush, and recovery is asserted to equal a prefix of the op stream every time. We run it for a million deterministic iterations on every CI build.
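The "recovery equals a prefix of the op stream" property is easy to model. The sketch below uses per-frame length + CRC headers, which is a common WAL design and an assumption here, not OriginChain's actual on-disk layout:

```python
import struct
import zlib

# Toy WAL with per-frame checksums: a torn tail frame fails its CRC and
# is dropped on recovery, so the replayed state is always a clean prefix
# of the op stream. Framing format is an assumption for this sketch.

def encode_frame(data: bytes) -> bytes:
    # 4-byte length + 4-byte CRC32 header, then the payload.
    return struct.pack(">II", len(data), zlib.crc32(data)) + data

def recover(log: bytes) -> list[bytes]:
    frames, off = [], 0
    while off + 8 <= len(log):
        length, crc = struct.unpack_from(">II", log, off)
        data = log[off + 8 : off + 8 + length]
        if len(data) < length or zlib.crc32(data) != crc:
            break                       # torn or corrupt tail: stop here
        frames.append(data)
        off += 8 + length
    return frames

log = encode_frame(b"op1") + encode_frame(b"op2") + encode_frame(b"op3")
torn = log[:-2]                         # simulate a crash mid-flush of op3
assert recover(torn) == [b"op1", b"op2"]   # clean prefix, no half-state
```

A panic-injection harness in this model would crash the writer at arbitrary byte offsets and assert exactly what the last line asserts: recovery never yields a partial frame.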

Reads compose the same way. A query can filter on structured columns, rank by vector similarity, intersect with a BM25 search, and join across a graph edge — in one round trip, against one consistent snapshot. With pgvector + tsvector inside one Postgres statement, you can get a long way; once a fourth shape (or a non-Postgres engine) enters the picture, you are stitching results in application code. OriginChain exists because that boundary is exactly where AI applications keep getting bitten.
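What "stitching results in application code" looks like is worth making concrete. The sketch below does the stitching by hand over one toy snapshot, using reciprocal-rank fusion, one common way to merge a vector ranking with a keyword ranking; inside OriginChain this composition happens in the planner, so the code is illustrative, not a client API:

```python
# Toy hybrid read over one snapshot: a structured filter, then fusion of
# a vector ranking and a keyword ranking via reciprocal-rank fusion
# (RRF). Illustrative only; OriginChain composes these shapes natively.

snapshot = {
    "a": {"year": 2024}, "b": {"year": 2023}, "c": {"year": 2024},
}
vector_rank = ["c", "a", "b"]   # doc ids ordered by embedding similarity
keyword_rank = ["c", "a"]       # doc ids ordered by BM25 score

def rrf(rankings, k=60):
    # Each list contributes 1 / (k + rank) per document; higher is better.
    scores = {}
    for ranking in rankings:
        for pos, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + pos + 1)
    return sorted(scores, key=scores.get, reverse=True)

eligible = {d for d, row in snapshot.items() if row["year"] == 2024}
fused = [d for d in rrf([vector_rank, keyword_rank]) if d in eligible]
assert fused == ["c", "a"]      # filtered, fused, one consistent view
```

The fragile part in a multi-service stack is not this twenty-line merge; it is that each input list comes from a different engine with a different snapshot of the data.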

05 · side by side

The detailed comparison.

A capability-by-capability look. None of this is meant to score points against Neon — it is meant to make the trade-off explicit so you can pick correctly for your workload.

Capability | Neon | OriginChain
Core engine | Postgres with separated compute/storage | Hash-keyed k/v substrate, multi-shape
Tenancy model | Multi-tenant compute on shared storage | Single-tenant per managed instance
Branching | Copy-on-write database branches | Snapshot-based clones; no per-branch isolation
Scale-to-zero | Yes, fast cold start | Always-on per-tenant compute
Vector search | pgvector extension | Native HNSW + f32 SIMD
Full-text | tsvector / GIN built-in | Native BM25 + phrase + stemming
Graph traversal | Recursive CTE | Native fwd / rev edges + Dijkstra
Atomicity across shapes | Per-row within a Postgres transaction | Row + index + embedding + posting + edge in one WAL frame
Natural-language query | Bring-your-own LLM layer | /v1/ask endpoint, plan-bound
Replication | Postgres replicas + S3-backed storage | Active-passive, sync_one / sync_quorum, RPO=0 paid tier
Pricing shape | Compute hours + storage usage | Single-tenant compute tier + flat add-ons
Operations footprint | Managed Postgres with serverless ops | One service that replaces row-store + vector + FTS + graph
06 · operations

Two different operational stories.

Neon's operational story is unusual for a Postgres vendor: the database can sleep when nobody is using it, branch in seconds, and bill in something close to "what you actually consumed." That is genuinely useful, especially for preview environments and projects where the median traffic shape is bursty. The trade-off is that there are knobs unique to the architecture — autosuspend windows, page-server caching, branch retention, compute sizing — and Postgres extension compatibility lives within whatever the platform supports for that release.

OriginChain is managed-only and single-tenant by design. Each tenant gets a dedicated database in a region of their choice, with its own HTTPS endpoint, its own bearer token, and its own write-ahead log. There is no shared load balancer, no shared disk, and no shared compute pool between customers. We provision, patch, back up, replicate, and upgrade. You post requests, get JSON back. The trade-off is real: you give up scale-to-zero and copy-on-write branching in exchange for a database where vector / full-text / graph / NL are first-class, atomic across shapes, and tenancy is physical.

Failover is structural. Active-passive replication ships every committed WAL frame to a follower in real time, with per-write opt-in to async, sync_one, or sync_quorum. On paid tiers, sync mode delivers RPO = 0 — no acknowledged write is ever lost on writer failure. A strongly-consistent lease arbitrates which node is primary; takeover is around twenty-five seconds end to end, and a snapshot transfer brings new replicas online without stalling the writer.
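The three modes named above differ only in when a commit is acknowledged. A minimal sketch of that decision, assuming a quorum is counted over a majority of followers (the exact counting rule is an assumption, not OriginChain's documented behaviour):

```python
# Sketch of commit acknowledgement under the three replication modes
# named above. The quorum rule (majority of followers) is an assumption
# for this model, not OriginChain's implementation.

def committed(mode: str, follower_acks: int, followers: int) -> bool:
    if mode == "async":
        return True                          # writer durability only
    if mode == "sync_one":
        return follower_acks >= 1            # one follower has the frame
    if mode == "sync_quorum":
        return follower_acks >= followers // 2 + 1
    raise ValueError(f"unknown mode: {mode}")

# With 3 followers, sync_quorum needs 2 acks before the client sees
# success — which is exactly why an acknowledged write survives the
# loss of the writer (RPO = 0).
assert committed("sync_quorum", 2, 3)
assert not committed("sync_quorum", 1, 3)
```

Because the opt-in is per write, a bulk backfill can run async while the checkout path runs sync_quorum against the same database.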

Two databases, two different bets. Pick the one your workload deserves.

Plenty of teams run Neon for the long tail of relational state where branching and scale-to-zero earn their keep, and put OriginChain in front of the AI surface — embeddings, hybrid search, graph context, NL queries against the same content. The two are not in a zero-sum fight. The quickstart walks you from signup to your first English query in under ten minutes; pricing lays out exactly what each tier costs.