OriginChain
industries · 6 domains

The AI database, in your industry's vocabulary. One bearer token. One endpoint. Every query shape.

OriginChain is an AI-native database that holds SQL rows, vector embeddings, full-text indexes, and graph edges on the same hash-keyed store — single-tenant, region-isolated, with p99 reads under 8 ms. Below: how teams in six industries use it.

pick your domain

Six industries. The same database underneath.

Each page below opens with the customer pain, the OriginChain answer, concrete latency numbers, and three to five real curl snippets you can paste into a terminal.

Trading, risk, settlement

Financial services

Trading desks reconcile fills, walk counterparty exposure, and search past patterns against three different stores — a Postgres, an Elasticsearch, and a vector index — and pay the consistency tax every time.

4 worked examples
Bedside, triage, audit

Healthcare

Care teams need vitals on the second, clinicians need similar-case retrieval over de-identified summaries, and compliance needs a tamper-evident audit trail — three product surfaces that today come from three different vendors.

4 worked examples
Fleet, routing, supply graph

Logistics

Fleet operators need answers that expire in seconds — where the trucks are, which depots are hubs, which shipments are late, what drivers logged about a damaged seal — and they're stitching telemetry, a graph database, and a search index together by hand.

4 worked examples
Contracts, matters, audit

Legal & compliance

Legal teams keep contracts in DMS, search them with one tool, retrieve similar clauses with another, and prove who looked at what with a third — and the audit trail rarely lines up across systems.

4 worked examples
Inventory, recommendations, reviews

Retail & e-commerce

Retailers run inventory in one system, recommendations in another, review search in a third, and weekly analytics in a Snowflake warehouse — and the stock count never matches across them.

4 worked examples
Catalog, search, recommendations

Media & content

Newsrooms and streaming teams keep their catalog in one system, search in another, recommendations in a third, and analytics in a warehouse — and every recommendation feels a day stale.

4 worked examples
five surfaces, one store

Every shape an AI workload needs, against the same data.

Same bearer token, same single-tenant instance, same backup path. Pick the surface that fits the question.

SQL
POST /v1/sql

Standard SQL with JOIN, GROUP BY, HAVING, and window functions. Reconcile, aggregate, slice — same syntax your team already knows.
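A call might look like the sketch below. The `/v1/sql` path comes from this page; the `sql` field name, the host, and the token are placeholders, not a verified request shape.

```shell
# Placeholders -- substitute your instance URL and bearer token.
HOST="${ORIGINCHAIN_HOST:-https://example.invalid}"
TOKEN="${ORIGINCHAIN_TOKEN:-YOUR_TOKEN}"

# Assumed body shape: one "sql" field carrying standard SQL.
BODY='{"sql":"SELECT counterparty, SUM(notional) AS exposure FROM fills GROUP BY counterparty HAVING SUM(notional) > 1000000"}'

# Sanity-check the payload parses as JSON before sending.
echo "$BODY" | python3 -m json.tool >/dev/null && echo "body ok"

curl -sf -X POST "$HOST/v1/sql" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no live endpoint; command shown for shape only"
```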

Vector · HNSW
POST /v1/vector/topk

Cosine, dot-product, or L2 distance against your own embeddings, with a tunable speed/recall trade-off. The default high_recall mode hits recall@10 = 0.96 at 100k vectors with p99 109 ms; fast mode runs p99 37 ms. f32 SIMD distance kernels, with metadata filters applied during HNSW graph traversal.
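A top-k query might be shaped like this. The `/v1/vector/topk` path, the `high_recall` mode, and the cosine metric come from this page; the field names (`vector`, `k`, `metric`, `mode`, `filter`) are assumptions for illustration, and the 3-dimensional vector stands in for a real embedding.

```shell
HOST="${ORIGINCHAIN_HOST:-https://example.invalid}"
TOKEN="${ORIGINCHAIN_TOKEN:-YOUR_TOKEN}"

# Assumed body shape: query embedding, k, metric, recall mode, metadata filter.
BODY='{"vector":[0.12,-0.04,0.88],"k":10,"metric":"cosine","mode":"high_recall","filter":{"region":"ap-south-1"}}'

echo "$BODY" | python3 -m json.tool >/dev/null && echo "body ok"

curl -sf -X POST "$HOST/v1/vector/topk" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no live endpoint; command shown for shape only"
```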

Full-text · BM25
POST /v1/fts/search

Unicode tokenizer, stop-words, language stemming. Phrase matching, boolean OR, and field-scoped queries with the same scoring you'd expect from a dedicated search engine.
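A phrase-plus-OR search might look like this sketch. The `/v1/fts/search` path is from this page; the `query`, `fields`, and `limit` field names are assumptions, as is the `driver_notes` field being searched.

```shell
HOST="${ORIGINCHAIN_HOST:-https://example.invalid}"
TOKEN="${ORIGINCHAIN_TOKEN:-YOUR_TOKEN}"

# Assumed body shape: phrase match in quotes, OR operator, field scoping.
BODY='{"query":"\"damaged seal\" OR tamper","fields":["driver_notes"],"limit":20}'

echo "$BODY" | python3 -m json.tool >/dev/null && echo "body ok"

curl -sf -X POST "$HOST/v1/fts/search" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no live endpoint; command shown for shape only"
```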

Graph traversal
POST /v1/graph/{op}

BFS, DFS, and weighted Dijkstra over the ref edges already in your data — no separate graph DB, no separate replication lag.
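Substituting `bfs` for `{op}` (one of the traversals this page lists), a call might look like this. The body fields (`start`, `edge_type`, `max_depth`) and the example node ID are assumptions.

```shell
HOST="${ORIGINCHAIN_HOST:-https://example.invalid}"
TOKEN="${ORIGINCHAIN_TOKEN:-YOUR_TOKEN}"

# Assumed body shape: start node, edge type to follow, traversal depth cap.
BODY='{"start":"depot:BLR-1","edge_type":"ships_to","max_depth":3}'

echo "$BODY" | python3 -m json.tool >/dev/null && echo "body ok"

curl -sf -X POST "$HOST/v1/graph/bfs" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no live endpoint; command shown for shape only"
```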

Natural language
POST /v1/ask

Plain English in. JSON out. Compiled to a deterministic plan, cached after first touch, served against the same store.
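Plain English in, so the sketch is the shortest of the five. The `/v1/ask` path is from this page; the `question` field name is an assumption, and the response shape is whatever JSON your instance returns.

```shell
HOST="${ORIGINCHAIN_HOST:-https://example.invalid}"
TOKEN="${ORIGINCHAIN_TOKEN:-YOUR_TOKEN}"

# Assumed body shape: a single natural-language question.
BODY='{"question":"Which depots had the most late shipments last week?"}'

echo "$BODY" | python3 -m json.tool >/dev/null && echo "body ok"

curl -sf -X POST "$HOST/v1/ask" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no live endpoint; command shown for shape only"
```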

what to expect

Concrete numbers, measured on a Storm-tier instance in ap-south-1.

point-get p99 · < 4 ms
vector topk p99 · < 10 ms
BM25 search p99 · < 20 ms
tenancy · single-tenant, region-isolated

ready when you are

Ninety seconds to an endpoint. No stack to wire up.

Pick a region, pick a tier, and we provision a single-tenant instance on AWS. The first query you send is the first query we'll show you how to write — in English.

talk to a human