There is a genre of enterprise software announcement that lands like a press release written by committee: carefully hedged, stuffed with adjectives, devoid of anything you can actually use. AlloyDB for PostgreSQL is not that. It is one of those rare products where the numbers do the talking, and the numbers are uncomfortable for whoever sold you on Aurora.
The short version: AlloyDB hit 2.87 million TPC-C transactions per minute in an independent GigaOm benchmark. Amazon Aurora PostgreSQL came in at 1.24 million. That is a 2.31x throughput gap at 2.42x better cost-efficiency. These are not Google’s numbers. GigaOm ran the test.
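The throughput ratio is easy to sanity-check from the two headline numbers (the 2.42x cost-efficiency figure additionally depends on GigaOm's pricing assumptions, which are not reproduced here):

```python
# Sanity-check the headline throughput ratio from the GigaOm numbers.
alloydb_tpm = 2_870_000   # AlloyDB TPC-C transactions per minute
aurora_tpm = 1_240_000    # Aurora PostgreSQL TPC-C transactions per minute

throughput_ratio = alloydb_tpm / aurora_tpm
print(f"{throughput_ratio:.2f}x")  # → 2.31x
```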
One Database Instead of Two
Most serious workloads run two databases. There is the OLTP (On-Line Transaction Processing) database handling real-time operations, and the OLAP (On-Line Analytical Processing) database where data eventually lands after an ETL (Extract, Transform, Load) pipeline finishes chugging through the night. That pipeline is expensive, slow, and the reason analytics never quite match what the transaction system shows. AlloyDB’s built-in columnar engine eliminates the second database entirely: analytics run directly on operational data with no ETL, no sync lag, no separate infrastructure. Google lists the analytical speedup at up to 100x over standard PostgreSQL and the transactional speedup at 4x, both backed by its Titanium storage offload chip, not a software trick a competitor can patch around next quarter.
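As a sketch of what "no second database" means in practice: the columnar engine is enabled with an instance flag and tables are opted in individually. The statements below follow the pattern in Google's AlloyDB documentation; the table, columns, and query are hypothetical.

```sql
-- Enable the columnar engine via the instance database flag:
--   google_columnar_engine.enabled = on
-- Then add a hot operational table to the column store:
SELECT google_columnar_engine_add('orders');

-- Analytical scans now run against the same operational rows,
-- with no ETL pipeline in between:
SELECT region, date_trunc('day', created_at) AS day, sum(total) AS revenue
FROM orders
GROUP BY region, day;
```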
The AI Angle Is Real
Every database vendor is currently adding “vector search” to their product page. Most mean they bolted pgvector onto something not designed for it. AlloyDB uses Google’s ScaNN (Scalable Approximate Nearest Neighbors) algorithm, the same indexing technology behind Google Search relevance ranking, delivering 10x better filtered vector query performance over PostgreSQL’s native index. For teams building RAG (Retrieval-Augmented Generation) applications, that means embeddings can live in the same database as the source documents. The synchronization problem disappears. The separate vector store bill disappears with it.
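A minimal sketch of the "embeddings next to source documents" setup, following the pattern in the AlloyDB ScaNN documentation; the table name, embedding dimension, and index parameters are hypothetical and would need tuning for a real workload.

```sql
-- The ScaNN index ships as an AlloyDB extension:
CREATE EXTENSION IF NOT EXISTS alloydb_scann;

-- Embeddings live in the same table as the documents they describe:
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    body      text,
    embedding vector(768)   -- pgvector type; dimension is hypothetical
);

-- Approximate-nearest-neighbor index using ScaNN:
CREATE INDEX documents_embedding_idx
    ON documents USING scann (embedding cosine)
    WITH (num_leaves = 100);
```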
Who Should Care
If you are running Aurora PostgreSQL and paying for Redshift or Snowflake alongside it, the math is straightforward: what does it cost to maintain both, and what do you save by collapsing them? If you are an engineering leader tired of explaining why the analytics dashboard is always six hours behind the transaction database, this is the answer.
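The back-of-envelope version of that math, with entirely hypothetical monthly figures plugged in (real numbers depend on instance sizes, storage, data volume, and the engineering time sunk into the pipeline):

```python
# Hypothetical monthly costs for the two-database setup vs. consolidation.
# Every figure here is a placeholder, not a quote from any vendor.
aurora_cost = 8_000      # OLTP database
warehouse_cost = 12_000  # Redshift/Snowflake analytics tier
etl_cost = 5_000         # pipeline infrastructure + maintenance time

current_total = aurora_cost + warehouse_cost + etl_cost

alloydb_cost = 10_000    # single consolidated instance (placeholder)
monthly_saving = current_total - alloydb_cost
print(f"current: ${current_total:,}/mo, "
      f"consolidated: ${alloydb_cost:,}/mo, "
      f"saving: ${monthly_saving:,}/mo")
```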
The honest devil's-advocate case is that Aurora's ecosystem is enormous, procurement relationships are sticky, and a 2x performance advantage does not always justify a migration project. Fair. But "we'd rather stay on the slower option because we already know it" is a different argument than "AlloyDB doesn't have a meaningful advantage." It does.
AWS is closing the raw performance gap with Graviton4, but has not shipped an integrated columnar engine or a ScaNN-equivalent vector index. Those are architectural decisions, not incremental improvements. Azure’s HorizonDB launched in preview in late 2025 with its own performance claims, benchmarked against vanilla open-source PostgreSQL rather than AlloyDB. That is the kind of comparison that looks fine in a vendor briefing and falls apart on the first follow-up question.
Want to go deeper?
- GigaOm benchmark report — full methodology and results
- AlloyDB product page — Titanium chip and columnar engine architecture
- AlloyDB vs PostgreSQL on Google Cloud blog — ScaNN index and migration patterns
- The New Stack on Azure HorizonDB — useful competitive sanity check
