---
layout: post
title: "Navigating Cloudburst"
date: 2026-01-26
---

The Explosion of Choice

There has been a dramatic rise in the number of instance types offered by cloud providers over the last 10 years. AWS, which has one of the largest catalogs, offers roughly 1,000 instance types as of 2026; Google Cloud offers over 400, and Microsoft Azure more than 1,000.

This raises the question: what prompts cloud providers to keep expanding their instance catalogs? One theory is that general-purpose computing, the great enabler of the early cloud, is showing diminishing returns. In other words, customers increasingly want instances specialized for the workloads their applications actually run.

The Uneven Evolution of Cloud Hardware

The Cloudspecs research reveals how different components have improved over the past decade:

| Component | Improvement (per dollar) |
| --- | --- |
| Network bandwidth | ~10× improvement |
| CPU performance | Modest gains |
| DRAM | Modest gains |
| NVMe storage | Stagnated since 2016 |

This uneven evolution has critical implications. If you’re running a network-intensive workload in 2026 on instance types optimized for 2018’s hardware economics, you’re likely overpaying significantly. Conversely, if your workload is storage-bound, the calculus hasn’t changed much in nearly a decade.

Performance Per Dollar

The paper emphasizes a crucial insight often overlooked in architecture decisions:

“In the cloud, what often matters most is not peak performance, but performance per dollar.”

This shift from raw throughput to economic efficiency makes cost a first-class design consideration. The decision of whether to cache data in memory, disaggregate into separate compute and storage layers, or consolidate everything depends entirely on the relative pricing of CPU, RAM, SSD storage, and networking at any given time.
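
To make that concrete, here is a minimal back-of-the-envelope sketch in Python of the cache-versus-refetch decision. Both prices are invented placeholders (real quotes vary by provider and region): caching a gigabyte pays off once it is re-read often enough that the DRAM rent undercuts the repeated network transfers.

```python
# Break-even for caching a GB in DRAM vs. re-reading it over the network.
# Both prices are invented placeholders, not real cloud quotes.
dram_price_gb_hour = 0.005   # $ to keep 1 GB resident in RAM for an hour (assumed)
network_price_gb = 0.01      # $ to transfer 1 GB over the network (assumed)

# Keeping 1 GB cached for an hour costs dram_price_gb_hour.
# Re-fetching it n times in that hour costs n * network_price_gb.
# Caching wins once n exceeds the ratio of the two prices.
break_even = dram_price_gb_hour / network_price_gb
print(f"Cache a GB if it is re-read more than {break_even:.1f} times per hour")
```

The same one-line arithmetic applies to the disaggregation question: when the relative prices move, the break-even point moves with them.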

The Diminishing Returns of General-Purpose Instances

Here’s where it gets particularly interesting for organizations defaulting to “balanced” or general-purpose instance families. The research shows that:

  1. Specialized instances exist for a reason: With 1,057 instance types on AWS alone, there’s likely an instance optimized for your specific workload profile. General-purpose instances are a compromise by design—they sacrifice cost-efficiency for flexibility.

  2. Cross-cloud arbitrage is real: Smaller cloud providers undercut the “big three” hyperscalers by up to 5× for commodity VMs. If you’re using generic instances, you’re essentially paying a premium for undifferentiated compute.

  3. Workload-specific optimization yields compounding benefits: The paper’s analysis using real database benchmarks (Umbra for OLAP, LeanStore for OLTP) demonstrates that matching instance characteristics to workload profiles produces dramatically better cost-performance ratios; a toy version of the underlying arithmetic follows this list.
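
As that toy version (runtimes and prices below are invented for illustration, not figures from the paper), cost per benchmark run is simply runtime times hourly price:

```python
# Cost-performance for a hypothetical OLAP benchmark run on two instances.
# Runtimes and hourly prices are made-up illustrative numbers.
candidates = {
    "general_purpose": {"runtime_s": 120.0, "price_hour": 0.40},
    "memory_optimized": {"runtime_s": 70.0, "price_hour": 0.55},
}

for name, c in candidates.items():
    cost_per_run = c["runtime_s"] / 3600 * c["price_hour"]
    print(f"{name}: ${cost_per_run:.4f} per run")
# The faster instance can win on cost despite a higher hourly price.
```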

The Multi-Dimensional Challenge

What makes instance selection genuinely difficult is its multi-dimensional nature. You’re not just choosing between “small” and “large”; you’re navigating trade-offs across several dimensions at once (a filtering sketch follows the list):

  • CPU vendor (Intel, AMD, AWS Graviton)
  • Memory-to-CPU ratio
  • Network bandwidth
  • Storage type and IOPS
  • Instance storage vs. EBS
  • Regional pricing variations
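
Here is a minimal sketch of that filtering problem. The three catalog entries, their specs, and their prices are all invented for illustration; a real catalog has roughly a thousand entries per provider.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    vendor: str            # "intel", "amd", or "graviton"
    vcpus: int
    memory_gib: float
    network_gbps: float
    price_hour: float      # hypothetical on-demand $/hour

# A hand-made three-entry catalog; a real one has ~1,000 entries per provider.
catalog = [
    Instance("gp.large", "intel", 2, 8.0, 1.0, 0.10),
    Instance("mem.large", "graviton", 2, 16.0, 1.0, 0.12),
    Instance("net.large", "amd", 2, 8.0, 12.5, 0.14),
]

# Workload profile: at least 4 GiB per vCPU and at least 10 Gbps of network.
viable = [i for i in catalog
          if i.memory_gib / i.vcpus >= 4 and i.network_gbps >= 10]

# Among the viable instances, take the cheapest.
best = min(viable, key=lambda i: i.price_hour, default=None)
print(best)
```

Each additional dimension shrinks the viable set and changes which instance is cheapest, which is why eyeballing a catalog of a thousand types rarely finds the optimum.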

What This Means for Your Organization

The days of picking a “safe” general-purpose instance and calling it done are over—at least if cost-efficiency matters to your organization. The research makes clear that:

  1. Workload analysis is no longer optional: You need to understand your application’s actual resource consumption patterns across all dimensions.

  2. Instance selection is a continuous process: With new instance types released regularly and prices shifting, what was optimal last year may not be optimal today. (A sketch of re-scanning the catalog follows this list.)

  3. Generic choices have hidden costs: The 5× price differential that smaller providers offer for commodity compute suggests that generic instance usage carries a substantial opportunity cost.
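
One way to keep watch is to re-scan the catalog programmatically. The sketch below assumes boto3 is installed and AWS credentials are configured; it uses EC2’s DescribeInstanceTypes call (prices live in the separate Pricing API and are omitted here):

```python
import boto3  # assumes boto3 is installed and AWS credentials/region are configured

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instance_types")

# Walk the current catalog and report each type's memory-to-vCPU ratio,
# so last year's "optimal" pick can be re-checked as new families appear.
for page in paginator.paginate():
    for it in page["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{it['InstanceType']}: {mem_gib / vcpus:.1f} GiB per vCPU")
```

Diffing the output across runs is a cheap way to notice new instance families worth re-benchmarking.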


Reference: Steinert, T., Kuschewski, M., & Leis, V. (2026). Cloudspecs: Cloud Hardware Evolution Through the Looking Glass. CIDR 2026.

