- General-purpose
  - These have the best price-performance with the most flexible vCPU-to-memory ratios
  - Series
    - E2
      - Workload
        - (Cost-Optimized) Day-to-day computing at a lower cost
        - Intel or 2nd Gen AMD EPYC Rome processors, with up to 32 vCPUs and up to 128 GB of memory (a maximum of 8 GB per vCPU); see the creation sketch after this series' Applications list
        - E2 includes shared-core machine types that can burst up to 2 vCPUs for short periods
      - Applications
        - Low-traffic web servers
        - Back office apps
        - Containerized microservices
        - Microservices
        - Virtual desktops
        - Development and test environments
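To make the machine-type names concrete, here is a minimal sketch of requesting an E2 shape with the google-cloud-compute Python client. The project ID, zone, instance name, and Debian image family below are placeholder assumptions; only the machine-type string needs to change to target a different series.
```python
# Minimal sketch: create a small E2 VM with the google-cloud-compute client.
# Project, zone, names, and image family below are illustrative assumptions.
from google.cloud import compute_v1


def create_vm(project_id: str, zone: str, name: str, machine_type: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    # The machine type is referenced as a partial URL within the target zone.
    instance.machine_type = f"zones/{zone}/machineTypes/{machine_type}"

    # Boot disk from a public Debian image (assumed image family).
    boot_disk = compute_v1.AttachedDisk()
    boot_disk.boot = True
    boot_disk.auto_delete = True
    boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12",
        disk_size_gb=10,
    )
    instance.disks = [boot_disk]

    # Attach to the default VPC network.
    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation finishes


if __name__ == "__main__":
    # "e2-standard-4" = 4 vCPUs / 16 GB, i.e. the 4 GB-per-vCPU E2 shape.
    create_vm("my-project", "us-central1-a", "e2-demo", "e2-standard-4")
```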
    - N2, N2D, N1
      - Workload
        - (Balanced) Balanced price/performance across a wide range of VM shapes
        - N2 are Intel-based VMs (Ice Lake and Cascade Lake) with up to 128 vCPUs and 0.5 to 8 GB of memory per vCPU (the N series also supports custom vCPU/memory shapes; see the sketch after this series' Applications list)
        - N2D are AMD-based VMs (2nd Gen EPYC Rome and 3rd Gen EPYC Milan) with up to 224 vCPUs and up to 8 GB of memory per vCPU
        - N1 are Intel-based VMs (Sandy Bridge, Ivy Bridge, Broadwell, and Skylake) with up to 96 vCPUs and up to 6.5 GB of memory per vCPU
        - N1 offers the f1-micro and g1-small shared-core machine types, which have up to 1 vCPU available for short periods of bursting
      - Applications
        - Low to medium traffic web and app servers
        - Containerized microservices
        - Business intelligence apps
        - Virtual desktops
        - CRM applications
        - Data pipelines
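The 0.5 to 8 GB-per-vCPU range quoted for N2 reflects the custom machine types the N series supports: a custom shape is requested simply by using a name of the form `<series>-custom-<vCPUs>-<memory MB>` wherever a predefined machine-type name would go. The helper below is a small sketch of building such a name; the per-vCPU bounds and 256 MB granularity are assumptions taken from the N2 figures above and should be checked per series.
```python
# Sketch: build a custom machine-type name for the N series
# (e.g. "n2-custom-4-10240" = 4 vCPUs with 10 GB of memory).
# The 0.5-8 GB/vCPU bounds and 256 MB granularity are assumptions
# based on the N2 limits quoted above; verify them for your series.

def custom_machine_type(series: str, vcpus: int, memory_mb: int) -> str:
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    per_vcpu_gb = memory_mb / vcpus / 1024
    if not 0.5 <= per_vcpu_gb <= 8:
        raise ValueError("memory per vCPU outside the assumed 0.5-8 GB range")
    return f"{series}-custom-{vcpus}-{memory_mb}"


# Usable anywhere a predefined machine-type name is accepted, e.g. the
# create_vm() sketch shown earlier:
# create_vm("my-project", "us-central1-a", "n2-demo", custom_machine_type("n2", 4, 10240))
```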
    - C3, C3D
      - Workload
        - Consistently high performance for a variety of workloads (see the shape-listing sketch after this series' Applications list)
        - C3 are Intel-based VMs (Sapphire Rapids and Google's custom Intel Infrastructure Processing Unit) with up to 176 vCPUs and 2, 4, or 8 GB of memory per vCPU
        - C3D are AMD-based VMs (EPYC Genoa and Google's custom Intel Infrastructure Processing Unit) with up to 360 vCPUs and 2, 4, or 8 GB of memory per vCPU
      - Applications
        - High traffic web and app servers
        - Databases
        - In-memory caches
        - Ad servers
        - Game servers
        - Data analytics
        - Media streaming and transcoding
        - CPU-based ML training and inference
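Since each series exposes a fixed menu of vCPU/memory combinations, it can help to enumerate them rather than memorize them. This sketch uses the MachineTypesClient from google-cloud-compute; the project ID, zone, and the "c3-" name prefix are placeholder assumptions.
```python
# Sketch: list the C3 shapes offered in one zone and their vCPU/memory sizes.
# Project and zone are placeholders; the name prefix picks the series.
from google.cloud import compute_v1


def list_series_shapes(project_id: str, zone: str, prefix: str) -> None:
    client = compute_v1.MachineTypesClient()
    for mt in client.list(project=project_id, zone=zone):
        if mt.name.startswith(prefix):
            # memory_mb is reported in MB; convert to GB for readability.
            print(f"{mt.name}: {mt.guest_cpus} vCPUs, {mt.memory_mb / 1024:.0f} GB")


if __name__ == "__main__":
    list_series_shapes("my-project", "us-central1-a", "c3-")
```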
    - Tau T2D, Tau T2A
      - Workload
        - Best per-core performance/cost for scale-out workloads
        - Each Tau T2D VM can have up to 60 vCPUs, 4 GB of memory per vCPU, and is available on 3rd Gen AMD EPYC Milan processors
        - Each Tau T2A VM can have up to 48 vCPUs, 4 GB of memory per vCPU, and runs on Arm-based Ampere Altra processors (note the Arm image requirement in the sketch after this series' Applications list)
      - Applications
        - Scale-out workloads
        - Web serving
        - Containerized microservices
        - Media transcoding
        - Large-scale Java applications
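Tau T2D works with the same create_vm() sketch shown under E2, but Tau T2A is Arm-based, so the boot image must be an arm64 build to match the CPU architecture. A hedged variation of the earlier sketch; the arm64 image family, machine type, and zone are assumptions.
```python
# Sketch: the only change needed for an Arm-based Tau T2A VM (vs. the earlier
# create_vm() example) is an arm64 boot image to match the CPU architecture.
# "debian-12-arm64" is an assumed public image family; "t2a-standard-4" is one
# of the fixed T2A shapes (4 vCPUs, 16 GB).
from google.cloud import compute_v1

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12-arm64",
        disk_size_gb=10,
    ),
)
machine_type = "t2a-standard-4"  # pair with a zone that offers T2A (assumption)
```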
- Compute-optimized
  - These have the highest performance per core on Compute Engine and are optimized for compute-intensive workloads
  - Series
    - H3, C2, C2D
      - Workload
        - Ultra-high performance for compute-intensive workloads
        - H3 VMs offer 88 vCPUs and 352 GB of DDR5 memory, and run on the Intel Sapphire Rapids CPU platform with Google's custom Intel Infrastructure Processing Unit (IPU)
        - C2 VMs offer up to 60 vCPUs, 4 GB of memory per vCPU, and are available on the Intel Cascade Lake CPU platform
        - C2D VMs offer up to 112 vCPUs, up to 8 GB of memory per vCPU, and are available on the 3rd Gen AMD EPYC Milan platform
      - Applications
        - Compute-bound workloads
        - High-performance web servers
        - Game servers
        - High performance computing (HPC)
        - Media transcoding
        - Modeling and simulation workloads
        - AI/ML
- Memory-optimized
  - These provide the most compute and memory resources of any Compute Engine machine family offering
  - Series
    - M3, M2, M1
      - Workload
        - Highest memory-to-compute ratios for memory-intensive workloads
        - M1 VMs offer up to 160 vCPUs, 14.9 GB to 24 GB of memory per vCPU, and are available on the Intel Skylake and Broadwell CPU platforms
        - M2 VMs are available as 6 TB, 9 TB, and 12 TB machine types, and are available on the Intel Cascade Lake CPU platform (the sketch after this series' Applications list shows how to check which zones offer a given machine type)
        - M3 VMs offer up to 128 vCPUs, with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform
      - Applications
        - Medium to extra-large SAP HANA in-memory databases
        - In-memory data stores, such as Redis
        - Simulation
        - High performance databases such as Microsoft SQL Server and MySQL
        - Electronic design automation
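The largest memory-optimized shapes are not offered everywhere, so it is worth checking zone availability before planning a deployment. A sketch using the aggregated machine-type listing from google-cloud-compute; the project ID and the "m2-ultramem-208" machine-type name are placeholder assumptions.
```python
# Sketch: find which zones offer a given machine type (useful for the large
# memory-optimized shapes). Project and the machine-type name are placeholders.
from google.cloud import compute_v1


def zones_offering(project_id: str, machine_type: str) -> list[str]:
    client = compute_v1.MachineTypesClient()
    zones = []
    # aggregated_list groups machine types by scope, e.g. "zones/us-central1-a".
    for scope, scoped_list in client.aggregated_list(project=project_id):
        if any(mt.name == machine_type for mt in scoped_list.machine_types):
            zones.append(scope.removeprefix("zones/"))
    return zones


if __name__ == "__main__":
    print(zones_offering("my-project", "m2-ultramem-208"))
```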
- Accelerator-optimized
  - These are ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as ML and HPC
  - Series
    - A2, G2
      - Workload
        - Optimized for accelerated high performance computing workloads (a GPU-aware creation sketch follows this family's Applications list)
        - A2 VMs offer 12 to 96 vCPUs, up to 1360 GB of memory, and are available on the Intel Cascade Lake CPU platform
        - G2 VMs offer 4 to 96 vCPUs, up to 432 GB of memory, and are available on the Intel Cascade Lake CPU platform
      - Applications
        - CUDA-enabled ML training and inference
        - High-performance computing (HPC)
        - Massively parallelized computing
        - BERT natural language processing
        - Deep learning recommendation model (DLRM)
        - Video transcoding
        - Remote visualization workstations
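Accelerator-optimized machine types bundle their GPUs with the VM shape, but GPU VMs generally cannot live-migrate, so the scheduling policy has to allow termination during host maintenance. A hedged sketch that extends the earlier create_vm() pattern; the machine type, zone, and maintenance setting reflect assumptions to verify, not a confirmed configuration.
```python
# Sketch: request an accelerator-optimized shape (a2-highgpu-1g bundles one
# NVIDIA A100). GPU VMs typically cannot live-migrate, so host maintenance is
# set to TERMINATE (assumption to verify). Names and zone are placeholders.
from google.cloud import compute_v1

instance = compute_v1.Instance()
instance.name = "a2-demo"
instance.machine_type = "zones/us-central1-a/machineTypes/a2-highgpu-1g"
instance.scheduling = compute_v1.Scheduling(
    on_host_maintenance="TERMINATE",  # assumed requirement for GPU-attached VMs
    automatic_restart=True,
)
# The boot disk and network interface would be filled in exactly as in the
# earlier create_vm() sketch before calling InstancesClient().insert().
```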