  1. IOPS
    1. Persistent disk has no per-I/O costs, so there is no need to estimate monthly I/O to calculate budget for disks
    2. For IOPS-oriented workloads, it is possible to break down the per month cost to look at price per IOPS, for comparison purposes
    4. When the size of a volume is increased, the performance cap is automatically increased at no additional cost
    5. To determine the cost per IOPS of a persistent disk, divide the price per GB per month by the number of IOPS per GB
    6. Standard persistent disks offer affordable capacity, while SSD persistent disks offer price-performance ratios suited for IOPS-oriented workloads
    6. For standard persistent disks, simultaneous reads and writes share the same resources
    7. While an instance is using more read throughput or IOPS, it is able to perform fewer writes
    8. Conversely, instances that use more write throughput or IOPS are able to perform fewer reads
    9. SSD persistent disks are capable of achieving maximum throughput limits for both reads and writes simultaneously
    10. It is not possible for SSD persistent disks to reach their maximum IOPS limits for reads and writes simultaneously
    11. Throughput = IOPS * I/O size
    12. To take advantage of maximum throughput limits for simultaneous reads and writes on SSD persistent disks, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit
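The cost and throughput arithmetic above can be sketched in a few lines. All numbers used here (the per-GB price, the IOPS-per-GB ratio, and the I/O size) are illustrative placeholders, not current Google Cloud pricing or published limits:

```python
# Sketch of the per-IOPS cost and throughput formulas from the notes above.
# All inputs are hypothetical examples, not real Google Cloud figures.

def cost_per_iops(price_per_gb_month, iops_per_gb):
    """Point 5: divide the monthly price per GB by the IOPS delivered per GB."""
    return price_per_gb_month / iops_per_gb

def throughput_mbps(iops, io_size_kb):
    """Point 11: throughput = IOPS * I/O size (MB/s for KB-sized I/Os)."""
    return iops * io_size_kb / 1024

# Example: a disk priced at $0.17 per GB-month delivering 30 read IOPS per GB.
print(f"${cost_per_iops(0.17, 30):.4f} per IOPS per month")

# 15,000 IOPS at a 16 KB I/O size:
print(f"{throughput_mbps(15000, 16):.1f} MB/s")  # 234.4 MB/s
```

This also shows why point 12 matters: at a fixed IOPS limit, a larger I/O size is the only way to raise throughput without exceeding the combined read-plus-write IOPS budget.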
  2. Performance
    1. To maximize performance, configure the correct disk size, vCPU count, and machine type
    2. When specifying the size of persistent disks, consider how these disks compare to traditional physical hard drives
    3. The performance of a standard persistent disk scales with its size
    4. Performance also depends on the number of vCPUs assigned to the VM instance, due to network egress caps on write throughput
    5. Using 16 or more vCPUs does not limit performance
    6. Using fewer than 16 vCPUs can limit performance
    7. SSD persistent disks performance scales linearly until it reaches either the limits of the disk or the limits of the Compute Engine instance to which the disk is attached
    8. I/O bursting provides higher performance for boot volumes than linear scaling
    9. Maximum performance might not be achievable at full CPU utilization
    10. SSD read bandwidth and IOPS consistency near the maximum limits largely depend on network ingress utilization; some variability is to be expected, especially for 16 KB I/Os near the maximum IOPS limits
    11. Compute Engine machine types are grouped and curated for different workloads
    12. Compute-optimized machine types are subject to specific persistent disk limits per vCPU that differ from the limits for other machine types
    13. Resize persistent disks to increase the IOPS and throughput limits
    14. Change the machine type of the instance to increase the per-instance limits
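Points 3, 7, 13, and 14 describe performance that scales with disk size until a per-disk or per-instance cap dominates. A minimal model of that clamping behavior, with made-up per-GB rates and caps rather than published Compute Engine limits:

```python
# Illustrative model of size-scaled performance clamped by limits.
# The per-GB rate and both caps are hypothetical placeholder values.

def effective_iops(disk_size_gb, iops_per_gb, per_disk_cap, instance_cap):
    """Linear scaling with size, clamped by the disk and instance limits."""
    return min(disk_size_gb * iops_per_gb, per_disk_cap, instance_cap)

# Growing the disk raises IOPS only until one of the caps takes over:
print(effective_iops(100, 30, 15000, 25000))   # 3000  (bound by size)
print(effective_iops(1000, 30, 15000, 25000))  # 15000 (bound by per-disk cap)
```

Once the per-disk cap binds, only resizing the disk (point 13) or changing the machine type (point 14) moves the effective limit.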
  3. Networking
    1. Virtual machine (VM) instances have a network egress cap that depends on the machine type of the VM
    2. Compute Engine stores data on persistent disks with multiple parallel writes to ensure built-in redundancy
    3. Each write request has some overhead that uses additional write bandwidth
    4. The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the write bandwidth used by this redundancy and overhead
    5. In a situation where persistent disk is competing with IP traffic for network egress bandwidth, 60% of the maximum write bandwidth goes to persistent disk traffic, leaving 40% for IP traffic
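The egress arithmetic in points 4 and 5 can be sketched directly. The egress cap and bandwidth multiplier below are hypothetical values chosen for illustration, not actual Compute Engine figures; only the 60/40 split comes from the notes above:

```python
# Sketch of the write-bandwidth arithmetic: maximum persistent disk write
# traffic is the network egress cap divided by a bandwidth multiplier that
# accounts for redundancy and per-request overhead. Cap and multiplier here
# are placeholders, not real Compute Engine values.

def max_pd_write_mbps(egress_cap_mbps, bandwidth_multiplier):
    """Point 4: egress cap divided by the redundancy/overhead multiplier."""
    return egress_cap_mbps / bandwidth_multiplier

def pd_share_when_competing(max_write_mbps, pd_fraction=0.6):
    """Point 5: when disk writes compete with IP traffic, 60% goes to the disk."""
    return max_write_mbps * pd_fraction

cap = max_pd_write_mbps(2000, 2.0)   # e.g. a 2,000 MB/s egress cap, 2x multiplier
print(cap)                           # 1000.0 MB/s of disk write traffic
print(pd_share_when_competing(cap))  # 600.0 MB/s when competing with IP traffic
```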
  4. Size
    1. Persistent disks can be up to 64 TB in size, and a single logical volume of up to 257 TB can be created using logical volume management inside the VM
    2. Not all local file systems work well at this scale. Common operations, such as mounting and file system checking, might take longer than expected.
    3. Maximum persistent disk performance is achieved at smaller sizes. Disks take longer to fully read or write with this much storage on one VM. If the application supports it, consider using multiple VMs for greater total-system throughput
    4. Snapshotting large amounts of persistent disk might take longer than expected to complete and might provide an inconsistent view of the logical volume without careful coordination with the application
    5. If only one of several attached disks is in use, that single disk can reach the performance limit corresponding to the combined size of all the disks
    6. If all of the disks are used at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size
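Points 5 and 6 can be made concrete with a small sketch: the aggregate limit comes from the combined size of all attached disks, and it is either available to a single busy disk or split evenly among fully loaded disks. The per-GB rate below is a placeholder, not a published limit:

```python
# Sketch of how the aggregate performance limit is shared among disks.
# The 30 IOPS/GB rate is a hypothetical value for illustration only.

def per_disk_limits(disk_sizes_gb, iops_per_gb, busy_disks):
    """Aggregate limit from the combined size, split evenly among busy disks."""
    aggregate = sum(disk_sizes_gb) * iops_per_gb
    return aggregate / busy_disks

sizes = [500, 1500]  # two disks, 2 TB combined
print(per_disk_limits(sizes, 30, busy_disks=1))  # 60000.0: one busy disk gets it all
print(per_disk_limits(sizes, 30, busy_disks=2))  # 30000.0 each, regardless of size
```

Note the even split in the second case: the smaller 500 GB disk gets the same share as the 1,500 GB disk, exactly as point 6 states.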