1. Overview
    1. A sole-tenant node is a physical Compute Engine server that is dedicated to hosting VM instances only for a specific customer project
    2. Use sole-tenant nodes to keep instances physically separated from instances in other projects, or to group instances together on the same host hardware
    3. Each node is associated with one physical server, and is the only node running on that server
    4. Within your nodes, you can run multiple instances of various sizes without sharing the host hardware with other projects
    5. You can specify node affinities between nodes and the instances running on those nodes
    6. Affinities can be used to either group several workloads together on the same nodes or isolate workloads from one another on different nodes to meet data compliance requirements
    7. To use sole-tenant nodes, create node groups, which contain one or more nodes, rather than creating individual nodes
    8. Sole-tenant nodes ensure instances do not share host hardware with instances from other projects
    9. Use labels to control how instances are arranged on nodes, for example to separate instances with sensitive workloads onto their own nodes, away from non-sensitive workloads
    10. Normally, VMs run on physical hosts that might be shared by many customers
    11. With sole-tenant nodes, only your VMs run on the physical host, so you have exclusive use of it
    12. Each sole-tenant node is associated with one physical server
    13. Within each node, a single customer can fully control which VMs are running on that node, without sharing the host hardware with other customers
    14. Each sole-tenant node has a unique identifier: the server_id
    15. The server_id is unique across all of Google Cloud hardware, and is available as soon as a sole-tenant node is created
    16. The server_id is a unique identifier that is used to mark each physical server
    17. When a sole-tenant node is allocated to an account, there is a one-to-one relationship between the server_id and the physical server
    18. Google Cloud does not reuse the same server_id for different physical servers
    19. If the physical host is retired due to maintenance-related events, a replacement server along with its server_id is allocated
    20. VMs are moved to the replacement server
    21. Maintenance-related events are logged in Cloud Audit Logs, which can be used to trace the lineage of the physical servers
    22. Each time a host is retired and a replacement host provided, the new server_id is available in Cloud Audit Logs
    23. Use these logs to trace the server_id history of the VM's hosts
    24. Users pay for the entire sole-tenant node on a per-second basis, regardless of how many VMs are running on the node
    25. When a sole-tenant node is provisioned, the user controls the VMs on that host
    26. Sole-tenant nodes can be purchased with a 1- or 3-year committed use discount, or users can pay for them as they use them
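The grouping and isolation behavior of node affinities described above can be sketched in Python. This is a conceptual illustration of how affinity rules with IN and NOT_IN operators are matched against node labels, not Compute Engine's actual scheduler; the label keys and values (workload, compliance) are hypothetical examples.

```python
# Conceptual sketch of node-affinity matching (not Google's implementation):
# a VM's affinity rules are checked against a node's labels, using the
# IN / NOT_IN operator semantics that Compute Engine node affinities expose.

def node_matches(node_labels, affinities):
    """Return True if a node satisfies every affinity rule of a VM."""
    for rule in affinities:
        value = node_labels.get(rule["key"])
        if rule["operator"] == "IN":
            # The node must carry the key with one of the listed values.
            if value not in rule["values"]:
                return False
        elif rule["operator"] == "NOT_IN":
            # The node must NOT carry the key with any of the listed values.
            if value in rule["values"]:
                return False
    return True

# Hypothetical node labels and affinity rules for illustration.
prod_node = {"workload": "production", "compliance": "hipaa"}
dev_node = {"workload": "dev"}

# Grouping: schedule only on production nodes.
group_rule = [{"key": "workload", "operator": "IN", "values": ["production"]}]
# Isolation: keep this VM away from HIPAA-scoped nodes.
isolate_rule = [{"key": "compliance", "operator": "NOT_IN", "values": ["hipaa"]}]

print(node_matches(prod_node, group_rule))    # True
print(node_matches(dev_node, group_rule))     # False
print(node_matches(prod_node, isolate_rule))  # False
print(node_matches(dev_node, isolate_rule))   # True
```

The same mechanism serves both compliance isolation and co-location: an IN rule pulls workloads together onto labeled nodes, while a NOT_IN rule pushes them away.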
  2. Node type
    1. Each node within a node group must have a node type
    2. A node type specifies the total number of vCPUs and the total amount of memory for that node
    3. Currently, the only available node type is n1-node-96-624, which has 96 vCPUs and 624 GB of memory and is available in multiple zones
    4. Nodes of this size can accommodate VM instances with up to 96 vCPUs and 624 GB of memory
    5. Nodes can be filled with multiple smaller VM instances of various sizes, including custom machine types and instances with extended memory
    6. The instances that run on nodes must have at least two vCPUs
    7. When a node is full, additional instances cannot be scheduled on that node
    8. Note that a node type applies to each individual node within a node group (not to the node group as a whole)
    9. If a node group with two nodes is created, each node is allocated 96 vCPUs and 624 GB of memory
    10. To successfully create a node group, you must have enough vCPU quota to cover the total vCPUs of all nodes in the group
    11. Periodically, Compute Engine will replace older node types with newer node types
    12. When a node type is replaced, you can no longer create node groups with the old node type, and you must update node templates to use the new node types
    13. As a best practice, configure node templates to use flexible node type requirements
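The capacity and quota arithmetic above can be made concrete with a small sketch. This is a simplified first-fit packing check against the n1-node-96-624 limits (96 vCPUs, 624 GB per node), not the actual Compute Engine scheduler; the VM shapes used below are n1-highmem-style examples chosen to match the node's 6.5 GB-per-vCPU ratio.

```python
# Rough capacity check for a node group of n1-node-96-624 nodes.
# A simplified first-fit sketch, not the real Compute Engine scheduler.

NODE_VCPUS = 96
NODE_MEMORY_GB = 624

def fits_on_node_group(vms, num_nodes):
    """First-fit check: can every (vcpus, memory_gb) VM shape be placed?"""
    nodes = [[NODE_VCPUS, NODE_MEMORY_GB] for _ in range(num_nodes)]
    for vcpus, mem in vms:
        for node in nodes:
            if node[0] >= vcpus and node[1] >= mem:
                node[0] -= vcpus
                node[1] -= mem
                break
        else:
            return False  # no node had room left for this VM
    return True

def required_vcpu_quota(num_nodes):
    """Quota needed to create the group: each node counts its full size."""
    return num_nodes * NODE_VCPUS

vms = [(16, 104), (32, 208), (48, 312)]        # n1-highmem-style shapes
print(fits_on_node_group(vms, 1))              # True: exactly 96 vCPUs, 624 GB
print(fits_on_node_group(vms + [(8, 52)], 1))  # False: node is already full
print(required_vcpu_quota(2))                  # 192
```

Note that quota is consumed by the nodes themselves, so a two-node group needs 192 vCPUs of quota even before any VMs are scheduled onto it.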
  3. Features
    1. If a node's host system requires maintenance, the node and all of the instances on the node continue to operate while they live migrate to updated host hardware
    2. Sustained use discounts and committed use discounts reduce the costs of sole-tenant nodes
    3. VPC networks work with instances running on sole-tenant nodes the same way they work with normal VM instances
    4. VPC networks can be used to establish network connections between sole-tenant instances and normal VM instances
    5. Use custom machine types or predefined machine types to create instances on sole-tenant nodes
    6. Because the node's vCPUs and memory are already paid for, there is no additional charge for these instances
    7. Create managed instance groups on node groups
    8. Managed instance groups can use autoscaling while running on sole-tenant nodes, but the node groups cannot automatically scale
    9. Combine VMs of multiple machine types on each node
    10. Use a mix of machine types and custom machine types on the same node until the node reaches its vCPU and memory limit, which is defined by the node type
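The billing model described above, where the whole node is paid for per second regardless of how many VMs run on it, can be illustrated with simple arithmetic. The hourly node rate below is a hypothetical placeholder, not a published price, and the 30% discount is an assumed committed use discount for illustration only.

```python
# Illustrative billing arithmetic only. The node price is hypothetical,
# not a real Google Cloud rate. You pay for the whole node per second,
# regardless of how many VMs run on it, so filling the node with VMs
# lowers the effective cost per VM.

HYPOTHETICAL_NODE_PRICE_PER_HOUR = 4.00  # placeholder rate, not real pricing

def node_cost(seconds, discount=0.0):
    """Per-second node cost, optionally reduced by a committed use discount."""
    hourly = HYPOTHETICAL_NODE_PRICE_PER_HOUR * (1 - discount)
    return hourly * seconds / 3600

# One hour costs the same whether the node hosts 1 VM or 12 VMs.
print(round(node_cost(3600), 2))                 # 4.0
# The same hour under a hypothetical 30% committed use discount.
print(round(node_cost(3600, discount=0.30), 2))  # 2.8
```

Since the VMs themselves add no charge, the per-VM cost is just the node cost divided by however many VMs you pack onto the node.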
  4. Restrictions
    1. Sole-tenant nodes are only available in select zones
    2. VMs cannot be started on machine types that have fewer than two vCPUs. These include
      1. Shared-core machine types: f1-micro and g1-small
      2. The n1-standard-1 machine type
      3. Custom machine types with only 1 vCPU
    3. The following features are unavailable on sole-tenant nodes
      1. GPUs
      2. Local SSDs
      3. Memory-optimized machine types
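The minimum-size restriction above can be sketched as a simple filter. The vCPU counts are illustrative: f1-micro and g1-small are shared-core types (here counted as 1 vCPU), and custom-1-6656 is a hypothetical 1-vCPU custom machine type name used only as an example.

```python
# Sketch of the minimum-size restriction: VMs on sole-tenant nodes need at
# least 2 vCPUs, which rules out shared-core types (f1-micro, g1-small),
# n1-standard-1, and 1-vCPU custom machine types.

MACHINE_TYPE_VCPUS = {
    "f1-micro": 1,       # shared-core, counted as 1 vCPU here
    "g1-small": 1,       # shared-core, counted as 1 vCPU here
    "n1-standard-1": 1,
    "custom-1-6656": 1,  # hypothetical 1-vCPU custom machine type
    "n1-standard-2": 2,
    "n1-standard-96": 96,
}

def allowed_on_sole_tenant(machine_type):
    """A machine type qualifies only if it has at least two vCPUs."""
    return MACHINE_TYPE_VCPUS[machine_type] >= 2

allowed = [m for m in MACHINE_TYPE_VCPUS if allowed_on_sole_tenant(m)]
print(allowed)  # ['n1-standard-2', 'n1-standard-96']
```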