-
Overview
- A sole-tenant node is a physical Compute Engine server that is dedicated to hosting VM instances only for a specific customer project
- Use sole-tenant nodes to keep instances physically separated from instances in other projects, or to group instances together on the same host hardware
- Each node is associated with one physical server, and is the only node running on that server
- Within your nodes, you can run multiple instances of various sizes without sharing the host hardware with other projects
- Node affinities can be specified to control which nodes instances are scheduled on
- Affinities can be used to either group several workloads together on the same nodes or isolate workloads from one another on different nodes to meet data compliance requirements
- To use sole-tenant nodes, create node groups, which contain one or more nodes, rather than creating individual nodes
- Sole-tenant nodes ensure instances do not share host hardware with instances from other projects
- Use labels to specify how instances are arranged on nodes and separate instances with sensitive workloads into their own private nodes away from other non-sensitive workloads
- Normally, VMs run on physical hosts that might be shared by many customers
- With sole-tenant nodes, a single customer has exclusive use of the physical host and its VMs
- Each sole-tenant node is associated with one physical server
- Within each node, a single customer can fully control which VMs are running on that node, without sharing the host hardware with other customers
- Each sole-tenant node has a unique identifier, the server_id, which marks the physical server
- The server_id is unique across all of Google Cloud hardware, and is available as soon as a sole-tenant node is created
- When a sole-tenant node is allocated to an account, there is a one-to-one relationship between the server_id and the physical server
- Google Cloud does not reuse the same server_id for different physical servers
- If the physical host is retired due to maintenance-related events, a replacement server with its own server_id is allocated
- VMs are moved to the replacement server
- Maintenance-related events are logged in Cloud Audit Logs, which can be used to trace the lineage of the physical servers
- Each time a host is retired and a replacement host provided, the new server_id is available in Cloud Audit Logs
- Use these logs to trace the server_id history of the VM's hosts
- Users pay for the entire sole-tenant node on a per-second basis, regardless of how many VMs are running on the node
- When a sole-tenant node is provisioned, the user controls the VMs on that host
- Sole-tenant nodes can be purchased with a 1- or 3-year committed use discount, or users can pay for them as they use them
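The affinity labels described above can be expressed as a small JSON spec of the kind accepted by `gcloud compute instances create --node-affinity-file`. A minimal sketch follows; the `workload` label is a hypothetical custom label assumed to be set on the node template, while the `IN`/`NOT_IN` operators are the real ones:

```python
import json

# Sketch of a node-affinity spec: schedule a VM onto nodes labeled for
# sensitive workloads, and keep it off nodes reserved for general use.
# The "workload" key is a hypothetical custom node label.
affinities = [
    {"key": "workload", "operator": "IN", "values": ["sensitive"]},
    {"key": "workload", "operator": "NOT_IN", "values": ["general"]},
]

# Write the spec to a file that can be passed to --node-affinity-file
with open("node-affinity.json", "w") as f:
    json.dump(affinities, f, indent=2)
```

Keeping affinity rules in a versioned JSON file makes it easy to reuse the same placement policy across multiple instance launches.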
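The server_id lineage tracing described above can be sketched as a small helper. The log-entry shape used here is hypothetical and simplified; real Cloud Audit Logs entries carry the server_id in a structured payload when a host is retired and replaced:

```python
# Sketch: reconstruct the chronological server_id lineage of a node's
# hosts from simplified (hypothetical) audit-log entries.
def server_id_lineage(audit_entries):
    """Return server_ids in chronological order, dropping consecutive repeats."""
    lineage = []
    for entry in sorted(audit_entries, key=lambda e: e["timestamp"]):
        sid = entry["server_id"]
        if not lineage or lineage[-1] != sid:
            lineage.append(sid)
    return lineage

entries = [
    {"timestamp": "2020-01-01T00:00:00Z", "server_id": "abc123"},
    {"timestamp": "2020-06-01T00:00:00Z", "server_id": "abc123"},
    {"timestamp": "2020-09-01T00:00:00Z", "server_id": "def456"},  # host replaced
]
print(server_id_lineage(entries))  # ['abc123', 'def456']
```

Because Google Cloud never reuses a server_id, each new value in the lineage corresponds to exactly one replacement of the physical host.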
-
Node type
- Each node within a node group must have a node type
- A node type specifies the total number of vCPUs and amount of memory for that node
- Currently, the only available node type is n1-node-96-624, which has 96 vCPUs and 624 GB of memory and is available in multiple zones
- Nodes of this size can accommodate VM instances with up to 96 vCPUs and 624 GB of memory
- Alternatively, nodes can be filled with multiple smaller VM instances of various sizes, including custom machine types and instances with extended memory
- The instances that run on nodes must have at least two vCPUs
- When a node is full, additional instances cannot be scheduled on that node
- Note that a node type applies to each individual node within a node group (not to the node group as a whole)
- If a node group with two nodes is created, each node is allocated 96 vCPUs and 624 GB of memory
- To successfully create a node group, you must have enough vCPU quota to cover the total vCPUs across all nodes in the group
- Periodically, Compute Engine will replace older node types with newer node types
- When a node type is replaced, it is not possible to create node groups using the old node type, and node templates must be upgraded to use the new node types
- As a best practice, configure node templates to use flexible node type requirements
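The per-node sizing above makes the quota arithmetic simple. A minimal sketch, assuming the n1-node-96-624 node type (96 vCPUs and 624 GB per node, applied per node rather than per group):

```python
# Each n1-node-96-624 node provides 96 vCPUs and 624 GB of memory;
# the node type applies to every node in the group individually.
NODE_VCPUS = 96
NODE_MEMORY_GB = 624

def group_requirements(node_count):
    """vCPU quota and total memory allocated for a node group of this size."""
    return node_count * NODE_VCPUS, node_count * NODE_MEMORY_GB

# A two-node group needs 192 vCPUs of quota and allocates 1248 GB of memory
print(group_requirements(2))  # (192, 1248)
```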
-
Features
- If a node's host system requires maintenance, the node and all of the instances on the node continue to operate while they live migrate to updated host hardware
- Sustained use discounts and committed use discounts reduce the costs of sole-tenant nodes
- VPC networks work with instances running on sole-tenant nodes the same way they work with normal VM instances
- VPC networks can be used to establish network connections between sole-tenant instances and normal VM instances
- Use custom machine types or predefined machine types to create instances on sole-tenant nodes
- Because the node's vCPUs and memory are already paid for, there is no extra charge for these instances
- Create managed instance groups on node groups
- Managed instance groups can use autoscaling while running on sole-tenant nodes, but the node groups cannot automatically scale
- Combine VMs with multiple machine types on each node
- Use a mix of machine types and custom machine types on the same node until the node reaches its vCPU and memory limit, which is defined by the node type
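The mixed-machine-type packing described above reduces to a capacity check against the node type's limits. A minimal sketch, assuming an n1-node-96-624 node; the machine-type sizes in the example follow Compute Engine's predefined n1 shapes:

```python
# Capacity limits of a single n1-node-96-624 sole-tenant node
NODE_VCPUS = 96
NODE_MEM_GB = 624

def fits_on_node(instances):
    """instances: list of (vcpus, memory_gb) tuples for VMs on one node."""
    total_vcpus = sum(v for v, _ in instances)
    total_mem = sum(m for _, m in instances)
    return total_vcpus <= NODE_VCPUS and total_mem <= NODE_MEM_GB

# A mix of predefined machine types on one node
mix = [
    (16, 60),   # n1-standard-16
    (32, 208),  # n1-highmem-32
    (8, 7.2),   # n1-highcpu-8
]
print(fits_on_node(mix))  # True: 56 vCPUs, 275.2 GB used
```

Once either the vCPU or memory total would exceed the node type's limit, further instances cannot be scheduled on that node.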
-
Restrictions
- Sole-tenant nodes are only available in select zones
-
VMs cannot be started on machine types that have fewer than two vCPUs. This includes
- Shared-core machine types: f1-micro and g1-small
- The n1-standard-1 machine type
- Custom machine types with only 1 vCPU
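The two-vCPU minimum can be screened for programmatically before attempting to launch. A minimal sketch; the vCPU counts below reflect the machine types listed above, and `custom-1-6656` is an illustrative custom machine type (1 vCPU, 6.5 GB):

```python
# vCPU counts for a few machine types; shared-core types (f1-micro,
# g1-small) are counted here as 1 vCPU for the purpose of the check.
MACHINE_VCPUS = {
    "f1-micro": 1,
    "g1-small": 1,
    "n1-standard-1": 1,
    "custom-1-6656": 1,   # illustrative 1-vCPU custom machine type
    "n1-standard-2": 2,
}

def allowed_on_sole_tenant(machine_type):
    """Sole-tenant nodes require instances with at least two vCPUs."""
    return MACHINE_VCPUS[machine_type] >= 2

print([m for m in MACHINE_VCPUS if allowed_on_sole_tenant(m)])
# ['n1-standard-2']
```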
-
The following features are unavailable on sole-tenant nodes
- GPUs
- Local SSDs
- Memory-optimized machine types