1. Overview
    1. Google Kubernetes Engine provides a managed environment for deploying, managing, and scaling containerized applications using Google infrastructure
    2. The GKE environment consists of multiple Google Compute Engine machines grouped together to form a cluster
    3. GKE clusters are powered by the Kubernetes open source cluster management system
    4. Kubernetes provides the mechanisms for interacting with clusters
    5. Kubernetes commands and resources can be used to deploy and manage applications, perform administration tasks and set policies, and monitor the health of deployed workloads
    6. Kubernetes draws on the same design principles that run popular Google services and provides the same benefits: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more
    7. Applications running on Google Cloud clusters use technology based on Google's 15+ years of experience running production workloads in containers
    8. GKE clusters provide advanced cluster management features
      1. Cloud load-balancing for Compute Engine instances
      2. Node pools to designate subsets of nodes within a cluster for additional flexibility
      3. Automatic scaling of cluster's node instance count
      4. Automatic upgrades for cluster's node software
      5. Node auto-repair to maintain node health and availability
      6. Logging and monitoring for visibility into cluster operations
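Several of the management features above map to flags on cluster creation. A minimal sketch (the cluster name, zone, and node counts are placeholders, not a recommended configuration):

```shell
# Create a cluster with node autoscaling, automatic node upgrades,
# and node auto-repair enabled at creation time
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 \
  --enable-autoupgrade \
  --enable-autorepair
```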
    9. GKE cluster masters are automatically upgraded to run new versions of Kubernetes as those versions become stable, enabling users to take advantage of newer features from the open source Kubernetes project
    10. Users can opt-in to newer versions of Kubernetes than those scheduled for automatic upgrades by manually initiating a master upgrade
    11. Kubernetes Alpha features are available in special GKE alpha clusters
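Both the manual master upgrade and alpha-cluster creation are driven from the gcloud CLI. A hedged sketch (cluster names, zone, and version string are placeholders; versions actually available in a zone can be listed with `gcloud container get-server-config`):

```shell
# Manually upgrade the cluster master to a newer available version,
# ahead of the automatic upgrade schedule
gcloud container clusters upgrade demo-cluster \
  --zone us-central1-a \
  --master \
  --cluster-version 1.27.3-gke.100

# Alpha clusters must be created as such; they expose Kubernetes
# alpha features and are intended for testing, not production
gcloud container clusters create alpha-demo \
  --zone us-central1-a \
  --enable-kubernetes-alpha
```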
    12. GKE works with containerized applications: applications packaged into hardware-independent, isolated user-space instances, for example by using Docker
    13. GKE and Kubernetes containers, whether for applications or batch jobs, are collectively called workloads
    14. Before deploying a workload on a GKE cluster, it must be packaged into a container
    15. Google Cloud Platform provides continuous integration and continuous delivery tools to help build and serve application containers
    16. Google Cloud Build can be used to build container images (such as Docker) from a variety of source code repositories, and Google Container Registry to store and serve container images
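A minimal build-and-deploy sketch of this flow, assuming a source directory that contains a Dockerfile (PROJECT_ID, the image name, and the tag are placeholders):

```shell
# Build the container image with Cloud Build and push it to
# Container Registry in one step
gcloud builds submit --tag gcr.io/PROJECT_ID/hello-app:v1 .

# Deploy the stored image to a GKE cluster as a workload
kubectl create deployment hello-app --image=gcr.io/PROJECT_ID/hello-app:v1
```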
  2. Architecture
    1. A cluster consists of at least one cluster master and multiple worker machines called nodes
    2. The cluster master and node machines run the Kubernetes cluster orchestration system
    3. A cluster is the foundation of Google Kubernetes Engine
    4. Kubernetes objects that represent containerized applications run on top of a cluster
    5. The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers
    6. The master's lifecycle is managed by Google Kubernetes Engine when a cluster is created or deleted
    7. Google Kubernetes Engine performs upgrades to the Kubernetes version running on the cluster master automatically
    8. Google Kubernetes Engine upgrades can be manually requested if required earlier than the automatic schedule
    9. All interactions with the cluster are done via Kubernetes API calls
    10. The master runs the Kubernetes API Server process that handles Kubernetes API calls
    11. Kubernetes API calls can be made directly via HTTP/gRPC
    12. Kubernetes API calls can be made indirectly, by running commands from the Kubernetes command-line client (kubectl)
    13. Kubernetes API calls can be made indirectly by interacting with the UI in the Cloud Console
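The direct and indirect access paths above can be illustrated as follows; this is a sketch (the namespace and local port are placeholders), and the Cloud Console path is the same API exercised through the UI:

```shell
# Indirect: the kubectl command-line client translates commands
# into Kubernetes API calls against the master's API server
kubectl get pods --namespace default

# Direct: kubectl proxy forwards authenticated HTTP requests to
# the API server, which can then be queried with plain curl
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```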
    14. The cluster master's API server process is the hub for all communication for the cluster
    15. All internal cluster processes, such as the cluster nodes, system components, and application controllers, act as clients of the API server
    16. The Kubernetes API server is the single "source of truth" for the entire cluster
    17. The cluster master is responsible for deciding what runs on all of the cluster's nodes
    18. The cluster master is responsible for scheduling workloads, like containerized applications, and managing the workloads' lifecycle, scaling, and upgrades
    19. The master also manages network and storage resources for those workloads
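The scheduling, scaling, and upgrade responsibilities above are exercised through ordinary Kubernetes commands; a sketch (the deployment name and image tags are placeholders):

```shell
# The master schedules the resulting Pods onto nodes
kubectl create deployment web --image=nginx:1.24

# Scaling: the master reconciles the cluster to five replicas
kubectl scale deployment web --replicas=5

# Rolling upgrade: the master replaces Pods incrementally
kubectl set image deployment/web nginx=nginx:1.25
kubectl rollout status deployment/web
```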
    20. The master and nodes also communicate using Kubernetes APIs
    21. When a cluster is created or updated, container images for the Kubernetes software running on the masters (and nodes) are pulled from the gcr.io container registry
    22. In the event of a zonal or regional outage of the gcr.io container registry, Google may redirect requests to a zone or region not affected by the outage
    23. Google Cloud status dashboard can be used to check the current status of Google Cloud services
    24. A cluster typically has one or more nodes that run containerized applications and other workloads
    25. Individual nodes are Compute Engine VM instances that GKE creates when a cluster is created
    26. Each node is managed from the master, which receives updates on each node's self-reported status
    27. Users can exercise some manual control over the node lifecycle, or have GKE perform automatic repairs and automatic upgrades on the cluster's nodes
    28. A node runs the services necessary to support the Docker containers that make up a cluster's workloads
    29. Node services include the Docker runtime and the Kubernetes node agent (kubelet) which communicates with the master and is responsible for starting and running Docker containers scheduled on that node
    30. In GKE, there are a number of special containers that run as per-node agents to provide functionality such as log collection and intra-cluster network connectivity
    31. Each node is of a standard Compute Engine machine type
    32. Users can select a different machine type when they create a cluster
    33. Each node runs a specialized OS image for running containers
    34. Users can specify which OS image the clusters and node pools use
    35. A baseline minimum CPU platform for nodes or node pools can be specified when a cluster is created
    36. Choosing a specific CPU platform can be advantageous for advanced or compute-intensive workloads
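Machine type, node OS image, and baseline CPU platform are all selected at cluster creation; a sketch with placeholder values (the CPU platform name must be one offered in the chosen zone):

```shell
# Create a cluster whose nodes use a larger machine type, the
# Container-Optimized OS image, and a minimum CPU platform
gcloud container clusters create compute-demo \
  --zone us-central1-a \
  --machine-type n2-standard-8 \
  --image-type COS_CONTAINERD \
  --min-cpu-platform "Intel Ice Lake"
```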
    37. Some of a node's resources are reserved to run the GKE and Kubernetes node components necessary to make that node function as part of a cluster
    38. Larger machine types tend to run more containers (and by extension, more Pods)
    39. The amount of resources that GKE reserves for Kubernetes components scales upward for larger machines
    40. Windows Server nodes also require more resources than a typical Linux node
    41. The nodes need the extra resources to account for running the Windows OS and for the Windows Server components that can't run in containers
    42. Resource requests can be made for Pods, and resource limits can be set to cap their usage
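A minimal sketch of per-container requests and limits, applied via a heredoc manifest (the Pod name, container name, image, and values are placeholders):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.24
    resources:
      requests:        # what the scheduler reserves for the container
        cpu: 250m
        memory: 128Mi
      limits:          # hard cap enforced at runtime
        cpu: 500m
        memory: 256Mi
EOF
```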
  3. Responsibilities
    1. Google is responsible for the control plane (including the master VMs, API server, other components on the master VMs, and etcd)
    2. Google is responsible for the Kubernetes distribution
    3. Google is responsible for the nodes' operating system
    4. Configurations related to these items are generally not available for customers to audit or modify in GKE
    5. Clients are still responsible for upgrading the nodes that run workloads, and the workloads themselves
    6. Clients can generally audit and remediate any recommendations to these components