1. Application Delivery
    1. Application Delivery manages configurations for Google Kubernetes Engine (GKE) workloads declaratively with Git
    2. Review changes before deployment through pull requests on GitHub or GitLab
    3. Test and promote changes across different environments
    4. Roll back changes quickly
    5. Display each application's version and status in the Google Cloud Console
    6. Application Delivery consists of a command line program (appctl) that manages Application Delivery configurations and repositories, and a GKE add-on that runs in the cluster
    7. Application Delivery allows users to create multiple environments (for example, prod and staging) of the same application using a base configuration and overlays
    8. Overlays allow users to modify or add values to the environment's manifest
    9. Each environment corresponds to a namespace in the cluster
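The base-plus-overlay layout can be sketched with a hypothetical kustomize overlay; the directory names and patch file below are placeholders, not names Application Delivery mandates:

```yaml
# envs/prod/kustomization.yaml -- overlay for the "prod" environment,
# layered on a shared base configuration at ../../base
namespace: prod                    # each environment corresponds to a namespace
resources:
  - ../../base                     # the base configuration
patchesStrategicMerge:
  - replica-count.yaml             # overlay that modifies or adds values
```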
    10. Application Delivery stores configuration in two private Git repositories
    11. GitHub and GitLab repositories are currently supported
    12. Configuration changes are made in the application repository
    13. Configurations in the deployment repository are automatically generated from the application repository
    14. Using these two repositories, Application Delivery introduces a separation of concerns between maintenance and the reviewed source of truth
    15. The workflow encouraged by Application Delivery also prevents these two repositories from diverging
    16. The application repository stores application configuration files in kustomize format
    17. When a user makes a configuration change, they create a git tag and then push the tag to the application repository
    18. The deployment repository stores generated Kubernetes manifests in Git branches
    19. Each branch stores a configuration built with Application Delivery
    20. Configurations can be applied to environments
    21. A user renders the configuration and generates a pull request in the deployment repository with appctl prepare
    22. An administrator reviews the change
    23. After the pull request is merged, the user runs appctl apply
    24. Application Delivery then updates the application's configuration on the cluster
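Taken together, the workflow above might look like this on the command line; the environment name and tag are placeholders, and the exact flags should be checked against the appctl reference:

```shell
# make a configuration change in the application repository, then tag and push
git tag v1.0.1
git push origin v1.0.1

# render the configuration and open a pull request in the deployment repository
appctl prepare prod

# after an administrator reviews and merges the pull request, apply it
appctl apply prod --from-tag v1.0.1
```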
  2. Service Catalog
    1. Service Catalog allows applications running on a cluster to easily discover and connect to external services without manually importing information such as credentials or endpoints
    2. External service dependencies are modeled as Kubernetes resources, which can be easily integrated into existing deployment processes
  3. Metrics
    1. Custom and external metrics allow workloads to adapt based on conditions other than the workload itself
    2. Consider an application that pulls tasks from a queue and completes them
    3. The application might have a service-level objective (SLO) for the time to process a task, or for the number of tasks pending
    4. If the queue is growing, adding replicas of the workload might help it meet its SLO
    5. If the queue is empty, or is draining more quickly than expected, running fewer replicas saves money while still meeting the workload's SLO
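The scale-up/scale-down intuition above is the Horizontal Pod Autoscaler's proportional rule, desiredReplicas = ceil(currentReplicas × currentValue / targetValue); a minimal sketch, assuming a target of 30 pending tasks per replica:

```python
import math

def desired_replicas(current_replicas: int, current_value: float, target_value: float) -> int:
    # Horizontal Pod Autoscaler's core rule: scale in proportion to the
    # ratio of the observed metric to its per-replica target
    return math.ceil(current_replicas * current_value / target_value)

# 90 pending tasks, target of 30 per replica, 2 replicas today: scale up
print(desired_replicas(2, 90, 30))   # -> 6
# the queue drains to 10 pending tasks: scale down and save money
print(desired_replicas(6, 10, 30))   # -> 2
```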
    6. Custom metrics and external metrics differ from each other
    7. A custom metric is reported from applications running in Kubernetes
    8. An external metric is reported from an application or service not running on the cluster, but whose performance impacts the Kubernetes application
    9. The application can report a custom metric to Cloud Monitoring
    10. Kubernetes can be configured to respond to custom metrics and scale the workload automatically
    11. Applications can be scaled based on metrics such as queries per second, writes per second, network performance, latency when communicating with a different application, or other metrics that make sense for workloads
    12. A custom metric can be selected for a particular node, Pod, or any Kubernetes object of any kind, including a CustomResourceDefinition (CRD)
    13. For example, autoscaling can target the average value of a metric reported by all Pods in a Deployment
    14. A given custom metric can be filtered by label, by adding a selector field set to the label's key and value
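Points 12 through 14 can be illustrated with a sketch of a HorizontalPodAutoscaler; the resource names, metric name, and label are placeholders, and the `autoscaling/v2` API version is assumed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      metric:
        name: tasks_pending        # custom metric reported by each Pod
        selector:                  # filter the metric by label key and value
          matchLabels:
            queue: orders
      target:
        type: AverageValue         # average of the values reported by all Pods
        averageValue: "30"
```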
    15. Before a custom metric can be used, users must enable Cloud Monitoring in the Google Cloud project and install the Cloud Monitoring adapter on the cluster
    16. After custom metrics are exported to Cloud Monitoring, the Horizontal Pod Autoscaler can use them to trigger autoscaling events that change the shape of the workload
    17. Custom metrics must be exported from the application in a specific format
    18. The Cloud Monitoring web UI includes a metric auto-creation tool to help automatically create custom metrics
    19. If the auto-creation tool is used to create custom metrics, Cloud Monitoring detects them automatically
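As a sketch of that naming convention: custom metric types live under the `custom.googleapis.com/` prefix, and the payload below mirrors the shape of the Cloud Monitoring `timeSeries.create` REST body. The project ID and metric name are placeholders, and the actual API call through the client library is omitted:

```python
import time

def custom_metric_point(metric_name: str, value: float, project_id: str) -> dict:
    # Build a time-series payload in the shape the Cloud Monitoring REST API
    # accepts; custom metric types are namespaced under custom.googleapis.com/
    return {
        "timeSeries": [{
            "metric": {"type": f"custom.googleapis.com/{metric_name}"},
            "resource": {"type": "global", "labels": {"project_id": project_id}},
            "points": [{
                "interval": {"endTime": {"seconds": int(time.time())}},
                "value": {"doubleValue": value},
            }],
        }],
    }

payload = custom_metric_point("tasks_pending", 42.0, "my-project")
print(payload["timeSeries"][0]["metric"]["type"])
# -> custom.googleapis.com/tasks_pending
```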
    20. To scale a workload based on the performance of an application or service outside of Kubernetes, configure an external metric
    21. For example, if the number of undelivered Pub/Sub messages is trending upward, scaling out increases the application's capacity to ingest messages
    22. The external application needs to export the metric to a Cloud Monitoring instance that the cluster can access
    23. The trend of each metric over time causes Horizontal Pod Autoscaler to change the shape of the workload automatically
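An external metric is referenced the same way in a HorizontalPodAutoscaler's `metrics` list; this sketch assumes the common adapter convention of writing Cloud Monitoring metric names with `|` in place of `/`, and the subscription ID is a placeholder:

```yaml
metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription
      target:
        type: AverageValue
        averageValue: "30"
```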
    24. To import metrics to Cloud Monitoring, export metrics from the application using Cloud Monitoring APIs, or configure the application to emit metrics in Prometheus format
    25. Run the Prometheus to Cloud Monitoring adapter
    26. This is a small open-source sidecar container that scrapes the metrics, translates them to Cloud Monitoring format, and pushes them to the Cloud Monitoring API
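The format such a sidecar scrapes is Prometheus's plain-text exposition format; a minimal sketch of one sample line, with an illustrative metric name and label:

```python
def prometheus_line(name: str, labels: dict, value: float) -> str:
    # One sample in Prometheus text exposition format:
    #   metric_name{label="value",...} sample_value
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

print(prometheus_line("tasks_pending", {"queue": "orders"}, 42))
# -> tasks_pending{queue="orders"} 42
```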
    27. GKE uses the add-on resizer to scale the metrics-server add-on and the heapster add-on
    28. The add-on resizer scales the resource requests and resource limits of its managed containers in proportion to the number of nodes in the cluster
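The proportional rule can be sketched as follows; the baseline and per-node increment are placeholder values, not the add-on resizer's actual defaults:

```python
def scaled_cpu_millis(base_millis: int, per_node_millis: int, node_count: int) -> int:
    # add-on resizer idea: resource requests and limits grow linearly
    # with the number of nodes in the cluster
    return base_millis + per_node_millis * node_count

# placeholder numbers: 100m baseline plus 1m per node, on a 50-node cluster
print(scaled_cpu_millis(100, 1, 50))  # -> 150
```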