-
Overview
- Container Registry is a private container image registry that runs on Google Cloud.
- Container Registry supports Docker Image Manifest V2 and OCI image formats
- Many developers use Docker Hub as a central registry for storing public Docker images
- To control access to images, use a private registry such as Container Registry
- Container Registry can be accessed through secure HTTPS endpoints, which allows users to push, pull, and manage images from any system, VM instance, or on-premises hardware
- Docker credential helper command-line tool can be used to configure Docker to authenticate directly with Container Registry
- Registries in Container Registry are named by the host and project ID
- Locations correspond to the multi-regions for Cloud Storage buckets
- When an image is pushed to a registry with a new hostname, Container Registry creates a storage bucket in the specified multi-region
- Cloud Storage bucket is the underlying storage for the registry
- Within a project, all registries with the same hostname share one storage bucket
- A registry can contain many images, and these images may have different versions
- To identify a specific version of the image within a registry, specify the image's tag or digest
- Tags are unique to one image within a registry
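- As a sketch, a specific version can be referenced by tag or by digest; `my-project`, `my-app`, and the tag below are hypothetical placeholders:

```sh
# List tags and digests for an image in the gcr.io registry
gcloud container images list-tags gcr.io/my-project/my-app

# Pull a specific version by its tag
docker pull gcr.io/my-project/my-app:v1.2.0

# Pull an immutable version by its digest (substitute a real digest value)
docker pull gcr.io/my-project/my-app@sha256:DIGEST
```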
- Container Registry stores its tags and layer files for container images in a Cloud Storage bucket in the same project as the registry
- Access to the bucket is configured using Cloud Storage identity and access management
- By default, project Owners and Editors have push and pull permissions for that project's Container Registry bucket
- Project Viewers have pull permission only
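- As a sketch, access can also be granted directly on the underlying bucket with Cloud Storage IAM; the bucket name pattern below applies to the gcr.io host, and the service account emails are hypothetical:

```sh
# Grant pull (read) access on the registry's underlying Cloud Storage bucket
gsutil iam ch serviceAccount:puller@my-project.iam.gserviceaccount.com:objectViewer \
  gs://artifacts.my-project.appspot.com

# Grant push (read and write) access on the same bucket
gsutil iam ch serviceAccount:pusher@my-project.iam.gserviceaccount.com:objectAdmin \
  gs://artifacts.my-project.appspot.com
```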
- Before pushing or pulling images, authentication needs to be configured
- Docker can be configured to use the gcloud command-line tool to authenticate requests to Container Registry
- Container Registry supports advanced authentication methods using access tokens or JSON key files
- Docker needs access to Container Registry to push and pull images
- Use the Docker credential helper command-line tool to configure Container Registry credentials for use with Docker
- Docker command-line tool, docker, can be used to interact directly with Container Registry
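- A minimal sketch of the authentication options and a push/pull round trip, assuming the gcloud CLI and Docker are installed (`my-project` and `my-app` are placeholders):

```sh
# Register gcloud as a Docker credential helper for gcr.io hosts
gcloud auth configure-docker

# Alternative: log in with a short-lived access token
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://gcr.io

# Alternative: log in with a service account JSON key file
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json

# Tag a local image with the registry name, then push and pull it
docker tag my-app gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1
docker pull gcr.io/my-project/my-app:v1
```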
- When Container Registry API is enabled, Container Registry adds a service account to the project
- Google owns the Container Registry service account, but it is specific to a project
- If the Container Registry service account is deleted or its permissions changed, certain Container Registry features will not work correctly
- Container Registry service account roles should not be modified or the account deleted
- Pub/Sub can be used to get notifications about changes to container images
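- As a sketch, these notifications flow through the Pub/Sub topic named `gcr` in the project; the subscription name below is a placeholder:

```sh
# Create the topic Container Registry publishes image changes to
gcloud pubsub topics create gcr

# Subscribe to image change notifications
gcloud pubsub subscriptions create gcr-notifications --topic=gcr

# Pull a few notification messages for inspection
gcloud pubsub subscriptions pull gcr-notifications --auto-ack --limit=5
```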
- Compute Engine instances and Google Kubernetes Engine clusters can push and pull Container Registry images based on Cloud Storage scopes on the instances
- Images stored in Container Registry can be deployed to the App Engine flexible environment
- Container Registry works with several popular continuous delivery systems
- Container Registry can be integrated with external services
-
Images
- Managed base images are base container images that are automatically patched by Google for security vulnerabilities
- When a container is deployed, two separate images, each with its own operating system, are chosen
- Node or host image is the operating system on which the container runs
- Container image is the operating system used by the container itself
- The container image is built by taking an operating system base image, and adding the packages, libraries, and binaries needed for the application
- Google maintains base images for building its own applications, including Google Cloud services like Google App Engine
- Managed base images have security properties which can make them desirable for some uses
- They are regularly scanned for known vulnerabilities from the CVE database
- Base image security scan uses the same functionality as Container Registry Vulnerability Scanning
- When a patch is available for a found vulnerability, Google applies that patch
- Base images are built reproducibly, so there is a verifiable path from the source code to the binary
- Base images can be verified by comparing them to the GitHub source, ensuring that the build has not introduced any flaws
- They are stored on Google Cloud, so they can be pulled directly from within the environment without traversing external networks
- Base images can be pulled using Private Google Access
- Base images can be used outside of Google Cloud
- Managed base images are available in GCP Marketplace
- Support for managed base images is subject to the lifecycles of the corresponding OS distributions
- Unless otherwise noted, Google publishes updated images at least monthly
- Published updates include security updates and other updates installed for operating system versions that are in the mainstream support stage of their lifecycles
- When an operating system version enters its extended lifecycle stage, Google no longer provides updated images
- Google generally does not backport new features to these versions in the extended lifecycle stage or past the extended lifecycle
- Distroless images are minimal, language-focused images
- Container Registry's Docker Hub Mirror offers frequently requested Docker Hub images, including base images
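- For example, a distroless base image and a mirrored Docker Hub image can be pulled directly; the exact image names and tags below are assumptions and may change over time:

```sh
# Pull a minimal distroless base image maintained by Google
docker pull gcr.io/distroless/static-debian11

# Pull a frequently requested Docker Hub image through the mirror
docker pull mirror.gcr.io/library/alpine
```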
-
Analysis
- Container Analysis provides vulnerability scanning and metadata storage for software artifacts
- The service stores metadata and makes it available for consumption through an API
- The metadata comes from vulnerability scanning, other Cloud services, and third-party providers
- Container Analysis monitors vulnerability information to keep it up to date
- With incremental scanning, Container Analysis scans new images as they are uploaded
- The scan gathers metadata based on the container manifest and updates metadata every time the image is re-uploaded (re-pushed)
- With continuous analysis, Container Analysis continuously monitors the metadata of scanned images in Container Registry for new vulnerabilities
- This type of analysis pertains only to package vulnerabilities and does not include other kinds of metadata
- Container Analysis performs continuous analysis only for images that have been pulled in the last 30 days
- When the scan of an image is completed, the produced vulnerability result is the collection of vulnerability occurrences for that image
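- As a sketch, the vulnerability result for an image can be inspected from the command line (`my-project`, `my-app`, and the tag are placeholders):

```sh
# Show package vulnerability occurrences for a scanned image
gcloud beta container images describe gcr.io/my-project/my-app:v1 \
  --show-package-vulnerability
```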
- The severity levels are qualitative labels that reflect factors such as exploitability, scope, impact, and maturity of the vulnerability
- If a vulnerability enables a remote user to easily access a system and run arbitrary code without authentication or user interaction, that vulnerability would be classified as Critical
- Effective severity is the severity level assigned by the Linux distribution
- If distribution-specific severity levels are unavailable, Container Analysis uses the severity level assigned by the note provider
- CVSS score is the Common Vulnerability Scoring System score and associated severity level
- For a given vulnerability, the severity derived from a calculated CVSS score might not match the effective severity
- Linux distributions that assign severity levels use their own criteria to assess the specific impacts of a vulnerability on their distributions
- A high-level piece of metadata, such as a vulnerability or build information, is called a note
- When Container Analysis analyzes an image, each instance of a note that it finds is identified as an occurrence
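- As a sketch, the occurrences attached to an image's notes can be listed with the same describe command; the flag below is assumed to be available in the beta component:

```sh
# Show all metadata occurrences (vulnerabilities, build details, and so on)
gcloud beta container images describe gcr.io/my-project/my-app:v1 \
  --show-all-metadata
```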
-
Logging
- Google Cloud services write audit logs to help answer the questions, "Who did what, where, and when?"
- Cloud projects contain only the audit logs for resources that are directly within the project
- Other entities, such as folders, organizations, and billing accounts, each contain the audit logs for the entity itself
- Cloud Audit Logs maintains Admin Activity audit logs, Data Access audit logs, and System Event audit logs
- Container Analysis writes Admin Activity audit logs, which record operations that modify the configuration or metadata of a resource
- Container Analysis writes Data Access audit logs only if they are explicitly enabled
- Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data
- Data Access audit logs do not record the data-access operations on resources that are publicly shared or that can be accessed without logging into Google Cloud
- Container Analysis does not write System Event audit logs
- Admin Activity audit logs are always enabled and can't be disabled
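- A minimal sketch of reading recent Admin Activity audit log entries; the Container Analysis service name in the filter is an assumption:

```sh
# Read recent Admin Activity audit log entries for Container Analysis
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="containeranalysis.googleapis.com"' \
  --limit=5
```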
- Data Access audit logs are disabled by default and are not written unless explicitly enabled, with the exception of Data Access audit logs for BigQuery, which cannot be disabled
- The Data Access audit logs that are configured can affect logs pricing in Cloud Logging
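- A minimal sketch of enabling Data Access audit logs for a service by editing the project IAM policy (`my-project` is a placeholder):

```sh
# Download the current IAM policy
gcloud projects get-iam-policy my-project > policy.yaml

# Edit policy.yaml to add an auditConfigs section, for example:
#   auditConfigs:
#   - service: containeranalysis.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#     - logType: DATA_WRITE

# Apply the updated policy to turn the logs on
gcloud projects set-iam-policy my-project policy.yaml
```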
- Cloud Identity and Access Management permissions and roles determine which audit logs can be viewed or exported
- Logs reside in projects and in some other entities including organizations, folders, and billing accounts
- If you are using audit logs from a non-project entity, such as an organization, then replace the project-level roles with suitable organization-level roles
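- As a sketch, viewing access can be granted with the predefined Logging roles (the member email is a placeholder):

```sh
# Grant permission to view most logs, including Admin Activity audit logs
gcloud projects add-iam-policy-binding my-project \
  --member=user:auditor@example.com --role=roles/logging.viewer

# Data Access audit logs additionally require the Private Logs Viewer role
gcloud projects add-iam-policy-binding my-project \
  --member=user:auditor@example.com --role=roles/logging.privateLogViewer
```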
- Audit logs can be exported in the same way as any other kinds of logs
- To keep audit logs for a longer period of time or to use powerful search capabilities, export audit logs to Cloud Storage, BigQuery, or Pub/Sub
- Pub/Sub can be used to export logs to other applications and repositories
- To manage audit logs across an entire organization, create aggregated export sinks that can export logs from any or all projects in the organization
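- A minimal sketch of a project-level sink and an organization-level aggregated sink; the sink names, bucket names, and organization ID are placeholders:

```sh
# Export audit logs from one project to a Cloud Storage bucket
gcloud logging sinks create audit-sink \
  storage.googleapis.com/my-audit-bucket \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# Aggregated sink at the organization level, including all child projects
gcloud logging sinks create org-audit-sink \
  storage.googleapis.com/my-org-audit-bucket \
  --organization=123456789012 --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```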
- If Data Access audit logs exceed their logs allotments, export them and then exclude them from Logging
- Cloud Logging does not charge for audit logs that cannot be disabled, including all Admin Activity audit logs
- Cloud Logging charges for Data Access audit logs that are explicitly requested