1. Overview
    1. Cloud Interconnect
      1. Provides low latency, highly available connections that enable users to reliably transfer data between on-premises and Virtual Private Cloud networks
      2. Provides RFC 1918 communication, which means internal (private) IP addresses are directly accessible from both networks
      3. Offers two options for extending the on-premises network
    2. Dedicated Interconnect
      1. Provides a direct physical connection between on-premises network and Google's network
      2. Enables the transfer of large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public Internet
      3. The network must physically meet Google's network in a colocation facility
      4. Users must provide their own routing equipment
      5. Customer provisions a cross connect between the Google network and the customer router in a common location
      6. This cross connect is a Dedicated Interconnect connection
      7. To exchange routes, a BGP session is configured over the interconnect between the Cloud Router and the on-premises router
      8. Traffic from the on-premises network can reach the VPC network and vice versa, as sketched below
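      A conceptual sketch (not a real API) of that route exchange: the on-premises router advertises its prefixes to the Cloud Router, which installs them as custom dynamic routes with the VLAN attachment as next hop, and vice versa. All names and ranges here are illustrative:

          # Simulate the BGP exchange: each side learns the other's prefixes.
          from dataclasses import dataclass, field

          @dataclass
          class RouteTable:
              routes: dict = field(default_factory=dict)  # prefix -> next hop

              def install(self, prefix: str, next_hop: str) -> None:
                  self.routes[prefix] = next_hop

          def exchange_routes(on_prem_prefixes, vpc_subnets, attachment):
              vpc_table, on_prem_table = RouteTable(), RouteTable()
              for prefix in on_prem_prefixes:            # Cloud Router learns these
                  vpc_table.install(prefix, attachment)  # next hop: VLAN attachment
              for prefix in vpc_subnets:                 # on-premises router learns these
                  on_prem_table.install(prefix, "interconnect")
              return vpc_table, on_prem_table

          vpc, onprem = exchange_routes(["10.1.0.0/16"], ["10.128.0.0/20"], "attachment-1")
          print(vpc.routes)     # {'10.1.0.0/16': 'attachment-1'}
          print(onprem.routes)  # {'10.128.0.0/20': 'interconnect'}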
    3. Partner Interconnect
      1. Provides connectivity between on-premises and Google Cloud VPC networks through a supported service provider
      2. Connection is useful if the data center is in a physical location that can't reach a Dedicated Interconnect colocation facility or if data needs don't warrant an entire 10 Gbps connection
      3. Requires users to separately obtain services from a third-party network service provider
      4. Google is not responsible for any aspects of Partner Interconnect provided by the third-party service provider nor any issues outside of Google's network
      5. Customers must work with a supported service provider to establish connectivity between the provider's network and their on-premises network
  2. Benefits
    1. Traffic between on-premises network and VPC network does not traverse the public Internet
    2. Instead, traffic traverses a dedicated connection or a service provider's network with a dedicated connection
    3. By bypassing the public Internet, traffic takes fewer hops, so there are fewer points of failure where traffic might get dropped or disrupted
    4. VPC network's internal (RFC 1918) IP addresses are directly accessible from the on-premises network
    5. There is no need to use a NAT device or VPN tunnel to reach internal IP addresses
    6. For Dedicated Interconnect, connection capacity is delivered over one or more 10 Gbps or 100 Gbps Ethernet connections
    7. The maximum capacity supported per interconnect is 8 x 10 Gbps connections (80 Gbps total) or 2 x 100 Gbps connections (200 Gbps total)
    8. For Partner Interconnect, each interconnect attachment (VLAN) supports from 50 Mbps to 10 Gbps of capacity, with up to 8 x 10 Gbps interconnect attachments (VLANs) (80 Gbps total); see the sizing sketch after this list
    9. Dedicated Interconnect, Partner Interconnect, Direct Peering, and Carrier Peering can all help users optimize egress traffic from VPC network and can help reduce egress costs
    10. Cloud VPN, by itself, does not reduce egress costs
    11. Cloud Interconnect can be used in conjunction with Private Google Access for on-premises hosts so that on-premises hosts can use internal IP addresses rather than external IP addresses to reach Google APIs and services
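    A minimal, purely capacity-based sizing sketch of the figures above. The thresholds are the documented limits; the function name and return strings are invented, and real choices also depend on location, redundancy, and cost:

        # Illustrative helper mapping required bandwidth to the capacity
        # limits listed above; ignores location and cost considerations.
        def suggest_connectivity(required_gbps: float) -> str:
            if required_gbps <= 10:
                # One Partner Interconnect VLAN attachment: 50 Mbps-10 Gbps
                return "Partner Interconnect (single VLAN attachment)"
            if required_gbps <= 80:
                # Up to 8 x 10 Gbps per Dedicated Interconnect (or 8 Partner VLANs)
                return "Dedicated Interconnect (up to 8 x 10 Gbps)"
            if required_gbps <= 200:
                # Up to 2 x 100 Gbps per Dedicated Interconnect
                return "Dedicated Interconnect (2 x 100 Gbps)"
            return "Multiple interconnects required"

        print(suggest_connectivity(40))  # Dedicated Interconnect (up to 8 x 10 Gbps)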
  3. Considerations
    1. Cloud VPN
      1. If the low latency and high availability of Cloud Interconnect are not required, consider using Cloud VPN to set up IPsec VPN tunnels between networks
      2. IPsec VPN tunnels encrypt data using industry-standard IPsec protocols as traffic traverses the public Internet
      3. A Cloud VPN tunnel doesn't require the overhead or costs associated with a direct, private connection
      4. Cloud VPN only requires a VPN device in the on-premises network
    2. IP addressing and dynamic routes
      1. When a VPC network is connected to an on-premises network, communication is allowed between the IP address space of the on-premises network and some or all of the subnets in the VPC network
      2. Which VPC subnets are available depends on the dynamic routing mode of the VPC network
      3. On-premises routers advertise routes for the on-premises network to the Cloud Routers in the VPC network, creating custom dynamic routes in the VPC network, each with a next hop set to the appropriate interconnect attachment (VLAN)
      4. Unless modified by custom advertisements, Cloud Routers in the VPC network share VPC network subnet IP address ranges with the on-premises routers according to the dynamic routing mode of the VPC network
      5. Subnet IP ranges in VPC networks are always RFC 1918 IP addresses
      6. The IP address space on the on-premises network and on the VPC network must not overlap, or traffic will not be routed properly; see the overlap-check sketch below
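      A minimal sketch of that overlap check using only the Python standard library; the example CIDR ranges are hypothetical:

          # Flag (on-prem, VPC) CIDR pairs whose address spaces overlap.
          import ipaddress

          def overlapping_pairs(on_prem_cidrs, vpc_subnet_cidrs):
              on_prem = [ipaddress.ip_network(c) for c in on_prem_cidrs]
              vpc = [ipaddress.ip_network(c) for c in vpc_subnet_cidrs]
              return [(str(a), str(b)) for a in on_prem for b in vpc if a.overlaps(b)]

          # 10.0.0.0/8 contains 10.128.0.0/20, so this pair would break routing.
          print(overlapping_pairs(["10.0.0.0/8", "192.168.0.0/16"], ["10.128.0.0/20"]))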
    3. Transitive routing support
      1. A hub-and-spoke linking of VPC networks can be used to connect an on-premises network as long as no more than one on-premises network is included
      2. Although it is technically possible to create a hub-and-spoke configuration that links two or more on-premises networks to each other by using a VPC network and VPNs or Cloud Interconnect, such a setup is a violation of the Terms of Service
  4. CDN
    1. Overview
      1. Cloud workloads that frequently update data stored in CDN locations benefit from using CDN Interconnect, because the direct link to the CDN provider reduces latency for these CDN destinations
      2. When populating the CDN with large data files from Google Cloud, traffic is automatically optimized and costs are reduced over CDN Interconnect links between Google Cloud and selected providers
      3. CDN Interconnect allows select CDN providers to establish direct peering links with Google's edge network at various locations
      4. Network traffic egressing from Google Cloud Platform through one of these links benefits from the direct connectivity to supported CDN providers and is billed automatically with reduced pricing
      5. Traffic from supported Google Cloud locations to CDN provider automatically takes advantage of the direct connection and reduced pricing
    2. Pricing
      1. Google works with approved CDN partners in supported locations to whitelist provider IP addresses
      2. Data sent to a whitelisted CDN provider from Google Cloud is charged at the reduced price
      3. Traffic between Google Cloud Platform and pre-approved CDN Interconnect locations is billed as follows:
        1. Ingress: Free for all regions.
        2. Egress: Rates only apply to data leaving Google Compute Engine or Google Cloud Storage.
      4. Intra-region pricing for CDN Interconnect applies only to intra-region egress traffic sent to Google-approved CDN providers at the specific locations approved for those providers; a billing-classification sketch follows this list
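      A hedged sketch of that billing classification: egress is matched against the provider prefixes that Google has whitelisted. The prefix list here is invented; real whitelisted ranges come from Google and the CDN partner:

          # Classify egress by destination against whitelisted provider prefixes.
          import ipaddress

          WHITELISTED_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # example only

          def egress_rate_class(dest_ip: str) -> str:
              addr = ipaddress.ip_address(dest_ip)
              if any(addr in prefix for prefix in WHITELISTED_PREFIXES):
                  return "reduced CDN Interconnect rate"
              return "standard internet egress rate"

          print(egress_rate_class("203.0.113.10"))  # reduced CDN Interconnect rate
          print(egress_rate_class("198.51.100.7"))  # standard internet egress rate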
  5. Provisioning
    1. Dedicated Interconnect
      1. Start by ordering an interconnect so that Google can allocate the necessary resources and send an LOA-CFA
      2. After receiving the LOA-CFA, submit it to the vendor so that they can provision the cross connects between Google's network and your network
      3. Configure and test the interconnects with Google before using them
      4. After they're ready, create VLAN attachments to allocate a VLAN on the interconnect; the sketch below summarizes the ordering of these steps
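      A conceptual sketch of the provisioning order described above; the stage names and helper are invented for illustration:

          # Enforce the ordering of the Dedicated Interconnect provisioning steps.
          from enum import IntEnum

          class Stage(IntEnum):
              ORDER_INTERCONNECT = 1       # Google allocates resources, sends LOA-CFA
              SUBMIT_LOA_CFA = 2           # vendor provisions the cross connects
              TEST_WITH_GOOGLE = 3         # configure and test before use
              CREATE_VLAN_ATTACHMENTS = 4  # allocate VLANs on the interconnect

          def advance(current: Stage, target: Stage) -> Stage:
              if target != current + 1:
                  raise ValueError(f"cannot skip from {current.name} to {target.name}")
              return target

          stage = Stage.ORDER_INTERCONNECT
          stage = advance(stage, Stage.SUBMIT_LOA_CFA)  # skipping a stage raises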
    2. Partner Interconnect
      1. Start by connecting the on-premises network to a supported service provider
      2. Create a VLAN attachment for a Partner Interconnect in the GCP project
      3. This generates a unique pairing key that is used to request a connection from the service provider
      4. Provide other information such as the connection location and capacity
      5. After the service provider configures the attachment, activate it to start using it
      6. Activation allows the user to check that they are connecting to the expected service provider
      7. If the user doesn't need to verify the connection and is using a layer 3 connection, they can choose to pre-activate the attachment
      8. A pre-activated attachment can pass traffic as soon as the service provider has configured it
      9. Consider pre-activation when using layer 3 and the connection should be activated without additional approval
      10. Layer 3 providers automatically configure BGP sessions with Cloud Routers so that BGP starts immediately
      11. For layer 2 connections, there's no benefit to pre-activating VLAN attachments; the lifecycle sketch below summarizes these rules
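      A conceptual sketch of the VLAN-attachment lifecycle just described. The field names and the uuid-based pairing key are illustrative, not the real resource schema:

          # Model: pairing key generation, provider configuration, activation.
          import uuid
          from dataclasses import dataclass, field

          @dataclass
          class VlanAttachment:
              layer: int                      # 2 or 3, per the service provider
              pre_activated: bool = False
              provider_configured: bool = False
              activated: bool = False
              pairing_key: str = field(default_factory=lambda: str(uuid.uuid4()))

              def provider_configures(self) -> None:
                  self.provider_configured = True
                  if self.pre_activated and self.layer == 3:
                      self.activated = True   # traffic can flow immediately

              def activate(self) -> None:
                  # Manual step: lets the user verify the provider first.
                  if not self.provider_configured:
                      raise RuntimeError("provider has not configured the attachment")
                  self.activated = True

          att = VlanAttachment(layer=3, pre_activated=True)
          print(att.pairing_key)   # share this (carefully) with the provider
          att.provider_configures()
          print(att.activated)     # True: layer 3 + pre-activation needs no approval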
  6. Redundancy
    1. Dedicated Interconnect
      1. To achieve a specific level of reliability, Google has two prescriptive configurations, one for 99.99% availability and another for 99.9% availability
      2. Google recommends 99.99% configuration for production-level applications with low tolerance for downtime
      3. If applications are not mission-critical and can tolerate some downtime, use the 99.9% configuration
      4. With a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to the on-premises network
    2. Partner Interconnect
      1. The SLA doesn't cover the connectivity between the user's network and the service provider's network
      2. If the service provider offers an SLA, users can get an end-to-end SLA, based on the Google-defined topologies
      3. For the highest level availability, Google recommends the 99.99% availability configuration
      4. Clients in the on-premises network can reach the IP addresses of VM instances in the selected region through at least one of the redundant paths and vice versa
    3. Setup
      1. 99.99% availability requires at least four VLAN attachments across two metros (two per metro, one in each edge availability domain)
      2. Use multiple service providers to build a highly available topology
      3. Build redundant connections for each service provider in each metro
      4. Provision two primary connections by using a local service provider that's close to the data center
      5. For the backup connections, use a long-haul service provider to build two connections in a different metro; the sketch below checks a topology against these rules
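      A hedged check of the 99.99% rule above: at least four VLAN attachments, spread over two metros, using both edge availability domains in each metro. The metro and domain names are invented:

          # Validate a topology against the 99.99% availability layout.
          from collections import defaultdict

          def meets_four_nines(attachments):
              """attachments: list of (metro, edge_availability_domain) tuples."""
              if len(attachments) < 4:
                  return False
              domains_per_metro = defaultdict(set)
              for metro, domain in attachments:
                  domains_per_metro[metro].add(domain)
              return (len(domains_per_metro) >= 2
                      and all(len(d) >= 2 for d in domains_per_metro.values()))

          topology = [("metro-a", "domain-1"), ("metro-a", "domain-2"),
                      ("metro-b", "domain-1"), ("metro-b", "domain-2")]
          print(meets_four_nines(topology))  # True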
  7. Partner
    1. For layer 2 connections, a BGP session needs to be established between Cloud Routers and on-premises routers for each VLAN attachment
    2. The BGP configuration information is provided by the VLAN attachment after the service provider has configured it
    3. For layer 2 connections, traffic passes through the service provider's network to reach the VPC or on-premises network
    4. BGP is configured between the on-premises router and a Cloud Router in the VPC network
    5. For layer 3 connections, the service provider establishes a BGP session between the customer’s Cloud Routers and their edge routers for each VLAN attachment
    6. For layer 3 connections, the customer does not need to configure BGP on the on-premises router
    7. Google and the service provider automatically set the correct configurations
    8. Because the BGP configuration for layer 3 connections is fully automated, users can pre-activate the connections (VLAN attachments)
    9. When pre-activation is enabled, the VLAN attachments are active as soon as the service provider configures them
    10. For layer 3 connections, traffic is passed to the service provider's network, and then their network routes the traffic to the correct destination, either to the on-premises network or to the VPC network
    11. Connectivity between the on-premises and service provider networks depends on the service provider
    12. The service provider might ask the user to establish a BGP session with them or to configure a static default route to their network; the small lookup below summarizes who configures BGP at each layer
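    A purely illustrative summary of the layer 2 versus layer 3 responsibilities described above:

        # Who sets up BGP for a Partner Interconnect VLAN attachment.
        BGP_SETUP = {
            2: "customer: on-premises router <-> Cloud Router",
            3: "service provider: provider edge router <-> Cloud Router (automated)",
        }

        def customer_must_configure_bgp(layer: int) -> bool:
            # Layer 3 setup is automated, which is what makes pre-activation safe.
            return layer == 2

        print(BGP_SETUP[2])
        print(customer_must_configure_bgp(3))  # False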
  8. Elements
    1. Partner Interconnect
      1. A VLAN attachment is a virtual point-to-point tunnel between the on-premises network and a single region in a VPC network
      2. To request Partner Interconnect connectivity from a service provider, create a VLAN attachment in the GCP project
      3. The VLAN attachment generates a unique pairing key that is shared with the service provider
      4. The service provider uses the pairing key, along with the connection location and capacity, to complete the VLAN attachment configuration
      5. After the service provider configures the attachment, they allocate a specific 802.1q VLAN to the connection
    2. Partner Interconnect location
      1. Partner Interconnect locations are cities where service providers connect to Google's network
      2. When a connection with a service provider is requested, a location where traffic enters Google's network has to be selected
      3. Each location supports a subset of Google Cloud Platform (GCP) regions
      4. These supported regions are where users can connect to Cloud Routers and associated VLAN attachments
    3. Cloud Router
      1. A Cloud Router is used to dynamically exchange routes between the customer’s VPC network and on-premises network via BGP
      2. Before users can create a VLAN attachment, they must configure Cloud Router in the VPC network and region that they wish to connect to
      3. Cloud Router advertises subnets in its VPC network and propagates learned routes to those subnets
      4. The Cloud Router BGP configuration depends on whether layer 2 or layer 3 connectivity is being used
      5. For layer 2, establish a BGP session between the customer’s Cloud Router and on-premises router
      6. For layer 3, the service provider establishes BGP between the customer’s Cloud Router and their edge router
  9. Telemetry
    1. Co-location facility
      1. For Dedicated Interconnect, a colocation facility is Google's point of presence for connecting on-premises network with Google's network
      2. In the colocation facility, work with the facility provider to provision routing equipment before using Dedicated Interconnect
      3. For Partner Interconnect, supported service providers will have connected to Google in at least one of these facilities
    2. Edge availability domain
      1. Each metropolitan area (metro) has at least two zones called edge availability domains
      2. These domains provide isolation during scheduled maintenance, ensuring that two domains in the same metro are not down for maintenance at the same time
      3. Edge availability domains span a single metro, not across metros
      4. To maintain availability and an SLA, build duplicate interconnects in different domains in the same metro
      5. Maintenance windows are not coordinated across metros
      6. When connecting to multiple metros for redundancy, it is important to connect to different edge availability domains in each of those metros
    3. LOA-CFA
      1. A Letter of Authorization and Connecting Facility Assignment (LOA-CFA) identifies the connection ports that Google has assigned for the connection and grants permission for a vendor in a colocation facility to connect to them
      2. An LOA-CFA is required before the colocation facility vendor can provision a Dedicated Interconnect connection
      3. When dedicated connections are ordered, Google allocates resources for interconnects and then generates an LOA-CFA document for each one
      4. The LOA-CFA lists the demarcation points that Google allocated for interconnects
      5. Submit this form to the facility vendor to provision cross connects between Google's equipment and the customer’s
    4. Metropolitan area
      1. A metropolitan area (metro) is the city where a colocation facility is located
      2. When an interconnect is created, select the colocation facility and metro where the interconnect will live
      3. The metro choice depends on the location of on-premises networks and the location of VM instances (their GCP region)
      4. Typically, pick a metro that's geographically close to the on-premises network to reduce latency
      5. For redundancy, you might choose a metro that is farther away
      6. With regard to the GCP region, each metro supports a subset of regions
      7. You can create VLAN attachments in supported regions only
    5. Pairing key
      1. Pairing keys are used only for Partner Interconnect
      2. A pairing key is a unique identifier that allows service providers to identify particular VLAN attachments without sharing sensitive network information
      3. The key is one-time use and can't be modified
      4. If a new pairing key is needed, delete VLAN attachment and then create a new one
      5. Treat the pairing key as sensitive information until the VLAN attachment is configured
      6. If discovered, it could be used to connect to the customer’s network
    6. Service provider
      1. A network service provider
      2. To use Partner Interconnect, connect to a supported service provider
      3. The service provider provides connectivity between on-premises and VPC network
  10. Locations
    1. Interconnect Locations
      1. For Dedicated Interconnect, the on-premises network must physically meet Google's network in a supported colocation facility
      2. At the colocation facility, the colocation facility provider provisions a circuit between the on-premises network and a Google edge point of presence
      3. It is more cost-effective to create interconnect attachments in the same regions as the VM instances to avoid inter-region egress costs
    2. Low Latency Locations
      1. If low latency is required between VM instances in a region and the Dedicated Interconnect colocation facility, select a low-latency (< 5 milliseconds) facility
      2. The pricing for low-latency locations is the same as for all other locations
      3. If workloads do not require low-latency connectivity, use any colocation facility location
    3. Edge Locations
      1. Network edge locations allow users to peer with Google Cloud and connect to Google Cloud services
      2. Cloud CDN locations use Google's globally distributed edge points of presence to cache HTTP(S) load balanced content close to users
      3. Caching content at the edges of Google's network provides faster delivery of content to users while reducing serving costs