-
Overview
-
Cloud Interconnect
- Provides low-latency, highly available connections that enable users to reliably transfer data between on-premises and Virtual Private Cloud (VPC) networks
- Provides RFC 1918 communication, which means internal (private) IP addresses are directly accessible from both networks
- Offers two options for extending the on-premises network
-
Dedicated Interconnect
- Provides a direct physical connection between the on-premises network and Google's network
- Enables the transfer of large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public Internet
- The network must physically meet Google's network in a colocation facility
- Users must provide their own routing equipment
- Customer provisions a cross connect between the Google network and the customer router in a common location
- This cross connect is a Dedicated Interconnect connection
- To exchange routes, a BGP session is configured over the interconnect between the Cloud Router and on-premises router (sketched after this list)
- Traffic from the on-premises network can reach the VPC network and vice versa
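To make the route-exchange step concrete, here is a minimal sketch using the google-cloud-compute Python client to add an interconnect interface and BGP peer to an existing Cloud Router. The project, region, resource names, link-local IP range, and on-premises ASN are all placeholder assumptions, not values from these notes.

```python
# Minimal sketch: configure a BGP session over a Dedicated Interconnect
# attachment, assuming the Cloud Router and VLAN attachment already exist.
# All names, the 169.254.x.x link-local range, and the ASN are placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"   # assumed project ID
REGION = "us-central1"   # assumed region

routers = compute_v1.RoutersClient()
router = routers.get(project=PROJECT, region=REGION, router="my-router")

# Attach the router to the VLAN attachment and define the BGP peering.
router.interfaces.append(
    compute_v1.RouterInterface(
        name="if-interconnect",
        linked_interconnect_attachment=(
            f"projects/{PROJECT}/regions/{REGION}"
            "/interconnectAttachments/my-attachment"
        ),
        ip_range="169.254.0.1/29",  # Cloud Router side of the BGP link
    )
)
router.bgp_peers.append(
    compute_v1.RouterBgpPeer(
        name="bgp-onprem",
        interface_name="if-interconnect",
        peer_ip_address="169.254.0.2",  # on-premises router side
        peer_asn=65001,                 # on-premises ASN (assumed)
    )
)
routers.patch(
    project=PROJECT, region=REGION, router="my-router", router_resource=router
).result()
```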
-
Partner Interconnect
- Provides connectivity between the on-premises network and VPC network through a supported service provider
- Connection is useful if the data center is in a physical location that can't reach a Dedicated Interconnect colocation facility or if data needs don't warrant an entire 10 Gbps connection
- Requires users to separately obtain services from a third-party network service provider
- Google is not responsible for any aspects of Partner Interconnect provided by the third-party service provider, nor for any issues outside of Google's network
- Customers must work with a supported service provider to establish connectivity between their VPC network and on-premises network
-
Benefits
- Traffic between on-premises network and VPC network does not traverse the public Internet
- Traffic traverses a dedicated connection, or goes through a service provider that has a dedicated connection to Google
- By bypassing the public Internet, traffic takes fewer hops, so there are fewer points of failure where traffic might get dropped or disrupted
- VPC network's internal (RFC 1918) IP addresses are directly accessible from the on-premises network
- There is no need to use a NAT device or VPN tunnel to reach internal IP addresses
- For Dedicated Interconnect, connection capacity is delivered over one or more 10 Gbps or 100 Gbps Ethernet connections
- The maximum supported capacity per interconnect is 8 x 10 Gbps connections (80 Gbps total) or 2 x 100 Gbps connections (200 Gbps total)
- For Partner Interconnect, each interconnect attachment (VLAN) supports capacities from 50 Mbps to 10 Gbps, with up to 8 x 10 Gbps attachments (VLANs) for 80 Gbps total
- Dedicated Interconnect, Partner Interconnect, Direct Peering, and Carrier Peering can all help users optimize egress traffic from VPC network and can help reduce egress costs
- Cloud VPN, by itself, does not reduce egress costs
- Cloud Interconnect can be used in conjunction with Private Google Access for on-premises hosts so that on-premises hosts can use internal IP addresses rather than external IP addresses to reach Google APIs and services
-
Considerations
-
Cloud VPN
- If the low latency and high availability of Cloud Interconnect are not required, consider using Cloud VPN to set up IPsec VPN tunnels between networks
- IPsec VPN tunnels encrypt data using industry-standard IPsec protocols as traffic traverses the public Internet
- A Cloud VPN tunnel doesn't require the overhead or costs associated with a direct, private connection
- Cloud VPN only requires a VPN device in the on-premises network; a minimal tunnel-creation sketch follows this list
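For comparison with Cloud Interconnect, here is a minimal sketch of creating a single HA VPN tunnel with the google-cloud-compute Python client. It assumes an HA VPN gateway, an external (peer) VPN gateway, and a Cloud Router already exist; every name and the shared secret are placeholders.

```python
# Minimal sketch: create one IPsec tunnel on an existing HA VPN gateway.
# Gateway, router, and secret values are placeholder assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"

tunnel = compute_v1.VpnTunnel(
    name="tunnel-1",
    vpn_gateway=f"projects/{PROJECT}/regions/{REGION}/vpnGateways/ha-gw",
    vpn_gateway_interface=0,
    peer_external_gateway=(
        f"projects/{PROJECT}/global/externalVpnGateways/onprem-gw"
    ),
    peer_external_gateway_interface=0,
    shared_secret="replace-with-a-strong-secret",  # placeholder
    ike_version=2,
    router=f"projects/{PROJECT}/regions/{REGION}/routers/vpn-router",
)
compute_v1.VpnTunnelsClient().insert(
    project=PROJECT, region=REGION, vpn_tunnel_resource=tunnel
).result()
```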
-
IP addressing and dynamic routes
- When a VPC network is connected to an on-premises network, communication is allowed between the IP address space of the on-premises network and some or all of the subnets in the VPC networks
- Which VPC subnets are available depends on the dynamic routing mode of the VPC network
- On-premises routers share the routes to the on-premises network with the Cloud Routers in the VPC network, creating custom dynamic routes in the VPC network, each with a next hop set to the appropriate interconnect attachment (VLAN)
- Unless modified by custom advertisements, Cloud Routers in the VPC network share VPC network subnet IP address ranges with the on-premises routers according to the dynamic routing mode of the VPC network
- Subnet IP ranges in VPC networks are always RFC 1918 IP addresses
- The IP address space on the on-premises network and on the VPC network must not overlap, or traffic will not be routed properly
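Because overlapping ranges break routing, it can help to check for overlap before connecting the networks. Below is a standalone sketch using Python's standard ipaddress module; the example CIDR ranges are assumptions for illustration only.

```python
# Check that on-premises and VPC subnet ranges do not overlap.
# Example CIDR ranges are assumptions for illustration only.
import ipaddress

onprem_ranges = ["10.0.0.0/16", "192.168.10.0/24"]
vpc_ranges = ["10.128.0.0/20", "10.132.0.0/20"]

for onprem in map(ipaddress.ip_network, onprem_ranges):
    for vpc in map(ipaddress.ip_network, vpc_ranges):
        if onprem.overlaps(vpc):
            print(f"Conflict: {onprem} overlaps {vpc}; traffic will misroute")
```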
-
Transitive routing support
- A hub-and-spoke linking of VPC networks can be used to connect an on-premises network as long as no more than one on-premises network is included
- Although it is technically possible to create a hub-and-spoke configuration that links two or more on-premises networks to each other by using a VPC network and VPNs or Cloud Interconnect, such a setup is a violation of the Terms of Service
-
CDN
-
Overview
- Cloud workloads that frequently update data stored in CDN locations benefit from using CDN Interconnect, because the direct link to the CDN provider reduces latency for these CDN destinations
- When populating a CDN with large data files from Google Cloud, CDN Interconnect links between Google Cloud and selected providers automatically optimize traffic and save money
- CDN Interconnect allows select CDN providers to establish direct peering links with Google's edge network at various locations
- Network traffic egressing from Google Cloud Platform through one of these links benefits from the direct connectivity to supported CDN providers and is billed automatically with reduced pricing
- Traffic from supported Google Cloud locations to CDN provider automatically takes advantage of the direct connection and reduced pricing
-
Pricing
- Google works with approved CDN partners in supported locations to whitelist provider IP addresses
- Data sent to a whitelisted CDN provider from Google Cloud is charged at the reduced price
-
Traffic between Google Cloud Platform and pre-approved CDN Interconnect locations is billed as follows:
- Ingress: Free for all regions.
- Egress: Rates only apply to data leaving Google Compute Engine or Google Cloud Storage.
- Intra-region pricing for CDN Interconnect applies only to intra-region egress traffic that is sent to CDN providers approved by Google at specific locations approved by Google for those providers
-
Provisioning
-
Dedicated Interconnect
- Start by ordering an interconnect so that Google can allocate the necessary resources and send an LOA-CFA
- After receiving the LOA-CFA, submit it to the vendor so that they can provision the cross connects between Google's network and your network
- Configure and test the interconnects with Google before using them
- After they're ready, create VLAN attachments to allocate a VLAN on the interconnect
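A hedged sketch of these two provisioning steps with the google-cloud-compute Python client: ordering the interconnect (which triggers the LOA-CFA) and, once the cross connect is provisioned and tested, creating a VLAN attachment on it. Project, location, and resource names are placeholder assumptions.

```python
# Minimal sketch: order a Dedicated Interconnect, then create a VLAN
# attachment on it once it is provisioned. Names, location, and the
# Cloud Router are placeholder assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"

# Step 1: order the interconnect; Google then issues the LOA-CFA.
interconnect = compute_v1.Interconnect(
    name="my-interconnect",
    customer_name="Example Corp",            # appears on the LOA-CFA
    interconnect_type="DEDICATED",
    link_type="LINK_TYPE_ETHERNET_10G_LR",
    requested_link_count=1,                  # 1 x 10 Gbps circuit
    location=f"projects/{PROJECT}/global/interconnectLocations/iad-zone1-1",
)
compute_v1.InterconnectsClient().insert(
    project=PROJECT, interconnect_resource=interconnect
).result()

# Step 2 (after the cross connect is provisioned and tested): allocate a
# VLAN on the interconnect with a VLAN attachment.
attachment = compute_v1.InterconnectAttachment(
    name="my-attachment",
    type_="DEDICATED",
    interconnect=f"projects/{PROJECT}/global/interconnects/my-interconnect",
    router=f"projects/{PROJECT}/regions/{REGION}/routers/my-router",
)
compute_v1.InterconnectAttachmentsClient().insert(
    project=PROJECT, region=REGION, interconnect_attachment_resource=attachment
).result()
```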
-
Partner Interconnect
- Start by connecting the on-premises network to a supported service provider
- Create a VLAN attachment for a Partner Interconnect in the GCP project
- This generates a unique pairing key that is used to request a connection from the service provider
- Provide other information such as the connection location and capacity
- After the service provider configures the attachment, activate it to start using it
- Activation allows the user to check that they are connecting to an expected service provider
- If the user doesn't need to verify the connection and is using a layer 3 connection, they can choose to pre-activate the attachment
- Once the attachment is pre-activated, it can immediately pass traffic after it has been configured by the service provider
- Consider pre-activation when using layer 3 and wanting to activate the connection without additional approval
- Layer 3 providers automatically configure BGP sessions with Cloud Routers so that BGP starts immediately
- For layer 2 connections, there's no benefit for pre-activating VLAN attachments
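The same flow as a sketch with the google-cloud-compute Python client: create the PARTNER attachment, read its pairing key to hand to the service provider, then activate it after the provider configures it. All names and the region are placeholder assumptions; setting admin_enabled at creation time corresponds to pre-activation.

```python
# Minimal sketch: create a Partner Interconnect VLAN attachment, read the
# pairing key for the service provider, then activate the attachment once
# the provider has configured it. Names and region are assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"
attachments = compute_v1.InterconnectAttachmentsClient()

attachments.insert(
    project=PROJECT,
    region=REGION,
    interconnect_attachment_resource=compute_v1.InterconnectAttachment(
        name="partner-attachment",
        type_="PARTNER",
        edge_availability_domain="AVAILABILITY_DOMAIN_1",
        router=f"projects/{PROJECT}/regions/{REGION}/routers/my-router",
        admin_enabled=False,  # set True here to pre-activate (layer 3)
    ),
).result()

# Share this key with the service provider; treat it as sensitive.
attachment = attachments.get(
    project=PROJECT, region=REGION, interconnect_attachment="partner-attachment"
)
print("pairing key:", attachment.pairing_key)

# After the provider configures the attachment, activate it.
attachments.patch(
    project=PROJECT,
    region=REGION,
    interconnect_attachment="partner-attachment",
    interconnect_attachment_resource=compute_v1.InterconnectAttachment(
        admin_enabled=True
    ),
).result()
```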
-
Redundancy
-
Dedicated Interconnect
- To achieve a specific level of reliability, Google has two prescriptive configurations, one for 99.99% availability and another for 99.9% availability
- Google recommends 99.99% configuration for production-level applications with low tolerance for downtime
- If applications are not mission-critical and can tolerate some downtime, use the 99.9% configuration
- With a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to the on-premises network
-
Partner Interconnect
- SLA doesn't include the connectivity between the user’s network and the service provider's network
- If the service provider offers an SLA, users can get an end-to-end SLA, based on the Google-defined topologies
- For the highest level availability, Google recommends the 99.99% availability configuration
- Clients in the on-premises network can reach the IP addresses of VM instances in the selected region through at least one of the redundant paths and vice versa
-
Setup
- 99.99% availability requires at least four VLAN attachments across two metros (one in each edge availability domain)
- Use multiple service providers to build a highly available topology
- Build redundant connections for each service provider in each metro
- Provision two primary connections by using a local service provider that's close to the data center
- For the backup connection, use a long-haul service provider to build two connections in a different metro
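As an illustration of the 99.99% topology's attachment side, here is a sketch creating the four Partner VLAN attachments (one per edge availability domain in each of two metros) with the google-cloud-compute Python client. The metro itself is selected when requesting the provider connection, so only the GCP side is shown; the names, region, and router split are assumptions.

```python
# Minimal sketch: create the four Partner VLAN attachments used by the
# 99.99% topology. The metro is chosen with the service provider; only
# the attachment side is shown here. All names are assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"
attachments = compute_v1.InterconnectAttachmentsClient()

plan = [
    ("attach-metro1-ad1", "AVAILABILITY_DOMAIN_1", "router-a"),
    ("attach-metro1-ad2", "AVAILABILITY_DOMAIN_2", "router-a"),
    ("attach-metro2-ad1", "AVAILABILITY_DOMAIN_1", "router-b"),
    ("attach-metro2-ad2", "AVAILABILITY_DOMAIN_2", "router-b"),
]
for name, domain, router in plan:
    attachments.insert(
        project=PROJECT,
        region=REGION,
        interconnect_attachment_resource=compute_v1.InterconnectAttachment(
            name=name,
            type_="PARTNER",
            edge_availability_domain=domain,
            router=f"projects/{PROJECT}/regions/{REGION}/routers/{router}",
        ),
    ).result()
```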
-
Partner
- For layer 2 connections, a BGP session needs to be established between Cloud Routers and on-premises routers for each VLAN attachment
- The BGP configuration information is provided by the VLAN attachment after the service provider has configured it (see the sketch after this list)
- For layer 2 connections, traffic passes through the service provider's network to reach the VPC or on-premises network
- BGP is configured between the on-premises router and a Cloud Router in the VPC network
- For layer 3 connections, the service provider establishes a BGP session between the customer’s Cloud Routers and their edge routers for each VLAN attachment
- For layer 3 connections, the customer does not need to configure BGP on the on-premises router
- Google and the service provider automatically set the correct configurations
- Because the BGP configuration for layer 3 connections is fully automated, users can pre-activate the connections (VLAN attachments)
- When pre-activation is enabled, the VLAN attachments are active as soon as the service provider configures them
- For layer 3 connections, traffic is passed to the service provider's network, and then their network routes the traffic to the correct destination, either to the on-premises network or to the VPC network
- Connectivity between the on-premises and service provider networks depends on the service provider
- The service provider might request the user to establish a BGP session with them or configure a static default route to their network
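For the layer 2 case, below is a small sketch that reads the BGP addressing a configured VLAN attachment reports, which is the information needed to set up the session with the on-premises router. Names and region are assumptions.

```python
# Minimal sketch for a layer 2 connection: after the provider configures
# the attachment, read the addressing the VLAN attachment reports so the
# BGP session between the Cloud Router and the on-premises router can be
# set up. Names and region are assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"

attachment = compute_v1.InterconnectAttachmentsClient().get(
    project=PROJECT, region=REGION, interconnect_attachment="partner-attachment"
)
# Cloud Router side and on-premises side of the BGP link.
print("Cloud Router IP:", attachment.cloud_router_ip_address)
print("On-premises IP: ", attachment.customer_router_ip_address)
```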
-
Elements
-
Partner Interconnect
- A VLAN attachment is a virtual point-to-point tunnel between the on-premises network and a single region in a VPC network
- To request Partner Interconnect connectivity from a service provider, create a VLAN attachment in the GCP project
- The VLAN attachment generates a unique pairing key that is shared with the service provider
- The service provider uses the pairing key, along with the connection location and capacity, to complete the VLAN attachment configuration
- After the service provider configures the attachment, they allocate a specific 802.1q VLAN to the connection
-
Partner Interconnect location
- Partner Interconnect locations are cities where service providers connect to Google's network
- When a connection with a service provider is requested, a location where traffic enters Google's network has to be selected
- Each location supports a subset of Google Cloud Platform (GCP) regions
- These supported regions are where users can connect to Cloud Routers and associated VLAN attachments
-
Cloud Router
- A Cloud Router is used to dynamically exchange routes between the customer’s VPC network and on-premises network via BGP
- Before users can create a VLAN attachment, they must configure a Cloud Router in the VPC network and region that they wish to connect to (see the sketch after this list)
- Cloud Router advertises subnets in its VPC network and propagates learned routes to those subnets
- The Cloud Router BGP configuration depends on whether layer 2 or layer 3 connectivity is being used
- For layer 2, establish a BGP session between the customer’s Cloud Router and on-premises router
- For layer 3, the service provider establishes BGP between the customer’s Cloud Router and their edge router
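A minimal sketch of that prerequisite step with the google-cloud-compute Python client: creating the Cloud Router a VLAN attachment will use. Partner Interconnect requires ASN 16550 on the Cloud Router; the network and resource names are placeholder assumptions.

```python
# Minimal sketch: create the Cloud Router that a VLAN attachment will use.
# Partner Interconnect requires ASN 16550 on the Cloud Router; network and
# names are placeholder assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-central1"

router = compute_v1.Router(
    name="my-router",
    network=f"projects/{PROJECT}/global/networks/my-vpc",
    bgp=compute_v1.RouterBgp(asn=16550),  # required ASN for Partner Interconnect
)
compute_v1.RoutersClient().insert(
    project=PROJECT, region=REGION, router_resource=router
).result()
```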
-
Terminology
-
Colocation facility
- For Dedicated Interconnect, a colocation facility is Google's point of presence for connecting the on-premises network with Google's network
- In the colocation facility, work with the facility provider to provision routing equipment before using Dedicated Interconnect
- For Partner Interconnect, supported service providers will have connected to Google in at least one of these facilities
-
Edge availability domain
- Each metropolitan area (metro) has at least two zones called edge availability domains
- These domains provide isolation during scheduled maintenance, ensuring that two domains in the same metro are never down for maintenance at the same time
- Edge availability domains span a single metro, not across metros
- To maintain availability and an SLA, build duplicate interconnects in different domains in the same metro
- Maintenance windows are not coordinated across metros
- When connecting to multiple metros for redundancy, it is important to connect to different edge availability domains in each of those metros
-
LOA-CFA
- A Letter of Authorization and Connecting Facility Assignment (LOA-CFA) identifies the connection ports that Google has assigned for the connection and grants permission for a vendor in a colocation facility to connect to them
- An LOA-CFA document is required to provision Dedicated Interconnect connections in a colocation facility
- When dedicated connections are ordered, Google allocates resources for interconnects and then generates an LOA-CFA document for each one
- The LOA-CFA lists the demarcation points that Google allocated for interconnects
- Submit this form to the facility vendor to provision cross connects between Google's equipment and the customer’s
-
Metropolitan area
- A metropolitan area (metro) is the city where a colocation facility is located
- When an interconnect is created, select the colocation facility and metro where the interconnect will live
- The metro choice depends on the location of on-premises networks and the location of VM instances (their GCP region)
- Typically, pick a metro that's geographically close to the on-premises network to reduce latency
- For redundancy, you might choose a metro that is farther away
- With regard to the GCP region, each metro supports a subset of regions
- You can create VLAN attachments in supported regions only
-
Pairing key
- Pairing keys are used only for Partner Interconnect
- A pairing key is a unique identifier that allows service providers to identify a particular VLAN attachment without sharing sensitive network information
- The key is one-time use and can't be modified
- If a new pairing key is needed, delete the VLAN attachment and then create a new one
- Treat the pairing key as sensitive information until the VLAN attachment is configured
- If discovered, it could be used to connect to the customer’s network
-
Service provider
- A network service provider
- To use Partner Interconnect, connect to a supported service provider
- The service provider provides connectivity between the on-premises and VPC networks
-
Locations
-
Interconnect Locations
- For Dedicated Interconnect, the on-premises network must physically meet Google's network in a supported colocation facility
- At the colocation facility, the colocation facility provider provisions a circuit between the on-premises network and a Google edge point of presence
- It is more cost-effective to create interconnect attachments in the same regions as the VM instances to avoid inter-region egress costs
-
Low Latency Locations
- If low latency is required between VM instances in a region and the Dedicated Interconnect colocation facility, select a low-latency (< 5 milliseconds) facility
- The pricing for low-latency locations is the same as for all other locations
- If workloads do not require low-latency connectivity, use any colocation facility location
-
Edge Locations
- Network edge locations allow users to peer with Google Cloud and connect to Google Cloud services
- Cloud CDN locations use Google's globally distributed edge points of presence to cache HTTP(S) load balanced content close to users
- Caching content at the edges of Google's network provides faster delivery of content to users while reducing serving costs