Connecting hybrid and multicloud workloads - Networking Architecture

October 11, 2023
Ammett Williams

Developer Relations Engineer

Eric Yu

Networking Specialist Customer Engineer

Enterprises with existing data centers and other cloud environments may want to analyze data with Google Cloud services. To do so, they need to connect to Google Cloud securely. At a high level, Google Cloud offers several methods to meet the network connectivity requirements of this design, including Cloud VPN, Cloud Interconnect, and Network Connectivity Center.

Additionally, you can secure and control data access using Cloud Router, firewall policies, network virtual appliances, and VPC Service Controls.
In this blog we will review two architectures: the first using Cloud Interconnect, and the second based on Network Connectivity Center. For more design options, see Networking for hybrid and multi-cloud workloads: Reference architectures.

Connect cloud and on-premises networks with Cloud Interconnect

Cloud Interconnect provides highly available, low-latency, private IP connectivity between your external environments and Google Cloud. Cloud Interconnect is available in three options:

  • Partner Interconnect - Available at speeds starting at 50 Mbps, Partner Interconnect is facilitated through a supported service provider.
  • Dedicated Interconnect - Starting at speeds of 10 Gbps, Dedicated Interconnect is available in colocation facilities where both the customer and Google have a presence.
  • Cross-Cloud Interconnect - Available at 10 Gbps and 100 Gbps speeds, Cross-Cloud Interconnect lets you connect directly to other clouds.
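To see where Dedicated Interconnect capacity is available, you can query the list of supported colocation facilities with the gcloud CLI. A minimal sketch; the facility name used with `describe` is illustrative, so substitute one returned by the `list` command.

```shell
# List colocation facilities where Cloud Interconnect is available,
# including the metro each facility serves.
gcloud compute interconnects locations list

# Inspect a single facility in detail (address, availability zone, city).
# "iad-zone1-1" is an example facility name; substitute one from the list above.
gcloud compute interconnects locations describe iad-zone1-1
```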

In the following example we show a design that uses Dedicated Interconnect to connect to on-prem, and Cross-Cloud Interconnect to connect to another cloud.


The design includes the following network elements.

Cloud service provider configuration

  • Router - The on-prem setup uses two pairs of routers located in two metros, for a total of four routers. From a Google Cloud perspective, the cloud service provider and the on-prem data centers use the same OSI Layer 2 multi-access Gigabit Ethernet technology. One pair of routers exists in the New York metro and the other pair in the Ashburn metro. Each router connects to a different Google peering edge for redundancy.

External BGP (eBGP)
BGP, or Border Gateway Protocol, is the routing protocol used for dynamic route exchange. The customer advertises the routes it wants to make visible on each side of the connection.

Cloud Router
In each Google Cloud region in the diagram, a single Cloud Router is configured with VLAN attachments that establish a pair of Cloud Interconnect data-link connections to the on-prem network and to the cloud service provider network.
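As a sketch, a Cloud Router for this design could be created as follows; the project, network, region, and ASN values are placeholders.

```shell
# Create a Cloud Router in the region that will terminate the VLAN attachments.
# Use a private ASN (64512-65534), or ASN 16550 when pairing with Partner Interconnect.
gcloud compute routers create cr-us-east4 \
    --project=my-project \
    --network=my-vpc \
    --region=us-east4 \
    --asn=65001
```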

Cross-Cloud Interconnect configuration

  • VLAN attachments - Cloud Interconnect is delivered as a 10 Gbps or 100 Gbps Ethernet trunk. Users configure VLAN attachments, each with a VLAN ID, that are associated with a Cloud Router.
  • Cloud Router can be configured to announce a limited subset of prefixes using custom route advertisements, or it can be configured to announce all known subnet routes to its BGP peer.
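The steps above can be sketched with gcloud: create a VLAN attachment on an existing Dedicated Interconnect, attach a router interface and BGP peer, then restrict the peer to custom route advertisements. All resource names, the VLAN ID, and the IP ranges are illustrative.

```shell
# Create a VLAN attachment (VLAN ID 100) on an existing Dedicated Interconnect,
# associated with the Cloud Router in the same region.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect \
    --router=cr-us-east4 \
    --region=us-east4 \
    --vlan=100

# Add a Cloud Router interface bound to the attachment.
gcloud compute routers add-interface cr-us-east4 \
    --interface-name=if-my-attachment \
    --interconnect-attachment=my-attachment \
    --region=us-east4

# Add a BGP peer for the on-prem router on that interface.
gcloud compute routers add-bgp-peer cr-us-east4 \
    --peer-name=on-prem-peer \
    --interface=if-my-attachment \
    --peer-ip-address=169.254.10.2 \
    --peer-asn=65002 \
    --region=us-east4

# Announce only a limited set of prefixes to this peer
# instead of all known subnet routes.
gcloud compute routers update-bgp-peer cr-us-east4 \
    --peer-name=on-prem-peer \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=10.10.0.0/16 \
    --region=us-east4
```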

Once configurations are complete, your external environments can communicate with Google Cloud.

Network Connectivity Center

Network Connectivity Center lets you connect your on-premises, Google Cloud, and other cloud enterprise networks through a single, centralized logical hub on Google Cloud, carrying traffic between multiple on-prem sites and other clouds over the Google network backbone.

The design below shows one other cloud and three on-prem sites using different connection options.


The design elements are as follows.

Connectivity options

  • Cloud VPN - The design shows the use of Cloud VPN to connect to on-prem.
  • Cloud Interconnect - The design shows the use of Dedicated and Partner Interconnect to provide stable connections to on-prem. Cross-Cloud Interconnect is also used to provide a stable connection to another cloud provider.

Network Connectivity Center
Network Connectivity Center in this example interacts with the other sites. Let's explore how this works.

  • Spokes - Spokes are connections to hybrid and multicloud environments, established through Cloud Interconnect or Cloud VPN. On its own, each connection allows communication between that site and Google Cloud, but not with other sites connected in a similar manner. To allow communication between different sites, the connections are added as spokes to Network Connectivity Center.
  • Network Connectivity Center - The hub allows communication between the spokes connected to it. In the diagram above, the different on-prem locations want to exchange data with one another. To do this, they have to be added as spokes to Network Connectivity Center and configured to allow data exchange.
  • Data exchange - To exchange data between Network Connectivity Center spokes, the site-to-site data transfer option has to be turned on.
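The hub-and-spoke setup above can be sketched with gcloud. The hub, spoke, tunnel, and attachment names and the region are placeholders; the --site-to-site-data-transfer flag enables data exchange between sites through the hub.

```shell
# Create the Network Connectivity Center hub.
gcloud network-connectivity hubs create my-hub \
    --description="Hub for hybrid and multicloud sites"

# Attach an on-prem site connected over Cloud VPN as a spoke,
# with site-to-site data transfer enabled.
gcloud network-connectivity spokes linked-vpn-tunnels create vpn-spoke \
    --hub=my-hub \
    --region=us-east4 \
    --vpn-tunnels=tunnel-1,tunnel-2 \
    --site-to-site-data-transfer

# Attach a site connected over Cloud Interconnect as a spoke,
# referencing its VLAN attachment.
gcloud network-connectivity spokes linked-interconnect-attachments create ic-spoke \
    --hub=my-hub \
    --region=us-east4 \
    --interconnect-attachments=my-attachment \
    --site-to-site-data-transfer
```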

Overall this design gives you access to Google's high-speed network backbone and reduces infrastructure management overhead. Learn more in the documentation: Work with hubs and spokes and Site-to-site data transfer overview.

More on architecture

The previous blogs in the Networking Architecture series, Six building blocks for cloud networking, Two networking patterns for secure intra-cloud access, and Internet-facing application delivery, are worth exploring. I also recommend the following documents and videos:

Want to ask a question, find out more, or share a thought? Please connect with me on LinkedIn.
