Different applications and workloads require different network connectivity solutions. Google supports multiple ways to connect your infrastructure to GCP.
Hybrid Connectivity
|  | Dedicated connection (direct to Google) | Shared connection (through a partner) |
| --- | --- | --- |
| Layer 3 (uses public IP) | Direct Peering: 10 Gbps / link | Carrier Peering: speed depends on partner |
| Layer 2 (uses VLAN) | Dedicated Interconnect: 10 – 100 Gbps / link | Partner Interconnect: 50 Mbps – 10 Gbps / connection |

Cloud VPN (1.5 – 3 Gbps / tunnel) is also a Layer 3 option; it runs over the public internet rather than over a dedicated or partner connection.
Dedicated connections provide a direct connection to Google’s network, while shared connections reach Google’s network through a partner.
Layer 2 connections use a VLAN that pipes directly into your GCP environment, providing connectivity to internal IP addresses in the RFC 1918 address space. Layer 3 connections provide access to Google Workspace services, YouTube, and Google Cloud APIs using public IP addresses.
Google also offers its own virtual private network service called Cloud VPN. This service uses the public internet, but traffic is encrypted and provides access to internal IP addresses. Cloud VPN is a useful addition to Direct Peering and Carrier Peering.
Cloud VPN
Cloud VPN securely connects your on-premises network to your Google Cloud VPC network through an IPsec VPN tunnel. Traffic traveling between the two networks is encrypted by one VPN gateway, then decrypted by the other VPN gateway. This protects your data as it travels over the public internet, which is why Cloud VPN is useful for low-volume data connections.
Cloud VPN supports:
- Site-to-site VPN
- Static routes (supported by Classic VPN only)
- Dynamic routes (using Cloud Router)
- IKEv1 and IKEv2 ciphers
A VPN tunnel connects your VPN gateways and serves as the virtual medium through which encrypted traffic is passed. To create a connection between two VPN gateways, you must establish two VPN tunnels. Each tunnel defines the connection from the perspective of its gateway, and traffic can only pass when the pair of tunnels is established.
Remember that when using Cloud VPN, the maximum transmission unit (MTU) of your on-premises VPN gateway cannot be greater than 1460 bytes, because of the encryption and encapsulation of packets.
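For illustration, here is a minimal Terraform sketch (Terraform is covered later in this post) of a Classic VPN gateway with one tunnel and a static route. The network name, region, IP addresses, and shared secret are hypothetical placeholders, and the on-premises gateway needs a matching configuration on its side.

# Minimal Classic VPN sketch (names, region, and addresses are placeholders).
data "google_compute_network" "vpc" {
  name = "my-network"                    # existing VPC network
}

# Regional external IP address for the Classic VPN gateway.
resource "google_compute_address" "vpn_ip" {
  name   = "vpn-static-ip"
  region = "us-central1"
}

resource "google_compute_vpn_gateway" "gateway" {
  name    = "classic-vpn-gateway"
  region  = "us-central1"
  network = data.google_compute_network.vpc.id
}

# Classic VPN needs forwarding rules for ESP, UDP 500, and UDP 4500.
resource "google_compute_forwarding_rule" "esp" {
  name        = "vpn-rule-esp"
  region      = "us-central1"
  ip_protocol = "ESP"
  ip_address  = google_compute_address.vpn_ip.address
  target      = google_compute_vpn_gateway.gateway.id
}

resource "google_compute_forwarding_rule" "udp500" {
  name        = "vpn-rule-udp500"
  region      = "us-central1"
  ip_protocol = "UDP"
  port_range  = "500"
  ip_address  = google_compute_address.vpn_ip.address
  target      = google_compute_vpn_gateway.gateway.id
}

resource "google_compute_forwarding_rule" "udp4500" {
  name        = "vpn-rule-udp4500"
  region      = "us-central1"
  ip_protocol = "UDP"
  port_range  = "4500"
  ip_address  = google_compute_address.vpn_ip.address
  target      = google_compute_vpn_gateway.gateway.id
}

# One tunnel, defined from the Google Cloud side; the on-premises gateway
# defines the matching tunnel from its own perspective.
resource "google_compute_vpn_tunnel" "tunnel1" {
  name                    = "tunnel-to-on-prem"
  region                  = "us-central1"
  peer_ip                 = "203.0.113.10"          # example on-premises gateway IP
  shared_secret           = "replace-with-a-secret" # store securely in practice
  target_vpn_gateway      = google_compute_vpn_gateway.gateway.id
  ike_version             = 2
  local_traffic_selector  = ["0.0.0.0/0"]
  remote_traffic_selector = ["0.0.0.0/0"]

  depends_on = [
    google_compute_forwarding_rule.esp,
    google_compute_forwarding_rule.udp500,
    google_compute_forwarding_rule.udp4500,
  ]
}

# Static route sending traffic for the on-premises range through the tunnel.
resource "google_compute_route" "to_on_prem" {
  name                = "route-to-on-prem"
  network             = data.google_compute_network.vpc.name
  dest_range          = "192.168.0.0/24"            # example on-premises range
  priority            = 1000
  next_hop_vpn_tunnel = google_compute_vpn_tunnel.tunnel1.id
}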
High Availability VPN
In addition to Classic VPN, Google Cloud also offers a second type of Cloud VPN gateway, HA VPN. HA VPN is a high availability Cloud VPN solution that lets you securely connect your on-premises network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection in a single region.
Each of the HA VPN gateway interfaces supports multiple tunnels. You can also create multiple HA VPN gateways. VPN tunnels connected to HA VPN gateways must use dynamic (BGP) routing. Depending on the way that you configure route priorities for HA VPN tunnels, you can create an active/active or active/passive routing configuration.
HA VPN supports site-to-site VPN in one of the following recommended topologies or configuration scenarios:
- An HA VPN gateway to peer VPN devices
- An HA VPN gateway to an Amazon Web Services (AWS) virtual private gateway
- Two HA VPN gateways connected to each other
Dynamic Routing
In order to use dynamic routes, you need to configure Cloud Router. Cloud Router can manage routes for a Cloud VPN tunnel using the Border Gateway Protocol (BGP). This routing method allows routes to be updated and exchanged without changing the tunnel configuration. To automatically propagate network configuration changes, the VPN tunnel uses Cloud Router to establish a BGP session between the VPC and the on-premises VPN gateway, which must support BGP.
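To make this concrete, here is a hedged Terraform sketch of an HA VPN gateway with a Cloud Router BGP session. The names, ASNs, link-local addresses, and peer IP are placeholders, and only one of the two tunnels/interfaces is shown; a real high-availability setup configures the second interface the same way.

# Look up an existing VPC network (name is a placeholder).
data "google_compute_network" "vpc" {
  name = "my-network"
}

resource "google_compute_ha_vpn_gateway" "ha_gateway" {
  name    = "ha-vpn-gateway"
  region  = "us-central1"
  network = data.google_compute_network.vpc.id
}

# Describes the on-premises (peer) VPN gateway.
resource "google_compute_external_vpn_gateway" "peer" {
  name            = "on-prem-gateway"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  interface {
    id         = 0
    ip_address = "203.0.113.20"         # example peer public IP
  }
}

# Cloud Router manages BGP sessions for the tunnels.
resource "google_compute_router" "router" {
  name    = "vpn-router"
  region  = "us-central1"
  network = data.google_compute_network.vpc.id
  bgp {
    asn = 64514                         # private ASN for the Google Cloud side
  }
}

# First of the two tunnels (interface 0); a second tunnel on interface 1
# is configured the same way for high availability.
resource "google_compute_vpn_tunnel" "tunnel0" {
  name                            = "ha-tunnel-0"
  region                          = "us-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_gateway.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.peer.id
  peer_external_gateway_interface = 0
  shared_secret                   = "replace-with-a-secret"
  router                          = google_compute_router.router.id
}

# BGP interface and peer for the tunnel, using link-local addresses.
resource "google_compute_router_interface" "if0" {
  name       = "if-tunnel-0"
  router     = google_compute_router.router.name
  region     = "us-central1"
  ip_range   = "169.254.0.1/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel0.name
}

resource "google_compute_router_peer" "peer0" {
  name                      = "bgp-peer-0"
  router                    = google_compute_router.router.name
  region                    = "us-central1"
  peer_ip_address           = "169.254.0.2"
  peer_asn                  = 64515     # on-premises ASN (placeholder)
  advertised_route_priority = 100
  interface                 = google_compute_router_interface.if0.name
}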
Cloud Interconnect
Dedicated Interconnect provides direct physical connections between your on-premises network and Google’s network. This enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public Internet. In order to use Dedicated Interconnect, you need to provision a cross connect between the Google network and your own router in a common co-location facility.
Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider. This is useful if your data center is in a physical location that can’t reach a dedicated interconnect co-location facility or if your data needs don’t warrant a dedicated interconnect. These service providers have existing physical connections to Google’s network that they make available for their customers to use. After you establish connectivity with a service provider, you can request a Partner Interconnect connection from your service provider.
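As a rough illustration, a Partner Interconnect VLAN attachment could be declared in Terraform along these lines; the names, region, and network are placeholders, and the attachment still has to be activated with its pairing key through your service provider.

# Look up an existing VPC network (name is a placeholder).
data "google_compute_network" "vpc" {
  name = "my-network"
}

# Cloud Router for the VLAN attachment (Interconnect uses BGP).
resource "google_compute_router" "interconnect_router" {
  name    = "interconnect-router"
  region  = "us-central1"
  network = data.google_compute_network.vpc.id
  bgp {
    asn = 16550                          # Partner Interconnect uses ASN 16550
  }
}

# Partner Interconnect VLAN attachment; its pairing key is handed to the
# service provider to complete the connection.
resource "google_compute_interconnect_attachment" "partner" {
  name                     = "partner-attachment"
  region                   = "us-central1"
  type                     = "PARTNER"
  router                   = google_compute_router.interconnect_router.id
  edge_availability_domain = "AVAILABILITY_DOMAIN_1"
  admin_enabled            = true
}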
Cloud Peering
Google allows you to establish a Direct Peering connection between your business network and Google’s. With this connection, you’ll be able to exchange internet traffic between your network and Google’s at one of Google’s broad-reaching edge network locations.
Direct peering with Google is done by exchanging BGP routes between Google and the peering entity. After a direct peering connection is in place, you can use it to reach all of Google’s services. Unlike Dedicated Interconnect, Direct Peering does not have a service level agreement.
If you require access to Google public infrastructure and cannot satisfy Google’s peering requirements, you can connect via a Carrier Peering partner.
Network Connectivity Center
Network Connectivity Center is a network connectivity product that employs a hub-and-spoke architecture to manage hybrid connectivity.
Network Connectivity Center lets you:
- Connect an external network to Google Cloud by using a third-party SD-WAN router or another virtual appliance. This approach is known as site-to-cloud connectivity.
- Use Google’s network as a wide area network (WAN) to connect sites that are outside of Google Cloud. You can establish full mesh connectivity between your external sites by using resources such as Cloud VPN, Cloud Interconnect, and Router appliance. This approach is known as site-to-site data transfer.
- Monitor traffic within your Google Cloud project by using a third-party firewall appliance.
- Use a third-party virtual router to connect your VPC networks to one another, if you want an alternative to VPC Network Peering and other hybrid connectivity products.
| Term | Description |
| --- | --- |
| Hub | A global management resource to which you attach spokes. There is one hub per project. The function of the hub varies depending on whether or not your spokes are using the site-to-site data transfer feature. |
| Spoke | Represents one or more network resources connected to a hub, and can use any of several Google Cloud resources as its backing resource. Each spoke has a data transfer option. |
| Router appliance | A third-party network virtual appliance installed on a VM in Google Cloud and used as the backing resource for a spoke. You establish BGP peering between the VM and a Cloud Router to enable the exchange of routes between Cloud Router and the router appliance. |
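For example, a hub with a spoke backed by an existing HA VPN tunnel might look roughly like this in Terraform; the names and the tunnel URI are placeholders, and the exact arguments may vary with the provider version.

# Network Connectivity Center hub (one per project).
resource "google_network_connectivity_hub" "hub" {
  name        = "my-hub"
  description = "Hub for hybrid connectivity"
}

# Spoke backed by an existing HA VPN tunnel in us-central1.
resource "google_network_connectivity_spoke" "vpn_spoke" {
  name     = "vpn-spoke"
  location = "us-central1"
  hub      = google_network_connectivity_hub.hub.id

  linked_vpn_tunnels {
    # self_link of an existing HA VPN tunnel (placeholder project/region/name)
    uris                       = ["https://www.googleapis.com/compute/v1/projects/xxx/regions/us-central1/vpnTunnels/ha-tunnel-0"]
    site_to_site_data_transfer = true   # enable site-to-site data transfer
  }
}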
Networking Pricing and Service Tiers
Because each GCP service has its own pricing model, it is recommended that you use the GCP pricing calculator to estimate the cost of a collection of resources. The pricing calculator is a web-based tool in which you specify the expected consumption of certain services and resources, and it then provides you with an estimated cost.
Network Service Tiers enable you to optimize your cloud network either for performance, by choosing the Premium Tier, or for cost, with the Standard Tier.
| Tier | Description |
| --- | --- |
| Premium Tier | Premium Tier delivers traffic over Google’s well-provisioned, low-latency, highly reliable global network. This network consists of an extensive global private fiber network with over 100 points of presence (PoPs) across the globe. |
| Standard Tier | Standard Tier is a new, lower-cost offering. The network quality of this tier is comparable to the quality of other public cloud providers and regional network services, such as regional load balancing with one VIP per region, but lower than the quality of Premium Tier. |
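As a small illustration, the tier can be chosen per resource; for example, a hedged Terraform sketch of an external address served from the Standard Tier (name and region are placeholders):

# External IP address served from the Standard Tier instead of Premium.
resource "google_compute_address" "standard_tier_ip" {
  name         = "standard-tier-ip"
  region       = "us-central1"
  network_tier = "STANDARD"             # defaults to PREMIUM if omitted
}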
Common Network Designs
Single region, multiple zones. If your application needs increased availability, you can place two virtual machines in multiple zones, but within the same subnetwork. Using a single subnetwork allows you to create a firewall rule against it. Therefore, by allocating VMs on a single subnet to separate zones, you get improved availability without additional security complexity.
Multiple regions, multiple zones. Putting resources in different regions provides an even higher degree of failure independence. When using a global load balancer, like the HTTP load balancer, you can route traffic to the region that is closest to the user. This can result in better latency for users and lower network traffic costs for your project.
3-tier web services. In this design there is an external-facing web tier using an external HTTP(S) load balancer, which provides a single global IP address for users worldwide. The backends of the load balancer are spread across different regions, providing a high degree of failure independence and improved network latency for global users. These backends then access an internal load balancer in each region as the application tier. Finally, the application tier communicates with the database tier. The benefit of this 3-tier approach is that neither the database tier nor the application tier is exposed externally, which simplifies security and network pricing.
Internal load balancer with Cloud VPN or Interconnect. When you connect across Cloud VPN, traffic from your on-premises network reaches the internal load balancer in a VPC network through the Cloud VPN tunnel. This traffic is then internally load-balanced to a healthy backend instance belonging to the backend service that the forwarding rule points to. This provides access to an internal load balancer for specific users without exposing the service externally.
Use a security appliance as a next-gen firewall. A host project can contain a next-generation firewall that provides application visibility, content filtering, advanced malware monitoring, intrusion protection, and user awareness. In other words, this is a security appliance that performs Layer 7 inspection before forwarding traffic to other service projects. This network design is useful for both client-to-server traffic (known as North-South traffic) and traffic between GCP networks (known as East-West traffic). GCP partners with several security-centric firms for next-generation firewalls.
Private Instances
You may configure some of your VM instances to have only internal (private) IP addresses and no external (public) IP address, so that they can only be accessed by other instances in the same network. Although this helps keep your VM instances isolated, it also prevents them from accessing the public IP addresses of Google APIs and services, as well as other Internet services, for updates and patches.
To work around these constraints, you can enable Private Google Access on a subnet-by-subnet basis. Private Google Access has no effect on instances that have external IP addresses.
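For instance, a hedged Terraform sketch of enabling Private Google Access on a subnet (network, name, region, and range are placeholders):

# Look up an existing VPC network (name is a placeholder).
data "google_compute_network" "vpc" {
  name = "my-network"
}

# Subnet with Private Google Access enabled, so VMs with only internal
# IP addresses can still reach Google APIs and services.
resource "google_compute_subnetwork" "private_subnet" {
  name                     = "private-subnet"
  region                   = "us-central1"
  network                  = data.google_compute_network.vpc.id
  ip_cidr_range            = "10.10.0.0/24"
  private_ip_google_access = true
}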
Private Google Access for on-premises hosts allows your on-premises hosts to reach Google APIs and services over a Cloud VPN or Cloud Interconnect connection from your data center to GCP. Your on-premises hosts don’t need external IP addresses; instead, they use internal IP addresses. This service is very similar to Private Google Access, but for your data center.
Private Services Access is a private connection between your VPC network and a service producer’s VPC network. This connection is implemented as a VPC peering connection, and the service producer’s network is created exclusively for you and is not shared with other customers.
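A hedged Terraform sketch of setting up private services access might reserve an internal range and then create the peering connection (network name and range size are placeholders):

# Look up an existing VPC network (name is a placeholder).
data "google_compute_network" "vpc" {
  name = "my-network"
}

# Reserved internal range that the service producer's network will use.
resource "google_compute_global_address" "private_service_range" {
  name          = "private-service-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = data.google_compute_network.vpc.id
}

# VPC peering connection to the service producer (for example Cloud SQL).
resource "google_service_networking_connection" "private_service_access" {
  network                 = data.google_compute_network.vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_service_range.name]
}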
Cloud NAT
Cloud NAT is Google’s managed network address translation service. It lets you provision your VM instances without public IP addresses while also allowing them to access the Internet in a controlled and efficient manner. This means that your private instances can access the Internet for updates, patching, configuration management and more.
However, Cloud NAT does not implement inbound NAT. In other words, hosts outside of your VPC network cannot directly access any of the private instances behind the Cloud NAT gateway. This helps you keep your VPC networks isolated and secure.
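A minimal hedged Terraform sketch of a Cloud NAT gateway attached to a Cloud Router (network name and region are placeholders):

# Look up an existing VPC network (name is a placeholder).
data "google_compute_network" "vpc" {
  name = "my-network"
}

# Cloud NAT is configured on a Cloud Router.
resource "google_compute_router" "nat_router" {
  name    = "nat-router"
  region  = "us-central1"
  network = data.google_compute_network.vpc.id
}

# NAT gateway that lets all subnets reach the Internet outbound only.
resource "google_compute_router_nat" "nat" {
  name                               = "nat-gateway"
  router                             = google_compute_router.nat_router.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}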
Terraform
Terraform is an open source tool that lets you provision Google Cloud resources, such as virtual machines, containers, storage, and networking, with declarative configuration files. You simply specify all the resources needed for your application in a declarative format and deploy your configuration.
This deployment can be repeated over and over with consistent results, and you can delete an entire deployment with one command or click. The benefit of a declarative approach is that it allows you to specify what the configuration should be and lets the system figure out the steps to take.
Unlike running commands one at a time in Cloud Shell, Terraform deploys resources in parallel. Terraform uses the underlying APIs of each Google Cloud service to deploy your resources, which enables you to deploy almost everything.
The Terraform language is the user interface to declare resources. A Terraform configuration is a complete document in the Terraform language that tells Terraform how to manage a given collection of infrastructure. A configuration can consist of multiple files and directories. Terraform can be used on public and private clouds, and is already installed in Cloud Shell. The syntax of the Terraform language is something like
[BLOCK TYPE] "BLOCK LABEL" "BLOCK LABEL" {
  # block body
  [IDENTIFIER] = [EXPRESSION] # argument
  ...
}
Here is an example of creating an auto mode network with an HTTP firewall rule:
main.tf
provider "google" {
}
resource "google_compute_network" "vpc_network" {
project = "xxx"
name = "my-auto-mode-network"
auto_create_subnetworks = true
mtu = 1460
}
resource "google_compute_firewall" "rules" {
project = "xxx"
name = "my-firewall-rule"
network = "vpc_network"
allow {
protocol = "tcp"
ports = ["80", "8080"]
}
}
After completing the main.tf file, we can deploy the defined infrastructure in Cloud Shell using the following commands:
| Command | Description |
| --- | --- |
| terraform init | Initializes the new Terraform configuration. It makes sure that the Google provider plug-in is downloaded and installed, along with various other bookkeeping files. |
| terraform plan | Performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files. |
| terraform apply | Creates the infrastructure defined in the configuration. When this command has completed, you can access the defined infrastructure. |
My Certificate
For more on Google Cloud Network Hybrid Connectivity, please refer to the wonderful course here https://www.coursera.org/learn/networking-gcp-hybrid-connectivity-network-management
I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai