By Insight Editor / 22 Feb 2024 / Topics: DevOps, Application development, Cloud, Networking

In Google Kubernetes Engine (GKE), the Ingress feature serves to expose services outside the cluster, establishing a layer 7 load balancer. However, traditional Ingress implementations pose certain limitations. Specifically, they mandate that Ingress and its associated services reside in the same namespace. Additionally, all routes and rules are confined within a single YAML file, often leading to complexity when managing multiple rules and annotations. This complexity further escalates when multiple teams share the Ingress resource, risking route disruptions caused by changes made by individual teams.
The GKE Gateway addresses these challenges effectively. It enables cross-namespace routing, allowing the Gateway to exist in its dedicated namespace—a manageable space for platform teams. Application teams can independently manage their routes and share the Gateway. The GKE Gateway is Google Kubernetes Engine’s implementation of the Kubernetes Gateway API for Cloud Load Balancing. The Kubernetes Gateway API, overseen by SIG Network, is an open-source project comprising diverse resources that model service networking within Kubernetes.
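To make that shared model concrete, here is a minimal sketch with illustrative names only (the ‘platform-infra’ and ‘team-app’ namespaces, the ‘shared-gateway-access’ label, and the ‘app-service’ backend are assumptions, not part of the setup described later): a platform-owned Gateway admits routes only from labeled namespaces, and an application team attaches its HTTPRoute from its own namespace.
# Platform team: Gateway in its own namespace, admitting routes only from
# namespaces labeled shared-gateway-access: "true" (names are illustrative).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: platform-infra
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"
---
# Application team: HTTPRoute in the app's own namespace, attached to the shared Gateway.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-app
spec:
  parentRefs:
  - name: shared-gateway
    namespace: platform-infra
  rules:
  - backendRefs:
    - name: app-service
      port: 8080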
There are two versions of the GKE Gateway controller: the single-cluster Gateway controller, which programs load balancers for a single GKE cluster, and the multi-cluster Gateway controller, which programs load balancers that span clusters registered to a fleet. I will cover the multi-cluster Gateway setup in this blog.
A Gateway is a resource that defines an access point into the cluster. Each Gateway references a GatewayClass, which determines the Gateway controller and the type of load balancer it provisions. GKE provides several GatewayClasses that map to different Google Cloud load balancers, for example gke-l7-global-external-managed for the global external Application Load Balancer and gke-l7-rilb for the regional internal Application Load Balancer, each with a multi-cluster (-mc) variant.
This article focuses on setting up Global External L7 Load Balancer for multiple clusters using GKE Gateway. In this example, we’ll set up an external, multi-cluster Gateway to perform load balancing across two GKE clusters, specifically for internet traffic.
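Both clusters must be registered to the same fleet before multi-cluster features can be enabled. If they are not already fleet members, registration would look roughly like this (the cluster locations ‘us-east1’ and ‘us-west1’ are assumptions; adjust them to your regions):
gcloud container fleet memberships register gke-us-east-1 \
  --gke-cluster=us-east1/gke-us-east-1 \
  --enable-workload-identity \
  --project=<project_name>
gcloud container fleet memberships register gke-us-west-1 \
  --gke-cluster=us-west1/gke-us-west-1 \
  --enable-workload-identity \
  --project=<project_name>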

Enable multi-cluster Services on the fleet, then enable the multi-cluster Gateway controller with ‘gke-us-east-1’ as the config cluster:
gcloud container fleet multi-cluster-services enable --project=<project_name>
gcloud container fleet ingress enable --config-membership=gke-us-east-1 --project=<project_name>
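As a quick sanity check, you can confirm that both features are active with their describe commands:
gcloud container fleet multi-cluster-services describe --project=<project_name>
gcloud container fleet ingress describe --project=<project_name>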
The multi-cluster Services importer and the multi-cluster Ingress service agent also need IAM roles in the project; the Terraform below grants them.
resource "google_project_iam_member" "network_viewer_role_mcs" {
  project = "<project-name>"
  role    = "roles/compute.networkViewer"
  member  = "serviceAccount:<project-name>.svc.id.goog[gke-mcs/gke-mcs-importer]"
}

resource "google_project_iam_member" "container_admin_role_mcs" {
  project = "<project-name>"
  role    = "roles/container.admin"
  member  = "serviceAccount:service-<project-number>@gcp-sa-multiclusteringress.iam.gserviceaccount.com"
}
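If you are not managing IAM with Terraform, the same two bindings can be granted directly with gcloud, using the placeholders from the Terraform above:
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:<project-name>.svc.id.goog[gke-mcs/gke-mcs-importer]" \
  --role="roles/compute.networkViewer"
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:service-<project-number>@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
  --role="roles/container.admin"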
Deploy the demo application in both GKE clusters, ‘gke-us-east-1’ and ‘gke-us-west-1’, using the provided manifests.
kind: Namespace
apiVersion: v1
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
      version: v1
  template:
    metadata:
      labels:
        app: demo
        version: v1
    spec:
      containers:
      - name: whereami
        image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1.2.20
        ports:
        - containerPort: 8080
With the demo application now running across both clusters, we’ll expose and export the applications by deploying Services and ServiceExports to each cluster. ServiceExport is a custom resource which is mapped to a Kubernetes Service. It exports the endpoints of that Service to all clusters registered to the fleet.
Another custom resource, ServiceImport, is automatically generated by the multi-cluster Service controller for each ServiceExport resource. Multi-cluster Gateways utilize ServiceImports as logical identifiers for a Service.
Deploy the following Service and ServiceExport resources in ‘gke-us-east-1’:
apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: demo
spec:
  selector:
    app: demo
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: demo
  namespace: demo
---
apiVersion: v1
kind: Service
metadata:
  name: demo-east-1
  namespace: demo
spec:
  selector:
    app: demo
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: demo-east-1
  namespace: demo
Deploy the following Service and ServiceExport resources in ‘gke-us-west-1’:
apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: demo
spec:
  selector:
    app: demo
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: demo
  namespace: demo
---
apiVersion: v1
kind: Service
metadata:
  name: demo-west-1
  namespace: demo
spec:
  selector:
    app: demo
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: demo-west-1
  namespace: demo
After the ServiceExports are applied, verify that the corresponding ServiceImports exist in both clusters:
kubectl get serviceimports --namespace demo
# gke-us-west-1
NAME          AGE
demo          1m30s
demo-west-1   1m30s

# gke-us-east-1
NAME          AGE
demo          1m30s
demo-east-1   1m30s
Once the application and services are deployed, we can configure the Gateway and HTTPRoute in the config cluster. To distribute traffic across both clusters, we’ll use the ‘gke-l7-global-external-managed-mc’ GatewayClass, which provisions a global external Application Load Balancer.
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: demo
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
Deploying the Gateway will initiate the creation of a Load Balancer, an external IP address, a forwarding rule, two backend services, and a health check.
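Provisioning can take several minutes. One way to watch the Gateway status and read back the assigned external IP address, using the resource names from the manifest above:
kubectl describe gateway external-http --namespace demo
kubectl get gateway external-http --namespace demo \
  -o=jsonpath="{.status.addresses[0].value}"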
Deploy the following HTTPRoute resource in the config cluster (gke-us-east-1).
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-demo-route
  namespace: demo
  labels:
    gateway: external-http
spec:
  hostnames:
  - "demo.example.com"
  parentRefs:
  - name: external-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /west
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: demo-west-1
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /east
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: demo-east-1
      port: 8080
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: demo
      port: 8080
When the HTTPRoute is deployed, requests to /west are directed to pods running in the ‘gke-us-west-1’ cluster, requests to /east are directed to pods running in the ‘gke-us-east-1’ cluster, and any other paths are routed to pods in either cluster based on their health, capacity, and proximity to the requesting client.
If traffic is sent to the root path of the domain, the load balancer routes it to the closest healthy region.
curl -H "host: demo.example.com" http://<EXTERNAL_IP_ADDRESS_OF_GATEWAY>
The sample output shows that the request was served from a pod running in the gke-us-west-1 cluster.
{
  "cluster_name": "gke-us-west-1",
  "zone": "us-west1-a",
  "host_header": "demo.example.com",
  "node_name": "gke-gke-us-west-1-default-pool-2367899-5y53.c.<project-id>.internal",
  "project_id": "<project-id>",
  "timestamp": "2024-01-30T14:23:14"
}
curl -H "host: demo.example.com" http://<EXTERNAL_IP_ADDRESS_OF_GATEWAY>/east
The sample output shows that the request was served from a pod running in the gke-us-east-1 cluster.
{
  "cluster_name": "gke-us-east-1",
  "zone": "us-east1-b",
  "host_header": "demo.example.com",
  "node_name": "gke-gke-us-east-1-default-pool-678899-5p93.c.<project-id>.internal",
  "project_id": "<project-id>",
  "timestamp": "2024-01-30T14:25:14"
}
curl -H "host: demo.example.com" http://<EXTERNAL_IP_ADDRESS_OF_GATEWAY>/west
The sample output shows that the request was served from a pod running in the gke-us-west-1 cluster.
{
  "cluster_name": "gke-us-west-1",
  "zone": "us-west1-a",
  "host_header": "demo.example.com",
  "node_name": "gke-gke-us-west-1-default-pool-2367899-5y53.c.<project-id>.internal",
  "project_id": "<project-id>",
  "timestamp": "2024-01-30T14:26:14"
}
If you’re deploying the Gateway in a production environment, secure it with Gateway policies. For example, terminate TLS on the Gateway listener and attach a Google Cloud Armor security policy to the backends, as shown below.
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: demo
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      options:
        networking.gke.io/pre-shared-certs: demo-example-com
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
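The networking.gke.io/pre-shared-certs option refers to an SSL certificate resource that must already exist in the project. A Google-managed certificate for the example hostname could be created roughly like this (the certificate name matches the listener option above):
gcloud compute ssl-certificates create demo-example-com \
  --domains=demo.example.com \
  --global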
To attach a Google Cloud Armor security policy to the backends, apply a GCPBackendPolicy that targets the ServiceImport:
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: demo
  namespace: demo
spec:
  default:
    securityPolicy: cloud-armor-policy
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: demo
    namespace: demo
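Backend health checking can also be tuned with a HealthCheckPolicy. The sketch below is illustrative only (the /healthz request path and the check intervals are assumptions, and the field layout follows the GKE policy CRDs as I understand them); it targets the same ServiceImport as the backend policy above:
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: demo-healthcheck
  namespace: demo
spec:
  default:
    checkIntervalSec: 20
    timeoutSec: 5
    config:
      type: HTTP
      httpHealthCheck:
        port: 8080
        requestPath: /healthz   # assumed path; point this at your app's health endpoint
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: demo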
The GKE Gateway is a powerful evolution of traditional Ingress. Its role-oriented design removes Ingress’s single-namespace, single-resource constraints and supports routing across namespaces and clusters. This article aimed to provide a clear understanding of configuring the GKE Gateway for multiple clusters.
Vinita, a Senior Cloud Infrastructure Engineer, thrives on crafting solutions on the Google Cloud Platform, boasting extensive expertise in Kubernetes.