As you may be aware, pod(container)-to-pod communication on any GKE cluster is, by default, open to all namespaces and all pods, with the main limiting factor being the destination container's own port configuration. In other words, lacking any container-level restrictions, one is able to telnet to another container's ports without restriction.

This is acceptable within development and staging environments for QA and troubleshooting, but should be locked down further within the production GKE cluster(s).

It is worth noting that while GKE does come with cluster-native networking (alias IPs, which persist on the larger Google Cloud network and ARE subject to Google Cloud Platform firewall rules), those rules offer little granularity for managing the nuances of application-to-application communication, given the ephemeral nature of pod IPs.

The default option on offer, and the one that should be pursued, is Network Policies applied at the relevant GKE cluster level (note that network policy enforcement must be enabled on the cluster for these to take effect).

An important difference between a firewall rule and a NetworkPolicy:

Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
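To sketch this isolation behaviour, a minimal "default deny" policy selects every pod in its namespace while allowing nothing, so all inbound traffic to those pods is rejected (the policy name and namespace below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: my-namespace      # illustrative namespace
spec:
  podSelector: {}              # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so all inbound traffic is denied
```

Because every pod in the namespace is now "selected by a NetworkPolicy", each one rejects all connections until further policies explicitly whitelist them.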

NetworkPolicy “Firewall” Direction and Selectors

There are two traffic directions you can apply restrictive policies to, alongside two selectors you can employ to select particular pods or namespaces:

- Egress: for traffic leaving the designated pods (via podSelector) or namespaces (via namespaceSelector)
- Ingress: for traffic reaching the designated pods (via podSelector) or namespaces (via namespaceSelector)

Tag-based “Firewall” NetworkPolicy Example

```yaml
...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        app: api-proxy
  - podSelector:
      matchLabels:
        app: redis
...
```

Because this policy now selects the target pods, it will by default blacklist any access into those pods from any other pod in the cluster that the rules do not explicitly allow.

As specified, it will then whitelist all ports and all access from two sources: any pod in a namespace labelled app: api-proxy, and any pod labelled app: redis in the policy's own namespace (a podSelector used on its own inside from matches pods in the same namespace as the policy).
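A subtle but important detail here is the placement of the dash: two separate entries under from are OR-ed together, while a single entry combining namespaceSelector and podSelector AND-s them. A sketch of the AND form, reusing the same illustrative labels:

```yaml
ingress:
- from:
  - namespaceSelector:     # single list element: both selectors must match
      matchLabels:
        app: api-proxy
    podSelector:           # note: no leading dash, so this AND-s with the above
      matchLabels:
        app: redis
# only pods labelled app: redis that live in namespaces
# labelled app: api-proxy are allowed in
```

Misplacing that one dash silently turns a tightly-scoped AND rule into a much broader OR rule, so it is worth double-checking in review.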

Granular NetworkPolicy Controls

You are able to define both ingress and egress rules in one NetworkPolicy definition, as well as control, with a degree of granularity, which ports such applications are permitted to access.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-netpolicy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from: []         # accessible from ALL sources, but only on port 80
    ports:
    - port: 80
      protocol: TCP
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: redis
    ports:
    - port: 6379     # the Redis default port
      protocol: TCP
  - to:
    - namespaceSelector:   # AND-ed with the podSelector below
        matchLabels:
          name: kube-system   # assumes kube-system has been labelled name=kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53       # allow DNS lookups, or the egress lockdown breaks name resolution
      protocol: UDP
```

There is a host of exciting Kubernetes projects taking place at Contino. If you are looking to work on the latest and greatest infrastructure stack, or are simply looking for a challenge, get in touch! We're hiring and looking for bright minds at every level. At Contino, we pride ourselves on delivering best-practice cloud transformation projects for medium-sized businesses through to large enterprises.

JP