Locking Down Kubernetes Clusters

Network Policies in action

The NSA and CISA recently released a Cybersecurity Technical Report on Kubernetes security hardening, which includes a dedicated section on Network Separation and Hardening. That section recommends using Kubernetes Network Policies to limit network traffic between Pods.

Since modern applications are commonly built as distributed microservices, and networking is central to how those microservices communicate, I spent some time digging into the report’s recommendations and the mechanics of Kubernetes Network Policies. This blog captures that exploration and offers my perspective on Kubernetes Network Policies.

Advantages of Implementing Network Isolation

By default, a Pod in a Kubernetes cluster can communicate freely with any other Pod in the same cluster. While this simplifies application development and deployment, it also introduces potential security vulnerabilities: if a Pod is compromised, the attacker can potentially extend the attack to other Pods in the cluster. To address this risk, it is advisable to restrict Pod communication and deploy a solution that blocks inter-Pod traffic unless it is explicitly allowed.

Utilizing the Kubernetes NetworkPolicy API

Kubernetes offers the NetworkPolicy API to establish network isolation within the cluster. Initially introduced as an alpha feature in v1.2 (March 2016), it moved to beta in v1.3 (July 2016) and reached the stable v1 version in v1.7 (June 2017). The NetworkPolicy API provides a well-designed way to enforce network isolation in Kubernetes: developers declare the isolation rules they want, without having to understand or implement the underlying networking configuration, which often differs across platforms, and the cluster’s network plugin handles the enforcement details.

Developers can use the API to define policies that select a Pod or a group of Pods and establish ingress and egress rules for the selected Pod(s). These rules specify the sources from which the selected Pod(s) can accept traffic and the destinations to which they can send traffic. Policies operate at Layer 3 and Layer 4 of the Pod network and are evaluated and enforced by the CNI network plugin used in the cluster. However, not all network plugins support the NetworkPolicy API; it is supported only by specific plugins such as Calico and Weave Net. Flannel, for example, does not support it.

By default, no network policies are applied to Pods or namespaces, allowing unrestricted ingress and egress traffic within the Pod network. Isolation is achieved by applying a network policy to the Pod or its namespace: once a Pod is selected by a network policy, it rejects any connection that is not explicitly allowed by some policy. Pods are selected using the podSelector and/or namespaceSelector options. The exact policy behavior can vary depending on the container network interface (CNI) plugin used in the cluster. Administrators should start with a default policy that selects all Pods and denies all ingress and egress traffic, ensuring that even unselected Pods are isolated, and then add policies that relax these restrictions for allowed connections. External IP addresses can be used in ingress and egress rules via ipBlock, but differences among CNI plugins, cloud providers, and service implementations may affect the order in which NetworkPolicy is processed and how addresses are rewritten within the cluster.
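To illustrate the selectors mentioned above, here is a minimal sketch of an ingress policy that combines a namespaceSelector with a podSelector; the app: api, team: monitoring, and role: collector labels are hypothetical and only serve to show the structure. Because both selectors appear in the same from element, traffic is admitted only from Pods labelled role: collector that run in namespaces labelled team: monitoring.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: default
spec:
  # The policy applies to Pods labelled app: api in this namespace
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Both selectors in one element: collector Pods in monitoring namespaces
    - namespaceSelector:
        matchLabels:
          team: monitoring
      podSelector:
        matchLabels:
          role: collector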

Example Network Policy

To demonstrate how the NetworkPolicy API works, consider a scenario where there are two Pods, frontend and backend, in the default namespace of a Kubernetes cluster. The objective is to allow traffic only from the frontend Pod to the backend Pod, while preventing the backend Pod from sending traffic to the frontend Pod.
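The policies that follow select these Pods by label. As an assumption for the examples, the two Pods carry run: frontend and run: backend labels, which is what kubectl run applies by default; a minimal sketch of such Pods (the nginx image is purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: default
  labels:
    run: frontend   # label used by the frontend NetworkPolicy below
spec:
  containers:
  - name: frontend
    image: nginx    # illustrative image; any frontend workload would do
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: default
  labels:
    run: backend    # label used by the backend NetworkPolicy below
spec:
  containers:
  - name: backend
    image: nginx    # illustrative image; any backend workload would do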

Create a default deny policy

The first step is to block all traffic in the default namespace. This is achieved by creating a default deny policy that selects all Pods in the default namespace (i.e., an empty podSelector) and blocks all ingress and egress traffic. Note that the policy below still allows DNS traffic from the Pods so that they can query the cluster DNS service to resolve service names (a tighter variant that restricts DNS to the cluster DNS Pods is sketched after the manifest). Also note that, irrespective of policy, a Pod can always loop traffic back to itself.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  # An empty podSelector selects every Pod in the default namespace
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  # An empty 'to' matches any destination; only DNS (port 53) is allowed out
  - to:
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
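The rule above allows port 53 egress to any destination. If you want the DNS exception to be narrower, one option is to restrict it to the cluster DNS Pods. Below is a sketch, assuming a cluster that applies the kubernetes.io/metadata.name namespace label (standard on recent Kubernetes versions) and a DNS deployment labelled k8s-app: kube-dns (the usual label for CoreDNS/kube-dns, though it may differ by distribution):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-dns-to-kube-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  # Allow DNS only to the cluster DNS Pods in the kube-system namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP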

Allow egress traffic from Frontend Pod to Backend Pod

The next step is to selectively allow egress traffic from the frontend Pod to the backend Pod. The configuration below selects the frontend Pod using a podSelector and specifies that only egress traffic to the backend Pod (again matched via a podSelector) is allowed; egress traffic to any other Pod is blocked.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend

Allow ingress traffic from Frontend Pod to Backend Pod

The previous policy allows egress traffic from the frontend Pod to reach the backend Pod. However, the backend Pod is not yet configured to accept the ingress traffic coming from the frontend Pod. The following policy allows that ingress traffic on the backend Pod.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend

Network Policy Recipes

The Kubernetes documentation provides a collection of sample NetworkPolicy configurations that serve as an excellent foundation; these can be customized and expanded to fit specific requirements and scenarios. Another valuable resource is the GitHub repository ahmetb/kubernetes-network-policy-recipes, which offers a wide range of pre-defined NetworkPolicy configurations covering use cases from basic to advanced.

Ok, but are they Really Useful?

Kubernetes Network Policies offer a powerful way to declare and specify communication policies for pods. They provide the ability to define policies in YAML manifests, which can be applied in real time without restarting the running pods. However, there are some considerations to keep in mind when using Kubernetes Network Policies for your applications.

Implementing network policies for multiple applications running in different namespaces can quickly become complex and difficult to manage. It is important to document your policies clearly and keep track of which policy manifests belong to which application; this helps avoid inadvertent mistakes. Some network plugins offer GUI-based policy managers that simplify policy definition and visualization.

As your use of network policies grows, it is crucial to perform thorough end-to-end testing to ensure that the policies do not disrupt legitimate pod-to-pod communication. In complex application architectures with multiple microservices developed by different teams, it can be challenging to determine the precise communication matrix between pods. Without comprehensive testing covering all possible scenarios, there is a risk of introducing bugs into production.

Having a large number of network policies in a cluster can also affect the performance of Pod networking. The networking plugin needs to evaluate and enforce policies before routing packets to their destinations, which can add latency to pod-to-pod communication. Depending on the complexity of the policies and the chosen plugin, this latency may not be acceptable for certain use cases or applications.

Service mesh solutions like Istio and Linkerd offer similar functionality to network policies, along with additional features such as traffic encryption, load balancing, and rate limiting. These solutions may be more suitable for certain use cases, providing a comprehensive set of features beyond network policies alone.

Recent Advancements in Network Policy

The NetworkPolicy subproject, led by SIG Network (the Kubernetes networking Special Interest Group), aims to unify the concepts and usage of Network Policies in Kubernetes. The initiative strives to avoid API fragmentation and unnecessary complexity, while providing a community space for learning about and contributing to the Kubernetes NetworkPolicy API and its surrounding ecosystem. As part of this project, a validation framework has been developed to facilitate the validation of Network Policies.

In order to address the challenge of testing all potential Network Policy implementations, the team has introduced Cyclonus. Cyclonus is a comprehensive Network Policy fuzzing tool that verifies the compatibility of a CNI provider with hundreds of different Network Policy scenarios. By running Cyclonus on a cluster, you can determine whether your specific CNI configuration fully conforms to the various Kubernetes Network Policy API constructs.

Port Range Policies

When writing a NetworkPolicy, you can target a range of ports instead of a single port. This is done using the endPort field, as in the following example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

The above rule allows any Pod with the label role=db in the default namespace to communicate with any IP within the range 10.0.0.0/24 over TCP, provided that the target port is between 32000 and 32768.

Conclusion

Kubernetes Network Policy can be a useful tool for restricting communication between Pods in a cluster. However, teams that run on Kubernetes should evaluate the wider security context of their applications and carefully weigh the benefits against the overhead that comes with using it.

Build On!