Kubernetes Network Policy (NetworkPolicy)


If you want to control network traffic at the IP address or port level (OSI layer 3 or 4), you can consider using Kubernetes network policies (NetworkPolicy) for particular applications in your cluster. NetworkPolicy is an application-centric construct that lets you specify how a Pod is allowed to communicate with various network “entities” over the network (we use the word “entity” here to avoid overloading more common terms such as “endpoint” and “service”, which have specific meanings in Kubernetes). A NetworkPolicy applies to connections with a Pod on one or both ends, and is not relevant to other connections.

The entities that a Pod can communicate with are identified through a combination of the following three identifiers:

  1. Other Pods that are allowed (exception: a Pod cannot block access to itself)
  2. Namespaces that are allowed
  3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)

When defining a Pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pods that match the selector. When defining an IP-based NetworkPolicy, you define the policy based on IP blocks (CIDR ranges).

Prerequisites

Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution that supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that enforces it has no effect.
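
One practical way to confirm that your networking solution actually enforces policies is to apply a deny-all policy (see “Deny all ingress traffic by default” below) in a test namespace and then probe a Pod in that namespace from a temporary client. The sketch below assumes a Pod already serving HTTP in that namespace; the target Pod IP is a placeholder you would substitute:

 # After applying a default-deny-ingress policy in the test namespace,
 # try to reach one of its Pods from a throwaway client Pod.
 # If the request still succeeds, the network plugin is not enforcing NetworkPolicy.
 kubectl run np-test-client --rm -it --restart=Never --image=busybox:1.36 -- \
   wget -qO- -T 2 http://<target-pod-ip>:80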

Two types of pod isolation

Pods can be isolated in two ways: isolation for egress and isolation for ingress. These concern which connections may be established. “Isolation” here is not absolute; rather it means “some restrictions apply”. The opposite, “non-isolated for a given direction”, means that no restrictions apply in that direction. The two kinds of isolation (or the lack thereof) are declared independently, and both are relevant for a connection from one Pod to another.

By default, a Pod is non-isolated for egress: all outbound connections are allowed. A Pod is isolated for egress if there is any NetworkPolicy that both selects the Pod and has “Egress” in its policyTypes; we say that such a policy applies to the Pod for egress. When a Pod is isolated for egress, the only connections allowed from the Pod are those allowed by the egress list of some NetworkPolicy that applies to the Pod for egress. The effects of those egress lists combine additively.

By default, a Pod is non-isolated for ingress: all inbound connections are allowed. A Pod is isolated for ingress if there is any NetworkPolicy that both selects the Pod and has “Ingress” in its policyTypes; we say that such a policy applies to the Pod for ingress. When a Pod is isolated for ingress, the only connections allowed into the Pod are those from the Pod’s node and those allowed by the ingress list of some NetworkPolicy that applies to the Pod for ingress. The effects of those ingress lists combine additively.

Network policies do not conflict; they are additive. If any policy or policies apply to a given Pod for a given direction, the connections allowed in that direction are the union of what the applicable policies allow. Thus, the order of evaluation does not affect the policy result.

To allow a connection from a source Pod to a destination Pod, both the source Pod’s egress policy and the destination Pod’s ingress policy need to allow the connection. If either party does not allow the connection, establishing the connection will fail.
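
As a sketch of what this means in practice, the following pair of policies would allow a hypothetical frontend Pod (label app=frontend) to reach a database Pod (label role=db) on TCP port 6379 in the same namespace; the names, labels, and port are illustrative, and each policy only matters once the corresponding Pod is isolated in that direction:

# Egress side: allow the frontend Pod to open connections to role=db Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-allow-egress-to-db
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              role: db
      ports:
        - protocol: TCP
          port: 6379
---
# Ingress side: allow the db Pod to accept connections from app=frontend Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-ingress-from-frontend
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 6379

Keep in mind that once the frontend Pod is egress-isolated, any other traffic it needs (for example, DNS lookups) must also be allowed explicitly.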

NetworkPolicy resource

See NetworkPolicy for a full definition of resources.

Here is an example NetworkPolicy:

service/networking/networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

Note:

Sending the above example to the API server has no effect unless you choose a network solution that supports network policies.

Required fields: Like all other Kubernetes configurations, NetworkPolicy requires the apiVersion, kind, and metadata fields. For general information on config file operations, see Configuring Pods to Use ConfigMaps and Object Management.

spec: The NetworkPolicy specification contains all the information needed to define a specific network policy in a namespace.

podSelector: Every NetworkPolicy includes a podSelector that selects the set of Pods to which the policy applies. The policy in the example selects Pods with the label “role=db”. An empty podSelector selects all Pods in the namespace.

policyTypes: Every NetworkPolicy contains a policyTypes list, which may include Ingress, Egress, or both. The policyTypes field indicates whether the policy applies to inbound traffic to the selected Pods, outbound traffic from the selected Pods, or both. If a NetworkPolicy does not specify policyTypes, then by default Ingress is always set, and Egress is set if the NetworkPolicy has any egress rules.
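
For example, a hypothetical policy that omits policyTypes and defines only ingress rules is treated as if policyTypes were ["Ingress"]: it isolates the selected Pods for ingress but leaves their egress unaffected. A minimal sketch (the name and labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: implicit-ingress-only
spec:
  podSelector:
    matchLabels:
      role: db
  # policyTypes omitted: it defaults to ["Ingress"] because there are no egress rules
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend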

ingress: Each NetworkPolicy may include a list of allowed ingress rules. Each rule allows traffic that matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port from one of three sources: the first specified via an ipBlock, the second via a namespaceSelector, and the third via a podSelector.

egress: Each NetworkPolicy may include a list of allowed egress rules. Each rule allows traffic that matches both the to and ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.0/24.

So, this network policy example:

  1. Isolates role=db Pods in the default namespace for both ingress and egress traffic (if they were not already isolated).

  2. (Ingress rule) Allows connections to all Pods in the default namespace with the label role=db on TCP port 6379, from:

    • any Pod in the default namespace with the label role=frontend
    • any Pod in a namespace with the label project=myproject
    • IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (that is, all of 172.17.0.0/16 except 172.17.1.0/24)
  3. (Egress rule) Allows connections from any Pod in the default namespace with the label role=db to CIDR 10.0.0.0/24 on TCP port 5978.

See the declarative network policy walkthrough for more examples.

Behavior of to and from selectors

There are four kinds of selectors that can be specified in an ingress from section or an egress to section:

podSelector: This selector will select specific Pods in the same namespace as the NetworkPolicy that should be allowed as inbound traffic sources or outbound traffic destinations.

namespaceSelector: This selector selects particular namespaces; all Pods in those namespaces are allowed as inbound traffic sources or outbound traffic destinations.

namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces. Be careful to use correct YAML syntax. For example, this policy:

...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
      podSelector:
        matchLabels:
          role: client
  ...

This policy contains a single element in the from array, and only allows connections from Pods with the label role=client in namespaces with the label user=alice. By contrast, this policy:

...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
    - podSelector:
        matchLabels:
          role: client
  ...

It contains two elements in the from array, and allows connections either from Pods labeled role=client in the local namespace, or from any Pod in any namespace labeled user=alice.

When in doubt, use kubectl describe to see how Kubernetes interprets the policy.
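
For example, to see how the API server recorded the example policy shown earlier:

 kubectl describe networkpolicy test-network-policy -n default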

ipBlock: This selects particular IP CIDR ranges to allow as inbound traffic sources or outbound traffic destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.

The cluster’s inbound and outbound mechanisms often require rewriting the source or destination IP of packets. When this happens, it’s uncertain whether it happens before or after NetworkPolicy processing, and it may behave differently for different combinations of network plugins, cloud providers, Service implementations, etc.

For inbound traffic, this means that in some cases you can filter incoming packets based on the actual original source IP, while in other cases the source IP that the NetworkPolicy acts on may be the IP of a LoadBalancer, of the Pod’s node, or something else.

For outbound traffic, this means that connections from Pods to Service IPs that get rewritten to cluster-external IPs may or may not be subject to ipBlock-based policies.

Default policies

By default, if no policy exists in a namespace, all traffic to and from Pods in that namespace is allowed. The following example enables you to change the default behavior in this namespace.

Deny all ingress traffic by default

You can create a “default” isolation policy for a namespace by creating a NetworkPolicy that selects all Pods but does not allow any inbound traffic to those Pods.

service/networking/network-policy-default-deny-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This ensures that even Pods not selected by any other NetworkPolicy will still be quarantined for ingress. This policy does not affect egress isolation for any Pods.

Allow all ingress traffic

If you want to allow all inbound connections from all Pods in a namespace, you can create an explicit allow policy.

service/networking/network-policy-allow-all-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

With this policy in place, any additional policies will not cause any inbound connections to these Pods to be rejected. This policy has no effect on egress isolation for any Pods.

Deny all egress traffic by default

You can create a “default” egress isolation policy for a namespace by creating a NetworkPolicy that selects all Pods but does not allow any outbound traffic from those Pods.

service/networking/network-policy-default-deny-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

This policy ensures that even Pods not selected by any other NetworkPolicy are not allowed egress traffic. This policy does not change the ingress isolation behavior of any Pod.

Allow all egress traffic

If you want to allow all connections from all Pods in a namespace, you can create a policy that explicitly allows all outbound connections from Pods in that namespace.

service/networking/network-policy-allow-all-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

With this policy in place, any additional policies will not cause any outbound connections from these Pods to be rejected. This policy has no effect on ingress isolation for any Pod.

Deny all ingress and all egress traffic by default

You can create a “default” policy for a namespace to block all inbound and outbound traffic by creating the following NetworkPolicy in that namespace.

service/networking/network-policy-default-deny-all.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

This policy ensures that even Pods not selected by any other NetworkPolicy are not allowed inbound or outbound traffic.

SCTP support

Feature Status: Kubernetes v1.20 [stable]

As a stable feature, SCTP support is enabled by default. To disable SCTP at the cluster level, you (or your cluster administrator) need to disable the SCTPSupport feature gate for the API server with --feature-gates=SCTPSupport=false,…. When this feature gate is enabled, you can set the protocol field of a NetworkPolicy to SCTP.

Note:

You must be using a CNI plugin that supports SCTP protocol NetworkPolicies.
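
A minimal sketch of a policy that allows SCTP ingress traffic; the name, labels, and port here are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress
spec:
  podSelector:
    matchLabels:
      app: sctp-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: sctp-client
      ports:
        - protocol: SCTP
          port: 9999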

Targeting a range of ports

Feature status: Kubernetes v1.25 [stable]

When writing a NetworkPolicy, you can target a range of ports instead of a fixed port.

This can be achieved by using the endPort field, as shown in the following example:

service/networking/networkpolicy-multiport-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768

The above rule allows any Pod with the label role=db in the namespace default to communicate with any destination in the 10.0.0.0/24 range over TCP, provided that the destination port is between 32000 and 32768.

The following limitations apply when using this field:

  • The endPort field must be equal to or greater than the value of the port field.
  • endPort can only be defined if port is defined.
  • Both fields must be specified as numbers.

Note:

The CNI plugin used by your cluster must support the endPort field in the NetworkPolicy specification. If your network plugin does not support endPort and you specify a NetworkPolicy that includes it, the policy will be applied only to the single port named by the port field.

Targeting multiple namespaces by label

In this scenario, your Egress NetworkPolicy targets more than one namespace by label. For this to work, you first need to label the target namespaces. For example:

 kubectl label namespace frontend namespace=frontend
 kubectl label namespace backend namespace=backend

Then add those labels under namespaceSelector in your NetworkPolicy document. For example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-namespaces
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
        - key: namespace
          operator: In
          values: ["frontend", "backend"]

Note:

You cannot specify namespace names directly in NetworkPolicy. You have to use namespaceSelector with matchLabels or matchExpressions to select namespaces based on labels.
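
If your cluster’s control plane sets the standard kubernetes.io/metadata.name label on namespaces (recent Kubernetes versions do this automatically), you can effectively select a namespace by its name through that label. A fragment, reusing the frontend namespace from the example above:

...
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
  ...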
