Enhancing OKE Security with Cilium Network Policy

Umashankar Sankaranarayanan
9 min read · Aug 2, 2024


The security posture of Oracle Container Engine for Kubernetes (OKE) for your complex microservices workloads can be enhanced further with Cilium's network policies.

What is Cilium?

Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.

At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration.

How can it be deployed on OKE?

Kubernetes(k8s) has adopted the Container Network Interface (CNI) specification for network resource management. The CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins.

OKE supports the following two CNI plugins out of the box:

1. VCN-native pod networking

2. Flannel overlay

Both the OCI VCN-Native Pod Networking CNI plugin and the flannel CNI plugin enable you to implement Kubernetes NetworkPolicy resources by allowing you to use Calico and Cilium.

Calico can be deployed alongside the default OKE CNI offerings, whereas with Cilium, once it is deployed, the default CNI plugin can be removed and Cilium acts as the main CNI plugin. We can then enforce both Kubernetes-native network policies and Cilium network policies.

Detailed step-by-step instructions for deploying Cilium on OKE are available here.
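At a high level, the flow looks roughly like the sketch below. This is only a hedged outline for a flannel-based OKE cluster: the Helm value and the flannel DaemonSet name are assumptions on my part, so follow the linked guide for the exact values.

# Add the Cilium Helm repo and install Cilium into kube-system
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system \
  --set ipam.mode=kubernetes   # value commonly used for this setup; confirm against the guide

# Once the Cilium pods are healthy, remove the default flannel CNI DaemonSet
# (the DaemonSet name is an assumption; check it with: kubectl -n kube-system get ds)
kubectl -n kube-system delete daemonset kube-flannel-ds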

Once Cilium is deployed, you should see a healthy status.
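One quick way to check this with the Cilium CLI (the agent pod names in your cluster will differ):

# overall health of the agent, operator and managed endpoints
cilium status --wait

# the Cilium agents run as a DaemonSet in kube-system
kubectl -n kube-system get pods -l k8s-app=cilium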

What is Kubernetes Network Policy?

K8s NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network “entities” (we use the word “entity” here to avoid overloading the more common terms such as “endpoints” and “services”, which have specific Kubernetes connotations) over the network.

The entities that a Pod can communicate with are identified through a combination of the following three identifiers:

  1. Other pods that are allowed (exception: a pod cannot block access to itself)
  2. Namespaces that are allowed
  3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)

By default, a pod is non-isolated for both ingress and egress; all inbound and outbound connections are allowed.
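For reference, a minimal Kubernetes NetworkPolicy that switches every pod in a namespace to default-deny for ingress (the namespace name is just an example from this post) looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: cilium-hr
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
  - Ingress           # no ingress rules are listed, so all inbound traffic is denied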

Before we get started, let's paint a picture of the current deployment to which the Cilium network policies will be applied.

>> OKE is deployed in an OCI region with a single Availability Domain.

>> OKE spans 2 subnets in a VCN and has 2 node pools.

>> Coming to the workloads, we have 3 namespaces: cilium-hr, cilium-fin, and cilium-mfg. The deployments frontend, backend, and pmo are deployed in all 3 namespaces. We will apply Cilium network policies to the endpoints/pods of these deployments and namespaces.

Let's get started with Cilium Network Policies

If no policy is loaded in OKE with Cilium, the default behaviour is to allow all communication unless policy enforcement has been explicitly enabled. As soon as the first policy rule is loaded, policy enforcement is enabled automatically and any communication must then be whitelisted or the relevant packets will be dropped.

Note: Deny policies take precedence over allow policies, regardless of whether they are a Cilium Network Policy, a Cilium Clusterwide Network Policy, or even a Kubernetes Network Policy.
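As a hedged illustration (the policy name is made up; the namespaces are the ones used later in this post), an explicit deny rule uses the ingressDeny/egressDeny sections of a CiliumNetworkPolicy:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "deny-mfg-to-hr"
  namespace: cilium-hr
spec:
  endpointSelector:
    matchLabels: {}          # all endpoints in cilium-hr
  ingressDeny:
  - fromEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": cilium-mfg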

We can enforce any combination of K8s NetworkPolicy, CiliumNetworkPolicy and CiliumClusterwideNetworkPolicy, and they can co-exist.

Cilium network policies can be applied to Cilium/K8s entities at the following network layers:

1. Layer 3

2. Layer 4

3. Layer 7

We can also apply cluster-wide policies using CiliumClusterwideNetworkPolicy and host/worker-node-based policies using nodeSelector, as in the sketch below. Deny policies are also available and should be applied only after thorough testing.
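As a hedged sketch of such a node-scoped policy (host policies require Cilium's host firewall to be enabled, and the CIDR below is only a placeholder for a bastion/admin subnet):

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-ssh-to-worker-nodes"
spec:
  nodeSelector:
    matchLabels: {}          # selects all nodes managed by Cilium
  ingress:
  - fromCIDR:
    - 10.0.10.0/24           # placeholder: bastion/admin subnet
    toPorts:
    - ports:
      - port: "22"
        protocol: TCP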

Now let's look at Cilium network policies at the various network layers.

Layer 3 — Cilium Network Policies:

Endpoints Based

Endpoints-based L3 policy is used to establish rules between endpoints inside the cluster managed by Cilium. Endpoints-based L3 policies are defined by using an Endpoint Selector inside a rule to select what kind of traffic can be received (on ingress), or sent (on egress). An empty Endpoint Selector allows all traffic.

In the example below, all endpoints in the namespace cilium-hr can be reached from pods in the namespace cilium-fin (i.e., whitelisting the endpoints of the cilium-fin namespace).

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-ep-hr-rule1"
  namespace: cilium-hr
spec:
  endpointSelector:
    matchLabels: {}
  ingress:
  - fromEndpoints:
    - {}
  - fromEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": cilium-fin

Entities Based

fromEntities is used to describe the entities that can access the selected endpoints.

toEntities is used to describe the entities that can be accessed by the selected endpoints.

host, remote-node, cluster, kube-apiserver, init, unmanaged, world, and all are the supported entity types.

The example below allows endpoints/pods with the label app: frontend in the cilium-fin namespace to connect to the kube-apiserver, but this policy applies only in self-managed clusters, not in OKE (where the control plane is managed by Oracle).

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "frontend-to-kube-apiserver"
  namespace: cilium-fin
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  - toEntities:
    - kube-apiserver

We can try the below example with OKE: it allows only frontend pods to communicate within the entity "cluster", whereas the remaining pods/deployments/endpoints other than app=frontend (i.e., backend) need an explicit rule to allow egress.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fron-to-cluster"
  namespace: cilium-hr
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  - toEntities:
    - cluster

IP/CIDR based

toCIDR: List of destination prefixes/CIDRs that endpoints selected by the endpointSelector are allowed to talk to. Note that endpoints which are selected by a fromEndpoints are automatically allowed to reply back to the respective destination endpoints.

toCIDRSet: Like toCIDR, but additionally supports an optional except list of subnets within each destination prefix/CIDR to which communication is not allowed.

The example below allows endpoints with the label app=frontend to access only the CIDRs below (OCI Object Storage or Yum repo CIDRs, or a local repo within the VCN) and blocks the rest.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fin-frontend-cidr-rule"
  namespace: cilium-fin
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  - toCIDR:
    - 134.70.24.1/32
  - toCIDR:
    - 134.70.28.1/32
  - toCIDR:
    - 134.70.32.1/32
  - toCIDRSet:
    - cidr: 10.xx.0.0/24

Additionally, we can also use FQDN-based and service-based rules at Layer 3.

Note: a toFQDNs egress rule shouldn't contain any other L3 rules such as toEndpoints or toCIDR.
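As a hedged sketch of an FQDN-based rule (the domain is only an illustration), the policy needs a DNS rule towards kube-dns so Cilium can observe the lookups, plus the toFQDNs rule itself:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn-rule"
  namespace: cilium-fin
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  # allow DNS lookups via kube-dns so Cilium can learn the resolved IPs
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
  # allow HTTPS only to the matched FQDNs (the domain is an example)
  - toFQDNs:
    - matchPattern: "*.oraclecloud.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP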

Layer 4 — Cilium Network Policies:

Layer 4 policy can be specified in addition to layer 3 policies or independently. It restricts the ability of an endpoint to emit and/or receive packets on a particular port using a particular protocol. If no layer 4 policy is specified for an endpoint, the endpoint is allowed to send and receive on all layer 4 ports and protocols including ICMP.

In the below example we will see an L4 rule combined with an L3 rule to allow only DNS traffic on port 53. This rule is an example of a default lockdown/deny-all rule applied to the namespace cilium-hr.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "default-deny-all-cilium-hr"
  namespace: cilium-hr
spec:
  description: "Block all the traffic (except DNS) by default"
  egress:
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: '53'
        protocol: UDP
      rules:
        dns:
        - matchPattern: '*'
  endpointSelector:
    matchExpressions:
    - key: io.kubernetes.pod.namespace
      operator: NotIn
      values:
      - kube-system

We can use a range of ports as well, as below:

- ports:
  - port: "80"
    endPort: 6443
    protocol: TCP

CIDR Dependent L4 rule

In the below example, all endpoints with the label app=pmo in the namespace cilium-hr can communicate with the internal network CIDR 10.xx.0.0/24 on port 22 only and cannot communicate on any other port.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-l4-rule"
  namespace: cilium-hr
spec:
  endpointSelector:
    matchLabels:
      app: pmo
  egress:
  - toCIDR:
    - 10.xx.0.0/24
    toPorts:
    - ports:
      - port: "22"
        protocol: TCP

Limit ICMP/ICMPv6 types

ICMP policy can be specified in addition to layer 3 policies or independently. It restricts the ability of an endpoint to emit and/or receive packets on a particular ICMP/ICMPv6 type (both type (integer) and corresponding CamelCase message (string) are supported). If any ICMP policy is specified, layer 4 and ICMP communication will be blocked unless it’s related to a connection that is otherwise allowed by the policy.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "icmp-rule"
  namespace: cilium-hr
spec:
  endpointSelector:
    matchLabels:
      app: backend
  egress:
  - icmps:
    - fields:
      - type: 8
        family: IPv4
      - type: EchoRequest
        family: IPv6

Layer 7 — Cilium Network Policies:

Cilium is capable of enforcing HTTP-layer (i.e., Layer 7) policies to limit what URLs the endpoint is allowed to reach. Layer 7 policy rules can be embedded into Layer 4 rules and can be specified for ingress and egress.

If a layer 4 rule is specified in the policy, and a similar layer 4 rule with layer 7 rules is also specified, then the layer 7 portions of the latter rule will have no effect.

Path, Method, Host, and Headers are the HTTP fields that can be used in the network policy rules.

The example below allows the endpoints with the label app=web in the namespace cilium-web to only receive packets on port 80 using TCP. While communicating on this port, the only API calls allowed will be GET /downloads and PUT /upload.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-rule"
  namespace: cilium-web
spec:
  endpointSelector:
    matchLabels:
      app: web
  ingress:
  - toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/downloads"
        - method: PUT
          path: "/upload"

Now let's see how to validate the policies and also use the Hubble observability UI.

[opc@bastion-vm ~]$ kubectl get ciliumnetworkpolicies -A
NAMESPACE    NAME                         AGE
cilium-fin   cidr-rule                    19h
cilium-fin   fqdn-rule                    20h
cilium-hr    cidr-l4-rule                 19h
cilium-hr    default-deny-all-cilium-hr   19h
cilium-hr    fron-to-cluster              22h
cilium-hr    icmp-rule                    18h
cilium-hr    l3-rule-all                  23h
cilium-hr    pmo-to-cluster               21h
cilium-mfg   fron-to-cluster              23h
cilium-mfg   l3-rule-all                  23h
default      to-fin-front-from-nodes      21h

Note: With kubectl, we can use cnp in place of 'ciliumnetworkpolicies'.
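An individual policy can also be inspected in detail with kubectl describe, for example:

kubectl describe cnp cidr-l4-rule -n cilium-hr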
---------------------------------------------------------------------

>>>>>> kubectl -n kube-system exec cilium-n9fp7 -- cilium-dbg endpoint list

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
4 Enabled Disabled 13682 k8s:app=web 10.244.0.135 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
103 Disabled Disabled 49823 k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/name=cilium-cli 10.244.0.230 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-test
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-test
k8s:run=test-21491
168 Disabled Disabled 4 reserved:health 10.244.0.201 ready

514 Enabled Enabled 11488 k8s:app=frontend 10.244.0.184 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-hr
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default

What is Hubble?

Observability is provided by Hubble which enables deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. Hubble is able to provide visibility at the node level, cluster level or even across clusters in a Multi-Cluster (Cluster Mesh) scenario.

We can observe traffic flows and verdicts via the Hubble CLI as below:

>>> cilium hubble port-forward &

hubble observe --verdict DROPPED

Aug 2 07:20:14.103: fe80::b0c2:caff:fe6e:2c93 (ID:36585) <> ff02::2 (unknown) Unsupported L3 protocol DROPPED (ICMPv6 RouterSolicitation)
Aug 2 07:21:22.487: cilium-mfg/frontend-5db6bcd6f7-mcpr5:52136 (ID:37473) <> cilium-hr/backend-6d4b99bdb9-fdkgl:80 (ID:14673) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Aug 2 07:21:22.487: cilium-mfg/frontend-5db6bcd6f7-mcpr5:52136 (ID:37473) <> cilium-hr/backend-6d4b99bdb9-fdkgl:80 (ID:14673) Policy denied DROPPED (TCP Flags: SYN)

hubble observe -f
Aug 2 07:29:37.096: cilium-fin/frontend-5db6bcd6f7-vcm7h:50414 (ID:32903) -> 10.xx.0.yy:22 (ID:16777433) to-stack FORWARDED (TCP Flags: ACK, FIN)
Aug 2 07:29:37.613: 10.244.0.8:52938 (host) <- cilium-test/echo-other-node-5d67f9786b-4726j:8080 (ID:36585) to-stack FORWARDED (TCP Flags: ACK, FIN)
Aug 2 07:29:37.616: 10.244.0.8:52938 (host) -> cilium-test/echo-other-node-5d67f9786b-4726j:8080 (ID:36585) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug 2 07:29:37.944: 10.244.0.132:48832 (host) -> cilium-test/echo-same-node-6698bd45b-thvl8:8080 (ID:35627) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 2 07:29:37.944: 10.244.0.132:50548 (host) -> cilium-test/echo-same-node-6698bd45b-thvl8:8181 (ID:35627) to-endpoint FORWARDED (TCP Flags: SYN)
Aug 2 07:29:37.944: 10.244.0.132:48832 (host) <- cilium-test/echo-same-node-6698bd45b-thvl8:8080 (ID:35627) to-stack FORWARDED (TCP Flags: SYN, ACK)
Aug 2 07:29:37.944: 10.244.0.132:50548 (host) <- cilium-test/echo-same-node-6698bd45b-thvl8:8181 (ID:35627) to-stack FORWARDED (TCP Flags: SYN, ACK)

Let's use the Hubble UI:

 cilium hubble ui

ℹ️ Opening "http://localhost:12000" in your browser...

In the above UI screenshot, we can see the traffic flows and verdicts to/from the namespace cilium-fin, to/from the namespaces cilium-hr and cilium-fin. Also, world refers to the hosts/IPs not managed by Cilium.

Thanks for reading, and I hope you got a good understanding of using Cilium Network Policies with OKE.


Disclaimer : All views expressed in this blog are my own and don’t represent opinions of Oracle.
