
Isovalent Enterprise for Cilium 1.14: introducing Cilium Multi-Network


We are delighted to announce Isovalent Enterprise for Cilium 1.14, introducing Cilium Multi-Network! 

Isovalent Enterprise for Cilium is the hardened, enterprise-grade, and 24×7-supported version of the eBPF-based cloud networking platform Cilium. In addition to all features available in the open-source version of Cilium, the enterprise edition includes advanced networking, security, and observability features popular with enterprises and telco providers. 

The highlight of this new enterprise release is undoubtedly native support for Multi-Network: the ability to connect a Kubernetes Pod to multiple network interfaces.

This is tremendously useful for a variety of use cases:

  • Network Segmentation: Connecting Pods with multiple network interfaces can be used to segment network traffic. For example, you can have one interface for internal connectivity over a private network and another for external connectivity to the Internet.
  • Multi-Tenancy: In a multi-tenant Kubernetes cluster, you can use Multi-Network alongside Cilium Network Policies to isolate network traffic between tenants by assigning different interfaces to different tenants or namespaces.
  • Service Chaining: Service chaining is a network function virtualization (NFV) use case where multiple networking functions or services are applied to traffic as it flows to and from a Pod. Multi-Network can help set up the necessary network interfaces for these services.
  • IoT (Internet of Things) and Edge Computing: For IoT and edge computing scenarios, Multi-Network can be used alongside Cilium Network Policies to impose network isolation on multi-tenant edge devices.

Importantly, Multi-Network with Isovalent Enterprise for Cilium is totally compatible with Cilium Network Policies and Hubble – meaning you don’t have to compromise on security and observability while using this feature!

Multi-Network Lab

In this lab, learn about the new Isovalent Enterprise for Cilium 1.14 feature - Multi-Network!

Start Multi-Network Lab

But there’s more! Isovalent Enterprise for Cilium also includes all the features introduced in Cilium 1.14.

Let’s review some of the Cilium 1.14 highlights before diving into Multi-Network!

What is new in Cilium & Isovalent Enterprise for Cilium 1.14?

  • Mutual Authentication: improve your security posture with zero effort (more details)
  • Envoy DaemonSet: a new option to deploy Envoy as a DaemonSet instead of embedded inside the Cilium agent (more details)
  • WireGuard Improvements: encryption with Cilium is getting better – you can now encrypt the traffic from node-to-node and also use Layer 7 policies alongside WireGuard (more details)
  • Gateway API Update: our leading Gateway API implementation is updated with support for the latest Gateway API version, additional route type support and multiple labs (more details)
  • L2 Announcements: Cilium can now natively advertise External IPs to local networks over Layer 2, reducing the need to install and manage tools such as MetalLB (more details; see the example sketch after this list)
  • BGP Enhancements: introducing support for better operational tools and faster failover (more details)
  • Multi-Pool IPAM: introducing support to allocate IPs to Pods from multiple IPAM pools. Multi-Pool IPAM is a requirement for the Multi-Network feature described below (more details)
  • BIG TCP for IPv4: after the introduction of BIG TCP support for IPv6 in Cilium 1.13, here comes IPv4 support. Ready for a 50% throughput improvement? (more details)
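
To give a flavour of one of these features, here is roughly what an L2 Announcements configuration can look like in open-source Cilium 1.14: a CiliumL2AnnouncementPolicy selects which Services and nodes take part in Layer 2 advertisement. This is a minimal sketch (the policy name, label selectors, and interface pattern are illustrative, and the feature itself must also be enabled, for example via the l2announcements.enabled Helm value):

apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-announce-example
spec:
  serviceSelector:
    matchLabels:
      announce: l2            # only Services carrying this label are announced
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  interfaces:
    - ^eth[0-9]+              # advertise over these network interfaces
  externalIPs: true
  loadBalancerIPs: true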

What is Multi-Network (Beta) in Isovalent Enterprise for Cilium 1.14?

An architecture diagram showing a pod having two networking interfaces, each connected to a different network.

Kubernetes is built on the premise that a Pod should belong to a single network. While this approach may work for most use cases, enterprises and telcos often require a more sophisticated and flexible networking model for their advanced deployments. 

There are many use cases where a Pod may require attachments to multiple networks with different properties via different interfaces. This is what Multi-Network in Isovalent Enterprise for Cilium 1.14 delivers.

Network Segmentation

In a typical Kubernetes environment, Pods are attached to a single flat network, often with direct outbound Internet access. Organizations operating under regulatory requirements often need more granularity: for example, platform operators may want certain Pods to have a public-facing interface to access the Internet and a separate internal, private one. This becomes possible with Cilium Multi-Network.

Multi-Network enables operators to build networking zones within their cluster, providing different levels of access and another way to enforce micro-segmentation.

Multi-Tenancy Isolation

Kubernetes environments are commonly shared by multiple organizations. Tenants share cluster resources and are segregated using namespaces, resource quotas and limits, RBAC, and network policies.

Sometimes, though, compliance requirements mandate that segmentation happen not only in software but also in hardware.

Using Multi-Network alongside Cilium Network Policies, you can isolate network traffic between tenants by assigning different interfaces to tenants or namespaces.

Service Chaining

Most telcos are adopting a Container Network Function (CNF) model, where network functions are no longer implemented in hardware appliances but run as virtualized functions. With Cilium Multi-Network, operators can steer traffic to and from Pods across dedicated networks for service chaining.

Cilium Multi-Network Walkthrough

A Multi-Network model, where Pods have multiple network interfaces, implies the following requirements:

  1. Multiple IP pools from which the network interfaces can get their addresses
  2. Multiple networks to which the network interfaces can be attached
  3. Being able to apply different security policies depending on which network interface is being used
  4. Being able to observe the traffic using Hubble and discern which network interface the traffic is coming from.

Let’s walk through a complete example of how Cilium Multi-Network addresses each requirement:

Multi-Pool IPAM and Multi-Network

Cilium 1.14 introduced the Multi-Pool IP Address Management (IPAM) mode, where multiple pools of IP ranges can be created.
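
For context, in open-source Cilium this mode is selected through the ipam.mode setting. A minimal Helm sketch is shown below (the chart and release names are generic assumptions, additional datapath settings may be required depending on your environment, and the Isovalent Enterprise installation may differ):

$ helm upgrade cilium cilium/cilium --namespace kube-system \
    --reuse-values \
    --set ipam.mode=multi-pool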

With Multi-Pool IPAM, you can deploy a CiliumPodIPPool specifying the network range from which IPs will be allocated. The example below uses the 192.168.16.0/20 prefix, and its maskSize of 27 means that nodes are allocated /27 blocks (32 addresses each) from this pool.

apiVersion: cilium.io/v2alpha1
kind: CiliumPodIPPool
metadata:
  name: jupiter-ip-pool
spec:
  ipv4:
    cidrs:
    - 192.168.16.0/20
    maskSize: 27

The pool can then be attached directly to a network by using the IsovalentPodNetwork CRD. This CRD describes a virtual network a Pod can attach to.

---
apiVersion: isovalent.com/v1alpha1
kind: IsovalentPodNetwork
metadata:
  name: jupiter
spec:
  ipam:
    mode: multi-pool
    pool:
      name: jupiter-ip-pool
  routes:
  - destination: 192.168.0.0/16
    gateway: 192.168.0.1

When using the Multi-Network feature, you can use the network.v1alpha1.isovalent.com/pod-networks annotation to attach your Pod to a secondary network and get an IP address from the pool attached to that network (you will see more details about the annotation shortly).

Let’s try:

$ kubectl run nginx --image nginx --annotations network.v1alpha1.isovalent.com/pod-networks=jupiter
pod/nginx created
$ kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          30s   192.168.16.48   kind-worker2   <none>           <none>

The Pod has received an IP from the correct pool and network, and, as you can see below, the 192.168.16.32/27 block was carved out for the kind-worker2 node.

$ kubectl get ciliumnode kind-worker2 -o json | jq ".spec.ipam.pools"
{
  "allocated": [
    {
      "cidrs": [
        "10.10.1.0/24"
      ],
      "pool": "default"
    },
    {
      "cidrs": [
        "192.168.16.32/27"
      ],
      "pool": "jupiter-ip-pool"
    }
  ],
  "requested": [
    {
      "needed": {
        "ipv4-addrs": 16
      },
      "pool": "default"
    },
    {
      "needed": {
        "ipv4-addrs": 3
      },
      "pool": "jupiter-ip-pool"
    }
  ]
}

One of the benefits of Multi-Pool is that it can dynamically allocate more CIDR blocks as demand for IPs grows.

Let’s verify this with a Deployment of 60 Pods attached to this network:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nico
spec:
  selector:
    matchLabels:
      app: nginx-nico
  replicas: 60
  template:
    metadata:
      labels:
        app: nginx-nico
      annotations:
        network.v1alpha1.isovalent.com/pod-networks: jupiter
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.1
          ports:
            - containerPort: 80

Once deployed, you will see that an additional IP block – 192.168.16.64/27 – has been added to support the demand:

$ kubectl get ciliumnode kind-worker2 -o json | jq ".spec.ipam.pools"
{
  "allocated": [
    {
      "cidrs": [
        "10.10.1.0/24"
      ],
      "pool": "default"
    },
    {
      "cidrs": [
        "192.168.16.32/27",
        "192.168.16.64/27"
      ],
      "pool": "jupiter-ip-pool"
    }
  ],
  "requested": [
    {
      "needed": {
        "ipv4-addrs": 16
      },
      "pool": "default"
    },
    {
      "needed": {
        "ipv4-addrs": 31
      },
      "pool": "jupiter-ip-pool"
    }
  ]
}

Note the needed field above: the number of required IPs on this node grew from 3 to 31, so another /27 CIDR was allocated to the CiliumNode to meet the demand.

The other field in the IsovalentPodNetwork manifest to notice is the following:

---
apiVersion: isovalent.com/v1alpha1
[...]
spec:
 [...]
  routes:
  - destination: 192.168.0.0/16
    gateway: 192.168.0.1

This adds a route to steer traffic towards the secondary network. Here is an illustration of the routing table once Multi-Network is configured.

An illustration of the routing table once Multi-Network in Isovalent Enterprise for Cilium 1.14 is configured.
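
For a rough idea of what this looks like from inside a Pod attached to both the default and jupiter networks (such as the client Pod created later in this walkthrough), you can dump its routing table with ip route: you should see the 192.168.0.0/16 route from the jupiter IsovalentPodNetwork pointing at the secondary interface, alongside the default route on the primary interface. The output below is only an illustrative sketch (interface names, the primary gateway address, and the link routes are assumptions, not captured from this environment):

$ kubectl exec -it client -- ip route show
default via 10.10.2.190 dev eth0
10.10.2.190 dev eth0 scope link
192.168.0.0/16 via 192.168.0.1 dev cil1
192.168.0.1 dev cil1 scope link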

With the network and pool configured, our Pod is ready to be attached to multiple networks.

Multi-Network Interfaces

To connect a Pod to multiple networks through multiple interfaces, we need to annotate it accordingly.

As you saw above, when we want to attach a Pod to additional networks, we use the network.v1alpha1.isovalent.com/pod-networks annotation.

This annotation consists of a comma-delimited list of IsovalentPodNetwork names. For example, the annotation network.v1alpha1.isovalent.com/pod-networks: default,jupiter causes Cilium to attach the Pod to both the default and jupiter networks.

Let’s deploy three Pods, with three different annotations:

---
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
  annotations:
    network.v1alpha1.isovalent.com/pod-networks: default,jupiter
spec:
  containers:
  - name: client-container
    image: quay.io/cilium/alpine-curl:v1.5.0
    command:
      - /bin/ash
      - -c
      - sleep 10000000
---
apiVersion: v1
kind: Pod
metadata:
  name: server-default
  labels:
    app: server-default
  annotations:
    network.v1alpha1.isovalent.com/pod-networks: default
spec:
  containers:
  - name: server-jupiter-container
    image: quay.io/cilium/json-mock:v1.3.5
---
apiVersion: v1
kind: Pod
metadata:
  name: server-jupiter
  labels:
    app: server-jupiter
  annotations:
    network.v1alpha1.isovalent.com/pod-networks: jupiter
spec:
  containers:
  - name: server-jupiter-container
    image: quay.io/cilium/json-mock:v1.3.5

To summarize:

  • server-default will be connected only to the default network.
  • server-jupiter will be attached only to the jupiter network.
  • The client Pod will be attached to both.

Let’s verify that connectivity is working. First, let’s check our Pods:

$ kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP              NODE          NOMINATED NODE   READINESS GATES
client           1/1     Running   0          119s   10.10.2.159     kind-worker   <none>           <none>
server-default   1/1     Running   0          2m     10.10.2.105     kind-worker   <none>           <none>
server-jupiter   1/1     Running   0          119s   192.168.16.10   kind-worker   <none>           <none>

Note that, from this output, you can only see the primary IP address attached to client.

Next, let’s have a look at the Cilium Endpoints. All application containers which share a common IP address are considered a single Cilium Endpoint. All endpoints are assigned a security identity.

Let’s now verify that a CiliumEndpoint resource is present for each server Pod, and two CiliumEndpoint resources are present for the client Pod:

$ kubectl get ciliumendpoints.cilium.io
NAME             ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4            IPV6
client           1173          39016         <status disabled>     <status disabled>    <status disabled>   ready            10.10.2.159     
client-cil1      1569          29052         <status disabled>     <status disabled>    <status disabled>   ready            192.168.16.7    
server-default   2224          3321          <status disabled>     <status disabled>    <status disabled>   ready            10.10.2.105     
server-jupiter   3784          29478         <status disabled>     <status disabled>    <status disabled>   ready            192.168.16.10  

The client endpoint represents the primary Pod interface and the client-cil1 endpoint represents the secondary Pod interface of the client Pod. Both server Pods only have a single endpoint because they are each only attached to a single network.

Let’s verify that a secondary network interface cil1 was created in the client Pod, connecting it to the jupiter network:

$ kubectl exec -it client -- ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 7e:71:87:f9:ec:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.159/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7c71:87ff:fef9:ece1/64 scope link 
       valid_lft forever preferred_lft forever
17: cil1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 76:c2:81:95:6c:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.7/32 scope global cil1
       valid_lft forever preferred_lft forever
    inet6 fe80::74c2:81ff:fe95:6c35/64 scope link 
       valid_lft forever preferred_lft forever

Note that the cil1 IP (192.168.16.7 in the example above) matches the client-cil1 endpoint IP address in the Cilium Endpoint list above.
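
The connectivity tests below use the $SERVER_DEFAULT_IP and $SERVER_JUPITER_IP environment variables to refer to the two server Pod IPs. A minimal sketch of one way to set them, using the Pods deployed above:

$ SERVER_DEFAULT_IP=$(kubectl get pod server-default -o jsonpath='{.status.podIP}')
$ SERVER_JUPITER_IP=$(kubectl get pod server-jupiter -o jsonpath='{.status.podIP}')
$ echo $SERVER_DEFAULT_IP $SERVER_JUPITER_IP
10.10.2.105 192.168.16.10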

Next, let’s validate that the server is reachable by sending an HTTP GET request to the server’s /client-ip endpoint. In its reply, it will report the client’s IP address:

$ kubectl exec -it client -- curl $SERVER_DEFAULT_IP/client-ip
{
  "client-ip": "::ffff:10.10.1.173"
}

Finally, let’s validate connectivity in the jupiter network (192.168.16.0/20) from the multi-networked client pod to the server-jupiter pod:

$ kubectl exec -it client -- curl $SERVER_JUPITER_IP/client-ip
{
  "client-ip": "::ffff:192.168.16.12"
}

Connectivity was successful. The responses to the requests sent from client to the two server Pods show that each request left from a different source IP: the source IP was not masqueraded and remained that of the client Pod’s interface on the corresponding network. Both HTTP tests were successful.

But can we apply different network policies to these different network interfaces?

Let’s find out.

Multi-Network with Network Policies

Security enforcement architectures have traditionally been based on IP address filters. Given the churn and scale of microservices, IP-based filtering is largely impractical in the cloud-native world. To avoid these complications, which can limit scalability and flexibility, Cilium entirely separates security from network addressing: security is based on the identity of a Pod, which is derived from its labels.

Let’s have a look at the identity of the client endpoint. Its ID is 32716:

$ kubectl get cep client -o json | jq .status.identity
{
  "id": 32716,
  "labels": [
    "k8s:app=client",
    "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default",
    "k8s:io.cilium.k8s.policy.cluster=default",
    "k8s:io.cilium.k8s.policy.serviceaccount=default",
    "k8s:io.kubernetes.pod.namespace=default"
  ]
}

Let’s now have a look at the identity of the client-cil1 endpoint – in other words, the secondary interface on the same client Pod. Its ID is 1640:

$ kubectl get cep client-cil1 -o json | jq .status.identity
{
  "id": 1640,
  "labels": [
    "cni:com.isovalent.v1alpha1.network.attachment=jupiter",
    "k8s:app=client",
    "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default",
    "k8s:io.cilium.k8s.policy.cluster=default",
    "k8s:io.cilium.k8s.policy.serviceaccount=default",
    "k8s:io.kubernetes.pod.namespace=default"
  ]
}

The two IDs are different, and therefore different network policies can be applied to each network interface.

Every Cilium Endpoint associated with a secondary pod interface has one additional label of the form cni:com.isovalent.v1alpha1.network.attachment=<network name> added by the Cilium CNI plugin.

This is how each network interface gets its own Cilium security identity, allowing us to write policies that apply only to a specific network by selecting this extra label.

Let’s review a couple of Cilium Network Policies:

---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: server-jupiter-policy
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: server-jupiter
  ingress:
    - fromEndpoints:
        - matchLabels:
            cni:com.isovalent.v1alpha1.network.attachment: jupiter
      toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "GET"
              path: "/public"
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: server-default-policy
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: server-default
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: client
      toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "GET"
              path: "/client-ip"

The server-default-policy network policy allows traffic from the client Pod to server-default, and only HTTP GET requests to the "/client-ip" path.

The server-jupiter-policy network policy allows traffic from endpoints in the jupiter network to server-jupiter, and only HTTP GET requests to the "/public" path.

Therefore, curl tests from client to the servers would give the following results:

From client          /client-ip   /public
To server-default    Forwarded    Denied
To server-jupiter    Denied       Forwarded

Let’s verify:

$ kubectl exec -it client -- curl $SERVER_DEFAULT_IP/public
Access denied
$ kubectl exec -it client -- curl $SERVER_DEFAULT_IP/client-ip
{
  "client-ip": "::ffff:172.18.0.3"
}
$ kubectl exec -it client -- curl $SERVER_JUPITER_IP/public
[
  {
    "id": 1,
    "body": "public information"
  }
]
$ kubectl exec -it client -- curl $SERVER_JUPITER_IP/client-ip
Access denied

What this simple example highlights is that you can apply different network policies to your network interfaces!

Multi-Network with Isovalent Enterprise for Cilium 1.14 can be secured with granular network policies. But how can you observe the resulting traffic and quickly identify whether traffic was dropped or forwarded by a specific policy? By using Hubble, of course!

Multi-Network Observability with Hubble

If you are not familiar with the Hubble observability tool, check out the Hubble Series and download the Hubble Cheatsheet.

Using the hubble CLI, you can see all the requests from the client Pod – across both network interfaces it’s connected to:

$ hubble observe  --pod client --protocol icmp
Oct 20 10:12:35.767: default/client (ID:28437) <- default/server-default (ID:22670) to-stack FORWARDED (ICMPv4 EchoReply)
Oct 20 10:12:35.767: default/client (ID:28437) <- default/server-default (ID:22670) to-endpoint FORWARDED (ICMPv4 EchoReply)
Oct 20 10:12:40.900: default/client (ID:1613) -> default/server-jupiter (ID:12215) to-stack FORWARDED (ICMPv4 EchoRequest)
Oct 20 10:12:40.900: default/client (ID:1613) -> default/server-jupiter (ID:12215) to-endpoint FORWARDED (ICMPv4 EchoRequest)
Oct 20 10:13:49.431: default/client (ID:28437) -> default/server-default (ID:22670) to-stack FORWARDED (ICMPv4 EchoRequest)
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) policy-verdict:none INGRESS DENIED (ICMPv4 EchoRequest)
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) Policy denied DROPPED (ICMPv4 EchoRequest)
Oct 20 10:13:55.652: default/client (ID:1613) -> default/server-jupiter (ID:12215) to-stack FORWARDED (ICMPv4 EchoRequest)
Oct 20 10:13:55.652: default/client (ID:1613) <> default/server-jupiter (ID:12215) policy-verdict:none INGRESS DENIED (ICMPv4 EchoRequest)
Oct 20 10:13:55.652: default/client (ID:1613) <> default/server-jupiter (ID:12215) Policy denied DROPPED (ICMPv4 EchoRequest)

You can filter on the verdict with the --verdict flag, for example:

$ hubble observe  --pod client --protocol icmp --last 50 --verdict=DROPPED
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) policy-verdict:none INGRESS DENIED (ICMPv4 EchoRequest)
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) Policy denied DROPPED (ICMPv4 EchoRequest)
Oct 20 10:13:55.652: default/client (ID:1613) <> default/server-jupiter (ID:12215) policy-verdict:none INGRESS DENIED (ICMPv4 EchoRequest)
Oct 20 10:13:55.652: default/client (ID:1613) <> default/server-jupiter (ID:12215) Policy denied DROPPED (ICMPv4 EchoRequest)

If you want to see traffic coming from a particular interface, you can use the identity assigned to the endpoint of the interface. In the example above, 28437 is the ID of the client and 1613 is the ID of the secondary interface (client-cil1).

Let’s extract the identities of the client and client-cil1 endpoints and assign them to environment variables:

$ CLIENT_ID=$(kubectl get cep client -o jsonpath='{.status.identity.id}')
$ CLIENT_CIL1_ID=$(kubectl get cep client-cil1 -o jsonpath='{.status.identity.id}')
$ echo $CLIENT_ID
28437
$ echo $CLIENT_CIL1_ID
1613

Filter to the interface of your choice with:

$ hubble observe  --pod client --protocol icmp --last 50 --verdict=DROPPED --identity $CLIENT_ID
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) policy-verdict:none INGRESS DENIED (ICMPv4 EchoRequest)
Oct 20 10:13:49.431: default/client (ID:28437) <> default/server-default (ID:22670) Policy denied DROPPED (ICMPv4 EchoRequest)

You can also filter based on HTTP attributes, like the HTTP status code. For example, you can find all successful responses with this command:

$ hubble observe --http-status 200
Oct 20 10:13:46.284: default/client:35584 (ID:28437) <- default/server-default:80 (ID:22670) http-response FORWARDED (HTTP/1.1 200 3ms (GET http://10.10.2.192/client-ip))
Oct 20 10:14:02.526: default/client-cil1:42362 (ID:1613) <- default/server-jupiter:80 (ID:12215) http-response FORWARDED (HTTP/1.1 200 6ms (GET http://192.168.16.17/public))

As you can see, Multi-Network in Isovalent Enterprise for Cilium 1.14 works seamlessly with other Cilium features such as security policies and Hubble observability!

Multi-Network Status and Roadmap

While we are excited about the Multi-Network feature in Isovalent Enterprise for Cilium 1.14 – and I hope you are too! – it is a beta feature, and we plan to support more use cases in the future. We look forward to hearing from users about the use cases it can unlock for you!

In future releases, we will be including IPv6 support for Multi-Network, integration with Segment Routing version 6 (SRv6 support was added in the previous Isovalent Enterprise for Cilium release), and much more!

We will also keep a close eye on the Multi-Network Kubernetes Enhancement Proposal (KEP) as it matures.

Core Isovalent Enterprise for Cilium features

Advanced Networking Capabilities

Isovalent Enterprise for Cilium 1.13 introduced a set of advanced routing and connectivity features popular with large enterprises and telcos, including SRv6 L3 VPN support, Overlapping PodCIDR support for Cluster Mesh, Phantom Services for Cluster Mesh, and much more.

Platform Observability

Isovalent Enterprise for Cilium includes Role-Based Access Control (RBAC) for platform teams to let users access dashboards relevant to their namespaces, applications, and environments.

This enables application teams to have self-service access to their Hubble UI Enterprise interface and troubleshoot application connectivity issues without involving SREs in the process.

Zero Trust Visibility Lab

Creating the right Network Policies can be difficult. In this lab, you will use Hubble metrics to build a Network Policy Verdict dashboard in Grafana showing which flows need to be allowed in your policy approach.

Start Zero Trust Visibility Lab

Forensics and Auditing

From the Hubble UI Enterprise, operators have the ability to create network policies based on actual cluster traffic. 

Isovalent Enterprise for Cilium also includes Hubble Timescape – a time machine for observability data with powerful analytics capabilities.

While Hubble only provides real-time information, Hubble Timescape is an observability and analytics platform capable of storing and querying the observability data that Cilium and Hubble collect.

Isovalent Enterprise for Cilium also includes the ability to export logs to SIEM (Security Information and Event Management) platforms such as Splunk or an ELK (Elasticsearch, Logstash, and Kibana) stack.

To explore some of these enterprise features, check out some of the labs:

Network Policies Lab

Create Network Policies based on actual cluster traffic!

Start Network Policies Lab

Connectivity Visibility with Hubble Lab

Visualize label-aware, DNS-aware, and API-aware network connectivity within a Kubernetes environment!

Start Connectivity Visibility with Hubble Lab

Advanced Security Capabilities via Tetragon 

Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats.

To explore some of the Tetragon enterprise features, check out some of the labs:

Security Visibility Lab

In this lab, you will simulate the exploitation of a nodejs application, with the attacker spawning a reverse shell inside of a container and moving laterally within the Kubernetes environment. In the lab, learn how Tetragon Enterprise can trace the Lateral Movement and Data Exfiltration of the attacker post-exploit.

Start Security Visibility Lab

TLS Visibility Lab

In this lab, learn how you can use Isovalent Enterprise for Cilium to inspect TLS traffic and identify the version of TLS being used. You will also learn how to export events in JSON format to a SIEM.

Start TLS Visibility Lab

Enterprise-grade Resilience

Isovalent Enterprise for Cilium includes capabilities for organizations that require the highest level of availability. This includes features such as High Availability for DNS-aware network policy (video) and High Availability for the Cilium Egress Gateway (video).

Enterprise-grade Support

Last but certainly not least, Isovalent Enterprise for Cilium includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Customers also benefit from the help and training from professional services to deploy and manage Cilium in production environments.

Learn More!

If you’d like to learn more about Isovalent Enterprise for Cilium 1.14 and related topics, check out the following links:

  • Join the 1.14 release webinar – with Thomas Graf, Co-Creator of Cilium, CTO, and Co-Founder of Isovalent, to learn more about the latest and greatest open source and enterprise features of Isovalent Enterprise for Cilium and Cilium 1.14.
  • Request a Demo – Schedule a demo session with an Isovalent Solution Architect.
  • Read more about the Cilium 1.14 release – Effortless Mutual Authentication, Service Mesh, Networking Beyond Kubernetes, High-Scale Multi-Cluster, and Much More.
  • Learn more about Isovalent & Cilium at our resource library – It lists guides, tutorials, and interactive labs.

Feature Status

Here is a brief definition of the feature maturity levels used in this blog post:

  • Stable: A feature that is appropriate for production use in a variety of supported configurations due to significant hardening from testing and use.
  • Limited: A feature that is appropriate for production use only in specific scenarios and in close consultation with the Isovalent team.
  • Beta: A feature that is not appropriate for production use, but where user testing and feedback is requested. Customers should contact Isovalent support before considering Beta features.
Author: Nico Vibert, Senior Staff Technical Marketing Engineer
