Kubernetes Multi-Tenancy in 2024
Multi-tenancy is not a new concept, but the technology to implement it has evolved significantly over the past few years. Before diving into the tools, let’s first understand what multi-tenancy in Kubernetes means, and then we will discuss the tools that simplify it for production use.
We all know that Kubernetes adoption is on a constantly rising trajectory, so for most organizations, the first step is getting started with Kubernetes. With this increase in adoption, day 2 challenges have become the focus, and one of the biggest challenges is managing several Kubernetes clusters! Yes, organizations are creating Kubernetes clusters for everything — per team, per user, per application and more.
Kubernetes is a complex system capable of handling massive workloads. However, creating more and more Kubernetes clusters drives costs up. According to a CNCF survey, 70% of organizations identified overprovisioning as the leading cause of increased Kubernetes spending. This shows that running many Kubernetes clusters is not always the answer; using a smaller number of clusters more efficiently can make a significant difference.
This is where the concept of multi-tenancy comes in. In simple terms, multi-tenancy means dividing a Kubernetes cluster into multiple usable Kubernetes clusters. Let’s understand this concept with an example.
Imagine you are looking for a place to live, and you decide to buy a house. A house gives you great security, peace of mind and complete ownership. However, it also comes with a high cost of purchase and maintenance. If you instead opt for an apartment in a housing society, you still get a home that only you have access to, which you can decorate and modify as you like. While keeping that ownership, you also get to use shared resources such as the park, elevators and common areas, which are maintained by the housing society, thus reducing your maintenance overhead.
This is exactly what Kubernetes multi-tenancy means — you can create multiple Kubernetes clusters on a single host Kubernetes cluster. Although you don’t have access to the host cluster, you have complete ownership of a slice of the cluster, and you can reuse the applications from the host cluster.
In short, there are three pillars of multi-tenancy: isolation, fair resource usage and tenant autonomy. A cluster that achieves all three can be considered truly multi-tenant.
Natively within Kubernetes, there is a concept of namespaces, which is useful as many resources can be scoped to a namespace to create some level of isolation.
- Workload isolation can be achieved to a certain extent by applying pod security standards and preventing privileged access with policy engines such as Kyverno or jsPolicy. Additionally, you can define a well-structured network policy to restrict traffic to and from pods. When different teams have only namespace-level isolation, you may want to prevent them from communicating with each other while still allowing them to reach the Kubernetes API. An example policy for this scenario:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-policy
  namespace: tenant-1
spec:
  podSelector: {}   # apply to all pods in the tenant namespace
  policyTypes:
  - Egress
  egress:
  # Allow egress to the internet, but not to private/internal ranges,
  # and allow traffic within the tenant's own namespace
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 100.64.0.0/10
        - 127.0.0.0/8
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    - namespaceSelector:
        matchLabels:
          tenant: tenant-1
  # Allow DNS resolution
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # Allow access to the Kubernetes API server
  - ports:
    - port: 443
    - port: 8443
    to:
    - ipBlock:
        cidr: ${KUBE_API}/32
```
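The pod security standards mentioned above can also be enforced natively, without a policy engine, by labeling the tenant namespace. A minimal sketch (the `tenant-1` namespace name is illustrative, matching the network policy example):

```yaml
# Enforce the "restricted" Pod Security Standard for every pod
# created in this namespace, and warn on violations as well.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-1
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

With this in place, the built-in Pod Security admission controller rejects privileged pods in the tenant's namespace.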
- For managing resource usage, you can use Kubernetes objects such as ResourceQuota to define the limit of resources that can be created within a cluster. You can also add LimitRange to set CPU and memory limits.
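For example, a per-tenant quota and default container limits might look like this (the namespace name and resource values are illustrative; tune them per tenant):

```yaml
# Cap the total resources the tenant's namespace can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-1-quota
  namespace: tenant-1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
---
# Apply default CPU/memory requests and limits to containers
# that do not specify their own.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-1-limits
  namespace: tenant-1
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
```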
While these namespace-level resources help create some isolation, achieving true multi-tenancy is still challenging due to several factors:
- Management overhead grows quickly as the number of tenants increases.
- Distributing a separate kubeconfig to each team is a manual process.
- Cluster-scoped resources, such as CRDs, cannot be isolated at the namespace level.
- Noisy neighbors can still contend for shared resources.
- Tenants cannot run different Kubernetes versions on the same cluster.
- Nor can tenants run different versions of a shared, cluster-scoped application.
- There is still a single control plane and a single state for the whole cluster.
Yes, multi-tenancy is hard if we rely solely on native Kubernetes constructs. Even with all of these measures in place, automating the whole process, rather than manually defining everything for each tenant, remains a challenge.
This is where tools like vCluster help to address multi-tenancy challenges. vCluster is an open-source tool that helps you create virtual Kubernetes clusters, each with its own control plane components and cluster state, in an automated way.
When you create a virtual machine in your cloud account, you gain full access to that virtual machine, but it is actually a slice of physical hardware in a data center. Similarly, a virtual cluster is a slice of a Kubernetes cluster — you have full access to it and complete ownership, but ultimately, it is still a part of a larger Kubernetes cluster.
What are the benefits of using vCluster?
vCluster helps you achieve multi-tenancy.
Instead of managing multiple Kubernetes clusters, you can now have a single Kubernetes cluster and use the vCluster CLI to create virtual clusters. These virtual clusters can reuse the host cluster’s resources, such as Cert Manager, NGINX Ingress Controller, Vault and more. Each virtual cluster will have its own independent kubeconfig file, allowing teams to deploy their workloads independently. This approach is more secure than namespace-based isolation because each virtual cluster has its own control plane and state (with options such as SQLite, embedded etcd or external etcd).
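As a sketch, a typical vCluster workflow looks like the following (the cluster and namespace names are illustrative; consult the vCluster documentation for the current flags):

```
# Create a virtual cluster inside a namespace of the host cluster
vcluster create team-a --namespace team-a

# Connect to the virtual cluster; this switches your kubeconfig
# context to the virtual cluster's own API server
vcluster connect team-a

# The team now works against its own control plane
kubectl get namespaces

# Back on the host: list or remove virtual clusters
vcluster list
vcluster delete team-a
```

Each `vcluster create` provisions a dedicated control plane, so teams get their own API server and cluster state without a new host cluster.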
Conclusion
We believe that multi-tenancy in Kubernetes is a solution to many existing challenges and allows you to use Kubernetes to its full potential. It significantly reduces the amount of wasted resources and brings down costs considerably. Whether you aim to save costs or build an internal developer platform, multi-tenancy is the right approach — though it can be challenging to implement. Solutions such as vCluster help address these challenges by offering a control plane per tenant, rather than a cluster per tenant.
If you’d like to try out vCluster or vCluster Pro (with enhanced capabilities for enterprises), please visit our website. You can also try a sample scenario on Killercoda.
To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12–15, 2024.