Latest Kubernetes Update Increases Enterprise Appeal
The Technical Oversight Committee (TOC) for Kubernetes this week released a 1.30 update that, among other capabilities, includes a Recursive Read-Only (RRO) Mounts feature. The new feature, now in alpha, ensures that a read-only volume mount also applies to any submounts beneath it, preventing accidental modification of data and improving security.
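As an illustrative sketch of how the alpha feature is requested (the pod, volume, and path names here are hypothetical, and the cluster must have the RecursiveReadOnlyMounts feature gate enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-demo            # hypothetical name
spec:
  volumes:
    - name: data
      hostPath:
        path: /var/lib/data # hypothetical host path
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true
          # Alpha field: also make any submounts under /data read-only,
          # not just the top-level mount.
          recursiveReadOnly: Enabled
```

Without the new field, a read-only mount leaves submounts writable; setting recursiveReadOnly to Enabled closes that gap.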
In addition, a spec.trafficDistribution field within a Kubernetes Service, also in alpha, enables teams to define preferences for how traffic should be routed to endpoints.
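A minimal sketch of the new field (the service name, selector, and ports are placeholders, and the alpha ServiceTrafficDistribution feature gate must be enabled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical name
spec:
  selector:
    app: my-app      # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
  # Alpha field: prefer routing traffic to topologically closer
  # endpoints (e.g., the same zone) when healthy ones are available.
  trafficDistribution: PreferClose
```

PreferClose expresses a preference rather than a hard constraint: if no close endpoints are available, traffic still reaches endpoints elsewhere.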
Finally, other capabilities being added in alpha include the ability to define when a Job can be declared succeeded and a change that avoids recursively relabeling volumes with SELinux labels to improve performance on that platform.
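To illustrate the Job success policy (a sketch assuming the alpha JobSuccessPolicy feature gate; the Job name, image, and index choice are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: index-job           # hypothetical name
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed   # success policies apply to indexed Jobs
  # Alpha field: declare the whole Job succeeded once index 0
  # finishes successfully, even if other indices are still running.
  successPolicy:
    rules:
      - succeededIndices: "0"
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo done"]
```

This suits leader/worker patterns where one designated index produces the result and the remaining pods are disposable.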
There are quite a few new features beyond these. Among them: structured parameters for dynamic resource allocation, node memory swap support, user namespaces in pods, container resource-based pod autoscaling, and Common Expression Language (CEL) for admission control.
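CEL admission control lets teams express validation rules directly in the API, without writing a webhook. A minimal sketch (the policy name and the replica limit are hypothetical; a ValidatingAdmissionPolicyBinding is also needed to put the policy into effect):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit       # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated in-process by the API server.
    - expression: "object.spec.replicas <= 5"
      message: "Deployments may not request more than 5 replicas."
```

Because the expression runs inside the API server, it avoids the latency and availability risks of an external admission webhook.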
Paul Nashawaty, practice lead for application development for the Futurum Group, said these additions represent a concerted effort to optimize resource utilization, improve scalability, strengthen security measures, and empower developers. Many of them are, of course, capabilities that enterprise IT teams expect in a platform that continues to mature, he added.
In total, 22 capabilities previously available in beta have graduated to stable, including a capability to now determine whether a pod is actually ready to run a workload. Four capabilities previously available in alpha are now available in beta, including a node log query tool and an ability to inject more context into logs.
It’s not clear at what pace IT teams are rolling out the latest version of Kubernetes. Many will wait until the provider of their Kubernetes distribution adds support for this release. However, it’s not uncommon for organizations to run multiple versions of Kubernetes in production environments, even though the TOC officially provides active support only for the three most recent Kubernetes releases. The goal is to encourage IT teams to stay current, but many are concerned that upgrading a Kubernetes cluster will break an application that depends on application programming interfaces (APIs) that might be deprecated.
In the meantime, the debate over to what degree to embrace Kubernetes to run cloud-native applications continues unabated. Most developers today are working with containers that can be deployed anywhere, but considerably fewer take advantage of Kubernetes’s orchestration capabilities, primarily because the platform is too complex for most developers to invoke programmatically. As a result, many organizations that do deploy cloud-native applications are setting up dedicated platform engineering teams to manage Kubernetes clusters on behalf of multiple application development teams.
Alternatively, many organizations might opt to rely on managed services to run Kubernetes clusters on their behalf, freeing up more resources to devote to building applications.
One way or another, the overall percentage of cloud-native applications being built and deployed continues to steadily increase. Less clear is which platform lends itself best to running those applications, given all the options that IT teams have today.