Kubernetes: The ‘All Things’ Platform
Kubernetes has become firmly embedded in the development life cycle. It has emerged as a cornerstone technology, now in use (or under evaluation) at 96% of organizations. Lately, though, Kubernetes has moved beyond being a mere tool for scaling applications to become a comprehensive platform for a diverse range of cloud-native operations.
Kubernetes has been dubbed the ‘OS of the cloud.’ For example, it’s common for open source tools to be deployed as Kubernetes Operators. Of course, not everyone is drinking the Kubernetes Kool-Aid, but it sure has captured the attention of a sizeable chunk of enterprise IT. Just take the size of the latest KubeCon, which attracted over 7,000 in-person attendees.
I recently chatted with Omer Hamerman, principal DevOps engineer at Zesty, to gather more perspectives on the paradigm shift toward ‘everything Kubernetes.’ I also received insight from Venkat Ramakrishnan, VP of product management and engineering at Portworx by Pure Storage. Together, they shed light on Kubernetes’ expanding role, its management challenges and its effect on the modern developer experience.
The Expanding Role of Kubernetes in DevOps
The role of Kubernetes has evolved dramatically, transitioning from being a tool primarily focused on scaling applications to becoming a versatile platform for deploying virtually everything. From apps and databases to security systems, Kubernetes has become the deployment target for various components.
Hamerman noted that “everything is getting deployed into Kubernetes,” marking a significant transformation. However, he also cautioned that while it makes sense for Kubernetes to handle various components, it might not always be the best decision. It’s essential to balance what Kubernetes can handle effectively and the optimal architecture for each use case.
Considerations Against Using Kubernetes for Everything
While Kubernetes has become a platform for deploying a wide range of components, there are considerations against adopting it as a one-size-fits-all solution. Hamerman pointed out that running Kubernetes yourself to deploy infrastructure carries substantial management overhead compared to managed services, which abstract much of that work away. Kubernetes necessitates managing nodes, pods, configurations, networking and more, which can become overwhelming as the complexity of the components grows.
Containers, which form the fundamental unit in Kubernetes, are great for some purposes but not always the best fit for every component. Hamerman emphasized that databases, for instance, are not always suited to containers due to performance issues and the challenges of configuring such services in these environments. That said, StatefulSets attempt to address this by allowing pods to be configured with attached persistent storage.
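To make that concrete, below is a minimal sketch of a StatefulSet that requests per-pod persistent storage through volumeClaimTemplates. The image, storage class name and sizes are illustrative placeholders, not a recommended database deployment.

```yaml
# Minimal sketch: a StatefulSet with per-pod persistent storage.
# Image, storage class and sizes are illustrative placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db          # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:16       # example image; tuning a database for containers is non-trivial
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one PersistentVolumeClaim is created per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard # assumes a StorageClass named "standard" exists
        resources:
          requests:
            storage: 10Gi
```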
Kubernetes Management Challenges and Solutions
There are a number of challenges associated with going all-in on Kubernetes. One significant challenge revolves around the tension between developers’ desire for broad access and the system’s need for stability. Continuous updates and deployments can also introduce disruptions, making stability harder to maintain. Hamerman suggested that keeping rolling updates as stable as possible is a key management challenge, requiring careful coordination and balance.
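As a rough sketch of that coordination, a Deployment’s rolling update strategy can limit how many pods are replaced at once while readiness checks gate traffic. The values and image below are illustrative assumptions.

```yaml
# Sketch: constraining a rolling update so releases stay stable.
# maxUnavailable/maxSurge values and the image are illustrative, not a recommendation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove a serving pod before its replacement is ready
      maxSurge: 1         # roll out one new pod at a time
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.2.3   # placeholder image
          readinessProbe:            # gate traffic on readiness so a bad rollout stalls instead of spreading
            httpGet:
              path: /healthz
              port: 8080
```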
Another challenge is enabling observability for developers while adhering to policies and procedures. Kubernetes simplifies installing observability tools through Helm, which aids in adding monitoring and observability operators. However, ensuring that developers can access the necessary information without directly interacting with the system presents an authorization challenge.
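One common way to approach that authorization challenge is namespace-scoped, read-only RBAC, so developers can inspect pods, logs and events without broader write access. This is a minimal sketch; the “developers” group and “staging” namespace are hypothetical.

```yaml
# Sketch: read-only access for developers in a single namespace.
# The "developers" group and "staging" namespace are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-read-only
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "services", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-read-only
  namespace: staging
subjects:
  - kind: Group
    name: developers            # assumes your identity provider maps users into this group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-read-only
  apiGroup: rbac.authorization.k8s.io
```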
Potential Challenges With K8s Ubiquity
Other potential concerns include reining in rising costs and dealing with heterogeneous environments. On the cost front, smart optimization plays a pivotal role in reducing the spend associated with Kubernetes and the cloud. Hamerman highlighted a few areas where automation can make a difference:
- Automated Storage Provisioning: Kubernetes applications often require persistent storage. Automating storage provisioning based on actual workload needs can prevent over-provisioning, leading to cost savings.
- Node Creation Optimization: Automation tools like AWS’s Karpenter can continuously monitor nodes and adjust their numbers based on demand, reducing unnecessary overhead.
- Pod Sizing Optimization: Vertical and horizontal scaling can be optimized using K8s-native tools like Vertical Pod Autoscaler and Horizontal Pod Autoscaler to ensure efficient resource utilization (see the sketch after this list).
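To make the pod-sizing point concrete, here is a minimal HorizontalPodAutoscaler sketch that scales a hypothetical Deployment on CPU utilization. The target name, replica bounds and utilization threshold are illustrative assumptions.

```yaml
# Sketch: CPU-based horizontal scaling with the autoscaling/v2 API.
# "example-api" and the utilization target are illustrative assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api        # the Deployment being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds ~70% of requests
```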
Furthermore, distributing Kubernetes across multiple clouds or clusters presents other complex challenges. While multi-cloud adoption is rare due to legal, regulatory and business considerations, using multiple clusters within a single cloud provider is more common. Yet, the engineering effort required to manage across diverse cloud providers is substantial, said Hamerman. Challenges arise regarding connectivity, network configurations and latency across different regions and availability zones.
The Modern Developer Experience in the Era of Kubernetes
Ramakrishnan shed light on how the convergence around Kubernetes has significantly shaped the modern developer experience. He noted that Kubernetes enables more of a platform-based self-service model for developers. This shift gives developers more control over their applications, enabling rapid changes and accelerating time to production. As he explained:
“As developers have multiplied and apps have proliferated, the old concept of middleware—an app server that was ticket-based, but always on-call—is now occupied by platform engineering that provides a self-service model for developers. Additionally, under the platform-engineering umbrella, DevOps now has more organized resources, such as a budget, a team and a set of self-service tools so developers can manage their apps in production more directly. This shift from ticketing to using an elastic infrastructure that can be deployed using a platform approach indicates an improvement in responsiveness for developers. Now, developers can make changes to the applications they are working on very quickly, which can enable them to accelerate time to production.”
The Future Trajectory of Kubernetes
Hamerman envisions Kubernetes transitioning from being a platform mainly associated with cloud providers to becoming a versatile and vendor-neutral “all things” platform. This trajectory involves Kubernetes as a generic system for deploying various components, regardless of the underlying infrastructure. Looking to the future, Hamerman predicts that Kubernetes will continue to support a wide range of technologies, including platform engineering, AI/ML deployments and emerging cloud-native tools promoted by organizations like the CNCF.
In this evolving landscape, Kubernetes is poised to become a unifying force, enabling seamless deployment and management of diverse technologies across various cloud providers and environments. Yet, this trend does not come without its own set of drawbacks, namely regarding security. “Consuming services from cloud services is much safer than running them on your own in a cluster,” noted Hamerman.
No doubt, as the industry continues to navigate these changes, Kubernetes will remain at the forefront, driving efficiency, scalability and agility in modern development and operations practices.