Why Kubernetes Would Benefit From a PaaS
A recent article explains that Kubernetes is now within reach for SMB adoption, and it got us thinking: What are the primary barriers to adoption for Kubernetes? Given the extraordinary popularity of the project, what are the issues that arise when a software engineering team looks to start using Kubernetes?
Several answers to this question show up when you search the web. They can largely be summarized as: a lack of capacity, inadequate training and difficulty navigating the complexities of the ecosystem. While these answers cover a reasonable spread of issues, we find they are not granular enough to do the topic justice.
First, let’s address why organizations might want to adopt Kubernetes. Why should a business care about the infrastructure abstraction their software engineering team uses?
In today’s digital landscape, it is critical for organizations to deliver a high-quality digital experience. This is true not only for customer-facing applications and services, but also for internal platforms that are essential for productivity. Delivering a flawless experience in both areas is the promise of Kubernetes.
Let’s also break down the different personas that exist in a modern software engineering team. This will allow us to address specific concerns and distinct problems each group faces when working with Kubernetes and build those into the larger solution that a whole team/organization needs.
1. The Software Developer
The use of Kubernetes implies the use of containers. Creating containers requires skill and adds overhead in the form of Dockerfiles and other artifacts that must be written, built and maintained. Cultural shifts such as shift left and technology trends such as infrastructure as code (IaC) place the responsibility for writing and maintaining these files squarely on the development team. The second big challenge facing developers is inner-loop development. What runs during initial tests may be very different from what runs in production, and Kubernetes is not immune to this; it perpetuates the “It runs on my machine!” problem. The other inner-loop hurdle developers face is the amount of time each rebuild takes.
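As a rough illustration of the kind of artifact developers end up owning, here is a minimal Dockerfile for a hypothetical Node.js service (the base image, port and entry point are assumptions, not anything prescribed by Kubernetes itself):

```dockerfile
# Minimal container image for a hypothetical Node.js service
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between rebuilds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and declare the listening port
COPY . .
EXPOSE 8080

CMD ["node", "server.js"]
```

Even a file this small encodes decisions (base image, layer ordering, ports) that someone on the team has to keep current as the application evolves, and rebuilding it on every code change is exactly the inner-loop delay described above.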
2. The Operations Professional
This could be a DevOps engineer or a dedicated SRE; either group has its share of issues when working with Kubernetes. For those responsible for application reliability, low-latency request serving and operational excellence, practices that have been perfected over the years have to be rearchitected for Kubernetes. For example: How do they roll out updates to a running application? Do they use a rolling-update strategy or blue-green deployments? How do they design a CI/CD pipeline that deploys repeatably? Should that CI/CD tooling live independently of Kubernetes or run alongside the pods that run the applications themselves? And how do they maintain configuration for different target environments while avoiding drift or creep?
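As a sketch of one of those decisions: a Kubernetes Deployment can declare a rolling-update strategy directly in its manifest. The names, image and replica count below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod created during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.3
          ports:
            - containerPort: 8080
```

Blue-green deployments, by contrast, are not a built-in strategy type; teams typically implement them by switching a Service’s selector between two Deployments. That is exactly the kind of extra machinery ops teams have to design, test and maintain themselves.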
3. The Engineering Manager
Engineering managers care about what fraction of development time goes into building actual applications versus the adjacent work required to maintain stacks on Kubernetes. They care about the efficiency of the processes in place and the efficacy of the team. Engineering managers are also expected to track the cloud spend incurred by Kubernetes deployments and manage that budget responsibly.
4. The Business Stakeholder
Those who work with larger business goals in mind articulate problems in different ways. They tend to focus on the implications that would arise with Kubernetes infrastructure, rather than its internals. For example, they might want to know if applications running on complex stacks are able to fulfill business needs adequately. They would tend to bridge engineering efforts with go-to-market strategies and therefore would require assurance about engineering teams’ ability to release at a frequency harmonious with market demands.
Right now, there isn’t a silver bullet technology that can meet the needs of all these personas. However, PaaS tools can come close to addressing all of these concerns. The pillars of PaaS can help architect a solution to the problems associated with Kubernetes.
PaaS tools have perfected the source-code-to-URL experience. If developers trigger a workflow and provide the repository, PaaS tools are capable of returning a URL after processing the code, building it and deploying it. This way, developers can work quickly and efficiently without wrangling with complex deployment workflows.
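Cloud Foundry’s `cf push` is a well-known example of this source-code-to-URL flow; a session might look roughly like the following (the app name and resulting route are illustrative):

```shell
# From the root of the application repository:
cf push my-app

# The platform detects the language, builds an immutable artifact,
# deploys it and maps a route. Afterwards:
cf apps
```

The developer never writes a Dockerfile or a Deployment manifest by hand; the platform’s buildpacks and deployment machinery handle those steps behind the single command.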
By separating application development from the means to create immutable artifacts for deployment, software engineering teams can liberate individual engineers to focus on application development. This further reduces the need for skilling and reskilling on various tools that help developers perfect the build/deploy experience. Furthermore, service brokers can be used to add services to the application and connect with them all through internal configuration, eliminating the need for additional scripting.
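In the Cloud Foundry model, for instance, the service-broker workflow reduces to a couple of CLI commands. The service offering, plan and instance names here are assumptions for illustration:

```shell
# Provision a database instance from the platform's marketplace
cf create-service my-db-offering small orders-db

# Bind it to the app; credentials are injected into the app's
# environment rather than scripted by hand
cf bind-service my-app orders-db
cf restage my-app
```

The binding step is what eliminates the additional scripting mentioned above: connection details arrive through configuration the platform manages, not through glue code the team writes.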
By using the same deployment process for every build, PaaS tools can ensure homogeneity across builds, increasing parity between production instances and all other environments. PaaS tools can also provide unified management consoles that integrate with billing APIs, and they can be extended with additional layers for governance and auditing of applications.
Applying PaaS abstractions to Kubernetes infrastructure increases the value that teams can extract from them. We believe strongly that employing a PaaS abstraction can simplify the Kubernetes experience and greatly benefit software engineering teams.
Ram Iyengar, chief evangelist, Cloud Foundry, co-authored this piece.
To hear more about cloud-native topics, join the Cloud Native Computing Foundation and the cloud-native community at KubeCon+CloudNativeCon North America 2022, October 24-28, 2022.