Kubernetes: The End of the DIY Era?

Is Kubernetes too much of a good thing?

In the world of cloud technologies, working in open source generally means you have more and better choices. If your code is built on open source, you have more choices of platforms and cloud providers. If you’re a developer, open source probably means you have more choices of projects to work on, and possibly even a greater choice of employers.

But with choice, as with anything else, you can sometimes have too much of a good thing. Case in point: Kubernetes.

Kubernetes had its origins at Google but has now spread well beyond it, to the point that more than half of commits come from outside Google. The result is an explosion of options for Kubernetes and its surrounding ecosystem, which together form one of the basic technology sets that any cloud-native, or cloud-adopting, team needs to learn.

On the face of it, the burgeoning of choices for Kubernetes is awesome. The same code can now run on nearly any cloud. Enterprises and developers have lots of options.

Unfortunately, there are way too many options. There’s just too much to learn, too quickly, and this steep learning curve can complicate implementation and negatively affect organizational culture.

At the highest level, you have to decide whether to invest in an internal knowledge base or to vet external cloud services. The first option is incredibly labor-intensive and hard to scale, while the second raises concerns of vendor lock-in.

At the lowest level, you have to make decisions across networking, storage and cluster configuration. Knowing that these decisions are dynamic and often cascading, how should you set up your load balancers, ingress controllers and API gateway? Do you go with Flannel or Weave for cluster networking, and do you add Calico or Cilium for policy management?
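
To make one of those decisions concrete, here’s a minimal sketch of what a policy-management choice looks like once you’ve settled on a CNI with policy support (Calico or Cilium, for example): a NetworkPolicy created with the official Kubernetes Python client that limits which pods can talk to your web tier. The namespace, labels and names below are illustrative assumptions, not a recommendation for any particular setup.

# Sketch: restrict ingress to the "web" pods so that only pods labeled
# role=frontend in the same namespace can reach them. Enforcement depends
# on a CNI that supports NetworkPolicy (e.g., Calico or Cilium).
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig context

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="web-allow-frontend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"role": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="demo", body=policy
)

Multiply a small choice like this across load balancers, ingress controllers, API gateways and storage, and the scope of the decision-making becomes clear.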

Understanding the implications of storage choices (storage services or configuring your own inside the cluster) and multi-tenancy management (a cluster per team or everything within one cluster) can make a world of difference. Never mind the complexity when you start to add in features like pod autoscaling, node autoscaling and service meshes.
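
To give a flavor of the shared-cluster route, here is a rough sketch, again using the Python client, that carves out a namespace for one team, caps it with a resource quota and attaches a basic CPU-based pod autoscaler to that team’s deployment. The team name, quota figures, deployment name and CPU target are all placeholder assumptions.

# Sketch: namespace-per-team multi-tenancy plus pod autoscaling.
# Assumes a deployment named "orders-api" already exists in team-a.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# One namespace per team, with a CPU/memory quota as a guard rail.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)
core.create_namespaced_resource_quota(
    namespace="team-a",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)

# Horizontal pod autoscaler: scale the team's deployment on CPU usage.
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="team-a",
    body=client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="orders-api-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="orders-api"
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    ),
)

Node autoscaling and service meshes sit on top of all this and bring their own configuration surface.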

The Problems with DIY K8s

If your organization is embarking on a journey to move at least some of its operations to the cloud, or to start building cloud-native applications, you’ll have a lot riding on that first champion team: the team that will demonstrate not only that it’s possible to move to the cloud, but that it’s an excellent idea. It’s a little crazy to take that first champion team, tell them they’re in charge of investigating cloud-native technologies and let them loose on the more than 300 projects that make up the world of Kubernetes. It’s just too much.

Then, assuming they get their Kubernetes choices together, the team will need to become adept at all the lifecycle management aspects of Kubernetes: standing up clusters, maintaining them, networking, storage and keeping up with releases. In the Kubernetes world, those releases can come every three months. You’ll also be assuming the responsibility, and liability, for keeping the whole thing secure. Managing this all in-house means taking on a significant amount of risk.
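
As a small example of that ongoing upkeep, here’s the kind of housekeeping script an in-house team ends up writing sooner or later: a quick check that compares the control-plane version against each node’s kubelet version to spot skew before the next upgrade. It’s a sketch only, assumes a working kubeconfig, and is no substitute for a real upgrade and patching process.

# Sketch: report control-plane vs. kubelet versions to spot skew
# ahead of the next quarterly upgrade.
from kubernetes import client, config

config.load_kube_config()

server = client.VersionApi().get_code()
print(f"control plane: {server.git_version}")

for node in client.CoreV1Api().list_node().items:
    print(f"{node.metadata.name}: kubelet {node.status.node_info.kubelet_version}")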

As you’ve probably already found, it’s not easy to hire the right people to do this, and it’s expensive. There’s just no escaping the fact that, in keeping all of this in-house, you’re investing resources, time and money in becoming an expert in an entire ecosystem, when you could be concentrating on the aspects of your business that set you apart and provide value to your customers.

In addition, Kubernetes already faces significant barriers to adoption in the enterprise. Large, traditional businesses have invested in established technologies and their IT teams have developed expertise in managing them. Getting these teams to pay attention to, and develop skills around, any new technology is daunting—let alone a new technology as broad and complex as Kubernetes.

Managing the Problem

Despite these issues, Kubernetes can help organizations deliver significant value. But how?

The challenges of standing up Kubernetes clusters, maintaining them, networking and storage are moving out of the realm of do-it-yourself. Managed services for Kubernetes have emerged to provide businesses with curated, integrated Kubernetes stacks, and the open source Kubernetes conformance certification alleviates lock-in concerns.

These managed services relieve you of the burden of being a Kubernetes expert and provide a lower barrier to entry in the enterprise. Even the most advanced early users of Kubernetes are moving toward this model, because it enables their teams to move higher up the stack and frees them to do more important work that ultimately will differentiate them from their competitors.

When in doubt, start small with something new and take advantage of managed services to ease the burden of jumping into the cloud-native fray of Kubernetes.

Bob Quillin

Bob Quillin is Vice President of Developer Relations for Oracle Cloud Infrastructure. He was previously CEO and co-founder of StackEngine, an Austin-based Docker and container management platform startup acquired by Oracle.
