GitLab Brings Kubernetes Operator to Red Hat OpenShift

GitLab announced today that the Kubernetes operator it created for its namesake continuous integration/continuous delivery (CI/CD) platform now supports the Red Hat OpenShift platform.

Red Hat OpenShift is an enterprise-grade distribution of Kubernetes that is employed both inside and outside of cloud computing environments to build and deploy microservices-based applications.

Joshua Lambert, director of product management for enablement at GitLab, says the Kubernetes operator created by GitLab makes it easier to deploy, in Kubernetes environments, the same CI/CD platform that organizations already use to build monolithic applications. That approach eliminates the need to deploy and manage two completely separate CI/CD platforms, notes Lambert.

Originally developed by CoreOS, which was later acquired by Red Hat, the operator pattern automates the installation and configuration of software on Kubernetes clusters and streamlines the deployment of any subsequent upgrades. Many IT organizations are embracing operators as an alternative to Helm charts for deploying applications on Kubernetes clusters.

GitLab, however, also makes available a cloud-native Helm chart that some IT teams still prefer to use when deploying applications on vanilla instances of Kubernetes. That chart, in fact, provides configuration data that the GitLab Operator consumes. On its own, though, the cloud-native Helm chart cannot be used to deploy GitLab on the Red Hat OpenShift platform.
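To make that relationship concrete, the sketch below uses the official Kubernetes Python client to hand a GitLab custom resource to a cluster where the GitLab Operator is assumed to already be installed. The apps.gitlab.com/v1beta1 API group, the spec.chart fields, the chart version and the embedded Helm values shown here are illustrative assumptions about how such a resource is typically shaped, not details taken from the announcement.

```python
# Minimal sketch: submit a GitLab custom resource for the operator to reconcile.
# Assumes the GitLab Operator and its CRD are already installed in the cluster;
# field names and values below are illustrative, not authoritative.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig context
api = client.CustomObjectsApi()

gitlab_cr = {
    "apiVersion": "apps.gitlab.com/v1beta1",  # assumed API group/version
    "kind": "GitLab",
    "metadata": {"name": "gitlab", "namespace": "gitlab-system"},
    "spec": {
        "chart": {
            "version": "6.11.5",  # hypothetical chart version
            "values": {
                # Helm chart configuration data consumed by the operator
                "global": {"hosts": {"domain": "example.com"}},
                "certmanager-issuer": {"email": "admin@example.com"},
            },
        }
    },
}

# Hand the declarative spec to the cluster; the operator reconciles it into a
# running GitLab deployment and handles subsequent upgrades.
api.create_namespaced_custom_object(
    group="apps.gitlab.com",
    version="v1beta1",
    namespace="gitlab-system",
    plural="gitlabs",
    body=gitlab_cr,
)
```

The point of the example is the division of labor: the team declares what it wants, reusing Helm chart values it may already have, and the operator carries out installation, configuration and upgrades.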

In general, the rate at which microservices-based applications are being deployed on Kubernetes clusters is steadily increasing. Rather than employing a CI/CD platform that has been assembled over time from disparate codebases, GitLab has been making the case for an integrated CI/CD platform built on a common codebase that is simpler to manage. The overall goal is to enable organizations to spend less time maintaining a DevOps platform and devote more resources to writing code.

It’s not clear to what degree DevOps teams might consider replacing their existing platforms as they transition to building microservices-based applications. However, most organizations will find themselves building, deploying and updating both monolithic and microservices-based applications for years to come.

The bulk of the applications running in IT environments today are monolithic, which means they must be updated with patches any time an organization wants to add new functionality or address a security issue. Containers, conversely, make it possible to rip and replace artifacts to add new functionality or remediate a vulnerability. That approach not only accelerates the rate at which applications are updated but also improves the overall security posture of the application environment.

IT operations teams, meanwhile, are embracing Kubernetes because the platform enables them to consume IT infrastructure resources more efficiently as application requirements scale up and down. It may not yet be as easy for developers to build applications on Kubernetes clusters as many might like. However, because of the benefits containers and Kubernetes already provide, the shift toward microservices-based applications is clearly starting to accelerate. The challenge now is figuring out how to build, deploy, secure and maintain those applications at an unprecedented scale.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
