Red Hat Extends OpenShift Operator to Integration Platform

Red Hat has made it possible to install, upgrade and manage Red Hat Integration components using an operator developed for the Red Hat OpenShift platform, in addition to the existing operator that installs those components on Kubernetes clusters.

Sameer Parulkar, director of product marketing for Red Hat Integration, says the Red Hat Integration Operator developed for Red Hat OpenShift will make it simpler to deploy integration components at the same time IT teams install and update the Kubernetes-based Red Hat OpenShift platform.

In addition, Red Hat has tightened the integration between the change data capture component of Red Hat Integration and its service registry, allowing the registry to be automatically populated with JSON and Apache Avro schemas for discovery and enforcement.

Finally, Red Hat has added a connector for IBM Db2, which joins MongoDB, MySQL, PostgreSQL and Microsoft SQL Server as databases from which change data can be captured.

Parulkar says the goal is to make it easier to build integrations across hybrid cloud computing environments based on Red Hat OpenShift, which increasingly include event-driven platforms built on, for example, open source Apache Kafka software that is employed to consume data in near-real time. The change data capture and service registry components, based respectively on the open source Debezium and Apicurio software, can identify changes in an application’s data that are then automatically published to a Kafka backbone.
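To make that flow concrete, here is a minimal sketch, in Python, of an application consuming Debezium change events from a Kafka topic while resolving the Avro schemas the registry has been populated with. The broker address, registry URL, topic and field names are illustrative assumptions rather than details from Red Hat, and the sketch assumes the registry exposes a Confluent-compatible REST API the confluent-kafka client can use.

```python
# Minimal sketch: consume Debezium change events from Kafka, resolving the
# Avro schemas registered by the change data capture pipeline. All endpoints,
# topic and field names here are hypothetical.
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import StringDeserializer

# Assumes the service registry exposes a Confluent-compatible REST endpoint.
registry = SchemaRegistryClient({"url": "http://service-registry:8080/apis/ccompat/v6"})

consumer = DeserializingConsumer({
    "bootstrap.servers": "kafka-broker:9092",
    "group.id": "cdc-demo",
    "auto.offset.reset": "earliest",
    "key.deserializer": StringDeserializer("utf_8"),
    # Fetches the writer schema from the registry for each message.
    "value.deserializer": AvroDeserializer(schema_registry_client=registry),
})

# Debezium topics are typically named <server>.<schema>.<table>.
consumer.subscribe(["dbserver1.inventory.customers"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.value() is None:
            continue
        event = msg.value()
        # A Debezium change event includes before/after row images and an
        # "op" code: c=create, u=update, d=delete, r=snapshot read.
        print(event.get("op"), event.get("after"))
finally:
    consumer.close()
```

Each change event carries before and after row images plus an operation code, which is what lets downstream services react to database changes in near-real time.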

The move to add Red Hat Integration support to the operators used to deploy Red Hat OpenShift comes at a time when IT organizations that have adopted Kubernetes are starting to appreciate how extensible operators, a concept originally developed by CoreOS before its acquisition by Red Hat in 2018, have become. In fact, today there are operators for deploying nearly every type of Kubernetes platform. The challenge is that operators are becoming too much of a good thing, so IT teams will likely start moving toward building or extending a single operator to deploy either a full-stack environment, or some subset of one, that they have specifically defined.
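For readers unfamiliar with what building or extending an operator involves, the sketch below shows the basic pattern in Python using the open source kopf framework: a handler watches a custom resource and drives the cluster toward the state that resource declares. The custom resource group, kind and fields are hypothetical and are not tied to any Red Hat operator.

```python
# Minimal operator sketch using the open source kopf framework; the custom
# resource (integrationstacks.example.com) and its fields are hypothetical.
import kopf
import kubernetes

# Load cluster credentials: in-cluster when deployed, kubeconfig for local runs.
try:
    kubernetes.config.load_incluster_config()
except kubernetes.config.ConfigException:
    kubernetes.config.load_kube_config()


@kopf.on.create("example.com", "v1", "integrationstacks")
def create_stack(spec, name, namespace, logger, **kwargs):
    """Reconcile toward the declared state: one Deployment per listed component."""
    apps = kubernetes.client.AppsV1Api()
    for component in spec.get("components", []):
        labels = {"app": f"{name}-{component['name']}"}
        body = {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": f"{name}-{component['name']}", "labels": labels},
            "spec": {
                "replicas": component.get("replicas", 1),
                "selector": {"matchLabels": labels},
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {
                        "containers": [
                            {"name": component["name"], "image": component["image"]}
                        ]
                    },
                },
            },
        }
        apps.create_namespaced_deployment(namespace=namespace, body=body)
        logger.info("created deployment for %s", component["name"])
    # Values returned from the handler are written to the resource's status.
    return {"deployed": len(spec.get("components", []))}
```

Running this with kopf run, after registering the corresponding custom resource definition, is enough for the cluster to act on new integrationstacks resources; more elaborate vendor operators follow the same declarative pattern.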

Collectively, those operators are going a long way toward making Kubernetes more accessible, from a management perspective, to the average IT administrator. Arguably, one reason Kubernetes is not as widely deployed as it might be is that, in the absence of operators, the average IT administrator lacks the programming skills required to deploy and update all the components that make up a Kubernetes environment. In effect, the management of Kubernetes environments is becoming increasingly democratized.

In fact, IT teams that have embraced Kubernetes are already seeing operators proliferate. It’s not uncommon to find multiple operators for the same open source platform or tool as various members of those communities roll out their own. Vendors are often eager to consolidate those efforts, with varying degrees of success.

Regardless of who builds an operator, the management of Kubernetes is becoming more of a team sport. As Kubernetes becomes more accessible to IT administrators, it will become more commonplace for a DevOps team to, for example, initially configure a Kubernetes cluster that is then managed by a traditional IT team. The challenge now is defining the best practices around which those teams will be organized.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
