The Cloud Native Computing Foundation (CNCF) announced today that Strimzi, an open source project that provides Operators for declaratively deploying the Apache Kafka messaging platform on Kubernetes clusters, has become an incubation-level project.
More than 1,600 contributors from more than 180 organizations already contribute to the Strimzi project, which was initially launched by Red Hat. It has been a CNCF sandbox project since August 2019 and is used in production environments by Axual, Atruvia, Decathlon, LittleHorse and SBB.
Strimzi itself has three core components. A Cluster Operator deploys an Apache Kafka cluster by starting the brokers with the desired configuration and manages rolling upgrades. The Topic Operator enables IT teams to create, update and delete topics using a KafkaTopic custom resource, an extension of the Kubernetes application programming interface (API). The User Operator enables IT teams to define access permissions for topics using a KafkaUser custom resource.
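To illustrate the declarative model described above, here is a minimal sketch of the two custom resources the Topic and User Operators reconcile. The names (my-cluster, my-topic, my-user) are placeholders, and the exact fields available vary by Strimzi version:

```yaml
# A KafkaTopic resource: the Topic Operator watches these and
# creates/updates the matching topic in the Kafka cluster named
# in the strimzi.io/cluster label.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
---
# A KafkaUser resource: the User Operator creates credentials and
# applies the declared ACLs for this user.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operation: Read
```

Applying these manifests with kubectl is all an IT team needs to do; the Operators handle the corresponding Kafka administration calls.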
Other components provide support for the OAuth 2.0 protocol within Kafka, an HTTP-based endpoint for interacting with a Kafka cluster, and the ability to configure Kafka using a Kubernetes ConfigMap or environment variables.
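The HTTP-based endpoint mentioned above is itself deployed declaratively. A minimal sketch of a KafkaBridge resource follows; the names and the bootstrap address are placeholders, and available options depend on the Strimzi version in use:

```yaml
# A KafkaBridge resource: the Cluster Operator deploys an HTTP
# bridge that exposes the Kafka cluster over a REST interface,
# so clients without a native Kafka library can produce and consume.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
```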
The goal is to work with the CNCF to build enough momentum behind streamlining the deployment of Apache Kafka, a platform IT teams employ for everything from sharing log data to building complex event-driven applications.
Jakub Scholz, a core maintainer for Strimzi, said that, as Apache Kafka continues to evolve, the project will add support for KRaft, a Raft-based consensus protocol that eliminates the need for a separate ZooKeeper cluster to manage metadata within a Kafka environment.
Operators are extensions to Kubernetes that use custom resources and a standard set of APIs, and they have emerged as a primary method for automating the deployment of software on Kubernetes clusters. Beyond playing a major role in reducing the complexity of Kubernetes environments, Operators are also being created by some IT teams to simplify the deployment of an entire stack of software.
It’s not clear how pervasively Operators are employed, but as the amount of software being deployed on Kubernetes steadily increases, so will the number of Operators in use. In some instances, multiple Operators exist for the same application.
Overall, it’s clear that deploying and managing applications on Kubernetes clusters is becoming simpler. The underlying clusters may still be more challenging to manage than other platforms, but as higher levels of abstraction for automating the management of these environments are employed, the level of expertise required is declining. In fact, as it becomes simpler for IT administrators to manage these environments, the pace at which software is deployed on Kubernetes clusters should increase.
Of course, there are messaging platform options other than Kafka, but this type of capability is becoming more crucial as the volume of data that needs to be shared between applications continues to increase. The issue now is finding a way to manage Kafka and Kubernetes together that doesn’t always require the skills of a software engineering team, skills that are often hard to both find and retain.