pgEdge Adds Ability to Distribute Postgres Across Multiple Kubernetes Clusters
pgEdge today revealed that the distribution of the open source Postgres database it provides can now be deployed across multiple Kubernetes clusters.
Antony Pegg, director of product management for pgEdge, said pgEdge Containers on Kubernetes enables IT teams to deploy logical instances of a Postgres database across multiple clusters. That approach makes it simpler to horizontally scale a database across a distributed computing environment in a way that helps reduce overall latency, he added.
For example, an instance of a Postgres database can be physically located closer to the point where data is being created and consumed, while still being centrally managed as one logical instance, he added.
Alternatively, IT teams may opt for this approach to ensure high availability of Postgres databases by making it possible to fail over to another instance in the event of an outage.
IT teams now have two options for deploying a distributed instance of Postgres using container images that support versions 16 through 18 of Postgres. The first is a minimal version that includes only the core pgEdge extensions; the second is a standard edition that adds further extensions to the core database, such as pgVector, PostGIS, and pgAudit.
The core database itself is available under an OSI-approved PostgreSQL License, with pgEdge Containers on Kubernetes available on the GitHub Container Registry.
Previously, pgEdge made available an open source operator for deploying its database, but it is now deepening that integration by enabling its database to be deployed using a set of distributed containers. That effort is now being advanced under the auspices of the Cloud Native Computing Foundation (CNCF). pgEdge also provides a Helm chart alternative that has been updated to add support for pgEdge Containers on Kubernetes, as well as Patroni, a Python tool for deploying high-availability instances of Postgres.
Exactly how widely a Postgres database might be distributed depends on the nature of the application and the amount of network bandwidth made available to replicate data, said Pegg. The more an application depends on reads versus writes to the database, the easier it becomes to manage latency requirements, he added. There is at least one organization that has distributed an instance of Postgres across 20 clusters.
It’s not clear just how many stateful applications are now being deployed on Kubernetes clusters, but as more organizations build and deploy cloud-native instances of these applications, there has been a significant increase in database deployments on Kubernetes clusters. Many of those databases are based on Postgres, which has gained significant traction as an open source relational database alternative to proprietary databases.
Of course, as the number of those databases increases, there is going to be more need for collaboration between database administrators (DBAs) and the DevOps teams that typically manage Kubernetes clusters. There may even be a greater incentive to bridge the culture gap that exists between these disparate IT disciplines within a larger platform engineering team.
Regardless of approach, the one certain thing is that cloud-native application environments are only going to become more complex to manage as additional databases are added.