Google Adds GKE Option to GDC Platform for Hybrid IT Environments
Google this week added the Google Kubernetes Engine (GKE) to its Google Distributed Cloud (GDC) platform, which enables IT teams to deploy Google-defined IT infrastructure stacks on any cloud.
Announced at the Google Cloud Next ’24 conference, this extension to the GDC platform promises to make it easier for IT teams to centralize the management of cloud-native applications running on Kubernetes clusters alongside other applications.
Bobby Allen, group product manager for Google Cloud, said Google has a multi-pronged approach to building and deploying cloud-native applications. The primary focus is Cloud Run, a platform for building and deploying container-based applications without the orchestration capabilities provided by Kubernetes. Cloud Run gives application developers a simpler experience that demands less specialized skill and is therefore more accessible, he said.
GKE, meanwhile, should be used to build and deploy more complex cloud-native applications that require a platform engineering team to orchestrate, said Allen. A container application deployed on Cloud Run might one day migrate to GKE, but IT teams should take the time to determine which platform best fits each application rather than defaulting to Kubernetes for every container workload, he noted.
At the same time, Google provides access to two flavors of managed serverless computing via Google Cloud Serverless. One provides functions that are generally used to invoke IT infrastructure resources on demand, while the other gives developers a more familiar container construct for building and deploying applications that scale up and down as needed.
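The container construct described above follows the contract common to Cloud Run-style serverless platforms: the container simply listens for HTTP requests on the port named in the PORT environment variable, and the platform handles scaling. As a minimal sketch (the handler, greeting text, and 8080 default here are illustrative assumptions, not Google sample code), such a service can be written with nothing but the Python standard library:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer any GET request with a short plain-text body.
        body = b"Hello from a container-style service\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def main():
    # Serverless container platforms typically inject the listening
    # port via the PORT environment variable; default to 8080 locally.
    port = int(os.environ.get("PORT", "8080"))
    server = HTTPServer(("0.0.0.0", port), Handler)
    server.serve_forever()


if __name__ == "__main__":
    main()
```

Because the service is just an HTTP server in a container image, the same code runs unchanged on a laptop, on Cloud Run, or on a Kubernetes cluster, which is what makes the later migration path Allen describes practical.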
GDC, meanwhile, provides IT teams with an alternative approach that enables them to deploy a cloud computing environment they manage.
In general, IT organizations need to be more deliberate about the platforms they employ depending on the complexity of the application, said Allen. Many platform engineering teams, for example, have a bias toward Kubernetes that doesn't account for the cognitive load the platform places on developers building cloud-native applications, he noted. There are still plenty of applications that may require a platform engineering team to manage Kubernetes, but the bulk of container applications today can be deployed more easily on Cloud Run, said Allen.
This week, for example, Google demonstrated how a generative artificial intelligence (AI) application could be provisioned on Cloud Run in under five minutes.
Regardless of the platform selected, the number of cloud-native applications being built and deployed continues to steadily increase. Each organization will need to decide to what degree it prefers to manage the underlying infrastructure required to deploy those applications, but it's clear more organizations are leaning toward various classes of managed cloud services so they can devote more resources to building and deploying applications.
It's still relatively early days so far as building and deploying cloud-native applications in the enterprise is concerned, but one thing is clear: there is no shortage of platform options. The challenge, as always, is determining how best to weave them together within the context of a larger IT strategy.