Google Adds More Storage Services for GKE

As part of a broader expansion of its cloud storage services, Google is extending Filestore Enterprise, its service for accessing NFS-based storage, so that it can be consumed by Google Kubernetes Engine (GKE) clusters running on the Google Cloud Platform (GCP) without any additional configuration.
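In practice, GKE exposes Filestore through a managed CSI driver, so a workload can request an NFS share with nothing more than a standard PersistentVolumeClaim. The sketch below, written with the official Kubernetes Python client, is illustrative only: it assumes the Filestore CSI driver is enabled on the cluster and that a GKE-provided "enterprise-rwx" storage class backs Filestore Enterprise, which should be verified against the classes actually installed in a given cluster.

```python
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (e.g. after
# `gcloud container clusters get-credentials`).
config.load_kube_config()

# A PersistentVolumeClaim against the assumed "enterprise-rwx" class;
# NFS-backed shares support ReadWriteMany, so many pods can mount it.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="filestore-enterprise-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="enterprise-rwx",
        resources=client.V1ResourceRequirements(requests={"storage": "1Ti"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Once the claim is bound, pods reference it like any other volume; there is no NFS server, export or mount to configure by hand, which is the point of the integration.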

Combined with Backup for GKE, Filestore Enterprise enables enterprises to modernize by bringing their stateful workloads into GKE.
At the same time, Google has introduced Google Cloud Hyperdisk, a next-generation version of its existing persistent disk service that makes it easier to tune storage to the requirements of a stateful workload.
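The tuning Hyperdisk promises amounts to provisioning performance independently of capacity. As a rough illustration, the sketch below uses the google-cloud-compute Python client to create a disk with explicitly provisioned IOPS; the "hyperdisk-extreme" type name, the project and the zone are assumptions to check against what Google actually offers in a given region.

```python
from google.cloud import compute_v1

# Hypothetical project and zone.
project, zone = "my-project", "us-central1-a"

# Capacity and IOPS are dialed in separately, rather than IOPS scaling
# implicitly with disk size as on earlier persistent disk types.
disk = compute_v1.Disk(
    name="tuned-stateful-disk",
    type_=f"zones/{zone}/diskTypes/hyperdisk-extreme",
    size_gb=500,
    provisioned_iops=20000,
)

operation = compute_v1.DisksClient().insert(
    project=project, zone=zone, disk_resource=disk
)
operation.result()  # block until the disk is provisioned
```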

Google is also adding a Cloud Storage Autoclass tool that automatically moves objects to lower-cost storage classes based on policies set by IT teams, along with a Storage Insights tool that provides actionable insights into the objects stored in Google Cloud Storage (GCS).
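Because Autoclass is a bucket-level setting rather than a per-object lifecycle rule, enabling it when a bucket is created is enough for Google to begin transitioning objects automatically. A minimal sketch, assuming a recent google-cloud-storage Python release that exposes the autoclass_enabled property (the bucket name is hypothetical):

```python
from google.cloud import storage

client = storage.Client()

# Flag the bucket for Autoclass before creation; objects then move to
# colder storage classes automatically based on access patterns.
bucket = client.bucket("example-autoclass-bucket")
bucket.autoclass_enabled = True
client.create_bucket(bucket, location="US")
```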

Finally, Google has added a Google Backup and Data Recovery (BADR) data protection service for critical applications and databases. Google already offers a recovery time objective (RTO) of zero and an optional recovery point objective (RPO) of less than 15 minutes.

Sean Derrington, group product manager for storage on Google Cloud, says Google Cloud storage is gaining traction because a single application programming interface (API) provides access to multiple storage services. That’s critical, he notes, because as more stateful applications are built and deployed in the cloud, application developers need a simple way to invoke a range of storage services.
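That single-API argument is easiest to see in Google Cloud Storage itself, where the client that writes an object can also move it between storage classes without touching a different service. A brief sketch using the google-cloud-storage Python library; the bucket, object and file names are hypothetical:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")

# Write an object with the bucket's default (Standard) storage class...
blob = bucket.blob("reports/2022-q3.csv")
blob.upload_from_filename("2022-q3.csv")

# ...then shift the same object to a colder class through the same API.
blob.update_storage_class("NEARLINE")
```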

At the core of the Google Cloud Storage service is Colossus, a cluster-level global file system that stores and manages data across all of Google’s cloud storage services.

It’s not clear to what degree the storage services that a cloud service provider offers are a major factor in driving IT organizations to embrace one cloud service versus another. However, more organizations are now starting to build and deploy stateful cloud-native applications in the cloud that require access to a range of low-latency storage services.

Exactly how all that data will be managed also remains undetermined in most organizations. Unlike in on-premises IT environments, there usually isn’t a dedicated storage administrator for data stored in the cloud. However, as the volume of data continues to grow, organizations are investing more in cloud data warehouses and data lakes to manage data regardless of the underlying storage technology employed. In most cases, multiple applications are now trying to access a common pool of centrally managed data.

Inevitably, data will be strewn across multiple clouds and on-premises IT environments running both cloud-native and legacy monolithic applications. How well all that data is managed will vary widely by organization, but the rate at which data is created continues to increase exponentially. More challenging still, the types of data being stored have never been more varied. The issue now is finding the best way to turn all of that data into an actual business asset.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise and editor-in-chief of CRN and InfoWorld.
