NetApp Promises Kubernetes-Native Unified Data Store

At the KubeCon + CloudNativeCon North America conference this week, NetApp announced it will soon make available a preview of a data store for containers and virtual machines that runs natively on Kubernetes.

Eric Han, vice president of product management for NetApp, says the NetApp Astra Data Store also lays the groundwork for providing IT teams with a data store that eventually will be able to support any storage protocol.

NetApp Astra Data Store is scheduled to be available in the first half of 2022 and will initially provide access to file services running on Kubernetes clusters. Most file services provided on Kubernetes today are layered on top of object storage systems that require IT teams to deploy dedicated client software, rather than simply employing the same network file system (NFS) client used to access file services everywhere else, notes Han. As a result, applications that need those file services have to be rearchitected, adds Han.
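To illustrate the point about using a standard NFS client, the sketch below shows how a pod on any Kubernetes cluster can mount a plain NFS export through the built-in NFS volume type, using the official Kubernetes Python client. This is a generic example, not NetApp's API; the server address and export path are hypothetical placeholders.

```python
# Illustrative sketch (not NetApp-specific): consuming a file service over plain NFS
# from a pod, with no vendor client software inside the container.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nfs-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="busybox",
                command=["sh", "-c", "ls /data && sleep 3600"],
                volume_mounts=[client.V1VolumeMount(name="shared-files", mount_path="/data")],
            )
        ],
        volumes=[
            client.V1Volume(
                name="shared-files",
                # Any NFS-capable file service can be mounted this way; the server
                # and path below are placeholders for illustration only.
                nfs=client.V1NFSVolumeSource(server="10.0.0.10", path="/exports/shared"),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```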

IT teams are also required to manage separate data stores for containers and virtual machines. The NetApp Astra Data Store will ultimately reduce the management headaches that those separate data stores create today, says Han.

That capability will prove critical as organizations look to build and deploy a mix of legacy monolithic and microservices-based applications across a hybrid cloud computing environment, adds Han. In fact, a parallel file system built on a common pool of storage resources will enable IT teams to manage storage more efficiently, notes Han.

NetApp Astra Data Store extends the company’s portfolio of offerings for Kubernetes, which includes NetApp Astra Control, a fully managed, application-aware Kubernetes data management service, and NetApp Astra Trident, a dynamic storage provisioner for Kubernetes clusters built on the container storage interface (CSI).

There’s currently a lot of debate over the degree to which data should be stored on a Kubernetes cluster versus deploying stateless applications that store data on an external storage platform. Kubernetes itself provides a persistent storage mechanism for containers in the form of persistent volumes (PVs), which make it possible to access data well beyond the lifespan of any given pod. Regular volumes let users mount storage into a pod so that data can be shared among its containers, but those volumes are deleted when the pod hosting them is shut down. A persistent volume, by contrast, is a cluster-level resource that exists independently of any pod, which ensures the data remains accessible. IT teams manage storage in Kubernetes through a PersistentVolumeClaim (PVC), which requests storage on behalf of a workload; the PersistentVolume (PV), which is bound to that claim and manages the storage life cycle; and a StorageClass, which defines different classes of storage service.
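A minimal sketch of that PVC workflow, again using the Kubernetes Python client, is shown below. The StorageClass name "standard-files" is an assumption for illustration; a real cluster would define its own classes backed by whatever provisioner it uses.

```python
# Minimal sketch of the PVC/PV/StorageClass workflow described above.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# 1. A PersistentVolumeClaim requests storage of a given size and access mode
#    against an assumed StorageClass ("standard-files").
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="standard-files",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# 2. A pod mounts the claim; the PersistentVolume bound to it outlives the pod,
#    so the data remains accessible after the pod is deleted.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pvc-consumer"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="busybox",
                command=["sh", "-c", "sleep 3600"],
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
            )
        ],
        volumes=[
            client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="data-claim"
                ),
            )
        ],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```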

In most cases, IT teams will find themselves needing to access data stored both on a Kubernetes cluster and on an external storage system. The challenge will be managing all the data being accessed by both cloud-native and monolithic applications regardless of where it happens to be physically stored.


Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
