MinIO Allies With F5 to Optimize AI Workloads Deployed on Kubernetes Clusters
MinIO, a provider of an object-based storage platform, has partnered with F5 to make it simpler for organizations to deploy artificial intelligence (AI) workloads on Kubernetes clusters.
Jonathan Symonds, chief marketing officer for MinIO, said that as more organizations look to operationalize AI, it has become apparent that load balancing is a critical element of ensuring performance requirements are met and maintained.
MinIO and F5 are working together to integrate their respective platforms to, for example, ensure that hundreds of petabytes of data stored on the AIStor platform from MinIO are made accessible across a highly distributed computing environment, said Symonds.
With the rise of AI workloads, data management has become a major challenge. Many AI workloads require access to data in near real-time, which requires IT teams to adopt object storage systems capable of processing data at very low latency. MinIO, for example, claims its AIStor platform is capable of processing data at 2.2 TiB/sec and 1.0 TiB/sec across 260 nodes equipped with NVMe drives, accessed over a 100GbE network.
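To put those aggregate figures in perspective, a quick back-of-the-envelope calculation shows what each node would need to sustain. This is a sketch under stated assumptions: it assumes throughput scales evenly across all 260 nodes, and the labeling of the two figures as reads and writes is an assumption, since the article quotes the numbers without labeling them.

```python
# Back-of-the-envelope check of the quoted aggregate throughput figures.
# Assumption: throughput is spread evenly across all 260 nodes, and the
# 2.2 TiB/s (reads) vs. 1.0 TiB/s (writes) split is inferred, not stated.

def per_node_gib_per_sec(aggregate_tib_per_sec: float, nodes: int) -> float:
    """Convert an aggregate TiB/s figure into a per-node GiB/s figure."""
    return aggregate_tib_per_sec * 1024 / nodes

read_per_node = per_node_gib_per_sec(2.2, 260)   # roughly 8.66 GiB/s per node
write_per_node = per_node_gib_per_sec(1.0, 260)  # roughly 3.94 GiB/s per node

# A 100GbE link carries at most 12.5 GB/s (about 11.6 GiB/s) of raw
# bandwidth, so the implied per-node read rate fits within a single
# 100GbE interface, consistent with the network described in the claim.
print(f"reads: {read_per_node:.2f} GiB/s/node, writes: {write_per_node:.2f} GiB/s/node")
```

The point of the exercise is that the quoted cluster-level numbers are plausible only because each node stays under the ceiling of its own network interface, which is exactly where load balancing across nodes becomes critical.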
That approach provides the added benefit of making it simpler to take advantage of multiple classes of processors to train and deploy AI models without becoming locked into a specific platform, noted Symonds. While graphics processing units (GPUs), for example, are employed heavily to train AI models today, in the future more organizations will find themselves employing multiple classes of processors to train AI models at a lower cost, he added.
The alliance with F5 will make it simpler to ensure that data is securely made available in the right place at the right time, said Symonds.
It’s not clear to what degree data management and other related IT infrastructure challenges are holding back deployments of AI applications, but many legacy platforms will need to be replaced. Traditional file servers, for example, are not designed to provide access to massive amounts of unstructured data at the scale many AI workloads will require.
The one certain thing is that most of those AI workloads are going to be deployed on Kubernetes clusters that more easily scale up and down to meet those processing requirements. As such, the need to support stateful applications running on Kubernetes clusters is only going to continue to expand.
In the meantime, the race to operationalize AI is on. Organizations of all sizes are going to be routinely working with terabytes or more of data to train, customize and deploy AI models, and just about every one of them is going to need access to an enterprise-class storage system. While there is no shortage of these systems to be found on any cloud service, many of these organizations are also going to find that, because of compliance issues and security concerns, they will need to maintain control of all that data themselves. That, of course, is only possible if the IT organization actually controls the underlying storage platforms being used to manage that data.