Akamai Unfurls Managed Container Service for Distributed Applications
Akamai Technologies today launched a Managed Container Service that enables organizations to deploy cloud-native applications across a global network made up of more than 4,300 points of presence (PoPs).
Ari Weil, vice president of product marketing for Akamai, said the goal is to enable application developers to deploy business logic across a platform-as-a-service (PaaS) environment spanning PoPs that are located in more than 700 cities.
Unlike a traditional PaaS offering, however, Akamai enables IT teams to specify where their applications run and on what class of IT infrastructure, including graphics processing units (GPUs), object storage systems and databases, he added.
Akamai initially supports Docker containers, with applications now being tested in more than 100 cities. Support for Kubernetes and other types of containers will follow shortly, said Weil.
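Because the service starts with standard Docker containers, the packaging step looks the same as it would for any other target. A minimal sketch of what a team would containerize, assuming a hypothetical Node.js service (nothing here is Akamai-specific):

```dockerfile
# Hypothetical web service image; illustrative only, not an Akamai example.
FROM node:20-alpine
WORKDIR /app

# Install production dependencies first to take advantage of layer caching.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy application source and declare the listening port.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Since the result is a standard OCI image, the same build could in principle be pushed to any registry and scheduled onto whichever PoPs a team selects.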
In effect, Akamai is embedding a series of cloud services it gained with the acquisition of Linode into a distributed network that is already used to deploy traditional web applications, noted Weil. That capability allows Akamai to provide a set of platform engineering services, including observability, that enable organizations to devote more resources to building applications, he said.
Ultimately, Akamai plans to give IT teams access to a set of artificial intelligence (AI) agents through which they can simply specify the level of performance and throughput needed for each container application, added Weil. IT teams will then be able to deploy their applications via a set of natural language prompts.
It’s not clear how many containerized applications organizations are planning to deploy at the network edge, but as IT continues to evolve, more applications are being built that need to process and analyze data as close as possible to where that data is created and consumed. Exactly where data is located, relative to the business logic that processes it, is becoming a more critical consideration. The Akamai Managed Container Service provides access to a distributed computing environment that will, for example, make it simpler to deploy a latency-sensitive AI application packaged in a set of containers at the network edge, noted Weil.
That’s critical because many AI applications need to be able to provide an interactive user experience that is highly sensitive to network latency, he added.
The number of organizations that will need to take advantage of a distributed computing environment to reduce application latency far exceeds the number that can build and maintain one on their own. As a consequence, most organizations are going to need to rely on some type of service to deploy distributed applications. The issue then becomes determining which service has the best reach into geographic locations that are closest to the user base an application serves.
There will inevitably be multiple service providers offering these capabilities, but those capable of delivering the greatest network throughput will eventually separate themselves from the rest as the number of latency-sensitive applications being deployed continues to increase.