TriggerMesh Melds AWS Lambda with Kubernetes

IT organizations looking to move functions developed for the Lambda serverless computing framework from Amazon Web Services (AWS) to Kubernetes clusters can now take advantage of the open source TriggerMesh Knative Lambda Runtime (KLR) project.

The TriggerMesh Knative Lambda Runtime (KLR) project was developed as part of a management framework TriggerMesh created for serverless computing frameworks. A TriggerMesh cross-cloud event bus allows users to trigger functions from event sources in any cloud, including on-premises IT environments.
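To illustrate the kind of wiring the event bus handles, the minimal sketch below shows a function that accepts events delivered over HTTP in the CloudEvents binary format, which is how Knative eventing typically hands events to a service. This is not TriggerMesh's actual API; the port, headers and payload handling are illustrative only.

    # Minimal sketch (illustrative, not TriggerMesh's actual API): a function
    # invoked by events delivered over HTTP in CloudEvents binary mode.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class EventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # CloudEvents binary mode carries metadata in ce-* headers
            # and the event payload in the request body.
            event_type = self.headers.get("ce-type", "unknown")
            event_source = self.headers.get("ce-source", "unknown")
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            payload = json.loads(body) if body else {}

            # The function body: act on the event regardless of which
            # cloud (or on-premises system) produced it.
            print(f"received {event_type} from {event_source}: {payload}")

            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # Knative routes traffic to the container's HTTP port.
        HTTPServer(("", 8080), EventHandler).serve_forever()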

TriggerMesh co-founder Mark Hinkle says it’s clear Kubernetes will be the dominant platform on which next-generation applications are deployed. The KLR project extends the reach of Knative, middleware software developed by Google to support multiple open source serverless computing frameworks, to AWS Lambda, which is based on a proprietary architecture.

As cloud-native applications continue to evolve, serverless computing frameworks are emerging as natural extensions to Kubernetes clusters. Serverless computing frameworks, also known as functions-as-a-service (FaaS), make use of event-driven architectures to create a container-based layer of abstraction that eliminates the need to account for server constraints when building an application. When additional resources are required, a developer invokes a function that calls a stateless set of compute functions, which are then made available. Each function is a self-contained module of code that accomplishes a specific task and, once written, can be reused multiple times. The KLR project makes it possible to reuse functions developed for AWS Lambda on any Knative-compatible serverless computing framework.
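To make that reuse scenario concrete, here is a sketch of what such a self-contained function looks like. The handler signature (event, context) is the standard convention for Lambda functions written in Python; the event shape and field names below are illustrative, and KLR's stated goal is to let this kind of function run on a Knative cluster without modification.

    # A self-contained, Lambda-style function in Python.
    import json

    def handler(event, context):
        # Pull a field out of the triggering event; fall back to a default.
        name = event.get("name", "world") if isinstance(event, dict) else "world"

        # Return a response in the shape HTTP-style invocations commonly expect.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }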

Knative is gaining momentum as a method for integrating a wide range of serverless computing frameworks that can be deployed on top of Kubernetes. IBM, Pivotal, Red Hat and SAP have embraced Knative.

A major challenge IT organizations will face as they embrace serverless computing frameworks will be the need to adjust their DevOps processes. In effect, a function makes available a set of resources for processing tasks that need to occur in parallel with the primary workload. Developers first must design their applications to take advantage of those parallelization capabilities; the resulting functions then need to be integrated into a larger continuous integration/continuous deployment (CI/CD) framework as they are added and reused.

On the plus side, the cost of processing workloads that need access to compute resources for only a few seconds should drop substantially, as the amount of infrastructure that must be allocated to an application is reduced significantly.

Hinkle goes so far as to say containers, microservices, Kubernetes and serverless computing frameworks are modern instantiations of the service-oriented architecture (SOA) pioneered three decades earlier. The difference today is that these technologies are being adopted at a rapid clip.

The next big DevOps issue, of course, is determining precisely when to invoke and where to deploy a serverless computing framework, as not all clouds run the same types of application workloads equally well. Fortunately, thanks to the advent of Knative middleware and related projects such as the TriggerMesh KLR, the cost of guessing wrong initially won’t be nearly as high as it might have been.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
