LLMs

Docker, Inc. Makes Invoking LLMs Simpler for Application Developers
Docker Inc. this week added a capability that makes it possible for application developers using its cloud-native application tools to run large language models (LLMs) on their local machines. Available ...

Best of 2024: CAST AI Helps Cost-Optimize LLMs Running on Kubernetes
AI Wayfinder determines which cloud instance of a GPU will run an AI model most efficiently ...

The Kubernetes Annotation Pitfall: The One Word That Puts Your AWS Load Balancers at Risk
Misconfiguring just one word in Kubernetes can expose your AWS environment to the internet, putting your data and applications at serious risk ...
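The pitfall typically involves the scheme annotation consumed by the AWS Load Balancer Controller: a single word decides whether a Service's load balancer is reachable only inside the VPC or from the public internet. A minimal sketch (the Service name and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments-api
  annotations:
    # "internal" keeps the load balancer private to the VPC;
    # changing this one word to "internet-facing" exposes it publicly.
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
```

Admission policies (e.g., OPA Gatekeeper or Kyverno rules) can reject manifests that set the scheme to `internet-facing` without an explicit exception.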

Tetrate Allies With Bloomberg to Build AI Gateway Based on Envoy and Kubernetes APIs
Tetrate and Bloomberg revealed today they will collaborate on the development of an artificial intelligence (AI) gateway that is based on the Envoy Gateway project launched by the Cloud Native Computing Foundation ...

VMware Extends Tanzu to Simplify Application Development
VMware has added a large language model (LLM), available in beta, that it has trained to bring generative AI capabilities to the Tanzu platform ...