LLMs
From PagerDuty to ‘Agentic Ops’: The Rise of Self-Healing Kubernetes
Explore how the role of Site Reliability Engineers (SREs) is transforming with Agentic Ops, integrating technologies like eBPF, LLMs, and Kubernetes Operators to shift problem-solving from humans to intelligent systems ...
Pavan Madduri
Do You Even Need Kubernetes for Reliable Service Delivery?
Kubernetes has become the default backbone of cloud native architecture. But does it actually help you ship services more reliably, or is it just more moving parts? Despite Betteridge’s law of headlines, ...
LLMs & Kubernetes Configuration: Automating Hardening, Drift Detection and Policy Enforcement
Kubernetes misconfigurations remain the top security risk. AI copilots promise automated hardening, drift detection, and policy enforcement to make clusters safer ...
Alan Shimel
CNCF Cloud-Native Frameworks Accelerate AI Readiness
Scaling AI safely means going cloud native — using CNCF tools to keep workloads portable, secure, and under your control ...
Open Source Tooling to Run Large Language Models Without GPUs Locally
The best way to prototype is to start by running the models locally. In this article, we will explore the various options available for running models locally, along with the trade-offs involved ...
Docker, Inc. Makes Invoking LLMs Simpler for Application Developers
Docker Inc. this week added a capability that lets application developers using its cloud-native tooling run large language models (LLMs) on their local machines. Available ...
Best of 2024: CAST AI Helps Cost-Optimize LLMs Running on Kubernetes
AI Wayfinder determines which cloud instance of a GPU will run an AI model most efficiently ...
The Kubernetes Annotation Pitfall: The One Word That Puts Your AWS Load Balancers at Risk
Misconfiguring just one word in Kubernetes can expose your AWS environment to the internet, putting your data and applications at serious risk ...
Tetrate Allies With Bloomberg to Build AI Gateway Based on Envoy and Kubernetes APIs
Tetrate and Bloomberg revealed today that they will collaborate on an artificial intelligence (AI) gateway based on the Envoy Gateway project launched by the Cloud Native Computing Foundation ...

