Kubernetes v1.36 Promotes Stability, Compatibility & Reproducibility
Kubernetes v1.36 is here. Like previous releases, this Spring 2026 iteration introduces new stable, beta, and alpha features.
The Linux Foundation counts a total of 71 enhancements in this release: 18 have graduated to Stable, 26 are entering beta, and 25 are new in alpha.
Fine-Grained API Authorization
Fine-grained kubelet API Authorization reaches General Availability (GA) in Kubernetes v1.36, marking a major advancement in node-level security. The kubelet is a node-level agent that ensures containers are running in pods by following specifications from the Kubernetes control plane.
As features at this level mature under the eyes of the foundation and the developers who maintain them, graduation to stable means the feature gate (a runtime toggle that controls whether the behavior is available) is now locked to enabled.
According to an official statement, by providing precise, least-privilege access control over the kubelet’s HTTPS API, this feature eliminates the “historical dependency” on the overly broad nodes/proxy permission for monitoring and observability tasks.
Cluster operators can now grant specific access to individual kubelet endpoints, which hardens the security posture of the cluster by ensuring that auxiliary services only possess the exact permissions required to function.
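As a sketch of what that least-privilege model might look like, the following RBAC ClusterRole grants a monitoring agent read access to only the kubelet's metrics and health endpoints rather than the broad nodes/proxy permission. The subresource names here are illustrative of the fine-grained endpoints, not a definitive list.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
  - apiGroups: [""]
    # Fine-grained kubelet subresources (illustrative) instead of nodes/proxy
    resources: ["nodes/metrics", "nodes/healthz"]
    verbs: ["get"]
```

An observability agent bound to this role can scrape kubelet metrics but cannot, for example, execute into containers through the kubelet API.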
Resource Health Status
Previously, Kubernetes lacked a native way to report the health of allocated devices, making it difficult to diagnose Pod crashes caused by hardware failures. Building on the initial Alpha release in v1.31, which focused on device plugins, Kubernetes v1.36 expands this feature to support Dynamic Resource Allocation (DRA), introducing the allocatedResourcesStatus field to provide a unified health reporting mechanism for all specialized hardware.
Dynamic Resource Allocation has become increasingly popular in the componentized, containerized cloud-native world because it moves beyond rigid, pre-defined resource limits, enabling efficient hardware utilization and complex sharing across diverse workloads.
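To make the reporting shape concrete, here is a hedged sketch of what a Pod's status might surface for an unhealthy GPU; the structure follows the device-health reporting introduced in v1.31, but the specific names and values below are illustrative.

```yaml
status:
  containerStatuses:
    - name: trainer
      allocatedResourcesStatus:
        - name: claim:gpu-claim        # resource as referenced by the container
          resources:
            - resourceID: "gpu-0"      # device identifier reported by the driver
              health: Unhealthy        # Healthy | Unhealthy | Unknown
```

A controller or operator watching Pod status can now distinguish "the application crashed" from "the device it was allocated has failed."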
Alpha: Workload Aware Scheduling (WAS)
Previously, the Kubernetes scheduler and job controllers managed pods as independent units, often leading to fragmented scheduling or resource waste for complex, distributed workloads. Kubernetes v1.36 introduces a suite of Workload Aware Scheduling (WAS) features in Alpha, natively integrating the Job controller with a new Workload API and a decoupled PodGroup API to treat related pods as a single logical entity.
“Now, the scheduler can perform Gang Scheduling by ensuring a minimum number of pods are ready before any are bound, while new Topology-Aware and Preemption policies optimize placement within specific network or rack domains. This evolution significantly reduces the need for third-party schedulers in AI/ML and batch processing, allowing users to guarantee the tight physical co-location and atomic resource acquisition required for high-performance distributed training,” stated the Linux Foundation.
Gang Scheduling, which has grown in popularity alongside distributed training, ensures that multiple pods in a workload start simultaneously, preventing resource deadlocks by requiring all members to be ready before any begin work.
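Because these APIs are Alpha, their exact shape may change; the following is a purely illustrative sketch of a gang-scheduled group, with the group/version, kind, and field names all assumptions rather than a confirmed schema.

```yaml
apiVersion: scheduling.k8s.io/v1alpha1   # illustrative group/version
kind: PodGroup
metadata:
  name: distributed-training
spec:
  # Gang semantics: no member pod is bound to a node until at least
  # this many replicas can all be placed together.
  minCount: 8
```

Member pods would then reference the group (for example, via a label or field on the pod spec) so the scheduler can treat them as one logical unit.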
Volume Group Snapshots
After several cycles in beta, Volume Group Snapshots reach General Availability (GA) in Kubernetes v1.36. This feature allows developers to take “crash-consistent snapshots” (snapshots that capture the state of several disks at the same instant, as if the system had halted abruptly) across multiple PersistentVolumeClaims (requests for storage resources by users, linking pods to volumes) simultaneously.
By ensuring that data and logs across different volumes remain synchronized, this enhancement provides a solution for protecting complex, multi-volume workloads. With this release, the API version is promoted to v1 and the CSIVolumeGroupSnapshot feature gate is now locked to enabled.
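A group snapshot is typically declared by selecting a set of PVCs with a label selector; a minimal sketch, assuming an illustrative snapshot class name and label:

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshot
metadata:
  name: app-group-snapshot
spec:
  volumeGroupSnapshotClassName: csi-group-snap-class   # illustrative class name
  source:
    selector:
      matchLabels:
        app: my-database   # every PVC carrying this label is snapshotted together
```

The CSI driver then captures all matched volumes at the same point in time, so a database's data and write-ahead log volumes stay mutually consistent.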
Secure Storage Services
Security for storage integrations reaches a higher standard in Kubernetes v1.36 with the graduation of CSI Service Account Token Secret Redaction to Stable. This improvement eliminates what the foundation defines as a “long-standing security risk” where sensitive service account tokens, intended only for the storage driver, were inadvertently exposed within the Secret field of CSI volume objects, making them visible to unauthorized users with basic read access to the API.
“Now, Kubernetes automatically ensures that these short-lived tokens are handled through dedicated, secure channels rather than being bundled into persistent secrets. This change hardens the security posture of clusters by enforcing the principle of least privilege, preventing token leakage and ensuring that workload identities remain protected throughout the entire storage lifecycle,” says the team.
API for External Signing
In Kubernetes v1.36, the ExternalServiceAccountTokenSigner feature for service accounts graduates to stable, making it possible to offload token signing to an external system while still integrating cleanly with the Kubernetes API. While ExternalServiceAccountTokenSigner doesn’t exactly roll off the tongue, this function delegates signing service account tokens to an external provider securely.
Clusters can now rely on an external JWT signer (a mechanism that applies a cryptographic digital signature to a JSON Web Token to verify its authenticity) for issuing projected service account tokens that follow the standard service account token format, including support for extended expiration when needed. This is especially useful for clusters that already rely on external identity or key management systems, allowing Kubernetes to integrate without duplicating key management inside the control plane.
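From the workload's point of view, projected tokens are requested the same way whether the control plane signs them itself or delegates to an external signer. A standard projected-volume sketch, with the audience, expiry, and names chosen for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-consumer
spec:
  serviceAccountName: app-sa
  containers:
    - name: app
      image: registry.example/app:latest
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              audience: my-service      # intended verifier of the token
              expirationSeconds: 3600   # extended expiry where supported
```

When an external signer is configured, the resulting JWT is cryptographically signed by the external system while remaining a standard service account token to consumers.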
Know Your Node Logs
Previously, Kubernetes required cluster administrators to log into nodes via SSH or implement a client-side reader for debugging issues pertaining to control-plane or worker nodes. While certain issues still require direct node access, issues with the kube-proxy or kubelet can be diagnosed by inspecting their logs.
The node log query feature now offers cluster administrators a way to view these logs through the kubelet API and kubectl, simplifying troubleshooting without logging into nodes, much like debugging a pod or container.
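In practice this is exposed through the API server's node proxy; a usage sketch (the node name is illustrative, and the caller needs the appropriate RBAC permissions):

```shell
# Read kubelet service logs on a node through the API server, no SSH needed.
NODE=worker-1
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/?query=kubelet"

# Limit the output to the most recent lines:
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/?query=kubelet&tailLines=100"
```

On Linux nodes this surfaces journald entries for the named service; on Windows it reads the equivalent event logs.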
Key Takeaways: Stability, Compatibility & Reproducibility
There’s a lot to unpack here, and it’s hard to single out which updates the Linux Foundation and the Cloud Native Computing Foundation themselves would find most compelling.
The key themes are long-term API stability, backward compatibility, and reproducibility; we’re seeing more and more technologies that enable software developers to implement sophisticated resource-sharing policies and administrative overrides, which are essential for large-scale GPU clusters and multi-tenant AI platforms.
Everywhere you look in this update, there are stabilized toolsets and validation baked directly into the development workflow to eliminate configuration drift. It’s all about stability, and about building the core architectural foundation for next-generation resource management.