DevSecOps for Kubernetes: 15 Best Practices for 2025
In today’s environment, it is becoming harder to build secure applications. Applications are increasingly complex, relying on more third-party dependencies and vendor-provided components than ever before. These components enable rapid application development, but they bring additional risks to the reliability and security of the consuming application. Those risks must be managed effectively at every stage of the software development lifecycle by following good DevSecOps practices.
Here are our recommended best practices for significantly improving the security of applications running in your Kubernetes environment.
Security & Access Management
Role-Based Access Control (RBAC)
When initially setting up a software environment, it is tempting to make every user and application an administrator that can perform all actions. That simplicity also makes the environment easy for malicious parties to exploit. Ideally, each user, application and service should have a unique identity with scoped permissions, following the principle of least privilege so that it can perform only its intended actions. For example, if a reporting engine only needs to read from a database, it should use credentials that do not allow writes. This stops a bad actor from exploiting a vulnerability in the reporting engine to reach other parts of the system by updating their account in the database, and it stops a bug in the reporting engine from accidentally deleting your products table.
There is a tradeoff between granularity and complexity here. In an ideal world, you might define row- and column-level permissions on your database tables, but keeping those permissions up to date would take a lot of effort. Spend some time figuring out what level of granularity makes sense for your environment, aiming to be as granular as possible without adding too much overhead.
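To make this concrete, here is a minimal sketch of a namespaced, read-only role in Kubernetes RBAC; the namespace, resources and service account name are illustrative placeholders:

```yaml
# Read-only access to pods and configmaps in the "reporting" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: reporting
  name: report-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to the service account the reporting engine runs as
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: reporting
  name: report-reader-binding
subjects:
  - kind: ServiceAccount
    name: report-engine
    namespace: reporting
roleRef:
  kind: Role
  name: report-reader
  apiGroup: rbac.authorization.k8s.io
```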
Secrets Management
Use a dedicated tool for handling sensitive data, such as credentials, private keys and certificates. While it is simpler to insert credentials directly into configuration files or environment variables, this leaves them unprotected and makes it harder to track them down when they need to be rotated. By using a dedicated secret management tool such as HashiCorp Vault or Azure Key Vault, you can ensure that secrets are stored in a secure fashion and configure permissions to allow access only to authorized users and applications. When it is time to rotate secrets, the contents of the vault can be updated without requiring the redeployment of the consuming applications. The tooling can also help with identifying when secrets are due for rotation and even be configured to rotate them automatically. By combining the use of a secrets management tool with RBAC and managed identities, you can ensure that those credentials are only accessible in their respective environments, avoiding accidental use of production credentials in a development environment.
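As one possible illustration, the External Secrets Operator can sync entries from a vault such as Azure Key Vault into a Kubernetes Secret; the store, secret and key names below are placeholders and assume a SecretStore has already been configured for the vault:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: reporting
spec:
  refreshInterval: 1h            # re-sync so rotations are picked up automatically
  secretStoreRef:
    name: azure-key-vault        # assumed SecretStore pointing at the vault
    kind: SecretStore
  target:
    name: db-credentials         # Kubernetes Secret managed by the operator
  data:
    - secretKey: password
      remoteRef:
        key: report-db-password  # name of the secret inside the vault
```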
Network Policies
In a similar vein to access control, it is easy to set up all Kubernetes pods on a single network so that they can easily talk to each other. However, it is better to define explicit network policies to lock down communication between pods. Likewise, if a pod doesn’t need internet access, it shouldn’t have it. This prevents lateral movement between pods as well as exfiltration of data or infiltration of other software.
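A common starting point is a default-deny policy per namespace, with explicit allowances layered on top; the namespace, labels and port below are examples only:

```yaml
# Deny all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: reporting
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# Allow only the report engine to reach the database, on the database port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-report-engine-to-db
  namespace: reporting
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: report-engine
      ports:
        - protocol: TCP
          port: 5432
```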
Authentication and Authorization
There are various options for authenticating to a Kubernetes cluster, and the default options usually rely on credentials managed locally within the cluster. However, the cluster is likely part of a wider environment, such as an existing Lightweight Directory Access Protocol (LDAP) domain or a cloud provider subscription, so it is advisable to integrate the cluster with the existing authentication infrastructure and avoid managing a separate set of users. For example, in an Azure environment, you can use Microsoft Entra ID with Azure RBAC to manage all users and permissions within the existing Entra directory.
When it comes to machine-to-machine (M2M) communication, use system-managed identities to avoid storing additional credentials and to prevent credentials from being exfiltrated. System-managed identities give machines and cloud resources automatically created identities of their own instead of relying on user credentials, while still making it easy to apply RBAC permissions. If system-managed identities are not possible, such as in cross-cloud environments, avoid passwords and static tokens; opt for certificates or signed tokens instead.
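In an Azure environment, for instance, a workload can be tied to a managed identity via Azure Workload Identity along these lines; the client ID is a placeholder, and the pod template also needs the corresponding azure.workload.identity/use label:

```yaml
# Service account associated with a user-assigned managed identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: report-engine
  namespace: reporting
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
```

Pods running under this service account can then obtain Entra ID tokens without any stored password or static token.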
Secure Configurations
Audit Kubernetes Configuration
Use linting tools like KubeLinter or config-lint to validate that the YAML configuration files match the schema and follow best practices (see a full list of Kubernetes management tools). Ensure configuration files are managed in a version control system so you can easily audit changes and revert to previous versions when needed. Use secret scanning tools to verify that no secrets are stored in the configuration; they should all live in secure storage instead.
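KubeLinter, for example, can be tuned with a configuration file checked into the same repository as the manifests; the check name here is illustrative, and the exact schema should be confirmed against the KubeLinter documentation:

```yaml
# .kube-linter.yaml - enable all built-in checks, with documented exceptions
checks:
  addAllBuiltIn: true
  exclude:
    - "unset-cpu-requirements"   # example: tolerated for batch jobs in this repo
```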
Pod Security Standards
By default, pods can request a number of privileged capabilities, but most workloads don’t need them. Apply the Pod Security Standards, such as the restricted profile, to prevent containers from running as the root user, escalating privileges or requesting additional capabilities. These profiles are enforced by the Pod Security Admission controller, which blocks any pod that doesn’t meet the requirements from being launched.
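Enforcement is configured per namespace with labels read by the Pod Security Admission controller, for example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: reporting
  labels:
    pod-security.kubernetes.io/enforce: restricted   # block non-compliant pods
    pod-security.kubernetes.io/warn: restricted      # warn on apply
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
```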
Compliance Monitoring
The Center for Internet Security (CIS) Kubernetes Benchmarks provide security recommendations to reduce potential attack vectors and improve the overall security of the Kubernetes cluster. Use compliance tools like kube-bench, or the built-in recommendations in managed Kubernetes services such as Azure Kubernetes Service, to check what actions can be taken to harden the cluster.
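kube-bench can be run as a one-off Job inside the cluster; this is a simplified sketch, and the project’s published job manifest (which mounts additional host paths) should be preferred in practice:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                      # needed to inspect host processes
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```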
Supply Chain Security
Container Image Scanning
Use tools like Trivy to scan your container images for vulnerabilities, both at build time in your continuous integration pipeline and at runtime with a Kubernetes operator. New security issues are identified all the time, so an image that was considered safe yesterday may no longer be safe today. Only use trusted and verified container image repositories.
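A build-time scan might look like the following GitHub Actions-style sketch; it assumes the Trivy CLI is available on the runner, and the image name is a placeholder:

```yaml
jobs:
  image-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/report-engine:${{ github.sha }} .
      - name: Scan image with Trivy
        # Fail the build if HIGH or CRITICAL vulnerabilities are found
        run: |
          trivy image --exit-code 1 --severity HIGH,CRITICAL \
            registry.example.com/report-engine:${{ github.sha }}
```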
Immutable Infrastructure
Many containers use ‘apt install’ or ‘curl’ to download executables at runtime, which means a container could run a completely different version of the software each time it is launched. Make container images reproducible by including all dependencies in the image itself, performing the downloads at build time instead of runtime. This avoids situations where an upstream dependency changes unexpectedly and causes unexpected behavior in the containers that consume it. It also removes the need for the container to have outbound internet access and prevents problems with a remote server from blocking the container from starting. Consider using a read-only filesystem in the container and storing all state in external volumes or databases, where the files cannot be executed.
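A pod spec along these lines keeps the image immutable at runtime; the image tag and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: report-engine
spec:
  containers:
    - name: report-engine
      image: registry.example.com/report-engine:1.4.2   # pin a tag or digest, never :latest
      securityContext:
        readOnlyRootFilesystem: true        # nothing can be written or downloaded into the image
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: tmp
          mountPath: /tmp                   # scratch space lives in an explicit volume
  volumes:
    - name: tmp
      emptyDir: {}
```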
Dependency Management
Tools such as Renovate and Dependabot can scan your source code repositories for dependencies and provide notifications and pull requests when updates are available. This simplifies responding to security vulnerabilities in those dependencies and ensures that the latest stable versions of each component are in use. Only use dependencies that come from trusted sources and enforce the use of signed packages and images where possible to ensure that the component has not been tampered with.
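For example, Dependabot is configured with a small file in the repository; the ecosystems and schedule below are just examples:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"    # base images referenced in Dockerfiles
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"       # application dependencies
    directory: "/"
    schedule:
      interval: "weekly"
```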
Continuous Integration and Delivery (CI/CD)
Pipeline Security
The CI/CD pipeline should include steps to verify and validate the contents of the images being built. Use build-time tooling to scan for secrets unexpectedly stored in source code, linting to ensure the source code is free of obvious errors and warnings, and vulnerability scans to check for potential security issues. Configure these so that the build fails if any of the checks fail, preventing unexpected issues from reaching the production environment.
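In a GitHub Actions-style pipeline, each check is simply a step whose non-zero exit code fails the job; this sketch assumes gitleaks and KubeLinter are installed on the runner and that the manifests live in a manifests/ directory:

```yaml
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan for committed secrets
        run: gitleaks detect --source .
      - name: Lint Kubernetes manifests
        run: kube-linter lint manifests/
      # Any failing step stops the pipeline before an image is published
```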
Secret Injection in CI/CD
Secrets and credentials should differ based on the environment and target. Rather than baking them into the source code, use integrations with a secret management tool to retrieve the appropriate secrets while the build or deployment pipeline is running. Ensure that secrets are never exposed in logs or outputs and avoid passing them as command-line arguments, as they can be logged in shell history or exposed in process monitors.
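In practice, that usually means referencing the pipeline’s secret store and passing values through environment variables; the secret name and deploy script in this step are hypothetical:

```yaml
- name: Deploy to staging
  env:
    DEPLOY_TOKEN: ${{ secrets.STAGING_DEPLOY_TOKEN }}   # injected at run time and masked in logs
  run: ./scripts/deploy.sh --environment staging        # the script reads DEPLOY_TOKEN from the environment
```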
Deployment Observability
It is important to be able to quickly identify what versions of each service are running in production, along with when they were last deployed. When things go wrong, there should be one place to look to see if the problem was introduced with a new version of the software. Use a deployment tool that can provide an easy way to see this information on a dashboard, such as Octopus Deploy.
Runtime Security
Real-Time Monitoring
Centralized monitoring infrastructure using tools like Prometheus and Grafana can help visualize the activity within a Kubernetes cluster. These tools make it easier to determine how much compute, memory or network capacity is available within a cluster, and they can bring to light anomalous behavior like unexpected load or excessive network activity. Configure alerting to notify the operations team when metrics are abnormal so that they can respond early to potential issues.
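Alerts are defined as Prometheus rules; this example assumes node_exporter metrics are being scraped, and the threshold is arbitrary:

```yaml
groups:
  - name: cluster-capacity
    rules:
      - alert: NodeMemoryLow
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
        for: 10m                      # avoid alerting on short spikes
        labels:
          severity: warning
        annotations:
          summary: "Less than 10% memory available on {{ $labels.instance }}"
```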
Logging and Incident Management
Centralized logging infrastructure can make it easier to visualize interactions between different services running in the cluster. Use tools such as Loki or Elasticsearch for centralized log management and searching. Enable OpenTelemetry within each service to support the tracing of requests between different services. Consider using tools like Rootly to implement incident response workflows integrated with collaboration tools, ensuring a standard process is followed for all incidents.
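If the services use an OpenTelemetry SDK, tracing can usually be pointed at a collector through the standard OTEL_* environment variables in each Deployment; the endpoint and sampling ratio below are placeholders:

```yaml
# Container environment in the service's Deployment
env:
  - name: OTEL_SERVICE_NAME
    value: "report-engine"
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.observability:4317"
  - name: OTEL_TRACES_SAMPLER
    value: "parentbased_traceidratio"
  - name: OTEL_TRACES_SAMPLER_ARG
    value: "0.1"                      # sample 10% of traces
```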