Zero-Trust on OKE: How to Actually Secure Your Clusters With Terraform
Spinning up a Kubernetes cluster takes about five minutes. Securing it for production takes a lot longer.
When you migrate mission-critical workloads — like financial systems or anything handling PII — the default Kubernetes configuration is a massive liability. Out of the box, the network is flat, the API server is exposed, and workloads often have far too much access to the underlying cloud provider.
I spend a lot of time tearing down default configurations and rebuilding them to meet strict compliance standards. On Oracle Cloud Infrastructure (OCI), getting this right means moving away from standard cluster deployments and utilizing the advanced hardware and network isolation features of Oracle Kubernetes Engine (OKE).
Here is how I build a highly available, cryptographically secure OKE environment using Terraform.
1. Ditch the Overlay Network
Historically, we all relied on overlay networks such as Flannel or Calico. They work, but packet encapsulation adds overhead, and tracing pod traffic at the physical network layer during an incident is a headache.
Instead, I use the OCI VCN-Native CNI.
With this setup, pods get IP addresses directly from your OCI virtual cloud network (VCN). You eliminate the encapsulation penalty, which drops latency. More importantly, it lets you attach OCI network security groups (NSGs) directly to individual pods. You are enforcing micro-segmentation at the cloud infrastructure level, rather than just relying on software network policies that can easily be bypassed if the cluster is misconfigured.
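As a sketch, here is how pod-level NSGs and the VCN-native CNI can be wired together on a node pool. The NSG, subnet, and cluster references are placeholders assumed to be defined elsewhere in the configuration; check the OCI provider docs for your version.

```hcl
# Hypothetical node pool fragment: VCN-native pod networking with pod-level NSGs.
resource "oci_containerengine_node_pool" "vcn_native_pool" {
  cluster_id     = oci_containerengine_cluster.secure_cluster.id
  compartment_id = var.compartment_ocid
  name           = "vcn-native-pool"
  node_shape     = "VM.Standard.E5.Flex"

  node_config_details {
    size = 3

    placement_configs {
      availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
      subnet_id           = oci_core_subnet.worker_subnet.id
    }

    # Pods draw IPs from a VCN subnet and carry their own NSGs, so
    # micro-segmentation is enforced by OCI, not just by NetworkPolicy objects.
    node_pool_pod_network_option_details {
      cni_type       = "OCI_VCN_IP_NATIVE"
      pod_subnet_ids = [oci_core_subnet.pod_subnet.id]
      pod_nsg_ids    = [oci_core_network_security_group.pod_nsg.id]
    }
  }
}
```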
2. Force the API Server off the Internet
If your Kubernetes API server has a public IP, you are eventually going to have a bad time.
To lock this down, and to get access to advanced add-on management, you need to provision an enhanced cluster rather than a basic one.
Here is the Terraform block I use to deploy a strictly private, highly available OKE control plane.
Notice that is_public_ip_enabled is explicitly set to false.
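A minimal sketch of that cluster resource follows; the resource and variable names (the VCN, the endpoint subnet, `var.compartment_ocid`) are placeholders assumed to exist elsewhere in your configuration.

```hcl
resource "oci_containerengine_cluster" "secure_cluster" {
  compartment_id     = var.compartment_ocid
  name               = "secure-oke"
  type               = "ENHANCED_CLUSTER" # required for add-on management and workload identity
  kubernetes_version = "v1.31.1"
  vcn_id             = oci_core_vcn.main.id

  # Private control plane: the API endpoint lives in a VCN subnet,
  # never on the public internet.
  endpoint_config {
    is_public_ip_enabled = false
    subnet_id            = oci_core_subnet.api_endpoint_subnet.id
  }

  # VCN-native pod networking (see section 1).
  cluster_pod_network_options {
    cni_type = "OCI_VCN_IP_NATIVE"
  }
}
```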
Note: To actually manage this cluster, you will have to route your kubectl traffic through an OCI Bastion service or an IPsec VPN. It is an extra step for developers, but it is non-negotiable for security.
3. Encrypt the Memory (Confidential Computing)
Network isolation is great, but what happens if the underlying hypervisor is compromised? A malicious actor on the host could theoretically dump the memory contents of your pods.
To mitigate this, I use confidential computing backed by AMD Secure Encrypted Virtualization (SEV). This hardware-level feature encrypts the VM’s memory while it is in use. Combine this with shielded instances (which use a virtual TPM to verify the boot sequence), and you lock down the node at the hardware level.
Here is how you declare a confidential node pool spread across multiple fault domains:
```hcl
resource "oci_containerengine_node_pool" "confidential_pool" {
  cluster_id         = oci_containerengine_cluster.secure_cluster.id
  compartment_id     = var.compartment_ocid
  kubernetes_version = "v1.31.1"
  name               = "secure-amd-pool"
  node_shape         = "VM.Standard.E5.Flex"

  node_shape_config {
    ocpus         = 4
    memory_in_gbs = 32
  }

  node_config_details {
    placement_configs {
      availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
      subnet_id           = oci_core_subnet.worker_subnet.id
      fault_domains       = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]
    }

    size                                = 3
    is_pv_encryption_in_transit_enabled = true
  }

  node_source_details {
    source_type = "IMAGE"
    image_id    = data.oci_core_images.oke_image.images[0].id
  }

  # This is where the hardware-level security happens
  node_options {
    is_shielded_instance_enabled      = true
    is_confidential_computing_enabled = true
  }
}
```
4. Stop Hardcoding Secrets (OCI Workload Identity)
If I see one more long-lived API key stored in a plain-text Kubernetes Secret, I am going to lose it.
Because we provisioned an enhanced cluster earlier, we get native support for OCI IAM workload identity. This means we can stop mounting static credentials entirely.
Instead, you write an OCI IAM policy whose subject matches the workload itself: the cluster OCID, the Kubernetes namespace, and the service account name. A pod running under that service account exchanges its projected service account token for a short-lived OCI token scoped to exactly that policy.
If attackers manage to pop a container, they only get the exact resources granted to that specific microservice, and even that access expires with the token. The blast radius shrinks to a single service's scope.
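As an illustration, the mapping lives on the OCI side: an IAM policy whose conditions match the workload's cluster, namespace, and service account. The compartment, namespace, and service account names below are hypothetical.

```hcl
# Hypothetical policy granting one microservice's service account read access
# to Object Storage in a single compartment. No API key is ever mounted.
resource "oci_identity_policy" "payments_workload" {
  compartment_id = var.compartment_ocid
  name           = "payments-workload-identity"
  description    = "Scoped access for the payments service account via OKE workload identity"

  statements = [
    <<-EOT
      Allow any-user to read objects in compartment payments-compartment
      where all {
        request.principal.type = 'workload',
        request.principal.cluster_id = '${oci_containerengine_cluster.secure_cluster.id}',
        request.principal.namespace = 'payments',
        request.principal.service_account = 'payments-sa'
      }
    EOT
  ]
}
```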
The Bottom Line
Defense-in-depth on Kubernetes is not just about writing better YAML; it requires leveraging the actual infrastructure provider’s capabilities. By combining VCN-native routing, AMD SEV memory encryption and workload identity, you can build an OKE environment that passes even the most brutal security audits.