Deciding Which – and How Many – Workloads to Containerize

No matter how popular containers become, the reality is that they are going to coexist with virtual machines and bare-metal servers at most organizations. They won’t totally replace older forms of infrastructure. That’s why it’s important to decide how much of your infrastructure to containerize and which parts are the best fit for containers. Here’s a guide to answering those questions.

What to Containerize, What Not to Containerize

Most infrastructures are a complex mix of different hardware components, and they host a diverse set of workloads. Some parts of your infrastructure are a better fit for containers than others.

How do you tell which parts of the infrastructure to containerize, and which to leave as is? Start by considering the following points:

  • Persistent data needs. Containers can certainly support persistent data storage. However, stateless applications—that is, those that don’t need persistent data storage—are usually the most obvious candidates for containerization. The more persistent data you have to store, and the more complex your storage needs are, the more complicated it becomes to connect that data to a containerized application in a way that is scalable and meets security and access control requirements (the first sketch after this list shows what that wiring can look like).
  • Workload isolation. Containers provide less isolation between workloads than virtual machines. On the other hand, they provide more isolation than you’d get when you run different workloads side by side on a bare-metal server. When deciding what to containerize, think about which workloads can benefit from some separation, but don’t need the strict isolation provided by virtual machines.
  • Scalability. Containers are ideal for building highly scalable infrastructure and applications, because you can spin container instances up and down quickly. If you have workloads that need to scale significantly, containers could be a good fit for them (see the second sketch after this list).
  • Updates. Containers are also a good fit for applications that need to be updated quickly and continuously. This does not mean you can’t do continuous updates with other deployment technologies; however, because containers let you push out updates quickly and immutably by deploying new container images, they are a good solution for workloads where fast, reliable updates are a priority (the third sketch below illustrates an image-based rollout).
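
To make the persistent data point concrete, here is a minimal sketch using the Python Docker SDK (docker-py) that contrasts a stateless service with one that needs durable storage. The image names, port, and volume name are illustrative placeholders, and a real deployment would also have to address backup, access control, and where the volume actually lives.

```python
# Sketch: stateless vs. stateful containers with the Python Docker SDK (docker-py).
# Image names, the port, and the volume name are hypothetical placeholders.
import docker

client = docker.from_env()

# A stateless web service: nothing to persist, so it can be replaced or
# rescheduled freely -- an obvious containerization candidate.
web = client.containers.run(
    "example/web-frontend:1.0",   # placeholder image
    detach=True,
    ports={"8080/tcp": 8080},
)

# A stateful service: its data must outlive any single container, so we
# create a named volume and mount it at the database's data directory.
client.volumes.create(name="orders-db-data")
db = client.containers.run(
    "postgres:16",
    detach=True,
    environment={"POSTGRES_PASSWORD": "change-me"},
    volumes={"orders-db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)

print(web.short_id, db.short_id)
```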
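
The scalability point is just as easy to see in code. The second sketch uses the official Kubernetes Python client to resize a Deployment; the Deployment name and namespace are placeholders, and it assumes a cluster is already reachable through your kubeconfig.

```python
# Sketch: scaling a containerized workload up and down with the official
# Kubernetes Python client. Deployment name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()          # assumes an existing kubeconfig / cluster
apps = client.AppsV1Api()

def scale(name: str, namespace: str, replicas: int) -> None:
    """Resize a Deployment; Kubernetes adds or removes container instances to match."""
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale("web-frontend", "default", 10)   # scale out for peak load
scale("web-frontend", "default", 2)    # scale back down afterwards
```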
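
Finally, the "fast, immutable updates" point comes down to replacing containers rather than patching them in place. The third sketch, again with docker-py and placeholder names, pulls a new image tag and swaps the running container for a fresh one built from it.

```python
# Sketch: an immutable update -- replace the running container with one built
# from a new image tag instead of patching it in place. Names are placeholders.
import docker
from docker.errors import NotFound

client = docker.from_env()

def roll_out(name: str, image: str) -> None:
    """Pull the new image, remove the old container, start a fresh one."""
    client.images.pull(image)                 # fetch the new, immutable image
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except NotFound:
        pass                                  # first deployment: nothing to replace
    client.containers.run(image, name=name, detach=True, ports={"8080/tcp": 8080})

# e.g. promote version 1.1 of the hypothetical frontend image
roll_out("web-frontend", "example/web-frontend:1.1")
```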

How Much Should You Containerize?

How much of your infrastructure you containerize will, of course, vary with your needs. The more workloads you have that are a good fit for containers, the more you’ll likely want to containerize.

However, there are other factors to consider when deciding how much to containerize, including:

  • How much container expertise does your team have? Do your engineers already know how to deploy Docker and integrate containers into their CI/CD pipeline? Or will you have to hire new admins or retrain your team?
  • How open is your container solution? If you deploy containers on an open, standards-based platform, you’ll likely be able to adapt as your needs change. But if you go with a heavily proprietary solution, you may want to limit how much you containerize, in case your needs change in the future and migrating to a new container platform proves difficult.

You don’t want to containerize more than you can manage, even if you have the technical means.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io and Sweetcode.io.
