5 Docker Best Practices You Should Follow

Docker is not the only container technology out there, but it is the de facto standard for containers and the most recognizable name among the container players. Support for Docker has been integrated into a wide array of products and platforms, and many organizations are either already using Docker containers or trying to understand how to get on the bandwagon.

It isn’t too difficult to succeed with Docker, but there are some tips and tricks you should follow to use it more effectively. Here are five Docker best practices you should keep in mind whether you’re already using Docker, or just thinking about it:

1. Beware of inheritance and dependencies

Your containers inherit from a parent image that generally includes a base operating system and dependencies—things like dependent packages, default users, and so on. Those inherited attributes and dependencies might expose your containers to unnecessary risk. Make sure you're aware of what you're inheriting and take any additional steps necessary to further isolate and protect your containers.
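One way to rein in what a container inherits is to pin the parent image and drop root privileges in the Dockerfile. The sketch below is illustrative only—the image name, user name, and the `<digest>` placeholder are assumptions, not prescriptions:

```shell
# A minimal sketch: pin the parent image and avoid inheriting root.
cat > Dockerfile <<'EOF'
# Pin the parent image by digest so the inherited base OS and packages
# cannot silently change between builds (substitute a real digest,
# e.g. from `docker images --digests`):
FROM alpine:3.19@sha256:<digest>
# Create and switch to an unprivileged user instead of inheriting root:
RUN adduser -D appuser
USER appuser
EOF
```

Pinning by digest rather than by a mutable tag like `latest` means a rebuild pulls exactly the layers you audited, not whatever the tag points to today.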

2. Limit container interaction

Container security has emerged as a serious concern for many organizations—specifically how containers interact with one another and with the outside world. Your containers should not accept connections on exposed ports through every network interface; bind them only to the interfaces that actually need access. You should take steps both to control how—and how much—containers interact with each other internally, and to limit the number of containers that have contact with the outside world, so you can minimize exposure to external risks.
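In practice, that kind of isolation can be expressed with Docker's networking flags. A sketch—container names, images, and ports here are purely illustrative:

```shell
# Put backend containers on an internal network with no route to the
# outside world:
docker network create --internal backend
docker run -d --name db --network backend postgres:16

# For the one container that must face outward, publish its port on the
# loopback interface only, rather than on every interface (0.0.0.0):
docker run -d --name web -p 127.0.0.1:8080:80 nginx:alpine
```

The `--internal` flag keeps the database reachable only by other containers on that network, and binding the published port to `127.0.0.1` keeps it off the public interfaces entirely.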

3. Monitor containers for vulnerabilities

One of the challenges of using an image registry like Docker Hub is that once a container image is uploaded, nobody takes responsibility for keeping it patched and secure. It might be fine when originally created, but over time new vulnerabilities and exploits are discovered, and you need to scan for those before using containers in production. A tool like Twistlock can help you monitor for and identify vulnerabilities in your container images.
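The article names Twistlock; as one illustrative alternative, the open-source Trivy scanner and Docker's own Scout tooling can report known CVEs for an image (the image name below is an example):

```shell
# Scan an image for known high-severity vulnerabilities with Trivy:
trivy image --severity HIGH,CRITICAL nginx:alpine

# Docker Scout offers a similar report through the docker CLI:
docker scout cves nginx:alpine
```

Whichever scanner you use, the point is to make it part of your build pipeline, so an image is re-checked against newly disclosed vulnerabilities rather than trusted forever based on a scan at creation time.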

4. Run containers as read only where possible

One of the best and simplest ways to limit exposure to risk for containers is to run them in read-only mode. That obviously won't work for all containers—some must accept input of some sort for their apps to work—but containers that can be run in read-only mode should be. You should also never run containers in privileged mode.
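A sketch of what that looks like on the command line—the image and the writable paths are illustrative, and which paths an app genuinely needs will vary:

```shell
# Run with a read-only root filesystem, granting a writable tmpfs only
# where the application needs scratch space:
docker run -d --read-only --tmpfs /tmp --tmpfs /run nginx:alpine

# And the flag to avoid: --privileged hands the container broad access
# to the host, undoing most of the isolation containers provide.
# docker run --privileged ...   # don't do this
```

With `--read-only`, even a compromised process inside the container cannot modify its own filesystem, which blocks a whole class of tampering.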

5. Keep it simple

Try to keep your Docker container ecosystem as simple as possible. You should run processes in separate, individual containers. If there are services that depend on one another, you should connect the two containers—using the legacy container-linking feature or, in current Docker, a user-defined network—rather than combining them in the same container. You should also focus on keeping the footprint of containers small—don't load unnecessary packages or services that just make the image larger and waste resources—and make sure that your containers are designed to be easy to replace. Container ecosystems tend to be very volatile, and containers should be easy to delete and recreate as necessary.
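The one-process-per-container pattern can be sketched with a user-defined network, which is the modern replacement for legacy `--link` connections. Container names and images here are assumptions for illustration:

```shell
# Create a shared network and run each service in its own container:
docker network create app-net
docker run -d --name db  --network app-net redis:alpine
docker run -d --name api --network app-net my-api:latest

# Containers on app-net reach each other by name (e.g. the api container
# can connect to "db:6379") without being baked into one image.
```

Because each service lives in its own small, disposable container, any one of them can be deleted and recreated without rebuilding the rest of the stack.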

This list is by no means comprehensive. Honestly, I could probably write a top 25, or even top 50 Docker best practices. This is a great start, though, to help you maximize the value of your Docker containers while also taking steps to make sure your containerized apps and data are secure.

Tony Bradley

I have a passion for technology and gadgets--with a focus on Microsoft and security--and a desire to help others understand how technology can affect or improve their lives. I also love spending time with my wife, 7 kids, 2 dogs, 4 cats, 3 rabbits, 2 ferrets, pot-bellied pig and sulcata tortoise, and I like to think I enjoy reading and golf even though I never find time for either. You can contact me directly at [email protected]. For more from me, you can follow me on Twitter and Facebook.
