A Simple Service Discovery Solution for Docker
The ambassador pattern is an approach to multi-host Docker application deployment. It helps containers on different hosts discover each other and communicate. There are several variants, but the idea can be illustrated as follows:
(web) --link--> (web proxy) --network--> (db proxy) --link--> (db)
Instead of `web` connecting to `db` directly:
1. A local web proxy is linked with `web`, and `web` is configured to send all traffic to `localhost`, where it is received by the web proxy
2. Alongside this, the db proxy announces the `db` container under a predefined key in a key/value registry (etcd, Consul, etc.). The same key is used by the web proxy to discover the `db` container in the registry
3. Once discovered, the web proxy routes the traffic over the network to the db proxy, which is linked with the `db` container
As a result, the end-to-end connection is established.
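For concreteness, here is a minimal sketch of that setup in shell, assuming etcd as the registry; the image names (`my/db`, `my/web`, the two ambassador images) and the key `/services/db` are illustrative, not real images:

    # On the db host: run the database, then a proxy linked to it. The proxy
    # announces the db host's address under a well-known key in the registry.
    docker run -d --name db my/db
    docker run -d --name db-proxy --link db:db my/db-ambassador
    # conceptually, the db proxy performs something like:
    #   etcdctl set /services/db "tcp://<db-host-ip>:3306"

    # On the web host: run a proxy that watches the same key and forwards local
    # traffic to wherever it points, then link the web container to it as "db".
    docker run -d --name web-proxy my/web-ambassador   # watches /services/db
    docker run -d --name web --link web-proxy:db my/web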
Pros & Cons
This pattern has a number of advantages:
- Non-intrusive: The Docker image can be used without any modification, since the container's configuration is static (it always points at its local proxy)
- Dynamic: Containers can discover each other at runtime, which is particularly valuable for autoscaling and load balancing
- Resilient: Failover is transparent to the container
However, these benefits come with tradeoffs:
- Complex: There are more components to learn and manage
- More moving parts: A wrong key or value can lead to bizarre results that are particularly difficult to debug
- Inflexible: The proxy works for 1-to-1 or load-balanced connections, but it cannot handle other topologies, such as a Dynamo-style ring
Solution
Compared with the ambassador pattern, another solution for service discovery is to leverage the [file mount](https://docs.docker.com/v1.2/userguide/dockervolumes/#mount-a-host-file-as-a-data-volume) feature to mount the application configuration file from the host instance into the container:
$ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
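Applied to the web/db example, the same flag can mount a configuration file generated on the host straight into the application container; the paths and the `my/node` image below are illustrative:

    # Mount the host-rendered config read-only into the container's config path.
    $ sudo docker run -d -v /opt/app/app.conf:/usr/src/app/config/app.conf:ro my/node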
The approach is:
- Simple: No proxy, no registry, easy configuration files
- Non-intrusive: No code changes or image modification
- Flexible: Works with any application configuration
- No moving parts: Easy to reason about
One may argue that this is a static configuration of the containers, which does not cope with scenarios like autoscaling and failover. The answer to this is an [orchestration engine](www.visualops.io). The engine keeps a watchful eye over the cluster and re-generates the configuration file whenever something happens, so the containers always get the correct connection details.
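For intuition only, a hypothetical re-generation loop on the web host might look like the sketch below; `get_db_address` and `render_config` are assumed helpers (e.g. a cloud API lookup and a template renderer), not part of the engine itself:

    #!/bin/sh
    # Hypothetical sketch, not the actual engine: whenever the db instance's
    # address changes, re-render the app's config file and restart the
    # container so it picks up the new connection details.
    last=""
    while true; do
      ip=$(get_db_address)                        # assumed helper
      if [ "$ip" != "$last" ]; then
        render_config "$ip" > /opt/app/app.conf   # assumed template renderer
        docker restart web
        last="$ip"
      fi
      sleep 10
    done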
Technically, the solution works as follows:

Instead of letting instances announce and discover each other via a central registry, the configuration explicitly specifies the logical relationship between services. The engine then renders the file upon provisioning:
mysql://root@@{db.PrivateIpAddress}:3306  -->  mysql://root@<private IP of the db instance>:3306
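One way to picture the rendering step, assuming the template lives in `app.conf.tmpl` on the host and using `10.0.0.1` as a stand-in for the db instance's private IP:

    # Substitute the placeholder with the db instance's private IP.
    DB_IP=10.0.0.1   # illustrative value
    sed "s/@{db.PrivateIpAddress}/$DB_IP/" app.conf.tmpl > app.conf
    # app.conf.tmpl:  mysql://root@@{db.PrivateIpAddress}:3306
    # app.conf:       mysql://root@10.0.0.1:3306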
To deploy a multi-instance Docker application with this approach, all you need to do is specify three things:
- which Docker image to run, e.g. `my/node`
- the container setup, i.e. port, CPU, and memory
- the application configuration file content and how it links with other containers
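Stripped of the engine specifics, those three ingredients map onto a single `docker run` per host, roughly like this (the port, resource limits, paths, and the `my/node` image are illustrative):

    # 1) the image, 2) the container setup (port/cpu/mem),
    # 3) the rendered config file mounted in from the host
    $ sudo docker run -d \
        -p 80:8000 \
        --cpu-shares 512 -m 256m \
        -v /opt/app/app.conf:/usr/src/app/config/app.conf:ro \
        my/node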
This solution does not look particularly sexy, but it is dead simple and robust, and it works an absolute charm for any type of application.