Target audience: Developers who want to (continuously) deploy their Dockerized application.
TL;DR: SSH can be used as an alternative way to deploy Dockerized applications; this works best with Docker Swarm.
Many Docker projects start life as a Docker Compose based development setup. At a given moment these projects need to be deployed to production. This article explains an alternative way of deploying to a remote Docker server/cluster using the Docker socket and SSH instead of the Docker HTTP API.
Why deploy Dockerized applications over SSH instead of directly via the API?
Disclaimer: this article doesn't say you MUST deploy over SSH, only that you COULD.
Docker offers a secured API which can be used for deployments. However... not everyone (or their system administrator) wants to expose yet another port to the outside world. The Docker API also requires setting up TLS certificates. Docker has a tool named 'Machine' that can manage those certificates, but Machine comes with its own configuration which, along with the certificates, isn't very portable.
An alternative solution is to use the Docker socket and let it 'come to you' over a secure connection that in most cases is already present: SSH. Or more specifically: an SSH tunnel. Since SSH supports public key authentication, granting deployment access to a co-worker or CI server is just a matter of copying their public key to the server/cluster.
By using an SSH tunnel it's possible to 'trick' Docker into thinking it's deploying to a local port, while that port is actually forwarded to the Docker socket on a remote system.
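As a sketch of the idea: once the remote socket is forwarded to a local port (2374 is an arbitrary pick used throughout this article's examples), the regular Docker CLI can be pointed at it.

```shell
# With a tunnel forwarding the remote /var/run/docker.sock to local port 2374
# (an arbitrary, hypothetical choice), the Docker CLI treats the remote
# daemon as if it were local:
export DOCKER_HOST=tcp://localhost:2374
# From now on, plain `docker ps`, `docker stack deploy`, etc. target the remote host.
```

Alternatively, `-H localhost:2374` can be passed per command instead of exporting `DOCKER_HOST`.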
But should I expose SSH at all?
Don't install SSH in prod 😀😀 https://t.co/2pjAoWPhK6 — David McKay (@rawkode), 19 December 2017
Well, David has a fair point to consider. In this case I suggest NOT using SSH to make changes to the server - #immutabilityFTW - but ONLY for deployments. To follow the approach in this article you'll need:
- A server with Docker installed.
- A Docker Image registry like Docker Hub.
- A way (preferably an automated CI pipeline) to build Docker images and push them to the registry. (If you're unsure how to set this up, let me know in the comments. I might write an article about it!)
- Knowledge of writing Docker Compose v3 files
What needs to be done to deploy Dockerized applications via SSH?
- Define which services should run, e.g. in a (`docker stack deploy` compatible) `docker-compose.yml` v3 file, which is probably similar to the development setup.
- Define environment specific config, either in the `environment` section of the `docker-compose.yml` file or in a separate `.env` file.
- Tell the server to start/update these services with the specified config.
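For illustration, a minimal `docker stack deploy` compatible file might look like the following. The image name, environment variable and port mapping are placeholders, not part of any real setup:

```yaml
version: "3"
services:
  web:
    image: registry.example.com/my-app:latest  # hypothetical registry/image
    environment:
      - APP_ENV=production
    ports:
      - "80:80"
```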
Why Docker Swarm should be used for production rather than Docker Compose
Earlier in this article Docker Swarm is mentioned a few times, so why not use Docker Compose? A common first approach to deploying an application to a production server is to use Docker Compose, since that is often the tool used during development. While using Docker Compose in production is possible, Docker has a better alternative built in: Swarm. From a deployment perspective Swarm works very similarly to Docker Compose, but it has more advanced orchestration capabilities.
Compared to Docker Compose, Docker Swarm has the following advantages:
- It doesn't require installing Docker Compose on the server
- It doesn't require files like `*.env` to be copied to the remote server
- It supports multiple servers for high availability (or just to provide more resources)
- It supports storing secrets, e.g. for credentials that cannot safely be provided as an environment variable
So let's start!
Step 1: preparing a server for use with Docker Swarm
To convert an existing Docker (1.13+) server into a Swarm node, all that needs to be done is run the following command on the server:
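A sketch of that step; run this on the server itself, not on your local machine:

```shell
# Turns this Docker host into a single-node swarm (manager).
docker swarm init
```

Afterwards `docker info` on the server should report `Swarm: active`.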
Note: When a multi server ('node' in Docker Swarm speak) setup is desired a bit more is required -> read the docs.
Note: if containers started by `docker` (or `docker-compose`) are already running on the host, they need to be stopped and restarted via `docker stack deploy`.
Step 2: setting up a tunnel to the Docker socket
As explained earlier, the goal is to tunnel the remote Docker socket to the local system. There's one caveat though: this only works with OpenSSH 6.7+, while many older OSes are stuck on older OpenSSH versions. However, there's no need to update the OS, since you can run an SSH client in a Docker container to create the tunnel.
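A minimal sketch of the tunnel, assuming the local machine already has OpenSSH 6.7+ (which added unix socket forwarding); the user, host and local port 2374 are placeholders:

```shell
# Forward local TCP port 2374 to the remote unix socket /var/run/docker.sock.
# 'deploy@example.com' is a hypothetical deployment user and host.
open_tunnel() {
  # -n: no stdin, -N: no remote command, -T: no pseudo-terminal
  ssh -nNT -L localhost:2374:/var/run/docker.sock deploy@example.com &
  TUNNEL_PID=$!
}
```

When the local OpenSSH is too old, the same `ssh -L` invocation can be run from a container that ships a recent OpenSSH client, as mentioned above.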
Note: while it's possible to tunnel a remote socket to a local socket, in this example the remote Docker socket is tunneled to a local port. This prevents having to deal with file permissions on the socket.
Note: the above setup is known to work on Debian-like distros; Red Hat-like distros do not seem to have an `SSH_AUTH_SOCK` environment variable. Suggestions to make this work on Red Hat-like distros are welcome.
Step 3: waiting until the tunnel is established
Now that the tunnel has been started (in the background), any further commands should wait until it's actually usable.
An easy way to do that is to poll the tunnel by executing a simple docker command like `docker info` against the tunneled port.
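A sketch of such a poll, wrapped as a function (port 2374 again being the hypothetical tunnel port):

```shell
# Block until the Docker daemon behind the tunnel answers.
wait_for_tunnel() {
  # `docker info` exits non-zero until the connection works.
  until docker -H localhost:2374 info > /dev/null 2>&1; do
    sleep 1
  done
}
```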
Note: if the connection cannot be established for some reason, the `until` loop will run forever. If desired, a `timeout` can be added to stop polling after a given amount of time. This requires wrapping the loop in a separate bash script (or make target).
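One way to sketch that, assuming GNU coreutils' `timeout` is available:

```shell
# Same poll as before, but give up after 30 seconds instead of looping forever.
wait_for_tunnel_or_give_up() {
  # `timeout` exits with status 124 when the 30 seconds elapse.
  timeout 30 bash -c \
    'until docker -H localhost:2374 info > /dev/null 2>&1; do sleep 1; done'
}
```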
Step 4: deploying with `docker stack deploy`
Once the SSH tunnel has been established, Docker can use it to deploy the stack to the remote Swarm node:
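A sketch of the deploy command; `example-stack` is a placeholder stack name and 2374 the hypothetical tunnel port:

```shell
# -H points the Docker client at the tunneled port instead of the local daemon.
docker -H localhost:2374 stack deploy \
  --compose-file docker-compose.yml example-stack
```

Running the same command again later updates the stack in place.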
Step 5: closing the tunnel again
After the deployment has either succeeded or failed, the tunnel should be closed again.
A final overview of the total script
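A simplified sketch of what the steps add up to; the user, host, port and stack name are placeholders, not the exact setup of this site:

```shell
#!/usr/bin/env bash
set -e

# 1. Open the tunnel to the remote Docker socket in the background.
ssh -nNT -L localhost:2374:/var/run/docker.sock deploy@example.com &
TUNNEL_PID=$!

# Always close the tunnel on exit, whether the deploy succeeds or fails.
trap 'kill "$TUNNEL_PID" 2>/dev/null || true' EXIT

# 2. Wait (at most 30s) until the Docker daemon answers through the tunnel.
timeout 30 bash -c \
  'until docker -H localhost:2374 info > /dev/null 2>&1; do sleep 1; done'

# 3. Deploy the stack over the tunnel.
docker -H localhost:2374 stack deploy \
  --compose-file docker-compose.yml example-stack
```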
Note: this example is a simplified version of the deployment setup of this website, which relies heavily on GNU Make for orchestrating testing, building and deploying. If you're not experienced with GNU Make yet, please give it a try; especially prerequisites are very powerful once you grasp the concept.
Enjoy deploying your applications!
To see this in more context, check out the deploy setup of this site at the time of writing this article.