Deploy your project using docker-compose
Docker and Docker Compose recently added the context feature, which lets you build and run instances on different environments just by switching contexts. There is even an ECS context type that generates the required CloudFormation configuration, although it still seems rough around the edges, which is what prompted this article.
I will assume you have access to a remote server (like EC2) via a PEM file. First, install the latest stable Docker and Docker Compose binaries (19.03 and 1.27.4 at the time of writing) both on the remote server and on your workstation. Then add your remote server's details to ~/.ssh/config, since docker context relies on the SSH client configuration to connect. An example:
# ~/.ssh/config
Host your_server_ip
    HostName your_server_ip
    User your_server_user
    IdentityFile ~/your_private_key.pem
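You can sanity-check that entry without actually connecting: ssh -G (available since OpenSSH 6.8) prints the options the client would resolve for a given host, so you can confirm the HostName, User and IdentityFile are picked up as expected.

```shell
# Dry-run the SSH client config resolution for the host (no connection is made).
# Replace your_server_ip with the Host value from your ~/.ssh/config.
ssh -G your_server_ip | grep -Ei 'hostname|^user|identityfile'
```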
Now let's create a context pointing to the remote server:
docker context create remote --docker host=ssh://your_server_user@your_server_ip
Test if you can access the context correctly:
docker --context remote ps
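If you don't yet have a project to deploy, a throwaway compose file is enough to try the next step. This is just a sketch: the nginx image and the port mapping are placeholders, not part of any particular project. Run it in an empty directory so you don't overwrite an existing docker-compose.yml:

```shell
# Create a minimal docker-compose.yml with a single placeholder service.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
```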
Now go to your Dockerfile/docker-compose folder and run docker-compose with the remote context:
docker-compose --context remote up -d
If you see errors at this point, they may be caused by the default limit on concurrent SSH sessions on the server (usually 10). Docker Compose spawns many more sessions during its execution (don't ask me why), so you need to raise the MaxSessions value in
/etc/ssh/sshd_config to a higher number, like 500, and restart the sshd service.
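To be safe, you can rehearse the edit on a sample copy before touching the real file. The sed pattern below assumes the stock commented-out default (`#MaxSessions 10`) and GNU sed; on the actual server you would apply the same substitution to /etc/ssh/sshd_config with sudo:

```shell
# Rehearse the change on a sample file first.
printf '#MaxSessions 10\n' > /tmp/sshd_config.sample
sed -i 's/^#\?MaxSessions.*/MaxSessions 500/' /tmp/sshd_config.sample
grep MaxSessions /tmp/sshd_config.sample

# On the real server, something like:
#   sudo sed -i 's/^#\?MaxSessions.*/MaxSessions 500/' /etc/ssh/sshd_config
#   sudo systemctl restart sshd
```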
Then you should see the same output as on your local machine, except it's actually running on the remote server! Redeploying when something changes is just a matter of rebuilding:
docker-compose --context remote down
docker-compose --context remote up --build -d
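Those two commands can be wrapped in a tiny helper so redeploys are a single call. The script name (deploy.sh) and the default context name are my own choices, not anything the tooling requires:

```shell
# Write a small redeploy helper; pass a context name or default to "remote".
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
CONTEXT="${1:-remote}"
docker-compose --context "$CONTEXT" down
docker-compose --context "$CONTEXT" up --build -d
EOF
chmod +x deploy.sh
```

Then a redeploy is just `./deploy.sh` (or `./deploy.sh some-other-context`).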
I hope this was helpful for anyone developing MVPs who isn't yet at the production stage of adding Kubernetes to the project. If you have any suggestions, please comment below; it's my first post!