Part 9. Docker, Docker Compose, Complete Intro

There are many different tutorials about Docker and a huge amount of detailed documentation, but you might need to go through tons of them to understand why Docker is needed and how to use it, and even after that you probably won't get the whole picture. In this part, I aggregate all the key information required to understand what Docker is, why you need it, what role it plays in development and deployment, and how it can simplify a lot of things for you.

Theory

What is docker?

What if you need to install an operating system and a set of software on multiple identical personal computers and pre-configure it all? The easiest way is to:

  1. Install all the required software and pre-configure it on this computer;
  2. Create an image of the hard drive of this computer, and copy it to the hard drives of all other personal computers. Done… You can save the image for the next set of computers which need the same OS and Software.
Docker does the same with applications: you take a base image with an operating system and the required dependencies, then:

  1. Install the application & pre-configure it in this operating system;
  2. The image with the application is ready. Now you can run the resulting image, that is, your application, on the machine where you built it using docker, or upload the image to a private or public image registry and run it anywhere you want.
Docker Overview
  • Run any version of a containerized application, on a regular basis or just to experiment;
  • The application runs in an isolated environment and doesn't know about the host system; it takes everything it needs from the image;
  • Easy to run and clean up, easy to deploy, easy to revert;
  • Limit or specify the CPU and memory resources of each container;
  • Scale vertically or horizontally, even on the same host (the base principles are covered below with an example);
  • Easily combine multiple applications (web server, API servers, DBs, caches, etc.) into a single system (below with an example);
  • A basis for Kubernetes and other orchestration layers.

Running a docker container

To run a docker container from a docker image, you might need to pass a configuration to it, including:

  • volumes or just configuration files, for example, html content and/or public/private keys for nginx, or a database folder for mysql or mongo to persist data on the host hard drive;
  • environment variables required by the running application.
An example of running a docker image:
$ docker run --rm -p 3306:3306 --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
There are several ways to pass the configuration:

  1. Pass the required content with parameters, sharing it from your host filesystem to the running container when you run the image (as in the example above);
  2. Build your own image on top of an existing one with the required content and configuration baked into it;
  3. Combine 1 and 2 based on the needs of your deployment strategy.
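As an illustration of option 2, here is a minimal sketch of a Dockerfile that bakes static content into an nginx image instead of mounting it at run time (the file paths are assumptions for illustration):

```dockerfile
# Start from the official nginx image
FROM nginx:1.25

# Bake the site content and config into the image itself,
# so no volumes need to be mounted when it runs
COPY html/ /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/nginx.conf
```

An image built this way is fully self-contained: the same image can be pushed to a registry and run anywhere with no extra parameters.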

Running multiple docker containers together as a cluster

Running a single docker image can help you during development, for example to run swagger, a database, etc. But what about a simple deployment?

(diagram: Simple Deployment of a Node.js App)
  • The outside world can talk to the cluster only via the exposed IP port(s) bound to a specific service (a running container; nginx in the diagram above);
  • Any container can be scaled vertically (given more CPU and memory) or horizontally (multiple instances of the same container are run); the scaling implementation varies across cluster orchestration layers and can be automated (more incoming requests lead to more running containers);
  • The cluster of containers can run on a single physical machine, on multiple machines, or without managing physical infrastructure at all, e.g. with AWS Fargate.
(diagram: our goal in the practice below)

Practice

In the tutorial, we developed a backend REST API service with Node.js. Let’s use it as a base to build a docker image and run it in a docker swarm cluster. Once you finish this part, you’ll have a complete picture, and you will be able to understand and use any other container orchestration layer.

$ git clone https://github.com/losikov/api-example.git
$ cd api-example
$ git checkout tags/v8.0.0

Build a Docker Image of a Node.js App

Create a Dockerfile file in the root of the project with the following content:
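A minimal sketch of what the Dockerfile might contain for a Node.js/TypeScript project like this one (the node version, yarn usage, and build/start scripts are assumptions; the port matches the run examples below):

```dockerfile
# LTS node base image (version is an assumption)
FROM node:12

WORKDIR /app

# Install dependencies first to leverage docker layer caching
COPY package.json yarn.lock ./
RUN yarn install

# Copy the sources and build the project
COPY . .
RUN yarn build

# The API listens on port 3000 (see the run examples below)
EXPOSE 3000

CMD ["yarn", "start"]
```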

$ docker build -t api-example . # build an image from the Dockerfile and tag it
$ docker build -t <your account>/api-example . # losikov/api-example
$ docker image ls # list local images
$ docker image rm <repository name or image id>
$ docker image prune --all --force # remove all unused images
$ docker push <your account>/api-example # losikov/api-example

Run a Docker Image of a Node.js App

Above, there was an example of a run command for mysql. Let’s run it now and review the arguments:

$ mkdir db
$ docker run --rm -p 3306:3306 --name some-mysql -v "$(pwd)"/db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest
# --rm - remove the container automatically when it exits or is killed
# -p 3306:3306 - bind <host port>:<container port>; you can specify ranges and tcp (default)/udp, and set a host port different from the default port the service in the container listens on
# --name some-mysql - a name to use in the CLI instead of the container ID
# -v "$(pwd)"/db:/var/lib/mysql - mount the local db folder into the container as /var/lib/mysql (a bare name like db would create a named volume instead of using the local folder)
# -e MYSQL_ROOT_PASSWORD=my-secret-pw - an environment variable
# -d - run detached, in the background - try without it
# mysql:latest - image name in the registry and tag
$ docker ps # list running containers
$ docker stats # info about running containers (unix top)
$ docker logs -f some-mysql # show logs (-f == tail -f)
$ docker exec -it some-mysql /bin/bash # join container's bash
$ docker exec some-mysql /usr/bin/mysqldump --password=my-secret-pw user # execute a command in a container
$ docker cp test.file some-mysql:/tmp/test.file # copy a file from your host file system to a container's file system
$ docker kill some-mysql # or by container id
$ ./scripts/run_dev_dbs.sh -r
$ docker run --rm -p 3000:3000 --name api-example -e REDIS_URL=redis://192.168.1.4:6379 -e MONGO_URL=mongodb://192.168.1.4/exmpl api-example

Run Local Cluster with Docker Compose

Check the latest diagram above with the cluster structure. The cluster has nginx working as a reverse proxy, which gives a lot of the benefits described in the Theory section, and serves all incoming HTTP(S) requests, proxying them to the api-example and wp services. Let's define a config for it. Create a config/swarm/nginx-reverse folder and an nginx.conf file in it with the following content:
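A minimal sketch of what the nginx.conf might contain, assuming the upstream service names api-example and wp from the compose file (the wp.local hostname and the ports are assumptions):

```nginx
events {}

http {
  server {
    listen 80;
    server_name api-example.local;

    location / {
      # api-example resolves to the service inside the cluster network
      proxy_pass http://api-example:3000;
      proxy_set_header Host $host;
    }
  }

  server {
    listen 80;
    server_name wp.local;

    location / {
      proxy_pass http://wp:80;
      proxy_set_header Host $host;
    }
  }
}
```

nginx routes by the Host header here, which is why the curl examples below pass `-H "Host: api-example.local"`.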

$ echo "127.0.0.1 api-example.local" | sudo tee -a /etc/hosts
(diagram: networks inside the cluster)
$ mkdir -p ../docker/mongodb # (relative to root of the project)
$ mkdir -p ../docker/mysql
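The commands below assume a config/swarm/docker-compose.yml describing the cluster. A minimal sketch of what it might contain for this setup (the service names match the text above; the image names, volume paths, and environment values are assumptions):

```yaml
version: "3.7"

services:
  nginx-reverse:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx-reverse/nginx.conf:/etc/nginx/nginx.conf:ro

  api-example:
    image: losikov/api-example
    build: ../..   # project root, relative to config/swarm
    environment:
      - MONGO_URL=mongodb://mongodb/exmpl
      - REDIS_URL=redis://redis:6379

  mongodb:
    image: mongo:latest
    volumes:
      - ../../../docker/mongodb:/data/db

  redis:
    image: redis:latest

  wp:
    image: wordpress:latest
    environment:
      - WORDPRESS_DB_HOST=mysql

  mysql:
    image: mysql:latest
    volumes:
      - ../../../docker/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw
```

Only nginx-reverse publishes a port to the host; all other services are reachable solely through the cluster's internal network, by service name.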
# for interactive mode, to see logs, to debug:
$ docker-compose -f config/swarm/docker-compose.yml up
# CONTROL-C to interrupt/kill it.
# in background mode:
$ docker-compose -f config/swarm/docker-compose.yml up -d
# to kill/remove running in background mode:
$ docker-compose -f config/swarm/docker-compose.yml down
(screenshots: docker stats and docker ps output)
$ docker-compose -f config/swarm/docker-compose.yml up -d
$ while true ; do curl --silent -H "Host: api-example.local" http://127.0.0.1/api/v1/hello > /dev/null ; done # generate load to observe in docker stats

How to update already running services in the cluster?

After you make any code changes, you can rebuild the image as before, but now use the compose commands instead:

$ docker-compose -f config/swarm/docker-compose.yml build
$ docker-compose -f config/swarm/docker-compose.yml push
$ docker-compose -f config/swarm/docker-compose.yml up -d
To run the cluster on multiple machines, docker has a built-in orchestration layer, docker swarm. Initialize it on the master node:

$ docker swarm init
(screenshots: docker swarm init on the master, docker swarm join on a worker, and the master's node list after the worker joined)
$ docker stack deploy -c config/swarm/docker-compose.yml example # deploy the stack to the swarm
$ docker stack rm example # remove the stack

What about mounted volumes if a service runs on a random node in the cluster?

We can either mount a volume on all the nodes (e.g. via nfs), or make a service which mounts a volume run on the specific node where that volume exists. Let's check how to do the latter.
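One way to pin a service to a node in docker swarm is a placement constraint in the compose file. A sketch, assuming a node labeled with db=true (the label name and volume path are assumptions):

```yaml
services:
  mongodb:
    image: mongo:latest
    volumes:
      - ../../../docker/mongodb:/data/db
    deploy:
      placement:
        constraints:
          - node.labels.db == true
```

The label itself is added from the master with `docker node update --label-add db=true <node>`, after which the swarm scheduler will only place the mongodb service on nodes carrying that label.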

(diagram: services distribution over nodes in the cluster)

To Sum Up

Let’s list the key things we went through:

  • how to build a docker image and push it to a repository;
  • how to run a container and pass arguments to it (exposed ports, volumes, environment variables);
  • how to manage running containers;
  • how to manage the local image repository;
  • how to run a cluster with connected services in it using docker-compose.yml;
  • how to build and push multiple images using docker-compose.yml;
  • how to mount volumes;
  • how to pass environment variables;
  • how to scale vertically and horizontally;
  • how to create a swarm and add nodes to it;
  • how to assign services to run on specific nodes in a cluster.
