Using Docker Compose to easily scale your engineering team

Onboarding new developers onto a project is always nice: they bring new ideas, different expertise and outside-the-box thinking. They tend to tackle problems and create solutions in creative ways, adding even more enthusiasm to the team. But before getting down to code, they need to set up their own development environment, which can easily become a headache.

Installing a local database, compiling the right programming language version and resolving library dependencies – possibly across different operating systems – are just a few of the tasks involved.

Today I’ll introduce you to Docker and Compose – a container platform and its simple configuration tool – to help your team get up and running as fast as possible.

“It runs on my machine”

Software development has evolved quickly in the past years, but it is still hard to create a single setup that works across development, staging and production. Currently there are three main ways of doing it:

  • Installing everything locally is the easiest approach, but not recommended in the long run. You may run into conflicts in every part of the stack: wrong versions, broken dependencies, operating system quirks, everywhere. Also, it probably won’t match your production environment, bringing nightmares the night before deployment.
  • Using virtual machines. They provide isolation and portable development environments, but as your stack grows, they cause overhead: running the same operating system multiple times – and on top of each other – is not a wise use of resources.
  • Containers! They are awesome! You can run the exact same environment, with a small footprint, everywhere. For the developer, it means installing the platform and running only a few commands – see the quick example below.
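
For instance, once Docker is installed, spinning up a disposable Python environment takes a single command (python:3.6 is just an example image tag, anything from the registry works the same way):

# Pull the image if needed and open an interactive Python shell inside a throwaway container
docker run --rm -it python:3.6 python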

If you need more information on this evolution, be sure to read this post – it is an awesome guide to better understanding the concepts of virtual machines and containers. Docker’s YouTube channel also has great content about its entire stack!

All options have their benefits and downsides, but the software world is shifting towards a containerized approach. Why? Docker!

What is this blue whale?

Quoting the documentation:

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

To provide its magic, Docker made Linux kernel containerization simple and accessible for everyone, providing an API on top of resources like cgroups and namespaces. It also adds its own concepts, like images, registries, the union filesystem, plain-text configuration files and so on. I could write about each one, but I would just be duplicating this amazing article from the official documentation. Be sure to check it out 😀
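
To see those kernel features in action, note how plain docker run flags map straight onto them. A tiny sketch, with arbitrary example limits and image:

# cgroups: cap the container at 256 MB of RAM and half a CPU core
# namespaces: give it its own hostname, isolated from the host's
docker run --rm --memory 256m --cpus 0.5 --hostname sandbox alpine hostname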

For the developer, this means it is now possible to have application isolation, low overhead and the same setup as their colleagues, quickly and easily.

To speed up the process even more, we will be using Compose: a tool for defining and running multi-container Docker applications. It uses a single file to declare your entire stack and just one command to bring it up.

Enough talk, show me the code!

I assume you have Docker already installed on your machine, but if you don’t, the installation process is well-documented here.
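
To check that everything is in place, the following commands should print the installed versions and run a tiny test container:

docker --version
docker-compose --version
# Pulls a minimal test image and runs it, confirming the whole setup works
docker run hello-world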

The file below is called docker-compose.yml. It defines your containers’ configuration – storage, networking and so on – and usually lives in the project’s root folder. I added comments describing the most important lines, and you can find more useful information in the official documentation.


version: '2'
# Containers are described as services
services:
  # Service name
  db:
    # Docker image to be pulled from the registry
    image: postgres
    environment:
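      # Development-only password; use an env file or secrets for anything real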
      POSTGRES_PASSWORD: '123456A!'
    # Named volume mapping to store database information
    volumes:
      - data-postgres:/var/lib/postgresql/data

  web:
    # Instead of using an image, we build this container from a Dockerfile inside the ./web directory (a minimal example follows the file)
    build: ./web
    # Start the Django dev server; the sleep gives Postgres a few seconds to boot first
    command: bash -c "sleep 10 && python manage.py runserver 0.0.0.0:8000"
    # Mount the Django project code into the container (point this at your own project directory)
    volumes:
      - "./django-project:/code"
    # Publish container port 8000 on localhost (expose alone would only reach linked containers)
    ports:
      - "8000:8000"
    depends_on:
      - db

# Named volume definition, so database data survives container restarts and rebuilds
volumes:
  data-postgres:
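
The build: ./web line expects a Dockerfile inside that directory. Here is a minimal sketch of what it could look like, assuming your dependencies are listed in a requirements.txt (the base image tag and the file name are assumptions, adjust them to your project):

# Base image with Python preinstalled (example tag)
FROM python:3.6
# Unbuffered output, so Django logs appear immediately
ENV PYTHONUNBUFFERED 1
# /code matches the volume mapping in docker-compose.yml
WORKDIR /code
# Copy and install dependencies first, so Docker can cache this layer
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . /code/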

After creating it, you simply need to run docker-compose up to get your environment up and running!
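
Beyond up, a handful of Compose commands cover most of the daily workflow:

# Start everything in the background
docker-compose up -d
# Follow the logs of the web service
docker-compose logs -f web
# Run one-off commands inside a new web container, like Django migrations
docker-compose run web python manage.py migrate
# Stop and remove the containers (named volumes survive)
docker-compose down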

With this simple file, you are able to run a Django + PostgreSQL project within Docker containers! You may have to change some environment variables or make minor changes to get it working. You can read about a full configuration of a Django project, with PostgreSQL and NGINX as a reverse proxy at my personal blog.

Wrapping up!

As the title says, you can use this approach to quickly and easily scale your engineering team. By having Docker Compose and a docker-compose.yml in your project, you enable anyone to start working with your team after just a few commands. You also create a standardized development environment for everyone, which can even be run in production – but that is an adventure for my next post. Stay tuned!

About the author.

Jonatas Baldin

Believes in love-driven development. Linux guy and Python addict. Enthusiast of DevOps culture and open-source projects.