How to run migrations inside a Dockerfile

Modern software development requires versioning our changes, keeping previous versions available, and controlling the risks that come with evolving a system.

In the context of data persistence, databases evolve through migrations: a tool that manages all the structural changes we make to our database schema.
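To make that concrete: this article's example app uses Knex, and a Knex migration file looks roughly like the sketch below. The table and column names are purely illustrative, not part of any real project:

```typescript
// Hypothetical Knex migration file (e.g. migrations/create_conversations.ts).
// Each migration pairs a structural change (up) with how to undo it (down),
// so the schema can move forward and backward between versions.

export async function up(knex: any): Promise<void> {
  // Apply the change: create an illustrative "conversations" table.
  await knex.schema.createTable("conversations", (table: any) => {
    table.increments("id").primary();   // auto-incrementing primary key
    table.string("topic").notNullable();
    table.timestamps(true, true);       // created_at / updated_at columns
  });
}

export async function down(knex: any): Promise<void> {
  // Revert the change, rolling the schema back to the previous version.
  await knex.schema.dropTable("conversations");
}
```

Running `knex migrate:latest` applies every migration that has not been applied yet, which is exactly what we will automate inside the container.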

However, when using a containerized environment, we might encounter the following issue:

💡
How can I keep my database updated in an automated way?

That is the question this article answers. As an example, I will use a containerized back-end application built with Node.js. Below is its Dockerfile:

FROM node:18-alpine
WORKDIR /app
# With multiple COPY sources, the destination must be a directory ending in "/"
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm
RUN pnpm install
# Bring in the docker-compose-wait binary from its published image
COPY --from=ghcr.io/ufoscout/docker-compose-wait:latest /wait /wait
COPY . .
RUN pnpm build
EXPOSE 3000
# Wait for the database, run pending migrations, then start the app
CMD /wait && pnpm knex migrate:latest && node dist/src/main/index.js

So, as you can see, it's an image based on the official Node.js 18 image (the Alpine variant, which keeps the image small). I install pnpm as the package manager, install the dependencies, transpile the TypeScript code to JavaScript with pnpm build, and expose port 3000.

💡
But you didn't mention what the "docker-compose-wait" script does!

That's true, my dear friend. And that's because this tool is what lets us achieve our goal: automating database updates through migrations.

According to the utility's repository documentation, they define it as:

A small command-line utility to wait for other docker images to be started while using docker-compose (or Kubernetes or docker stack or whatever).
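In other words, it blocks until the services you list are reachable. To build intuition, here is a rough sketch of the idea in TypeScript. This is not docker-compose-wait's actual implementation (the real tool is a standalone binary); it just shows the polling pattern: keep attempting a TCP connection until it succeeds or a deadline expires:

```typescript
// Sketch of a "wait for host:port" utility: retry a TCP connection
// until the target accepts it or an overall timeout is exceeded.
import { createConnection } from "node:net";

export function waitForPort(
  host: string,
  port: number,
  timeoutMs = 30_000,   // overall deadline (docker-compose-wait defaults to 30 s)
  retryDelayMs = 1_000  // pause between attempts
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;

  return new Promise((resolve) => {
    const attempt = () => {
      const socket = createConnection({ host, port });
      socket.once("connect", () => {
        socket.end();
        resolve(true); // the service is accepting connections
      });
      socket.once("error", () => {
        socket.destroy();
        if (Date.now() >= deadline) {
          resolve(false); // gave up: the service never came up
        } else {
          setTimeout(attempt, retryDelayMs);
        }
      });
    };
    attempt();
  });
}
```

The real utility is configured entirely through environment variables such as WAIT_HOSTS and WAIT_TIMEOUT, which is what makes it so convenient inside a Dockerfile's CMD.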

So we copy the utility's /wait binary into the image built by the Dockerfile, and then modify our docker-compose.yml so the application waits for the database service (in this case, via the port my MySQL database listens on inside the Docker network):

version: "3.9"
services:
  conversations-getter:
    container_name: conversations-getter
    build:
      context: ./conversations-getter
    environment:
      WAIT_HOSTS: db:3306
    ports:
      - 3000:3000
    depends_on:
      - db
    networks:
      - reports
  conversations-processor:
    container_name: conversations-processor
    build:
      context: ./conversations-processor
      target: production
    ports:
      - 3333:3333
    depends_on:
      - db
    networks:
      - reports
  db:
    container_name: reports-db
    image: mysql:5.7
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: "reports-db"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
      MYSQL_ROOT_PASSWORD: "password"
    volumes:
      - data:/var/lib/mysql
    networks:
      - reports

networks:
  reports:
    driver: bridge

volumes:
  data:

Perfect! Now, through the WAIT_HOSTS environment variable passed into the container, the /wait script blocks until the db container is listening on TCP port 3306; only then do the migrations run, and finally the service starts. (Note that depends_on alone only controls start order, not readiness, which is exactly why the wait step is needed.)
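For completeness, `pnpm knex migrate:latest` only works if Knex knows how to reach the database. Below is a minimal, hypothetical knexfile sketch whose connection details match the compose file above; the DB_HOST variable is an assumption of this sketch, not something the compose file defines:

```typescript
// knexfile.ts — hypothetical sketch; adapt the names to your own project.
// `knex migrate:latest` reads this file to know which database to migrate.
const config = {
  client: "mysql2", // assumes the mysql2 driver is a project dependency
  connection: {
    host: process.env.DB_HOST ?? "db", // the compose service name, resolvable on the "reports" network
    port: 3306,
    user: "user",           // matches MYSQL_USER in docker-compose.yml
    password: "password",   // matches MYSQL_PASSWORD
    database: "reports-db", // matches MYSQL_DATABASE
  },
  migrations: {
    directory: "./migrations", // where the migration files live
  },
};

export default config;
```

Because the app container and the db container share the "reports" network, the hostname "db" resolves to the MySQL container, so the same address works for both the wait step and the migrations.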

Written by Thalles Lossurdo