Simple Steps to Install Local Microservices Tools Such as RabbitMQ, Kafka, Redis, etc. with Docker

Suraj Shetty
5 min read

Let's first understand what Docker is and the features of containerization.

Docker is a platform that enables developers to develop, ship, and run applications in containers. Containers allow you to package an application with all its dependencies into a standardized unit for software development. Docker provides tools and a platform to manage these containers efficiently.

Here are the key components of Docker:

  1. Docker Engine: The core component of Docker. It's a lightweight runtime and packaging tool for containers. Docker Engine enables you to create, manage, and run containers on a host machine.

  2. Docker Image: An immutable, standalone, and executable package that contains all the necessary dependencies (libraries, binaries, code, runtime, etc.) to run an application. Images are the building blocks of containers.

  3. Docker Container: An instance of a Docker image that runs as a process on the host machine's operating system. Containers are lightweight, portable, and isolated environments that run applications.
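
To see the difference between an image and a container in practice, here is a minimal example (assuming Docker is already installed) that pulls the public nginx image and runs a container from it:

# Download the nginx image from Docker Hub (the immutable package)
docker pull nginx:alpine

# Start a container from that image (the running instance), mapping host port 8080 to container port 80
docker run --name my-nginx -d -p 8080:80 nginx:alpine

# List running containers, then stop and remove this one; the image stays cached for reuse
docker ps
docker stop my-nginx && docker rm my-nginx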

Now, let's talk about Docker Compose:

Docker Compose is a tool that lets you define and run multi-container Docker applications using a YAML file to set up services, networks, and volumes, simplifying the management of complex applications and their dependencies.

A Docker Compose file (docker-compose.yml) defines your application's services configuration, including services, networks, and volumes sections.

Below is an example of an Nginx and MySQL configuration in a docker-compose file, with a step-by-step explanation in the comments:

version: '3.8'  # Version of the Docker Compose file format

services:  # Defines the services/containers in your application
  web:  # Name of the service
    image: nginx:alpine  # Docker image to use for this service
    ports:
      - "8080:80"  # Maps port 8080 on the host to port 80 on the container
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf  # Mounts a local file into the container
    networks:
      - my-network  # Specifies the network the container should connect to

  db:  # Another service for database
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example  # Environment variables for the container
    volumes:
      - db-data:/var/lib/mysql  # Mounts a volume to persist data
    networks:
      - my-network

networks:  # Defines networks for your application
  my-network:  # Name of the network
    driver: bridge  # Specifies the network driver

volumes:  # Defines volumes for your application
  db-data:  # Name of the volume

Explanation of the Docker Compose file:

  1. version: Specifies the version of the Docker Compose file format.

  2. services: Defines the services/containers in your application; each service is configured with options such as image, ports, volumes, environment variables, etc.

  3. networks: Defines networks for your application, enabling communication between containers.

  4. volumes: Defines volumes for your application, allowing you to persist data across container restarts.

This Docker Compose file specifies two services (web using the Nginx image, exposing port 8080, mounting a local config file, and db using the MySQL image, setting environment variables, persisting data with a volume), a network (my-network), and a volume (db-data), with both services connecting to my-network.

You can run the docker-compose up command in the directory containing the docker-compose.yml file to start the application it defines. Similarly, docker-compose down stops and removes the containers.
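
For reference, a typical local workflow with the example file above looks like this (run from the directory containing docker-compose.yml):

docker-compose up -d        # Start all services in the background
docker-compose ps           # Check the status of the running containers
docker-compose logs -f web  # Follow the logs of a single service
docker-compose down         # Stop and remove the containers and the network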

Now that you understand Docker and Docker Compose, here is a list of microservices tools, each accompanied by a Docker Compose file that you can use for local setup and integrate into your project.


RabbitMQ

RabbitMQ is an open-source message broker that originally implemented the Advanced Message Queuing Protocol (AMQP) and now includes a plug-in architecture supporting other protocols such as the Streaming Text Oriented Messaging Protocol (STOMP) and MQ Telemetry Transport (MQTT).

version: "3.2"

services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: 'rabbitmq-container'
    hostname: rabbitmq
    ports:
        - 5672:5672
        - 15672:15672
    volumes:
        - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
        - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
        - rabbitmq_go_net

networks:
  rabbitmq_go_net:
    driver: bridge
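
Once the broker is up, you can verify it is healthy from the command line; the management UI is served on port 15672, and the default guest / guest credentials should work for this local setup:

docker-compose up -d
docker-compose exec rabbitmq rabbitmq-diagnostics ping   # Should report that the node is running
# Management UI: http://localhost:15672 (default credentials: guest / guest)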

Elastic Stack (Elasticsearch, Logstash, Kibana)

A set of tools that can securely and reliably gather data from any source, in any format, and then allow you to search, analyze, and visualize it in real-time.

version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
    networks:
      - elastic

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.2
    container_name: kibana
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - elastic

  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.2
    container_name: logstash
    volumes:
      - ./logstash-config/:/usr/share/logstash/pipeline/
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    ports:
      - "5000:5000"
      - "9600:9600"
    networks:
      - elastic

  sample_data:
    image: docker.elastic.co/beats/metricbeat:7.15.2
    container_name: sample_data
    command: metricbeat -e -E output.elasticsearch.hosts=["elasticsearch:9200"]
    depends_on:
      - elasticsearch
    networks:
      - elastic

networks:
  elastic:

Note: You might encounter errors when running the Elastic Stack locally, often related to its Java (JDK) dependency and memory settings.
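
A quick way to confirm the stack came up with the ports mapped above:

docker-compose up -d
curl http://localhost:9200      # Elasticsearch should answer with a JSON banner describing the cluster
# Kibana is available in the browser at http://localhost:5601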


Apache Kafka

Apache Kafka is a distributed event store and stream-processing platform.

version: "2"

services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
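
To smoke-test the broker, you can use the Kafka CLI scripts that ship inside the Bitnami image (a minimal sketch; test-topic is just an example name, and the scripts live under /opt/bitnami/kafka/bin, which is on the container's PATH):

docker-compose up -d
# Create a topic
docker-compose exec kafka kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092
# Produce a few messages (type lines, then Ctrl+C to exit)
docker-compose exec kafka kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092
# Read the messages back from the beginning
docker-compose exec kafka kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server localhost:9092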

Redis

Redis, previously open-source and now "source available," serves as a distributed, in-memory key-value database, cache, and message broker, offering optional durability.

version: '3.6'

services:
  redis:
    container_name: redis-container
    image: "redis:alpine"
    hostname: redis
    ports:
      - "6379:6379"
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
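
You can check that the cache is responding with redis-cli, which is included in the redis:alpine image:

docker-compose up -d
docker-compose exec redis redis-cli ping            # Expected reply: PONG
docker-compose exec redis redis-cli set greeting hello
docker-compose exec redis redis-cli get greeting    # Expected reply: "hello"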

Prometheus and Grafana

Prometheus and Grafana are leading tools for application monitoring and analytics. Prometheus is an open-source monitoring and alerting platform that collects and stores metrics as time-series data, while Grafana is an open-source web application for analytics and interactive visualization.

version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
    networks:
     - monitoring

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped
    networks:
     - monitoring

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.45.0     
    container_name: cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    restart: unless-stopped
    privileged: true
    networks:
     - monitoring

  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
    networks:
      - monitoring

networks:
  monitoring:
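
The prometheus service above expects a configuration file at ./config/prometheus.yml on the host. A minimal sketch that scrapes Prometheus itself plus the cAdvisor and Node Exporter containers defined above could look like this (the job names are just examples; Node Exporter listens on port 9100 by default):

global:
  scrape_interval: 15s              # How often Prometheus scrapes its targets

scrape_configs:
  - job_name: prometheus            # Prometheus scraping its own metrics endpoint
    static_configs:
      - targets: ["prometheus:9090"]

  - job_name: cadvisor              # Container metrics from cAdvisor
    static_configs:
      - targets: ["cadvisor:8080"]

  - job_name: node_exporter         # Host metrics from Node Exporter
    static_configs:
      - targets: ["node_exporter:9100"]

Once everything is running, Prometheus is reachable at http://localhost:9090 and Grafana at http://localhost:3000 (default login admin / admin); in Grafana, add Prometheus as a data source using the URL http://prometheus:9090.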

SonarQube

SonarQube, developed by SonarSource, is an open-source platform for continuous code quality inspection, automatically reviewing code with static analysis to identify bugs and code smells across 29 programming languages.

version: '3'

services:
  sonarqube:
    image: sonarqube:latest
    container_name: sonarqube
    ports:
      - "9000:9000"
    environment:
      - SONAR_JDBC_URL=jdbc:postgresql://sonarqube-db:5432/sonar
      - SONAR_JDBC_USERNAME=sonar
      - SONAR_JDBC_PASSWORD=sonar
    networks:
      - sonarnet
    depends_on:
      - sonarqube-db

  sonarqube-db:
    image: postgres:alpine
    container_name: sonarqube-db
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
      - POSTGRES_DB=sonar
    networks:
      - sonarnet

networks:
  sonarnet:
    driver: bridge
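
After the containers start, the web UI is served at http://localhost:9000, and the first login uses the default admin / admin credentials (SonarQube then asks you to change the password). To analyze a project, you can run the official scanner image against the server; this is a hedged sketch in which the token placeholder must be replaced with one generated from your SonarQube account page, and the network name may be prefixed by Compose with your project directory name:

docker-compose up -d
# Run an analysis of the current directory with the official scanner image
docker run --rm \
  --network sonarnet \
  -e SONAR_HOST_URL=http://sonarqube:9000 \
  -e SONAR_TOKEN=<your-generated-token> \
  -v "$(pwd):/usr/src" \
  sonarsource/sonar-scanner-cli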

Conclusion:

There might be challenges when installing Docker on your system, or you might encounter errors with the Docker Compose files. Working with microservices and their integration can be tough at the beginning, but once you understand how to use these tools, it becomes as easy as pie.

Thanks For Reading This Blog.


Written by

Suraj Shetty

Passionate backend developer with a strong foundation in designing and implementing scalable and efficient server-side solutions. Specialized in creating robust APIs and database management. Committed to staying ahead in technology trends, I have a keen interest in DevOps practices, aiming to bridge the gap between development and operations for seamless software delivery. Eager to contribute to innovative projects and collaborate with like-minded professionals in the tech community. Let's connect and explore the possibilities of creating impactful solutions together! #BackendDevelopment #DevOps #TechInnovation