GitLab Runner Architecture

OLUWASEUN

The GitLab Runner is a core component of the GitLab CI/CD architecture, responsible for running the jobs defined in the pipeline. It’s a lightweight agent that fetches and executes jobs from GitLab, whether on a local machine, a cloud instance, or a Kubernetes cluster. The GitLab Runner works by polling GitLab for jobs, running them on the specified environment, and returning the results (such as test results, build artifacts, or deployment logs) to GitLab.

GitLab Runners provide an isolated environment for running CI/CD jobs, ensuring that jobs don’t interfere with one another and are executed consistently. Runners are highly customizable, and different executors can be used depending on the workload and environment requirements.

How GitLab Runner Works

  1. Registration: GitLab Runners must first be registered with a GitLab instance. Once registered, the runner becomes available to execute jobs. Runners can be shared across multiple projects (shared runners) or dedicated to specific projects (specific runners).

  2. Polling for Jobs: Runners regularly poll the GitLab instance for pending jobs. Once a job is assigned, the runner fetches the necessary code and artifacts to execute the job.

  3. Executing Jobs: The runner executes the job using one of its configured executors (Docker, shell, Kubernetes, etc.), providing the necessary environment for the job to run.

  4. Job Isolation: Each job runs in isolation, which ensures that changes made during one job (e.g., creating files or modifying system settings) do not affect other jobs.

  5. Returning Results: After job execution, the runner collects the job’s output (logs, artifacts, etc.) and sends it back to the GitLab instance, marking it as completed.
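Registration ultimately writes an entry into the runner’s config.toml file, which the steps above then rely on. A minimal sketch of such an entry for a Docker runner is shown below; the name, URL, and token are placeholders, not values from this article:

```toml
concurrent = 4                        # how many jobs this runner may run at once

[[runners]]
  name = "docker-runner"              # placeholder name
  url = "https://gitlab.example.com"  # placeholder GitLab instance URL
  token = "RUNNER_AUTH_TOKEN"         # placeholder token issued at registration
  executor = "docker"
  [runners.docker]
    image = "node:22"                 # default image when a job does not specify one
```

In practice, the `gitlab-runner register` command generates this file interactively, so you rarely need to edit it by hand.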

Runner Executors

GitLab Runners support several types of executors, which define how and where the jobs are run. Executors provide flexibility, allowing CI/CD pipelines to be executed in different environments based on the project’s requirements. The most commonly used executors are Docker, Shell, and Kubernetes.

  1. Docker Executor: The Docker executor is one of the most widely used executors in GitLab CI/CD. It allows jobs to be run in isolated Docker containers, which provides a clean and reproducible environment for each job. Using the Docker executor, jobs are executed in containers spawned specifically for each pipeline run, ensuring that the environment is always consistent and independent of the host machine.

    • How it works: When a job is triggered, the runner starts a new Docker container using a pre-defined image (or a custom image specified in the .gitlab-ci.yml file). The job’s commands are then executed inside this container. The container is destroyed after the job is completed, ensuring no residual state is carried over to future jobs.

    • Advantages:

      • Isolation: Each job runs in its own container, ensuring complete isolation from other jobs.

      • Consistency: Using Docker images ensures that the same environment (with the same dependencies and tools) is used across all pipelines.

    • Use Cases:

      • When you need consistent environments across multiple jobs and pipelines.

      • When you want to leverage Docker’s containerization capabilities for faster, repeatable builds.

      • Ideal for projects using containerized applications or microservices.

Example of using the Docker executor in a .gitlab-ci.yml file:

        image: node:22

        stages:
          - build

        build-job:
          stage: build
          script:
            - npm install
            - npm run build

This configuration uses the node:22 Docker image, ensuring the job runs in a container with Node.js 22 installed.
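Because each job gets fresh containers, the Docker executor can also attach service containers (for example, a database) alongside the job container. A hypothetical sketch, assuming the test suite expects a Postgres instance:

```yaml
image: node:22

services:
  - postgres:16                 # service container started alongside the job

variables:
  POSTGRES_PASSWORD: example    # required by the postgres image

stages:
  - test

test-job:
  stage: test
  script:
    - npm ci
    - npm test
```

With the Docker executor’s default networking, the service is reachable from the job container under the hostname postgres.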

  2. Shell Executor: The shell executor runs jobs directly on the machine where the GitLab Runner is installed, using the system’s native shell (e.g., Bash, PowerShell, or Zsh). Unlike the Docker executor, the shell executor does not provide isolation—jobs run in the same environment as the host machine.

    • How it works: The runner executes the job using the machine’s shell, allowing the job to directly interact with the host system’s file system, network, and processes.

    • Advantages:

      • Simple Setup: The shell executor is easy to set up, requiring no additional configuration for Docker or virtual machines.

      • Access to the Host System: Since jobs run directly on the host machine, they can access system resources like files, databases, and network interfaces.

    • Disadvantages:

      • No Isolation: Jobs can affect the host system’s state, which means one job could interfere with others, leading to inconsistent results.

      • Security Risks: Since jobs have access to the host system, there’s a higher risk of security breaches, especially when running untrusted code.

    • Use Cases:

      • It is ideal for small projects or internal pipelines where speed and simplicity are more important than isolation.

      • When you need to access specific system resources that are not readily available inside a Docker container.

Example of using the Shell executor in a .gitlab-ci.yml file:

        stages:
          - test

        test-job:
          stage: test
          script:
            - echo "Running tests"
            - ./run-tests.sh

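Which executor a runner uses is set on the runner itself, not in the .gitlab-ci.yml file. A minimal hypothetical config.toml fragment for a shell runner (all values are placeholders):

```toml
[[runners]]
  name = "shell-runner"               # placeholder name
  url = "https://gitlab.example.com"  # placeholder GitLab instance URL
  token = "RUNNER_AUTH_TOKEN"         # placeholder token
  executor = "shell"
  shell = "bash"                      # native shell used to run job scripts
```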
  3. Kubernetes Executor: The Kubernetes executor allows jobs to be executed in a Kubernetes cluster. This executor is perfect for cloud-native applications and large-scale systems that use Kubernetes for container orchestration. Instead of running jobs on the local machine or in a single Docker container, the Kubernetes executor spawns a pod (a group of one or more containers) for each job, which is then scheduled and managed by the Kubernetes cluster.

    • How it works: When a job is triggered, the Kubernetes executor starts a pod in the configured Kubernetes cluster. Each pod contains the necessary containers to run the job, which can include a build container, test container, and any supporting services. Once the job completes, the pod is destroyed, ensuring no residual state.

    • Advantages:

      • Scalability: Kubernetes automatically manages the resources needed for job execution, scaling as needed based on demand.

      • Isolation: Each job runs in its own pod, ensuring complete isolation and consistency across jobs.

      • Cloud-Native Workflows: The Kubernetes executor is ideal for running jobs in cloud-native environments that are already using Kubernetes for application deployment and management.

    • Use Cases:

      • When working with microservices and containerized applications that are already deployed to Kubernetes.

      • When you need to scale CI/CD pipelines horizontally across a large number of jobs and environments.

      • Ideal for cloud-native applications that need to integrate seamlessly with Kubernetes orchestration.

Example of a deployment job that runs on a runner using the Kubernetes executor (the executor itself is selected in the runner’s configuration, while the .gitlab-ci.yml job definition looks like any other):

        image:
          name: bitnami/kubectl:latest   # image that ships the kubectl CLI
          entrypoint: [""]               # clear the image entrypoint so job scripts run

        stages:
          - deploy

        deploy-job:
          stage: deploy
          script:
            - kubectl apply -f deployment.yaml
            - kubectl rollout status deployment/my-app

This configuration deploys an application to a Kubernetes cluster using the kubectl CLI, so the job must run in an image that has kubectl installed.
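On the runner side, the Kubernetes executor is selected in config.toml, where you can also control the namespace and default resources for job pods. A hypothetical sketch (all values are placeholders):

```toml
[[runners]]
  name = "k8s-runner"                 # placeholder name
  url = "https://gitlab.example.com"  # placeholder GitLab instance URL
  token = "RUNNER_AUTH_TOKEN"         # placeholder token
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-ci"           # namespace where job pods are created
    image = "alpine:3.19"             # default image when a job does not specify one
    cpu_request = "500m"              # resource requests applied to job pods
    memory_request = "512Mi"
```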

By understanding the strengths and use cases of each executor—Docker, shell, and Kubernetes—you can make informed decisions about how to configure your GitLab CI/CD pipelines to best fit your project’s needs.

You can check out this playlist to learn more about GitLab CI/CD.

Written by

OLUWASEUN

Oluwaseun is a versatile Network, Cloud, and DevOps Engineer with over six years of experience. He also possesses DevOps and DevSecOps expertise, ensuring efficient and secure solutions in cloud environments. With over two years of experience as a trainer, he has trained over 200 participants in Cloud and DevOps and manages a YouTube channel dedicated to sharing his knowledge. Oluwaseun has a proven reputation for delivering flexible, scalable, and secure cloud solutions using the AWS Well-Architected Framework. He collaborates seamlessly with business stakeholders to achieve project objectives on time. A visionary professional, he excels in researching and adopting new technologies aligned with strategic business needs. He is meticulous, creative, and adaptable to diverse work cultures.