Why We Transitioned to Downstream Pipelines in GitLab

Tyler Durham

CI/CD Pipeline Refactor: From Monolith to Downstream

Overview

We’ve refactored our CI/CD setup to adopt downstream pipelines, replacing the monolithic structure we previously had. This change improves clarity, speed, reliability, and scalability across all environments (sandbox, UAT, production).


Before: Monolithic Pipeline

Issues with the Old Setup:

  • All jobs lived in one pipeline — tightly coupled build, deploy, security, test jobs

  • Deploys were tightly linked to builds — you couldn’t deploy without rebuilding

  • Cluttered UI — all environments were visible even when not relevant

  • Hard to debug or retry — a failed job often meant restarting everything

  • Scaling was painful — adding new environments or services bloated the pipeline


After: Modular Downstream Pipelines

Parent Pipeline (main pipeline)
├── build_job (build + artifact)
└── trigger_deploy_pipeline (child, strategy: depend)
      ├── deploy_sandbox
      ├── deploy_uat
      └── deploy_prod
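
Here's roughly what the parent pipeline looks like as a .gitlab-ci.yml sketch. Job names match the diagram above; the build command and artifact path are placeholders, not our exact config.

# .gitlab-ci.yml (parent pipeline) - sketch only
stages:
  - build
  - deploy

build_job:
  stage: build
  script:
    - make build                          # placeholder build step
  artifacts:
    paths:
      - dist/                             # placeholder artifact path

trigger_deploy_pipeline:
  stage: deploy
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID   # lets child jobs fetch the parent's artifact
  trigger:
    include: deploy/.gitlab-ci.yml        # placeholder path to the child pipeline config
    strategy: depend                      # parent job mirrors the child pipeline's status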

Benefits of the New Setup:

  • Separation of concerns — build and deploy logic are decoupled

  • Artifact reuse — deploy pipelines consume artifacts from the build job (see the child pipeline sketch under Sample Layout below)

  • Re-run deploys — re-run only the deploy pipeline without triggering a new build

  • Environment isolation — deploys are environment-specific and scoped

  • Better UI — each pipeline shows only the jobs relevant to its stage

More Info on Downstream Pipelines: https://docs.gitlab.com/ci/pipelines/downstream_pipelines/

Sample Layout
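
Here's a sketch of what the child deploy pipeline (deploy/.gitlab-ci.yml) can look like. The deploy script is a placeholder, and PARENT_PIPELINE_ID is the variable passed down from the parent trigger job above; the needs entry is what lets each deploy job download the parent's build artifact instead of rebuilding.

# deploy/.gitlab-ci.yml (child pipeline) - sketch only
stages:
  - deploy

deploy_sandbox:
  stage: deploy
  needs:
    - pipeline: $PARENT_PIPELINE_ID       # reuse the artifact from the parent's build_job
      job: build_job
  script:
    - ./deploy.sh sandbox                 # placeholder deploy command
  environment:
    name: sandbox

deploy_uat:
  stage: deploy
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: build_job
  script:
    - ./deploy.sh uat
  environment:
    name: uat
  when: manual                            # gate UAT and prod behind a manual step

deploy_prod:
  stage: deploy
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: build_job
  script:
    - ./deploy.sh prod
  environment:
    name: prod
  when: manual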


Developer-Focused Comparison

Feature | Old Pipeline | Downstream Pipeline Setup
Job coupling | All jobs in one pipeline | Parent/child pipelines
Deploy method | Rebuild + deploy every time | Build once, deploy many
UI visibility | Everything visible every run | Clean, contextual pipelines
Retry support | Retry = rerun the entire flow | Recreate the deploy pipeline only
Releases | Non-existent | Release jobs for sprint, hotfix, and off-cycle releases (see the sketch below)
Automatic semantic versioning | Non-existent | Semantic versions updated automatically based on commits, tags, and branching
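
For the release rows above, a release job can be as small as the sketch below. It uses GitLab's release keyword and the release-cli image; the tag-based rule is a stand-in for our actual semantic-versioning tooling, and it assumes a release stage exists in the pipeline.

# Sketch of a release job - the tag-driven rule is an assumption
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG                  # run only when a version tag is pushed
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "Release $CI_COMMIT_TAG"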

What You Can Do Now

  • Trigger deploys manually or from MR merges (see the sketch after this list)

  • Recreate downstream pipelines to redeploy without rebuilding

  • Extend pipeline logic easily using includes and shared templates

  • Debug more cleanly — failures are isolated to one stage or environment

  • Releases build automatically for sprint, hotfix, and off-cycle releases
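
For example, the deploy trigger can be gated with rules and extended through shared templates. This is a sketch: the template project and file path are hypothetical, but the rules pattern is standard GitLab CI.

# Sketch: manual or merge-driven deploy trigger, plus a shared template include
include:
  - project: platform/ci-templates        # hypothetical shared template project
    file: /templates/deploy.yml

trigger_deploy_pipeline:
  stage: deploy
  trigger:
    include: deploy/.gitlab-ci.yml
    strategy: depend
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'   # runs after an MR merges to the default branch
    - when: manual                                    # otherwise available as a manual run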


Summary

This new structure reflects how we build software: modular, testable, and maintainable. It lets us move faster, deploy more safely, and scale our CI/CD as we grow.
