[FIXED] Avoid Git Headaches with Dynamic Nested Data Pipelines in Fabric


Introduction
If you’ve worked with Fabric pipelines long enough, chances are you’ve already run into this peculiar “bug”. You build a layered, modular pipeline setup—one pipeline gracefully invoking another—and everything is nice and clean. Then you hit “Sync to Git” or deploy across environments, and suddenly things fall apart.
Sidebar:
Is it just me, or has anyone else started avoiding em dashes (—) ever since ChatGPT made them its telltale sign? They used to feel sharp—now they just feel suspicious.
Nested pipelines in Fabric don’t behave well with Git integration. The moment you start using “Invoke Pipeline” with hard-coded references, the whole setup becomes fragile, and Fabric loses track of what it’s referencing. You’ll either get cryptic errors or broken links, and good luck debugging which pipeline reference failed and why.
Why Does This Matter?
This is more than just an annoyance. When your Git-integrated setup breaks down over something as basic as pipeline modularity, you end up with messy manual fixes, lost productivity, and deployment nightmares.
If you're working in a multi-environment setup (dev, test, prod), this gets even worse. Static pipeline names or IDs just don’t cut it.
This blog will guide you through a simple, straightforward solution to dynamically invoke nested Fabric pipelines using Web Activities and REST APIs. It will show you step-by-step how to replace static pipeline references with dynamic lookups, ensuring smooth and reliable synchronization across environments.
Pre-requisites
To follow along, ensure you have:
(Obviously) a Fabric workspace with at least two pipelines (one calling the other).
A working Fabric Pipelines connection already configured.
A Fabric API connection set up to authenticate and call the Fabric REST APIs from Web Activities.
Familiarity with REST APIs (no advanced API knowledge required).
Solution: Dynamic Invocation
One of the most common patterns where nested pipelines come into play is a metadata-driven framework. For instance, you might have a “Job Runner” pipeline that loops through rows in a metadata table. Based on each row, it dynamically decides which notebook or data transformation to run by calling a “Notebook Runner” pipeline, passing in the notebook name, target table, or load type, among other parameters.
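To make that concrete, here is a hypothetical shape such a metadata table might take. The column names below are made up purely for illustration; your framework's metadata schema will look different:

```python
# Hypothetical metadata rows a "Job Runner" pipeline could loop over.
# Column names (notebook_name, target_table, load_type) are illustrative only.
metadata_rows = [
    {"notebook_name": "nb_load_sales",     "target_table": "sales",     "load_type": "incremental"},
    {"notebook_name": "nb_load_customers", "target_table": "customers", "load_type": "full"},
]

# For each row, the Job Runner would invoke the "Notebook Runner" pipeline,
# passing these values in as pipeline parameters.
for row in metadata_rows:
    print(f"Invoke Notebook Runner with parameters: {row}")
```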
If you set this up using the Invoke Pipeline activity and simply select the target pipeline from the dropdown, everything works fine, until you push it to Git.
As soon as you switch branches or deploy to another workspace, the reference to the nested pipeline breaks. Fabric hard-codes an internal reference ID behind the scenes, and this ID may not mean anything in the new environment. Or the caller pipeline might be synced before the referenced pipeline. Your pipeline looks fine in the UI, but it fails at runtime.
Instead of relying on the hard-coded reference, you can add a preceding Web Activity that calls the Fabric REST API, so the nested pipeline can be resolved and triggered dynamically. Here’s how it works:
Option 1: Use Web Activity + Invoke Pipeline Activity (Internal Fabric Flow)
Use a Web Activity to hit the List Items API and fetch the details of all pipelines in the workspace.
Connection URL:
https://api.fabric.microsoft.com/v1
Relative URL:
@concat('workspaces/', pipeline().DataFactory, '/items?type=DataPipeline')
Follow it up with a Filter Activity to extract the id of the target pipeline by matching its display name:
Items:
@activity('Get Pipelines').output.value
Condition:
@equals(item().displayName, 'Notebook Runner')
Add a Set Variable activity to store the id from the filtered result in a pipeline variable:
Variable value:
@first(activity('Select Pipeline').output.value).id
Finally, use an expression in the Invoke Pipeline activity to reference that variable as the pipeline to invoke.
By dynamically extracting the pipeline ID at runtime from the List Items API output, this method avoids hard-coded references, ensuring that pipeline calls remain intact across Git branches and environments.
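If it helps to see the whole lookup in one place, here is a minimal Python sketch of what the Web Activity, Filter, and Set Variable combination is doing together. The token acquisition is assumed, and the workspace ID and pipeline name are placeholders you would supply yourself:

```python
import requests

# Assumptions: you already have an Entra ID access token for the Fabric API,
# and WORKSPACE_ID / TARGET_NAME are placeholders for your own values.
FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<your-workspace-id>"   # what pipeline().DataFactory resolves to in the expression
TARGET_NAME = "Notebook Runner"        # displayName of the child pipeline
TOKEN = "<bearer-token>"

# Step 1: List Items, filtered to data pipelines (mirrors the Web Activity).
resp = requests.get(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items",
    params={"type": "DataPipeline"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
items = resp.json()["value"]

# Step 2: Keep only the item whose display name matches (mirrors the Filter Activity).
matches = [item for item in items if item["displayName"] == TARGET_NAME]

# Step 3: Take the id of the first match (mirrors the Set Variable expression).
pipeline_id = matches[0]["id"]
print(pipeline_id)
```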
Option 2: Use Web Activity + Filter + On Demand Run API (External Call Flow)
Use the same Web Activity as above to hit the List Items API.
Use the same Filter Activity settings as above.
Use another Web Activity to hit the Run On Demand Item Job API:
Connection URL:
https://api.fabric.microsoft.com/v1
(same as above)
Relative URL:
@concat('workspaces/', pipeline().DataFactory, '/items/', first(activity('Select Pipeline').output.value).id, '/jobs/instances?jobType=Pipeline')
Note: although the API documentation shows jobType=DefaultJob in its example request, that will fail; you must use jobType=Pipeline for running Fabric pipelines.
By directly triggering the pipeline through Fabric's REST API using the retrieved pipeline ID, this method completely bypasses static bindings and eliminates any dependency on environment-specific references.
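For comparison, here is a rough Python sketch of the direct API call from Option 2. The token, workspace ID, and pipeline ID are placeholders (in practice the ID comes from the lookup shown earlier), and the parameters payload is only an illustration of how values could be passed to the child pipeline:

```python
import requests

# Assumptions: TOKEN is a valid Fabric API token, and WORKSPACE_ID / pipeline_id
# are placeholders; pipeline_id would normally come from the List Items lookup.
FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<your-workspace-id>"
pipeline_id = "<child-pipeline-id>"
TOKEN = "<bearer-token>"

# Run On Demand Item Job: note jobType must be "Pipeline", not "DefaultJob".
resp = requests.post(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{pipeline_id}/jobs/instances",
    params={"jobType": "Pipeline"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    # Illustrative only: parameters for the child pipeline go under executionData.
    json={"executionData": {"parameters": {"notebookName": "nb_load_sales"}}},
)
resp.raise_for_status()  # a 202 Accepted means the run was queued; the child runs independently
```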
Both options are valid. Choosing one over the other is mostly a design decision based on how much control and flexibility you need. See below:
| Criteria | Option 1: Web Activity + Invoke Pipeline | Option 2: Web Activity + API Call |
| --- | --- | --- |
| Simplicity | Easier to set up; mostly stays within Fabric's native activities. | Slightly more complex due to direct API management. |
| Flexibility | When everything needs to stay inside the same workspace. | When you need full control, or want to call pipelines that are heavily parameterized or span multiple projects/workspaces. |
| Monitoring Behavior | Parent pipeline waits for the child pipeline to finish; status is visible inside the current run. | Parent pipeline fires the child pipeline and immediately finishes; the child pipeline runs independently. |
| Maintenance Overhead | Lower; easier for most teams to understand and maintain. | Slightly higher; needs API familiarity for troubleshooting. |
TL;DR
Hard-coding pipeline references in Fabric breaks Git synchronization.
Both approaches in this post fix the problem by resolving pipeline IDs dynamically at runtime; no hard-coding, no broken deployments.
Two ways to fix it:
Web Activity + Invoke Pipeline:
Use a Web Activity to fetch pipelines using the List Items API, then dynamically reference the child pipeline ID inside an Invoke Pipeline activity.
Web Activity + API Call:
Use a Web Activity to fetch pipelines, filter to find the correct child pipeline ID, and then trigger the pipeline directly using the Run On Demand Item Job API.
Both methods make your nested pipelines Git-friendly and fully portable across environments. No more manual updates or broken deployments. As long as the target pipeline exists in the workspace, it works — every time, across every branch.