Fabric CLI Beyond Shell Commands

Peer Grønnerup
4 min read

The Microsoft Fabric CLI has become an essential tool for automating and managing your Fabric environments. There are many great articles out there on how to use the CLI locally, from your CI/CD pipelines, and even from within Fabric Notebooks - whether that’s using Python’s subprocess to run CLI commands, the ! operator for single shell commands, or magic commands such as %%sh or %%bash to execute entire cells in a subprocess.
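For reference, here's a minimal sketch of what those shell-based approaches look like in a notebook cell (assuming the fab CLI is already installed in the environment):

import subprocess

# Approach 1: Python's subprocess - run a single fab command and capture its output
result = subprocess.run(["fab", "auth", "status"], capture_output=True, text=True)
print(result.stdout)

# Approach 2: the ! operator runs a single shell command inline
# !fab auth status

# Approach 3: %%sh or %%bash as the first line of a cell runs the entire cell in a subshell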

👉 Sandeep Pawar recently wrote an excellent article on using the Fabric CLI in notebooks, which you can find here: Using Fabric CLI in Fabric Notebook.

👉 I also recently shared a blog post on automating feature workspace maintenance in Microsoft Fabric using Python, the Fabric CLI, and GitHub Actions: Read it here.

So why explore Fabric CLI Python modules directly?

With Fabric User Data Functions (UDFs) now in public preview, I decided to investigate whether we could bypass the system shell entirely and leverage the Fabric CLI’s Python modules and functions directly instead of executing fab commands in a subprocess.

Why not just use the subprocess module?

Even though you can add the ms-fabric-cli library from PyPI in the library management section of your UDF, running shell commands (subprocess.run(["fab", ...])) doesn’t work because:

  • Fabric UDFs run in sandboxed environments where direct shell access is restricted.

  • The subprocess module is often locked down or lacks access to the underlying system shell.

  • It’s a security and resource isolation measure to ensure reliability and consistency of Fabric workloads.

For standard CLI usage, you should instead run the CLI on your local machine, on a VM, in a container, or through your CI/CD environment (like Azure DevOps or GitHub Actions). Or simply use the public REST APIs instead, which are HTTP-based and work well directly in code.
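For comparison, here's a minimal sketch of the REST-based alternative, assuming you already hold an Entra ID access token for the Fabric API audience:

import requests

# List workspaces via the public Fabric REST API
# (token acquisition omitted - use azure-identity or similar in practice)
token = "<entra-access-token>"
response = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
for workspace in response.json().get("value", []):
    print(workspace["displayName"], workspace["id"])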

My Approach: Directly Using Fabric CLI Python Modules

Since the Fabric CLI is written in Python and is installed via pip, I thought - why not see if I could use the underlying Python modules directly?

My goal was to create a Fabric UDF that would run a job synchronously using the same logic as the fab job run command.

To make this work, you must add the ms-fabric-cli library from PyPI in the Library management section of your Fabric UDF.

How It Works

First observation - each CLI command has its own dedicated subpackage:

  • auth, config, jobs, fs (for filesystem commands), acl, and more.

For example, to configure encryption fallback and log in using a service principal, you can directly import and use:

from argparse import Namespace
from fabric_cli.commands.config import fab_config
from fabric_cli.commands.auth import fab_auth

# Set encryption fallback
args = Namespace(
    command_path=["/"],
    path=["/"],
    command="config",
    config_command="set",
    key="encryption_fallback_enabled",
    value="true"
)
fab_config.set_config(args)

# Login using service principal
args = Namespace(
    auth_command="login",
    username="*****",
    password="*****",
    tenant="*****",
    identity=None,
    federated_token=None,
    certificate=None
)
fab_auth.init(args)
fab_auth.status(None)  # Check current authentication status

This approach is purely experimental! Hardcoding client IDs and secrets directly in a UDF is not recommended for production scenarios. Currently, UDFs don’t support features like notebookutils to fetch tokens or Key Vault secrets. However, you could create a connection to a Fabric Lakehouse or a Fabric SQL Database containing the credentials for the service principal.
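As a sketch of that last idea, a UDF connection to a Fabric SQL Database could supply the secrets at runtime instead of hardcoding them. The connection alias, table, and column names below are hypothetical:

import fabric.functions as fn

udf = fn.UserDataFunctions()

# "SecretsDb" is a hypothetical connection alias configured on the UDF item;
# the dbo.spn_credentials table and its columns are likewise illustrative.
@udf.connection(argName="sqlDB", alias="SecretsDb")
@udf.function()
def get_spn_credentials(sqlDB: fn.FabricSqlConnection) -> str:
    connection = sqlDB.connect()
    cursor = connection.cursor()
    cursor.execute("SELECT client_id, client_secret, tenant_id FROM dbo.spn_credentials")
    client_id, client_secret, tenant_id = cursor.fetchone()
    # The values could then populate the Namespace passed to fab_auth.init(args)
    return client_id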

Running a Fabric Job

To run a job (similar to the fab job run command), you import the fab_jobs module. In my implementation, I take the workspace name, item name, and item type as input parameters to build the path for the item to run. These parameters are then used as arguments when executing the Fabric UDF.

from argparse import Namespace
from fabric_cli.commands.jobs import fab_jobs

# workspacename, itemname, and itemtype arrive as input parameters of the UDF
args = Namespace(
    command_path=["/"],
    command="job",
    jobs_command="run",
    path=[f"{workspacename}.Workspace/{itemname}.{itemtype}"]
)
fab_jobs.run_command(args)

And voilà! 🎉 With this concise yet powerful Fabric UDF, you can expose job execution to business super users. For example, finance teams can trigger ad-hoc jobs during month-end close through a translytical task flow that invokes a Fabric User Data Function to handle the job execution - without granting those users deep admin access to the entire Fabric workspace.
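Putting the pieces together, a sketch of such a UDF could look like this (function and parameter names are my own, and authentication plus error handling are omitted for brevity):

import fabric.functions as fn
from argparse import Namespace
from fabric_cli.commands.jobs import fab_jobs

udf = fn.UserDataFunctions()

@udf.function()
def run_fabric_job(workspacename: str, itemname: str, itemtype: str) -> str:
    # Assumes fab_config/fab_auth have already been handled as shown earlier
    args = Namespace(
        command_path=["/"],
        command="job",
        jobs_command="run",
        path=[f"{workspacename}.Workspace/{itemname}.{itemtype}"]
    )
    fab_jobs.run_command(args)
    return f"Triggered {itemname}.{itemtype} in workspace {workspacename}"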

Additional Thoughts

One important consideration when using this approach is that you won’t see console outputs (like print statements) in the execution logs of your UDF runs. This can make troubleshooting or understanding the full execution flow challenging.

To address this, I’ve added logging of outputs and errors from the CLI commands directly into the UDF source code. You can find this implementation in the downloadable UDF example on my GitHub.

This ensures that all important outputs and errors are written to the log - making it much easier to monitor and debug these Fabric CLI-based UDFs in action.
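One way to capture those outputs (a sketch of the general technique, not necessarily the exact implementation in the repo) is to redirect the CLI’s stdout and stderr into Python’s logging while the command runs:

import io
import logging
from contextlib import redirect_stdout, redirect_stderr

def run_with_logging(cli_command, args):
    # Run a Fabric CLI command function and log whatever it prints
    out_buffer, err_buffer = io.StringIO(), io.StringIO()
    with redirect_stdout(out_buffer), redirect_stderr(err_buffer):
        cli_command(args)
    if out_buffer.getvalue():
        logging.info(out_buffer.getvalue())
    if err_buffer.getvalue():
        logging.error(err_buffer.getvalue())

# Example: run_with_logging(fab_jobs.run_command, args)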

Conclusion

While directly using the Fabric CLI’s Python modules in a UDF isn’t the recommended approach - many would argue that the public REST APIs are better suited for managing Fabric items within UDFs - this experiment showed that it’s possible to run CLI commands directly without relying on the shell.

It highlights the flexibility of the Fabric CLI’s architecture and suggests exciting future possibilities - imagine how powerful it would be to have a dedicated, fully supported Python interface for Fabric!

👉 You can find the full Fabric User Data Function implementation on my GitHub.


Written by

Peer Grønnerup

Principal Architect | Microsoft Fabric Expert | Data & AI Enthusiast

With over 15 years of experience in Data and BI, I specialize in Microsoft Fabric, helping organizations build scalable data platforms with cutting-edge technologies. As a Principal Architect at twoday, I focus on automating data workflows, optimizing CI/CD pipelines, and leveraging Fabric REST APIs to drive efficiency and innovation. I share my insights and knowledge through my blog, Peer Insights, where I explore how to leverage Microsoft Fabric REST APIs to automate platform management, manage CI/CD pipelines, and kickstart Fabric journeys.