Pipelines and Testing

Doug Dawson

Around mid-2024, I directed my team to remove the Postman test execution step from the Azure pipelines we used. The company had decided to move away from a paid Postman plan, so we had to decommission Postman. It was a cost-saving measure that I was disappointed to implement: Postman is a good tool, and we had invested a significant amount of time in building out and updating our tests. That said, I was not fond of having our pipelines execute Postman tests.

To be fair, I'm sure all the issues I'm going to list are due to poorly executed pipeline design and implementation. That's the real point of this post: be intentional with your pipelines.

First, let me share how our pipeline was set up. I'm not an Azure DevOps engineer, so this will likely be awkwardly worded. Our pipelines were YAML-based, and the environment destinations were arranged in a linear fashion: DEV to QA, QA to UAT, UAT to PROD. We had roles set up so only the QAs and Release Managers could move code to QA, UAT, and PROD. Our code was arranged in a microservice architecture of sorts, so we actually had several pipelines to run to deploy the entire service. Some of our pipelines were configured to detect whether the version being deployed was the same as or older than the one already there and, if so, block the deployment. This will matter later.
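As a rough sketch of that linear layout in Azure Pipelines YAML (the stage names happen to match ours, but the jobs and steps below are illustrative, not our actual configuration):

```yaml
# Illustrative sketch only -- jobs and steps are hypothetical.
stages:
  - stage: DEV
    jobs:
      - job: Deploy
        steps:
          - script: echo "deploy service, then run Postman smoke tests"

  - stage: QA
    dependsOn: DEV   # strictly linear: each stage waits on the previous one
    # jobs omitted for brevity; approvals on the environment restrict
    # promotion to QAs and Release Managers

  - stage: UAT
    dependsOn: QA

  - stage: PROD
    dependsOn: UAT   # code can only reach PROD via DEV -> QA -> UAT
```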

The first issue we had using Postman with our pipelines was when a Postman test glitched or failed for a transient reason. We were organized enough that our tests were split into two large groups: smoke tests and functional tests. We pointed the pipeline at the appropriate smoke test collection in Postman, but some of those tests included response time checks. We had dabbled in performance tests that failed if the API took longer than 2 seconds to respond. For a variety of reasons, sometimes our service would spin up in QA or UAT, the pipeline would run the Postman smoke tests, and a response would take 2.1 seconds. The test would fail, the pipeline step would fail, and now our pipeline was in a bad state. We had set up rolling deployments across six instances, so a deployment took a good 20 minutes. Re-running the entire deployment step just to clear a Postman error was absolutely silly.
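For context, this is roughly how Newman (Postman's CLI runner) tends to get wired into a pipeline step. The collection and environment file names here are made up, but the failure mode is real: Newman exits non-zero if any assertion fails, so one missed response-time budget fails the step and the stage.

```yaml
# Sketch of a Postman/Newman smoke-test step (file names are hypothetical).
- script: |
    npm install -g newman
    newman run smoke-tests.postman_collection.json \
      --environment qa.postman_environment.json
  displayName: Run Postman smoke tests
  # Newman exits non-zero when ANY assertion fails -- including a
  # response-time check that missed its 2-second budget by 100ms --
  # which fails this step and leaves the 20-minute rolling deployment
  # in a failed state.
```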

Remember that version check? Sometimes we had to delete an app from Service Fabric so we could quickly re-run a deployment step, or we had to start over and push a new version (because the developer didn't have access to Service Fabric). It was not a great developer experience.

My Suggestions

I'm going to try to generalize my thoughts so that these aren't just Postman-related.

  1. Separate Test Collections for Pipelines
    I want the QAs to be able to write tests with impunity. However, to avoid affecting pipeline execution, consider having a separate set of tests dedicated to pipelines. Changes or additions to those tests should be discussed with the development team as a whole and serve the deployment process.

  2. Make Sure Smoke Tests Are Smoke Tests
    One issue I noticed often with our smoke tests is that they looked more and more like functional or integration tests. Review your smoke test collection to make sure the tests aren't doing more than they need to.

  3. Pipeline Warnings
    It is possible to have integrated smoke tests or functional tests fail with a warning instead of an error. This will take some thought, though, and might cause more harm than help. You don’t want to condition the team to ignore warnings. If your tests generally pass, your team has good discipline, and/or there is value in not blocking a deployment step, this is an option.

  4. Make Pipeline Steps Rerunnable
    In my experience so far, I’ve seen perfectly good code that needed to be redeployed and it was blocked by pipeline validation. Being able to re-run steps (especially non-PROD steps) is wise.

  5. Consider Non-Linear Pipelines
    This suggestion likely depends on your branching strategy, number of environments, and other code handling practices. I prefer pipelines that allow code to be pushed to whichever environment is needed. We've had issues in the past where we had to hotfix a release during a release window and had to wait for the code to deploy to DEV, QA, and UAT before it finally reached PROD.
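A few of these ideas map directly onto Azure Pipelines YAML. As a hedged sketch (the step, collection, and parameter names below are made up, while `continueOnError`, `retryCountOnTaskFailure`, and runtime `parameters` are real Azure Pipelines features):

```yaml
# Sketch: warnings instead of errors (#3), rerunnable steps (#4),
# and a non-linear, pick-your-environment pipeline (#5).
parameters:
  - name: targetEnvironment       # hypothetical runtime parameter
    type: string
    default: DEV
    values: [DEV, QA, UAT, PROD]  # deploy straight to whichever env is needed

steps:
  - script: newman run pipeline-smoke-tests.postman_collection.json
    displayName: Smoke tests (non-blocking)
    continueOnError: true         # a failure surfaces as a warning, not an error
    retryCountOnTaskFailure: 2    # transient glitches retry this step instead of
                                  # forcing a rerun of the whole rolling deployment
```

Whether `continueOnError` is wise depends on the discipline point above: warnings only help if the team actually reads them.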

I'm not a CI/CD or QA expert, but I believe these are reasonable suggestions. I'd love to hear more. Feel free to comment.

Cheers!


Written by

Doug Dawson

I've been doing computer-related things since I was a kid on my dad's Franklin ACE 1000 and his Tandy. I've built PCs, repaired servers, wired networks by hand, administered servers, and built numerous applications. I've coded in Perl, PHP, Java, VB, C#, VB.NET, JS, and probably a few others. I'm a jack-of-all-trades technologist. I transitioned into leadership several years ago from a senior .NET developer role. I'm currently a Delivery Manager, and I lead an agile software development team.