Running microservices' E2E tests on a pipeline

Hi. This is the seventh part of the diary of developing the “Programmers’ diary” blog. The open source code of this project is on https://github.com/TheProgrammersDiary. The sixth part: https://hashnode.programmersdiary.com/e2e-testing-microservices.

Next entry:

———————

2023-09-04, 09-10, 09-11

The goal is to run E2E tests on a pipeline. However, I have forgotten to define what I call an E2E test: a test that involves two or more microservices. This could be a Selenium test, a Testcontainers-based test like the ones I recently wrote, etc.

Currently I am reading Building Microservices: Designing Fine-Grained Systems by Sam Newman (https://samnewman.io/). It has a Testing chapter, which explains the tradeoff between small-scope and large-scope tests: small-scope tests run quicker, you can have thousands of them, and they can point very accurately to where the problem is (down to specific lines of code). Small-scope tests are also good at supporting code refactoring. Large-scope tests give you more confidence that your system as a whole works, but they are slow to run, so we should have few of them (which creates an opportunity: convert large-scope tests into smaller-scope ones). The chapter also suggests methods for executing E2E tests.

There is also a good article about running E2E tests: https://medium.com/javarevisited/e2e-testing-in-ci-environment-with-testcontainers-ea7537697bd9.

I have also talked with colleagues about E2E testing. The advice was to create a non-blocking workflow: run local tests on PR raise/update, and if they succeed, allow the merge. That way, if you need your images in PROD quickly, you don't need to wait for E2E tests to finish. [One of the advantages of microservices is being able to publish code quickly (compared to a monolith); if you can't publish new code reasonably quickly, you don't enjoy the full benefits of microservices.] After the local tests run, E2E tests are triggered but do not block the merge. If the developer has the luxury of waiting for the E2E tests to finish, let him: he can then fix the PR, or merge after the E2E tests pass.

Let’s implement the pipeline:

Our E2E tests live in a separate repository. It is not trivial to transfer code from the other repositories so that the E2E testing service could execute it. One way is to push the services' images to Docker Hub and then pull them in our E2E testing service (aka the GlobalTests/docker service).

That’s what I have done.

In the post microservice, modify .github/workflows/main.yml:

name: Post microservice CI/CD
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  package:
    runs-on: ubuntu-latest
    permissions:
      checks: write
      contents: read
    steps:
    - uses: actions/checkout@v3
    - name: Set up JDK 17
      uses: actions/setup-java@v3
      with:
        java-version: 17
        distribution: temurin
    - name: Cache
      uses: actions/cache@v3
      with:
        path: ~/.m2
        key: ${{runner.os}}-m2-${{hashFiles('post/pom.xml')}}
        restore-keys: ${{runner.os}}-m2
    - name: Package
      run: mvn -B -f post/pom.xml package
    - name: Publish Test Report
      if: success() || failure()
      uses: scacap/action-surefire-report@v1
    -
      name: Set up QEMU
      uses: docker/setup-qemu-action@v2
    -
      name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    -
      name: Login to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{secrets.DOCKERHUB_USER}}
        password: ${{secrets.DOCKERHUB_PASSWORD}}
    -
      name: Build and push docker image
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: localstradedocker/blog:post_latest
    -
      name: run E2E tests
      run: >
        curl
        -X POST https://api.github.com/repos/EvalVis/blogDocker/dispatches
        -u ${{secrets.GITHUBTOKEN}}
        -H "Accept: application/vnd.github.everest-preview+json"
        -H "Content-Type: application/json"
        --data '{"event_type": "trigger_tests_post"}'

Notice that we are publishing a Surefire report: this will help us with debugging (the permissions are added because the Surefire report action needs access to the repository to find and publish the Surefire files).

After the Surefire report, we set up the Docker environment and log in to Docker Hub: this prepares for building and pushing the Docker image. If you want to hide your Docker Hub username, remember to also hide it in other places, not just the Docker Hub login (e.g. in the tags, which I forgot to do). I was planning to make the Docker Hub and GitHub repos public once I had the main features needed to host a blogging website [already done].

After the image is pushed, the dispatch is called: it sends a message via the GitHub API to our testing service that the E2E tests need to be triggered. However, if the local tests succeed, the workflow succeeds. The success of the E2E test workflow (which the dispatch triggers) is a matter for the E2E service and does not block the pull request.
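To make the dispatch step concrete, here is a minimal sketch (not the actual workflow code) of the request the curl step composes. The repo path, header, and event type come from the workflow above; authentication is omitted and the request is only built, not sent:

```python
import json

# URL taken from the workflow's curl step above.
GITHUB_DISPATCH_URL = "https://api.github.com/repos/EvalVis/blogDocker/dispatches"

def dispatch_request(event_type):
    """Build the headers and JSON body for a repository_dispatch call."""
    headers = {
        "Accept": "application/vnd.github.everest-preview+json",
        "Content-Type": "application/json",
    }
    body = json.dumps({"event_type": event_type})
    return headers, body

headers, body = dispatch_request("trigger_tests_post")
print(body)  # {"event_type": "trigger_tests_post"}
```

The only payload field GitHub requires here is event_type; the receiving workflow filters on it, as shown in the GlobalTests workflow below.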

The same things happen in the monolith repo, except for two differences:

  • instead of tags: localstradedocker/blog:post_latest, the value is localstradedocker/blog:blog_latest. I have two images (post and blog) and keep them in the same Docker Hub repository, so instead of one latest tag I have several: one for each microservice.

  • instead of --data '{"event_type": "trigger_tests_post"}' it's --data '{"event_type": "trigger_tests_monolith"}'. This makes it easy to see which repo triggered the action: if the build fails, it is most likely that repo's fault, so a developer can quickly see where to look for bugs. It looks like this in the E2E repo:

[Image: the E2E repo's Actions dashboard, where each run is named after the event type that triggered it]

The terms monolith and blog microservice are used interchangeably, since I will keep adding features to the blog microservice while the post microservice will remain mostly its current size.

In the GlobalTests service's docker-compose-test.yaml file, only two changes are made:

  • image: docker-blog → image: localstradedocker/blog:blog_latest

  • image: docker-post → image: localstradedocker/blog:post_latest
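The tag scheme above can be summed up in a tiny helper (illustrative only; the repository name is the one used in this project):

```python
def image_ref(service):
    # One Docker Hub repository holds both images; the services are
    # distinguished by per-service tags rather than separate repositories.
    return f"localstradedocker/blog:{service}_latest"

print(image_ref("post"))  # localstradedocker/blog:post_latest
print(image_ref("blog"))  # localstradedocker/blog:blog_latest
```

This is why the compose file can switch from local image names to remote ones by changing only the image: lines.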

In the GlobalTests repo's .github/workflows/main.yml:

name: GlobalTests CI/CD
on:
  repository_dispatch:
    types:
      - trigger_tests_monolith
      - trigger_tests_post
jobs:
  run_tests:
    runs-on: ubuntu-latest
    permissions:
      checks: write
      contents: read
    steps:
    - uses: actions/checkout@v3
    - name: Set up JDK 17
      uses: actions/setup-java@v3
      with:
        java-version: 17
        distribution: temurin
    - name: Cache
      uses: actions/cache@v3
      with:
        path: ~/.m2
        key: ${{runner.os}}-m2-${{hashFiles('GlobalTests/pom.xml')}}
        restore-keys: ${{runner.os}}-m2
    -
      name: Set up QEMU
      uses: docker/setup-qemu-action@v2
    -
      name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    -
      name: Login to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{secrets.DOCKERHUB_USER}}
        password: ${{secrets.DOCKERHUB_PASSWORD}}        
    - 
      name: Test
      run: mvn -B -f GlobalTests/pom.xml test
    - 
      name: Publish Test Report
      if: success() || failure()
      uses: scacap/action-surefire-report@v1

The script is triggered by the repository dispatch, which carries events of type trigger_tests_monolith or trigger_tests_post. This type determines the workflow run's name in the dashboard. The job needs to set up a Docker environment to run the E2E tests (since these tests use the Testcontainers library), and logging in to Docker Hub is needed to download the remote Docker images.

To recap the whole workflow:

  1. A developer pushes commit(s) to main branch or raises pull request.

  2. Local tests are run.

  3. Build is packaged.

  4. Docker image is built.

  5. Docker image is published in Docker Hub.

  6. Dispatch to run E2E tests is sent. The microservice's workflow succeeds.

  7. E2E test service receives the dispatch.

  8. Latest images are downloaded.

  9. E2E tests are executed.

  10. The developer sees which repo triggered the E2E tests workflow. If the workflow fails, the developer knows where to look and can go fix the faulty microservice.

Note that in the monolith and post microservices we could push the Docker image with e.g. a dev tag (post_dev_latest instead of post_latest), which would mean the image is not ready for prod. Then, if the E2E tests succeed, the dev image could be deleted and a prod-ready post_latest image pushed. [This would ensure prod-ready microservice images are only published if the E2E tests pass.] However, I'm not doing that, at least for now.
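As a sketch of that dev-tag promotion idea (the tag names mirror the ones suggested above; the returned strings are illustrative docker CLI commands, composed but not executed here):

```python
def promotion_commands(service, e2e_passed):
    """Promote a dev-tagged image to the prod-ready tag only after
    E2E tests pass; otherwise publish nothing prod-ready."""
    dev = f"localstradedocker/blog:{service}_dev_latest"
    prod = f"localstradedocker/blog:{service}_latest"
    if not e2e_passed:
        # Keep the dev image around for debugging; no prod tag is published.
        return []
    return [
        f"docker pull {dev}",
        f"docker tag {dev} {prod}",
        f"docker push {prod}",
    ]

for cmd in promotion_commands("post", e2e_passed=True):
    print(cmd)
```

A step like this would live in the GlobalTests workflow, running only after the Test step succeeds.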

We have created the infrastructure for running E2E tests in the cloud. As mentioned in Sam Newman's book, E2E tests should be sparse (we should have many unit tests instead), but I wanted to have the infrastructure part ready. Next time the real fun begins: UI coding will start!

Note from 2024-05-02

I built the E2E pipeline because I was eager to see how it works. However, on a real job, having E2E test pipelines this early is not recommended. It is better to write lots of unit tests first, build a decent number of features, and only then start creating E2E test pipelines. While coding, I needed to update the E2E tests frequently, which slowed my development significantly (I would probably have finished the project a month or more earlier if not for my E2E tests). In my scenario, the E2E tests wasted more time than they provided benefits. On a real job, E2E tests give better guarantees for risky projects, and in those scenarios they are worth it. In my scenario, however, I learned a lesson I could share with you (which I just did).

———————

Thanks for reading.

The project logged in this diary is open source, so if you would like to code or suggest changes, please visit https://github.com/TheProgrammersDiary.

Next part: https://hashnode.programmersdiary.com/reflection-on-mistakes-of-eagerness-in-programming.

Written by Evaldas Visockas