Common Questions About Software Testing
Recently, I was approached by an automation developer who was considering transitioning from Cypress to Playwright (smart developer!) and wanted some advice on the move. He shared a set of questions, and as they were fairly generic, I took the liberty of turning my responses into a blog post.
Note of caution: this is opinionated and based on my experience; however, it is never a one-size-fits-all thing, and the suggestions here may not fit you.
What is the best way to use locators (selectors): by data-test-id attributes or something else?
Playwright documentation says:
Automated tests should verify that the application code works for the end users, and avoid relying on implementation details[…]. The end user will see or interact with what is rendered on the page, so your test should typically only see/interact with the same rendered output.
I completely agree with this approach. User-facing selectors should be priority number one: getByText, getByLabel, etc., followed by semantic selectors (i.e. getByRole), and only then CSS selectors.
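To make that ordering concrete, here is a minimal sketch; the login page, its labels, and the CSS class are hypothetical:

```ts
import { test, expect } from '@playwright/test';

test('prefers user-facing locators', async ({ page }) => {
  await page.goto('/login'); // hypothetical page, assumes baseURL is configured

  // 1. User-facing locators first: what the user actually reads on screen
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');

  // 2. Then semantic, role-based locators
  await page.getByRole('button', { name: 'Sign in' }).click();

  // 3. CSS selectors only as a last resort
  await expect(page.locator('.welcome-banner')).toBeVisible();
});
```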
Test-id attributes are not that useful in most cases. First, they require duplicate work on the development side to define and maintain the ids. They are also prone to errors: a developer might change the functionality and forget to update the ids, causing tests to fail for no reason, or worse, to keep passing when they should not (false positives).
User-facing selectors also include things like my-icon[name="trash"]; while the user does not actually see the name of the icon, we can assume that the name reflects some real visual.
Fun story: we had a set of actions that were dynamically generated based on attributes of the entity (think of Gmail's mail action icons). One of our tests was clicking on a locator similar to the above. Lo and behold, the trash icon was used for two separate actions, both showing on the same icon toolbar. Thanks to Playwright's locator strictness, the test failed because of the duplicate icons. Not only did the test identify a logical bug, it also caught a UX bug!
What’s the best way to share these selectors between repos?
Tests are code, and as such they should be shared like any other code. That could be via packages or by using a monorepo.
Selectors that need to be shared are likely related to some production code that is also shared between repos, and therefore the best method is to follow the same practice used to share the code. If the code is shared via packages, testing utilities (sometimes called test drivers) can be added to the same package, thus ensuring good versioning practices. Any change to the code will also include the changes to the relevant test code.
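As a sketch of what such a shared test driver might look like (the OrderWidget names here are hypothetical), the driver ships in the same package as the component it drives:

```ts
// order-widget.driver.ts — published in the same package as the OrderWidget itself
import type { Page, Locator } from '@playwright/test';

// Selectors and high-level actions are versioned together with the production
// code, so a change to the widget ships alongside its test driver.
export class OrderWidgetDriver {
  private readonly root: Locator;

  constructor(page: Page) {
    this.root = page.getByRole('region', { name: 'Orders' });
  }

  async createOrder(item: string): Promise<void> {
    await this.root.getByLabel('Item').fill(item);
    await this.root.getByRole('button', { name: 'Create order' }).click();
  }

  orderRow(item: string): Locator {
    return this.root.getByRole('row', { name: item });
  }
}
```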
We thought about using the POM model. Does that mean we should use a class structure, or can it be used with functional code as well?
The POM model is inherited from strongly typed, object-oriented languages (read: Java). When using dynamic languages such as JavaScript/TypeScript, a functional approach is preferred: you can simply export plain functions that abstract repetitive functionality.
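For example, a login flow can be abstracted as a plain exported function rather than a class (the labels are, again, hypothetical):

```ts
// login.helpers.ts — plain functions instead of a page object class
import type { Page } from '@playwright/test';

export async function login(page: Page, email: string, password: string) {
  await page.getByLabel('Email').fill(email);
  await page.getByLabel('Password').fill(password);
  await page.getByRole('button', { name: 'Sign in' }).click();
}

// In a test: await login(page, 'user@example.com', 's3cret');
```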
Having said that, you can still use POM in JS/TS by defining classes, instantiating them with a page, and exposing high-level methods. You can see an example in Playwright's documentation.
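Condensed, the class-based flavor looks something like this sketch in the spirit of the docs example (the todo page is hypothetical):

```ts
import type { Page, Locator } from '@playwright/test';

export class TodoPage {
  readonly newTodo: Locator;

  constructor(public readonly page: Page) {
    this.newTodo = page.getByPlaceholder('What needs to be done?');
  }

  async addTodo(text: string) {
    await this.newTodo.fill(text);
    await this.newTodo.press('Enter');
  }
}
```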
Moreover, with Playwright's capabilities, you will find that in most cases higher-order abstraction is not really required. Unlike the glory days of Selenium, where you had to wrap a lot of low-level browser APIs to make them stable, this is not needed in Playwright, and you might otherwise find yourself writing tons of one-liner functions.
Last, but certainly not least, the term POM itself refers to the days when we wrote web applications as pages. Most modern frameworks are component-based, so we should rethink our POMs as COMs (Component Object Models). Say you have some fancy date and time picker that is used in multiple places in your application: it makes more sense to abstract the date selection at the component level rather than at the page level.
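A component object for that picker might look like the following sketch (the date-picker markup and roles are hypothetical):

```ts
import type { Locator } from '@playwright/test';

// A "component object": scoped to a root locator rather than a page,
// so it can be reused wherever the date picker appears.
export class DatePicker {
  constructor(private readonly root: Locator) {}

  async pick(monthName: string, day: string) {
    await this.root.getByRole('button', { name: monthName }).click();
    await this.root.getByRole('gridcell', { name: day }).click();
  }
}

// Usage on any page that hosts the component:
// const dueDate = new DatePicker(page.locator('my-date-picker'));
// await dueDate.pick('March', '14');
```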
As of now, we have a constants file separated from the test/logic file, and we have a lot of selectors. What's the best way to manage them?
In general, I am completely against kitchen-sink code files. Code (and constants are code) should be split by concern. Constants should be kept next to the functionality they serve, and whatever is needed should be exported from the relevant context.
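In practice, that just means each module exports its own constants next to the code that uses them; for instance (file and constant names are hypothetical):

```ts
// orders/constants.ts — lives next to the order logic and tests it serves
export const ORDER_STATUSES = ['pending', 'shipped', 'delivered'] as const;
export const MAX_ORDER_ITEMS = 50;

// orders/orders.spec.ts — imports only what this context needs
// import { ORDER_STATUSES, MAX_ORDER_ITEMS } from './constants';
```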
Is there a preferred way to generate and send data to the DB before the test? Is it crucial to clear the DB between suites?
Test data management is a huge topic. But to keep it short: yes. Your test should be isolated in every possible way. It should not rely on another test to run before, and it should explicitly define any data or dependencies it requires.
The best way to generate data is through your application's APIs, if they exist. You can use Playwright's request context (APIRequestContext) to call those APIs. Axios is another very popular HTTP client. While using Playwright is straightforward, I have found that Axios's built-in interceptors can be extremely useful when calling APIs.
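Here is a minimal sketch of seeding data through the API with Playwright's built-in request fixture (the /api/orders endpoint is hypothetical, and baseURL is assumed to be configured):

```ts
import { test, expect } from '@playwright/test';

test.beforeEach(async ({ request }) => {
  // Seed the data this test needs through the application API
  const response = await request.post('/api/orders', {
    data: { item: 'Rubber duck', quantity: 1 },
  });
  expect(response.ok()).toBeTruthy();
});
```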
If you cannot generate data by calling APIs, use any library that fits your database to insert data directly into the DB.
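If you go the direct route with, say, a PostgreSQL database, a seeding helper could be as simple as this sketch (the table and columns are hypothetical):

```ts
import { Client } from 'pg'; // or whichever client fits your database

export async function seedOrder(userId: string, item: string): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // Insert the row this test depends on directly into the DB
    await client.query(
      'INSERT INTO orders (user_id, item) VALUES ($1, $2)',
      [userId, item],
    );
  } finally {
    await client.end();
  }
}
```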
You may or may not delete your data afterwards, as long as each test's data is completely segregated, so it does not pollute that test or any other.
Let's take an example: your test starts with the user creating an order; the user then views their orders and verifies there is exactly one. If the data is shared between multiple tests running in parallel, the user may see multiple orders, which makes the test unstable. A possible solution is for each test to create a dedicated user (generate usernames dynamically based on some random string), so that each user only has the orders relevant to its test. Deleting the specific test data at the end is recommended but not strictly required, since the data will not be visible to other tests that are not reusing the same user; still, cleanup is a good idea, especially when tests run in large volumes, to avoid performance issues on the DB. A method I found useful: each test creates its own data, successful tests delete their data, and failing tests keep the data to allow interrogating the failure. A weekly process resets the whole DB so we do not end up with high (and costly) volumes.
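A sketch of that lifecycle with Playwright hooks (the user API endpoints are hypothetical):

```ts
import { test } from '@playwright/test';
import { randomUUID } from 'node:crypto';

let username: string;

test.beforeEach(async ({ request }) => {
  // A dedicated user per test keeps parallel runs from seeing each other's data
  username = `test-user-${randomUUID()}`;
  await request.post('/api/users', { data: { username } });
});

test.afterEach(async ({ request }, testInfo) => {
  // Clean up after passing tests; keep failing tests' data for debugging
  if (testInfo.status === 'passed') {
    await request.delete(`/api/users/${username}`);
  }
});
```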
Written by Tally Barak
I have been around since the 90's (that's 1990) and even before as a geek girl. I have done development, product management, consulting, and system analysis, but my ultimate passion is system architecture and good coding practices. Today I work as an architect responsible for all the Frontend design, tooling, testing, and DevOps processes. I love Javascript and its ecosystem and will happily share this knowledge with fellow developers. Also, I am a Playwright Ambassador.