๐ŸŽญ Building Playwright Framework Step By Step - Implementing UI Tests

Ivan Davidov · 6 min read

๐ŸŽฏ Introduction

UI (User Interface) tests are a critical component of the software testing process, focusing on evaluating the visual elements that users interact with! ๐Ÿ–ฅ๏ธ These tests simulate user interactions with a website or application, such as clicking buttons, entering text, navigating menus, and validating that the UI responds as expected. The primary goal is to ensure that the UI provides a seamless and intuitive experience for users, adhering to design specifications and functional requirements.

๐ŸŽจ Why UI Testing Matters: In today's user-centric digital world, a flawless user interface can make or break your application's success!

๐ŸŒŸ Key Aspects of UI Testing:

  • ๐Ÿ”ง Functionality: Verifying that UI elements function correctly in response to user actions

  • ๐ŸŽฏ Consistency: Ensuring the UI looks and behaves consistently across different devices, browsers, and screen sizes

  • ๐Ÿ‘ฅ Usability: Assessing the ease with which users can navigate and interact with the application

  • ๐Ÿ“ฑ Responsiveness: Testing the UI's adaptability to various screen sizes and orientations

  • โ™ฟ Accessibility: Checking that the UI is accessible to users with disabilities, following guidelines like WCAG

๐Ÿ’ก Key Insight: UI tests are indispensable for delivering a high-quality user experience, playing a vital role in identifying visual and interaction issues before a product reaches the end user.

๐Ÿ› ๏ธ Create UI Test Cases

๐Ÿ“ฆ Add npm Scripts for Running Tests in package.json file

๐Ÿš€ Pro Tip: Well-organized npm scripts make your testing workflow efficient and team-friendly!

{
  "name": "pw-framework-step-by-step",
  "version": "1.0.0",
  "description": "This repository offers a comprehensive, step-by-step guide to building an automation testing framework using Playwright. Designed with junior Automation QA engineers in mind, it demystifies the process by breaking down the development of a framework from the ground up. Each commit is thoughtfully crafted to explain not just 'how' but 'why' each part of the framework is implemented, providing a solid foundation for understanding and extending it. Whether you're starting your journey in automation testing or looking to strengthen your understanding of testing frameworks, this repository serves as a practical, educational tool, guiding you through the nuances of creating a robust, scalable testing framework with Playwright.",
  "main": "index.js",
  "scripts": {
      "test": "npx playwright test --project=chromium",
      "ci": "npx playwright test --project=chromium --workers=1",
      "flaky": "npx playwright test --project=chromium --repeat-each=20",
      "debug": "npx playwright test --project=chromium --debug",
      "ui": "npx playwright test --project=chromium --ui",
      "smoke": "npx playwright test --grep @Smoke --project=chromium",
      "sanity": "npx playwright test --grep @Sanity --project=chromium",
      "api": "npx playwright test --grep @Api --project=chromium",
      "regression": "npx playwright test --grep @Regression --project=chromium",
      "fullTest": "npx playwright test"
  },
  "keywords": [],
  "author": "Ivan Davidov",
  "license": "ISC",
  "devDependencies": {
    "@faker-js/faker": "^9.8.0",
    "@playwright/test": "^1.52.0",
    "@types/node": "^22.15.21"
  },
  "dependencies": {
    "dotenv": "^16.5.0"
  }
}

๐Ÿ“Š Create Test Data

Test Data plays a pivotal role in test automation, serving as the input that drives the verification of software functionalities under test conditions! ๐Ÿ“ˆ It is the foundation upon which test cases are executed, determining the accuracy and reliability of automated tests. Properly managed and strategically utilized test data can significantly enhance the effectiveness of an automation strategy by covering a wide range of testing scenarios.

๐Ÿ’ก Pro Tip: Quality test data is the backbone of reliable automated testing - invest time in creating comprehensive and realistic datasets!

๐ŸŽฏ Crucial Aspects of Test Data in Test Automation:

  • ๐ŸŒˆ Variety and Volume: Incorporating a diverse and sufficient volume of test data ensures that applications are tested against all possible inputs, including edge cases

  • ๐Ÿ’Ž Data Quality: High-quality, realistic test data increases the reliability of test results, offering a true reflection of real-world usage

  • ๐Ÿ—‚๏ธ Management: Efficient test data management strategies, such as using data pools or generators, help maintain the integrity and organization of test data, making it easily accessible and reusable across test cases

  • ๐Ÿ”’ Security: When dealing with sensitive information, securing test data is paramount to prevent exposure of confidential data

  • โšก Dynamic Data Generation: Generating test data dynamically can provide flexibility and efficiency, allowing tests to adapt to new conditions without manual intervention

๐ŸŽฏ Key Insight: Effectively leveraging test data in automation frameworks is essential for validating software behavior, ensuring that applications meet their intended specifications and user expectations.

๐ŸŽฒ Install faker.js for Dynamic Data Generation

Install the powerful Faker.js library for generating realistic test data:

npm install --save-dev @faker-js/faker

๐ŸŽญ Why Faker.js?: Generates realistic fake data for names, addresses, emails, and more - perfect for comprehensive testing scenarios!

๐Ÿ“ Create Test Data for Articles

Create a test-data folder and add an articleData.json file inside it:

{
    "create": {
        "article": {
            "title": "Playwright Framework",
            "description": "Example Playwright Automation Framework",
            "body": "MVP for Playwright Automation Framework. Features TypeScript, Page Object Model Design Pattern, Custom Fixtures, REST API Testing and Mocking, Schema Validation with Zod, Environment Utilization and CI/CD integration with GitHub Actions",
            "tagList": [
                "Step By Step",
                "testing",
                "playwright",
                "automation",
                "typescript",
                "POM",
                "api",
                "mocking",
                "Zod",
                "CI/CD",
                "GitHub Actions"
            ]
        }
    },
    "update": {
        "article": {
            "title": "In today's AI-driven world, Quality Assurance holds more importance than ever.",
            "description": "Example Playwright Automation Framework",
            "body": "MVP for Playwright Automation Framework. Features TypeScript, Page Object Model Design Pattern, Custom Fixtures, REST API Testing and Mocking, Schema Validation with Zod, Environment Utilization and CI/CD integration with GitHub Actions",
            "tagList": [
                "testing",
                "playwright",
                "automation",
                "typescript",
                "POM",
                "api",
                "mocking",
                "Zod",
                "CI/CD",
                "GitHub Actions"
            ]
        }
    }
}
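With `"resolveJsonModule": true` in `tsconfig.json`, this file can be imported directly in tests. A lightweight interface (hypothetical, not part of the repo) keeps access to it type-safe; the inline sample below mirrors a trimmed version of the JSON purely for illustration:

```typescript
// Hypothetical shape for articleData.json. In the real project you would
// simply write: import articleData from '../../test-data/articleData.json';
interface Article {
    title: string;
    description: string;
    body: string;
    tagList: string[];
}

interface ArticleData {
    create: { article: Article };
    update: { article: Article };
}

// Trimmed inline sample mirroring the JSON file above
const articleData: ArticleData = {
    create: {
        article: {
            title: 'Playwright Framework',
            description: 'Example Playwright Automation Framework',
            body: 'MVP for Playwright Automation Framework.',
            tagList: ['testing', 'playwright', 'automation'],
        },
    },
    update: {
        article: {
            title: "In today's AI-driven world, Quality Assurance holds more importance than ever.",
            description: 'Example Playwright Automation Framework',
            body: 'MVP for Playwright Automation Framework.',
            tagList: ['testing', 'playwright'],
        },
    },
};
```

Typed access means a renamed or missing field fails at compile time instead of mid-test-run.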

๐ŸŽญ Create UI Tests

๐Ÿ”— Repository Reference: All tests are implemented in the repo. We will take a look into the one for Article functionalities.

๐Ÿ“‹ Test Organization Strategy

As you will discover, I separate tests into test steps to improve both code readability and the generated reports! 📊

๐Ÿ’ก Best Practice: Breaking tests into logical steps makes debugging easier and reports more informative.

Test Steps Visualization

import { test, expect } from '../../fixtures/pom/test-options';
import { faker } from '@faker-js/faker';

test.describe('Verify Publish/Edit/Delete an Article', () => {
    const randomArticleTitle = faker.lorem.words(3);
    const randomArticleDescription = faker.lorem.sentence();
    const randomArticleBody = faker.lorem.paragraphs(2);
    const randomArticleTag = faker.lorem.word();

    test.beforeEach(async ({ homePage }) => {
        await homePage.navigateToHomePageUser();
    });

    test(
        'Verify Publish/Edit/Delete an Article',
        { tag: '@Sanity' },
        async ({ navPage, articlePage }) => {
            await test.step('Verify Publish an Article', async () => {
                await navPage.newArticleButton.click();

                await articlePage.publishArticle(
                    randomArticleTitle,
                    randomArticleDescription,
                    randomArticleBody,
                    randomArticleTag
                );
            });

            await test.step('Verify Edit an Article', async () => {
                await articlePage.navigateToEditArticlePage();

                await expect(articlePage.articleTitleInput).toHaveValue(
                    randomArticleTitle
                );

                await articlePage.editArticle(
                    `Updated ${randomArticleTitle}`,
                    `Updated ${randomArticleDescription}`,
                    `Updated ${randomArticleBody}`
                );
            });

            await test.step('Verify Delete an Article', async () => {
                await articlePage.deleteArticle();
            });
        }
    );
});
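The `articlePage` fixture used above is a Page Object. As a hedged sketch of what its `publishArticle` method might look like: the locator and placeholder strings below are assumptions inferred from the test, and the minimal `Locator`/`Page` interfaces are stand-ins for the real Playwright types (imported from `@playwright/test` in the actual repo) so the sketch stays self-contained:

```typescript
// Minimal stand-ins for Playwright's Locator/Page types (assumption: the
// real class imports the genuine types from '@playwright/test').
interface Locator {
    fill(value: string): Promise<void>;
    click(): Promise<void>;
    press(key: string): Promise<void>;
}
interface Page {
    getByPlaceholder(text: string): Locator;
    getByRole(role: string, options: { name: string }): Locator;
}

class ArticlePage {
    constructor(private readonly page: Page) {}

    // Fill in every article field, submit the tag, and publish
    async publishArticle(
        title: string,
        description: string,
        body: string,
        tag: string
    ): Promise<void> {
        await this.page.getByPlaceholder('Article Title').fill(title);
        await this.page.getByPlaceholder("What's this article about?").fill(description);
        await this.page.getByPlaceholder('Write your article (in markdown)').fill(body);
        await this.page.getByPlaceholder('Enter tags').fill(tag);
        await this.page.getByPlaceholder('Enter tags').press('Enter');
        await this.page.getByRole('button', { name: 'Publish Article' }).click();
    }
}
```

Keeping all locators inside the Page Object means a UI change touches one class instead of every test that publishes an article.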

๐ŸŽฏ Mock API Responses for Edge Cases

Since specific edge cases are often impossible to reproduce against a real backend, mocking API responses is a convenient solution! 🎭

๐Ÿ’ก Why Mock APIs?: Mocking allows you to simulate various scenarios including error states, slow responses, and edge cases that are difficult to reproduce in real environments.

    // Requires the JSON test data created earlier:
    // import articleData from '../../test-data/articleData.json';
    test(
        'Mock API Response',
        { tag: '@Regression' },
        async ({ page, homePage }) => {
            await page.route(
                `${process.env.API_URL}api/articles?limit=10&offset=0`,
                async (route) => {
                    await route.fulfill({
                        status: 200,
                        contentType: 'application/json',
                        body: JSON.stringify({
                            articles: [],
                            articlesCount: 0,
                        }),
                    });
                }
            );

            await page.route(
                `${process.env.API_URL}api/tags`,
                async (route) => {
                    await route.fulfill({
                        status: 200,
                        contentType: 'application/json',
                        body: JSON.stringify({
                            tags: articleData.create.article.tagList,
                        }),
                    });
                }
            );

            await homePage.navigateToHomePageGuest();

            await expect(homePage.noArticlesMessage).toBeVisible();

            for (const tag of articleData.create.article.tagList) {
                await expect(
                    page.locator('.tag-list').getByText(tag)
                ).toBeVisible();
            }
        }
    );
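The same route-interception pattern can simulate error states. One way to keep such mocks reusable is a small pure helper that builds the payload handed to `route.fulfill` (a sketch under assumptions: the helper name and the 500 body shape are mine, not from the repo):

```typescript
// Hypothetical helper building the options object passed to route.fulfill().
// Keeping it pure makes the mocked response easy to reuse and unit test.
interface FulfillOptions {
    status: number;
    contentType: string;
    body: string;
}

function serverErrorResponse(message: string): FulfillOptions {
    return {
        status: 500,
        contentType: 'application/json',
        body: JSON.stringify({ errors: { body: [message] } }),
    };
}

// Usage inside a test (sketch):
// await page.route(`${process.env.API_URL}api/articles?limit=10&offset=0`, (route) =>
//     route.fulfill(serverErrorResponse('Internal Server Error'))
// );
```

With this in place, a test can assert that the UI surfaces a friendly error message instead of crashing when the articles endpoint fails.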

๐ŸŽฏ What's Next?

In the next article we will implement API Fixtures - the foundation for robust API testing capabilities! ๐Ÿš€

๐Ÿ’ฌ Community: Please feel free to initiate discussions on this topic, as every contribution has the potential to drive further refinement.


โœจ Ready to enhance your testing capabilities? Let's continue building this robust framework together!
