Enhancing Your Framework with Extent Reports, Screenshots, and Retry Mechanisms (Part 5)


Welcome to the fifth installment of our Selenium Framework Design! In this blog we’ll integrate Extent Reports for professional HTML reporting, attach screenshots to failed tests, ensure thread safety for parallel execution, and implement a TestNG retry mechanism for flaky tests.
Let’s dive in!
Recap: Where We Are
In Part 4, we:
Implemented TestNG DataProvider for data-driven testing with multiple data sets.
Used HashMaps to handle complex data sets cleanly.
Created a JSON reader utility (getJsonDataToMap) to externalize test data.
Built a screenshot utility (getScreenshot) in BaseTest to capture screenshots for failed tests.
Prepared for Extent Reports integration to generate HTML reports with screenshots.
Now, we’ll focus on:
Setting up Extent Reports with basic configurations for a standalone test (Phase 1).
Integrating Extent Reports into our framework using TestNG Listeners for automatic reporting.
Attaching screenshots to failed tests in reports.
Ensuring thread safety for parallel test execution using ThreadLocal.
Implementing a TestNG retry mechanism to rerun flaky tests.
This is the final phase of our framework, making it professional-grade with automated reporting and failure handling.
Step 1: Understanding Extent Reports (Phase 1)
Extent Reports is a popular open-source library for generating interactive HTML reports for test execution. It provides:
Pie charts and diagrams showing pass/fail counts.
Detailed logs for each test, including failure reasons.
Screenshots for failed tests.
Metadata like tester name, execution time, and duration.
Extent Reports is widely used in Selenium automation for its visual appeal and customization options. In this section, we’ll learn the basic configuration for Extent Reports using a standalone test. In a later section (Phase 2), we’ll integrate it into our framework with TestNG Listeners for advanced features like screenshots and parallel execution support.
Creating a Maven Project for Extent Reports Demo
To understand Extent Reports, let’s create a simple Maven project:
In Eclipse, right-click > New > Project > Maven Project.
Select the quickstart template (ideal for automation) and click Next.
Provide Group Id, Artifact Id and Version.
Click Finish.
Adding Dependencies
We need dependencies for Extent Reports, TestNG, and Selenium in pom.xml. Delete the default JUnit dependency and add:
<dependencies>
  <!-- Extent Reports -->
  <dependency>
    <groupId>com.aventstack</groupId>
    <artifactId>extentreports</artifactId>
    <version>5.0.9</version> <!-- Use the latest version -->
  </dependency>
  <!-- TestNG -->
  <dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>7.8.0</version> <!-- Use the latest version -->
    <scope>test</scope>
  </dependency>
  <!-- Selenium -->
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>4.8.0</version> <!-- Use the latest stable version -->
  </dependency>
</dependencies>
To find the latest versions:
Search for “extent reports maven” on Google or visit Maven Repository.
Use the latest Extent Reports version (e.g., 5.0.9 at the time of writing).
Update TestNG and Selenium versions similarly.
Writing a Standalone Test
Delete the default test class (AppTest.java) and create a new class ExtentReportDemo in src/test/java:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class ExtentReportDemo {

    @Test
    public void initialDemo() {
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.selenium.dev/");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}
Explanation:
- A simple test that opens Chrome, navigates to the Selenium website, prints the page title, and closes the browser.
Configuring Extent Reports
Add Extent Reports configuration in a @BeforeTest method to set up reporting before the test runs:
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class ExtentReportDemo {
    ExtentReports extent;

    @BeforeTest
    public void config() {
        String path = System.getProperty("user.dir") + "\\reports\\index.html";
        ExtentSparkReporter reporter = new ExtentSparkReporter(path);
        reporter.config().setReportName("Web Automation Results");
        reporter.config().setDocumentTitle("Test Results");
        extent = new ExtentReports();
        extent.attachReporter(reporter);
        extent.setSystemInfo("Tester", "Samiksha Kute");
    }

    @Test
    public void initialDemo() {
        ExtentTest test = extent.createTest("Initial Demo");
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.selenium.dev/");
        System.out.println(driver.getTitle());
        driver.quit();
        extent.flush();
    }
}
Explanation:
ExtentSparkReporter: Configures the HTML report’s location and appearance.
path: Dynamically creates a reports folder in the project directory (System.getProperty("user.dir")) and names the report index.html (see the portability note after this list).
setReportName: Sets the report title to “Web Automation Results”.
setDocumentTitle: Sets the browser tab title to “Test Results”.
ExtentReports: The main class that manages reporting.
attachReporter: Links the ExtentSparkReporter configuration to the main reporting engine.
setSystemInfo: Adds metadata (e.g., tester name “Samiksha Kute”).
Global Variable: ExtentReports extent is declared at the class level to be accessible across methods.
createTest: Creates a test entry named “Initial Demo” in the report to track its status (pass/fail).
flush: Finalizes the report after the test, ensuring it’s generated.
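One portability note (an optional tweak, not required by the series): the "\\" separators in the path above are Windows-specific. If the same project runs on macOS or Linux, a minimal alternative, assuming you keep the same reports/index.html location, is to build the path with File.separator inside config():

// requires: import java.io.File;
String path = System.getProperty("user.dir") + File.separator + "reports" + File.separator + "index.html";

Everything else in the configuration stays the same.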
Running the Test
Run ExtentReportDemo as a TestNG test. The test:
Opens Chrome and navigates to https://www.selenium.dev/.
Prints the title and closes the browser.
Generates a report in the reports folder (index.html).
Refresh the project in Eclipse, open reports/index.html in a browser, and you’ll see:
Dashboard: Shows “Web Automation Results” with one test passed.
Title: “Test Results” in the browser tab.
Metadata: Tester name “Samiksha Kute”.
Test Details: “Initial Demo” marked as passed.
Simulating a Failure
To see how Extent Reports handles failures, modify initialDemo:
@Test
public void initialDemo() {
    ExtentTest test = extent.createTest("Initial Demo");
    WebDriver driver = new ChromeDriver();
    driver.get("https://www.selenium.dev/");
    System.out.println(driver.getTitle());
    test.fail("Result do not match");
    driver.quit();
    extent.flush();
}
Explanation:
ExtentTest: Captures the test object created by extent.createTest to log results.
test.fail: Explicitly marks the test as failed with the message “Result do not match”.
No Screenshots Yet: We’ll add screenshots in Phase 2 with TestNG Listeners.
Run the test again, refresh reports/index.html, and observe:
“Initial Demo” is marked as failed with the message “Result do not match”.
The dashboard shows one failed test.
Why This Matters
Basic Configuration: You’ve learned the core classes (ExtentReports, ExtentSparkReporter, ExtentTest) and methods (createTest, flush).
Standalone Example: Simplifies understanding before integrating into a framework.
Phase 1 Limitation: Manually adding reporting code in each test is inefficient, and we can’t handle dynamic failures or screenshots yet.
We’ll stop Phase 1 here and revisit Extent Reports in Phase 2 (below) when integrating with our framework using TestNG Listeners.
Step 2: Integrating Extent Reports into the Framework (Phase 2)
Now, let’s apply Extent Reports to our e-commerce framework from Part 4, optimizing it to avoid repetitive code in test classes. We’ll use TestNG Listeners to automate reporting and screenshot attachment for failures.
Adding Extent Reports Dependency
Add the Extent Reports dependency to the framework’s pom.xml:
<dependency>
  <groupId>com.aventstack</groupId>
  <artifactId>extentreports</artifactId>
  <version>5.0.9</version>
</dependency>
Verify other dependencies (TestNG, Selenium, Jackson, Commons IO) are present from Part 4.
Creating an Extent Reporter Utility
To centralize report configuration, create a new class ExtentReporterNG in src/main/java/resources:
package resources;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

public class ExtentReporterNG {

    public static ExtentReports getReportObject() {
        String path = System.getProperty("user.dir") + "\\reports\\index.html";
        ExtentSparkReporter reporter = new ExtentSparkReporter(path);
        reporter.config().setReportName("Web Automation Results");
        reporter.config().setDocumentTitle("Test Results");
        ExtentReports extent = new ExtentReports();
        extent.attachReporter(reporter);
        extent.setSystemInfo("Tester", "Samiksha Kute");
        return extent;
    }
}
Explanation:
Reuses the Phase 1 configuration (report path, name, title, tester).
Static Method: getReportObject returns an ExtentReports object, accessible without instantiating ExtentReporterNG.
Placed in resources to separate utility logic from test components.
Creating a TestNG Listener
To automate reporting, create a Listeners class in src/test/java/testComponents that implements TestNG’s ITestListener interface:
package testComponents;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;
import resources.ExtentReporterNG;

public class Listeners implements ITestListener {
    ExtentReports extent = ExtentReporterNG.getReportObject();
    ExtentTest test;

    @Override
    public void onTestStart(ITestResult result) {
        test = extent.createTest(result.getMethod().getMethodName());
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        test.log(Status.PASS, "Test Passed");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        test.fail(result.getThrowable());
    }

    @Override
    public void onFinish(ITestContext context) {
        extent.flush();
    }
}
Explanation:
ITestListener: Provides methods to hook into test events (start, success, failure, finish).
onTestStart: Creates a report entry for each test using the test method name (result.getMethod().getMethodName()), dynamically fetched from ITestResult.
onTestSuccess: Logs a “Test Passed” status for successful tests.
onTestFailure: Marks the test as failed and logs the error message (result.getThrowable()).
onFinish: Calls extent.flush() after all tests to generate the report.
Global Variables: extent and test are class-level to be shared across methods.
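A quick side note: besides registering the listener in testng.xml (shown later), TestNG also lets you wire a listener per test class with the @Listeners annotation. A minimal sketch, using a hypothetical demo class name:

package tests;

import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// Hypothetical demo class, for illustration only; our framework registers the listener in testng.xml instead.
@Listeners(testComponents.Listeners.class)
public class ListenerAnnotationDemoTest {

    @Test
    public void sampleTest() {
        // the onTestStart/onTestSuccess hooks in Listeners fire automatically for this test
    }
}

Both approaches work; testng.xml keeps the wiring in one place for the whole suite, which is why we use it here.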
Attaching Screenshots on Failure
Update Listeners to call the getScreenshot method from BaseTest and attach screenshots for failed tests:
package testComponents;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import org.openqa.selenium.WebDriver;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;
import resources.ExtentReporterNG;
import java.io.IOException;

public class Listeners extends BaseTest implements ITestListener {
    ExtentReports extent = ExtentReporterNG.getReportObject();
    ExtentTest test;

    @Override
    public void onTestStart(ITestResult result) {
        test = extent.createTest(result.getMethod().getMethodName());
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        test.log(Status.PASS, "Test Passed");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        test.fail(result.getThrowable());
        try {
            driver = (WebDriver) result.getTestClass().getRealClass().getField("driver").get(result.getInstance());
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {
            String filePath = getScreenshot(result.getMethod().getMethodName());
            test.addScreenCaptureFromPath(filePath, result.getMethod().getMethodName());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onFinish(ITestContext context) {
        extent.flush();
    }
}
Explanation:
Extends BaseTest: Inherits getScreenshot and driver.
Driver Access: Retrieves the test’s WebDriver instance dynamically:
result.getTestClass().getRealClass(): Gets the test class (e.g., ErrorValidationsTest).
getField("driver"): Accesses the driver field defined in the test class (inherited from BaseTest).
get(result.getInstance()): Gets the driver instance for the current test.
Exception Handling: Uses a generic Exception to catch multiple errors (e.g., IllegalAccessException, NoSuchFieldException).
getScreenshot: Calls the screenshot utility with the test method name (e.g., loginErrorValidation); a sketch of this Part 4 utility appears after this list.
addScreenCaptureFromPath: Attaches the screenshot to the report, using the file path and test name.
Try-Catch: Handles potential IOExceptions during screenshot capture.
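For reference, here is a minimal sketch of the getScreenshot utility built in BaseTest back in Part 4, since the listener depends on it; your version may differ in naming and save path:

// Inside BaseTest (sketch, assuming Commons IO is on the classpath)
// Needs: org.openqa.selenium.OutputType, org.openqa.selenium.TakesScreenshot,
//        org.apache.commons.io.FileUtils, java.io.File, java.io.IOException
public String getScreenshot(String testCaseName) throws IOException {
    TakesScreenshot ts = (TakesScreenshot) driver;              // driver is the BaseTest field
    File source = ts.getScreenshotAs(OutputType.FILE);          // capture the screenshot to a temp file
    File destination = new File(System.getProperty("user.dir") + "\\reports\\" + testCaseName + ".png");
    FileUtils.copyFile(source, destination);                    // save it next to the report
    return destination.getAbsolutePath();                       // path consumed by addScreenCaptureFromPath
}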
Updating TestNG XML
Modify testng.xml to include the Listeners class:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite" parallel="tests">
  <listeners>
    <listener class-name="testComponents.Listeners"/>
  </listeners>
  <test name="Test1">
    <classes>
      <class name="tests.ErrorValidationsTest"/>
    </classes>
  </test>
  <test name="Test2">
    <classes>
      <class name="tests.SubmitOrderTest"/>
    </classes>
  </test>
</suite>
Explanation:
<listeners>: Informs TestNG to use the Listeners class for all tests.
parallel="tests": Runs test classes (ErrorValidationsTest, SubmitOrderTest) in parallel, as set up in Part 3.
Class Path: Points to testComponents.Listeners.
Intentionally Failing a Test
To test reporting and screenshots, modify ErrorValidationsTest to fail intentionally:
@Test(groups = {"ErrorHandling"})
public void loginErrorValidation() {
    landingPage.loginApplication("your_email", "your_password");
    Assert.assertEquals(landingPage.getErrorMessage(), "Incorrect email or password123.");
}
Explanation:
Changes the expected error message to “Incorrect email or password123.” (which is wrong), causing the assertion to fail.
Uses TestNG’s Assert.
Running the Tests
Run testng.xml as a TestNG Suite. The framework:
Opens two browsers (parallel execution).
Runs SubmitOrderTest (two runs due to parameterization) and ErrorValidationsTest.
Captures a screenshot for the failed loginErrorValidation test.
Generates a report in reports/index.html.
Refresh the project, open index.html, and observe:
Dashboard: Shows 4 passed, 1 failed (pie chart).
Test Details: loginErrorValidation marked as failed with the assertion error and a screenshot (loginErrorValidation.png).
Metadata: Report name “Web Automation Results”, title “Test Results”, tester “Samiksha Kute”.
Issue Observed: The report incorrectly shows submitOrder as failed instead of loginErrorValidation. This is a concurrency issue due to parallel execution, which we’ll fix next.
Step 3: Fixing Concurrency Issues with ThreadLocal
When running tests in parallel, the ExtentTest test variable in Listeners is shared across tests, causing race conditions. For example:
ErrorValidationsTest creates a test entry (loginErrorValidation).
SubmitOrderTest runs concurrently, overwriting test with its entry (submitOrder).
When ErrorValidationsTest fails, it updates the test variable, which now points to submitOrder, causing the wrong test to be marked as failed.
Running tests serially (parallel="none") avoids this, as tests execute sequentially, but we want parallel execution for efficiency. The solution is to make test thread-safe using Java’s ThreadLocal class.
Updating Listeners for Thread Safety
Modify Listeners to use ThreadLocal:
package testComponents;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import org.openqa.selenium.WebDriver;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;
import resources.ExtentReporterNG;
import java.io.IOException;

public class Listeners extends BaseTest implements ITestListener {
    ExtentReports extent = ExtentReporterNG.getReportObject();
    ThreadLocal<ExtentTest> extentTest = new ThreadLocal<>();

    @Override
    public void onTestStart(ITestResult result) {
        ExtentTest test = extent.createTest(result.getMethod().getMethodName());
        extentTest.set(test);
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        extentTest.get().log(Status.PASS, "Test Passed");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        extentTest.get().fail(result.getThrowable());
        try {
            driver = (WebDriver) result.getTestClass().getRealClass().getField("driver").get(result.getInstance());
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {
            String filePath = getScreenshot(result.getMethod().getMethodName());
            extentTest.get().addScreenCaptureFromPath(filePath, result.getMethod().getMethodName());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onFinish(ITestContext context) {
        extent.flush();
    }
}
Explanation:
ThreadLocal<ExtentTest>: Creates a ThreadLocal variable to store ExtentTest objects, ensuring each test thread has its own instance.
set: In onTestStart, stores the ExtentTest object in ThreadLocal with extentTest.set(test), associating it with the current thread.
get: In onTestSuccess, onTestFailure, and screenshot attachment, retrieves the thread-specific ExtentTest with extentTest.get().
Thread Safety: Each test (e.g., ErrorValidationsTest, SubmitOrderTest) runs in its own thread, and ThreadLocal maintains a separate ExtentTest mapping for each thread, preventing overrides.
How ThreadLocal Works
When ErrorValidationsTest starts, it creates an ExtentTest entry and sets it in ThreadLocal, mapped to its own thread.
SubmitOrderTest does the same, mapped to its thread.
When ErrorValidationsTest fails, extentTest.get() retrieves its specific ExtentTest (not SubmitOrderTest’s), ensuring the correct test is updated.
ThreadLocal internally keeps a per-thread mapping of values, ensuring thread isolation; the short standalone sketch after this list illustrates the behavior.
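If the concept still feels abstract, here is a tiny self-contained sketch (independent of the framework; the class name is illustrative) showing that each thread only ever sees the value it stored under the same ThreadLocal variable:

public class ThreadLocalDemo {
    private static final ThreadLocal<String> current = new ThreadLocal<>();

    public static void main(String[] args) {
        Runnable task = () -> {
            // each thread stores its own value under the same ThreadLocal reference
            current.set("value set by " + Thread.currentThread().getName());
            // and reads back only its own value, never the other thread's
            System.out.println(Thread.currentThread().getName() + " -> " + current.get());
        };
        new Thread(task, "thread-A").start();
        new Thread(task, "thread-B").start();
    }
}

This mirrors how extentTest.get() in Listeners always returns the ExtentTest created by the same thread that is running the test.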
Rerunning Tests
Restore parallel="tests" in testng.xml and run the suite again. Refresh index.html and verify:
loginErrorValidation is correctly marked as failed with the assertion error and screenshot.
submitOrder tests are marked as passed.
The dashboard shows 4 passed, 1 failed, with proper metadata.
Why ThreadLocal?
Concurrency: Prevents race conditions in parallel execution.
Scalability: Supports any number of parallel tests without conflicts.
Interview Tip: Explain ThreadLocal as a way to maintain thread-specific data, crucial for parallel test frameworks.
Step 4: Implementing TestNG Retry Mechanism
Tests can fail due to flakiness (e.g., temporary network issues, application instability), leading to false failures. TestNG’s IRetryAnalyzer interface allows rerunning failed tests to confirm if the failure is genuine. We’ll add a retry mechanism to rerun failed tests once.
Creating a Retry Class
Create a new class Retry in src/test/java/testComponents:
package testComponents;

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class Retry implements IRetryAnalyzer {
    int count = 0;
    int maxTry = 1;

    @Override
    public boolean retry(ITestResult result) {
        if (count < maxTry) {
            count++;
            return true;
        }
        return false;
    }
}
Explanation:
IRetryAnalyzer: Provides the retry method to decide if a failed test should rerun.
Variables:
count: Tracks retry attempts (starts at 0).
maxTry: Sets the maximum retries (1 for one retry).
Logic:
If count < maxTry (e.g., 0 < 1), increments count and returns true to trigger a retry.
If count >= maxTry (e.g., 1 >= 1), returns false to stop retrying.
Behavior: A failed test reruns once. If it fails again, it’s marked as a failure.
Applying Retry to a Test
Add the retryAnalyzer attribute to loginErrorValidation in ErrorValidationsTest:
@Test(groups = {"ErrorHandling"}, retryAnalyzer = Retry.class)
public void loginErrorValidation() {
    landingPage.loginApplication("your_email", "your_password");
    Assert.assertEquals(landingPage.getErrorMessage(), "Incorrect email or password123.");
}
Explanation:
retryAnalyzer: Links the test to the Retry class.
Only tests with this attribute will retry on failure. Others (e.g., submitOrder) won’t; a hedged sketch after this list shows how retries could be applied suite-wide if you ever need that.
Use Case: Apply to tests prone to flakiness (e.g., UI tests affected by network delays).
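Purely optional, and not part of the framework built in this series: if you ever want every test to pick up retries without adding retryAnalyzer to each @Test, TestNG’s IAnnotationTransformer listener can attach the analyzer at runtime. A minimal sketch with a hypothetical class name:

package testComponents;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

// Hypothetical helper; register it as an extra <listener> in testng.xml if you use it.
public class RetryTransformer implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
        // apply the same Retry analyzer to every @Test method at runtime
        annotation.setRetryAnalyzer(Retry.class);
    }
}

For this series we stick to the selective, per-test approach shown above.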
Running the Tests
Run testng.xml again. The framework:
Executes all tests, with loginErrorValidation failing.
Triggers a retry for loginErrorValidation (since retryAnalyzer is set).
Marks the first attempt as skipped and the second as failed (since it fails again).
Check the TestNG results tab:
Total tests: 6 (4 from SubmitOrderTest, 2 from ErrorValidationsTest due to retry).
loginErrorValidation shows:
First run: Skipped (retry triggered).
Second run: Failed (same assertion error).
Open reports/index.html:
loginErrorValidation is marked as failed with the error message and screenshot.
The retry attempt is logged, showing the test ran twice.
Why RetryAnalyzer?
Flaky Tests: Reduces false failures by confirming consistent issues.
Selective Application: Applied only to flaky tests, optimizing execution time.
Interview Tip: Explain IRetryAnalyzer as a TestNG feature to handle flaky tests, using a counter to limit retries.
Final Framework Code
Check out the complete code repository below:
Key Takeaways
Extent Reports (Phase 1): Learned basic configuration with ExtentSparkReporter and ExtentReports for standalone tests.
Framework Integration (Phase 2): Used ExtentReporterNG and TestNG Listeners to automate reporting without cluttering test classes.
Screenshots: Attached screenshots to failed tests using getScreenshot and addScreenCaptureFromPath.
Thread Safety: Fixed concurrency issues in parallel execution with ThreadLocal for ExtentTest.
Retry Mechanism: Implemented IRetryAnalyzer to rerun flaky tests, reducing false failures.
Framework Design: Built reusable utilities (ExtentReporterNG, Listeners, Retry) for a maintainable framework.
Thank you for reading!
Written by Samiksha Kute
Passionate Learner!