by Mollie Brown | Apr 21, 2025 | Automation Testing, Blog, Latest Post |
Integrating Jenkins with Tricentis Tosca is a practical step for teams looking to bring more automation and consistency into their CI/CD pipelines. This setup allows you to execute Tosca test cases automatically from Jenkins, helping ensure smoother, more reliable test cycles with less manual intervention. In this blog, we’ll guide you through the process of setting up the Tosca Jenkins Integration using the Tricentis CI Plugin and ToscaCIClient. Whether you’re working with Remote Execution or Distributed Execution (DEX), the integration supports both, giving your team flexibility depending on your infrastructure. We’ll cover the prerequisites, key configuration steps, and some helpful tips to ensure a successful setup. If your team is already using Jenkins for builds and deployments, this integration can help extend automation to your testing layer, making automation testing a seamless part of your pipeline and keeping your workflow unified and efficient.
Necessary prerequisites for integration
To connect Jenkins with Tricentis Tosca successfully, organizations need to have certain tools and conditions ready. First, you must have the Jenkins plugin for Tricentis Tosca. This plugin helps link the automation features of both systems. Make sure the plugin works well with your version of Jenkins because updates might change how it performs.
Next, it is important to have a properly set-up Tricentis test automation environment. This is necessary for running functional and regression tests correctly within the pipeline. Check that the Tosca Execution Client is installed and matches your CI requirements. For the best results, your Tosca Server should also be current and operational.
Finally, prepare your GitHub repository for configuration. This allows Jenkins to access the code, run test cases, and share results smoothly. With these steps completed, organizations can build effective workflows that improve testing results and development efforts.
Step-by-step guide to configuring Tosca in Jenkins
Achieving the integration requires systematic configuration of Tosca within Jenkins. Below is a simple guide:
Step 1: Install Jenkins Plugin – Tricentis Continuous Integration
1. Go to Jenkins Dashboard → Manage Jenkins → Manage Plugins.
2. Search for Tricentis Continuous Integration in the Available tab.

3. Install the plugin and restart Jenkins if prompted.
Step 2: Configure Jenkins Job with Tricentis Continuous Integration
Once you’ve installed the plugin, follow these steps to add it to your Jenkins job:
- Go to your Jenkins job or create a new Freestyle project.
- Click on Configure.
- Scroll to Build Steps section.
- Click Add build step → Select Tricentis Continuous Integration from the dropdown.

Configure the Plugin Parameters
Once the plugin is installed, configure the Build Step in your Jenkins job using the following fields:
| S. No | Field Name | Pipeline Property | Required | Description |
|---|---|---|---|---|
| 1 | Tricentis client path | tricentisClientPath | Yes | Path to ToscaCIClient.exe or ToscaCIJavaClient.jar. If using the .jar, make sure JRE 1.7+ is installed and JAVA_HOME is set on the Jenkins agent. |
| 2 | Endpoint | endpoint | Yes | Webservice URL that triggers execution. Remote: http://servername:8732/TOSCARemoteExecutionService/ DEX: http://servername:8732/DistributionServerService/ManagerService.svc |
| 3 | TestEvents | testEvents | Optional | Only for Distributed Execution. Enter TestEvents (names or system IDs) separated by semicolons. Leave the Configuration file field empty if using this. |
| 4 | Configuration file | configurationFilePath | Optional | Path to an .xml test configuration file (for detailed execution setup). Leave TestEvents empty if using this. |
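For teams using Pipeline jobs instead of Freestyle projects, the same pipeline properties from the table above can be used in a Jenkinsfile. The sketch below is an assumption, not taken from the plugin docs: the exact step name should be generated with Jenkins’ Pipeline Syntax helper, and the client path, endpoint, and TestEvent name are placeholders.

```groovy
pipeline {
    agent any
    stages {
        stage('Run Tosca Tests') {
            steps {
                // Step name and parameter names are assumptions; confirm them via
                // Manage Jenkins -> Pipeline Syntax for the Tricentis CI plugin.
                tricentisCI tricentisClientPath: 'C:\\Tosca\\ToscaCI\\Client\\ToscaCIClient.exe',
                            endpoint: 'http://servername:8732/DistributionServerService/ManagerService.svc',
                            testEvents: 'Sample'
            }
        }
    }
}
```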
Step 3: Create a Tosca Agent (Tosca Server)
Create an Agent (from Tosca Server)
You can open the DEX Monitor in one of the following ways:
- In your browser, by entering the address http://&lt;ServerName&gt;:&lt;Port&gt;/Monitor/ (the placeholders stand for your Tosca Server host and port).
- Directly from Tosca Commander: right-click a TestEvent and select one of the following context menu entries:
  - Open Event View takes you to the TestEvents overview page.
  - Open Agent View takes you to the Agents overview page.
Navigate the DEX Monitor
The menu bar on the left side of the screen allows you to switch between views:
- The Agent View, where you can monitor, recover, configure, and restart your Agents.
- The Event View, where you can monitor and cancel the execution of your TestEvents.
Enter:
- Agent Name (e.g., Agent2)
- Assign a Machine Name
This agent will be responsible for running your test event.

Step 4: Create and Configure a TestEvent (Tosca Commander)
- Open Tosca Commander
- Navigate to: Project > Execution > TestEvents
- Click Create TestEvent
- Provide a name like Sample
Step 4.1: Assign the Required ExecutionList
- Select the ExecutionList (this is where you define which test cases will run)
- Select an Execution Configuration
- Assign the Agent created in Step 3
Step 4.2: Save and Copy the Node Path
- Save the TestEvent
- Right-click the TestEvent → Copy Node Path
- Paste this into the TestEvents field in the Jenkins build step

Step 5: How the Integration Works
Execution Flow:
- Jenkins triggers test execution using ToscaCIClient.
- The request reaches the Tosca Distribution Server (ManagerService).
- Tosca Server coordinates with AOS (Automation Object Service) to retrieve test data from the Common Repository.
- The execution task is distributed to a DEX Agent.
- DEX Agent runs the test cases and sends the results back.
- Jenkins build is updated with the execution status (Success/Failure).

Step 6: Triggering Execution via Jenkins
Once you’ve entered all required fields:
- Save the Jenkins job
- Click Build Now in Jenkins
What Happens Next:
- The configured DEX Agent will be triggered.
- You’ll see a progress bar and test status directly in the DEX Monitor.

- Upon completion, the Jenkins build status (Success or Failure) reflects the outcome of the test execution.

Step 7: View Test Reports in Jenkins
To visualize test results:
- Go to Manage Jenkins > Manage Plugins > Available
- Search and install Test Results Analyzer
- Once installed, configure Jenkins to collect results (e.g., via JUnit or custom publisher if using Tosca XML outputs)
Conclusion:
Integrating Tosca with Jenkins enhances your CI/CD workflow by automating test execution and reducing manual effort. This integration streamlines your development process and supports the delivery of reliable, high-quality software. By following the steps outlined in this guide, you can set up a smooth and efficient test automation pipeline that saves time and improves productivity. With testing seamlessly built into your workflow, your team can focus more on innovation and delivering value to end users.
Found this guide helpful? Feel free to leave a comment below and share it with your team or network who might benefit from this integration.
Frequently Asked Questions
- Why should I integrate Tosca with Jenkins?
Integrating Tosca with Jenkins enables continuous testing, reduces manual effort, and ensures faster, more reliable software delivery.
- Can I use Tosca Distributed Execution (DEX) with Jenkins?
Yes, Jenkins supports both Remote Execution and Distributed Execution (DEX) using the ToscaCIClient.
- Do I need to install a plugin for Tosca Jenkins Integration?
Yes, you need to install the Tricentis Continuous Integration plugin from the Jenkins Plugin Manager to enable integration.
- What types of test cases can be executed via Jenkins?
You can execute any automated Tosca test cases, including UI, API, and end-to-end tests, configured in Tosca Commander.
- Is Tosca Jenkins Integration suitable for Agile and DevOps teams?
Absolutely. This integration supports Agile and DevOps practices by enabling faster feedback and automated testing in every build cycle.
- How do I view Tosca test results in Jenkins?
Install the Test Results Analyzer plugin or configure Jenkins to read Tosca’s test output via JUnit or a custom result publisher.
by Jacob | Apr 19, 2025 | Automation Testing, Blog, Latest Post |
Selenium has become a go-to tool for automating web application testing. But automation isn’t just about running tests; it’s also about understanding the results. That’s where Selenium Report Generation plays a crucial role. Good test reports help teams track progress, spot issues, and improve the quality of their software. Selenium supports various tools that turn raw test data into clear, visual reports. These reports can show test pass/fail counts, execution time, logs, and more, making it easier for both testers and stakeholders to understand what’s happening. In this blog, we’ll explore some of the most popular Selenium reporting tools like Extent Reports, Allure, and TestNG. You’ll learn what each tool offers, how to use them, and how they can improve your test automation workflow. We’ll also include example screenshots to make things easier to understand.
Importance of Generating Reports in Selenium
Reports are very important in test automation. They help teams look at results easily. First, reports show what worked well during test execution and what did not. With a good reporting tool, different people, like managers and developers, can see how the testing cycle is going. Good reporting also makes workflows easier by showing insights in a simple way. Test automation reporting tools are especially helpful for big projects, where it’s important to see complex test case data clearly. Also, advanced reporting tools have interactive dashboards. These dashboards summarize test execution, show trends, and track failures. This helps teams make quick decisions. By focusing on strong reporting, organizations can significantly improve project delivery and lessen delays in their testing pipelines.
Detailed Analysis of Selenium Reporting Tools
You can find different reporting tools that work well with Selenium’s powerful test automation features. Many of these tools are popular because they can adapt to various testing frameworks. Each one brings unique strengths—some are easy to integrate, while others offer visual dashboards or support multiple export formats like HTML, JSON, or XML. Some tools focus on delivering a user-friendly experience with strong analytics, while others improve work efficiency by storing historical test data and integrating with CI/CD pipelines. Choosing the right reporting tool depends on your project’s requirements, the frameworks in use, and your preferred programming language.
Let’s take a closer look at some of these tools, along with their key features and benefits, to help you decide which one fits best with your Selenium report generation needs.
TestNG Reports
TestNG is a popular testing framework for Java that comes with built-in reporting features. When used in Selenium automation, it generates structured HTML reports by default, showing test status like passed, failed, or skipped. Though Selenium handles automation, TestNG fills the gap by providing essential test result reporting.
Features:
- Detailed Test Results: Displays comprehensive information about each test, including status and execution time.
- Suite-Level Reporting: Aggregates results from multiple test classes into a single report.
Benefits:
- Integrated Reporting: No need for external plugins; TestNG generates reports by default.
- Easy Navigation: Reports are structured for easy navigation through test results.
Integration with Selenium:
To generate TestNG reports in Selenium, include the TestNG library in your project and annotate your test methods with @Test. After executing tests, TestNG automatically generates reports in the test-output directory.
package example1;

import org.testng.annotations.*;

public class SimpleTest {

    @BeforeClass
    public void setUp() {
        // code that will be invoked when this test is instantiated
    }

    @Test(groups = {"fast"})
    public void aFastTest() {
        System.out.println("Fast test");
    }

    @Test(groups = {"slow"})
    public void aSlowTest() {
        System.out.println("Slow test");
    }
}
<project default="test">
    <path id="cp">
        <pathelement location="lib/testng-5.13.1.jar"/>
        <pathelement location="build"/>
    </path>
    <taskdef name="testng" classpathref="cp"
             classname="org.testng.TestNGAntTask"/>
    <target name="test">
        <testng classpathref="cp" groups="fast">
            <classfileset dir="build" includes="example1/*.class"/>
        </testng>
    </target>
</project>
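The same run can be configured without Ant through a testng.xml suite file; TestNG then writes its default HTML report to the test-output directory. The suite and test names below are illustrative:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="FastSuite" verbose="1">
    <test name="FastTests">
        <groups>
            <run>
                <include name="fast"/>
            </run>
        </groups>
        <classes>
            <class name="example1.SimpleTest"/>
        </classes>
    </test>
</suite>
```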

Extent Report
Extent Reports is a widely adopted open-source tool that transforms test results into interactive and visually appealing HTML reports. Especially useful in Selenium-based projects, it enhances the way results are presented, supports screenshot embedding, and offers flexible logging, making analysis and debugging more effective.
Features:
- Customizable HTML Reports: Helps create detailed and clickable reports that can be customized as needed.
- Integration with Testing Frameworks: Works seamlessly with frameworks like TestNG and JUnit, making it easy to incorporate into existing test setups.
- Screenshot Embedding: Supports adding screenshots to reports, which is helpful for visualizing test steps and failures.
- Logging Capabilities: Enables logging of test steps and results, providing a clear record of what happened during tests.
Benefits:
- Enhanced Readability: Presents test results in a clear and organized manner, making it easier to identify passed, failed, or skipped tests.
- Improved Debugging: Detailed logs and embedded screenshots help in quickly identifying and understanding issues in the tests.
- Professional Documentation: Generates professional-looking reports that can be shared with team members and stakeholders to communicate test outcomes effectively.
Integration:
To use Extent Reports with Selenium and TestNG:
- Add Extent Reports Library: Include the Extent Reports library in your project by adding it to your project’s dependencies.

- Set Up Report Path: Define where the report should be saved by specifying a file path.
extent.reporter.spark.start=true
extent.reporter.spark.out=reports/Extent-Report/QA-Results.html
extent.reporter.spark.config=src/test/resources/extent-config.xml
extent.reporter.spark.base64imagesrc=true
screenshot.dir=reports/images
screenshot.rel.path=reports/images
extent.reporter.pdf.start=false
extent.reporter.pdf.out=reports/PDF-Report/QA-Test-Results.pdf
extent.reporter.spark.vieworder=dashboard,test,category,exception,author,device,log
systeminfo.OS=MAC
systeminfo.User=Unico
systeminfo.App-Name=Brain
systeminfo.Env=Stage
- Runner class: add the Extent adapter plugin to the runner class to generate reports.
// The runner class name below is illustrative
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    plugin = {
        "com.aventstack.extentreports.cucumber.adapter.ExtentCucumberAdapter:",
        "html:reports/cucumber/CucumberReport.html",
        "json:reports/cucumber/cucumber.json",
        "SpringPoc.utilities.ExecutionTracker"
    },
    glue = "SpringPoc"
)
public class TestRunner {
}
- Attach Screenshots: If a test fails, capture a screenshot and attach it to the report for better understanding.
public void addScreenshot(Scenario scenario) {
    if (scenario.isFailed()) {
        // Save a copy to disk and attach the screenshot bytes to the report
        String screenshotPath = ScreenshotUtil.captureScreenshot(driver, scenario.getName().replaceAll(" ", "_"));
        scenario.attach(((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES),
                "image/png", "Failed_Step_Screenshot");
    }
}
- Generate the Report: After all tests are done, generate the report to the specified path

Extent Report Overview – Failed Scenarios Summary
This section displays the high-level summary of failed test scenarios from the automation suite for the Shoppers Stop application.

Detailed Error Insight – Timeout Exception in Scenario Execution
This section provides a detailed look into the failed step, highlighting a TimeoutException due to element visibility issues during test execution.

Allure Report
Allure is a flexible and powerful reporting framework designed to generate detailed, interactive test reports. Suitable for a wide range of testing frameworks including TestNG, JUnit, and Cucumber, it offers visual dashboards, step-level insights, and CI/CD integration—making it a great fit for modern Selenium test automation.
Allure helps testers and teams view test outcomes clearly with filters, severity levels, and real-time test data visualization. It’s also CI/CD friendly, making it ideal for continuous testing environments.
Features:
- Interactive Dashboard: Displays a test summary with passed, failed, broken, and skipped test counts using colorful charts and graphs.
- Step-Level Details: Shows each step inside a test case with optional attachments like screenshots, logs, or code snippets.
- Multi-Framework Support: Compatible with TestNG, JUnit, Cucumber, PyTest, Cypress, and many other frameworks.
- Custom Labels and Severity Tags: Supports annotations to add severity levels (e.g., critical, minor) and custom tags (e.g., feature, story).
- Attachments Support: Enables adding screenshots, logs, videos, and custom files directly inside the test report.
Benefits:
- Clear and Organized Presentation: Makes it easy to read and understand test outcomes, even for non-technical team members.
- Improved Debugging: Each failed test shows detailed steps, logs, and screenshots to help identify issues faster.
- Professional-Grade Reports: The reports are clean, responsive, and suitable for sharing with clients or stakeholders.
- Team-Friendly: Improves collaboration by making test results accessible to QA, developers, and managers.
- Supports CI/CD Pipelines: Seamless integration with Jenkins and other tools to generate and publish reports automatically.
Integration:
Add the Dependencies & Run:
1. Update the Properties section in the Maven pom.xml file
2. Add Selenium, JUnit4 and Allure-JUnit4 dependencies in POM.xml
3. Update Build Section of pom.xml in Allure Report Project.
4. Create Pages and Test Code for the pages
Project Structure with Allure Integration
Displays the organized folder structure of the Selenium-Allure-Demo project, showing separation between page objects and test classes.

TestNG XML Suite Configuration for Allure Reports
Shows the testng.xml configuration file with multiple test suites defined to enable Allure reporting for Login and Dashboard test classes.

Allure Cucumber Plugin Setup in CucumberOptions
Demonstrates how to configure Allure reporting in a Cucumber framework using the @CucumberOptions annotation with the appropriate plugin.
package pocDemoApp.cukes;

// imports omitted in the original source

@CucumberOptions(
    features = {"use your feature file path"},
    monochrome = true,
    tags = "use your tags",
    glue = {"use your valid glue"},
    plugin = {
        "io.qameta.allure.cucumber6jvm.AllureCucumber6Jvm"
    }
)
public class SampleCukes extends AbstractTestNGCucumberTests {
}
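Once a run has produced the allure-results directory, the standalone Allure CLI (assumed to be installed separately) renders it into the HTML report viewed in the browser. The directory names below are the Allure defaults:

```shell
# Render a one-off static report into ./allure-report
allure generate allure-results --clean -o allure-report

# Or build the report and open it on a local web server in one step
allure serve allure-results
```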
Allure Report in Browser – Overview
A snapshot of the Allure report in the browser, showcasing test execution summary and navigation options.

ReportNG
ReportNG is a simple yet effective reporting plugin for TestNG that enhances the default HTML and XML reports. It provides better visuals and structured results, making it easier to assess Selenium test outcomes without adding heavy dependencies or setup complexity.
Features:
- Enhanced HTML Reports: Generates user-friendly, color-coded reports that make it easy to identify passed, failed, and skipped tests, with both a summary and a detailed view of test outcomes.
- JUnit XML Reports: Produces XML reports compatible with JUnit, facilitating integration with other tools and continuous integration systems.
- Customization Options: Allows customization of report titles and other properties to align with project requirements.
Benefits:
- Improved Readability: The clean and organized layout of ReportNG’s reports makes it easier to quickly assess test results.
- Efficient Debugging: By providing detailed information on test failures and skips, ReportNG aids in identifying and resolving issues promptly.
- Lightweight Solution: As a minimalistic plugin, ReportNG adds enhanced reporting capabilities without significant overhead.
Integration Steps:
To integrate ReportNG with a Selenium and TestNG project:
Add ReportNG Dependencies:
Include the ReportNG library in your project. If you’re using Maven, add the following to your pom.xml (the versions shown are the last published releases at the time of writing; adjust as needed):
<dependencies>
    <dependency>
        <groupId>org.uncommons</groupId>
        <artifactId>reportng</artifactId>
        <version>1.1.4</version>
        <scope>test</scope>
        <exclusions>
            <exclusion>
                <groupId>org.testng</groupId>
                <artifactId>testng</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- ReportNG uses Guice for dependency injection -->
    <dependency>
        <groupId>com.google.inject</groupId>
        <artifactId>guice</artifactId>
        <version>3.0</version>
        <scope>test</scope>
    </dependency>
</dependencies>
Configuring TestNG Suite with ReportNG Listeners
An example of a testng.xml configuration using ReportNG listeners (HTMLReporter and JUnitXMLReporter) for enhanced reporting.
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="MySuite" verbose="1">
    <listeners>
        <listener class-name="org.uncommons.reportng.HTMLReporter"/>
        <listener class-name="org.uncommons.reportng.JUnitXMLReporter"/>
    </listeners>
    <test name="MyTest">
        <classes>
            <class name="com.test.Test"/>
        </classes>
    </test>
</suite>
ReportNG default HTML Report Location
Understanding the location of the index.html report generated under the test-output folder in a TestNG project.

ReportNG Dashboard Overview
Detailed insights from the ReportNG dashboard, including test execution summary, step details, pass percentage, and environment information.

JUnit
JUnit is a foundational Java testing framework often used with Selenium. While it doesn’t offer advanced reporting out of the box, its XML output integrates smoothly with build tools like Maven or Gradle and can be extended with plugins to generate readable test reports for automation projects.
Features:
- XML Test Results: JUnit outputs test results in XML format, which can be parsed by various tools to generate human-readable reports.
- Integration with Build Tools: Seamlessly integrates with build tools like Ant, Maven, and Gradle to automate test execution and report generation.
- Customizable Reporting: Allows customization of test reports through plugins and configurations to meet specific project needs.
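Because the XML output is plain XML, it can be post-processed with standard tooling. The stdlib-only sketch below, with an illustrative class name and a made-up sample report, tallies passed, failed, and skipped test cases from a surefire-style file:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class JUnitResultSummary {

    // Parse a JUnit/surefire-style XML report and return {passed, failed, skipped}.
    static int[] summarize(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList cases = doc.getElementsByTagName("testcase");
            int failed = 0, skipped = 0;
            for (int i = 0; i < cases.getLength(); i++) {
                Element tc = (Element) cases.item(i);
                // A <failure> or <error> child marks a failed case; <skipped> marks a skip
                if (tc.getElementsByTagName("failure").getLength() > 0
                        || tc.getElementsByTagName("error").getLength() > 0) {
                    failed++;
                } else if (tc.getElementsByTagName("skipped").getLength() > 0) {
                    skipped++;
                }
            }
            return new int[]{cases.getLength() - failed - skipped, failed, skipped};
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String sample = "<testsuite><testcase name='a'/>"
                + "<testcase name='b'><failure message='boom'/></testcase>"
                + "<testcase name='c'><skipped/></testcase></testsuite>";
        int[] s = summarize(sample);
        System.out.println("passed=" + s[0] + " failed=" + s[1] + " skipped=" + s[2]);
        // prints "passed=1 failed=1 skipped=1"
    }
}
```

A CI job can run a check like this against test-output to gate a build on failure counts.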
Benefits:
- Early Bug Detection: By enabling unit testing, JUnit helps identify and fix bugs early in the development cycle.
- Code Refactoring Support: It allows developers to refactor code confidently, ensuring that existing functionality remains intact through continuous testing.
- Enhanced Productivity: JUnit’s simplicity and effectiveness contribute to increased developer productivity and improved code quality.
Integration Steps
Add JUnit 5 Dependency: Ensure your project includes the JUnit 5 library. For Maven, add the following to your pom.xml:
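For reference, a typical JUnit 5 entry looks like the following (the version number is illustrative; use the latest release):

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.9.2</version>
    <scope>test</scope>
</dependency>
```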

Write Test Methods: Use JUnit 5 annotations like @Test, @ParameterizedTest, @BeforeEach, etc., to write your test methods.
package com.mechanitis.demo.junit5;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class ExampleTest {

    @Test
    void shouldShowSimpleAssertion() {
        Assertions.assertEquals(1, 1);
    }
}
Run Tests: Right-click on the test class or method and select Run ‘TestName’. Alternatively, use the green run icon in the gutter next to the test method.
View Test Results: After running the tests, IntelliJ IDEA displays the results in the Run window, showing passed, failed, and skipped tests with detailed information.

Log4j
Although not a reporting tool itself, Log4j complements Selenium test reporting by offering detailed, customizable logging. These logs can be embedded into test reports generated by other tools, making it easier to trace test execution flow, capture runtime errors, and debug effectively
Features of Log4j (in Simple Terms)
- Different Log Levels: Log4j allows you to categorize log messages by importance—like DEBUG, INFO, WARN, ERROR, and FATAL. This helps in filtering and focusing on specific types of messages.
- Flexible Configuration: You can set up Log4j using various file formats such as XML, JSON, YAML, or properties files. This flexibility makes it adaptable to different project needs.
- Multiple Output Options: Log4j can direct log messages to various destinations like the console, files, databases, or even remote servers. This is achieved through components called Appenders.
- Customizable Message Formats: You can define how your log messages look, making them easier to read and analyze.
- Real-Time Configuration Changes: Log4j allows you to change logging settings while the application is running, without needing a restart. This is useful for debugging live applications.
- Integration with Other Tools: Log4j works well with other Java frameworks and libraries, enhancing its versatility.
Benefits of Using Log4j in Selenium Automation
- Improved Debugging: Detailed logs help identify and fix issues quickly during test execution.
- Easier Maintenance: Centralized logging makes it simpler to manage and update logging practices across your test suite.
- Scalability: Efficient logging supports large-scale test suites without significant performance overhead.
- Customizable Logging: You can tailor log outputs to include relevant information, aiding in better analysis and reporting.
- Seamless Integration: Works well with IntelliJ IDEA and other development tools, streamlining the development and testing process.
Step 1 − Create a Maven project and add the required dependencies (for example, Selenium and the Log4j API and Core artifacts) to the pom.xml file. Save the pom.xml with all the dependencies and update the Maven project.
Step 2 − Create a configuration file – log4j2.xml or log4j2.properties – where we will provide the settings. In our project, we created a file named log4j2.properties under the resources folder.

Step 3 − Create a test class where we will create an object of the Logger class and incorporate the log statements. Run the project and validate the results.
package Logs;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggingsInfo {

    // Logger instance for this class
    private static final Logger logger = LogManager.getLogger(LoggingsInfo.class);

    public static void main(String[] args) {
        System.out.println("Execution started");
        // Log statements at different levels, written to the console and log file
        logger.debug("This is a DEBUG message");
        logger.info("This is an INFO message");
        logger.warn("This is a WARN message");
        logger.error("This is an ERROR message");
    }
}
Step 4 − Configurations in the log4j2.properties file.
name=PropertiesConfig
property.filename = logs
appenders = console, file
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
appender.file.type = File
appender.file.name = LOGFILE
appender.file.fileName=${filename}/LogsGenerated.log
appender.file.layout.type=PatternLayout
appender.file.layout.pattern=[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
loggers=file
logger.file.name=Logs
logger.file.level=debug
logger.file.appenderRefs = file
logger.file.appenderRef.file.ref = LOGFILE
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
Along with the console output, a file LogsGenerated.log gets generated within the logs folder of the project, containing the same logging information.

Chain Test Report
ChainTest Report is a modern test reporting solution that enhances visibility and tracking of Selenium automation results. With real-time analytics, historical trend storage, and easy integration, it helps teams monitor test executions efficiently while reducing the overhead of manual report generation.
Features:
- Real-Time Analytics: View test results as they happen, allowing for immediate insights and quicker issue resolution.
- Historical Data Storage: Maintain records of past test runs to analyze trends and improve testing strategies over time.
- Detailed Reports: Generate comprehensive and easy-to-understand reports that include charts, logs, and screenshots.
- Easy Integration: Seamlessly integrate with existing Selenium projects and popular testing tools like TestNG, JUnit, and Cucumber.
- User-Friendly Interface: Provides an intuitive dashboard that simplifies the monitoring and analysis of test executions.
Benefits:
- Improved Test Visibility: Gain clear insights into test outcomes, facilitating better decision-making.
- Enhanced Collaboration: Share understandable reports with both technical and non-technical stakeholders.
- Faster Issue Identification: Real-time analytics help in promptly detecting and addressing test failures.
- Historical Analysis: Track and compare test results over time to identify patterns and areas for improvement.
- Simplified Reporting Process: Automate the generation of detailed reports, reducing manual effort and potential errors.
For more details on this report, please refer to our ChainTest Report blog post.
Conclusion
Choosing the right reporting tool in Selenium automation depends on your project’s specific needs—whether it’s simplicity, advanced visualization, real-time insights, or CI/CD integration. Tools like TestNG, Extent Reports, Allure, ReportNG, JUnit, and Log4j each bring unique strengths. For example, TestNG and ReportNG offer quick setups and default HTML outputs, while Allure and Extent provide visually rich, interactive dashboards. If detailed logging and debugging are priorities, integrating Log4j can add immense value. Ultimately, the ideal solution is one that aligns with your team’s workflow, scalability requirements, and reporting preferences—ensuring clarity, collaboration, and quality in every test cycle.
Frequently Asked Questions
- What are the advantages of using Extent Reports over others?
Extent Reports is noted for its stylish, modern dashboards and customizable visuals, including pie charts. It offers great ease of use. The platform has features like detailed analytics and lets users export in multiple formats. This helps teams show complex test results easily and keep track of their progress without any trouble.
- How do JUnit XML Reports help in analyzing test outcomes?
JUnit XML Reports make test analysis easier by changing Selenium execution data into organized XML formats. These reports show test statuses clearly, helping you understand failures, trends, and problems. They work well with plugins, making it simple to improve visibility for big projects.
- What is the default reporting tool in Selenium?
Selenium does not have a built-in reporting tool. Tools like TestNG or JUnit are typically used alongside it to generate reports.
- What is ChainTest Report and how is it beneficial?
ChainTest Report is a modern test reporting tool offering real-time analytics, detailed insights, and historical trend analysis to boost test monitoring and team collaboration.
- How does Allure differ from other reporting tools?
Allure provides interactive, step-level test reports with rich visuals and attachments, supporting multiple languages and integration with CI/CD pipelines.
by Chris Adams | Apr 16, 2025 | Automation Testing, Blog, Latest Post |
In the ever-evolving world of software development, efficiency and speed are key. As projects grow in complexity and deadlines tighten, AI-powered tools have become vital for streamlining workflows and improving productivity. One such game-changing tool is JetBrains AI Assistant, a powerful feature now built directly into popular JetBrains IDEs like IntelliJ IDEA, PyCharm, and WebStorm. JetBrains AI brings intelligent support to both developers and testers by assisting with code generation, refactoring, and test automation. It helps developers write cleaner code faster and aids testers in quickly understanding test logic, creating new test cases, and maintaining robust test suites.
Whether you’re a seasoned developer or an automation tester, JetBrains AI acts like a smart coding companion, making complex tasks simpler, reducing manual effort, and enhancing overall code quality. In this blog, we’ll dive into how JetBrains AI works and demonstrate its capabilities through real-world examples.
What is JetBrains AI Assistant?
JetBrains AI Assistant is an intelligent coding assistant embedded within your JetBrains IDE. Powered by large language models (LLMs), it’s designed to help techies—whether you’re into development, testing, or automation—handle everyday coding tasks more efficiently.
Here’s what it can do:
- Generate new code or test scripts from natural language prompts
- Provide smart in-line suggestions and auto-completions while you code
- Explain unfamiliar code in plain English—great for understanding legacy code or complex logic
- Refactor existing code or tests to follow best practices and improve readability
- Generate documentation and commit messages automatically
Whether you’re kicking off a new project or maintaining a long-standing codebase, JetBrains AI helps techies work faster, cleaner, and smarter—no matter your role. Let’s see how to get started with JetBrains AI.
Installing JetBrains AI Plugin in IntelliJ IDEA
Requirements
- IntelliJ IDEA 2023.2 or later (Community or Ultimate)
- JetBrains Account (Free to sign up)
1) Click the AI Assistant icon in the top-left corner of IntelliJ IDEA.

2) Click Install Plugin.

3) Once the plugin is installed, log in or register with your JetBrains account.

4) Once logged in, you’ll see an option to Start Free Trial to activate JetBrains AI features.

5) This is the section where you can enter and submit your prompts.

Let’s Start with a Simple Java Program
Now that we’ve explored what JetBrains AI Assistant can do, let’s see it in action with a hands-on example. To demonstrate its capabilities, we’ll walk through a basic Java calculator project. This example highlights how the AI Assistant can help generate code, complete logic, explain functionality, refactor structure, document classes, and even suggest commit messages—all within a real coding scenario.
Whether you’re a developer writing core features or a tester creating test logic, this simple program is a great starting point to understand how JetBrains AI can enhance your workflow.
1. Code Generation
Prompt: “Generate a Java class that implements a basic calculator with add, subtract, multiply, and divide methods.”
JetBrains AI can instantly create a boilerplate Calculator class for you. Here’s a sample result:
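A boilerplate result might look roughly like the following (a sketch of typical generated code, not the assistant’s exact output):

```java
// Hypothetical example of what JetBrains AI might generate for the prompt above.
public class Calculator {

    // Returns the sum of two integers
    public int add(int a, int b) {
        return a + b;
    }

    // Returns the difference of two integers
    public int subtract(int a, int b) {
        return a - b;
    }

    // Returns the product of two integers
    public int multiply(int a, int b) {
        return a * b;
    }

    // Returns the integer quotient, guarding against division by zero
    public int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("Division by zero is not allowed");
        }
        return a / b;
    }
}
```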

2. Code Completion
While typing inside a method, JetBrains AI predicts what you intend to write next. For example, when you start writing the add method, it might auto-suggest the return statement based on the method name and parameters.
Prompt: Start writing public int add(int a, int b) { and wait for the AI to auto-complete.
Enter this in the AI Assistant chat. The AI will generate updated code in which a and b are read from console input.
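That console-input version might look something like this (a hypothetical sketch; a fixed string stands in for console input so the example is self-contained):

```java
import java.util.Scanner;

public class CalculatorApp {

    // The method the AI auto-completed from its signature
    public static int add(int a, int b) {
        return a + b;
    }

    // Reads two integers from the given input source and returns their sum
    public static int addFromInput(Scanner scanner) {
        int a = scanner.nextInt();
        int b = scanner.nextInt();
        return add(a, b);
    }

    public static void main(String[] args) {
        // In the IDE this would be new Scanner(System.in); a fixed string
        // simulates the user typing "3 4" at the console
        Scanner scanner = new Scanner("3 4");
        System.out.println("Sum: " + addFromInput(scanner));
    }
}
```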

3. Code Explanation
You can ask JetBrains AI to explain any method or class.
Prompt: “Explain what the divide method does.”

Output:
This method takes two integers and returns the result of dividing the first by the second. It also checks if the divisor is zero to prevent a runtime exception.
Perfect for junior developers or anyone trying to understand unfamiliar code.
4. Refactoring Suggestions
JetBrains AI can suggest improvements if your code is too verbose or doesn’t follow best practices.
Prompt: “Refactor this Calculator class to make it more modular.”
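One common direction for such a refactor is to register each operation as a named strategy, so adding a new operation means adding a single entry. A hedged sketch (the class name ModularCalculator and the exact structure are illustrative, not the assistant’s actual suggestion):

```java
import java.util.Map;
import java.util.function.IntBinaryOperator;

// Each operation is a small strategy looked up by name,
// keeping the calculator open for extension without new branches.
public class ModularCalculator {

    private static final Map<String, IntBinaryOperator> OPERATIONS =
        Map.<String, IntBinaryOperator>of(
            "add", (a, b) -> a + b,
            "subtract", (a, b) -> a - b,
            "multiply", (a, b) -> a * b,
            "divide", (a, b) -> {
                if (b == 0) {
                    throw new IllegalArgumentException("Division by zero");
                }
                return a / b;
            }
        );

    public int calculate(String operation, int a, int b) {
        IntBinaryOperator op = OPERATIONS.get(operation);
        if (op == null) {
            throw new IllegalArgumentException("Unknown operation: " + operation);
        }
        return op.applyAsInt(a, b);
    }
}
```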

5. Documentation Generation
Adding documentation is often the most skipped part of development, but not anymore.
Prompt: “Add JavaDoc comments for this Calculator class.”
JetBrains AI will generate JavaDoc for each method, helping improve code readability and aligning with project documentation standards.
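For example, the divide method might come back documented roughly like this (illustrative output, not the assistant’s exact wording):

```java
public class DocumentedCalculator {

    /**
     * Divides the first operand by the second.
     *
     * @param a the dividend
     * @param b the divisor; must not be zero
     * @return the integer quotient of a divided by b
     * @throws IllegalArgumentException if b is zero
     */
    public int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("Division by zero is not allowed");
        }
        return a / b;
    }
}
```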

6. Commit Message Suggestions
After writing or updating your Calculator class, ask:
Prompt: “Generate a commit message for adding the Calculator class with basic operations.”

Conclusion
JetBrains AI Assistant is not just another plugin; it’s a smart programming companion. From writing your first method to generating JavaDoc and commit messages, it makes the development process smoother, smarter, and more efficient. As we saw in this blog, even a basic Java calculator app becomes a perfect canvas to showcase AI’s potential in coding. If you’re a developer looking to boost productivity, improve code quality, and reduce burnout, JetBrains AI is a game-changer.
Frequently Asked Questions
- What makes JetBrains AI unique in tech solutions?
JetBrains AI stands out because of its flexible way of using AI in tech solutions. It gives developers the choice to use different AI models. This includes options that are in the cloud or hosted locally. By offering these choices, it encourages new ideas and meets different development needs. Its adaptability, along with strong features, makes JetBrains AI a leader in AI-driven tech solutions.
- How does JetBrains AI impact the productivity of developers?
JetBrains AI helps developers work better by making their tasks easier and automating things they do often. This means coding can be done quicker, mistakes are cut down, and project timelines improve. With smart help at every step, JetBrains AI lets developers focus on more important work, which boosts their overall efficiency.
- Can JetBrains AI integrate with existing tech infrastructures?
JetBrains AI is made to fit well with the tech systems you already have. It easily works with popular JetBrains IDEs. It also supports different programming languages and frameworks. This makes it a flexible tool that can go into your current development setups without any problems.
- What future developments are expected in JetBrains AI?
Future updates in JetBrains AI will probably aim for new improvements in AI models. These improvements may include special models designed for certain coding jobs or fields. We can also expect better connections with other developer tools and platforms. This will help make JetBrains AI a key player in AI-driven development.
- How to get started with JetBrains AI for tech solutions?
Getting started with JetBrains AI is easy. You can find detailed guides and helpful documents on the JetBrains website. There is also a strong community of developers ready to help you with any questions or issues. This support makes it easier to start using JetBrains AI.
by Anika Chakraborty | Apr 7, 2025 | Automation Testing, Blog, Latest Post |
Automation testing is essential in today’s software development. Most people know about tools like Selenium, Cypress, and Postman. But many don’t realize that Spring Boot can also be really useful for testing. Spring Boot, a popular Java framework, offers great features that testers can use for automating API tests, backend validations, setting up test data, and more. Its integration with the Spring ecosystem makes automation setups faster and more reliable. It also works smoothly with other testing tools like Cucumber and Selenium, making it a great choice for building complete automation frameworks.
This blog will help testers understand how they can leverage Spring Boot for automation testing and why it’s not just a developer’s tool anymore!
Key Features of Spring Boot that Enhance Automation
One of the biggest advantages of using Spring Boot for automation testing is its auto-configuration feature. Instead of dealing with complex XML files, Spring Boot figures out most of the setup automatically based on the libraries you include. This saves a lot of time when starting a new test project.
Spring Boot also makes it easy to build standalone applications. It bundles everything you need into a single JAR file, so you don’t have to worry about setting up external servers or containers. This makes running and sharing your tests much simpler.
Another helpful feature is the ability to create custom configuration classes. With annotations and Java-based settings, you can easily change how your application behaves during tests—like setting up test databases or mocking external services.
Spring Boot simplifies Java-based application development and comes with built-in support for testing. Benefits include:
- Built-in testing libraries (JUnit, Mockito, AssertJ, etc.)
- Easy integration with CI/CD pipelines
- Dependency injection simplifies test configuration
- Embedded server for end-to-end tests
Types of Tests Testers Can Do with Spring Boot
S. No | Test Type | Purpose | Tools Used |
1 | Unit Testing | Test individual methods or classes | JUnit 5, Mockito |
2 | Integration Testing | Test multiple components working together | @SpringBootTest, @DataJpaTest |
3 | Web Layer Testing | Test controllers, filters, HTTP endpoints | MockMvc, WebTestClient |
4 | End-to-End Testing | Test the app in a running state | TestRestTemplate, Selenium (optional) |
Why Should Testers Use Spring Boot for Automation Testing?
S. No | Benefits of using Spring Boot in Test Automation | How it Helps Testers |
1 | Easy API Integration | Directly test REST APIs within the Spring ecosystem |
2 | Embedded Test Environment | No need for external servers for testing |
3 | Dependency Injection | Manage and reuse test components easily |
4 | Database Support | Automated test data setup using JPA/Hibernate |
5 | Profiles & Configurations | Run tests in different environments effortlessly |
6 | Built-in Test Libraries | JUnit, TestNG, Mockito, RestTemplate, WebTestClient ready to use |
7 | Support for Mocking | Mock external services easily using MockMvc or WireMock |
Step-by-Step Setup: Spring Boot Automation Testing Environment
Step 1: Install Prerequisites
Before you begin, install the following tools on your system:
- Java Development Kit (JDK)
- Maven (build tool)
- IDE (Integrated Development Environment): IntelliJ IDEA or Eclipse for coding and managing the project
- Git
Step 2: Configure pom.xml with Required Dependencies
Edit the pom.xml to add the necessary dependencies for testing.
Here’s an example:
<dependencies>
<!-- Spring Boot Test -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Selenium -->
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>4.18.1</version>
<scope>test</scope>
</dependency>
<!-- RestAssured -->
<dependency>
<groupId>io.rest-assured</groupId>
<artifactId>rest-assured</artifactId>
<version>5.4.0</version>
<scope>test</scope>
</dependency>
<!-- Cucumber -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>7.15.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-spring</artifactId>
<version>7.15.0</version>
<scope>test</scope>
</dependency>
</dependencies>
Run mvn clean install to download and set up all dependencies.
Step 3: Organize Your Project Structure
Create the following basic folder structure:
src
├── main
│ └── java
│ └── com.example.demo (your main app code)
├── test
│ └── java
│ └── com.example.demo (your test code)
Step 4: Create Sample Test Classes
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
public class SampleUnitTest {

    @Test
    void sampleTest() {
        Assertions.assertTrue(true);
    }
}
1. API Automation Testing with Spring Boot
Goal: Automate API testing like GET, POST, PUT, DELETE requests.
In Spring Boot, TestRestTemplate is commonly used for API calls in tests.
Example: Test GET API for fetching user details
User API Endpoint:
GET /users/1
Sample Response:
Test Class with Code:
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class UserApiTest {

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    void testGetUserById() {
        ResponseEntity<User> response = restTemplate.getForEntity("/users/1", User.class);
        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertEquals("John Doe", response.getBody().getName());
    }
}
Explanation:
S. No | Line | Meaning |
1 | @SpringBootTest | Loads full Spring context for testing |
2 | TestRestTemplate | Used to call REST API inside test |
3 | getForEntity | Performs GET call |
4 | Assertions | Validates response status and response body |
2. Test Data Setup using Spring Data JPA
In automation, managing test data is crucial. Spring Boot allows you to set up data directly in the database before running your tests.
Example: Insert User Data Before Test Runs
import static org.junit.jupiter.api.Assertions.assertFalse;

import java.util.List;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class UserDataSetupTest {

    @Autowired
    private UserRepository userRepository;

    @BeforeEach
    void insertTestData() {
        userRepository.save(new User("John Doe", "[email protected]"));
    }

    @Test
    void testUserExists() {
        List<User> users = userRepository.findAll();
        assertFalse(users.isEmpty());
    }
}
Explanation:
- @BeforeEach → Runs before every test.
- userRepository.save() → Inserts data into DB.
- No need for SQL scripts — use Java objects directly!
3. Mocking External APIs using MockMvc
MockMvc is a powerful tool in Spring Boot to test controllers without starting the full server.
Example: Mock POST API for Creating User
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class UserControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void testCreateUser() throws Exception {
        mockMvc.perform(post("/users")
                .content("{\"name\": \"John\", \"email\": \"[email protected]\"}")
                .contentType(MediaType.APPLICATION_JSON))
            .andExpect(status().isCreated());
    }
}
Explanation:
S. No | MockMvc Method | Purpose |
1 | perform(post(…)) | Simulates a POST API call |
2 | content(…) | Sends JSON body |
3 | contentType(…) | Tells server it’s JSON |
4 | andExpect(…) | Validates HTTP Status |
4. End-to-End Integration Testing (API + DB)
Example: Validate API Response + DB Update
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class UserIntegrationTest {

    @Autowired
    private TestRestTemplate restTemplate;
    @Autowired
    private UserRepository userRepository;

    @Test
    void testAddUserAndValidateDB() {
        User newUser = new User("Alex", "[email protected]");
        ResponseEntity<User> response = restTemplate.postForEntity("/users", newUser, User.class);
        assertEquals(HttpStatus.CREATED, response.getStatusCode());

        List<User> users = userRepository.findAll();
        assertTrue(users.stream().anyMatch(u -> u.getName().equals("Alex")));
    }
}
Explanation:
- Calls POST API to add user.
- Validates response code.
- Checks in DB if user actually inserted.
5. Mock External Services using WireMock
Useful for simulating 3rd party API responses.
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.cloud.contract.wiremock.AutoConfigureWireMock;
import org.springframework.http.ResponseEntity;

@SpringBootTest
@AutoConfigureWireMock(port = 8089)
class ExternalApiMockTest {

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    void testExternalApiMocking() {
        stubFor(get(urlEqualTo("/external-api"))
            .willReturn(aResponse().withStatus(200).withBody("Success")));

        ResponseEntity<String> response = restTemplate.getForEntity("http://localhost:8089/external-api", String.class);
        assertEquals("Success", response.getBody());
    }
}
Best Practices for Testers using Spring Boot
- Follow clean code practices.
- Use Profiles for different environments (dev, test, prod).
- Keep test configuration separate.
- Reuse components via dependency injection.
- Use Mocking wherever possible.
- Add proper logging for better debugging.
- Integrate with CI/CD for automated test execution.
Conclusion
Spring Boot is no longer limited to backend development — it has emerged as a powerful tool for testers, especially for API automation, backend testing, and test data management. Testers who learn to leverage Spring Boot can build scalable, maintainable, and robust automation frameworks with ease. By combining Spring Boot with other testing tools and frameworks, testers can elevate their automation skills beyond UI testing and become full-fledged automation experts. At Codoid, we’ve adopted Spring Boot in our testing toolkit to streamline API automation and improve efficiency across projects.
Frequently Asked Questions
- Can Spring Boot replace tools like Selenium or Postman?
No, Spring Boot is not a replacement but a complement. While Selenium handles UI testing and Postman is great for manual API testing, Spring Boot is best used to build automation frameworks for APIs, microservices, and backend systems.
- Why should testers learn Spring Boot?
Learning Spring Boot enables testers to go beyond UI testing, giving them the ability to handle complex scenarios like test data setup, mocking, integration testing, and CI/CD-friendly test execution.
- How does Spring Boot support API automation?
Spring Boot integrates well with tools like RestAssured, MockMvc, and WireMock, allowing testers to automate API requests, mock external services, and validate backend logic efficiently.
- Is Spring Boot CI/CD friendly for test automation?
Absolutely. Spring Boot projects are easy to integrate into CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI. Tests can be run as part of the build process with reports generated automatically.
by Rajesh K | Apr 3, 2025 | Automation Testing, Blog, Latest Post |
Playwright is a fast and modern testing framework known for its efficiency and automation capabilities. It is great for web testing, including Playwright Mobile Automation, which provides built-in support for emulating real devices like smartphones and tablets. Features like custom viewports, user agent simulation, touch interactions, and network throttling help create realistic mobile testing environments without extra setup. Unlike Selenium and Appium, which rely on third-party tools, Playwright offers native mobile emulation and faster execution, making it a strong choice for testing web applications on mobile browsers. However, Playwright does not support native app testing for Android or iOS, as it focuses only on web browsers and web views.

In this blog, the setup process for mobile web automation in Playwright will be explained in detail, covering installation, mobile device emulation, custom viewports, and real-device execution.
Before proceeding with mobile web automation, it is essential to ensure that Playwright is properly installed on your machine. In this section, a step-by-step guide will be provided to help set up Playwright along with its dependencies. The installation process includes the following steps:
Setting Up Playwright
Before starting with mobile web automation, ensure that you have Node.js installed on your system. Playwright requires Node.js to run JavaScript-based automation scripts.
1. Verify Node.js Installation
To check if Node.js is installed, open a terminal or command prompt and run:
node -v
If Node.js is installed, this command will return the installed version. If not, download and install the latest version from the official Node.js website.
2. Install Playwright and Its Dependencies
Once Node.js is set up, install Playwright using npm (Node Package Manager) with the following commands:
npm install @playwright/test
npx playwright install
- The first command installs the Playwright testing framework.
- The second command downloads and installs the required browser binaries, including Chromium, Firefox, and WebKit, to enable cross-browser testing.
3. Verify Playwright Installation
To ensure that Playwright is installed correctly, you can check its version by running:
npx playwright --version
This will return the installed Playwright version, confirming a successful setup.
4. Initialize a Playwright Test Project (Optional)
If you plan to use Playwright’s built-in test framework, initialize a test project with:
npm init playwright@latest
This sets up a basic folder structure with example tests, Playwright configurations, and dependencies.
Once Playwright is installed and configured, you are ready to start automating mobile web applications. The next step is configuring the test environment for mobile emulation.
Emulating Mobile Devices
Playwright provides built-in mobile device emulation, enabling you to test web applications on various popular devices such as Pixel 5, iPhone 12, and Samsung Galaxy S20. This feature ensures that your application behaves consistently across different screen sizes, resolutions, and touch interactions, making it an essential tool for responsive web testing.
1. Understanding Mobile Device Emulation in Playwright
Playwright’s device emulation is powered by predefined device profiles, which include settings such as:
- Viewport size (width and height)
- User agent string (to simulate mobile browsers)
- Touch support
- Device scale factor
- Network conditions (optional)
These configurations allow you to mimic real mobile devices without requiring an actual physical device.
2. Example Code for Emulating a Mobile Device
Here’s an example script that demonstrates how to use Playwright’s mobile emulation with the Pixel 5 device:
const { test, expect, devices } = require('@playwright/test');
// Apply Pixel 5 emulation settings
test.use({ ...devices['Pixel 5'] });
test('Verify page title on mobile', async ({ page }) => {
// Navigate to the target website
await page.goto('https://playwright.dev/');
// Simulate a short wait time for page load
await page.waitForTimeout(2000);
// Capture a screenshot of the mobile view
await page.screenshot({ path: 'pixel5.png' });
// Validate the page title to ensure correct loading
await expect(page).toHaveTitle("Fast and reliable end-to-end testing for modern web apps | Playwright");
});
3. How This Script Works
- It imports test, expect, and devices from Playwright.
- The test.use({ ...devices['Pixel 5'] }) method applies the Pixel 5 emulation settings.
- The script navigates to the Playwright website.
- It waits for 2 seconds, simulating real user behavior.
- A screenshot is captured to visually verify the UI appearance on the Pixel 5 emulation.
- The script asserts the page title, ensuring that the correct page is loaded.
4. Running the Script
Save this script in a test file (e.g., mobile-test.spec.js) and execute it using the following command:
npx playwright test mobile-test.spec.js
If Playwright is set up correctly, the test will run in emulation mode and generate a screenshot named pixel5.png in your project folder.

5. Testing on Other Mobile Devices
To test on different devices, simply change the emulation settings:
test.use({ ...devices['iPhone 12'] }); // Emulates iPhone 12
test.use({ ...devices['Samsung Galaxy S20'] }); // Emulates Samsung Galaxy S20
Playwright includes a wide range of device profiles; the full list is exposed through the devices object exported by the playwright package (for example, console.log(Object.keys(devices)) in a script).
Using Custom Mobile Viewports
Playwright provides built-in mobile device emulation, but sometimes your test may require a device that is not available in Playwright’s predefined list. In such cases, you can manually define a custom viewport, user agent, and touch capabilities to accurately simulate the target device.
1. Why Use Custom Mobile Viewports?
- Some new or less common mobile devices may not be available in Playwright’s devices list.
- Custom viewports allow testing on specific screen resolutions and device configurations.
- They provide flexibility when testing progressive web apps (PWAs) or applications with unique viewport breakpoints.
2. Example Code for Custom Viewport
The following Playwright script manually configures a Samsung Galaxy S10 viewport and device properties:
const { test, expect } = require('@playwright/test');
test.use({
viewport: { width: 414, height: 896 }, // Samsung Galaxy S10 resolution
userAgent: 'Mozilla/5.0 (Linux; Android 10; SM-G973F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Mobile Safari/537.36',
isMobile: true, // Enables mobile-specific behaviors
hasTouch: true // Simulates touch screen interactions
});
test('Open page with custom mobile resolution', async ({ page }) => {
// Navigate to the target webpage
await page.goto('https://playwright.dev/');
// Simulate real-world waiting behavior
await page.waitForTimeout(2000);
// Capture a screenshot of the webpage
await page.screenshot({ path: 'android_custom.png' });
// Validate that the page title is correct
await expect(page).toHaveTitle("Fast and reliable end-to-end testing for modern web apps | Playwright");
});
3. How This Script Works
- viewport: { width: 414, height: 896 } → Sets the screen size to Samsung Galaxy S10 resolution.
- userAgent: 'Mozilla/5.0 (Linux; Android 10; SM-G973F)...' → Spoofs the browser user agent to mimic a real Galaxy S10 browser.
- isMobile: true → Enables mobile-specific browser behaviors, such as dynamic viewport adjustments.
- hasTouch: true → Simulates a touchscreen, allowing for swipe and tap interactions.
- The test navigates to Playwright’s website, waits for 2 seconds, takes a screenshot, and verifies the page title.
4. Running the Test
To execute this test, save it in a file (e.g., custom-viewport.spec.js) and run:
npx playwright test custom-viewport.spec.js
After execution, a screenshot named android_custom.png will be saved in your project folder.

5. Testing Other Custom Viewports
You can modify the script to test different resolutions by changing the viewport size and user agent.
Example: Custom iPad Pro 12.9 Viewport
test.use({
viewport: { width: 1024, height: 1366 },
userAgent: 'Mozilla/5.0 (iPad; CPU OS 14_6 like Mac OS X) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1.1 Mobile/15E148 Safari/537.36',
isMobile: false, // iPads are often treated as desktops
hasTouch: true
});
Example: Low-End Android Device (320×480, Old Android Browser)
test.use({
viewport: { width: 320, height: 480 },
userAgent: 'Mozilla/5.0 (Linux; U; Android 4.2.2; en-us; GT-S7562) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30',
isMobile: true,
hasTouch: true
});
Real Device Setup & Execution
Playwright enables automation testing on real Android devices and emulators using Android Debug Bridge (ADB). This capability allows testers to validate their applications in actual mobile environments, ensuring accurate real-world behavior.
Unlike Android, Playwright does not currently support real-device testing on iOS due to Apple’s restrictions on third-party browser automation. Safari automation on iOS requires WebDriver-based solutions like Appium or Apple’s XCUITest, as Apple does not provide a direct automation API similar to ADB for Android. However, Playwright’s team is actively working on expanding iOS support through WebKit debugging, but full-fledged real-device automation is still in the early stages.
Below is a step-by-step guide to setting up and executing Playwright tests on an Android device.
Preconditions: Setting Up Your Android Device for Testing
1. Install Android Command-Line Tools
- Download and install the Android SDK Command-Line Tools from the official Android Developer website.
- Set up the ANDROID_HOME environment variable and add platform-tools to the system PATH.
2. Enable USB Debugging on Your Android Device
- Go to Settings > About Phone > Tap “Build Number” 7 times to enable Developer Mode.
- Open Developer Options and enable USB Debugging.
- If using a real device, connect it via USB and authorize debugging when prompted.
3. Ensure ADB is Installed & Working
Run the following command to verify that ADB (Android Debug Bridge) detects the connected device:
adb devices
Running Playwright Tests on a Real Android Device
Sample Playwright Script for Android Device Automation
const { _android: android } = require('playwright');
const { expect } = require('@playwright/test');
(async () => {
// Get the list of connected Android devices
const devices = await android.devices();
if (devices.length === 0) {
console.log("No Android devices found!");
return;
}
// Connect to the first available Android device
const device = devices[0];
console.log(`Connected to: ${device.model()} (Serial: ${device.serial()})`);
// Launch the Chrome browser on the Android device
const context = await device.launchBrowser();
console.log('Chrome browser launched!');
// Open a new browser page
const page = await context.newPage();
console.log('New page opened!');
// Navigate to a website
await page.goto('https://webkit.org/');
console.log('Page loaded!');
// Print the current URL
console.log(await page.evaluate(() => window.location.href));
// Verify if an element is visible
await expect(page.locator("//h1[contains(text(),'engine')]")).toBeVisible();
console.log('Element found!');
// Capture a screenshot of the page
await page.screenshot({ path: 'page.png' });
console.log('Screenshot taken!');
// Close the browser session
await context.close();
// Disconnect from the device
await device.close();
})();
How the Script Works
- Retrieves connected Android devices using android.devices().
- Connects to the first available Android device.
- Launches Chrome on the Android device and opens a new page.
- Navigates to https://webkit.org/ and verifies that a page element (e.g., h1 containing “engine”) is visible.
- Takes a screenshot and saves it as page.png.
- Closes the browser and disconnects from the device.
Executing the Playwright Android Test
To run the test, save the script as android-test.js and execute it using:
node android-test.js
If the setup is correct, the test will launch Chrome on the Android device, navigate to the webpage, validate elements, and capture a screenshot.
Screenshot saved from real device

Frequently Asked Questions
- What browsers does Playwright support for mobile automation?
Playwright supports Chromium, Firefox, and WebKit, allowing comprehensive mobile web testing across different browsers.
- Can Playwright test mobile web applications in different network conditions?
Yes, Playwright allows network throttling to simulate slow connections like 3G, 4G, or offline mode, helping testers verify web application performance under various conditions.
- Is Playwright the best tool for mobile web automation?
Playwright is one of the best tools for mobile web testing due to its speed, efficiency, and cross-browser support. However, if you need to test native or hybrid mobile apps, Appium or native testing frameworks are better suited.
- Does Playwright support real device testing for mobile automation?
Playwright supports real device testing on Android using ADB, but it does not support native iOS testing due to Apple’s restrictions. For iOS testing, alternative solutions like Appium or XCUITest are required.
- Does Playwright support mobile geolocation testing?
Yes, Playwright allows testers to simulate GPS locations to verify how web applications behave based on different geolocations. This is useful for testing location-based services like maps and delivery apps.
- Can Playwright be integrated with CI/CD pipelines for mobile automation?
Yes, Playwright supports CI/CD integration with tools like Jenkins, GitHub Actions, GitLab CI, and Azure DevOps, allowing automated mobile web tests to run on every code deployment.
by Rajesh K | Apr 2, 2025 | Automation Testing, Blog, Latest Post |
Ensuring high-quality software requires strong testing processes, and test automation plays a central role. Automation test coverage metrics show how much of a software application is exercised by automated tests, a measurement that matters to any development team investing in test automation. When teams measure and analyze automation test coverage, they gain insight into how well their testing efforts are working, which helps them make informed decisions to improve software quality.
Understanding Automation Test Coverage
Automation test coverage shows the percentage of a software application’s code that is exercised by automated tests. It gives a clear idea of how well these tests check the software’s functionality, performance, and reliability. Achieving high automation test coverage is important: it helps cut testing time and costs, leading to a stable, high-quality product.
Still, it’s key to remember that automation test coverage alone does not define software quality. While having high coverage is good, it’s vital not to sacrifice test quality. You need a well-designed and meaningful test suite of automated tests that focus on the important parts of the application.

Key Metrics to Measure Automation Test Coverage
Measuring automation test coverage is essential for making sure your testing efforts are effective. These metrics reveal how complete your automated tests are and help you find areas that need improvement. By tracking and analyzing them closely, QA teams can refine their automation strategies, leading to higher test coverage and better software quality.
1. Automatable Test Cases
This metric measures the percentage of test cases that can be automated relative to the total number of test cases in a suite. It plays a crucial role in prioritizing automation efforts and identifying scenarios that must remain manual due to complexity. By understanding the proportion of automatable test cases, teams can build a balanced strategy that integrates manual and automated testing, and can spot test cases that are poor candidates for automation, improving resource allocation. Some scenarios, such as visual testing, CAPTCHA validation, complex hardware interactions, and dynamically changing UI elements, may be difficult or impractical to automate and require manual intervention to ensure comprehensive test coverage.
The formula to calculate test automation coverage for automatable test cases is:
Automatable Test Cases (%) = (Automatable Test Cases ÷ Total Test Cases) × 100
For example, if a project consists of 600 test cases, out of which 400 can be automated, the automatable test case coverage would be 66.67%.
A best practice for maximizing automation effectiveness is to prioritize test cases that are repetitive, time-consuming, and have a high business impact. By focusing on these, teams can enhance efficiency and ensure that automation efforts yield the best possible return on investment.
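As a quick sketch, the percentage formulas in this post all reduce to the same part-over-whole calculation. The helper below is illustrative (the function name and rounding are our own choices, not from any particular tool), shown here with the 600/400 example above:

```python
def ratio_pct(part: int, whole: int) -> float:
    """Generic coverage ratio expressed as a percentage, rounded to 2 places."""
    if whole <= 0:
        raise ValueError("whole must be positive")
    return round(part / whole * 100, 2)

# The example above: 400 automatable test cases out of 600 total.
print(ratio_pct(400, 600))  # 66.67
```

The same helper works for code coverage, requirement coverage, and the other ratio metrics described below.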
2. Automation Pass Rate
Automation pass rate measures the percentage of automated test cases that pass during execution. It is a straightforward but key metric for assessing the reliability and stability of automated test scripts. A consistently high failure rate may indicate flaky tests, unstable automation logic, or environmental issues. This metric also helps distinguish whether failures are caused by application defects or by problems within the test scripts themselves.
The formula to calculate automation pass rate is:
Automation Pass Rate (%) = (Passed Test Cases ÷ Executed Test Cases) × 100
For example, if a testing team executes 500 automated test cases and 450 of them pass successfully, the automation pass rate is:
(450 ÷ 500) × 100 = 90%
This means 90% of the automated tests ran successfully, while the remaining 10% either failed or were inconclusive. A low pass rate could indicate issues with automation scripts, environmental instability, or application defects that require further investigation.
A best practice to improve this metric is to analyze frequent failures and determine whether they stem from script issues, test environment instability, or genuine defects in the application.
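A minimal sketch of that analysis, using hypothetical result data matching the 450-of-500 example above: compute the pass rate, then break down the non-passing outcomes so frequent failures can be triaged.

```python
from collections import Counter

def pass_rate(results: list[str]) -> float:
    """Percentage of executed tests whose outcome is 'pass'."""
    passed = sum(1 for outcome in results if outcome == "pass")
    return round(passed / len(results) * 100, 2)

# Hypothetical run: 450 passes, 40 failures, 10 inconclusive out of 500.
results = ["pass"] * 450 + ["fail"] * 40 + ["inconclusive"] * 10
print(pass_rate(results))              # 90.0
print(Counter(results).most_common())  # outcome breakdown for triage
```

In practice the `results` list would come from your test runner's report (JUnit XML, Allure, etc.) rather than being built by hand.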
3. Automation Execution Time
Automation execution time measures the total duration required for automated test cases to run from start to finish. This metric is crucial for evaluating whether automation provides a time advantage over manual testing. Long execution times can delay deployments and impact release schedules, so it is essential to optimize test execution for efficiency. By analyzing automation execution time, teams can identify areas for improvement, such as implementing parallel execution or optimizing test scripts.
One way to improve automation execution time and increase test automation ROI is by using parallel execution, which allows multiple tests to run simultaneously, significantly reducing the total test duration. Additionally, optimizing test scripts by removing redundant steps and leveraging cloud-based test grids to execute tests on multiple devices and browsers can further enhance efficiency.
For example, if the original automation execution time is 4 hours and parallel testing reduces it to 1.5 hours, it demonstrates a significant improvement in test efficiency.
A best practice is to aim for an execution time that aligns with sprint cycles, ensuring that testing does not delay releases. By continuously refining automation strategies, teams can maximize the benefits of test automation while maintaining rapid and reliable software delivery.
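The parallel-execution gain is easy to demonstrate with a toy sketch. The snippet below simulates independent tests with a fixed sleep (a stand-in for real test work; real suites would use a runner feature such as pytest-xdist instead) and times a serial run against a 4-worker pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name: str) -> str:
    time.sleep(0.2)  # stand-in for real test work (network, UI, assertions)
    return f"{name}: pass"

tests = [f"test_{i}" for i in range(8)]

# Serial: tests run one after another, so durations add up (~1.6s here).
start = time.perf_counter()
serial_results = [run_test(t) for t in tests]
serial = time.perf_counter() - start

# Parallel: 4 workers process the same tests concurrently (~0.4s here).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(run_test, tests))
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```

Threads work here only because the simulated tests sleep (I/O-bound); CPU-bound work would need processes or separate runner instances.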
4. Code Coverage Metrics
Code coverage measures how much of the application’s codebase is tested through automation.
Key Code Coverage Metrics:
- Statement Coverage: Measures executed statements in the source code.
- Branch Coverage: Ensures all decision branches (if-else conditions) are tested.
- Function Coverage: Determines how many functions or methods are tested.
- Line Coverage: Ensures each line of code runs at least once.
- Path Coverage: Verifies different execution paths are tested.
Code Coverage (%) = (Covered Code Lines ÷ Total Code Lines) × 100
For example, if a project has 5,000 lines of code and tests execute 4,000 of them, the coverage is 80%.
Best Practice: Aim for 80%+ code coverage, but complement it with exploratory and usability testing.
5. Requirement Coverage
Requirement coverage ensures that automation tests align with business requirements and user stories, helping teams validate that all critical functionalities are tested. This metric is essential for assessing how well automated tests support business needs and whether any gaps exist in test coverage.
The formula to calculate requirement coverage is:
Requirement Coverage (%) = (Tested Requirements ÷ Total Number of Requirements) × 100
For example, if a project has 60 requirements and automation tests cover 50, the requirement coverage would be:
(50 ÷ 60) × 100 = 83.3%
A best practice for improving requirement coverage is to use test case traceability matrices to map test cases to requirements. This ensures that all business-critical functionalities are adequately tested and reduces the risk of missing key features during automation testing.
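A traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them. The sketch below uses entirely hypothetical IDs to show how coverage and gaps fall out of such a mapping:

```python
# Hypothetical traceability matrix: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test coverage yet
    "REQ-004": ["TC-04"],
}

tested = [req for req, cases in traceability.items() if cases]
coverage = round(len(tested) / len(traceability) * 100, 1)
untested = sorted(set(traceability) - set(tested))

print(f"Requirement coverage: {coverage}%")   # 75.0%
print(f"Untested requirements: {untested}")   # ['REQ-003']
```

The list of untested requirements is often more actionable than the percentage itself, since it names exactly what still needs automation.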
6. Test Execution Coverage Across Environments
This metric ensures that automated tests run across different browsers, devices, and operating system configurations. It plays a critical role in validating application stability across platforms and identifying cross-browser and cross-device compatibility issues. By tracking test execution coverage with a test management tool, teams can optimize their cloud-based test execution strategies and ensure a seamless user experience across environments.
The formula to calculate test execution coverage is:
Test Execution Coverage (%) = (Tests Run Across Different Environments ÷ Total Test Scenarios) × 100
For example, if a project runs 100 tests on Chrome, Firefox, and Edge but only 80 on Safari, then Safari’s execution coverage would be:
(80 ÷ 100) × 100 = 80%
A best practice to improve execution coverage is to leverage cloud-based testing platforms like BrowserStack, Sauce Labs, or LambdaTest. These tools enable teams to efficiently run tests across multiple devices and browsers, ensuring broader coverage and faster execution.
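A small sketch of tracking per-environment coverage, using the browser numbers from the example above (the environment names and counts are just illustrative data):

```python
# Hypothetical run counts per environment, out of 100 planned scenarios.
total_scenarios = 100
runs = {"Chrome": 100, "Firefox": 100, "Edge": 100, "Safari": 80}

coverage = {env: round(n / total_scenarios * 100, 1) for env, n in runs.items()}
gaps = [env for env, pct in coverage.items() if pct < 100]

print(coverage)  # Safari shows 80.0, matching the example above
print(gaps)      # environments with incomplete execution coverage
```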
7. Return on Investment (ROI) of Test Automation
The ROI of test automation helps assess the overall value gained from investing in automation compared to manual testing. This metric is crucial for justifying the cost of automation tools and resources, measuring cost savings and efficiency improvements, and guiding future automation investment decisions.
The formula to calculate automation ROI is:
Automation ROI (%) = [(Manual Effort Savings – Automation Cost) ÷ Automation Cost] × 100
For example, if automation saves $50,000 in manual effort and costs $20,000 to implement, the ROI would be:
[(50,000 – 20,000) ÷ 20,000] × 100 = 150%
A best practice is to continuously evaluate ROI to refine the automation strategy and maximize cost efficiency. By regularly assessing returns, teams can ensure that automation efforts remain both effective and financially viable.
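The ROI formula above translates directly into a one-line helper; the dollar figures below are the hypothetical ones from the example:

```python
def automation_roi(manual_savings: float, automation_cost: float) -> float:
    """ROI (%) = ((manual effort savings - automation cost) / cost) * 100."""
    if automation_cost <= 0:
        raise ValueError("automation cost must be positive")
    return round((manual_savings - automation_cost) / automation_cost * 100, 1)

# $50,000 saved in manual effort against a $20,000 automation spend.
print(automation_roi(50_000, 20_000))  # 150.0
```

Note the result can be negative early on, when tooling and script-development costs still exceed the manual effort saved; ROI typically improves as automated suites are reused across releases.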
Conclusion
In conclusion, automation test coverage metrics are essential for ensuring product quality in today's QA practices. By tracking key metrics, such as the share of automatable test cases, pass rate, execution time, code coverage, and requirement coverage, teams can improve how they test, spot issues in automation scripts, and raise overall coverage. Practical techniques, like prioritizing test cases based on risk and applying continuous integration and deployment, increase automation coverage further, and real-world examples show these metrics matter across industries. Regularly reviewing and acting on automation test coverage metrics is necessary for improving quality assurance processes. Codoid, a leading software testing company, helps businesses improve automation coverage with expert solutions in Selenium, Playwright, and AI-driven testing. Their services optimize test execution, reduce maintenance effort, and ensure high-quality software.
Frequently Asked Questions
- What is the ideal percentage for automation test coverage?
There isn't a single percentage that works for every situation. The ideal level of automation test coverage depends on the software project's complexity, how much risk you can tolerate, and how efficient you want your tests to be. That said, aiming for 80% or more is usually considered a good target for quality assurance.
- How often should test coverage metrics be reviewed?
You should review test coverage metrics often; this is an important part of the quality assurance and test management process and keeps team members aware of progress. Continuous monitoring is best, complemented by more formal reviews at the end of each sprint or development cycle so adjustments and improvements can be made.
- Can automation test coverage improve manual testing processes?
Yes, automation test coverage can help improve manual testing processes. When we automate critical tasks that happen over and over, it allows testers to spend more time on exploratory testing and handling edge cases. This can lead to better testing processes, greater efficiency, and higher overall quality.