by Anika Chakraborty | Oct 4, 2021 | Software Testing, Blog, Latest Post |
Some still believe that the need for manual testing can be completely eliminated by automating all the test cases. In reality, you simply can't eradicate manual testing, as it plays an extremely important role in covering edge cases from the user's perspective. Since both automation and manual testing are clearly needed, it is up to us to strike the right balance between the two. So let's pit Automation Testing vs Manual Testing and explore how to balance them both in the software delivery life cycle.
Stable Automated Tests
Stable test automation scripts help your team provide quick feedback at regular intervals. Automated tests boost your team's confidence and help certify whether a build is releasable. If you have a good talent pool to create robust automated scripts, your testers can concentrate on other problem-solving activities (Exploratory & Usability Testing) that add value to the software.
Quarantined Automation Test Suites
Flaky automated tests kill the productivity of QA testers with their false positives and negatives. But you can't just discard unstable automated test scripts, as your team has invested a lot of time & effort in developing them. The workaround is to quarantine the unstable automated test scripts instead of executing both stable and unstable scripts in the delivery pipeline. Stabilizing the quarantined test scripts and moving them back into the delivery pipeline is also a problem-solving activity. As a leading automation testing company, we strongly recommend avoiding flaky automation test scripts from the very beginning. You can make that possible by following the best practices for creating robust automated test scripts.
Tips to stabilize the unstable scripts
- Troubleshoot and fix script failures immediately instead of delaying them.
- Train the team to identify web and mobile elements reliably.
- Avoid boilerplate code.
- Implement design patterns to minimize script maintenance (see the sketch after this list).
- Execute the scripts as often as possible to identify repetitive failures.
- Run the scripts batch by batch instead of running everything in one test suite.
- Make the automated test report visible to everyone on the team.
- If your team is not able to understand the test results, make the reports more readable.
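If the team is new to design patterns, the Page Object pattern is a good first step: it keeps locators and interactions in one place, so a UI change is fixed in one class instead of dozens of scripts. Below is a minimal sketch in Java with Selenium; the page class, locators, and flow are hypothetical examples rather than code from a real project.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;
    // Hypothetical locators; keeping them here localizes UI changes to one class
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Test scripts call this method instead of touching locators directly
    public void loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
    }
}
A test script then reads as new LoginPage(driver).loginAs("tester", "secret"), which stays stable even when the login page's markup changes.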
Exploratory Testing
Test automation frees testers from repetitive scripted testing. Scripted automated testing does enable fast feedback loops, but your team will be able to unearth new product features & bugs and identify additional test scenarios when performing exploratory testing.
Benefits of Exploratory Testing (ET)
- Scripted testing provides confidence, but ET helps your team to understand whether the new features make sense from the end user’s standpoint.
- ET is usually performed by testers who have good product knowledge. All feedback from ET is also input for your automated test suites.
- Some test cases cannot be automated, so you can allocate those test cases to ET sessions.
- ET helps you to create additional tests which are not covered in automated test suites.
Usability Testing
Usability testing is an experiment to identify and rectify usability issues that exist in the product. Let's say a functional tester is testing the product check-out scenario of an e-commerce app. Here, the tester has good knowledge of the product and knows where the important buttons (Add to Cart & Proceed to Checkout) are placed in the app. However, in the real world, if end users struggle to find these buttons, they will definitely have second thoughts about using your app again. So it is clearly evident that functional UI tests can't help you identify usability issues.
Usability Testing Steps
1. Prepare the research questions and test objectives
2. Identify participants who represent the end-users
3. Set up a realistic work environment
4. Interview the participants about the app experience
5. Prepare test reports with recommendations
Conclusion
We hope it is now clear that relying on only one of the two types of testing leads to surface-level solutions. For robust solutions, use both manual and automation testing for the features that are released continuously through the delivery pipeline. So compare Automation Testing vs Manual Testing and use whichever is more apt for each requirement. Testers should perform exploratory testing continuously against the latest builds that come out of Continuous Integration, while scripted testing should be handled by automation.
by admin | Oct 1, 2021 | Software Testing, Blog, Latest Post |
A DevOps testing strategy is very much a team effort: it defines the timing and frequency of a team's testing practices and how they run in an automated fashion. Though such a radical transformation might not be easy for some testers, the proper use of strategies can be a game-changer. As a leading QA company, we always make sure to implement the best DevOps testing strategy to streamline our testing process. So in this blog, we will introduce you to the benefits of an effective DevOps Testing Strategy and the tips & tools you can use to implement that strategy yourself.
Benefits of using an effective DevOps Testing Strategy:
A company can stay ahead of the pack in today’s competitive market by becoming more efficient with its process to offer the finest features to customers on time. Here are some of the major advantages that a firm may gain by implementing the DevOps methodology.
Higher Release Velocity:
DevOps practices help increase release velocity, enabling us to release code to production at a faster rate. But it is not just about speed, as we will also be able to perform such releases with more confidence.
Shorter Development Cycle:
With the introduction of DevOps, the complete software development cycle starting right from the initial design up to the production deployment becomes shorter.
Earlier Defect Detection:
With the implementation of the DevOps approach, we will be able to identify defects much earlier and prevent them from getting released to production. So improved software quality becomes a natural byproduct of DevOps.
Easier Development Rollback:
In addition to earlier defect detection, DevOps helps us predict possible failures due to a bug in the code or an issue in production, so we need not worry about downtime for a rollback. It either avoids the issue altogether or leaves us better equipped to handle the situation. Even in the event of an unpredicted failure, a healthy DevOps process will make it easier for us to recover.
Curates a Collaborative Culture:
The core concept of DevOps itself is to enhance the collaboration between the Dev & Ops team so that they work together as one team with a common goal. The DevOps culture brings more than that to the table as it encourages employee engagement and increases employee satisfaction. So engineers will be more motivated to share their insight and continuously innovate, enabling continuous improvement.
Performance-Oriented Approach:
DevOps can ensure an increase in your team’s performance as it encourages a performance-oriented approach. Coupling that with the collaborative culture, the teams become more productive and more innovative.
More Responsibility:
Since DevOps engineers share responsibility for key deliverables and goals, they gain a broader perspective that prevents siloed thinking and helps guarantee success.
Better Hireability:
DevOps cultures require a variety of skill sets and specializations for successful implementation. Studying DevOps is thus a lucrative career path for development and operations staff, and a great advantage anywhere in the digital & IT sphere.
On the whole, DevOps focuses on removing siloed perspectives, ensuring pipelines are delivered with as much business value as possible.
Tips to create an effective DevOps Testing Strategy:
Now that we have seen all the benefits that an effective DevOps Testing Strategy has to offer, let’s explore a few tips that one would have to follow to reap such benefits.
1. Deploy small changes as often as possible:
Deploy small changes frequently, as this allows for a more stable and controllable production environment. Even in the case of a crucial bug, it will be easier to identify and come up with a solution.
2. Infrastructure as code:
An incentive to adopt infrastructure as code is better governance of the deployment process across different environments: automated management enables faster, more efficient, and more reliable deployments than a manual process.
3. GIT log commit messages:
A Git log can be very messy to read. A good way to make it clear and understandable is to write meaningful commit messages. A good commit message consists of a clear title (the first line) and a good description (the body of the message).
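Here is an illustrative commit message following that structure; the change, ticket number, and details are entirely hypothetical:

Fix flaky login test caused by a missing explicit wait

The login spec intermittently failed on slow CI agents because the
dashboard element was queried before the page finished loading. Added
an explicit wait on the dashboard header and removed the hard-coded
sleep. Refs: PROJ-1234.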
4. Good read – “DevOps Handbook”:
Books are always one of the best ways to learn, and the DevOps Handbook can answer a lot of the questions you might have, such as:
What is DevOps culture?
What are its origins?
How to evaluate DevOps work culture?
How to find the problems in your organization’s process and improve them?
5. Build Stuff:
Get hands-on: work with Linux, the cloud, and DevOps tooling; take AWS training and experiment in the AWS free tier; play with Docker and Kubernetes (K8s); and publish your code on GitHub.
Tools to make an effective DevOps Testing Strategy:
We have already established that the DevOps movement encourages IT departments to improve developer, sysadmin, and tester teamwork. So it’s more about changing processes and streamlining workflows between departments than it is about using new tools when it comes to “doing DevOps.” As a result, there will never be an all-in-one DevOps tool.
However, if you have the correct tools, you can always benefit. DevOps tools are divided into five categories based on their intended use at each stage of the DevOps life cycle.

Configuration Management:
First off, we have configuration management, which is about managing the configurations of all environments of a software application. To implement continuous delivery, keep configurations under version control and apply every change through an automated process rather than manually: changes are checked in as scripts and deployed through automation.
Configuration management is classified in two ways:
1. Infrastructure as code
2. Configuration as code
Infrastructure as code:
It means defining the entire environment as code or scripts. An environment definition generally includes setting up servers, configuring networks, and setting up other computing resources; basically, everything that is part of the IT infrastructure setup. All these details are written out as text files or code and then checked into a version control tool, which becomes the single source for defining environments and even updating them. This removes the need for a developer or tester to be a system admin expert to set up servers for development or testing activity. So the infrastructure setup process in DevOps is completely automated.
Configuration as code:
It means defining the configuration of servers and their components as scripts and checking them into version control. This can include parameters that define the recommended settings for software to run successfully, a set of commands to be run initially to set up the software application, the configuration of each component of the software, or even specific user rules and privileges.
Puppet:
Puppet is a well-established configuration management platform. Configuration management in terms of servers means that when you have a lot of servers in a data center or an in-house setup, you want to keep them in a particular state. Since Puppet is also a deployment tool, simple code can be written and deployed onto the servers to install software on the systems automatically. Puppet implements infrastructure as code: the policies and configurations are also written as code.
Chef:
Chef also implements infrastructure as code. Chef ensures our configurations are applied consistently in every environment, at any scale, with the help of infrastructure automation. Chef is best suited for organizations that have a heterogeneous infrastructure and are looking for mature solutions to:
1. Programmatically provision and configure components.
2. Treat infrastructure like any other codebase.
3. Recreate the business from the code repository, data backups, and compute resources.
4. Reduce management effort through abstraction.
5. Preserve the configuration of your infrastructure in version control.
Chef also ensures that each node complies with the policy, and the policy is determined by the configurations in each node's run list.
Ansible:
Ansible has a lot going for it, as it is a configuration management, deployment, and orchestration tool in one. It can configure multiple machine profiles automatically, and those automated configurations can then be used for deployment purposes, for example alongside Docker. Ansible is a "push-based" configuration management tool: when we want to apply changes on multiple servers, we can just push the changes to the affected nodes instead of reconfiguring entire systems. It automates your entire IT infrastructure and provides large productivity gains.
Continuous integration (CI):
Continuous integration is a DevOps software development practice that enables developers to merge their code changes into a central repository so that automated builds and tests can be run. Continuous integration is considered such an important practice for the following reasons:
1. It avoids merge conflicts when we sync our source code from our systems to the shared repository, helping different developers collaborate on source code in a single shared repository without issues.
2. The time we spend on code review is something we can easily decrease with the help of continuous integration.
3. Since developers can easily collaborate with each other, it speeds up the development process.
4. We will also be able to reduce the project backlog, as we can make frequent changes in the repositories. So if any kind of product backlog is pending for a long time, it can be managed easily.
Jenkins:
Jenkins is a continuous integration tool that allows continuous development, testing, and deployment of newly created code. It is an open-source automation server and a web-based application developed entirely in Java; Jenkins pipeline scripts use Groovy in the back end. It is used for integrating all the DevOps stages with the help of plugins to enable continuous delivery. There are two ways to define jobs: through the GUI or through a Groovy pipeline script, and which to use depends on the project. In a few of our projects, we use a Jenkinsfile created in the project repository and configure the job to run that file; other teams configure everything in the GUI.
Travis:
Travis CI is undoubtedly one of the most straightforward CI servers to use. It is an open-source, distributed continuous integration solution for building and testing GitHub projects, and it can be set up to run tests on a variety of machines, depending on the software installed.
Team City:
TeamCity is a powerful, extensible, all-in-one continuous integration server. The platform is provided by JetBrains and is written in Java. Around 100 ready-to-use plugins support various frameworks and languages. The installation of TeamCity is very straightforward, and there are installation packages for multiple operating systems.
Configuration Inspection:
Now let's take a look at a few top tools used for configuration inspection.
SonarQube:
SonarQube is the central location for code quality management. It provides visual reporting on and across projects, and it can replay previous code to examine how metrics have evolved. Though it is written in Java, it can analyze code in more than 20 programming languages, making it a tool you can hardly avoid.
Fortify:
The Fortify Static Code Analyzer (SCA) assists you in verifying the integrity of your program, lowering expenses, increasing productivity, and implementing best practices for secure coding. It examines the source code, determines the sources of software security flaws, then correlates and prioritizes the findings. As a result, you get line-of-code guidance for fixing security flaws.
Coverity:
Coverity identifies flaws that are actionable and have a low false-positive rate. The team is encouraged to develop better, cleaner, and more robust code as a result of their use of the tool. Coverity is a static analysis (SAST) solution for development and security teams that helps them resolve security and quality flaws early in the software development life cycle (SDLC), track & manage risks throughout the application portfolio, and assure security & coding standards compliance. Coverity offers security and quality checking support for over 70 frameworks and 21 languages.
Containerization:
The practice of distributing and deploying applications in a portable and predictable manner is known as containerization. We achieve this by packaging application code and its dependencies into containers: standardized, isolated, and lightweight process environments.
Docker:
Docker is an open platform used by DevOps teams to make it easier for developers and sysadmins to push code from development to production without having to use multiple conflicting environments throughout the application life cycle. Docker’s containerization technology gives apps mobility by allowing them to run in self-contained pieces that can be moved around.
Vagrant:
Vagrant is a virtual machine manager that is available as an open-source project. It's a fantastic tool that allows you to script and package the VM configuration and provisioning setup for several VMs, each with its own Puppet and/or Chef setup.
Virtualization:
Multiple independent services can be deployed on a single platform using virtualization, which refers to running multiple operating systems simultaneously on a single machine. Virtualization is made feasible by a software layer known as a "hypervisor." A virtual machine contains its own dependencies, libraries, and configurations, and runs its own operating system while sharing the underlying server's resources. Virtual machines have their own virtual infrastructure and are isolated from the rest of the host, so they can run applications on different operating systems without the need for additional hardware.
Amazon EC2:
The Amazon Elastic Compute Cloud (Amazon EC2) uses scalable processing capabilities in the Amazon Web Services (AWS) cloud to deliver virtualization. By reducing the initial cost of hardware, Amazon EC2 reduces capital expenditure. Virtual servers, security and networking configurations, and storage management are all available to businesses.
VMWare:
Virtualization is provided by VMware through a variety of products. Its vSphere product virtualizes server resources and provides crucial capacity and performance control capabilities. Network virtualization and software-defined storage are provided by VMware's NSX and Virtual SAN, respectively.
Conclusion:
So we have comprehensively covered the many benefits you can get from DevOps, the tips to follow while implementing it, and the various tools you would need at every stage of the DevOps process. We hope you had an enjoyable read while learning how these tips & tools make for an effective DevOps Testing Strategy. We have been able to make large strides and grow with DevOps to provide some of the best automation testing services to our clients, and we felt it was necessary to share some of the valuable lessons we have learned from experience.
by admin | Sep 30, 2021 | Automation Testing, Blog, Latest Post |
Cypress has been gaining popularity in the testing community, although Selenium is still the favorite choice. In its early days, Cypress offered features only for unit testing and was supported by only a few browsers. However, Cypress has now extended its capabilities to end-to-end testing, integration testing, and unit testing, so choosing between the two options isn't as easy as it once was. In this blog article, we will pit Selenium vs Cypress and list the key differences between the two to find out which better suits your needs.
Cypress
Cypress is not an entirely open-source tool; only its test runner is open-source. Many modern test automation frameworks are built on top of Selenium, whereas Cypress has its own architecture for interacting with browsers. Selenium performs actions on a browser through the browser's API, while Cypress runs a Node process behind the scenes that controls the web application, enabling the scenarios listed below.
- Stub functions of your browser or application to force them to behave as required by your test case.
- Programmatically alter the state of your application directly from your test code by exposing data stores (like in Redux).
- Force your server to send empty responses and test edge cases like 'empty views'.
- Test how your application responds to errors on your server by modifying response status codes to be 500.
- Directly modify DOM elements, for example forcing hidden elements to be shown.
- Prevent Google Analytics from loading before any of your application code executes while testing.
- Stay in the loop with synchronous notifications whenever your application transitions to a new page or begins to unload.
- Control time by moving it forward or backward, so timers or polls fire automatically without having to wait in your tests.
- Add your own event listeners to respond to your application, and even update your application code so that it behaves differently when under test in Cypress.
Source – Cypress Documentation
Selenium
Selenium WebDriver can be used to control your browser either locally or remotely.
Local – First up, the client binding sends the WebDriver commands to the driver. The driver then sends each command to the browser. Once a command is executed on the browser, the outcome of the command execution is sent back through the same channel.
Remote – Let’s say you have the automation codebase in Windows, but want to run your scripts on the Chrome & Linux combination. You can start the Selenium Remote WebDriver on the Linux machine. After which, the client binding from Windows will send the commands to the remote WebDriver. From there the remote WebDriver sends the commands to Chrome Driver, and finally, the commands reach the browser via the Chrome Driver.
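To make the remote flow concrete, here is a minimal Java sketch of driving a browser through a remote Selenium server; the hub URL is a hypothetical placeholder for your own remote WebDriver endpoint.
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class RemoteExample {
    public static void main(String[] args) throws Exception {
        // The client binding runs here (e.g. on Windows) and sends commands
        // to the remote server, which forwards them to the Chrome Driver.
        ChromeOptions options = new ChromeOptions();
        // Hypothetical hub URL; replace with your Selenium server or Grid address
        WebDriver driver = new RemoteWebDriver(new URL("http://192.168.1.10:4444/wd/hub"), options);
        try {
            driver.get("https://www.codoid.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}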
Advantages of Selenium
As stated earlier, Selenium is the crowd favorite. So let’s take a look at the advantages that make Selenium so popular.
- Selenium WebDriver supports multiple programming languages.
- Selenium can be integrated into any test automation framework.
- It supports multiple browsers.
- In Selenium 4, you can get the DevTools instance using CDP.
- Selenium also has a strong online community.
We have seen a glimpse of both Cypress and Selenium separately. Now let's take the Selenium vs Cypress comparison a notch higher and compare both on a point-by-point basis to see which one will be useful for you.
Selenium vs Cypress Comparison
| S. No | Feature | Selenium | Cypress |
|---|---|---|---|
| 1 | Programming languages | Client bindings for C#, Python, Ruby, Java, JavaScript, and more | JavaScript |
| 2 | Pricing | Free | Free for up to 3 users; 75 USD per month for 10 users |
| 3 | Browsers supported | Chrome, Edge, IE, Firefox, Safari, Opera, headless | Edge, Chrome, Firefox, Electron |
| 4 | Video recording | Selenium is a web browser automation tool; you can add execution recording in your test automation framework, but not through Selenium itself | Cypress has an in-built feature for video recording |
| 5 | Screenshots | Page-level and element-level screenshots can be captured using the TakesScreenshot interface | By default, screenshots are taken on failure and embedded into test results; to take one at any point inside a script, use the cy.screenshot() command |
| 6 | Jira & Slack integration | No; you need to write your own utilities to integrate Jira & Slack | Yes |
| 7 | Testing libraries | You can use any testing library supported by the client bindings | Cypress supports only the following JavaScript testing libraries: Mocha, Chai, Chai-jQuery, and Sinon-Chai |
| 8 | Reporting | No in-built reporting libraries; you need to integrate external reporting tools such as ReportPortal or Allure | A comprehensive reporting dashboard that makes test results visible to the entire team |
| 9 | Open-source/freeware/commercial | Open-source | The Cypress Dashboard is a commercial tool; the Test Runner is open-source |
| 10 | Load balancing | No | Cypress automatically balances your spec files across the available machines in your CI provider |
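As a quick illustration of the screenshots row above, here is a minimal Java sketch using Selenium's TakesScreenshot interface; the destination path is a hypothetical example.
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ScreenshotExample {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.codoid.com");
            // Cast the driver to TakesScreenshot and capture the visible page
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            // Copy the temp file to a folder of your choice (hypothetical path)
            Files.createDirectories(Paths.get("screenshots"));
            Files.copy(shot.toPath(), Paths.get("screenshots", "home-page.png"));
        } finally {
            driver.quit();
        }
    }
}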
Conclusion
As a leading test automation company, we use Selenium extensively for delivering best-in-class automation testing services. After a lot of R&D, we did try Cypress out in a couple of our projects. We found Cypress to be a good choice for web applications developed using JavaScript technologies on both the client side and the server side.
by admin | Sep 28, 2021 | Automation Testing, Blog, Latest Post |
If you are assuming that achieving successful test automation is a cakewalk, you couldn't be more wrong. You need a highly skilled team, a proper automation framework, healthy support from management, the right tools, and proper training. Beyond that, we should always be on the lookout for ways to improve the automation testing process & scripts. But don't be overwhelmed by the long list of requirements, as we will be exploring the fundamentals of automation testing in this blog. So let's get started.
Automation Testing Team
You can bring in a good automation testing tool to automate web applications, mobile apps, and APIs. But no tool can write automated test scripts by itself; you need human intelligence to create reliable test automation scripts. That is what makes your team one of the fundamentals of automation testing.
So let’s take a look at a few pointers to understand why we need automation testers to create and maintain automation test scripts.
- Only an automation tester who has good domain & system knowledge can write proper test automation workflows for the given end-to-end tests.
- No application is static in the modern world. Your application's GUI will vary based on test data, geolocation, language, user roles, and color themes. An automation testing tool needs two inputs: information about the element to interact with, and the test data to be fed in. Only an automation tester can provide the right inputs to the tool while automating a test workflow (see the sketch after this list).
- Test automation tools can tell you how many times a test script has failed. However, they can't tell you why. Automation testers troubleshoot the failures and understand their reasons, so if there is an issue in the script, it gets fixed immediately.
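To make the "two inputs" idea concrete, here is a tiny Java/Selenium sketch; the URL, locator, and test data are all hypothetical.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class TwoInputsExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/signup"); // hypothetical page
            // Input 1: information about the element to interact with
            By emailField = By.id("email"); // hypothetical locator
            // Input 2: the test data to feed in
            String testEmail = "tester@example.com"; // hypothetical data
            driver.findElement(emailField).sendKeys(testEmail);
        } finally {
            driver.quit();
        }
    }
}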
The Required Skills
Now that we have seen why we need the team, let’s check what skills an automation testing team should possess.
1. The team should know how to make the automated test reports visible to everyone.
2. They should have a thorough knowledge of the tools they use.
3. Boilerplate codes are killers during maintenance. So your team should know how to implement the code design patterns to avoid boilerplate codes.
4. Knowledge sharing within the team and following the best practices for test automation are vital aspects.
So when you have talented people on board, you can see success in test automation from the very beginning.
System Knowledge & Thinking
As an experienced automation testing company, we have rescued several failed automation testing projects. Our observations make it clear that system knowledge & thinking are also fundamentals of automation testing: the lack of technical skills was the major concern in many projects, but in some cases the projects failed due to a lack of domain knowledge. The teams thought only about their test automation tasks and lacked system thinking, i.e., the type of thinking that asks whether your automated test scripts add value to the system rather than just to your team or department. Your team should in no way be working only toward departmental goals. So the question has to be asked: how can we make a team focus on the system? Here are a few basics you can follow.
- Don’t isolate the automation testing team.
- Include them in all the process improvement meetings.
- Include a high-level and real-time automation testing report in the system development dashboard.
- Enable effective collaboration and knowledge sharing.
- Make sure everyone in the team is well aware of the system goals.
- Encourage them to participate in pair testing and exploratory testing sessions.
Designing automated test workflows requires in-depth knowledge of the application domain.
Acceptance Test-Driven Automation (ATDA)
Times were different a decade ago: automation testing teams usually spent a lot of time developing test automation frameworks, which meant the script development process started only after the entire framework was built. But it goes without saying that automated script development should not have to wait for framework development to finish.
But you might ask how it is possible to start script development without a framework.
It is possible if you are familiar with Test Driven Development (TDD) as you can easily understand Acceptance Test-Driven Automation (ATDA) as well. ATDA allows you to develop the test automation framework while writing automated test scripts.
In TDD, developers first write unit tests and then develop the application features until those tests pass. In the same way, in test automation you start writing test scripts using tools such as Cucumber, Selenium, and Appium. If a script needs test data from an Excel or YAML file, you add the test data utility to the framework before moving on to the next test script.
So when using ATDA, the test automation framework evolves while the test scripts are developed in parallel. In the traditional automation approach, you spend an enormous amount of time developing a framework and its utilities up front; moreover, it is impossible to gather all the requirements for a framework without commencing script development.
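As a minimal sketch of this evolution, suppose the next test script needs login data from a YAML file: you would add a small test data utility to the framework at that point. This example assumes the SnakeYAML library is on the classpath, and the file path and keys are hypothetical.
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class TestDataUtil {
    // Load a flat map of test data from a YAML file (hypothetical path & keys)
    public static Map<String, Object> load(String path) throws Exception {
        try (InputStream in = new FileInputStream(path)) {
            return new Yaml().load(in);
        }
    }
}
A script can then call TestDataUtil.load("testdata/login.yaml") and read values such as data.get("username"), and the utility stays in the framework for every script that follows.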
Test Data Management
Automated test script executions need a fresh dataset. Most execution failures happen for the three reasons below.
1. Missing test data, or test data fed in for execution that has already been consumed.
2. Application Issues
3. Script Issues (Incorrect object locator details, UI or Functionality Change)
Test data issues need to be avoided at any cost, and automation testers should not have to spend time feeding test data into automated test suites before every execution.
Use or create a test data management tool to keep as much of the required test data ready as possible. In short, make sure your test automation scripts don't starve for test data.
Reporting & Logging
Since Reporting & Logging is one of the fundamentals of automation testing, you have to make sure that any and all automation testing frameworks you choose have the below capabilities.
- The reports should be real-time as your team should be able to view which test case is being executed now and the test step which was just executed.
- All the framework & utilities-related exceptions and log information should go into a separate log file (see the sketch after this list).
- The reporting tool should track how many times a script has failed.
- It should also collect the report data for metrics.
- Reports should be accessible and understandable to everyone.
- Screenshots and Video recordings should be embedded.
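Here is a minimal sketch of routing framework logs to their own file using java.util.logging; the logger name and file path are hypothetical, and it assumes a logs/ directory exists.
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class FrameworkLog {
    private static final Logger LOGGER = Logger.getLogger("framework");

    static {
        try {
            // Append framework/utility logs to logs/framework.log (hypothetical path)
            FileHandler handler = new FileHandler("logs/framework.log", true);
            handler.setFormatter(new SimpleFormatter());
            LOGGER.addHandler(handler);
            // Keep framework noise out of the console and test-result output
            LOGGER.setUseParentHandlers(false);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static Logger get() {
        return LOGGER;
    }
}
Framework utilities then log through FrameworkLog.get(), while test results flow to the reporting tool untouched.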
There are many test automation reporting tools available in the market that can come in handy. Make sure the tool takes care of everything once you ingest the reporting data into the tool database. Never try to build a reporting tool on your own unless you have a special requirement. Maintaining a tool is an overhead, and your focus should be on the Framework & script development.
Conclusion
So these are the fundamentals of automation testing that one has to know. As one of the best companies for automation testing, we continuously improve our test automation process and train our teams for continuous improvement. We have seen many instances where frameworks were not updated with the latest libraries and test automation techniques. So it is evident that when you have a proper team that always focuses on improving and collaborating effectively, you will see success in test automation.
by admin | Sep 27, 2021 | Selenium Testing, Blog, Latest Post |
In Selenium 4, the WebDriver bidirectional (BiDi) protocol has been implemented. To understand why WebDriver needs the BiDi protocol, first recall how the classic flow works: the Selenium WebDriver client library sends HTTP command requests to a server, the server processes each received request, and once the server is ready with the response, it sends the response back to the client.
So the Selenium WebDriver remote server does not send any HTTP response or message to the client until it receives an HTTP request.

In the above diagram, the client sends a request and the server responds. The client does not kill the connection with the server until the response for a command request is received. Selenium WebDriver has more than 60 endpoints to communicate with a remote server.
Why BiDi Protocol?
The WebDriver Bi-Directional protocol allows both the client and the server to send & receive requests and responses. But if we already have enough HTTP commands to send requests to the server, why do we need the BiDi protocol? To understand this, you need to know about some features available in Selenium 4.
- Listening to DOM events
- Capturing JS errors and console messages and sending them to the client
- Recording network traffic
- Access to the native DevTools protocol
Let’s take sending console messages to the client as an example. If a webpage has 100 console log messages, then the server needs to establish 100 HTTP connections and moreover, the server cannot send a response without a valid request from the client.
Even if the Selenium developers tried to implement this feature with HTTP connections, the server wouldn't be able to handle multiple requests simultaneously. That is why the BiDi protocol has come into the picture.
What is BiDi Protocol?
BiDi is a WebSocket protocol, and WebSocket is an event-driven protocol. Let's say the Selenium client wants JavaScript error messages from the browser. Whenever the browser notifies the server of a JavaScript error, the error message is forwarded to the client. So the server sends the error message only when there is an error; this is what makes the protocol event-driven.
In the below diagram, you can see that the server sends the WebSocket URL to the client in the new-session endpoint response. Any commands beyond the HTTP server endpoints are communicated using the WebSocket URL.

When the client needs the JavaScript console error logs, then it should subscribe to the server so that the Selenium Server will send the error messages to the client from the browser.
Refer to the snippet below. The client subscribes to the browser log messages, so whenever a log message is added in DevTools, the server sends it to the client.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.Log;
import org.slf4j.*;

public class Selenium4Devtools {

    final static Logger logger = LoggerFactory.getLogger(Selenium4Devtools.class);

    public static void main(String args[]) {
        System.setProperty("webdriver.chrome.driver", "drivers/chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        try {
            // Get a DevTools handle from the Chrome driver and open a session
            DevTools devTools = ((ChromeDriver) driver).getDevTools();
            devTools.createSession();
            // Subscribe to browser log events
            devTools.send(Log.enable());
            // The listener fires whenever the browser pushes a new log entry to the client
            devTools.addListener(Log.entryAdded(),
                    entry -> logger.error(entry.asSeleniumLogEntry().getMessage()));
            driver.get("https://www.codoid.com");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            driver.quit();
        }
    }
}
Conclusion
As a leading automated functional testing services company, we have explored many Selenium 4 features, and the WebDriver BiDi protocol is certainly an interesting topic. It is pivotal for every automation tester to be familiar with Selenium's internal processes.
by admin | Sep 29, 2021 | E-Learning Testing, Blog, Latest Post |
Creating responsive content is always the recommended way irrespective of the platform you are creating it for. But when it comes to content on the LMS, the importance is a notch higher as many eLearning courses are now available on smartphones and tablets as well. The unexpected pandemic has only given more reasons for you to make your content responsive. To begin with, the number of people taking eLearning courses went up. They did so with the devices they had in hand. So the already difficult challenge of formatting the content to fit across different screen sizes and form factors got a lot harder. But even without the pandemic, testing your LMS to see if the content is responsive has always been a primary focus. So let’s take a look at a few vital tips that you can follow to make the job easier.
The LMS Choice
Choosing the right LMS provider for the job is the first step and a very important step as well. But how will you know if you have the right one? The best option here is to analyze the LMS features and verify if it will be helpful in building and delivering content that can be optimized for mobile or tablet users. We should also verify if the LMS provider will be able to provide the instructors and course designers with the required tools to create responsive content. Not all tools work in all use cases, which is why the LMS providers should have the expertise to choose the tools that will be apt for your needs. The reason we’re starting with these factors is that the content creation process itself should start with the end goal in mind.
Curating Responsive Content
So with the first step, we have made sure that the instructors and content designers have the best tools at their disposal to get the job done. Now we have to implement the best practices to ensure that the curated content is responsive. For the content to look clean and organized across all screen sizes, we can assign a limited word count for the instructor's written content. Likewise, we can limit the image sizes and advise the designers to work accordingly. Video content poses different challenges: if a user has a slow internet connection, videos will struggle to load, so we have to make sure the instructor creates a transcript of each video.
Test Strategy
Creating a test strategy from the learner's viewpoint helps us understand their expectations better. So your test strategy should align with those expectations and the initial goals defined at the start of the development process. The test strategy shouldn't just focus on the content's accuracy; the performance of the platform should also be tested without fail. So it is imperative to check how well the content performs on different devices under different scenarios.
Let's say the content fits perfectly but takes a long time to load. That shouldn't count as a pass when testing whether the content is responsive; instead, we have to make sure there is no loss in usability. So before you start developing your test strategy, make sure to receive all the documentation and go through it thoroughly. It will give you a crystal-clear understanding of each module or component's purpose.
Conclusion
Although these are valuable tips you can follow to ensure that your content is responsive, it is vital for the LMS to make it as easy as possible for learners to submit feedback if or when they face any issues. As a leading e-learning testing company, we never overlook the fact that content will constantly be added to the LMS. So we have used learner feedback and our own test reports to streamline the process and set ourselves on a path of continuous improvement.