The Top 5 JSON Libraries Every Automation Tester Must Know

Transferring data between a client and a server has become more important than ever. For a long time, XML (Extensible Markup Language) has been one of the most popular ways to do it. Be it a configuration file or a mapping document, XML has made life easier for us by giving data a clear structure and enabling quick data interchange along with dynamic configuration & loading of variables. Then came JSON (JavaScript Object Notation), a competitive alternative and even a possible replacement for XML. As a leading Test Automation Company, we make sure to always use the best tools in our projects. So in this blog, we will list the top 5 JSON libraries every tester should know about and explain why each one matters. But let’s take a look at a few basics before heading to the list.

What is JSON?

JSON is a data format that is easy for us humans to read and write, and easy for machines to parse and generate. It is mainly used to transmit data from a server to a web or mobile application. JSON is a much simpler and more lightweight alternative to XML as it requires less markup and is smaller in size, which makes it faster to process and transmit. Although its syntax is derived from JavaScript, JSON is language-independent.

Why is JSON so popular?

What makes JSON so popular is that it is text-based and easy to parse, which helps deliver faster data interchange and excellent web service results. JSON is supported in all browsers, and countless open-source libraries exist for it. If we take a look at its other advantages, it has a very precise syntax, creating & manipulating JSON is easy, and it maps to a simple key-value (map) structure instead of XML’s tree structure. We have added a sample JSON snippet below:

{
  "Id": "101",
  "name": "Elvis",
  "Age": 26,
  "isAlive": true,
  "department": "Computer Science"
}
JSON Syntax Rules:

The syntax rules are very similar to the syntax rules of JavaScript, and they are as follows,

1. It should start and end with curly brackets.

2. Keys must be strings written in double quotes, while values can be strings, numbers, booleans, arrays, objects, or null.

3. Key-value pairs are separated by commas.

Example:

{“name”:”Adam”,”age”:23}

4. Square brackets hold the arrays.

1. Jackson JSON Library

Jackson is an open-source library that is widely used in the Java community, mostly because of the clean and compact JSON it produces, which is very easy to read. It has no external dependencies, and mapping creation is usually not required as it provides default mappings for most serializable objects. Even when processing a large object or object graph, it consumes relatively little memory to produce the result.

Three ways to process JSON with the Jackson API

1. Streaming API

It enables us to read and write JSON content as discrete events. Here, the JsonParser reads the data and the JsonGenerator writes it. You can add it to your Maven project by declaring its dependency in the pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.11.1</version>
</dependency>
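
To give you a quick feel for it, here is a minimal sketch of reading JSON with the Streaming API; the file name student.json and the field we look up are just assumptions for this illustration.

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.File;

public class JacksonStreamingExample {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        // student.json is a hypothetical file holding the sample JSON shown earlier
        try (JsonParser parser = factory.createParser(new File("student.json"))) {
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                String field = parser.getCurrentName();
                if ("name".equals(field)) {
                    parser.nextToken();                 // move from the field name to its value
                    System.out.println("name = " + parser.getText());
                }
            }
        }
    }
}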
2. Tree Model

It converts the JSON content into a tree of nodes, with the ObjectMapper building a tree of JsonNode objects. The tree model approach can be considered equivalent to the DOM parser used for XML, and it is the most flexible approach as well. Like the Streaming API, the tree model can be added to your Maven project by declaring its dependency in the pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.8</version>
</dependency>
3. Data Binding

Data binding lets us convert JSON to and from Plain Old Java Objects (POJOs) with the use of annotations. Here, the ObjectMapper reads and writes both types of data binding (Simple Data Binding and Full Data Binding). We can add it to our Maven project by simply declaring its dependency in the pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.12.3</version>
</dependency>
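
Below is a minimal sketch of data binding with the ObjectMapper; the Student POJO is a hypothetical class created only for this illustration.

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonDataBindingExample {
    // a hypothetical POJO matching the sample JSON fields
    public static class Student {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Student student = mapper.readValue("{\"name\":\"Adam\",\"age\":23}", Student.class); // JSON -> POJO
        String json = mapper.writeValueAsString(student);                                    // POJO -> JSON
        System.out.println(student.name + " / " + json);
    }
}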

2. GSON Library

GSON is another open-source library, developed by Google. What makes it special among JSON libraries is that it can convert a JSON string into a Java object and a Java object into its equivalent JSON representation without requiring Java annotations in your classes.

Features of GSON

1. Open Source library

2. Cross-platform

3. Mapping is not necessary

4. Quite fast with a low memory footprint

5. No Dependencies

6. Clean and compact JSON results.

Also, GSON offers the same three ways to process JSON, and they are

1. Streaming API

2. Tree model

3. Data Binding

Adding it to your Maven project follows the same procedure; we just have to declare its dependency in the pom.xml file

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.2</version>
</dependency>
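
Here is a minimal sketch of converting a Java object to JSON and back with GSON; the Student class is a hypothetical example used only for this illustration.

import com.google.gson.Gson;

public class GsonExample {
    // a hypothetical POJO used only for this illustration
    static class Student {
        String name = "Adam";
        int age = 23;
    }

    public static void main(String[] args) {
        Gson gson = new Gson();
        String json = gson.toJson(new Student());            // Java object -> JSON string
        Student back = gson.fromJson(json, Student.class);   // JSON string -> Java object
        System.out.println(json + " / " + back.name);
    }
}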

3. JSON-simple Library

It is a simple JSON library that is used for encoding and decoding the JSON text. It uses Map and List internally for JSON processing. We can use this JSON-simple to parse JSON data as well as write JSON to a file.

Features of JSON-simple

1. Lightweight API, which works quite well with simple JSON requirements.

2. No dependencies

3. Easy to use by reusing Map and List

4. High in performance

5. Heap-based parser

If you want a lightweight JSON library that both reads & writes JSON and also supports streams, you should probably choose the JSON-simple library.

The same process of declaring its dependency in the pom.xml file can be followed to add it to your Maven project.

<dependency>
    <groupId>com.googlecode.json-simple</groupId>
    <artifactId>json-simple</artifactId>
    <version>1.1.1</version>
</dependency>
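
Here is a minimal sketch of decoding and encoding JSON with JSON-simple; the JSON string is reused from the earlier syntax example.

import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

public class JsonSimpleExample {
    public static void main(String[] args) throws Exception {
        // decode a JSON string into a Map-backed JSONObject
        JSONParser parser = new JSONParser();
        JSONObject obj = (JSONObject) parser.parse("{\"name\":\"Adam\",\"age\":23}");
        System.out.println(obj.get("name"));   // Adam

        // encode a JSONObject back into JSON text
        JSONObject out = new JSONObject();
        out.put("name", "Elvis");
        System.out.println(out.toJSONString());
    }
}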

4. Flexjson

Flexjson is another JSON library used to serialize and deserialize Java objects to and from JSON. What’s special about Flexjson is its control over serialization, which allows both deep and shallow copies of objects.

Normally, to send an object-oriented model or graph, other libraries require a lot of boilerplate to translate it into a JSON object. Flexjson tries to resolve this issue by providing a higher-level, DSL-like API.

If you know for a fact that your application will only handle small amounts of data that need to be stored and read in JSON format, you should consider using Flexjson.

As usual, we can add it to our Maven project by declaring its dependency in the pom.xml file.

<dependency>
	<groupId>net.sf.flexjson</groupId>
	<artifactId>flexjson</artifactId>
	<version>2.0</version>
</dependency>
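
Here is a minimal sketch of serializing and deserializing with Flexjson; the Student bean is a hypothetical example. For a deep copy of an object graph, deepSerialize() can be used instead of serialize().

import flexjson.JSONDeserializer;
import flexjson.JSONSerializer;

public class FlexjsonExample {
    // a hypothetical bean used only for this illustration
    public static class Student {
        private String name = "Adam";
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        String json = new JSONSerializer().serialize(new Student());                     // shallow serialization
        Student back = new JSONDeserializer<Student>().deserialize(json, Student.class); // JSON -> bean
        System.out.println(json + " / " + back.getName());
    }
}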

5. JSON-lib

JSON-lib is a Java library for transforming beans, maps, collections, Java arrays, and XML to JSON and back again into beans and DynaBeans. Beans are classes that encapsulate many objects into a single object (the bean), while a DynaBean is a Java object whose property names, data types, and values can be modified dynamically.

If you are about to use a large amount of data to store or read to/from JSON, then you should consider using JSON-lib or Jackson.

You can add the dependency below to your pom.xml file to include it in your Maven project.

<dependency>
    <groupId>net.sf.json-lib</groupId>
    <artifactId>json-lib</artifactId>
    <version>2.4</version>
</dependency>
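
Here is a minimal sketch of transforming a Map into JSON with JSON-lib and reading values back out of it; the map contents are just sample data for this illustration.

import net.sf.json.JSONObject;
import java.util.HashMap;
import java.util.Map;

public class JsonLibExample {
    public static void main(String[] args) {
        // transform a Map into a JSON object ...
        Map<String, Object> data = new HashMap<String, Object>();
        data.put("name", "Adam");
        data.put("age", 23);
        JSONObject json = JSONObject.fromObject(data);
        System.out.println(json);                     // {"name":"Adam","age":23}

        // ... and read values back out of it
        System.out.println(json.getString("name"));   // Adam
    }
}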

Conclusion:

We hope it is now clear which of these 5 JSON libraries is apt for your use based on the points we have discussed. As providing the best automation testing services is always a priority for us, we always explore all the viable options to streamline our process and enhance efficiency. With these libraries, you can parse a JSON String and generate Java objects, or create a JSON String from your Java objects. If you work with web services or any applications that return a JSON response, then these libraries are very important for you.

Ultimately, if you want to handle large data with a good response speed, you can go with Jackson. But if all you need is a simple response, GSON is better, and if you want to avoid heavyweight third-party dependencies, then you can go with JSON-simple or Flexjson.

What every QA Tester should know about DevOps Testing

Being good at any job requires making continuous learning a habitual part of your professional life. Given the significance of DevOps in today’s day and age, it becomes mandatory for a software tester to have an understanding of it. So if you’re looking to find out what a software tester should know about DevOps, this blog is for you. Though there are several new terms revolving around DevOps, like AIOps & TestOps, they are just subsets of DevOps. Before jumping straight into the DevOps testing-related sections, you must first understand what DevOps is, the need for DevOps, and its principles. So let’s get started.

Definition of DevOps

“DevOps is about humans. DevOps is a set of practices and patterns that turn human capital into high-performance organizational capital” – John Willis. Another quote that clearly sums up everything about DevOps is from Gene Kim and it is as follows.

“DevOps is the emerging professional movement that advocates a collaborative working relationship between Development and IT Operations, resulting in the fast flow of planned work (i.e., high deploy rates), while simultaneously increasing the reliability, stability, resilience, and security of the production environment.” – Gene Kim.

The above statements strongly emphasize a collaborative working relationship between the Development and IT operations. The implication here is that Development and Operations shouldn’t be isolated at any cost.

Why do we need to merge Dev and Ops?

In the traditional software development approach, development would commence only after the requirements had been captured fully. Once development was complete, the software would be handed over to the QA team for quality checks. One small mistake in the requirements phase could lead to massive rework that could have easily been avoided.

Agile methodology advocates that one team should share the common goal instead of working on isolated goals. The reason is that it enables effective collaboration between businesses, developers, and testers to avoid miscommunication & misunderstanding. So the purpose here would be to keep everyone in the team on the same page so that they will be well aware of what needs to be delivered and how the delivery adds value to the customer.

But there is a catch with Agile: the team thinks only up to the point where the code is deployed to production, while the remaining aspects, such as releasing the product onto production machines and ensuring its availability & stability, are taken care of by the Operations team.
So let’s take a look at the kind of problems a team would face when IT operations are isolated,

1. New Feature

Let’s say a new feature needs multiple configuration files for different environments. Then the dev team’s support is required until the feature is released to production without any errors. However, the dev team will say that their job is done as the code was staged and tested in pre-prod. It now becomes the Ops team’s responsibility to take care of the issue.

2. Patch Release

Another likely possibility is there might be a need for a patch release to fix a sudden or unexpected performance issue in the production environment. Since the Ops team is focused on the product’s stability, they will be keen to obtain proof that the patch will not impact the software’s stability. So they would raise a request to mimic the patch on lower environments. But in the meanwhile, end users will still be facing the performance issue until the proof is shown to the Ops team. It is a well-known fact that any performance issue that lasts for more than a day will most probably lead to financial losses for the business.

These are just 2 likely scenarios that could happen. There are many more issues that could arise when Dev and Ops are isolated. So we hope that you have understood the need to merge Dev and Ops together. In short, Agile teams develop and release software frequently in lower environments. Since deploying to production is infrequent, their collaboration with Ops will not be effective enough to address key production issues.

When Dev + Ops = DevOps, new testing activities and tools will also be introduced.

DevOps Principles

We hope you’ve understood the need for DevOps by now. So let’s take a look at the principles on which DevOps operates, after which we shall proceed to explore DevOps testing.

Eliminate Waste

Anything that increases the lead time without a reason is considered a waste. Waiting for additional information and developing features that are not required are perfect examples of this.

Build Quality In

Ensuring quality is not a job made only for the testers. Quality is everyone’s responsibility and should be built into the product and process from the very first step.

Create Knowledge

When software is released at frequent intervals, we will be able to get frequent feedback. So DevOps strongly encourages learning from feedback loops and improving the process.

Defer Commitment

If you have enough information about a task, proceed further without any delay. If not, postpone the decision until you get the vital information as revisiting any critical decision will lead to rework.

Deliver Fast

Continuous Integration allows you to push the local code changes into the master. It also lets us perform quality checks in testing environments. But when the development team pushes a bunch of new features and bug fixes into production on the day of release, it becomes very hard to manage the release. So the DevOps process encourages us to push smaller batches as we will be able to handle and rectify production issues quickly. As a result, your team will be able to deliver faster by pushing smaller batches at faster rates.

Respect People

A highly motivated team is essential for a product’s success. So when a process tries to blame people for a failure, it is a clear sign that you are not headed in the right direction. DevOps lends itself to focusing on the problem instead of the people during root cause analysis.

Optimise the whole

Let’s say you are writing automated tests. Your focus should be on the entire system and not just on the automated testing task. As a software testing company, our testers work by primarily focusing on the product and not on the testing tasks alone.

What is DevOps Testing?

As soon as Ops is brought into the picture, the team has to carry out additional testing activities & techniques. So in this section, you will learn the various testing techniques which are required in the DevOps process.

In DevOps, it is very common for you to see frequent delivery of any feature in small batches. The reason behind it is that if developers hand over a whole lot of changes for QA feedback, the testers will only be able to respond with their feedback in a day or two. Meanwhile, the developers would have to shift their focus towards developing other features.

So if any feedback is making a developer revisit the code that they had committed two or three days ago, then the developer has to pause the current work and recollect the committed code to make the changes as per the feedback. Since this back-and-forth would significantly impact productivity, deployments are done frequently in small batches, which enables testers to provide quick feedback and makes it easy to roll back a release when it doesn’t go as expected.

A/B Testing

This type of testing involves presenting the same feature in two different ways to random end-users. Let’s say you are developing a signup form. You can submit two Signup forms with different field orders to different end-users. You can present the Signup Form A to one user group and the Signup Form B to another user group. Data-backed decisions are always good for your product. The reason why A/B testing is critical in DevOps is that it is instrumental in getting you quick feedback from end-users. It ultimately helps you to make better decisions.

Automated Acceptance Tests

In DevOps, every commit should trigger appropriate automated unit & acceptance tests. Automated regression testing frees people to perform exploratory testing. Though contractors are highly discouraged in DevOps, they are suitable to automate & manage acceptance tests. Codoid, as an automation testing company, has highly skilled automation testers, and our clients usually engage our test automation engineers to automate repetitive testing activities.

Canary Testing

Releasing a feature to a small group of users in production to get feedback before launching it to a large group is called Canary Testing. In the traditional development approach, the testing happens only in test environments. However, in DevOps, testing activities can happen before (Shift-Left) and after (Shift-Right) the release in production.

Exploratory Testing

Exploratory Testing is considered a problem-solving activity instead of a testing activity in DevOps. If automated regression tests are in place, testers can focus on Exploratory Testing to unearth new bugs, possible features and cover edge cases.

Chaos Engineering

Chaos Engineering is the practice of running controlled experiments to check how your team responds to a failure and to verify whether the system can withstand turbulent conditions in production. Chaos Engineering was pioneered by Netflix.

Security Testing

Incorporate security tests early in the deployment pipeline to avoid late feedback.

CX Analytics

In classic performance testing, we focus only on simulating traffic and rarely concentrate on the client side’s performance or on how well the app performs on low network bandwidth. As a software tester, you need to work closely with IT Ops teams to get various analytics reports such as Service Analytics, Log Analytics, Perf Analytics, and User Interaction Data. When you analyze the production monitoring data, you can understand how the new features are being used by the end-users and improve the continuous testing process.

Conclusion

So to sum things up, you have to be a continuous learner who focuses on methods to improve the product and deliver value. It is also crucial for everyone on the team to use the right tools and follow the DevOps culture. DevOps emphasizes automating the processes as much as possible. So to incorporate automated tests in the pipeline, you would need to know how to develop robust automated test suites to avoid false positives & negatives. If your scripts are useless, there is no way to achieve continuous testing as you would be continuously fixing the scripts instead. We hope that this has been an equally informative and enjoyable read. In the upcoming blog articles, we will be covering the various DevOps testing-related tools that one must know.

An End-to-End DBT Tutorial for Testing

Traditionally, we extract data from different source systems, transform it, and finally load it into data warehouses. With DBT, the raw source data from the different source systems is extracted and loaded directly into the warehouse. DBT then allows us to create new materialized tables that hold the transformed data, built with simple select statements, so that meaningful data is available for business analysis. But what makes DBT stand tall is that it allows both the development and production environment teams to work in the same place by creating models & testing the transformation scripts. Being one of the leading QA companies, we will be mainly focusing on the testing part of the above process in this DBT Tutorial by exploring how we test our models.

Data flow of DBT Tutorial

What is DBT?

If you’re new to the DBT tool, worry not, we have got some basics in this DBT Tutorial for you as well. The Data Build Tool (DBT) is an open-source, command-line data transformation tool with built-in testing support. It mainly focuses on the transformation part of the ELT (Extract, Load, Transform) pipeline. DBT allows both data analysts and data engineers to build models and transform the data in their warehouses to make it meaningful.

New to DBT tool?

Let’s walk you through some of the basics to help you get a quick overview of DBT and the necessary steps that have to be done to create an account. Basically, there are two ways for DBT to be used:

1. DBT CLI(Command Line Interface):

The DBT CLI variant allows users to write their scripts and test them locally by using the DBT command-line tool.

2. DBT Cloud:

DBT Cloud is a hosted variant that streamlines development with an online Integrated Development Environment, giving us an interface to write our DBT test scripts and run them on a schedule.

In this DBT Tutorial, we will be covering the DBT Cloud variant. It is worth noting that the ideas and practices can be extended to the CLI variant as well. So let’s get started.

Data warehouse:

As stated earlier, DBT is used to handle the transformation part of ELT for data warehousing. “But how does it work?” you might ask. Well, DBT helps us create a connection with the warehouse and then lets us write simple SQL select statements against the warehouse to transform the data.

Supported Warehouses:

We have listed the names of the supported warehouses in DBT, and they are

• Postgres

• Redshift

• BigQuery

• Snowflake

• Apache Spark

• Databricks and

• Presto (Partially Supported).

We would be using the Snowflake warehouse in this blog for demonstration purposes.

DBT Tutorial for the Setup:

Now that we have seen an overview of DBT, let’s find out how to set up the DBT cloud and use its interface. If you’ve not yet created an account in DBT, you can sign up for DBT cloud by visiting their Sign up page.

Once you are done with your DBT cloud sign-up process, let’s find out how to set up your very first DBT project.

DBT recommends using GitHub to store our deployments. So the first step would be to create an empty repository in GitHub as shown below.

Setting up DBT tutorial

Enter your repository name and click on ‘Create repository’.

Once you have signed up with your DBT cloud, just select the snowflake warehouse and start configuring it using your snowflake credentials. We have shown a sample connection in the below images for your reference.

Set up Database connection - DBT tutorial

Development Credentials

After entering your warehouse and database details, make sure you have entered the Snowflake user name and password in the development credentials. Next, click on ‘Continue’ to get the confirmation message “Connection Test Success”. Then click on ‘Continue’ again to configure GitHub. Once the GitHub configuration is done, we are good to start with DBT.

DBT Tutorial for its Interface:

DBT has a lot of built-in options that make it easy to access and understand. After setting up DBT, the interface we initially get to see will look like the image below.

DBT Interface

As you can see, DBT might initially look a bit empty and even show a compilation error at the bottom right. So we would have to click on ‘Initialize your Project’ to start our first project. Doing this will make DBT automatically provide us with a project skeleton, as shown in the image below.

Starting of DBT tutorial

We can clearly see that the compilation message displayed at the bottom right has now changed to ‘ready’, implying that it is time to start our project. So let’s get started.

DBT Tutorial Database

First off, let’s find out the scope of some of the functions,

1. The commit button will handle all our git configurations like creating a new branch, pull request, commit, merge, etc.

2. After writing any SQL statements we can check the query then and there by using the dbt “preview data” and “compile SQL” buttons.

3. We can run and test our models by writing simple DBT commands (e.g. dbt debug, dbt run, dbt test).

4. DBT also generates documentation that will come in very handy for analytical purposes as it will contain all the details about the project. You can access it by clicking on the “view docs” option as shown in the above pic.

Note: Before running any model or a test, make sure that you have saved it.

Loading training data into the Warehouse:

So as stated earlier, DBT cloud currently supports only BigQuery, Postgres, Redshift & Snowflake warehouses, and we have decided to use Snowflake for the demo. So to illustrate how to import or load your training data into the warehouse, we will be loading the two tables that we have created.

Snowflake:

It is important to note that we have written the below instructions assuming that you have the default warehouse named COMPUTE_WH in your Snowflake account. So make sure that you have the required privileges to create the objects used below.

Run the following commands in your Snowflake Warehouse SQL Runner:

To create the Database:

Create database STUDENT;
Create schema STUDENT_INFO;
Use schema STUDENT_INFO;

To create the Tables:

Table 1: Student_Profile

Table 1 will contain the personal information of the students.

Create table student_profile (
  S_no int not null,
  Std_id int not null,
  Std_name varchar(45) null,
  Std_dep varchar(45) null,
  DOB datetime null,
  Primary key (Std_id));
  insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('1', '294', 'Afshad', 'CSE', '03/10/1998');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('2', '232', 'Sreekanth', 'Mech', '02/09/1997');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('3', '276', 'John', 'EEE', '10/06/1998');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('4', '303', 'Rahul', 'ECE', '12/05/1997');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('5', '309', 'Sam', 'Civil', '09/04/1999');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('6', '345', 'Ram', 'CA', '03/11/1998');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('7', '625', 'Priya', 'CSE', '03/12/1996');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('8', '739', 'Bob', 'MEC', '06/07/1998');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('9', '344', 'Ganesh', 'Mech', '07/09/2024');
insert into student_profile (S_no, Std_id, Std_name, Std_dep, DOB) values ('10', '123', 'Neha', 'ECE', '12/09/1998');
Table 2: STUDENT_RESULTS

Table 2 will contain the results of those students.

Create table student_results (
  S_no int not null,
  Std_id int not null,
  Marks int null,
  Result varchar(45) null,
  Primary key (Std_id));
insert into student_results (S_no, Std_id, Marks, Result) values ('1', '294', '78', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('2', '232', '56', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('3', '276', '88', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('4', '303', '67', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('5', '309', '38', 'Fail');
insert into student_results (S_no, Std_id, Marks, Result) values ('6', '345', '90', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('7', '625', '87', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('8', '739', '45', 'Fail');
insert into student_results (S_no, Std_id, Marks, Result) values ('9', '344', '97', 'Pass');
insert into student_results (S_no, Std_id, Marks, Result) values ('10', '123', '49', 'Fail');

Now that you have the training data in your warehouse, you are one step away from starting your DBT testing.

DBT Tutorial for the available Models:

In analytics, modeling is the process of transforming data from raw source data into the final transformed data. Typically, data engineers are responsible for building tables that represent your source data, and on top of that, they also build the tables/views that transform the data step by step.

The models are just SQL select statements in your dbt project. These models are created inside the Models directory with the .sql extension.

What’s great about dbt is you don’t need to know DDL/DML to build the tables or views.

Note: The name of the file is used as the model or table name in your warehouse.

Creating Models:

Inside the model’s directory, we have created two simple models for better understanding.

Model's Directory - DBT Tutorial

Model 1:Student_profile.sql

{{ config(materialized='table') }}

with source_data as (
select 
    S_No,
    Std_Id,
    Std_Name,
    Std_Dep,
    DOB
 from 
STUDENT.STUDENT_INFO.STUDENT_PROFILE

)

select *
from source_data

Model 2: Student_Results.sql

{{ config(materialized='table') }}

with source_data as (
select 
    S_No,
    Std_Id,
    Marks,
    Result 
 from 
STUDENT.STUDENT_INFO.STUDENT_RESULTS

)

select *
from source_data

Once you’ve built your models, make sure to save them and run them to load them into the warehouse. You can use the ‘dbt run’ command to run all your created models, or if you want to run one particular model, you can use a command like ‘dbt run -m Student_Profile’.

Successful Completion of DBT function

DBT will show detailed information about the models when you run your models as shown in the above image. So we have successfully completed all the required steps and can move forward to find out how to perform testing in DBT.

DBT Tutorial to perform Testing:

We all know that testing plays a major role in deployment. Tests in analytics are just assertions you have about your data, and when these assertions are met, they are instrumental in helping others trust the data they are looking at.

The great part of DBT testing is that by default it provides four data quality checks. They are Unique, Not Null, Relationships (Referential Integrity), and Accepted Value checks.

Basically, DBT provides us with two types of testing. So let’s take a look at the steps to be done to perform the first type, the Schema test.

1. Schema Test:

Create a YAML file in the same model’s directory with a .yml Extension (Ex. Schema.yml).

Configure your tests (Unique, Not Null, Relationships, and Accepted Values) in the YAML file based on your table data and run it by using the command “dbt test”.

Note: YAML is a human-readable data serialization language that can be easily understood even by non-technical people. It is mainly used for configuration purposes and is supported across programming languages.

We have attached one YAML file for your reference below.

Schema.yml:

version: 2

models:
    - name: student_profile
      description: "Table contains all the information about the students"
      columns:
          - name: std_id
            description: "The primary key for this table"
            tests:
                - unique
                - not_null

    - name: student_results
      description: "Table contains the student results info"
      columns:
          - name: std_id
            description: "The primary key for this table"
            tests:
                - unique
                - not_null
                - relationships:
                    to: ref('student_profile')
                    field: std_id
          - name: result
            description: "Result of a student"
            tests:      
                - accepted_values:
                    values:
                      - Pass
                      - Fail

Once you run this test, you will get detailed information on all your tests. If you need to run only one particular schema test, you can use the command “dbt test -m Student_results”.

2. Data Test:

But if you are looking to write custom test scripts against the data based on your requirements, that can also be done using this type of test.

Start by creating the SQL files with your test name inside the test directory of the DBT project as shown below.

Test Directory

Now, add your test script to your test file.

Duplicate.sql:

select std_id, count(std_id) 
  from {{ref('student_profile')}} 
  group by std_id 
  having count(std_id)>2

referential_integrity.sql:

select std_id 
from {{ref('student_profile')}} 
where std_id not in (
    select std_id 
    from {{ref('student_results')}} )

Note:

Here, the ref() function is used to reference one or more other models from within the script.

Finally, you can run the test by using the “dbt test --data” command.

Running the test by DBT command

DBT Tutorial for Documentation:

As discussed earlier, documentation is one of the greatest features of DBT. So once your project is completed, all you have to do is use the “dbt docs generate” command to generate the documentation.

DBT Documentation

After running the command, just click on “view docs” at the top left of the DBT interface to view the complete information about your project in the form of a document. It even generates a lineage graph like the one shown below for a crystal clear understanding.

Lineage Graph

DBT also supports filters in the lineage graph, which is a great advantage when it comes to analysis.

Conclusion:

We hope that you have enjoyed reading this DBT Tutorial blog and also hope that DBT would be a great addition to your toolkit. To sum things up, DBT helps carry out the most complex and heavy transformations in a simple and easy manner by using simple select statements. It enables both data engineers and data analysts to work in the same environment and provides a unique experience for them to transform the data. The models in DBT enable faster execution and also make it so easy to test and modify. Finally, it even shows detailed information about the project by generating the document along with a lineage graph for clear analysis purposes. All thanks to such awesome features, DBT has been a resourceful tool that has helped us deliver the best software testing services to our clients.

Frequently Asked Questions

  • What is dbt tool used for?

DBT (Data Build Tool) is used to perform the Transform part of the ELT (Extract, Load, Transform) process in a simplified manner by writing transformations as queries and orchestrating them effectively.

  • What is DBT testing?

    DBT testing is the process of ensuring the data you have is reliable by using assertions you have about the data as tests. There are 2 types of dbt testing; namely Schema Test and Data Test.

  • What are the four generic tests that dbt ships with?

    The 4 generic tests that dbt ships with are Unique, Not Null, Relationships (Referential Integrity), and Accepted Value checks.

  • What does dbt stand for data?

    DBT stands for Data Build Tool and as the name suggests, dbt can be used to transform data in warehouses in an effective manner.

  • Can I use dbt for free?

    Yes, you can use dbt Core for free as it is an open-source tool released under an Apache 2.0 License. But dbt Cloud is not a completely free tool as few features are restricted in the free version.

What makes QA Outsourcing the Best Way to Ensure Software Quality?

QA outsourcing has a major role to play in the success of an application. An app might have 100 awesome features, but if there is just one small bug that troubles the customer, that bug is what will define the user’s experience. The one bug will outweigh all the positives and stick out like a sore thumb in the customer’s memory. Once a mindset like that has been created, there is simply no going back; the impact of that bug will forever be a deep dent that spoils the outlook of the product. So an app that is not properly tested will make its creators pay a big price in terms of both finances and reputation.

The solution here would be to employ a high-caliber testing team to take care of the scripted testing, regression testing, and real-world testing so that the in-house team can concentrate on new features instead of focusing on iterative testing. That’s where QA outsourcing comes into play, as one can get a highly skilled and capable team to test their software and also cut costs in their budget.

A Statistical viewpoint of Outsourcing

It’s not just us claiming this to be a successful strategy; the statistics behind our statement are as strong as they can get. In 2019, the global IT outsourcing market reached a mammoth value of $333.7 billion. Even more interesting, it is growing at a CAGR of 4.5% and is expected to reach $397.6 billion by 2025. According to a recent survey, 83% of IT leaders planned to outsource their security to an MSP in 2021. So is it just the big fish at work here? No, 37% of small businesses have also opted for outsourcing and witnessed increased efficiency.

What makes Outsourcing the way to go?

So why are so many companies outsourcing? Is it only because of the cost? Cost is an important reason, but not the only one. Every project has a budget cap, and management is always on the lookout for the most efficient path to success. Outsourcing becomes the obvious solution here because, although it is cost-efficient, it is in no way inferior in quality. When it comes to QA outsourcing, factors such as the average income of a QA Engineer and the hourly rate across regions are what create this cost difference.

Why is QA Outsourcing cost effective

So it is evident that the cost of the work varies from one region to another. But it is also worth noting that cost is the only factor changing across regions, not the quality of work. In fact, we could even go further and say that QA outsourcing increases the level of quality. Now, coming back to the main question: is cost the only major factor? No, the value that QA outsourcing companies like us provide is off the charts in comparison. Even if we assume a team has an unlimited supply of resources at its disposal, QA outsourcing would still be the better option, as outsourcing software testing nullifies the impact of cognitive bias in the process.

As a top QA company ourselves, we are hard on our testing to make it easier for our clients to smoothly launch their new products with the utmost confidence.

An Adaptable choice

Let’s take a deeper dive into the value that we speak so highly of. We have already brushed through the fact that all businesses irrespective of their scale find QA outsourcing to be resourceful. So if it’s a small startup wondering how to make room for software testing in their budget, QA outsourcing has the ability to ensure software quality while still being on a budget. If it is a bigger company, they will be able to get a far more experienced team in the same price bracket through QA outsourcing.

There might even be a short-term need that requires just a week or two of software testing. Building an in-house team or recruiting an employee for such a short period of time will be a cumbersome task that may not yield the best results as well. Every project is unique to itself and would demand a special set of skills from the testing team. As an experienced software testing service provider, we are experts in handling any challenge that comes our way all thanks to our dedicated teams for all types of testing such as manual, automation, performance, and the list could just keep going. On the whole, QA outsourcing can solve all such problems and most importantly save everyone from the unnecessary hassle.

A Proof of Concept

Despite all the benefits we have seen, there are cases where people could be on the fence and still be skeptical about going forward with QA outsourcing. The next move that could potentially make all the doubts fly away would be to approach a QA company and get a Proof of Concept. A POC helps the person witness the QA team’s expertise with the solutions they bring to the table. Most importantly, a POC will instill more confidence as it helps to analyze the QA team beyond just their successful track record.

That is why we trust in our ability and provide a free POC to not just establish the fact that we have what it takes to ensure software quality, but also lay a foundation for the successful partnership that is in the making. You’re just a step away from getting the best software testing services, so take a step in the right direction and contact us now.

Read Data from XML by Using Different Parsers in Java

Reducing the complexity of any process is always the key to better performance. Similarly, parsing XML data to obtain a format that we humans can readily understand is a very important process. A simple analogy for parsing would be language translation. Take the example of two national leaders holding an important meeting. They could either use a common language like English or speak in the languages they are comfortable with and rely on translators. Likewise, XML is in a format that is easily understood by a computer, but once the information has been parsed, we will be able to read data from XML and understand it with ease.

As one of the leading QA companies in the market, we use different parsers based on our needs, so let’s explore which parser would be the perfect match for yours by understanding how they work. But before we explore how to read data from XML, let’s get introduced to XML itself, as some readers may not know much about it.

An Introduction to the XML:

XML stands for Extensible Markup Language, and it is primarily used to describe and organize information in ways that are easily understandable by both humans and computers. It is a subset of the Standard Generalized Markup Language (SGML) that is used to create structured documents. In XML, every block is considered an “Element”. The tags are not pre-defined; they are called “self-descriptive” tags as XML enables us to create our own customized tags. It also supports node-to-node relationships, bridging the readability gap between humans and machines.

XML is designed to store and transfer data between different operating systems without any data loss. It is not dependent on any platform or language. One could say XML is similar to HTML in that the markup itself is neither the frontend nor the backend; for example, HTML written on the backend is passed to the frontend, where it is rendered as a webpage.

Prerequisite:

There are a few basic prerequisites that should be ready in order to read data from XML, and we have listed them below,

1. Install any IDE (Eclipse/IntelliJ)

2. Make sure Java is installed

3. Create a Java project in the IDE

4. Create an XML file with the .xml extension

XML file creation:

So the first three steps are pretty straightforward, and you may not need any help to get it done. So let’s directly jump to the fourth and final prerequisite step, where we have to create an XML file manually in our Java project.

– Navigate to the File tab in your IDE

– Create a new file

– Save it as “filename.xml”

The XML file will appear under your Java project. In the same way, we can create an XML file on our local machine using the .xml file extension. Later, we can use this XML file path in our program for parsing the XML. Now, let’s look at the technologies for parsing XML.

XML Parsing:

XML parsing is nothing but the process of converting XML data into a form our programs (and we humans) can readily work with. XML parsing can be done by making use of different XML parsers. But what do these parsers do? Well, parsers read the XML document and expose its structure and contents, paving the way for using XML data in our programs. The most commonly used parsers are DOM, SAX, StAX, XPath, and JDOM. So let’s take a look at each parser one by one.

Using DOM Parser to Read data from XML:

DOM stands for Document Object Model. DOM is a parser that is both easy to learn and use. It acts as an interface to access and modify the nodes in XML. DOM works by loading the entire XML file into memory and walking through it node by node in sequential order to parse the XML. DOM can be used to identify both the content and the structure of the document. The setback with DOM is that it is slow and consumes a large amount of memory because of how it works, since everything in the XML file becomes a node in memory. So DOM is an optimal choice for parsing smaller files rather than very large XML files. Let’s see how to parse the below XML by using the DOM parser.

Here is the XML File that we need to parse:

<?xml version = "1.0"?>
<Mail>
    <email Subject="Codoid Client Meeting Reminder">
        <from>Priya</from>
        <empid>COD11</empid>
        <Designation>Software Tester</Designation>
        <to>Karthick</to>
        <body>We have a meeting tomorrow at 8 AM. Please be available</body>
    </email>
    <email Subject="Reg: Codoid Client Meeting Reminder">
        <from>Karthick</from>
        <empid>COD123</empid>
        <Designation>Junior Software Tester</Designation>
        <to>Priya</to>
        <body>Thanks for reminding me about the meeting. Will join on time</body>
    </email>
</Mail>
DOM Parser:
package com.company;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import java.io.File;
import java.io.IOException;
public class DOMParser {
    public static void main(String[] args) throws ParserConfigurationException, IOException, SAXException {
        try {
            File file = new File("E:\\Examp\\src\\com\\company\\xmldata.xml");
            DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = documentBuilderFactory.newDocumentBuilder();
            Document doc = builder.parse(file);
            doc.getDocumentElement().normalize();
            System.out.println("Root element::  " + doc.getDocumentElement().getNodeName());
            NodeList nList = doc.getElementsByTagName("email");
            for (int temp = 0; temp < nList.getLength(); temp++) {
                Node nNode = nList.item(temp);
                System.out.println("\nCurrent Element :" + nNode.getNodeName());
                if (nNode.getNodeType() == Node.ELEMENT_NODE) {
                    Element eElement = (Element) nNode;
                    System.out.println("Email Subject : "
                            + eElement.getAttribute("Subject"));
                    System.out.println("From Name : "
                            + eElement
                            .getElementsByTagName("from")
                            .item(0)
                            .getTextContent());
                    System.out.println("Designation : "
                            + eElement
                            .getElementsByTagName("Designation")
                            .item(0)
                            .getTextContent());
                    System.out.println("Employee Id : "
                            + eElement
                            .getElementsByTagName("empid")
                            .item(0)
                            .getTextContent());
                    System.out.println("To Name : "
                            + eElement
                            .getElementsByTagName("to")
                            .item(0)
                            .getTextContent());
                    System.out.println("Email Body : "
                            + eElement
                            .getElementsByTagName("body")
                            .item(0)
                            .getTextContent());
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

We have used the DocumentBuilderFactory API to produce the object tree from the XML, after which we’ve used the Document interface to access the XML document’s data. As stated earlier, the node is the base datatype in DOM. From the code, we can see that the getDocumentElement() method returns the root element, and the getElementsByTagName() method returns the nodes with that particular tag name.

Using the SAX Parser to Read data from XML:

The SAX parser is a simple event-based API that parses the XML document line by line using a handler class. Everything in XML is considered to be “tokens” in SAX. Unlike the DOM parser that we saw earlier, SAX does not load the entire XML file into memory. It also doesn’t create any object representation of the XML document. Instead, it triggers events when it encounters the opening tag, closing tag, and character data in an XML file. It reads the XML from top to bottom, identifies the tokens, and invokes the corresponding callback methods in the handler. Because of this top-to-bottom approach, tokens are parsed in the same order in which they appear in the document. Thanks to this way of working, SAX is faster and uses less memory than the DOM parser.

SAX Parser:
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import java.io.File;

public class SAXParserExample {
    public static void main(String[] args) {
        try {
            File file = new File("E:\\Examp\\src\\com\\company\\xmldata.xml");
            SAXParserFactory saxParserFactory = SAXParserFactory.newInstance();
            SAXParser saxParser = saxParserFactory.newSAXParser();
            SaxHandler sax = new SaxHandler();
            saxParser.parse(file, sax);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

In the above code, we point to the XML file we created earlier by giving its path. The SAXParserFactory creates a new SAXParser instance, after which we create an object of our handler class and pass it to parse() so it can process the XML data. Now, let’s see how the handler class and its methods are created.

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

class SaxHandler extends DefaultHandler {
    boolean from=false;
    boolean to=false;
    boolean Designation= false;
    boolean empid= false;
    boolean body=false;
    StringBuilder data=null;
@Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes){

    if(qName.equalsIgnoreCase("email")){
        String Subject= attributes.getValue("Subject");
        System.out.println("Subject::  "+Subject);
    }
    else if(qName.equalsIgnoreCase("from")){
        from=true;
    }
    else if(qName.equalsIgnoreCase("Designation")){
        Designation=true;
    }
    else if(qName.equalsIgnoreCase("empid")){
        empid=true;
    }
    else if(qName.equalsIgnoreCase("to")){
        to=true;
    }
    else if(qName.equalsIgnoreCase("body")) {
        body = true;
    }
    data=new StringBuilder();
}
@Override
      public void endElement(String uri, String localName, String qName){
      if(qName.equalsIgnoreCase("email")){
          System.out.println("End Element::  "+qName);
      }
}
    @Override
   public void characters(char ch[], int start, int length){
//    data.append(new String(ch,start,length));
        if(from){
            System.out.println("FromName::  "+new String(ch,start,length));
            from=false;
        }
        else if(Designation){
            System.out.println("Designation::  "+new String(ch,start,length));
            Designation=false;
        }
        else if(empid){
            System.out.println("empid::  "+new String(ch,start,length));
            empid=false;
        }
        else if(to){
            System.out.println("to::  "+new String(ch,start,length));
            to=false;
        }
        else if(body){
            System.out.println("body::  "+new String(ch,start,length));
            body=false;
        }
}
}

Our ultimate goal is to read data from XML using the SAX parser. So in the above example, we have created our own handler class by extending the DefaultHandler class, which has various parsing callbacks. The 3 most prevalent methods of the DefaultHandler class are:

1. startElement() – It receives the notification of the start of an element. It has 4 parameters, which are explained below.

startElement(String uri, String localName,String qName, Attributes attributes)

uri – The Namespace URI, or the empty string if the element has no Namespace URI.

localName – The local name (without prefix) or the empty string if Namespace processing is not being performed.

qName – The qualified name (with prefix) or the empty string if qualified names are not available.

attributes – The attributes attached to the element. If there are no attributes, it shall be an empty attributes object.

The startElement() is used to identify the first element of the XML as it creates an object every time a start element is found in the XML file.

2. endElement() – So we have already seen about startElement(), and just as the name suggests, endElement() receives the notification of the end of an element.

endElement (String uri, String localName, String qName) 

uri – The Namespace URI, or the empty string if the element has no Namespace URI

localName – The local name (without prefix) or the empty string if Namespace processing is not being performed.

qName – The qualified name (with prefix) or the empty string if qualified names are not available.

The endElement() is used to check the end element of the XML file.

3. characters() – Receives the notification of character data inside an element.

characters (char ch[], int start, int length) 

ch – The characters.

start – The start position in the character array.

length – The number of characters that have to be used from the character array.

characters() is used to read the character data inside an element. The parser may deliver the data in multiple chunks, and characters() is executed each time a chunk of character data is found in the XML document, which is why the data is often accumulated with append() into a StringBuilder before being used.

Using the JDOM Parser to Read data from XML:

The JDOM parser is a combination of the DOM and SAX parsers that we have already seen. It’s an open-source, Java-based library. The JDOM parser can be as fast as SAX, and it also doesn’t require much memory to parse the XML file. In JDOM, we can even switch between the two parsers easily, for example from DOM to SAX or vice versa. Its main advantage is that it returns the tree structure of all elements in the XML without heavily impacting the application’s memory.

import org.jdom2.Attribute;
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.JDOMException;
import org.jdom2.input.SAXBuilder;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class JDOMParser {
    public static void main(String[] args) throws JDOMException, IOException {
        try{
            File file = new File("E:\\Examp\\src\\com\\company\\xmldata.xml");
            SAXBuilder saxBuilder = new SAXBuilder();
            Document doc = saxBuilder.build(file);
            System.out.println("Root element :" + doc.getRootElement().getName());
            Element ele = doc.getRootElement();
            List<Element> elementList = ele.getChildren("email");
            for(Element emailelement: elementList){
                System.out.println("Current element::  "+emailelement.getName());
                Attribute attribute = emailelement.getAttribute("Subject");
                System.out.println("Subject::  "+attribute.getValue());
                System.out.println("From::  "+emailelement.getChild("from").getText());
                System.out.println("Designation::  "+emailelement.getChild("Designation").getText());
                System.out.println("Empid::  "+emailelement.getChild("empid").getText());
                System.out.println("To::  "+emailelement.getChild("to").getText());
                System.out.println("Body::  "+emailelement.getChild("body").getText());
            }
        }
        catch (Exception e){
            e.printStackTrace();
        }
    }
}

We have used the SAXBuilder class to transform the XML into a JDOM Document. The getRootElement() method finds the root element of the XML; we then store all the child elements under it in a list and iterate over that list. Finally, we use getValue() to read the Subject attribute and getText() to read the value of each child element.

Using the StAX Parser to Read data from XML:

The StAX Parser is similar to the SAX Parser with one major difference: it offers two APIs (a cursor-based API and an iterator-based API) to parse the XML. The StAX parser is also known as a pull API, and it gets that name from the fact that the application pulls information from the XML only when it needs it. The other standout aspect of the StAX parser is that it can both read and write XML. Every construct in the XML is treated as an "event", and below is the code we require to parse the XML file using the StAX Parser.

import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.events.Attribute;
import javax.xml.stream.events.Characters;
import javax.xml.stream.events.EndElement;
import javax.xml.stream.events.StartElement;
import javax.xml.stream.events.XMLEvent;
import java.io.FileReader;
import java.util.Iterator;

public class StaxParser {
    public static void main(String[] args) {
        // Flags that remember which element's text we are currently inside
        boolean emailFrom = false;
        boolean empId = false;
        boolean designation = false;
        boolean emailTo = false;
        boolean emailBody = false;
        try {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLEventReader eventReader =
                    factory.createXMLEventReader(new FileReader("E:\\Examp\\src\\com\\company\\xmldata.xml"));
            while (eventReader.hasNext()) {
                XMLEvent event = eventReader.nextEvent();
                switch (event.getEventType()) {
                    case XMLStreamConstants.START_ELEMENT:
                        StartElement startElement = event.asStartElement();
                        String qName = startElement.getName().getLocalPart();
                        if (qName.equalsIgnoreCase("email")) {
                            System.out.println("Start Element : email");
                            // The first attribute on <email> is its Subject
                            Iterator<Attribute> attributes = startElement.getAttributes();
                            String subject = attributes.next().getValue();
                            System.out.println("Subject: " + subject);
                        } else if (qName.equalsIgnoreCase("from")) {
                            emailFrom = true;
                        } else if (qName.equalsIgnoreCase("empid")) {
                            empId = true;
                        } else if (qName.equalsIgnoreCase("Designation")) {
                            designation = true;
                        } else if (qName.equalsIgnoreCase("to")) {
                            emailTo = true;
                        } else if (qName.equalsIgnoreCase("body")) {
                            emailBody = true;
                        }
                        break;
                    case XMLStreamConstants.CHARACTERS:
                        Characters characters = event.asCharacters();
                        if (emailFrom) {
                            System.out.println("From: " + characters.getData());
                            emailFrom = false;
                        }
                        if (empId) {
                            System.out.println("EmpId: " + characters.getData());
                            empId = false;
                        }
                        if (designation) {
                            System.out.println("Designation: " + characters.getData());
                            designation = false;
                        }
                        if (emailTo) {
                            System.out.println("To: " + characters.getData());
                            emailTo = false;
                        }
                        if (emailBody) {
                            System.out.println("Body: " + characters.getData());
                            emailBody = false;
                        }
                        break;
                    case XMLStreamConstants.END_ELEMENT:
                        EndElement endElement = event.asEndElement();
                        if (endElement.getName().getLocalPart().equalsIgnoreCase("email")) {
                            System.out.println("End Element : email");
                            System.out.println();
                        }
                        break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

In StAX, we have used the XMLEventReader interface, which lets us iterate over the parsing events one by one and also peek at the next event before consuming it.

The StartElement interface gives access to the start elements in the XML, and the asStartElement() method returns the event as a StartElement. It is important to note that an exception is thrown if the current event is not actually a start element.

All character events are reported using the Characters interface. If you are wondering what gets reported as a character event, the answer is that all text and whitespace in the document is reported as Characters events.

The asCharacters() method returns the event as a Characters object, and we can read its text using the getData() method. Note that a Characters event carries only the text content of the document; it does not include the start and end element events themselves.

The EndElement interface represents the closing tag of an element in the XML document, and the asEndElement() method returns the event as an EndElement.
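The example above uses the iterator-based API through XMLEventReader. For completeness, here is a minimal sketch of the cursor-based API using XMLStreamReader (from the same javax.xml.stream package), assuming the same xmldata.xml file; the cursor walks through the document one event at a time and exposes the data through simple getter calls.

XMLInputFactory factory = XMLInputFactory.newInstance();
XMLStreamReader reader =
        factory.createXMLStreamReader(new FileReader("E:\\Examp\\src\\com\\company\\xmldata.xml"));
while (reader.hasNext()) {
    int event = reader.next(); // the cursor advances by one event per call
    if (event == XMLStreamConstants.START_ELEMENT
            && reader.getLocalName().equalsIgnoreCase("email")) {
        // Reads the Subject attribute directly from the current <email> element
        System.out.println("Subject: " + reader.getAttributeValue(null, "Subject"));
    }
}
reader.close();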

Using the XPath Parser to Read data from XML:

XPath itself is a query language used to select nodes from an XML document. In Java, we first parse the XML into a DOM Document and then evaluate an XPath query string against it. Now let's take a look at an example code for better understanding.

File inputFile = new File("E:\\Examp\\src\\com\\company\\xmldata.xml");
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(inputFile);
doc.getDocumentElement().normalize();
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "/Mail/Email";
NodeList nodeList = (NodeList) xPath.compile(expression).evaluate(doc, XPathConstants.NODESET);
for (int i = 0; i < nodeList.getLength(); i++) {
    Node nNode = nodeList.item(i);
    System.out.println("\nCurrent Element :" + nNode.getNodeName());
    if (nNode.getNodeType() == Node.ELEMENT_NODE) {
        Element eElement = (Element) nNode;
        System.out.println("From : " + eElement.getElementsByTagName("from").item(0).getTextContent());
        System.out.println("EmpId : " + eElement.getElementsByTagName("empid").item(0).getTextContent());
        System.out.println("Designation : " + eElement.getElementsByTagName("Designation").item(0).getTextContent());
        System.out.println("TO : " + eElement.getElementsByTagName("to").item(0).getTextContent());
        System.out.println("Body : " + eElement.getElementsByTagName("body").item(0).getTextContent());
    }
}

In the above code, we used XPathFactory to create a new XPath instance. We then wrote the XPath query for the XML data and stored it as a String; this string is called the "XPath expression".

Next, we compiled the XPath expression using the xPath.compile() method, evaluated it with the evaluate() method to get a NodeList, and iterated over the nodes in that list.

We have used the getNodeName() method to print the name of each matched node.

So once the XML document has been parsed, we can reuse the Document and the XPath object across all our methods instead of parsing the file again.
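As a rough sketch of that reuse, the parsed Document and the XPath instance can be kept as fields and queried through a small helper method; the class and method names below are illustrative and assume the same javax.xml.xpath and org.w3c.dom imports used earlier.

public class XPathReader {
    private final Document doc;
    private final XPath xPath = XPathFactory.newInstance().newXPath();

    public XPathReader(Document doc) {
        this.doc = doc; // parse the file once, then reuse the Document everywhere
    }

    // Evaluates any XPath expression against the already-parsed document
    public NodeList select(String expression) throws XPathExpressionException {
        return (NodeList) xPath.compile(expression).evaluate(doc, XPathConstants.NODESET);
    }
}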

Conclusion

We hope you have found the parser that fits your requirements and, in the process, also enjoyed reading this article. To sum things up, we have seen how each parser works so that the pros and cons of each type are clear. Choosing the apt parser might seem like a very small aspect when compared to the entire scale of the project. But as one of the best software testing service providers, we believe in attaining maximum efficiency in every process, be it small or big.

An Introductory Action Class Guide for Beginners

An Introductory Action Class Guide for Beginners

If you are going to test an application using Selenium WebDriver, you will most definitely face scenarios where you need to trigger keyboard and mouse interactions. This is where our Action Class Guide comes into the picture. Basically, the Action class is a built-in feature provided by Selenium for simulating various keyboard and mouse events. With the help of the Action class, you will be able to trigger mouse events like Double Click, Right Click, and many more. The same goes for the keyboard as well; you can trigger the CTRL key, CTRL + other keys, and other such combinations. As one of the best QA companies, we have been able to use the Action class to its fullest by applying it in various combinations as per the project needs. But before exploring such Action class implementations, let's take a look at some basics.

Action Class Guide for MouseActions

So we wanted to start our Action Class Guide by listing some of the most frequently used mouse events available in the Action class.

click() – Clicks on the particular WebElement (Normal Left Click)

contextClick() – Right clicks on the particular WebElement

doubleClick() – Performs a double click on the WebElement

dragAndDrop (WebElement source, WebElement target) – Clicks and holds the web element to drag it to the targeted web element where it is released.

dragAndDropBy(WebElement source, int xOffset, int yOffset) – Clicks and drags the element to the given location using the offset values

moveToElement(WebElement) – Moves the mouse to the web element and holds it on the location (In simple words, the mouse hovers over an element)

moveByOffset(int xOffSet, int yOffSet) – Moves the mouse from its current position by the given horizontal (xOffset) and vertical (yOffset) distances.

clickAndHold(WebElement element) – Clicks and holds an element without releasing it.

release() – Releases a held mouse click.

Action Class Guide for Keyboard Actions

Same as above, we have listed some frequently used keyboard events available in the Action class,

keyDown(WebElement, java.lang.CharSequence key) – To press the key without releasing it on the WebElement

keyUp(WebElement, java.lang.CharSequence key) – To release the keystroke on the WebElement

sendKeys(value) – To enter values on WebElements like textboxes

So by including these methods, you can smoothly run your script and execute the code without any issues….

Absolutely not, we’re just kidding. We still have to gather all the action methods and execute them under the Action class.

build() – It is a method that chains all the queued actions together into a single composite action that is ready to be performed.

So the above method is used to get the actions that are to be executed ready.

perform() – It is a method used to compile and also execute the actions of the Action class.
A perform() can be called alone, without a build(), when only a single action is performed.
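To make that difference concrete, here is a small sketch (assuming a WebDriver instance named driver and two already-located WebElements, source and target, which are placeholders): a single action can be performed directly, while a chain of actions is normally finished with build() before perform().

Actions actions = new Actions(driver);

// Single action: perform() alone is enough
actions.click(source).perform();

// Composite action: chain the steps, build() them into one action, then perform() it
actions.clickAndHold(source)
       .moveToElement(target)
       .release()
       .build()
       .perform();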

Action Class Guide for Performing actions

Now that we have gone through the basics, let’s find out how to implement the Action Classes in Code.

Step1:

Import the Interaction package that contains the Action Class. You can use the below line for importing,

“import org.openqa.selenium.interactions.Actions;”

Step2:

Create an object of the Action class and pass the WebDriver reference as the argument,
Actions action = new Actions(driver);

Step3:

Once the above two steps have been completed, you can start writing your script using the Action classes and the different methods available.

Let’s proceed further and take a look at the implementation and uses of the actions available for both the mouse & keyboard.

1. SendKeys(WebElement element, value)

As stated above, this action method is mainly used to send a character sequence into a textbox. But it is also worth noting that we can use it to send keystrokes of different key combinations like CTRL+T, Enter, and so on.

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.util.concurrent.TimeUnit;
public class SendKeys {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver","D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://www.flipkart.com/");
        driver.manage().window().maximize();
        Actions action = new Actions(driver);
        WebElement eleSearchBox = driver.findElement(By.cssSelector("input[placeholder='Search for products, brands and more']"));
        driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
        action.sendKeys(eleSearchBox, "Iphone").build().perform();
        action.sendKeys(Keys.ENTER).build().perform();
        driver.close();
    }
}

By using the sendKeys method, the search is triggered with a keystroke instead of clicking on the Search button. That is, we can clearly see in the code that Keys.ENTER comes from the Keys class, which provides constants for the various keyboard keys.

2. MoveToElement(WebElement element)

You might be in a position to test if an element changes color, shows some additional information, or performs the intended action when the mouse hovers over it. So let’s take a look at the code and find out how you can make it happen.

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.util.concurrent.TimeUnit;
public class MouseHover {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("http://www.leafground.com/");
        driver.manage().window().maximize();
        Actions action = new Actions(driver);
        driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
        JavascriptExecutor js = (JavascriptExecutor) driver;
        js.executeScript("window.scrollBy(0,170)", "");
        driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
        new WebDriverWait(driver, 20).until(ExpectedConditions.elementToBeClickable(By.xpath("//img[@alt='mouseover']"))).click();
        WebElement eleTabCourses = driver.findElement(By.xpath("//a[normalize-space()='TestLeaf Courses']"));
        action.moveToElement(eleTabCourses).build().perform();
        driver.close();
    }
}

We have written the above code so that it first waits for the image to become clickable. Once the page loads, the image gets clicked, and the mouse then hovers over the 'TestLeaf Courses' element.

3. DragAndDrop(source, target)

So there are basically two drag-and-drop variants that we will be seeing in this Action Class Guide. This first one lets us specify a target element onto which the source element is dragged and dropped. Now let's see the code to execute the dragAndDrop action,

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.util.concurrent.TimeUnit;
public class DragAndDrop {
        public static void main(String[] args) {
            WebDriver driver;
            System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
            driver = new ChromeDriver();
            driver.get("http://www.leafground.com/");
            driver.manage().window().maximize();
            Actions action = new Actions(driver);
            driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
            driver.findElement(By.xpath("//h5[normalize-space()='Droppable']")).click();
            WebElement eleSource = driver.findElement(By.xpath("//div[@id='draggable']"));
            WebElement eleTarget = driver.findElement(By.xpath("//div[@id='droppable']"));
            action.dragAndDrop(eleSource,eleTarget).build().perform();
            driver.close();
        }
}

To drag an element to the drop area, the locators for the source and the target are captured first. They are then passed into the dragAndDrop action method.

4. DragAndDropBy(WebElement source,int xOffset, int yOffSet )

So we have already seen how to drag and drop an element onto a targeted element, but what if we would like to drag and drop an element by a defined offset? Let's take a look at the code and find out how.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.util.concurrent.TimeUnit;
public class DragAndDropOffset {
        public static void main(String[] args) throws InterruptedException {
            WebDriver driver;
            System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
            driver = new ChromeDriver();
            driver.get("http://www.leafground.com/");
            driver.manage().window().maximize();
            Actions action = new Actions(driver);
            driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
            driver.findElement(By.xpath("//img[@alt='Draggable']")).click();
            WebElement eleDrag= driver.findElement(By.xpath("//div[@id='draggable']"));
            action.dragAndDropBy(eleDrag,200,130).build().perform();
            Thread.sleep(2000);
            driver.close();
        }
    }

In the above code, we have used the DragAndDropBy method in a way that it clicks and moves the element to the offset position as specified and releases it once the target location is reached.

5. Click(WebElement element)

There is no way to test anything without being able to use the left click button. So let’s find out the code to execute this very basic and necessary functionality.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
public class LeftClick {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://www.google.com/");
        driver.manage().window().maximize();
        Actions actions = new Actions(driver);
        WebElement eleInput = driver.findElement(By.name("q"));
        actions.sendKeys(eleInput, "www.codoid.com").build().perform();
        WebElement BtnSearch = driver.findElement(By.xpath("//div[@class='FPdoLc lJ9FBc']//input[@name='btnK']"));
        actions.click(BtnSearch).build().perform();
        driver.close();
    }
}

6. ContextClick(WebElement element)

Though the right-click is not used as commonly as the left click, it is still a very basic functionality every tester must know. So let’s take a look at the code to find out how to implement it.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.util.concurrent.TimeUnit;
public class RightClick {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("http://demo.guru99.com/test/simple_context_menu.html");
        driver.manage().window().maximize();
        Actions action = new Actions(driver);
        driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
        WebElement eleRightClick = driver.findElement(By.xpath("//span[@class='context-menu-one btn btn-neutral']"));
        action.contextClick(eleRightClick).perform();
        driver.close();
    }
}

It is worth mentioning here that we have not used ‘build’ anywhere in the above code. Instead, we have used ‘perform’ to execute the functionality.

7. DoubleClick(WebElement element)

Just like the previous functionalities we have seen in the Action Class Guide, double-click is another basic functionality that is vital to any form of testing. So let’s jump straight to the code.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.util.concurrent.TimeUnit;
public class DoubleClick {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("http://demo.guru99.com/test/simple_context_menu.html");
        driver.manage().window().maximize();
        Actions action = new Actions(driver);
        driver.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
        WebElement eleDoubleClick = driver.findElement(By.xpath("//button[normalize-space()='Double-Click Me To See Alert']"));
        action.doubleClick(eleDoubleClick).perform();
        driver.quit();
    }
}

8. KeyDown(WebElement element, Modifier Key) & KeyUp (WebElement element, Modifier Key)

CTRL, SHIFT, and ALT are a few examples of modifier keys that we all use on a day-to-day basis. For example, we hold down Shift if we want to type something in caps. So when we use the keyDown action, it holds a particular key down until we release it using the keyUp action. With that said, let's see an example code in which we have used these functionalities,

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
public class KeyDownAndKeyUp {
    public static void main(String[] args) {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://www.google.com/");
        driver.manage().window().maximize();
        Actions actions = new Actions(driver);
        WebElement eleInput = driver.findElement(By.name("q"));
        actions.click(eleInput).build().perform();
        actions.keyDown(eleInput, Keys.SHIFT);
        actions.sendKeys("eiffel tower");
        actions.keyUp(eleInput,Keys.SHIFT);
        actions.sendKeys(Keys.ENTER);
        actions.build().perform();
        driver.close();
    }
}

So if you have taken a look at the code, it is evident that once we have used the keyDown method, the Shift key was pressed down. So the text 'eiffel tower' that was typed on the next line gets capitalized. Now that keyDown has served its purpose in this scenario, we have used the keyUp method to release the key.

9. MoveByOffset(int xOffSet, int yOffSet)

As seen above, moveByOffset(int x, int y) is used when we need to click on a particular location. We can do this by specifying offsets for the x and y axes. Now let's explore the code that we have used for execution.

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
public class MoveByOffSet {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://www.google.com/");
        driver.manage().window().maximize();
        Actions actions = new Actions(driver);
        WebElement eleInput = driver.findElement(By.name("q"));
        actions.sendKeys(eleInput, "Eiffel").build().perform();
        actions.sendKeys(Keys.ENTER).build().perform();
        Thread.sleep(2000);
        actions.moveByOffset(650, 300).contextClick().build().perform();
        driver.close();
    }
}

10. ClickAndHold(WebElement element)

The action method that we will be seeing now in our Action Class Guide can be used when an element has to be clicked and held for a certain period of time.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
public class ClickAndHold {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://www.google.com/");
        driver.manage().window().maximize();
        Actions actions = new Actions(driver);
        WebElement eleInput = driver.findElement(By.name("q"));
        actions.sendKeys(eleInput, "Flower").build().perform();
        actions.moveByOffset(500,300).click().build().perform();
        Thread.sleep(2000);
        WebElement BtnSearch = driver.findElement(By.xpath("//div[@class='FPdoLc lJ9FBc']//input[@name='btnK']"));
        actions.clickAndHold(BtnSearch).build().perform();
        driver.close();
    }
}

In the above code, we have first opened Google and then searched using ‘Flower’ as the input and then performed a left-click action at the defined location. After which, we have performed a click and hold action on the search button.

Note:

In addition to that, if we need the click to be released, we can use the release method to release the clicked element before using ‘build’.

actions.clickAndHold(BtnSearch).release().build().perform();

Uploading a File Using SendKeys Method:

We know that the SendKeys action method can be used to send a character sequence. But one interesting way to use it would be to upload a normal document. If you’re wondering how that is possible, let’s take a look at the code,

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
public class FileUpload {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver;
        System.setProperty("webdriver.chrome.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("http://www.leafground.com/");
        driver.manage().window().maximize();
        driver.findElement(By.xpath(" //img[@alt='contextClick']")).click();
        WebElement eleSendFile = driver.findElement(By.cssSelector("input[name='filename']"));
        eleSendFile.sendKeys("C:\\Users\\OneDrive\\Desktop\\Sample.txt");
        Thread.sleep(2000);
        driver.close();
    }
}

In the above code, we have used the sendKeys action method to enter the file's path so that the file to be uploaded can be located. The element which we have used here is a file input element (<input type="file">), as sendKeys can upload a file only through this type of element.

Note:

Just in case you are not using Google Chrome and would like to execute these action class methods in Mozilla Firefox, all you have to do is add the GeckoDriver, set its system property, and initialize the driver object for it using the lines below (this also requires importing org.openqa.selenium.firefox.FirefoxDriver).

System.setProperty("webdriver.gecko.driver", "D:\\ActionClass\\src\\test\\java\\Drivers\\geckodriver.exe");
driver = new FirefoxDriver();

Conclusion:

We hope you have enjoyed reading our introductory Action Class guide, which has covered the basic action methods, starting from a simple 'Click' to uploading a file using 'sendKeys'. The Action class helps one to perform all the basic user actions on a web app. We also looked at the release() method, which can be used to release any element that was clicked or held. perform() can be used alone, but if there are multiple composite actions, build() and perform() should be used together. As a leading test automation company, we are well aware of how resourceful Action classes can be when used effectively. Once you have understood the basics, you will be able to perform many more actions by using combinations of the action methods established in this blog.