6 Great Tips for Website Testing You Need to Apply

Website testing means checking a website for potential bugs before making it publicly available on the internet. It assesses functionality, usability, compatibility with other systems, security, and performance.

It also evaluates the web application for security vulnerabilities and flaws in the website’s design. Moreover, it checks how well the site works for users with physical or cognitive disabilities and whether it can handle many users simultaneously.

Here are six key areas of website testing you need to know:

1. Performance

Make sure your site works under all loads. Performance testing involves the following checks (a small measurement sketch follows the list):

  • Website application response times at different connection speeds
  • Testing to see if your website handles normal loads
  • Testing to make sure your site can handle peak loads
  • Testing your site to find its breaking point at peak loads
  • Testing to see if a crash occurs when the site is pushed beyond normal loads at peak time
  • Making sure optimization techniques like gzip compression, browser caching, and server-side caching are enabled to reduce load times
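To make these checks repeatable, you can script a rough measurement. The sketch below is a minimal example, not a substitute for a dedicated load-testing tool: it assumes Node 18+ (for the built-in fetch API) and uses a placeholder URL that you would replace with the page under test.

// perf-check.js – measure average response time at a few concurrency levels
const url = 'https://example.com/'; // hypothetical page under test

async function timedRequest() {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the timing includes the transfer
  return performance.now() - start;
}

async function runAtConcurrency(n) {
  const timings = await Promise.all(Array.from({ length: n }, timedRequest));
  const avg = timings.reduce((sum, ms) => sum + ms, 0) / n;
  console.log(`${n} parallel requests -> average ${avg.toFixed(0)} ms`);
}

(async () => {
  for (const level of [1, 10, 50]) { // "normal" load vs. heavier loads
    await runAtConcurrency(level);
  }
})();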

2. Compatibility

Software testing services and mobile testing services must run these checks regularly:

  • Browser Compatibility: Observe whether your web application appears correctly across browsers and whether JavaScript, AJAX, and authentication work fine. Check for mobile browser compatibility, too.
  • Web Elements: Elements like buttons, text fields, and other interface components can differ depending on the browser’s operating system. Be aware of these distinctions when building your website to keep it running smoothly for all users.

3. Database

Another critical component of a web application is its database, and you must ensure that you test it thoroughly. You need to perform the following activities for a better evaluation (a small example follows the list):

  • Test if any errors appear while executing queries
  • Check the integrity of your data as you create, update, and delete entries in your database
  • Test the response time of your queries and tune them if necessary
  • Ensure that the data retrieved from your database is displayed accurately in your web application
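Here is a minimal sketch of such database checks using the mysql2 package (npm install mysql2). The connection details and the orders table are hypothetical placeholders for your own schema.

// db-check.js – run a query, time it, and verify the returned data
const mysql = require('mysql2/promise');
const assert = require('node:assert');

(async () => {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'tester', password: 'secret', database: 'shop',
  });

  // 1. The query should execute without errors; measure its response time.
  const start = performance.now();
  const [rows] = await conn.execute('SELECT id, total FROM orders WHERE total > ?', [0]);
  console.log(`Query returned ${rows.length} rows in ${(performance.now() - start).toFixed(0)} ms`);

  // 2. Basic integrity check: every retrieved row should satisfy the condition
  //    that the application later displays to the user.
  for (const row of rows) {
    assert.ok(row.total > 0, `Order ${row.id} has a non-positive total`);
  }

  await conn.end();
})();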

4. Interface

Software and mobile testing services need to perform these steps (a brief sketch follows the list):

  • Test the application layer by sending requests to the database and displaying the output on the client’s side
  • Test the web server layer by sending requests to the application
  • Test the database layer by executing the queries sent from the application
  • Test how the system behaves when one of these components is down and ensure an appropriate message is returned to the end user
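As an illustration of the last point, here is a minimal sketch that checks whether a hypothetical /api/orders endpoint fails gracefully when one of its components is down. The URL and the assertions are assumptions you would adapt to your own system.

// interface-check.js – verify graceful degradation of an API endpoint
const assert = require('node:assert');

(async () => {
  const res = await fetch('https://staging.example.com/api/orders');

  if (!res.ok) {
    // When a dependency is down, the server should return a 5xx status with a
    // friendly, user-readable message instead of a raw stack trace.
    const body = await res.text();
    assert.ok(res.status === 503 || res.status === 500, `unexpected status ${res.status}`);
    assert.ok(!body.includes('at Object.'), 'response appears to leak a stack trace');
    console.log('Endpoint failed gracefully with status', res.status);
  } else {
    console.log('Endpoint healthy with status', res.status);
  }
})();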

5. Usability

Usability testing has become a vital part of any web-based project. A tester or a focus group resembling the website’s target audience can perform it. While trying the navigation, look at the navigation menus, buttons, and links on the site; they should be easy to find and should work well on every webpage.

While reviewing the content, look for correct spelling, grammar, and general information. Also, look for the presence of images. If present, check that each image has an “alt” text.

6. Functionality

Functionality involves testing each of the features and functions on the website to ensure all components work as they should. This is a form of black-box testing that allows users to carry out manual and automated tests. 

Cases vary and cover different areas, including user interface, APIs, and database and security testing. After completing this, you can carry out basic functional tests to check that each feature on the website works correctly. This may include software and mobile testing services to ensure optimal multi-device performance.

The Last Thing for a Great Website Is Security Testing

Hacks or DDoS situations can derail your operations as the site will no longer be accessible across multiple devices. Testing for security ensures your customers’ and company’s privacy is protected against cyberattacks that could potentially ruin business transactions. 

These six tips will help you test a website that performs optimally across all mobile devices and ensure you stay connected with your customers 24/7. But if you need more professional help involving software and mobile testing services, contact Codoid now!

We’re an industry leader in QA with a passion for guiding and leading the Quality Assurance community. Our brilliant team of engineers loves to attend and speak at software testing meetup groups, forums, events, and conferences. Get started with us today!

A Beginner’s Guide to Web Application Test Automation

Considering that traditional front- and back-office applications are slowly being phased out in favor of web-based applications, testing the functionality of web applications becomes far more important. This is why you’ll want to do all you can to ensure that your web app testing is efficient and effective, and the best way to do that is through automation. Manual testing can be quite taxing, as it requires an investment of time and manpower. That is why you’ll want to learn as much as you can about what web application test automation is and how it works. If this is something that you want to know more about, read on for our beginner’s guide to web app testing automation.

What is Web App Testing?

Before we discuss anything, it’s important that we define what web app testing is. Web testing is a software practice that is used to ensure the quality of a given web application by testing that the functions of the application are working as intended. Web testing can be used to find bugs at any time before a release or on a day-to-day basis. Testing is an essential part of app development. This is especially true when the app is changed in any way. Whenever there’s a change in the code, no matter how small, bugs and errors can manifest. It’s important to implement an effective testing strategy so that you can deal with these potential issues swiftly. 

What Are the Benefits of Automated Web App Testing?

While testing is important, you can’t ignore how labor-intensive it is. Automation addresses this problem, as it helps you test efficiently without sacrificing quality. Automated tests eliminate the need for testers to perform routine and repetitive tests, and they help find bugs in specific operations and simple use cases (e.g., logging in, creating a new account, or resetting passwords). By eliminating these tasks, testers can focus on exploratory testing or other tests that require a human perspective.

What Can Be Automated?

Now, it’s important to note that not all web app tests can be automated. Some tests simply require a human touch. Here are a few types of tests that you can automate for your web applications:

  • Functional Testing: Functional testing is used to ensure that an app works as intended from the end user’s perspective. While this process can be automated, it’s important that you supplement the automated tests with manual tests in order to find bugs (a minimal sketch follows this list).
  • Regression Testing: Regression testing describes “repeated functional testing”. It is used to make sure that a software’s functionality continues to work after parts of it have been modified with new code. Regression testing is basically functional testing that is repeated once a new software code or configuration is added.
  • Cross-browser Testing: Cross-browser testing ensures that your web application performs well in different browsers both on desktop and mobile devices.
  • Performance Testing: Performance testing, or stress and load testing, ensures that a web application can endure extended periods of activity or peak user loads. While it is possible to do this manually, it would be extremely impractical.
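To make the first point concrete, here is a minimal sketch of an automated functional check written with Puppeteer (covered in detail later on this page). The URL, selectors, and credentials are hypothetical placeholders for your own application.

// login-test.js – a tiny automated functional check of a login flow
const puppeteer = require('puppeteer');
const assert = require('node:assert');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://staging.example.com/login');
  await page.type('#username', 'demo-user');
  await page.type('#password', 'demo-pass');
  await Promise.all([
    page.waitForNavigation(), // wait for the post-login redirect
    page.click('#login-button'),
  ]);

  // The dashboard heading should be visible after a successful login.
  const heading = await page.$eval('h1', el => el.textContent.trim());
  assert.strictEqual(heading, 'Dashboard');
  console.log('Login flow works as expected');

  await browser.close();
})();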

Conclusion

We hope this article proves to be useful when it comes to helping you further your understanding of web application test automation. Now that you understand this process better, you’ll be able to maximize automation in a way that helps you optimize your web applications.

If you’re in need of an automation testing company with a spotless track record in web app testing, Codoid is your best choice. Every new software product deserves high-quality automation testing, and our team of highly skilled QA professionals can handle any job. When it comes to web application test automation, there’s no better choice than Codoid. For more information on what we can do for you, visit our website today!

An A to Z Google Lighthouse Tutorial to Gauge Web Page Quality

Lighthouse is a great open-source, automated web page quality improvement tool. It is primarily used to perform audits for performance, accessibility, progressive web apps, SEO, and other factors. Its functionality doesn’t end there, as the audits also provide very useful suggestions and act as a guide to improving the page you audited. Be it a public page or a page that requires authentication, you will have no trouble using it. As a leading performance testing company, we have found Lighthouse to be an instrumental tool in ensuring the sites we test are optimized. So in this Lighthouse Tutorial, we will see how it can be run using Chrome DevTools, the command line, and as a Node module. We will also take a deep dive into the reports and see how to share them. So let’s get started.

The List of Lighthouse Processes:

  • Using the Chrome DevTools, you’ll be able to audit authentication-required pages and read reports without having to put in too much effort.
  • You can use shell scripts to automate your Lighthouse runs by using Command Prompt.
  • Lighthouse can be integrated into your continuous integration system in the form of a node module with ease.
  • You can also use a web interface to run Lighthouse and link to reports without having to install anything at all.

Lighthouse Tutorial to Run it in Chrome DevTools

Since Google Chrome is the most popular browser, you might already have it installed in your system. But if you don’t, make sure to install it. As any URL on the web can be audited, let’s take a look at the steps to generate a report.

1. First up, open Google Chrome and navigate to the URL you’d like to audit.

2. Once the page has been loaded, open Chrome DevTools.

3. Click on the ‘Lighthouse’ tab.

[Image: Running Lighthouse in Chrome DevTools]

You will get a view of the page you’re looking to audit on the left and the Lighthouse-powered Chrome DevTools panel on the right.

4. You’ll see a list of audit categories as shown in the image. Check and make sure all the categories are enabled.

5. You can create a report in under a minute by just clicking on the ‘Generate report’ option. Once you do that, you’ll see a report as shown in the below image.

[Image: Generating a report in Chrome DevTools]

Lighthouse Tutorial to Run it using the Node command-line tool

Download & install the latest Node version that has Long-Term Support and then install Lighthouse using the following command.

npm install -g lighthouse

The -g flag here installs it as a global module.

You can use the following two commands to run an audit and to see all the options:

To run an audit:

lighthouse <url>

To see all the options:

lighthouse --help
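If you want to run Lighthouse from a script, for example inside a continuous integration job as mentioned in the process list above, the commonly documented programmatic pattern looks roughly like the sketch below. It assumes you have also installed the chrome-launcher package; note that recent Lighthouse releases are ESM-only, so require may need to be replaced with import there.

// lighthouse-ci.js – a minimal sketch of running Lighthouse as a Node module
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { output: 'json', onlyCategories: ['performance'], port: chrome.port };

  const runnerResult = await lighthouse('https://example.com', options);
  const score = runnerResult.lhr.categories.performance.score * 100;
  console.log('Performance score:', score);

  await chrome.kill();
  // A CI job could fail the build here if the score drops below a threshold.
  if (score < 80) process.exit(1);
})();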

Lighthouse Tutorial to Run it with a Chrome Extension

Download the Lighthouse Chrome Extension from the Chrome Webstore, and go to the page you want to audit in Chrome.

1. If you see the Lighthouse icon beside the Chrome address bar, you can click on it to enable the extension.

2. If you don’t, click on the extensions button to find the Lighthouse extension. The Lighthouse menu expands once you click on it.

[Image: Running Lighthouse from the Chrome extension]

3. Once you click on the ‘Generate report’ option, Lighthouse will perform the audit on the page that is currently open and then show the report in a new tab.

[Image: Generated Lighthouse report]

How to Run PageSpeed Insights?

Using PageSpeed Insights with Lighthouse is quite simple. You just have to follow these steps:

  • Navigate to PageSpeed Insights.
  • Enter a web page URL.
  • Click Analyze.

[Image: PageSpeed Insights]

The Lighthouse Report

Now that we have seen how to generate the Lighthouse report in three different ways and how to obtain PageSpeed Insights, it’s time to explore the various aspects of the report you need to know about. Only then will you be able to comprehend the report and take the required action.

Performance Score

Visiting a slow-loading website is an excruciating experience that no one enjoys. The performance score tells us how quickly a website or app loads and how quickly users are able to access or view the content on the page. Six speed metrics are used to calculate this score.

First Contentful Paint

Imagine opening a webpage. As soon as you click a link, you expect the browser to navigate to the new page and start showing its content. This metric indicates the time it takes for the first text or image to become visible to the user.

First Meaningful Paint

Likewise, a barely loaded page that takes a long time to show its important content is not a good sign either. As the name suggests, this metric indicates the time taken for the meaningful part of a page to load.

Speed Index

The Speed Index is a uniform metric that establishes how quickly the content of a page loads.

Time to Interactive

Just because the user can see the content doesn’t mean the site is ready to be used. As soon as users see the content, they will assume the page is ready and try to interact with it. But if the page takes a long time to become interactive, the user will definitely get annoyed. So this metric reveals how much time the webpage and its content take to become fully interactive for the user.

First CPU Idle

This metric is very similar to the previous one; the difference lies in what we mean by “interactive”. For a page to be considered fully interactive, it has to respond to user interactions within 50 milliseconds. First CPU Idle differs in that it reveals the time it takes for the majority of the UI elements to become usable, not the time it takes for all UI elements to work. Ultimately, this metric returns the time it takes for the page’s main thread activity to become quiet enough to accept user input.

Estimated Input Latency

This metric approximates how long an app or webpage takes to react to user inputs during the 5-second window with the heaviest computational load of the page load. Keep in mind that if the latency is over 50 ms, users may perceive the app or website as slow.

Make sure to review the suggestions shown in Lighthouse, as they will help you reduce load times easily.

Accessibility Score

The Accessibility score helps us understand how accessible the website is to people with disabilities such as vision impairment, hearing disabilities, and other physical or cognitive conditions. Take the example of a visually impaired user who relies on a screen reader to access the content. For the screen reader to work properly, proper heading tags are a must, and alt text for visuals and descriptive text for buttons and hyperlinks are also important. These are the aspects reviewed for this score.

Best Practices Score

The Best Practices aspect primarily revolves around the security level of the webpage. Lighthouse tests a total of 16 practices, all of them focused on safety and modern web development standards. You can get a good score here if your JavaScript libraries have no known issues, your database connections are secure, no insecure commands are used, and so on.

SEO Score

So far we’ve seen how fast the page loads, how inclusive it is in terms of accessibility, and whether it is secure enough to provide the best user experience. But SEO is another important aspect to look into, because without proper SEO the webpage will not reach your intended audience. So although the above enhancements are important, it is just as important to make the site appear in search results.

Lighthouse Tutorial to Share & View reports online

Once the reports have been generated, you can also view and share them using the Lighthouse Viewer. The Viewer needs the report’s JSON, so let’s see what has to be done to get the report as a JSON file.

[Image: Lighthouse report viewer]

Share reports as JSON

Depending on the Lighthouse workflow you’re using, the below steps describe how to get the JSON output.

1. Click on the ‘Tools’ menu once the report has been generated.

2. Choose Save as JSON or HTML.

To view the report data:

1. Open the Lighthouse Viewer in Google Chrome.

2. Click anywhere on the Viewer to open the file navigator and select the JSON file, or simply drag the JSON file onto the Viewer.

Lighthouse Features

Web developers have a lot to gain from following Lighthouse’s advice. Here are a couple of Lighthouse features that can enhance that experience with customization options.

Stack Packs

Today’s developers can use various technologies such as CMSs and JavaScript frameworks while building web pages. The best part of Lighthouse is that it can now provide more relevant and actionable suggestions based on the tools used. This is the first customization option, as you’ll receive suggestions that go beyond the general recommendations.

Lighthouse Plugins

Lighthouse Plugins are the second customization option, enabling community domain experts to cater to their specific needs. For example, new audits can be created using the data Lighthouse collects. A Lighthouse plugin is a node module that implements a set of checks which Lighthouse runs and then adds as a new category to the report.

As a leading test automation company, we find these two features to be the highlights of Lighthouse.

Conclusion

We hope you now have a clear picture of what Lighthouse is and what it is capable of after reading this Lighthouse Tutorial. Since Lighthouse is an open-source project that welcomes contributions, you could look into the issue tracker to find possible bugs or analyze audits to see how they can be improved. The issue tracker is also a great place to discuss audit metrics, new audit ideas, and so on.

How to Test your Website at Different Screen Resolutions?

Long gone are the days when websites were predominantly accessed from desktops; people now access websites from laptops, tablets, smartphones, and even smartwatches. Within these categories there are countless models with different screen resolutions and sizes. So you will be in deep trouble if your website doesn’t have a responsive design, as you will be driving away a massive chunk of your audience. It is not just about the aesthetics either; functionality is also a crucial part of being responsive. And beyond a higher bounce rate, your website will not even rank and reach people if it isn’t mobile-friendly. Now that we have established why you need to test your website at different screen resolutions, let’s find out how.

There are various solutions that will enable you to test your website at different screen resolutions. As a leading QA company, we have shortlisted the best options for this blog.

Dev Tools

Understanding the growing need for websites to be responsive, many prominent browsers have made it easier for testers or developers to check it using Dev Tools. According to reports, the 4 most popular browsers are Google Chrome, Safari, Microsoft Edge, and Mozilla Firefox.

  • In most cases, a regular right-click on any part of the website you want to test will show an option called ‘Inspect’ in the dropdown list. You can click on it to launch the Dev tools and the emulator along with it.
  • Once Dev tools has been launched you can define the screen resolution as you choose or even choose from the list of predefined screen resolutions that come along with it.
  • If you are testing a new device, then you can even add the custom resolution and save it by giving a name to reuse it whenever needed. You can refer to the below visual to see how it can be done in a few easy steps.

[Image: Using DevTools to test your website at different screen resolutions]

Unlike Google Chrome and Microsoft Edge, Mozilla Firefox does not launch the emulator directly. Once the DevTools window has been launched, you have to look for an icon showing a phone and a tablet together and click it, or press Ctrl+Shift+M, to launch the emulator.

When it comes to Safari, you first have to follow this series of actions.

  • Click on ‘Preferences’ -> Advanced.
  • In the menu that appears, you have to enable the ‘Show Develop menu in menu bar’ checkbox.
  • Now, you will be able to view the ‘Develop’ menu in the menu bar. Click on that and select the ‘Enter Responsive Design Mode’ option.

The conventional way to launch Dev Tools for the other 3 browsers would be opening the ‘Menu’, navigating to ‘More Tools’, and then opening Developer tools.
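If you would rather script these checks than switch resolutions by hand in DevTools, a headless browser can do it for you. The sketch below is a minimal example using Puppeteer (covered later on this page); the URL and the resolution list are assumptions you can adapt.

// resolution-check.js – capture a page at a few common screen resolutions
const puppeteer = require('puppeteer');

const resolutions = [
  { width: 1920, height: 1080 }, // common desktop
  { width: 1366, height: 768 },  // common laptop
  { width: 390, height: 844 },   // typical modern smartphone
];

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  for (const { width, height } of resolutions) {
    await page.setViewport({ width, height });
    await page.goto('https://www.codoid.com/', { waitUntil: 'networkidle2' });
    await page.screenshot({ path: `codoid-${width}x${height}.png` });
    console.log(`Captured ${width}x${height}`);
  }

  await browser.close();
})();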

BrowserStack

The first option we saw makes use of emulators to help test your website at different screen resolutions. But if you are looking to take it a notch higher and perform the test on real devices, then definitely buying all the devices in the market will not be a viable option. Here is where a tool like BrowserStack will come in handy as it is a cloud-based platform that will let us use real devices to test. Apart from using real devices for better assurance, BrowserStack will be instrumental in testing cross-browser functionalities. As one of the best mobile testing companies, we have used BrowserStack to great success in many of our projects. Though there are other similar tools, we believe BrowserStack to be the best.

Look beyond the norm

The conventional screen sizes we see on a laptop or desktop are 1920×1080 and 1366×768, but many users are transitioning to desktop monitors and laptop displays with 2K and 4K resolutions. Also keep in mind that beyond the 16:9 aspect ratio we are used to, there is a rise in 16:10 displays, which have different screen resolutions again. These are mainly used by creators, and the trend is likely to catch on since Apple has adopted this aspect ratio across its new range of laptops. The mobile devices we use nowadays are all touchscreens, and many laptops are also getting touch displays, so make sure to design your websites for touch input to stay future-proof.

Why You Should Add a Progressive Web App to Your Website

Our society has never been more connected than today, and technology is growing much more sophisticated every year, transforming the way we carry out processes. Smartphones, in particular, have essentially become extremely powerful mini-computers thanks to the advent of mobile apps, which has made it much easier to complete a wide range of tasks. From grocery shopping to paying your bills to video conferencing, mobile apps have caused more people to favor their smartphones over their desktops or laptops due to their sheer power. 

While smartphones are ubiquitous, laptops and computers remain popular for web browsing thanks to progressive web apps, or PWAs. They offer an improved approach to browsing online, providing users with a faster and more optimized experience that can drive more traffic to your website and expand your visibility when appropriately deployed.

What Are Progressive Web Apps?

The term “progressive web app” was coined by a Google engineer several years ago to describe apps that leverage the superior capabilities modern browsers offer. End users could upgrade web apps to PWAs in their native operating system to enjoy smoother, faster performance. A Google developer page defines PWAs as a feature that “use[s] modern web capabilities to deliver an app-like user experience. They evolve from pages in browser tabs to immersive, top-level apps, maintaining the web’s low friction at every point.”

The great thing about PWAs is that you build them with standard web languages like JavaScript, CSS, and HTML. This allows web developers to transform website browsing, since they can build a PWA from a single codebase and don’t need to upload it to an app store, making it accessible to everyone.

Integrating a PWA into a website offers a seamless, more comprehensive experience for website visitors without tacking on a premium. Many corporate giants like Twitter, Pinterest, and Uber have added a PWA to their websites for more straightforward navigability.

The Components of a PWA

A PWA has three significant components. The first is a service worker, which is a script that operates behind the scenes. It is the component responsible for enabling offline loading, caching, sending out push notifications, and other complex aspects of a PWA.

The next component is the manifest file, a JSON file harboring information about how your PWA should function and display. It contains details such as the name, icons, colors, and description used.

The last component is a secure connection, as PWAs operate only on trusted secure connections like HTTPS. It also offers another layer of security to website browsers, helping them feel more confident and comfortable when browsing your website.
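To show how these three components fit together in practice, here is a minimal sketch. The file names are hypothetical; the manifest would be linked from the page's HTML with <link rel="manifest" href="/manifest.json"> and would hold the name, icons, colors, and description mentioned above.

// register-sw.js – register the service worker on a secure (HTTPS) page
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/service-worker.js') // the script that handles offline caching and push
      .then(reg => console.log('Service worker registered, scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  });
}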

The Key Features of a PWA

A PWA has a few distinctive features. It offers access to the app even without an Internet connection, offering a world of convenience for end-users. It doesn’t rely on app stores for distribution, which means more people can use it without being restricted to the operating system they use. It also comes with push notifications, is discoverable through standard SEO practices, and helps improve user engagement. 

Conclusion

Overall, adding a PWA to a website can help businesses offer a more satisfying browsing experience for their customers, nudging them further along the buying journey and making them more likely to purchase. It allows visitors to access your website offline, toggle push notifications, enjoy a faster experience, and use other features they have come to expect from mobile apps.

If you’re looking for QA automation services to test your web app, be sure to let us know at Codoid. We are an industry leader in software testing and QA, including mobile and web apps. Contact us today to learn more about how we can help you get started with a PWA!

Puppeteer Tutorial-The Complete Guide to using a Headless Browser for your Testing

We wanted to kick off this Puppeteer Tutorial by breaking a common assumption that Puppeteer is primarily a testing tool; in reality, it is primarily an automation tool. That doesn’t take away from the fact that Puppeteer is incredibly popular for use cases such as scraping, generating PDFs, and much more, which we will be exploring in this blog. Loading a full browser requires a lot of resources, as it has to render UI elements like the toolbar, buttons, and so on. When everything is controlled from code, those UI elements aren’t needed and can be eliminated. Fortunately, there are better solutions, such as using a headless browser.

You can find many blog articles and YouTube videos that explain the Puppeteer setup. In this Puppeteer Tutorial, we will walk through the setup process and also explore how easy it is to perform web scraping (web automation) in a somewhat non-traditional way using a headless browser. This method has often helped us provide the best automation testing services to our clients, and now let’s find out how you can benefit from it too.

An Introduction to the Puppeteer Tutorial

Browsers are usually executed without a graphical user interface when they are used for automated testing, and Puppeteer is what makes this possible. The question is: how do we do it? The answer is a headless browser, which is a great tool for automated testing in server environments where there is no need for a visible UI shell.

Puppeteer is made by the team behind Google Chrome, so we can trust it to be well maintained. It lets us perform common actions on the Chromium browser programmatically through JavaScript, via a simple and easy-to-use API. Nowadays, JavaScript rules the web, and pretty much everything you interact with on websites uses JavaScript. An added advantage is that Puppeteer can safely automate even potentially malicious pages, as it operates off-process with respect to Chromium. Before we proceed further, let’s cover the Puppeteer installation process in case you are not familiar with it.

Node Installation:

You simply cannot install Puppeteer without having Node, and Node packages are installed with the Node Package Manager (npm), which ships with Node. On macOS, you can install Node with the ‘brew install node’ command. Once Node and npm are installed, you can verify the installation using the commands below.

node -v
npm -v

Packages Installation:

Now that Node has been installed, create a project folder, navigate to it, and run the initialization command given below.

npm init -y

This will create a package.json file in the directory. The package.json includes the Puppeteer dependency and test scripts, such as a runner class. If you need to run a program, add the name of the file you want to run to the scripts section of package.json, as shown below.

"Dependencies": {"puppeteer": "^9.0.0"}
"Scripts": {"test": "node filename.js"}

Puppeteer Installation:

Now, to install Puppeteer, execute the following command from the terminal. Note that the working directory should be the one that contains the package.json file.

npm install --save puppeteer

The above command installs both the Puppeteer and a version of Chromium that the Puppeteer team knows will work with their API, making the process very simple.

All you need here is the require keyword, as it makes the Puppeteer library available in the file. The asynchronous function will be executed once it is created.

const puppeteer = require('puppeteer');

Puppeteer-core:

The puppeteer-core package is a version of Puppeteer that not everyone might need, as it doesn’t download any browser by default. If you are looking to use a pre-existing browser or connect to a remote one, this option will come in handy. Since puppeteer-core doesn’t download Chromium when installed, we have to define an executablePath option that contains the path to a Chrome or Chromium browser if that is the need.

Environment variables:

If you would like to specify a version of Chromium you’d like Puppeteer to use, or skip downloading the Chromium browser for Puppeteer downloads, you will need to set two environment variables:

PUPPETEER_SKIP_CHROMIUM_DOWNLOAD – Skip the Chromium download by setting this to true.

PUPPETEER_EXECUTABLE_PATH – To use a custom browser, set this to the path of the Chrome browser on your system or CI image.
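Putting the puppeteer-core and environment-variable notes together, a minimal sketch looks like this. The Linux executable path is only an example; take it from PUPPETEER_EXECUTABLE_PATH or adjust it for your system.

// core-example.js – use puppeteer-core with an existing Chrome installation
const puppeteer = require('puppeteer-core');

(async () => {
  const browser = await puppeteer.launch({
    executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || '/usr/bin/google-chrome',
    headless: true,
  });
  const page = await browser.newPage();
  await page.goto('https://www.codoid.com/');
  console.log(await page.title());
  await browser.close();
})();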

Now that we have prepped everything, let’s go ahead and find out how we can launch the headless browser and use all its functionalities.

Puppeteer Tutorial for each functionality

Browser launch:

Finally, you will be able to open the browser using Puppeteer’s launch() method, as shown below.

const browser = await puppeteer.launch({});

The browser that is launched will be in headless mode.

Headless mode

The above line can be modified to include an object as a parameter, and instead of launching in headless mode, you can even launch a full version of the browser using headless: false, as shown below.

const browser = await puppeteer.launch({ headless: false });

Browser size

Once the browser has been launched, if you want the browser window to open maximized or in full-screen mode, you can pass the corresponding Chromium argument to launch(), as shown below.

const browser = await puppeteer.launch({ headless: false, args: ['--start-maximized'] });
// or use args: ['--start-fullscreen'] for full-screen mode

The reason we are including this in our Puppeteer Tutorial is that Puppeteer sets the initial page size to its default of 800×600 px. This value can be changed before taking a screenshot by setting the viewport as shown in the code.

await page.setViewport({ width: 1920, height: 1080 });

Slow it down

The slowMo option is a pretty useful feature in specific situations, as it slows down Puppeteer operations by the specified number of milliseconds. For our needs, we used the code given below to slow down Puppeteer operations by 250 milliseconds.

const browser = await puppeteer.launch({ headless: false, slowMo: 250 });

Chrome Devtools

When the browser is running, you may want to open DevTools in Chrome to debug the application’s browser code inside evaluate(). We managed to get this working by creating a new page instance and navigating to the DevTools URL, after which we were able to query the DOM and interact with the panels.

const browser = await puppeteer.launch({ devtools: true });

URL launch

Now that a page, or in other words a tab, is available, any website can be loaded by simply calling the goto() function. This is a basic step in this Puppeteer Tutorial, as actions like scraping elements can be performed only after a website has been launched.

Here is the code we used to open our own website in a new page and navigate around it.

const page = await browser .newPage();
await page.goto('https://www.codoid.com/');
const title = await page.title();
await page.reload();
await page.goBack();
await page.goForward();

If needed, we can also run automation test scripts in incognito mode in Puppeteer.

const context = await browser.createIncognitoBrowserContext();

Scraping an element

Now that we have seen how to launch a given website, let’s find out how we can scrape various elements from that page. Once we start the execution, the browser launches in headless mode, sends a GET request to the web page, and receives the HTML content we require, as explained in the steps below.

1. Sending the HTTP request

2. Parsing the HTTP response and extracting desired data

3. Saving the data in some persistent storage, e.g. file, database, and similar

Using the below code, we have retrieved the main header info from our Home Page.

await page.goto("https://codoid.com/");
title = await page.evaluate(() => {
return document.querySelector("#main-header").textContent.trim();});
console.log(title);

Scraping multiple elements

You will often need to scrape more than one element from a webpage, and you can do that as follows: use querySelectorAll to get all the elements matching the selector, and create an array from the result, since the heading elements come back as a NodeList.

await page.goto("https://en.wikipedia.org/wiki/Web_scraping");
headings = await page.evaluate(() => {
headings_elements = document.querySelectorAll("h2 .mw-headline");
headings_array = Array.from(headings_elements);
return headings_array.map(heading => heading.textContent);
  });
console.log(headings);

Debugger

Once the execution is over, we can easily set a debugger in the automation process and inspect the current page’s DOM in Chrome DevTools by using the code below.

await page.evaluate(() => { debugger; });

Screenshot

Another useful feature is the ability to take screenshots while the browser is running. These screenshots are taken through the Puppeteer Node library, which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. After running the code below, you will see a PNG file named ‘codoid.png’ inside your working folder.

await page.screenshot({ path: 'codoid.png'})

Getting PDF

We can easily convert an HTML page into a PDF, for example a report containing data visualizations with a lot of SVG. Furthermore, we can make special adjustments to manipulate the layout and rearrange the HTML elements.

If you need to generate documents as PDF with a defined layout, you can use the command below. In the command, we have set the format to A4.

const pdf = await page.pdf({ format: 'A4' });

Switch to New tab

Many people might encounter difficulties if their work demands several tabs. So we thought the code to bring a specific tab to the front in Puppeteer would come in handy, and added it to this Puppeteer Tutorial.

await page.bringToFront();

Type

An input field is something that pretty much every website has, and we can define what input has to be given by using Puppeteer’s page.type() method, which uses a CSS selector to locate the element you want to type in and takes the string you wish to type into the field.

const elements3 = await page.$x("//input[@id='contactname']")
await elements3[0].type("codoid");

Click

We can also click on any element or button in Puppeteer; the only challenging aspect is finding the element. Once you have found the element, you can just fire the click() function as shown below.

const elements = await page.$x("//a[.='Resources']")
await elements[0].click()

Checkbox

The checkbox is another element we can handle by providing two inputs, as shown in the code: the selector for the option we want to select, and the click count passed to click().

const ele2= await page.$x("//input[@id='tried-test-cafe']")
await ele2[0].click({clickCount:1})

Dropdown

Puppeteer has a select(selector, value) function to choose a value from a dropdown, and it takes two arguments. The first argument is the selector and the second is the value, similar to what we saw in the case of the checkbox.

const ele3= await page.$x("//select[@id='preferred-interface']")
await ele3[0].select("Both");

Element value

This method gets an element’s value using the $eval() function. In the code shown below, we retrieve the value of the heading text element. $eval() takes two parameters: the first is the selector and the second is a function, element => element.textContent.

const Message =  await page.$eval('p', ele => ele.textContent);
console.log('Heading text:' ,Message);

Element count

It’s pretty simple to get the count of elements on a particular webpage. The $$eval() function can be employed to count all elements matching the same selector, as shown below.

const count =  await page.$$eval('p', ele => ele.length);
console.log("Count p tag in the page: "+count);

Headless Chrome Crawler

Once we start the execution, Google Chrome runs in headless mode, which is great for web crawling. Since Google Chrome executes JavaScript, it yields more URLs to crawl than simple requests for HTML files, which are generally faster. Anybody looking for ways to help their webpages rank better knows the importance of crawling, which helps pages get indexed. The code required to execute crawling is given below.

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const crawler = await HCCrawler.launch({
    evaluatePage: () => ({ title: $('title').text() }),
    onSuccess: result => { console.log(result); },
  });
  await crawler.queue('https://codoid.com/');
  await crawler.onIdle();  // Resolved when no queue is left
  await crawler.close();   // Close the crawler
})();

Conclusion

As a leading software testing company that provides the best automation testing services, our favorite “feature” of this approach is that we get to improve the loading performance and the indexing ability of a webpage without significant code changes! We hope you enjoyed reading this Puppeteer Tutorial blog, and if you did, make sure to subscribe so you never miss any of our upcoming posts.