Page Object Pattern in Python

Page objects are commonly used for testing, but should not make assertions themselves. Their responsibility is to provide access to the state of the underlying page. It’s up to test clients to carry out the assertion logic.

Martin Fowler

In this blog article, you will learn how to implement the Page Object pattern in Python using the PyPOM package.

Installation

pip install PyPOM
  


Sample Code

from pypom import Page
from selenium import webdriver
from selenium.webdriver.common.by import By

# Path to your local chromedriver binary; adjust as needed.
driver = webdriver.Chrome(executable_path='drivers/chromedriver.exe')

class Home(Page):
    # Locator for the "Contact Us" menu item.
    _contact_us_locator = (By.ID, 'menu-item-54')

    @property
    def fill_contact_us_form(self):
        # The page object only exposes the interaction;
        # assertions belong in the test client.
        self.find_element(*self._contact_us_locator).click()

base_url = 'https://www.codoid.com'

# open() navigates to the base URL and returns the page object.
home_page = Home(driver, base_url).open()

home_page.fill_contact_us_form

driver.quit()
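
PyPOM also lets a page object declare when it is ready. Below is a minimal sketch (the logo locator is a made-up example) that overrides the loaded property, which open() and wait_for_page_to_load() poll until it returns True:

from pypom import Page
from selenium.webdriver.common.by import By

class Home(Page):
    # Hypothetical locator for an element that signals the page is ready.
    _logo_locator = (By.CSS_SELECTOR, '.site-logo')

    @property
    def loaded(self):
        # open() keeps polling this property until it returns True
        # (or fails once the underlying wait times out).
        return self.is_element_displayed(*self._logo_locator)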

In Conclusion:

There are other page object model packages in Python; we will review them in subsequent articles. Please feel free to contact us if you face any issues with your PyPOM implementation.

Troubleshoot Cucumber Failures with More Details

After an automated regression suite runs, troubleshooting failures is an important post-execution task. However, it requires detailed error information to determine whether a failure is a test data issue, a script issue, or an actual application bug. Default Cucumber reporting does not provide step definition information. If you want to add failed step definition details to the Cucumber HTML report, you can use Scott Test Reporter.

How to install? Add the plugin and dependency below to your pom.xml.

<build>
    <plugins>
        <plugin>
            <groupId>hu.advancedweb</groupId>
            <artifactId>scott-maven-plugin</artifactId>
            <version>3.5.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

<dependency>
    <groupId>hu.advancedweb</groupId>
    <artifactId>scott</artifactId>
    <version>3.5.0</version>
    <scope>test</scope>
</dependency>

Cukes Test

package cukes;

import cucumber.api.CucumberOptions;
import cucumber.api.testng.AbstractTestNGCucumberTests;

@CucumberOptions(features = "src/test/resources/Sample.feature", monochrome = true, plugin = {
        "pretty", "hu.advancedweb.scott.runtime.ScottCucumberHTMLFormatter:target/cucumber",
        "hu.advancedweb.scott.runtime.ScottCucumberJSONFormatter:target/cucumber.json"},
        glue = {"step_definitions"})
public class Runner1 extends AbstractTestNGCucumberTests {

}

Report

Troubleshoot Cucumber Failures

In Conclusion:

To provide quick feedback after a test automation run, you need to troubleshoot failures quickly. Using this library, you can do so without revisiting or re-executing your code.

Test Junkie Framework Review

Nowadays, there are umpteen testing frameworks built with Python, and most of them don't fit everyone's requirements. Through extensive research, we found a testing framework that is comprehensive and has all the features required to maintain your automated test cases: Test Junkie, which is still in the Alpha stage. The framework was developed by Artur Spirin. In this blog article, we will look at the distinct features of Test Junkie.

Listeners

In Cucumber, we use hooks like beforeScenario & afterScenario. Similarly, in Test Junkie, we have many listeners for tests and suites.

Test Listeners: on_in_progress, on_success, on_failure, on_error, on_ignore, on_cancel, on_skip, & on_complete.

Suite Listeners: on_class_in_progress, on_before_class_failure, on_before_class_error, on_after_class_failure, on_after_class_error, on_class_skip, on_class_cancel, on_class_ignore, on_class_complete, on_before_group_failure, on_before_group_error, on_after_group_failure, & on_after_group_error.

from test_junkie.listener import Listener

class MyTestListener(Listener):
    def __init__(self, **kwargs):
        Listener.__init__(self, **kwargs)
    ...
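
As a rough illustration, a custom listener simply overrides the events it cares about. The sketch below is assumption-heavy: the exact keyword arguments passed to each event are defined by Test Junkie, so here we only log whatever arrives.

from test_junkie.listener import Listener

class MyTestListener(Listener):

    def on_failure(self, **kwargs):
        # Fired when a test assertion fails; the kwargs carry whatever
        # metadata Test Junkie passes to this event.
        print("Test failed:", kwargs)

    def on_error(self, **kwargs):
        # Fired when a test raises an unexpected exception.
        print("Test errored:", kwargs)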
  

Test Reporting

In Test Junkie, you can report test execution results in HTML, XML, and JSON. Using the monitor_resources parameter, you can enable monitoring of MEM & CPU usage during test execution. Test Junkie generates a beautiful HTML report, as shown below.

Test Junkie Review
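
For reference, here is a minimal sketch of wiring up these reports through the Runner (LoginSuite is the suite from the example that follows). The html_report, xml_report, and monitor_resources keyword names are assumptions drawn from the project documentation; verify them against the version of Test Junkie you have installed.

from test_junkie.runner import Runner

# Assumed keyword arguments; check the Test Junkie docs for your version.
runner = Runner(suites=[LoginSuite],
                html_report="reports/test_junkie.html",
                xml_report="reports/test_junkie.xml",
                monitor_resources=True)
runner.run()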

Advanced Test Suite Example


from test_junkie.decorators import Suite, test, afterTest, beforeTest, beforeClass, afterClass
from test_junkie.meta import meta, Meta
@Suite(parameters=[{"login": "[email protected]", "password": "example", "admin": True},
                   {"login": "[email protected]", "password": "example", "admin": False}])
class LoginSuite:
    @beforeClass()
    def before_class(self, suite_parameter):  # yes, we just parameterized this function, seen that anywhere else?
        # Let's assume we have some code here to login with
        # username . . . suite_parameter["login"]
        # password . . . suite_parameter["password"]
        # This is our, hypothetical, pre-requirement before we run the tests
        # If this step were to fail, the tests would have been ignored
        pass
    @afterClass()
    def after_class(self):
        # Here, generally, we would have clean up logic.
        # For the sake of this example, let's assume we log out
        # from the account that we logged into during @beforeClass()
        # no `suite_parameter` in method signature,
        # logout process would likely be the same irrespective of the account
        pass
    @test(parameters=["page_url_1", "page_url_2", "page_url_3"])
    def validate_user_login_in_header(self, parameter, suite_parameter):
        # Let's assume that in this test case we are going to be validating
        # the header. We need to make sure that the email that the user logged in with
        # is displayed on every page so we will make this test parameterized.
        # By doing so we will know exactly which pages pass/fail without
        # writing any extra logic in the test itself to log all the failures
        # and complete testing all the pages which would be required if you
        # were to use a loop inside the test case for instance.
        # Now we would use something like Webdriver to open the parameter in order to land on the page
        # and assert that suite_parameter["login"] appears in the expected place
        pass
    @test(parameters=["admin_page_url_1", "admin_page_url_2"])
    def validate_access_rights(self, parameter, suite_parameter):
        # Similar to the above test case, but instead we are validating
        # access right privileges for different user groups.
        # Using the same principle as the parameterized test approach.
        # Now we would also use Webdriver to open the parameter in order to land on the page
        # and assert that the page is accessible if suite_parameter["admin"] is True
        pass
@Suite(pr=[LoginSuite],
       parameters=[{"login": "[email protected]", "password": "example", "user_id": 1},
                   {"login": "[email protected]", "password": "example", "user_id": 2}])
class EditAccountCredentialsSuite:
    """
    It is risky to run this suite with the LoginSuite above because if
    the suites happen to run in parallel and credentials get updated
    it can cause the LoginSuite to fail during the login process.
    Therefore, we are going to restrict this suite using the `pr` property; this will ensure that
    LoginSuite and EditAccountCredentialsSuite will never run in parallel thus removing any risk
    when you run Test Junkie in multi-threaded mode.
    """
    @test(priority=1, retry=2)  # this test, in case of failure, will be retried twice
    def reset_password(self, suite_parameter):  # this test is now parameterised with parameters from the suite
        # Let's assume in this test we will be resetting the password of the
        # username . . . suite_parameter["login"]
        # and then validate that the hash value gets updated in the database
        # We will need to know the login when submitting the password reset request, thus we need to make sure that
        # we don't run this test in parallel with the edit_login() test below.
        # We will use decorator properties to prioritize this test over anything else in this suite
        # which means it will get kicked off first and then we will disable parallelized mode for the
        # edit_login() test so it will have to wait for this test to finish.
        pass
    @test(parallelized=False, meta=meta(expected="Able to change account login"))
    def edit_login(self, suite_parameter):
        # Let's assume in this test we will be changing the login for . . . suite_parameter["login"]
        # with the current number of tests and settings, this test will run last
        Meta.update(self, suite_parameter=suite_parameter, name="Edit Login: {}".format(suite_parameter["login"]))
        # Making this call, gives you option to update meta from within the test case
        # make sure, when making this call, you did not override suite_parameter with a different value
        # or update any of its content
    @afterClass()
    def after_class(self, suite_parameter):
        # Will reset everything back to default values for the
        # user . . . suite_parameter["user_id"]
        # and we know the original value based on suite_parameter["login"]
        # This will ensure other suites that are using the same credentials won't be at risk
        pass
@Suite(listener=MyTestListener,  # This will assign a dedicated listener that you created
       retry=2,  # Suite will run up to 2 times but only for tests that did not pass
       owner="Chuck Norris",  # defines the owner of this suite, has effects on the reporting
       feature="Analytics",  # defines a feature that is being tested by the tests in this suite,
                             # has effects on the reporting and can be used by the Runner
                             # to run regression only for this feature
       meta=meta(name="Example",  # sets meta, most usable for custom reporting, accessible in MyTestListener
                 known_failures_ticket_ids=[1, 2, 3]))  # can be used to reference bug tickets, for instance, in your reporting
class ExampleTestSuite:
    @beforeTest()
    def before_test(self):
        pass
    @afterTest()
    def after_test(self):
        pass
    @test(component="Datatable",  # defines the component that this test is validating,
                                  # has effects on the reporting and can be used by the Runner
                                  # to run regression only for this component
          tags=["table", "critical", "ui"],  # defines tags that this test is validating,
                                             # has effects on the reporting and can be used by the Runner
                                             # to run regression only for specific tags
          )
    def something_to_test1(self, parameter):
        pass
    @test(skip_before_test=True,  # if you don't want to run before_test for a specific test in the suite, no problem
          skip_after_test=True)  # also no problem, you are welcome!
    def something_to_test2(self):
        pass

Command Line Interface

Test Junkie comes with three features for command-line execution. The first is the run command, which allows you to execute your tests. The second is the audit command, which quickly scans an existing codebase and aggregates test data based on the audit query. The last is the config command, which lets you easily persist test configuration for later use with other commands like run & audit.

In Conclusion:

Reviewing a testing framework helps the test automation community understand its features quickly. We, at Codoid, urge our testers to explore the latest tools and frameworks to improve our automation testing services. Test Junkie is still in the Alpha phase; once it is released, it has the potential to become one of the most popular Python testing frameworks.

Remaining Consistently Aware of Automation and Measuring Its Success

Automated software testing has the ability to increase the depth and expand the scope of testing while elevating the quality of software applications and packages. Today, test automation frameworks are deployed to execute thousands of complex test cases during every test run, providing expansive coverage that would be impossible with manual testing. For instance, certain automated testing tools can play back pre-recorded and predefined actions, compare the results to the expected behavior of a software package, and report the success or failure of these tests to the software testing experts. Automation also boosts the repeatability of tests, which can be extended to perform tasks impossible with manual testing. These facts lead software testing companies to promote automated software testing as an essential component of successful software development projects, while also measuring its success across a range of projects.

Test Management Tools

Testing engineers offering automation testing services should identify ‘candidates’ for automation during the process of defining test cases and testing scenarios. The salient features of a test management tool are flexibility and ease of use. Further, a good quality tool should be able to help testers identify any bugs in the project. Additionally, a test management tool should have the ability to execute various automation tests and scripts in batches or position these in the testing pipeline for scheduled testing or on-demand.


What to Automate

An intelligent automated testing strategy should be able to answer two questions: ‘where to automate’ and ‘what to automate’. Testing engineers must answer these queries to determine which layers of an application should undergo automated testing. They must also bear in mind that although automated testing requires a higher initial investment, it can yield a significantly higher return on investment. Therefore, teams of software testing experts should have answers to these questions before focusing their automation efforts.

Tool Selection

The correct automation tool is critical to ensure the success of automation testing. Hence, test engineers must, for instance, work to determine which tool could enable the automated testing of a desktop application. Further, factors such as language support, tool support, the availability of newer versions of a tool, support for API automation, and more, must be considered during the tool selection process. These factors can and should influence commercial decisions to purchase tools for firms that offer test automation services. Certain complexities may arise when a company engaged in automation testing services negotiates the process of tool selection in specific circumstances.

Environment Stability

Test automation thrives in stable testing environments. Hence, test engineers performing test automation services must consider factors such as dependencies, availability, and the uptime of a test environment. Testers should take a special interest in managing test infrastructure – such as hardware servers, application servers, networking, firewalls, software components required for testing, build software required for testing releases, and more. Further, they must manage test environments such as database clusters, UAT, and pre-production, along with the data required for testing. Additionally, testers must remain aware that test environments differ from production environments in several ways – operating systems, configuration, software versions, patches, and others. The wider the gap between test and production environments, the greater the probability that the delivered product will feature a range of bugs and defects.

Data Usage

Data and its appropriate usage are critical to devising a correct approach to automation. The complications associated with data bring up several unexpected challenges in automation testing. Therefore, testing engineers must find the means to randomly generate or obtain varied data input combinations as part of reinforcing their efforts towards test automation. For instance, data can be sourced from tables, files or APIs. Further, engineers must make qualitative judgments such as whether sourcing a certain type of data would affect the performance of scripts, and others.
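
As a simple illustration (the field names and values below are made up), the Python standard library alone can generate varied input combinations for data-driven tests:

import itertools
import random

# Hypothetical input domains for a data-driven login test.
usernames = ["admin", "standard_user", "guest"]
passwords = ["valid_pass", "", "a" * 256]
locales = ["en-US", "de-DE", "ja-JP"]

# Exhaustive combinations of every value...
all_combinations = list(itertools.product(usernames, passwords, locales))

# ...or a random sample when the full matrix is too large to run.
sampled = random.sample(all_combinations, k=5)

for username, password, locale in sampled:
    print(username, password, locale)  # feed these into the test here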

In Conclusion

A well-calibrated and prudent approach to testing automation can elevate the quality of testing in software testing scenarios. Testers with the ability to put together a unique approach to test automation would be more successful in their chosen domain of work. Companies must partner with a leading software testing company to get the best service, experts, and top testing tools. Connect with us for all these features and more.

Essential Shift from Manual to Automated Tests

The norms of evolution in technology emphasize a shift from the manual to the automated. Software testing services are no exception and hence automation testing services are gaining ground within the industry. This shift is largely powered by a well-grounded view that test automation services help unearth a larger number of glitches in software applications and products. The very essence of automation has enabled it to emerge as a more reliable testing method, which the industry seems more than willing to emulate.

Is Automation Testing Necessary?

The answer is an emphatic yes, since automation testing represents the future of software testing processes and services. Intelligently framed test scripts that create accurate and repeatable automated tests across multiple mobile devices, platforms, and environments will most likely dominate the software testing industry in the years to come.


Effortless and Efficient Testing

Expert testers vouch for the efficacy of test automation services, saying that it offers seamless execution of software testing systems and principles. This approach, however, must be complemented by clarity about the objective of the testing regime and the ability of software testing engineers to manage the rigors of such a testing method. Testers should determine the purpose of a test script and only then consider automating it to effect speedier and more efficient testing. Such practices drive the transition from manual testing systems to automation-driven regimes in the long term.

Storable, Customizable, and Reusable Scripts

Software testing systems have evolved from manual practices to automated testing, driven by the efforts and intelligence of human testers. Transitioning from a manual regime to automation testing services requires a graded and calibrated approach. It is possible to attain a middle ground when testers automate certain tests, making their execution more efficient. Benefits accrue since automated testing tools can play back pre-recorded and predefined actions, compare results with expected outcomes, and report the actual outcomes to the testing team. It is also possible to repeat and extend automated tests to perform tasks that would not be possible with manual testing.

Evaluating Tests

Testing managers understand the critical nature of automated software testing as an essential component of successful development projects. However, the existence and rationale of a test script must be interrogated before it is implemented into an automated regime. A test scenario should merit testing in different ways to ensure satisfactory coverage of a given set-up. The evaluation process must include queries designed to satisfy the tester’s curiosity regarding likely outcomes. It is important to ensure that efforts focus on reviewing, refining, and tightening the repository of test scripts to retain a high degree of relevance.

Regular and Rigorous Evaluation

Test suites need regular evaluation by the creators of test automation services. Being able to extract significant value from the information created by executing various tests is necessary for the success of the project. This is a marked departure from mechanistic testing regimes dominated by legacy testing scripts. Therefore, providers of automation testing should invest time and effort to frame automated tests that conform to client requirements and can add value to the process of modern software development.

In Conclusion

Automated software testing regimens offer distinctive benefits: faster feedback, reduced business expenses, accelerated results, higher levels of testing efficiency, comprehensive test coverage, early detection of defects, and thoroughness in testing outcomes. However, the transition from manual to automation testing services must follow clear lines of thought and inquiry and must be complemented by a robust assessment of the benefits afforded by each script. Connect with us to leverage our experience and expertise in the realm of automation testing services, and much more.

Assertion in Python using Assertpy

Assertpy is a simple assertion library for Python with a nice fluent API. All automation testing validations should be asserted. In Java, we use AssertJ to compare actual and expected results; in Python, Assertpy is a suitable package for writing readable assertions using method chaining. Implementing BDD step definitions with Assertpy enables effective collaboration between testers and other stakeholders, and chained assertion calls help automation testers understand existing test scripts during maintenance.


Installation

pip install assertpy 

Matching Strings

assert_that('').is_not_none()
assert_that('').is_empty()
assert_that('').is_false()
assert_that('').is_type_of(str)
assert_that('').is_instance_of(str)

assert_that('foo').is_length(3)
assert_that('foo').is_not_empty()
assert_that('foo').is_true()
assert_that('foo').is_alpha()
assert_that('123').is_digit()
assert_that('foo').is_lower()
assert_that('FOO').is_upper()
assert_that('foo').is_iterable()
assert_that('foo').is_equal_to('foo')
assert_that('foo').is_not_equal_to('bar')
assert_that('foo').is_equal_to_ignoring_case('FOO')

assert_that(u'foo').is_unicode() # on python 2
assert_that('foo').is_unicode()  # on python 3

assert_that('foo').contains('f')
assert_that('foo').contains('f','oo')
assert_that('foo').contains_ignoring_case('F','oO')
assert_that('foo').does_not_contain('x')
assert_that('foo').contains_only('f','o')
assert_that('foo').contains_sequence('o','o')

assert_that('foo').contains_duplicates()
assert_that('fox').does_not_contain_duplicates()

assert_that('foo').is_in('foo','bar','baz')
assert_that('foo').is_not_in('boo','bar','baz')
assert_that('foo').is_subset_of('abcdefghijklmnopqrstuvwxyz')

assert_that('foo').starts_with('f')
assert_that('foo').ends_with('oo')

assert_that('foo').matches(r'\w')
assert_that('123-456-7890').matches(r'\d{3}-\d{3}-\d{4}')
assert_that('foo').does_not_match(r'\d+')

Matching Integers

assert_that(0).is_not_none()
assert_that(0).is_false()
assert_that(0).is_type_of(int)
assert_that(0).is_instance_of(int)

assert_that(0).is_zero()
assert_that(1).is_not_zero()
assert_that(1).is_positive()
assert_that(-1).is_negative()

assert_that(123).is_equal_to(123)
assert_that(123).is_not_equal_to(456)

assert_that(123).is_greater_than(100)
assert_that(123).is_greater_than_or_equal_to(123)
assert_that(123).is_less_than(200)
assert_that(123).is_less_than_or_equal_to(200)
assert_that(123).is_between(100, 200)
assert_that(123).is_close_to(100, 25)

assert_that(1).is_in(0,1,2,3)
assert_that(1).is_not_in(-1,-2,-3)
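
To show the fluent, chained style in a test context, here is a small sketch (the step function and data are hypothetical) using described_as for readable failure messages and soft_assertions to collect several failures in one pass:

from assertpy import assert_that, soft_assertions

def verify_user_profile(profile):
    # Chained assertions read almost like a specification.
    assert_that(profile['email']).described_as('registered email') \
        .is_not_empty().contains('@').ends_with('.com')

    # Soft assertions report every failure in the block at once
    # instead of stopping at the first one.
    with soft_assertions():
        assert_that(profile['age']).is_greater_than(17)
        assert_that(profile['roles']).contains('tester')

verify_user_profile({'email': 'qa@codoid.com', 'age': 25, 'roles': ['tester']})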

In Conclusion:

Assertpy development is still active. There are more assertion packages available in Python; we will review them in subsequent blog articles.