
Test Junkie Framework Review

As a leading QA company, we write blogs on prominent software testing topics and tools based on our real-world experience. Stay sharp by subscribing to our newsletter.

Nowadays, there are umpteen testing frameworks built with Python, and most of them don't fit everyone's requirements. Through extensive research, we found a comprehensive testing framework that has all the features needed to maintain your automated test cases: Test Junkie, which is still in the Alpha stage. The framework was developed by Artur Spirin. In this blog article, we will look at the distinct features of Test Junkie.

Listeners

In Cucumber, we use hooks like beforeScenario & afterScenario. Similarly, Test Junkie provides a rich set of listeners at both the test and suite level.

Test Listeners: on_in_progress, on_success, on_failure, on_error, on_ignore, on_cancel, on_skip, & on_complete.

Suite Listeners: on_class_in_progress, on_before_class_failure, on_before_class_error, on_after_class_failure, on_after_class_error, on_class_skip, on_class_cancel, on_class_ignore, on_class_complete, on_before_group_failure, on_before_group_error, on_after_group_failure, & on_after_group_error.

from test_junkie.listener import Listener

class MyTestListener(Listener):
    def __init__(self, **kwargs):
        Listener.__init__(self, **kwargs)
    ...
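
For example, a listener might override a couple of the test hooks listed above to react to pass/fail events. The hook names come from the lists in this section, but the arguments each hook receives can vary between Test Junkie versions, so the catch-all signatures below are an assumption:

from test_junkie.listener import Listener


class NotifyListener(Listener):
    # Hypothetical listener, for illustration only.

    def __init__(self, **kwargs):
        Listener.__init__(self, **kwargs)

    def on_success(self, *args, **kwargs):
        # Fired when a test passes; the payload depends on the framework version.
        print("Test passed:", args, kwargs)

    def on_failure(self, *args, **kwargs):
        # Fired when a test assertion fails; a good place to hook alerting.
        print("Test failed:", args, kwargs)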
  

Test Reporting

In Test Junkie, you can generate test execution reports in HTML, XML, and JSON formats. Using the monitor_resources parameter, you can also track memory and CPU usage during test execution. Test Junkie generates a clean HTML report, as shown below.

Test Junkie HTML report
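
A minimal runner sketch, assuming html_report and monitor_resources are accepted as keyword arguments by the Runner constructor (they may live elsewhere in some versions, so verify against the Test Junkie docs); the DemoSuite below exists purely to keep the snippet self-contained:

from test_junkie.decorators import Suite, test
from test_junkie.runner import Runner


@Suite()
class DemoSuite:  # hypothetical suite, for illustration only

    @test()
    def smoke_check(self):
        pass


runner = Runner(suites=[DemoSuite],
                html_report="reports/test_junkie.html",  # assumed path; adjust for your setup
                monitor_resources=True)  # assumption: may be a run() argument in some versions
runner.run()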

Advanced Test Suite Example


from test_junkie.decorators import Suite, test, afterTest, beforeTest, beforeClass, afterClass
from test_junkie.meta import meta, Meta
@Suite(parameters=[{"login": "[email protected]", "password": "example", "admin": True},
                   {"login": "[email protected]", "password": "example", "admin": False}])
class LoginSuite:
    @beforeClass()
    def before_class(self, suite_parameter):  # yes, we just parameterized this function, seen that anywhere else?
        # Let's assume we have some code here to log in with
        # username . . . suite_parameter["login"]
        # password . . . suite_parameter["password"]
        # This is our hypothetical prerequisite before we run the tests.
        # If this step were to fail, the tests would be ignored.
        pass
    @afterClass()
    def after_class(self):
        # Here, generally, we would have cleanup logic.
        # For the sake of this example, let's assume we log out
        # from the account that we logged into during @beforeClass().
        # There is no `suite_parameter` in the method signature;
        # the logout process would likely be the same irrespective of the account.
        pass
    @test(parameters=["page_url_1", "page_url_2", "page_url_3"])
    def validate_user_login_in_header(self, parameter, suite_parameter):
        # Let's assume that in this test case we are going to be validating
        # the header. We need to make sure that the email the user logged in with
        # is displayed on every page, so we will make this test parameterized.
        # By doing so we will know exactly which pages pass/fail without
        # writing any extra logic in the test itself to log all the failures
        # and keep testing all the pages, which would be required if you
        # were to use a loop inside the test case, for instance.
        # Now we would use something like WebDriver to open the parameter in order to land on the page
        # and assert that suite_parameter["login"] appears in the expected place.
        pass
    @test(parameters=["admin_page_url_1", "admin_page_url_2"])
    def validate_access_rights(self, parameter, suite_parameter):
        # Similar to the test case above, but here we are validating
        # access right privileges for different user groups,
        # using the same principle with the parameterized test approach.
        # Again we would use WebDriver to open the parameter in order to land on the page
        # and assert that the page is accessible if suite_parameter["admin"] is True.
        pass
@Suite(pr=[LoginSuite],
       parameters=[{"login": "[email protected]", "password": "example", "user_id": 1},
                   {"login": "[email protected]", "password": "example", "user_id": 2}])
class EditAccountCredentialsSuite:
    """
    It is risky to run this suite with the LoginSuite above because if
    the suites happen to run in parallel and credentials get updated
    it can cause the LoginSuite to fail during the login process.
    Therefore, we are going to restrict this suite using the `pr` property. This will ensure that
    LoginSuite and EditAccountCredentialsSuite will never run in parallel, thus removing any risk
    when you run Test Junkie in multi-threaded mode.
    """
    @test(priority=1, retry=2)  # this test, in case of failure, will be retried twice
    def reset_password(self, suite_parameter):  # this test is now parameterised with parameters from the suite
        # Let's assume in this test we will be resetting the password of the
        # username . . . suite_parameter["login"]
        # and then validating that the hash value gets updated in the database.
        # We will need to know the login when submitting the password reset request, thus we need to make sure
        # we don't run this test in parallel with the edit_login() test below.
        # We will use decorator properties to prioritize this test over anything else in this suite
        # which means it will get kicked off first and then we will disable parallelized mode for the
        # edit_login() test so it will have to wait for this test to finish.
        pass
    @test(parallelized=False, meta=meta(expected="Able to change account login"))
    def edit_login(self, suite_parameter):
        # Let's assume in this test we will be changing the login for . . . suite_parameter["login"]
        # with the current number of tests and settings, this test will run last
        Meta.update(self, suite_parameter=suite_parameter, name="Edit Login: {}".format(suite_parameter["login"]))
        # Making this call gives you the option to update meta from within the test case.
        # Make sure, when making this call, that you do not override suite_parameter with a different value
        # or update any of its contents.
    @afterClass()
    def after_class(self, suite_parameter):
        # Will reset everything back to default values for the
        # user . . . suite_parameter["user_id"]
        # and we know the original value based on suite_parameter["login"]
        # This will ensure other suites that use the same credentials won't be at risk.
        pass
@Suite(listener=MyTestListener,  # This will assign a dedicated listener that you created
       retry=2,  # Suite will run up to 2 times but only for tests that did not pass
       owner="Chuck Norris",  # defined the owner of this suite, has effects on the reporting
       feature="Analytics",  # defines a feature that is being tested by the tests in this suite,
                             # has effects on the reporting and can be used by the Runner
                             # to run regression only for this feature
       meta=meta(name="Example",  # sets meta, most usable for custom reporting, accessible in MyTestListener
                 known_failures_ticket_ids=[1, 2, 3]))  # can use to reference bug tickets for instance in your reporting
class ExampleTestSuite:
    @beforeTest()
    def before_test(self):
        pass
    @afterTest()
    def after_test(self):
        pass
    @test(component="Datatable",  # defines the component that this test is validating,
                                  # has effects on the reporting and can be used by the Runner
                                  # to run regression only for this component
          tags=["table", "critical", "ui"],  # defines tags that this test is validating,
                                             # has effects on the reporting and can be used by the Runner
                                             # to run regression only for specific tags
          )
    def something_to_test1(self, parameter):
        pass
    @test(skip_before_test=True,  # if you don't want to run before_test for a specific test in the suite, no problem
          skip_after_test=True)  # also no problem, you are welcome!
    def something_to_test2(self):
        pass

Command Line Interface

Test Junkie comes with three commands for command-line execution. The first is the run command, which lets you execute your tests. The second is the audit command, which quickly scans an existing codebase and aggregates test data based on the audit query. The last is the config command, which lets you easily persist test configuration for later use with other commands like run & audit.
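
A hypothetical session might look like the sketch below; the run, audit, and config subcommands come from the description above, but the flags shown (such as --sources) are assumptions, so confirm the exact options with tj --help:

tj run --sources ./path/to/tests
tj audit --sources ./path/to/tests
tj config --help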

In Conclusion:

Reviewing a testing framework helps the test automation community understand its features quickly. We, at Codoid, urge our testers to explore the latest tools and frameworks to improve our automation testing services. Test Junkie is still in the Alpha phase. Once it is released, we believe it has the potential to become one of the most popular Python testing frameworks.
