Python Behave BDD Framework Overview


Python Behave is a widely used BDD framework these days. In this blog article, you will learn about Behave's features and how to use it to create automation test scripts. Behave is a mature, open-source BDD framework with 62 contributors who actively develop new features and fix issues.

Implementing a readable, business-friendly automation testing solution is the need of the hour. When your automation test suite gives non-technical people a way to collaborate, it adds value to your product or project. As an automation testing services company, we have received many inquiries of late about implementing business-friendly test automation solutions.

Feature Files

Behave is considered a clone of the Cucumber BDD framework, and most of its features are similar to Cucumber's. However, Behave provides some additional flexibility. Let's look at its features one by one. You can execute feature files in the following ways: by providing a feature name, by providing a feature directory, or without providing any feature details at all. When you run the behave command without a feature path, it searches for feature files in the features directory by default.

Fixtures

Fixtures, also known as hooks, are used to call setup and cleanup code before and after a test run, feature, scenario, or tag. You needn't write a separate method for 'After' hooks: both 'Before' and 'After' logic can be implemented inside a single fixture function. Let's look at the example below.

Feature

@launch.browser
Feature: Codoid Website Feature

Scenario: Contact Codoid Team
  Given I am on home page
  When I submit contact us form
  Then I should see Thank You page with a message
  

environment.py

from behave import fixture, use_fixture
from selenium import webdriver

@fixture
def launch_browser(context, timeout=30, **kwargs):
    # Setup: runs before the tagged feature/scenario
    context.driver = webdriver.Chrome(executable_path='drivers/chromedriver.exe')
    yield context.driver
    # Cleanup: runs after the tagged feature/scenario
    context.driver.quit()

def before_tag(context, tag):
    if tag == "launch.browser":
        use_fixture(launch_browser, context)
  

step_def.py

from behave import given, when, then

@given('I am on home page')
def step_impl(context):
    # context.driver is provided by the launch_browser fixture
    context.driver.get("https://codoid.com")
    # Replace the placeholder assertion with a real check, e.g.:
    assert "Codoid" in context.driver.title
  

Executing Remaining Steps in a Scenario

By default, scenario execution stops if any step fails. However, you can override this behaviour in Behave using hooks or the command line.

Hook

from behave.model import Scenario

def before_all(context):
    userdata = context.config.userdata
    continue_after_failed = userdata.getbool("runner.continue_after_failed_step", False)
    Scenario.continue_after_failed_step = continue_after_failed
  

Command Line

behave -D runner.continue_after_failed_step=true features/
  

Filtering Scenarios and Examples using Tag Value

You can select scenarios and examples using a tag and its value. This feature is helpful when you want to select examples based on an environment value given for a tag.

Feature:
  Scenario Outline: Wow
    Given an employee "<name>"

    @use.with_stage=develop
    Examples: Araxas
      | name  | birthyear |
      | Alice |  1985     |
      | Bob   |  1975     |

    @use.with_stage=integration
    Examples:
      | name   | birthyear |
      | Charly |  1995     |
  

In Conclusion

As a test automation services company, we advocate the Behave BDD framework for creating readable and understandable automation test suites. It is widely used by the automation community, and its contributors frequently release new versions based on market needs.

Test Automation Success can be Achieved by Determining its Goals


Ask yourself: what do you want out of the test automation process? Without identifying your goals for test automation, you can't succeed. Since every team is different, you should define your goals with a set of gauges called success criteria. Every stage of maturity in finance, regulation, technology, and skill should be appropriately assessed.

Let's see how your success criteria can be converted into success as your testing teams shift to automation. Is automation your default practice? Don't automate every test; instead, automate a test only after answering the question of why it should not be automated. The constant churning of code will help, and many teams find that in trying to define, redefine, and implement standards for automation, they end up saving time. Learn new skills and automation techniques, and create a list of the conditions needed to use your current automation abilities.

Automated tests are simply clients of a System Under Test (SUT). Check whether your automated tests can execute anywhere and at any time, because constraints can sometimes force these tests to run in a specific environment. Automated tests run on different machines, so quickly determine which devices they work on and fix them for those they don't. Developers should run tests on their own machines and use them for debugging, while testers, analysts, and product managers should be able to access tests and question aspects of the application. Flexible tests allow information to be gathered about the SUT by having test-client code executed on various devices, with teams working through the environmental constraints.

Tests should continue to execute without any human intervention, and you shouldn't have to push more than one button to get them going. Did you have to do anything before decision-makers could view the reports, or before you started the tests? If yes, you're not meeting this criterion. Run tests frequently: when automation is not run automatically against changing code, the setup is inefficient and the results aren't credible. Have unit tests, and test your automation and system often through scheduled runs. Continuous Integration (CI) should be applied to test automation, just as compilers check code, and tests should run publicly. If a test case that failed once passes on a rerun, double-check the defect that showed up the first time, because good testers don't ignore anomalies. Communicate what should be changed in your application by coding tolerance for it. Make your testing reliable and trustworthy for it to be considered successful.

Inferior automation can force automation engineers to waste time debugging code rather than providing timely, accurate information to developers and other stakeholders. Your automation engineers should focus on building new tests instead of maintaining existing automation. If automation needs codebases, environments, agents, and systems, keep them available and ready for testing at all times so they don't hinder the process. Sincere and transparent test automation reports will help your developers act quickly. Ensure engineers produce multiple test cases per day to confirm there are no problems with the commitment to the automation and testing process.

Reassess agent roles and responsibilities by updating automation goals, and treat automation as a development activity. Someone on the team must be accountable for fixing issues with automation, because automation fails when no one owns it. Your automation team should use software development best practices: testers should store automation in source code control, maintain existing automation, plan for future automation, and fix failing tests immediately. Account for technical debt in test automation by adhering to coding standards and practices.

Create test automation before, or in parallel with, test code. A distinct vision of success for each team will help you assess whether your team is ready. If it is, you'll be prepared to work in a Continuous Delivery (CD) world, and your end-users will get reliable, precise, and on-time insight into your applications. QA companies like Codoid have adapted their practices to help clients strategize goals and achieve a successful implementation of automation testing services.

A Comprehensive Review of PractiTest


This blog article intends to review PractiTest, one of the best test management tools available on the market, which helps you ease the testing and development process. Sharing quick feedback with the team after QA activities is important for delivering high-quality software to end-users. Sharing test results in an Excel sheet as an attachment has become obsolete; your managers have to view and analyse the results quickly in order to make decisions. If you present the test results in a web dashboard instead, it becomes easy for the entire team to understand the outcome of your report once the testing activity is complete. Let's see how you can manage test cases and share test results in PractiTest.

Overview

PractiTest is an application lifecycle management platform. You can manage requirements, test plans, test cases, and issues effectively, and share test results with everyone. If your team has already started developing the application without a test management tool, or is fed up with its current tool, you can easily import your existing artifacts into PractiTest and start managing tests effectively.

Traceability

You can link the defined requirements with test cases to achieve traceability, and PractiTest provides end-to-end visibility. When you perform a test execution, the status is reflected in the Test Library and Requirements modules in PractiTest.

Test case Management

Using a complicated test case management tool always increases maintenance effort; the tool should be simple enough for defining and managing test cases. The idea of PractiTest is to separate test cases that are created, imported, and stored in the Test Library from the Test Sets & Runs, where you group those tests by relevancy (for example, cycle or sprint) and run them as instances. Unlike other systems that work with static folders, PractiTest uses dynamic filter trees (based on system and custom fields) to organize all testing data. With filter trees, users can slice and dice their data according to what they need and create customized dashboards and reports for other stakeholders based on the data relevant to them.

Jira Integration

Jira is the most widely used project management tool, and a test management tool without Jira integration adds little value to your team or project. In PractiTest, you can import requirements from Jira just by providing the Jira ticket number. When you make changes to a Jira ticket, the changes are reflected in PractiTest immediately. You can also create a Jira bug ticket while executing your test cases; the link between the test case and the Jira issue is established automatically when you create a Jira bug from PractiTest. PractiTest provides a two-way Jira integration, i.e., you can view the test execution status for linked test cases in the Jira ticket.

Estimation

You can set an execution time for each test case. This feature is helpful for determining the difference between actual and expected test execution times. When you induct new QA engineers into your team, these durations will help them understand how much time should be spent on each test case.

Automation Testing with PractiTest

One of the main ideas of PractiTest is to enable visibility into the entire testing process, including automation. The system integrates with automation using its REST API and Firecracker. You can update a test case status by calling the PractiTest API from your automation testing framework; since this is an API call, you can integrate PractiTest into any test automation codebase. The following test case statuses can be sent from an automated test script: PASSED, FAILED, BLOCKED, NOT COMPLETED, and NO RUN.
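As a sketch of what that status-update call can look like from Python: the endpoint path, payload field names, and basic-auth style below are our reading of PractiTest's v2 REST API, so verify them against the official API documentation before use.

```python
import base64
import json
import urllib.request

# Assumed PractiTest v2 endpoint for reporting automated run results
API_URL = "https://api.practitest.com/api/v2/projects/{project_id}/runs.json"

def build_run_payload(instance_id, passed):
    # PractiTest interprets exit-code 0 as PASSED and non-zero as FAILED
    return {"data": {"type": "instances",
                     "attributes": {"instance-id": instance_id,
                                    "exit-code": 0 if passed else 1}}}

def report_run(project_id, email, api_token, instance_id, passed):
    body = json.dumps(build_run_payload(instance_id, passed)).encode()
    request = urllib.request.Request(
        API_URL.format(project_id=project_id), data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    # Basic auth with your account email and a PractiTest API token
    credentials = base64.b64encode(f"{email}:{api_token}".encode()).decode()
    request.add_header("Authorization", f"Basic {credentials}")
    with urllib.request.urlopen(request) as response:
        return response.status
```

You would call `report_run(...)` from a test teardown or reporting hook in your framework, once per executed test instance.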

Firecracker

Firecracker allows you to take XML report files from any CI/CD tool and upload them into PractiTest. From now on, every time you have new build results, they will be sent to PractiTest where you will be able to see the results of your entire software development and deployment processes in one solution.

Integrations

PractiTest integrations include the following systems: Jira Cloud, Jira Server & Datacenter, Pivotal Tracker, Bugzilla, Youtrack, Azure DevOps, CA Agile Central (Rally), FogBugz, GitHub, GitLab, Trac, Mantis, Assembla, Lighthouse, Redmine, Slack, SVN, Zapier, and Jenkins.

Exploratory Testing

PractiTest has an Exploratory Testing module which allows you to define test charters and guide points for your exploratory sessions, and to save those guidelines as test cases for future reuse. When running an ET test case, you can document important points that arise in the annotation section, as well as report new issues or link existing issues.

Agile Artifacts

If your team is not using Jira for its Agile development process, you can very well use PractiTest to store those artifacts. For example, you can use the Requirements module to define user stories and the Test Library module to create acceptance tests. You can use the dashboard to keep your team up to date with the status of the sprint in general and of each user story in particular.

Reporting

Without a proper testing report, your team cannot make appropriate decisions during a release, and spending time preparing reports manually is wasteful; quick, quality feedback helps your team. PractiTest provides pre-defined report templates (Tabular Report, Tabular Aggregated Report, and Detailed Report). You can also modify report columns and graphs based on your needs, and schedule reports to be sent on a daily, weekly, or monthly basis.

Test Steps Reusability

Let's say your testers are writing test cases for two functionalities that are almost identical; you should reuse the test steps instead of duplicating them. PractiTest allows you to call test steps from another test to enable reusability.

In Conclusion

As a leading software testing company, we have used a wide range of software testing tools in our QA projects. If you are planning to evaluate test management tools, don't forget to add PractiTest to your list. It comes in three versions: Professional, Enterprise, and Unlimited, and you can sign up for a 15-day trial for a thorough evaluation. Please note: for automation testing integration, we recommend the Enterprise plan.

We hope you’ve enjoyed reading this blog article as much as we’ve enjoyed writing it. Contact Us for your QA needs.

A fool with a tool is still a fool.

Test Automation Challenges and Strategies


Nowadays most companies invest a huge amount of money in automation testing, but the success rate is quite low (around 25%) for various reasons. In this blog, we discuss the most common challenges that organizations face in automation and the strategies that, when implemented, will yield better results.

Below is the list of challenges that are widely faced in Automation:

  • Lack of Continuous Monitoring
  • Management / Stakeholder guidance
  • Lack of Knowledge in Functional Testing team
  • Lack of Understanding of What to Automate
  • Automation team skillset and Training
  • Attrition Rate and Ability to retain good resources
  • Lack of Team bonding and co-operation
  • Lack of Ownership
  • Right Tool Strategy
  • Right Framework standards
  • Lack of Development practice with good code review process
  • Quantity conscious rather than Quality conscious
  • Lack of Governance

Lack of Continuous Monitoring

Automation should be treated as a development practice. Just as development is monitored on a continuous basis, automation testing progress should also be monitored and closely watched. Below are the continuous monitoring activities by which every development effort is governed; the same should apply to automation as well.

  • Reviewing the quality of work
  • Validating the delivery accomplishment as per the agreed plan
  • Evaluating unforeseen risk based on the outcome
  • Identifying activities that need changes as priorities shift over time
  • Expected Result Vs Actual Result assessment
  • Statistics of project progress

These activities apply to automation testing as well and need to be assessed and evaluated from time to time.

Management and Stakeholder guidance

Management and stakeholder reviews should be conducted from time to time, with guidance provided to achieve success. The key people in such reviews should be:

  • Automation Lead
  • Functional Testing Lead
  • Program sponsor
  • Development Lead and Manager
  • Testing team Manager
  • Automation Architect if such a role exists

Such meetings should be conducted at a scheduled frequency to discuss achievements, hurdles, and solutions to those hurdles.

Lack of Knowledge in Functional Testing team

30% of automation failures are due to a lack of functional knowledge in the functional testing team. The functional team should be in a position to guide what to automate and what not to automate; it should know the challenges in automation and help the automation team target the right areas. Below are the expectations of the functional testing team for good automation results.

Identify the right area to Automate

1) Instead of automating the UI, there may be an API that carries the data to the UI, and the API can be automated instead. API automation is easier and simpler, so focus should be given to such areas.

2) If the functional team does not have this understanding before taking up the test automation activity, it should discuss with key stakeholders and identify such areas.

Test Case Revamping

When there are at least 100+ test cases to automate, the functional team should come up with scenarios that club multiple test cases together without compromising coverage. This will drastically reduce the automation effort as well as increase quality.

Test Data Management

1) Test data is a prerequisite for automating any test case. The functional team should add such requirements as part of the test case preparation activity.

2) The team should also have a clear idea of how to generate the data before taking up automation work.

Basic Feasibility

Based on the nature of the application and the dynamics involved in it, the functional team can suggest which applications should be automated and describe the feasibility of automation.

Lack of Understanding of What to Automate

Many organizations focus on automation based on funding allocation rather than on an understanding of prioritization. This leads to poor automation with low ROI.

Since time is not invested in deciding what to automate, the focus drifts toward quantity rather than quality.

Test Automation Team Skillset and Training

As a test automation services company, we always encourage our automation testers to learn continuously. The automation team needs to learn new technologies, new methodologies, and better coding standards year after year, which requires frequent workshops and reviews. We also need to enable the existing functional team to move into automation to reduce the attrition rate. Skill improvement and training should therefore be a continuous process; teams that do these activities excel in their outcomes.

Attrition Rate and Ability to retain good resources

Retaining key players is a big challenge in any industry, and in automation it is essential to take measures to retain resources. Below are a few ways to reduce the attrition rate.

  • Rewards and recognition for star performers
  • Identifying the right resource for the right skill
  • Job rotation when the same job can be done by multiple resources
  • Giving the team exposure to key stakeholders
  • Monetary benefits
  • Giving space for recreation and fun activities to improve morale

Lack of Team Bonding and Co-operation

On many occasions, a lack of team bonding causes confusion and a lack of ownership. Identify skill sets and interest levels when allocating activities, and conduct team-building exercises to create better bonding.

Right Tool Strategy

Automation testing tool procurement is decided based on organization policy, fund allocation, and application technology. Before settling on a tool, consider the areas below:

  • Tool feasibility across Technology platform has to be conducted before making the tool decision
  • The capability of the tool beyond automation (Like integration with other tools), plug and play support, etc to be assessed
  • Resource availability with the proposed tool skill in the market and within Organization
  • Support for any technical help through forums or through tool vendor to be assessed
  • Tool vendor support in the procurement process, such as the waiting period for evaluation, technical feasibility support from their team, and training for the organization
  • Rate each tool and its vendor against various criteria and arrive at the right tool strategy for the organization

Right Framework Standards

Before automating across applications, form a team for organization-wide global framework development and maintenance; this creates dedicated focus on the activity. As the organization matures, this framework will also mature and provide good ROI during DevOps implementation.

This dedicated team can assess different Technology stacks involved and create Framework based on Technologies.

Since the delivery and framework teams are separate, there will be seamless delivery and continuous improvement.

Lack of Development practice with a good code review process

Automation work should be treated as a development practice. Usage of tools for Configuration Management, Automated code inspection tools, Build Management, Continuous Execution is highly recommended.

This helps in the early detection of code quality issues so they can be fixed from time to time. It can also help identify weaker resources and provide them with adequate training.

Quantity conscious rather than Quality conscious

Management should be quality conscious rather than quantity conscious. Record the usage of existing scripts month on month to increase ROI. Keep an eye on what improvements you have made and what you have done differently across each application each month to improve quality, and always update the lessons learned from each activity to implement better strategies.

Lack of Governance

During governance discussions, numbers and metrics are often tweaked to meet expectations, and hence there is little focus on improvement.

Governance should instead include constructive discussion of the challenges faced, the solutions adopted, and other opportunities to improve and implement, rather than focusing on metrics alone.

Strategies for Test Automation:

  • A proper assessment of the application and technology stack
  • Right tool strategy by doing a proper feasibility study
  • Right framework approach keeping in mind of scalability and adaptability
  • Retaining skilled resources
  • Conducting training and upskilling resources
  • Infrastructure and Environment for Automation
  • Identifying the right areas for automation rather than focusing only on the UI
  • Constructive Governance rather than pure metrics-driven
  • Stakeholder Involvement

Learn How QA Automation ROI is Calculated


Are you curious as to how businesses calculate ROI for automation testing? Since test automation is expensive, the decision of whether to invest in automated testing tools or continue with manual testing can be challenging to make. Let’s break it down for you and simplify the various ROI calculation formulas on test automation and help you make an educated decision.

Usually, the primary type of calculation is a simple ROI calculation, but in this blog, we will delve into advanced calculation techniques.

Efficiency ROI Calculation

Efficiency ROI needs to be calculated in terms of days, since automated tests can run continuously, compared to the 8 hours per day assumed for manual testing. We calculate automated execution with an average of 18 hours per day, as test runs are usually interrupted and do not always run for the full 24 hours. The formulas for efficiency ROI calculation on test automation are:

Key: Automated Test = AT, Execution Time = ET, Maintenance Time = MT, Manual Test = MaT, Period = P

  • AT script development time = (Hourly automation time per test * Number of AT cases)/8
  • AT script ET = (AT ET per test * Number of AT cases*P of ROI)/18
  • AT analysis time = (Test analysis time * P of ROI)/8
  • AT MT = (MT * P of ROI)/8
  • MaT ET = (MaT ET * No. of MaT cases * P of ROI)/8

Notes: The period of ROI is the number of weeks for which ROI needs to be calculated. Divide by 8 to convert manual-effort hours into days, and by 18 to convert automated execution hours into days.

To calculate efficiency ROI, we consider time investment gains rather than monetary factors: we input the values of the respective variables into the formulas above and arrive at the answer. Although this method does not require you to disclose sensitive information such as the cost to the company of hiring a tester, it does make plenty of assumptions, and even with automated test cases you may still need manual intervention at times.
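The formulas above can be folded into a small calculation helper. This is an illustrative sketch only: the parameter names are ours, the ROI is expressed as days gained per day invested, and any figures you feed in are assumptions about your own project.

```python
def efficiency_roi(dev_hours_per_test, at_exec_hours_per_test, num_at_cases,
                   analysis_hours_per_week, maint_hours_per_week,
                   manual_exec_hours_per_test, num_manual_cases, period_weeks):
    """Efficiency ROI in days gained per day invested.

    Divisor 8 = hours in a manual working day; divisor 18 = average hours
    per day an automated suite actually runs (as noted above).
    """
    at_dev_days = (dev_hours_per_test * num_at_cases) / 8
    at_exec_days = (at_exec_hours_per_test * num_at_cases * period_weeks) / 18
    at_analysis_days = (analysis_hours_per_week * period_weeks) / 8
    at_maint_days = (maint_hours_per_week * period_weeks) / 8
    invested_days = at_dev_days + at_exec_days + at_analysis_days + at_maint_days

    # Manual effort the automation replaces over the same period
    manual_days = (manual_exec_hours_per_test * num_manual_cases * period_weeks) / 8
    return (manual_days - invested_days) / invested_days
```

For example, 100 automated cases scripted at 2 hours each and run over a 52-week period, replacing 100 half-hour manual cases, yields a positive ROI: the automation saves more days than it consumes.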

Risk Reduction ROI

In this ROI calculation method, automation benefits are calculated separately, addressing concerns that arise from the approach mentioned above. When you adopt automated testing, resources become more available for other productive tasks such as in-depth time analysis, independent negative/random testing, and test design and development. Coverage increases, and you can discover bugs in your application and fix post-delivery errors. If any bugs are overlooked, the consequent loss needs to be factored into this ROI formula; any monetary gain from finding and fixing errors post-delivery directly impacts the calculation. Values are inserted into the same formulas, and then you can calculate the ROI. Even though the investment cost is similar, the biggest gain is the monetary loss the company would avoid by implementing automation testing. The advantage here is that it accounts for the positive effect of test coverage and a higher level of risk analysis, although the method is subjective.

Several factors affect ROI calculation for automation testing, so you should be advised by a reputable automation testing services company. Add efficient testing automation tools, invest more time in the initial phase, and use better reporting mechanisms to improve your automation testing. There is no single method to calculate ROI for automation testing, so it is best to hire a test automation services company like Codoid. A consolidated decision between your testers and management will lead to the best automation testing setup for your business, one that is sure to bring profits and help you achieve your goals quicker.

Best Practices for Automation Testing with BDD


If you follow automation testing best practices religiously, it will eventually decrease rework. Nowadays, BDD frameworks have a strong automation testing user base, as they enable effective collaboration and automation. If your team follows Agile methodology, make sure you automate the acceptance criteria of each story within the sprint: an automation test script developed with effective collaboration produces high-quality output. If you miss automating a couple of stories in the current sprint, the non-automated acceptance criteria become technical debt, and you may find it difficult to get team members together to discuss old sprint stories.

If you are automating an existing regression test pack, you can use a BDD framework for automation alone, without collaboration. Writing BDD test scenarios is an art: no one on your team can understand a Gherkin scenario that is too long. Following best practices is essential for successful automation testing with BDD.

Feature Template

Feature: [One line describing the story]
[Optional — Feature description]

Scenario: [One line describing the scenario]
Given [context]
And [some more context]…
When [event]
Then [outcome]
And [another outcome]…
  

BDD Scenario Best Practices

  • Given-When-Then should be in sequence.
  • Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.
  • Use past tense for ‘Given’ clauses, present tense for ‘When’ and future tense for ‘Then’.
  • Use ‘And’ clause to add multiple Given and Then steps.
  • Put scenarios prerequisite steps in ‘Background’. Note: If the prerequisite steps are more technical, then use Before hook.
  • Scenarios should be oriented toward functionality rather than UI/UX actions.
  • You can also write BDD Styled Acceptance Criteria for web services. However, you need to ensure it is understood by the product owner.
  • Write scenarios after talking to business people.
  • Use declarative steps rather than imperative.
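The 'Background' recommendation above can be illustrated with the contact-form feature used earlier: the shared Given step moves out of the scenario into a Background block that runs before each scenario.

```gherkin
Feature: Codoid Website Feature

  Background:
    Given I am on home page

  Scenario: Contact Codoid Team
    When I submit contact us form
    Then I should see Thank You page with a message
```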

Declarative

Given I pass the header information for SSN
When the client requests the URL with data
Then the response code should be 200
  

Imperative

Given I pass the header information for SSN
When the client requests POST "<ServiceURL>" with json data "<RequestFile>"
Then the response code should be 200
And the SSNcached result should be same as valid transaction response "<ResponseFile>"
  

Tagging

Grouping test scenarios from different features is a must; tagging is important in order to select tests for different execution types. Nowadays there are many test case management plugins for Jira, and updating automated test execution results in Jira is vital. Managing manual and automated execution results in a test case management tool helps collect test metrics and improve test coverage. As one of the leading test automation companies, we tag BDD scenarios with a test ID, environment (qa, stage & prod), and testing purpose (smoke, integration & regression). Using the test ID, we can push the automated test execution status to the test case management tool.
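Following that convention, a scenario might carry its test ID, environment, and purpose as tags; the tag names below are illustrative:

```gherkin
@TC-1042 @qa @smoke
Scenario: Contact Codoid Team
  Given I am on home page
  When I submit contact us form
  Then I should see Thank You page with a message
```

Running `behave --tags=@smoke features/` would then execute only the smoke-tagged scenarios.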

Feature Narrative Description

Feature: Title (one line describing the story)

  Narrative Description: As a [role], I want [feature], so that I [benefit]

Step Definition Dos and Don'ts

  • Don't call a step definition method from another step definition method.
  • Never hard-code any test data in step definitions.
  • Learn how regular expressions are used to match a Gherkin step to a step definition method.
  • Chain assertions to improve code readability.

assertThat(fellowshipOfTheRing).filteredOn( character -> character.getName().contains("o") )
                               .containsOnly(aragorn, frodo, legolas, boromir);
      

Use Cucumber lambda expressions to simplify step definitions.

Given("I login as (.*)$", (String name) -> System.out.println(name));
      

Reporting

BDD reporting is important for troubleshooting failures and collecting test automation metrics. You can use Serenity BDD reporting, ExtentReporter for BDD, Report Portal, or Allure Reporting.

In Conclusion

We, as a test automation services company, use the Cucumber, Behave, SpecFlow, and Serenity BDD frameworks in our automation testing projects. Following automation testing best practices has helped us deliver high-quality test automation scripts to our clients with precision. We suggest you prepare a training plan for your automation testers and make them understand how anti-patterns impact test automation. BDD scenarios that are too long add no value to the team and increase script maintenance. If your team is comfortable using a DSL, the output of your test automation scripting will be concise, enabling effective collaboration between developers, testers, and business.