by admin | Mar 30, 2020 | Mobile App Testing, Fixed, Blog |
When testing a mobile application, the following are the important points to consider:
Re-usability:
When a web application and a mobile app with the same functionality are being built, choose tools that support both mobile app and web app development; this increases re-usability while building the solution, and the same principle applies to testing as well.
Maintainability:
Integrating code through version-control tools like Git or SVN, along with quality checks using tools like SonarQube, increases the likelihood of highly maintainable code. It also enables CI / CD / CT execution and helps in early defect detection.
Performance / Scalability:
When testing mobile apps, performance and scalability are important factors. The current industry expectation is that a user browsing content on a mobile phone should get the same performance as on a desktop or laptop, without any compromise on quality.
Common issues encountered on mobile phones include:
- The application behaves differently on various platforms and handsets
- High memory and CPU consumption
- The application is too slow to load
- The application drains the battery
Because of these issues, performance testing for mobile applications is indispensable.
Offline sync:
Offline mode is preferred by users for the reasons below, which is why it needs extensive testing:
- These apps work consistently without delays.
- Users can keep working without an internet connection.
- Apps with offline mode load quickly.
- Offline apps hardly drain a phone’s battery.
Many mobile users abandon a mobile app that lacks offline support: when the network connection is poor, the app does not load, which leads to a poor user experience.
Security:
- Test the mobile app on multiple devices, across multiple platforms, and over diverse networks
- Identify spyware, viruses, Trojans, data-privacy issues, data leakage, unsolicited network connections, etc.
- Validate whether the app meets standards such as the OWASP Mobile Application Security Verification Standard (MASVS)
Messaging:
There are various messaging technologies, such as SMS, MMS, Cell Broadcast SMS, and WAP Push, and each has different parameters to consider during testing. Each messaging system also has its own architecture and design. While testing a messaging system, we need to consider all of these parameters and apply the appropriate testing principles.
Social Integration:
- Most mobile apps are integrated with social media platforms like Facebook and Instagram
- Testing these social media touchpoints is mandatory, and their interfaces must be tested thoroughly
- When this integration is seamless, there is a high probability that users will stick with the mobile app
- It also increases the app's exposure and popularity
- Since data science and ML are current trends and most users stay on their smartphones, social integration makes it easier to capture user preferences, which helps data analytics and revenue generation
Service Testing:
Most mobile apps use services as their communication layer for faster data transmission and a better user experience. Testing the service layer is essential, and performance testing of that layer is usually recommended as well. JMeter is the most popular tool for performance testing the API layer.
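JMeter is typically used for full load tests; purely as a lightweight illustration, a basic latency smoke check on a service endpoint can also be scripted in Python. The URL and the 500 ms threshold below are assumptions chosen for the example, not values from any real project.
import requests

API_URL = "https://api.example.com/v1/products"  # hypothetical endpoint
MAX_RESPONSE_MS = 500                            # assumed latency budget

def check_api_latency():
    response = requests.get(API_URL, timeout=5)
    # response.elapsed measures the time until the response headers arrived
    elapsed_ms = response.elapsed.total_seconds() * 1000
    assert response.status_code == 200, f"Unexpected status: {response.status_code}"
    assert elapsed_ms <= MAX_RESPONSE_MS, f"Too slow: {elapsed_ms:.0f} ms"

check_api_latency()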
Browser / Web Testing
Just as web testing is done for desktops, it is carried out for mobile devices as well. There are multiple ways to do this; the easiest way without a physical device is to use the mobile emulation options built into desktop browsers (for example, Chrome DevTools device mode).
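The same browser-based emulation can also be driven from Selenium for quick automated checks without a physical device. Below is a minimal sketch; the device name is an assumption and must exist in the browser's device list, and it assumes chromedriver is available on the PATH.
from selenium import webdriver

options = webdriver.ChromeOptions()
# Emulate a mobile device using Chrome's built-in device mode
options.add_experimental_option("mobileEmulation", {"deviceName": "Pixel 2"})

driver = webdriver.Chrome(options=options)
driver.get("https://codoid.com")
# The reported user agent should now be the emulated mobile device's
print(driver.execute_script("return navigator.userAgent"))
driver.quit()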
by admin | Mar 28, 2020 | Automation Testing, Fixed, Blog |
Python behave is a widely used BDD framework these days. In this blog article, you will learn about the behave BDD framework's features and how to use it to create automation test scripts. Behave is an open-source tool with 62 contributors who actively develop new features and fix issues. Python behave is a mature BDD framework.
Implementing a readable and business-friendly automation testing solution is the need of the hour. When your automation test suite allows non-technical people to collaborate, it adds value to your product/project. As an automation testing services company, we have received a lot of inquiries lately about implementing business-friendly test automation solutions.
Feature Files
behave is considered a clone of the Cucumber BDD framework. Most of behave's features are similar to Cucumber's; however, behave provides some additional flexibility. Let's have a look at them one by one. You can execute feature files in several ways: by providing a feature file name, by providing a feature directory, or by providing no feature path at all. When you run the behave command without a feature path, it searches for feature files in the features directory by default.
Fixtures
Fixtures, together with hooks, are used to run setup and cleanup code before and after a test run, feature, scenario, or tag. With a fixture you needn't write a separate method for the 'after' part: both setup and cleanup are implemented in a single generator function, with the cleanup code placed after the yield. Let's look at the example below.
Feature
@launch.browser
Feature: Codoid Website Feature

  Scenario: Contact Codoid Team
    Given I am on home page
    When I submit contact us form
    Then I should see Thank You page with a message
environment.py
from behave import fixture, use_fixture
from selenium import webdriver

@fixture
def launch_browser(context, timeout=30, **kwargs):
    # Setup: start the browser (adjust the chromedriver path to your local setup)
    context.driver = webdriver.Chrome(executable_path='drivers/chromedriver.exe')
    yield context.driver
    # Cleanup: runs once the tagged feature/scenario finishes
    context.driver.quit()

def before_tag(context, tag):
    if tag == "launch.browser":
        the_fixture = use_fixture(launch_browser, context)
step_def.py
from behave import given, when, then
from selenium import webdriver

@given('I am on home page')
def step_impl(context):
    context.driver.get("https://codoid.com")
    assert True is True  # placeholder assertion; replace with a real page check
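The feature file above also uses 'When I submit contact us form' and 'Then I should see Thank You page with a message'. A minimal sketch of those remaining step definitions is shown below; the form locators and the expected text are hypothetical placeholders, not the actual elements of the page.
from selenium.webdriver.common.by import By

@when('I submit contact us form')
def step_impl(context):
    # Hypothetical locators; replace with the real contact-form fields
    context.driver.find_element(By.NAME, "name").send_keys("QA Bot")
    context.driver.find_element(By.NAME, "email").send_keys("qa@example.com")
    context.driver.find_element(By.CSS_SELECTOR, "form button[type='submit']").click()

@then('I should see Thank You page with a message')
def step_impl(context):
    # Hypothetical check; adjust the expected text to the real Thank You page
    assert "Thank You" in context.driver.page_source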
Executing Remaining Steps in a Scenario
By default, scenario execution stops if any step fails. However, you can override this behaviour in behave using a hook and a command-line option.
Hook
from behave.model import Scenario

def before_all(context):
    userdata = context.config.userdata
    continue_after_failed = userdata.getbool("runner.continue_after_failed_step", False)
    Scenario.continue_after_failed_step = continue_after_failed
Command Line
behave -D runner.continue_after_failed_step=true features/
Filtering Scenarios and Examples using Tag Value
You can select scenarios and examples using a tag and its value. This is helpful when you want to run a particular example only for a given environment (stage) value.
Feature:
  Scenario Outline: Wow
    Given an employee "<name>"

    @use.with_stage=develop
    Examples: Araxas
      | name   | birthyear |
      | Alice  | 1985      |
      | Bob    | 1975      |

    @use.with_stage=integration
    Examples:
      | name   | birthyear |
      | Charly | 1995      |
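Tag-value selection such as @use.with_stage=develop relies on behave's active-tag mechanism, which is wired up in environment.py. Below is a minimal sketch based on behave's ActiveTagMatcher; it assumes the stage value defaults to "develop" and can be overridden on the command line with -D stage=integration.
from behave.tag_matcher import ActiveTagMatcher, setup_active_tag_values

# Default active-tag values; override with: behave -D stage=integration
active_tag_value_provider = {"stage": "develop"}
active_tag_matcher = ActiveTagMatcher(active_tag_value_provider)

def before_all(context):
    # Let -D name=value pairs on the command line override the defaults
    setup_active_tag_values(active_tag_value_provider, context.config.userdata)

def before_scenario(context, scenario):
    # Skip scenarios/examples whose @use.with_* tags do not match the active values
    if active_tag_matcher.should_exclude_with(scenario.effective_tags):
        scenario.skip(reason="Disabled by active tag")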
In Conclusion
As a test automation services company, we advocate the behave BDD framework for creating readable and understandable automation test suites. It is widely used by the automation community, and its contributors frequently release new versions based on market needs.
by admin | Feb 1, 2020 | Software Testing, Fixed, Blog |
This blog was written with the purpose of being a complete guide to usability testing. Usability testing ensures that the interface of your app is built to fit the end-user expectations and incorporates easy usage, learnability of the system, and user experience satisfaction. It has many dimensions where usability tests translate your app’s experience into a successful validation process.
So when do you need to conduct usability testing?
The earlier a defect is found in the Software Development Lifecycle (SDLC), the cheaper it is to fix. Since usability test results affect the design of the product, usability testing should start early and continue alongside the software as it changes throughout the SDLC. Continuous and rigorous testing yields the best results. As an internal process, the designers and developers on the project perform the testing, analysing the system so that the design and code can be modified according to the issues detected in the usability testing phase.
Usability tests validate whether the software works the way a user would like it to and whether it offers a comfortable, holistic experience. Here are some ways to approach usability testing:
- During the design phase – Draw your app/website design and evaluate its workability.
- In the build phase – Randomly test to determine the app’s usability factors.
- Hire real users to use your app/website and report results.
- Check statistics based on the input wireframes.
- Employ a QA services company that specializes in the usability testing field
Standard types of usability tests:
Guerilla usability testing: Set up your tests somewhere with a lot of foot traffic, which allows you to ask passers-by for feedback on your app/website to evaluate the user experience.
Unmoderated usability testing: Uses third-party software to recruit ideal sample users, who interact with your app/website in their natural environment while their task completion is recorded, giving you objective feedback.
Moderated usability testing: A moderator interacts with sample users in person or via video call to elaborate on their comments, help users understand tasks, and keep them on track.
Now let's discuss the 9 phases of usability testing:
Phase 1: Decide which part of your application/website you'll test. Do you have particular concerns about an interaction or workflow? Do you want to know what your users will do on your product page? List your app/website's pros and cons and create a concrete hypothesis for your testing.
Phase 2: Pick your test tasks. Your sample users' tasks should be similar to your end-users' goals when they interact with your app/website, for example, completing a purchase.
Phase 3: Set a level for success. Decide what you want to test and how you plan to check it and then accordingly set criteria to determine success levels for each task. Establishing a measurement for success/failure helps determine if your app/website’s user experience is intuitively designed.
Phase 4: Write scripts and plan tests. The purpose of the tests should be added to the beginning of your script and must include what you’ll be recording during the testing phase of your app/website. Gather knowledge of your sample user while they perform the tasks to make your test consistent and unbiased.
Phase 5: Assign roles. While testing usability, ensure that your moderator stays neutral while guiding sample users through the tasks in the script. During the testing, ensure notes are taken; these will later help extract insights to prove or disprove your hypothesis.
Phase 6: Identify your sample users. Recruit a sample user base and keep it small during every testing phase, ensuring it resembles your actual user base as closely as possible.
Phase 7: Conduct the tests. During the testing, check if your sample users can complete tasks one at a time without assistance because the results from the testing can diagnose the pros and cons of your design.
Phase 8: Analyze results. The results obtained after testing will help you discover problem patterns so you can assess the severity of each usability issue. While examining the data, please pay attention to user performance and their feelings about your app/website.
Phase 9: Review and report findings. Insights from your results will help lay out the next steps to improve your app/website's design. The right enhancements can then be made before the next round of testing.
In Conclusion:
Usability testing is therefore vital: it is a great technique to help the UX designers on your development team by giving them the necessary insight into how users interact with the final app/website. Employing one of the top QA companies to carry out your usability testing helps ensure that appropriate techniques are applied to improve the quality and design of your app/website and that the end product is user-centered. Codoid is considered an industry expert on usability, and we would be happy to help you out with your next launch.
by admin | Feb 4, 2020 | Automation Testing, Fixed, Blog |
Ask yourself, what do you want out of the test automation process? Without identifying your goals for test automation, you can't succeed. Since every team is different, define your goals through a set of gauges called success criteria, and assess each team's stage of maturity in terms of finance, regulation, technology, and skill.
Let's see how your success criteria can be converted into success as your testing teams shift to automation. Is automation your default practice? Don't automate every test; at the same time, it is highly beneficial to approach each test by asking why it should not be automated. The constant churn of code makes this worthwhile, and many teams find that in defining, redefining, and implementing standards for automation, they end up saving time. Keep learning new skills and automation techniques, and maintain a list of the conditions under which your current automation abilities can be used.
Automated tests are essentially clients of a System Under Test (SUT). Check whether your automated tests/test clients can execute anywhere and at any time, because constraints can sometimes force these tests to run only in a specific environment. Running automated tests on different machines quickly reveals which devices they work on, so you can fix them for the ones they don't. Developers should run tests on their own machines and use them for debugging, while testers, analysts, and product managers should be able to access tests and question aspects of the application. Flexible tests allow information to be gathered about the SUT by executing test-client code on various devices, with teams working through the environmental constraints.
Tests should execute without any human intervention, and you shouldn't have to push more than one button to get them going. Did you have to do anything before decision-makers could view the reports, or before you started the tests? If yes, then you're not meeting this criterion. Run tests frequently: when automation is not run automatically against changing code, the setup is inefficient and you won't get credible results. Have unit tests, and test your automation and system often through scheduled runs. Continuous Integration (CI) should be applied to test automation just as compilers check code, and tests should run publicly. If a test case that failed once passes during a rerun, double-check the defect that showed up in the first instance, because good testers don't ignore anomalies. Communicate what should be changed in your application by coding tolerance for it. Make your testing reliable and trustworthy for it to be considered successful.
Inferior automation can drive automation engineers to waste time debugging code rather than providing timely, accurate information to developers and other stakeholders. Your automation engineers should focus on building new tests instead of maintaining existing automation. If automation needs codebases, environments, agents, and systems, these should be freed up and kept ready for testing at all times so that they do not hinder the process. Sincere and transparent test automation reports will help your developers act quickly. Ensure the team produces multiple test cases per day, and use that as a check on whether there are problems with the commitment to the automation and testing process.
Reassess roles and responsibilities when updating automation goals, and treat automation as a development activity. Someone on the team must be accountable for fixing issues with automation, because when no one owns it, automation fails. Your automation team should use software development best practices: store automation in source code control, maintain existing automation, plan for future automation, and fix failing tests immediately. Account for technical debt in test automation by adhering to coding standards and practices.
Create test automation before, or in parallel with, the code being tested. A distinct vision of success for each team will help you assess whether your team is ready. If it is, then you'll be prepared to work in a Continuous Delivery (CD) world, and your end-users will get reliable, precise, and on-time insight into your applications. QA companies like Codoid have adapted their practices to help clients strategize goals and achieve a successful implementation of their automation testing services.
by admin | Feb 2, 2020 | Automation Testing, Fixed, Blog |
This blog article intends to review PractiTest, one of the best test management tools available in the market; it helps you ease the testing and development process. Sharing quick feedback with the team after QA activities is important for delivering high-quality software to end-users. Sharing test results as an Excel sheet attachment has become obsolete; your managers need to view and analyse results quickly in order to make decisions. If you present the test results in a web dashboard instead, the entire team can easily understand the outcome of a testing activity as soon as it is complete. Let's see how you can manage test cases and share test results in PractiTest.
PractiTest is an application lifecycle management platform. You can manage requirements, test plans, test cases, and issues effectively, and share the test results with everyone. If your team has already started developing the application without a test management tool, or is fed up with its current tool, you can easily import the existing artifacts into PractiTest and start managing your tests effectively.
You can link the defined requirements with test cases to achieve traceability, and PractiTest provides end-to-end visibility. When you perform test execution, the status is reflected in the Test Library and Requirements modules in PractiTest.
Using a complicated test case management tool always increases maintenance effort; the tool should be simple enough to define and manage test cases. The idea in PractiTest is to separate the test cases that are created, imported, and stored in the Test Library from the Test Sets & Runs, where you group those tests according to relevancy (for example, cycle or sprint) and run them as instances. Unlike other systems that work with static folders, PractiTest uses dynamic filter trees (based on system and custom fields) to organize all testing data. With filter trees, users can slice and dice their data as needed and create customized dashboards and reports for other stakeholders based on the data relevant to them.
Jira is the most widely used project management tool, and a test management tool without Jira integration adds no value to your team or your project. In PractiTest, you can import requirements from Jira just by providing the Jira ticket number. When you make changes to a Jira ticket, they are reflected in PractiTest immediately. You can also create a Jira bug ticket while executing your test cases; the link between the test case and the Jira issue is established automatically when you create a Jira bug from PractiTest. PractiTest provides a two-way Jira integration, i.e. you can view the test execution status of linked test cases in the Jira ticket.
You can set an execution time for each test case. This feature helps determine the difference between expected and actual test execution time. When you induct new QA engineers into your team, these durations help them understand how much time should be spent on each test case.
One of the main ideas of PractiTest is to give visibility into the entire testing process, including automation. The system integrates with automation using the REST API and FireCracker. You can update a test case's status by calling the PractiTest API from your automation testing framework; since this is an API call, you can integrate PractiTest into any test automation codebase. The following test case statuses can be sent from an automated test script: PASSED, FAILED, BLOCKED, NOT COMPLETED, and NO RUN.
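As a rough illustration of what such a call might look like from Python, the sketch below posts a run result for a test-set instance. The endpoint path, payload structure, IDs, and credentials shown here are assumptions made for illustration only; consult PractiTest's REST API documentation for the exact format.
import requests

PROJECT_ID = "4563"           # hypothetical PractiTest project id
INSTANCE_ID = "123456"        # hypothetical test-set instance id
API_TOKEN = "your-api-token"  # personal/project API token

def report_result(passed):
    # Illustrative endpoint and payload; verify against the PractiTest API docs
    url = f"https://api.practitest.com/api/v2/projects/{PROJECT_ID}/runs.json"
    payload = {
        "data": {
            "type": "instances",
            "attributes": {
                "instance-id": INSTANCE_ID,
                # exit code 0 is typically treated as PASSED, non-zero as FAILED
                "exit-code": 0 if passed else 1,
            },
        }
    }
    response = requests.post(url, json=payload,
                             auth=("user@example.com", API_TOKEN))
    response.raise_for_status()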
Firecracker allows you to take XML report files from any CI/CD tool and upload them into PractiTest. From now on, every time you have new build results, they will be sent to PractiTest where you will be able to see the results of your entire software development and deployment processes in one solution.
PractiTest integrations include the following systems: Jira Cloud, Jira Server & Datacenter, Pivotal Tracker, Bugzilla, YouTrack, Azure DevOps, CA Agile Central (Rally), FogBugz, GitHub, GitLab, Trac, Mantis, Assembla, Lighthouse, Redmine, Slack, SVN, Zapier, and Jenkins.
PractiTest has an Exploratory Testing module which allows you to define test charters and guide points for your exploratory sessions and save those guidelines as test cases for future reuse. When running an exploratory test case, you can document important points that arise in the annotation section, as well as report new issues or link existing ones.
If your team is not using Jira for its agile development process, then you can very well use PractiTest to store the artifacts. For example, you can use the Requirements module to define user stories and the Test Library module to create acceptance tests. You can use the Dashboard to keep your team up to date with the status of the sprint in general and of each user story in particular.
Without a proper test report, your team cannot make appropriate decisions during a release, yet spending time manually preparing test reports is wasteful; quick, quality feedback is what helps your team. PractiTest provides pre-defined report templates (Tabular Report, Tabular Aggregated Report, and Detailed Report). You can also modify report columns and graphs based on your needs and schedule reports to be sent on a daily, weekly, or monthly basis.
Let’s say your testers are writing test cases for two functionalities which are almost identical, then you need to reuse the test steps instead of duplicating them. PractiTest allows you to call test steps from another test to enable reusability.
As a leading software testing company, we have used a wide range of software testing tools for our QA projects. If you are planning to evaluate test management tools, don't forget to add PractiTest to your list. It comes in three versions: Professional, Enterprise, and Unlimited. You can sign up for a 15-day trial period for a thorough evaluation. Please note: for automation testing integration, we recommend the Enterprise plan.
We hope you’ve enjoyed reading this blog article as much as we’ve enjoyed writing it. Contact Us for your QA needs.
A fool with a tool is still a fool.
by admin | Feb 23, 2020 | Automation Testing, Fixed, Blog |
Nowadays most companies invest a huge amount of money in automation testing, but the success rate in test automation is very low (around 25%) for various reasons. In this blog, we are going to discuss the most common challenges that organizations face in automation and the best strategies which, when implemented, will yield better results.
Below is the list of challenges that are widely faced in Automation:
- Lack of Continuous Monitoring
- Management / Stakeholder guidance
- Lack of Knowledge in Functional Testing team
- Lack of Understanding of What to Automate
- Automation team skillset and Training
- Attrition Rate and Ability to retain good resources
- Lack of Team bonding and co-operation
- Lack of Ownership
- Right Tool Strategy
- Right Framework standards
- Lack of Development practice with good code review process
- Quantity conscious rather than Quality conscious
- Lack of Governance
Lack of Continuous Monitoring
Automation should be treated as a development practice. Just as development is monitored on a continuous basis, automation testing progress should also be monitored and closely watched. Below are the continuous monitoring activities by which every development effort is governed; the same should apply to automation as well.
- Reviewing the quality of work
- Validating the delivery accomplishment as per the agreed plan
- Evaluating unforeseen risk based on the outcome
- Identifying activities that need changes as priorities shift from time to time
- Expected vs. actual result assessment
- Statistics of project progress
These activities apply to automation testing as well and need to be assessed and evaluated from time to time.
Management and Stakeholder guidance
Management and stakeholder reviews should be conducted from time to time, with guidance provided to achieve success. The key people in such reviews should include:
- Automation Lead
- Functional Testing Lead
- Program sponsor
- Development Lead and Manager
- Testing team Manager
- Automation Architect if such a role exists
Such meetings should be conducted at a scheduled frequency to discuss achievements, hurdles, and solutions to those hurdles.
Lack of Knowledge in Functional Testing team
Around 30% of automation failures are due to a lack of functional knowledge in the functional testing team. The functional team should be in a position to guide what to automate and what not to automate, should understand the challenges in automation, and should help the automation team target the right areas. Below are the expectations of the functional testing team for achieving good automation results.
Identify the right area to Automate
1) Instead of automating the UI, there may be an API that carries the data to the UI; the API can be automated instead. API automation is easier and simpler, so focus should be given to such areas
2) If the functional team does not have this understanding before taking up the test automation activity, discuss with key stakeholders and identify such areas
Test Case Revamping
When there are at least 100+ test cases to automate, the functional team should come up with scenarios that combine multiple test cases without compromising coverage. This drastically reduces the automation effort and also increases quality.
Test Data Management
1) Test data is a prerequisite for automating any test case; the functional team should add this requirement as part of the test case preparation activity
2) The team should also have a clear idea of how to generate the data before taking up the automation work.
Basic Feasibility
Based on the nature of the application and the dynamics involved, the functional team can suggest which applications should be automated and describe the feasibility of automation.
Lack of Understanding of What to Automate
Many organizations approach automation based on funding allocation rather than on an understanding of prioritization. This leads to poor automation with low ROI. Since time is not invested in deciding what to automate, the focus shifts towards quantity rather than quality.
Test Automation Team Skillset and Training
As a test automation services company, we always encourage our automation testers to learn continuously. The automation team needs to learn new technologies, new methodologies, and better coding standards year on year, which requires frequent workshops and reviews. We also need to enable the existing functional team to move into automation, which reduces the attrition rate. Skill improvement and training should therefore be a continuous process; teams that do these activities excel in their outcomes.
Attrition Rate and Ability to retain good resources
Retaining key players is a big challenge in any industry, and in automation it is essential to take measures to retain resources. Below are a few ways to reduce the attrition rate.
- Rewards and recognition for star performers
- Identifying the right resource for the right skill
- Job rotation if the same job can be done by multiple resources
- Giving Exposure to the team to key stakeholders
- Monetary benefits
- Giving space for recreation and fun activities to improve morale
Lack of Team Bonding and Co-operation
On many occasions, a lack of team bonding leads to confusion and a lack of ownership. We need to identify skill sets and interest levels when allocating activities, and team-building exercises should be conducted to create better bonding.
Right Tool Strategy
Automation testing tool procurement is decided based on organization policy, fund allocation, and application technology. Before finalizing a tool, the following areas should be considered:
- A tool feasibility study across technology platforms has to be conducted before making the tool decision
- The capability of the tool beyond automation (such as integration with other tools), plug-and-play support, etc. should be assessed
- Availability of resources with the proposed tool skill, both in the market and within the organization
- Availability of technical help through forums or through the tool vendor should be assessed
- Tool vendor support during the procurement process, such as the waiting period for evaluation, support from their team on technical feasibility, and training for the organization
- Rate each tool and vendor against these criteria and arrive at the right tool strategy for the organization.
Right Framework Standards
Before doing automation across applications, form a team for organization-wide framework development and maintenance, which helps create a dedicated focus on this activity. As the organization matures, this framework will also mature and provide good ROI during DevOps implementation.
This dedicated team can assess the different technology stacks involved and create frameworks based on those technologies.
Since the delivery and framework teams are separate, delivery is seamless and the framework improves continuously.
Lack of Development practice with a good code review process
Automation work should be treated as a development practice. The use of tools for configuration management, automated code inspection, build management, and continuous execution is highly recommended.
This helps in the early detection of code quality issues so they can be fixed from time to time, and it also makes it easy to identify weaker resources and provide adequate training.
Quantity conscious rather than Quality conscious
Management should be quality conscious rather than quantity conscious. Record the usage of existing scripts month on month to track and increase ROI. Keep an eye on what improvements have been made and what has been done differently for each application each month to improve quality. Always update the lessons learned from each activity to implement better strategies.
Lack of Governance
In many governance discussions, numbers and metrics are tweaked to meet expectations, and hence there is less focus on improvement.
Governance should instead involve constructive discussion of the challenges faced, the solutions adopted, and other opportunities to improve and implement, rather than focusing on metrics alone.
Strategies for Test Automation:
- A proper assessment on application and technology stack
- Right tool strategy by doing a proper feasibility study
- Right framework approach, keeping in mind scalability and adaptability
- Retaining skilled resources
- Conducting training and upskilling resources
- Infrastructure and Environment for Automation
- Identifying the right areas for automation rather than focusing only on UI
- Constructive Governance rather than pure metrics-driven
- Stakeholder Involvement