Introducing Reporting Test Driven Development (RTDD)

In the era of "[…] Driven Development" trends such as BDD, TDD, and ATDD, it is also important to keep the end goal of testing in sight: the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g., which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. A lack of on-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc.

 

Introducing Reporting Test Driven Development (RTDD)

To address the above pains (and I am sure they are not the only ones), I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind – that is, the test reports – they can collect that value both at the end of each test cycle and earlier, during the test planning phase.

When teams leverage a test design pattern that assigns their tests custom contextual tags – annotations such as "Regression", "Login", "Search" and so forth that wrap an entire test execution or a single test scenario – the test suites suddenly become better structured, easier to maintain, and simple to include, exclude, or filter through at the end of a run.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter and drill down to the root cause in a more efficient manner.

If you think about it, using annotations to manage and filter test executions is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
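
Since the original Guru99 snippet is not reproduced here, the following is a minimal sketch (my own, not the original example) of how TestNG priority and group annotations can tag Selenium tests from day one; the URL, element locators, and tag names are illustrative assumptions.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class TaggedLoginTests {

    private WebDriver driver;

    @BeforeClass(alwaysRun = true)
    public void setUp() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // illustrative URL
    }

    // Runs first, and only when the "Sanity" or "Login" groups are selected
    @Test(priority = 1, groups = {"Sanity", "Login"})
    public void loginPageLoads() {
        Assert.assertTrue(driver.findElement(By.id("username")).isDisplayed());
    }

    // Lower priority, pulled in only for full regression runs
    @Test(priority = 2, groups = {"Regression", "Login"})
    public void invalidLoginShowsError() {
        driver.findElement(By.id("username")).sendKeys("wrong-user");
        driver.findElement(By.id("password")).sendKeys("wrong-pass");
        driver.findElement(By.id("loginButton")).click();
        Assert.assertTrue(driver.findElement(By.id("error")).isDisplayed());
    }

    @AfterClass(alwaysRun = true)
    public void tearDown() {
        driver.quit();
    }
}
```

Including or excluding a group such as "Regression" in a testng.xml groups section (or via the groups property of the TestNG/Surefire runner) then filters the run by tag – exactly the include/exclude behavior described above.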

Reverse-engineering a large existing test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

The following table, which divides the various tags into three levels, can serve as a reference that can be used immediately, either through the built-in tagging and annotations of TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, picture that same suite tagged according to the three categories below (a minimal TestNG sketch follows the list):

  1. Execution-level tags
    1. This tag can encapsulate an entire build or CI job's testing activities, or differentiate tests by the framework in which the scripts were developed. It is the highest classification level of tag you would use.
  2. Test-suite-level tags
    1. This is where you start breaking up your test factory according to more specific identifiers, such as the mobile environment or the high-level functionality under test.
  3. Logical-test-level tags
    1. These are the most granular tag identifiers, defined per logical test step, and they make it easy to filter results, triage failures, and plan ongoing regressions based on code changes.
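
As a rough illustration only (the tag names are made up for this example), here is how the three levels could map onto plain TestNG groups: an execution-level tag applied at the class level, with suite-level and logical-test-level tags on the individual methods.

```java
import org.testng.annotations.Test;

// Execution-level tag: applies to every test in the class (e.g. the CI job or framework)
@Test(groups = {"CI-Nightly-Build"})
public class CheckoutFlowTests {

    // Suite-level tags ("Checkout", "Android") plus a logical-test-level tag ("AddToCart")
    @Test(groups = {"Checkout", "Android", "AddToCart"})
    public void addItemToCart() {
        // ... test body ...
    }

    // Same functional area, different environment and logical step
    @Test(groups = {"Checkout", "iOS", "PaymentValidation"})
    public void payWithStoredCard() {
        // ... test body ...
    }
}
```

A testng.xml suite can then include or exclude any of these groups per run, so the same code base serves the CI job, the regression suite, and a narrow functional re-test without duplication.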

As a reference implementation of an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful when used correctly with its listeners, predefined tags, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards shown below and faster defect resolution, with drill-down by both test case and platform under test.

Code Sample: Using the Geico RWD Site with the Reporting TDD SDK (Source: My Personal Git)
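
The referenced code sample itself is not reproduced here, so the snippet below is a from-memory sketch of how such a reporting SDK is typically wired into a Selenium/Appium test. The class names follow Perfecto's open-source Reportium Java client as I recall it, but exact signatures may differ between versions, and the project, job, tag, and URL values are illustrative.

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.Job;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import com.perfecto.reportium.test.TestContext;
import com.perfecto.reportium.test.result.TestResultFactory;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ReportedTest {

    public void runReportedScenario(RemoteWebDriver driver) {
        // Execution-level context: project, CI job, and high-level tags for the whole run
        PerfectoExecutionContext context = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                .withProject(new Project("RTDD Demo", "1.0"))
                .withJob(new Job("Nightly-Regression", 45))
                .withContextTags("Regression")
                .withWebDriver(driver)
                .build();
        ReportiumClient reportiumClient = new ReportiumClientFactory()
                .createPerfectoReportiumClient(context);

        // Logical-test-level tags travel with the individual test into the report
        reportiumClient.testStart("Get a quote - happy path", new TestContext("Quote", "Login"));
        try {
            reportiumClient.stepStart("Open landing page");
            driver.get("https://www.example.com"); // illustrative URL
            reportiumClient.stepEnd();

            reportiumClient.testStop(TestResultFactory.createSuccess());
        } catch (RuntimeException e) {
            reportiumClient.testStop(TestResultFactory.createFailure("Scenario failed", e));
            throw e;
        }
    }
}
```

The report produced this way can then be filtered by those same tags per CI job, build, or functional area – the on-demand dashboards described above.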

 

Digital Dashboard Example With Predefined ContextTags (source: Perfecto)

 

Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other report into a more customizable one that, as demonstrated above, can deliver the following outcomes:

  • Better structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
  • Shift tag based tests into planned test activities (CI, Regression, Specific functional area testing, etc.)
  • Easily filter large volumes of test data and drill down into specific failures per test, per platform, per test result, or by group
  • Eliminate flaky tests through high-quality visibility into failures

The result is a methodology-based RTDD workflow that is much easier to maintain than before.

Happy Testing (as always)!

Criteria for Choosing The Right Open-Source Test Automation Tools

Last night I presented a session at a local Boston meetup hosted by BlazeMeter, together with my colleague Amir Rozenberg.

The subject was the shift from legacy to open-source frameworks: the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes three layers:

  1. Open-source connectivity to a lab
  2. Open-source test coverage capabilities (e.g., can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)
  3. Open-source reporting and analysis capabilities

During the session, Amir also presented an open-source BDD/Cucumber-based test framework called Quantum (http://projectquantom.io).

Full presentation slides can be found here:

Happy Reading

Eran & Amir

Model-Based Testing and Test Impact Analysis

In my previous blogs and over the years, I have already stated how complicated, demanding, and challenging the mobile space is; it therefore seems obvious that there needs to be a structured method for building test automation and meeting test coverage goals for mobile apps.

While there are various tools and techniques, in this blog I would like to focus on a methodology that has been around for a while but was never adopted in a serious, scalable way by organizations, both because it is extremely hard to accomplish and because there are not enough tools – particularly non-proprietary, open-source ones – that support it.

First things first, let's define what model-based testing is.

Model-based testing (MBT) is an application of model-based design for designing, and optionally also executing, artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.

Fig 1: MBT workflow example (source: Wikipedia)

In the mobile context, a typical end-user workflow with an application usually starts with logging in to the app, performing an action, going back, performing a secondary action, and often even a third action based on the output of the second. Complex, huh?
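
To make the idea concrete, here is a small, self-contained sketch (not tied to any MBT product; the screen names are invented) that models such a flow as a directed graph of screens and enumerates the paths a generator could turn into test cases.

```java
import java.util.*;

public class AppFlowModel {

    // Each screen maps to the screens reachable from it (a tiny model of the SUT)
    private static final Map<String, List<String>> MODEL = Map.of(
            "Login",       List.of("Home"),
            "Home",        List.of("Search", "Account"),
            "Search",      List.of("Results", "Home"),
            "Results",     List.of("ItemDetails", "Home"),
            "Account",     List.of("Home"),
            "ItemDetails", List.of()
    );

    // Depth-first enumeration of paths from a start screen; each full path is a candidate test case
    static void walk(String screen, Deque<String> path, List<List<String>> out) {
        path.addLast(screen);
        boolean extended = false;
        for (String next : MODEL.getOrDefault(screen, List.of())) {
            if (!path.contains(next)) {        // avoid cycles such as Home -> Search -> Home
                extended = true;
                walk(next, path, out);
            }
        }
        if (!extended) {
            out.add(new ArrayList<>(path));    // dead end in the model = one generated test case
        }
        path.removeLast();
    }

    public static void main(String[] args) {
        List<List<String>> testCases = new ArrayList<>();
        walk("Login", new ArrayDeque<>(), testCases);
        testCases.forEach(tc -> System.out.println(String.join(" -> ", tc)));
    }
}
```

A real MBT tool does the same thing at much larger scale, attaching test data and expected results to each transition, but the principle is the same: derive executable test cases from a model of the app rather than writing them one by one.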

Modeling a mobile application serves a few business goals:

  1. App release velocity
  2. App testing coverage (use cases)
  3. App test automation coverage (%)
  4. Overall app quality
  5. Cross-team synchronization (Dev, Test, Business)

 

As I covered in an older blog I wrote, mobile apps behave differently and support different functionality depending on the platform they run on (e.g., not every iOS device supports Touch ID and/or the 3D Touch gesture). Therefore, being able not only to model the app and generate the right test cases, but also to match these tests across the different platforms, can be key to achieving many of goals 1–5 above.

In the market today there are various commercial tools that can assist with MBT, such as CA Agile Requirement Designer, Tricentis Tosca, and others.

An example provided by one of the commercial vendors in the market (Tricentis) shows a common MBT workflow: a team aims to test an application, so they scan it with the MBT tool to "learn" its use cases, capabilities, and other artifacts, and stack these into a common repository that the team can use to build test automation.

Fig 2: Tricentis Tosca MBT tool

In Fig. 2, Tricentis examines a web page to learn all of its options, objects, and other data-related items. Once the app is scanned, it can easily be converted into a flow diagram that serves as the basis for test automation scenarios.

With the above goals in mind, it is important to understand that having an MBT tool that serves the automation team is a good thing only if it increases the team's efficiency, its release velocity, and the overall test automation coverage. If the output of such a tool were a long list of test cases that either does not cover the most important user flows or includes many duplicates, it would not serve the purpose of MBT; rather, it would delay existing processes and add unnecessary work to teams that are already under pressure.

In addition to the commercial tools above, there is an older but free tool called MobiGuitar that enables Android MBT with Robotium. This tool not only offers MBT capabilities but also code coverage driven by the generated test scripts.

A best practice in that regard would be to use an MBT tool that can generate all the application-related artifacts – the app object repository and the full set of use cases – and allow all of that to be exported to leading open-source test automation frameworks such as Selenium, Appium, and others.

Mobile Specific MBT – Best Practices and Examples

Drilling down into a workflow that CA recommends around MBT, it would look as follows – though in reality, the steps below are easier said than done for a mobile app compared to web and desktop:

  1. The business analyst creates the story using a tool such as CA Agile Requirement Designer (see more examples below)
  2. The story is then passed to an ALM tool (e.g.: CA Agile Central [formerly Rally], Jira, etc.) for project tracking
  3. Teams use the MBT tools to collaborate
    1. The automation engineer adds the automation code snippets to the nodes where needed or adds additional nodes for automation.
    2. The programmer updates the model for technical specs or more technical details.
    3. The Test Data engineer assigns test data to the model
  4. Changes to the story are synchronized with the ALM Tool
  5. Test cases are synchronized with the ALM Tool
  6. The programmer completes coding
  7. The code is promoted from Dev to QA
  8. Testing begins
    1. The tester uses the test cases with test data from MBT tools for manual test case execution
    2. The automation scripts with test data are executed for functional and regression testing

To learn more about efficient MBT solutions and practices, please refer to these sources:

Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at the Joe Colantonio AutomationGuild online conference – kudos to Joe for a great event!


Among the many interesting questions I got after my session – such as what the best test coverage for mobile projects is, and how to design effective non-functional and performance testing for mobile and RWD – I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior-Driven Development), which obviously triggered the question.

Definition: BDD and ATDD

ATDD – Acceptance Test-Driven Development

Based on Wikipedia's definition (referenced above), ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer’s needs prior to implementation and allow customers to be able to converse in their own domain language.

ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

BDD – Behavior-Driven Development

Again, based on Wikipedia's definition (referenced above), BDD is a software development process that emerged from test-driven development (TDD). Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process for collaborating on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at agile practices such as BDD, ATDD, and TDD is in the context of higher velocity and quality requirements.

Organizations aim to release to market faster, with great quality and sufficient test coverage, while at the same time meeting business goals and customer satisfaction. To achieve these goals, teams need to collaborate closely from the earliest app design and development stages.

Once organizations have the customer's product requirements and can start developing the product and the tests through user stories, acceptance criteria, and the like, several goals can be met:

  • High customer-vendor alignment == Customer satisfaction
  • Faster time to market, app is tested along the SDLC
  • Quality is in sync with customer needs and there are far fewer redundant tests
  • There are no communication gaps or barriers between Dev, Test, Marketing, and Management

 

Looking at the example below of BDD-based test automation code, it is very easy to understand the functionality and use cases under test, as well as the desired test outcome.

Quantum BDD scenario code sample

As can be seen in the screenshot above, the script installs and launches the TestApp.APK file on an available Samsung device, performs a successful login, and presses a menu item. As a final step, it performs a mobile visual validation to assert both that the test passed and, as an automation anchor, that the test code reached the expected screen.
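
Since the screenshot itself is not reproduced here, below is a minimal sketch of what such a scenario and its Java step definitions could look like in Cucumber; the step wording, app name, and step bodies are illustrative and are not the actual Quantum code.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

/*
 * Feature file (illustrative):
 *
 *   Scenario: Successful login on an available Samsung device
 *     Given the application "TestApp.apk" is installed and launched
 *     When I log in with user "demo" and password "secret"
 *     And I press the "Settings" menu item
 *     Then the "Settings" screen is displayed
 */
public class LoginSteps {

    @Given("the application {string} is installed and launched")
    public void installAndLaunch(String apk) {
        // install the APK on the allocated device and launch it (driver calls omitted)
    }

    @When("I log in with user {string} and password {string}")
    public void login(String user, String password) {
        // fill in the login form and submit
    }

    @When("I press the {string} menu item")
    public void pressMenuItem(String item) {
        // tap the named menu item
    }

    @Then("the {string} screen is displayed")
    public void verifyScreen(String screen) {
        // visual/text checkpoint that the expected screen was reached
    }
}
```

The Gherkin scenario stays readable for business stakeholders, while the Java glue code carries the device and driver logic – which is exactly why the same artifact can serve both BDD collaboration and executable regression tests.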

It is important to mention that the test frameworks and tools that support TDD, ATDD, and BDD can in many cases be the same; in our case above, one can develop and test from either a BDD or an ATDD standpoint using a Cucumber test automation framework (Cucumber, Quantum).

If we compare the above functional use case – or, in Cucumber language, "Scenario" – to one that would fit an ATDD-based approach, we would most likely need to introduce the well-known "3 amigos" approach: the three perspectives of customer (what problem are we trying to solve?), development (how might we solve this problem?), and testing (what about…?).

 

Since a real ATDD best practice determines the Gherkin-like app scenarios before development even starts, the BDD example above would serve as a precondition test for the app development team, ensuring they develop against acceptance criteria – in our example, a successful app install and login.

An additional example of an acceptance test, one that also involves a login/register layer, would look like this:

Acceptance test example (login/register)

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I answered at the event last week: BDD, ATDD, and TDD are all methods for better syncing the various parties involved in shipping a working product to market faster, with higher quality, and with the right functionality to meet customer requirements. Implementing them using the Gherkin method makes a lot of sense because of the easy alignment and the common language these parties use throughout the SDLC workflow.

Happy Testing!

What You Need To Know When Planning Your Test Lab in 2017

As we kick off 2017, I am thrilled to release the most up-to-date, sixth edition of the Digital Test Coverage Index report, a guide to help you decide how to build your test lab. 2016 was an exciting year in the digital space, and as usual, Q4 market movement is sure to impact 2017 development and testing plans. And the market does not appear to be slowing down, with continued innovation expected this year. In this post, I will summarize the key insights we saw last quarter, as well as a few important things projected for 2017 that should be applied when building your test lab.


Key Takeaways

  • Beta OS versions remain an important aspect of your test coverage strategy. With Apple releasing 5 different minor versions of iOS 10 since its release in September 2016, iPhone/iOS 10 beta is a must-include device/OS combination in your test lab. On the browser side, Chrome and Firefox beta versions are also critical test targets for sustaining the quality of your mobile web/responsive websites.
  • The Android fragmentation trend is changing, with Google putting pressure on device manufacturers to keep pace with the latest OS versions. As evidence, Android 6.x already has the greatest market share as of Q4 2016, at roughly 27%, followed by Android Lollipop. With Google releasing its first Pixel devices, the market is already starting to see a boost in Android 7 Nougat adoption, which is expected to grow to 2–5% market share during Q1 2017.
  • Galaxy S7 and S7 Edge were a turning point for Samsung: over the last year, Samsung has seen a revenue slowdown due, in part, to competition from both Apple and emerging Android manufacturers OnePlus, Xiaomi, and Huawei. With the launch of the Samsung S7 and S7 Edge, the company is regaining its position. We can see in this edition of the Index (and the previous one) that Samsung is the leading brand in many countries, which should impact test coverage plans in Brazil, India, the Netherlands, the UK, Germany, and the U.S.
  • Mobile app engagement methods are evolving, with various enterprises counting on the mobile platform to drive more revenue and attract more users. We are seeing greater adoption of external application integration, either through dedicated OS-level applications like iOS iMessage or through other solutions like the Google app shortcuts recently introduced as part of Android 7.1. These changes represent a challenge from a testing perspective, since there are now additional outside-of-app dependencies that Dev and QA teams need to manage.
  • Test lab size is expected to grow slightly year over year as the market matures: looking at the annual growth projection below, we see a slight increase in the need for 10-, 25-, and 32-device labs, based on new devices being introduced into the market faster than old devices are retired. What we see is an annual introduction of around 15 leading devices per year, with an average retirement of 5–7 per year (due to decreased usage, terminated vendor support, etc.). Integrating these numbers into the 30%–80% model yields the annual growth demonstrated in the following graph.

Annual test lab growth projection

 

2017 Trends

As this is the first Index of 2017, here are the most important market events that will impact both Dev and QA teams in the digital space, across mobile, web, or both.

New Players

The most significant player joining the mobile space in 2017 is Nokia. After struggling for many years to remain a relevant vendor, and after an unsuccessful run under the Windows Phone brand, Nokia is now back in the game with a new series of Android-based devices expected to be introduced at MWC 2017. A second player set to penetrate the mobile market is Microsoft, which is expected to introduce the first Microsoft Surface Phone during H1 2017.

Innovative Technologies

During 2017 we will certainly continue to see more IoT devices, smartwatches, and additional features from both Google and Apple in the mobile, automotive, and smart home markets. In addition, we might see the first foldable touch smartphone released to the market by Samsung, under the name "Samsung X". We should also see a growing trend of external app interfaces in various forms, such as bots, iMessage apps, app shortcuts, and voice-based features. The market refers to these trends as a result of "app fatigue", which is pushing organizations to innovate and change the way their end users interact with apps and consume data. From a testing perspective, this is obviously a change from existing methods and will require rethinking and new development of test cases. I addressed this in a recent blog – feel free to read more about it here.

Key Device Launches to Consider for an Updated Test Lab

Most of the below can be seen in the market calendar for 2017, but the highlights are listed here as well:

  • Samsung S8/S8 Edge, Samsung's flagship devices, are due by February 2017 and should be the successors of the highly successful S7/S7 Edge
  • iPhone 8/iPhone 8 Plus, together with the iOS 11 launch in mid-September 2017, will mark the 10th anniversary of the Apple iPhone series. This launch is expected to be a groundbreaking one for iOS users.
  • Huawei Mate 9/Mate 9 Pro – and the Huawei smartphone portfolio in general – is continuing its global growth. 2017 should continue that growth trend in China and India, but also, as seen in this Index report, in many European countries where we already see devices like the Huawei P8, P9, and others in use.

From a web perspective, we are not going to see any major surprises from the leading browsers such as Chrome, Firefox, and Safari. However, for the Microsoft Edge browser we expect a significant market share uptick as more and more users adopt Windows 10 and abandon legacy Windows OS machines.

2017 mobile and web market calendar

 

In the Index report, you may find all the information necessary to better plan for 2017, as well as market calendars for both mobile and the web, plus a rich collection of insights and takeaways. DOWNLOAD HERE.

Happy Testing in 2017!

How Can You Adapt to the Increased Mobile Testing Complexity Caused by External App Context Features?

If you have followed my blogs, white papers, and webinars over the past few years, you are already familiar with the most common challenges around mobile app testing, such as:

  • Device/OS proliferation and market fragmentation
  • Ability to test the real end-user environment within and outside of the app
  • Testing both visual/UI aspects and native elements of the app
  • Keeping up with the agile release cadence while maintaining high app quality
  • Testing for a full digital experience across mobile, web, IoT, etc.

 

While the above are addressed to some degree by various tools, techniques, and guidelines, there is a growing trend in the industry, on both the iOS and Android platforms, that adds another layer of complexity for testers and developers. With iOS 10 and Android 7 as the latest OS releases, but also with earlier versions, we are starting to see more ways to engage with an app outside of the app itself.


If we look at the recent change made in iOS 10 around iMessage, it is clear that Apple is trying to give mobile app developers better engagement with their end users even outside of the app itself. Heavy messaging users can remain in the app/screen they are using and respond quickly to notifications from external apps in various ways.

This innovation is a clear continuation of the Force Touch (3D Touch) functionality introduced with iOS 9 and the iPhone 6S/6S Plus, which also allows users to press the app icon without opening the full app and perform a quick action, such as writing a new Facebook status, uploading an image to Facebook, or other app-related activities.

Add to the above the recent Android 7.1 app shortcuts support, which allows users to create shortcuts on the device screen for app capabilities they commonly use. Another example is the Android 7.0 split-window feature, which allows an app to consume one half or one third of the device screen while the remaining screen is allocated to a different app that might compete with yours for hardware/system resources.

So What Has Changed?

Quick answer – A lot 🙂

As I recently wrote in my blog on mobile test optimization, test planning for different mobile OS versions is becoming more and more complex and requires a solid methodology, so that teams can direct the right tests (manual/automated) to the right platforms based on the app's supported features and the devices' capabilities. Testing app shortcuts (see the example below), for instance, is obviously irrelevant on Android 7.0 and below, so the test matrix/decision tree needs to accommodate this.

Android 7.1 app shortcuts example
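
One lightweight way to encode that rule in the automation suite is to skip capability-specific tests at run time. Below is a minimal TestNG sketch, assuming the platform version is available from the driver's capabilities (the lookup here is a stub):

```java
import org.testng.SkipException;
import org.testng.annotations.Test;

public class AppShortcutTests {

    @Test(groups = {"AppShortcuts", "Android"})
    public void launchSearchViaAppShortcut() {
        double osVersion = getDevicePlatformVersion(); // e.g. read from the Appium capabilities
        if (osVersion < 7.1) {
            // App shortcuts only exist from Android 7.1, so this test is not applicable here
            throw new SkipException("App shortcuts are not supported on Android " + osVersion);
        }
        // ... long-press the launcher icon, pick the shortcut, assert the target screen ...
    }

    private double getDevicePlatformVersion() {
        // illustrative stub; in a real suite this would come from the allocated device
        return 7.1;
    }
}
```

Skipped tests still show up in the report, so the decision tree stays visible instead of silently shrinking the coverage.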

To be able to test these different app contexts, you need to make sure you have the following capabilities in place from a tooling perspective, and that you include the following test scenarios in your test plan.

  1. Testing tools must now support not only the app under test but also the full device system, in order to engage with system popups, iMessage apps, the device screen for force-touch-based testing, and so on (a minimal Appium sketch follows this list)
  2. The test plan, in whatever tree or tool it is managed, needs to accommodate the variance between platforms and devices and allow relevant testing of apps –> features –> devices (see my blog referenced above for more insights)
  3. New test scenarios should be considered if your app leverages such capabilities
    1. What happens when incoming events like calls or text messages occur while the app interacts within an iMessage/split-screen/shortcut context, and what happens when these apps receive other notifications (on the lock screen or within the unlocked device screen)?
    2. What happens to the app under degraded environment conditions such as loss of network connection or flight mode being turned on? Note that apps like iMessage rely on network availability
    3. If your app engages with a 3rd-party app, take into account that these apps are also exposed to defects that are not under your control (Facebook, iMessage, others). If they do not work well or crash, you need to simulate such scenarios early in your testing activities and understand the impact on your app and business
    4. Apps that work with iMessage, as an example, might require a different app submission process and might be part of a separate binary build that needs to be tested properly – take this into account
    5. Since the above complexities all depend on market and OS releases, make sure that any Beta version released gets proper testing by your teams to ensure no regressions occur
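
As a minimal illustration of the system-level access mentioned in item 1, the Appium Java client can leave the app under test and open the Android notification shade; the capabilities, package name, and notification text below are illustrative assumptions.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class NotificationCheck {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Samsung Galaxy S7");    // illustrative
        caps.setCapability("appPackage", "com.example.testapp");  // illustrative
        caps.setCapability("appActivity", ".MainActivity");

        AndroidDriver driver = new AndroidDriver(new URL("http://localhost:4723/wd/hub"), caps);
        try {
            // Leave the app context and open the system notification shade
            driver.openNotifications();

            // Inspect whatever notification arrived while the app was in use
            boolean hasMessage = !driver.findElements(
                    By.xpath("//*[contains(@text,'New message')]")).isEmpty();
            System.out.println("Incoming notification visible: " + hasMessage);

            // Return to the app and continue the functional flow
            driver.navigate().back();
        } finally {
            driver.quit();
        }
    }
}
```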

I hope these insights help you plan for a trend that I see growing in the mobile space and that, in my opinion, adds an extra layer of challenge to existing test plans.

Comments are always welcome.

Happy Testing!

Mobile Testing On Real Devices Vs. Emulators

Though it seems the debate over the importance of testing on real devices – and basing a Go/No-Go release decision only on real devices – is over, I am still being asked: why is it important to test on real devices? What are the emulators' limitations?

In this blog I will try to summarize some key points and differences that might help answer these questions.

Emulator limitations vs. real devices

End Users Use Real Devices, Not Emulators

A mobile app developed and deployed to the market is not intended to be used on desktops with a mouse and keyboard, but on real devices with small screens, limited hardware, RAM, storage, and many other unique attributes. Testing on a different target than the one end users will use simply exposes organizations to quality, security, performance, and other risks.

End users engage with the application through unique gestures such as Touch ID, Force Touch, and voice commands, and they operate their mobile apps alongside many other background apps and system processes. These conditions are simply hard to mimic on emulators, or are not supported by them at all.

As also seen in the visual above, emulators do not carry the real hardware that a real device does – including the chipset, screen, sensors, and so forth.

Platform OS Differences

Mobile devices run a different OS flavor than the one that runs on emulators. Think about a Samsung device, or one launched by Verizon, T-Mobile, AT&T, or another large carrier – the platform versions that run on those devices are far different from the ones that run on emulators.

Thinking about devices and carriers, note that real devices receive plenty of notifications – push notifications, location alerts, incoming text messages (WhatsApp, etc.), Google Play Store/App Store app updates, and so forth. These are not relevant on emulators, and by not testing under these real environment conditions the test coverage is simply incomplete and wrong.

Real environment conditions example (Waze with background notifications)

The above image was actually taken from my own device when I was traveling to New York last week – look at the number of background pop-ups, notifications, and real conditions such as network, location, and battery while I simply use the Waze app. This is a very common scenario for most end users consuming any mobile app. There is no way to mimic all of the above on emulators in real time, under real network conditions, and so on.

Think also about varying network conditions – transitioning from Wi-Fi to a real carrier network, then adding a complete loss of network connection, which impacts location, notifications, and more.

Wasting a lot of time testing against the wrong platforms costs money, exposes risks, and is inefficient.

Innovative Use Cases Simulation

With the mobile OS platforms recently released to the market, including Android 7.1.1 and iOS 10.x, we are starting to see a growing trend of apps being used in different contexts.

Android 7.1.1 app shortcuts

With Android 7.1.1 we now have app shortcuts (image above), which allow developers to create a shortcut to a specific feature of the application. Something similar is already achievable with the iOS 9 force-touch capability. Add use cases such as the iMessage apps introduced in iOS 10 and the split window in Android 7.0, and you understand that an app can be engaged by the user either through part of the screen or from within a totally different app such as iMessage.

With such complexities, test plans are becoming more fragmented across devices and platforms, and the gap between what an emulator can provide developers and testers and what a real device in a real environment can provide keeps growing.

Bottom Line

Developers might find value in using emulators at a given stage of the app, and I am not taking that away – testing on an emulator within the native IDEs in the early stages is great. However, when thinking about the complete SDLC, release criteria, and test coverage, there is no doubt that real devices are the only way to go.

Don't believe me? Ask Google – https://developer.android.com/studio/run/device.html


Happy And REAL Device Testing 🙂

Joe Colantonio’s Test Talk: Mobile Testing Coverage Optimization

How does a company nowadays put together a comprehensive test strategy for delivering high-quality experiences for its applications on any device? I think this is the question I get asked most frequently, and it is the biggest challenge in today's market: how to tackle mobile and responsive web testing. The solution can be the difference between an app rated one star and an app rated five stars.


I had a lot of fun talking to Joe Colantonio from Test Talks about how to create a successful app, starting with my Digital Test Coverage Optimizer. Listen to the full talk to hear my ideas on moving from manual testing to automation, tracking the mobile market, the difference between testing on simulators and emulators versus real devices, and more.

https://joecolantonio.com/testtalks/110-mobile-testing-coverage-optimization-eran-kinsbruner/

 


Responsive Web: The Importance of Getting Test Coverage Right

When building your test lab as part of an RWD site test plan, it is important to strategically define the right mobile devices and desktop browsers to target for your manual and automated testing.

For mobile device testing, you can leverage your own analytics together with market data to complement your coverage and be future-ready, or use reports such as the Digital Test Coverage Index report.

For web testing, you should likewise look into your web traffic analytics or, based on your target markets, understand which desktop browsers and OS versions you should test against – alternatively, you can also use the Digital Test Coverage Index report referenced above.

Related Post: Set Your Digital Test Lab with Mobile and Web Calendars

Coverage is a cross-organizational priority where business, IT, Dev, and QA ought to be consistently aligned. You can see a recommended web lab configuration for Q1 2016 below, taken from the above-mentioned Index – note the inclusion of beta browser versions in the recommended mix, due to the silent-update nature of how these versions are deployed to end-user browsers.

Recommended web lab configuration, Q1 2016 (source: Digital Test Coverage Index)
For ongoing RWD projects – once the mobile and web test coverage is defined using the above guidelines, the next steps are of course to aim for parallel, side-by-side testing for high efficiency, and to keep the lab up to date by revising the coverage once a quarter, ensuring that both your analytics and market trends still match your existing configuration.

As a best practice and recommendation, please review the mobile device coverage model below, which is built out of three layers – Essential, Enhanced, and Extended – where each layer includes a mix of device types such as legacy, new, market leaders, and reference devices (like Nexus devices).

Mobile device coverage layers: Essential, Enhanced, Extended

To learn more, check out our new Responsive Web Testing Guide.


Tests to Include Within Automation Suite

When developing a mobile or desktop test automation plan, organizations often struggle with the right scope and coverage for the project.

In a previous post, I covered the test coverage recommendations for a mobile project; now I would like to expand on the topic of which tests to automate.

Achieving release agility with high quality depends today, more than ever, on continuous testing, which is achieved through proper test automation. However, automating every test scenario is neither feasible nor necessary to meet this goal.

In the table below we can see some very practical examples of test cases with various parameters, along with a Y/N recommendation on whether to automate them.

As shown below, and as a rule for mobile, web, and other projects, the key tests that should be added to an automation suite (from an ROI and time-to-market perspective) are the ones that are (a data-driven TestNG sketch follows the table):

  • Required to be executed against various data sets
  • Required to run against multiple environments (devices, browsers, locations)
  • Complex test scenarios (these are time-consuming and error-prone when done manually)
  • Tedious and repetitive test cases (a must to automate)
  • Tests that depend on various aspects (other tests, other environments, etc.)

Table: test case examples with automate (Y/N) recommendations
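
The first two criteria above (data sets and multiple environments) map naturally onto data-driven automation. Here is a minimal TestNG sketch in which one scenario runs across several data rows and browsers; the data values are illustrative.

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenLoginTest {

    // One row per combination of browser and credentials the scenario must cover
    @DataProvider(name = "loginData")
    public Object[][] loginData() {
        return new Object[][] {
                {"chrome",  "standard_user", "valid-pass", true},
                {"firefox", "standard_user", "valid-pass", true},
                {"chrome",  "locked_user",   "valid-pass", false},
        };
    }

    // The same scenario is executed once per data row instead of being cloned manually
    @Test(dataProvider = "loginData")
    public void login(String browser, String user, String password, boolean shouldSucceed) {
        // start the requested browser/device, perform the login, assert the expected outcome
    }
}
```

Writing the scenario once and feeding it data rows is what makes the "various data sets" and "multiple environments" criteria pay off in automation rather than becoming a manual-testing burden.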

Bottom line: automation is key in today's digital world, but doing it right and wisely can shorten time to market, reduce redundant resources, and save a lot of wasted R&D time chasing unimportant defects coming from irrelevant tests.

Happy Testing!