Introducing Reporting Test Driven Development (RTDD)

In the era of “[…] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep sight of the end goal of testing, and that is the quality analysis phase.

In many of my engagements with customers, and also from my own experience as a practitioner, I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. There are no on-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc.


Introducing Reporting Test Driven Development (RTDD)

In an attempt to address the above pains – which I am sure are not the only related ones – I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect that value at the end of each test cycle as well as earlier, during the test planning phase.

When teams can leverage a test design pattern that assigns their tests custom contextual tags – annotations such as “Regression”, “Login”, “Search”, and so forth, wrapping an entire test execution or a single test scenario – the test suites suddenly become better structured and easier to maintain, and can be included, excluded, or filtered through at the end of an execution.
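As an illustrative sketch of this pattern (the class, method, and tag names below are my own, not from any specific SDK): TestNG offers this out of the box via `@Test(groups = {...})`, and the underlying idea can be shown dependency-free with a custom annotation carrying the contextual tags and a reflection-based filter implementing the include behavior:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

// Hypothetical stand-in for TestNG's @Test(groups = {...})
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Tags {
    String[] value();
}

// Illustrative test class tagged by suite ("Regression", "Smoke")
// and by functional area ("Login", "Search").
class LoginTests {
    @Tags({"Regression", "Login"})
    public void validLoginShowsDashboard() { /* ... drive app, assert dashboard */ }

    @Tags({"Regression", "Search"})
    public void searchReturnsResults() { /* ... perform search, assert results */ }

    @Tags({"Smoke", "Login"})
    public void loginPageLoads() { /* ... assert login page renders */ }
}

public class TagFilter {
    // Return the names of test methods carrying the requested tag –
    // the "include group" behaviour a runner would apply at execution time.
    static List<String> testsTagged(Class<?> suite, String tag) {
        List<String> selected = new ArrayList<>();
        for (Method m : suite.getDeclaredMethods()) {
            Tags tags = m.getAnnotation(Tags.class);
            if (tags != null && Arrays.asList(tags.value()).contains(tag)) {
                selected.add(m.getName());
            }
        }
        Collections.sort(selected);
        return selected;
    }

    public static void main(String[] args) {
        // Select by suite-level tag, then by functional area:
        System.out.println(testsTagged(LoginTests.class, "Regression"));
        System.out.println(testsTagged(LoginTests.class, "Login"));
    }
}
```

The same selection can then drive which scenarios run in CI and how the resulting report is grouped.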

In addition, when the entire suite is customized with tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter them and drill down to the root cause more easily and efficiently.

If you think about it, using annotations to manage test executions and filter them is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.

Reverse-engineering tags into a large existing test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we examine the following table, which divides various tags into 3 levels, it can serve as a reference that can be used immediately, either through the built-in tagging and annotations coming from TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now imagine the same test suite tagged according to the above 3 categories:

  1. Execution level tags
    1. This tag can encapsulate the entire build or CI-job-related testing activities, or it can differentiate the tests by the test framework in which the scripts were developed. This is the highest classification level of tags that you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory down by more specific identifiers, such as your mobile environment or the high-level functionality under test.
  3. Logical test level tags
    1. These are the most granular tag identifiers, which you would define per logical test step to make it easy to filter results, triage failures, and plan ongoing regressions based on code changes.
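As a sketch of how such levels come together at execution time, a TestNG suite file can include or exclude whole groups per run (the suite, class, and group names below are illustrative, not from the original post):

```xml
<suite name="NightlyRegression">
  <test name="AndroidRegression">
    <groups>
      <run>
        <!-- Suite-level tag selected for this run -->
        <include name="Regression"/>
        <!-- Smoke-only scenarios are skipped here -->
        <exclude name="Smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
    </classes>
  </test>
</suite>
```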

As a reference implementation of an RTDD solution – beyond the basic TestNG implementation, which can be very powerful when used correctly with its listeners, predefined groups, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.
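TestNG's listener interfaces (e.g. `ITestListener` with its `onTestSuccess`/`onTestFailure` callbacks) are what would feed such a report. As a dependency-free sketch of the aggregation step those listeners drive (class and method names here are illustrative), a tiny per-tag result collector might look like this:

```java
import java.util.*;

// Aggregates pass/fail counts per contextual tag – the raw material
// of a tag-based quality dashboard. In a real suite, a TestNG listener
// would call record(...) from its success/failure callbacks.
public class TagDashboard {
    private final Map<String, int[]> byTag = new TreeMap<>(); // tag -> {passed, failed}

    public void record(boolean passed, String... tags) {
        for (String tag : tags) {
            int[] counts = byTag.computeIfAbsent(tag, t -> new int[2]);
            counts[passed ? 0 : 1]++;
        }
    }

    public String summary() {
        StringBuilder sb = new StringBuilder();
        byTag.forEach((tag, c) ->
            sb.append(String.format("%-12s passed=%d failed=%d%n", tag, c[0], c[1])));
        return sb.toString();
    }

    public static void main(String[] args) {
        TagDashboard d = new TagDashboard();
        d.record(true,  "Regression", "Login");
        d.record(false, "Regression", "Search");
        d.record(true,  "Smoke", "Login");
        System.out.print(d.summary());
    }
}
```

Filtering that summary by tag is exactly the drill-down a manager or developer would perform on the resulting dashboard.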

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards seen below and faster defect resolution, with drill-down by both test case and platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal Git)


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other report into a more customizable one that, as demonstrated above, can help them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Use tagging from early test authoring onward as a method for faster triage and fix prioritization
  • Turn tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result, or by group
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

JAMO Solutions – Mobile test automation tool


I am sure that most mobile testers have heard about JAMO Solutions, but if not, I will briefly introduce this solution.

In past blogs I mentioned a few powerful automation tools such as Perfecto Mobile, Keynote DeviceAnywhere, SeeTest by Experitest, and FoneMonkey (MonkeyTalk).

Except for MonkeyTalk, all of the others are commercial tools and not free.

Jamo is also a commercial, non-free solution, which comes in various licensing models from a company based in Belgium.

The solution is based on plug-ins for various IDEs such as QTP (HP), Eclipse, and Visual Studio, which then allow connectivity over USB or Wi-Fi, through a device manager, to the mobile platform (iOS, Android, BlackBerry, or Windows Mobile), on which an agent is also installed to provide full control and automation via record/replay, easy text-input utilities, etc.

For Android device automation using Eclipse, you can see a short demo of the tool and its methodology:

In the same manner as in the video above, users can automate their scenarios through QTP or Visual Studio for the various OS platforms.

For more information or inquiries, please contact Jamo through their website:



Complexities in automating tests on mobile platforms


In previous blogs I shared some of the difficulties in running a mobile project, along with a few leading cloud-based solutions that can help deal with some of the difficulties I raised.

In this blog I would like to highlight a few of the problems that an automation team developing tests for mobile should be aware of in advance.

On the one hand, when developing automation we want to isolate our test environment and keep it free of interruptions and noise, in order to get consistent, flawless test runs without complicating the test scripts. On the other hand, when developing automation (especially for mobile), we actually want to introduce as many interrupts as possible during the test run, since these are exactly the issues a real user will face when running the application under test.

So – what is the proposed solution?

IMHO, developing robust automation for mobile requires a controlled test environment with the right number of mobile handsets per platform, while adding to the “formula” automation that triggers possible interferences (such as an incoming call, an incoming SMS or email, low-battery events, loss of connectivity, screen orientation changes, etc.).

These days we can use the Android Emulator, for example, to simulate an incoming call or SMS, roaming, a network change, or a location change (using the Android DDMS tool with its location configuration), all within our test automation environment. We also have tools that act as a private cloud within a closed network (e.g. SeeTest by Experitest), which allow us to test on multiple devices at the same time using parallel scenarios.
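As a sketch of such interrupt injection (the wrapper class below is hypothetical, but `adb emu gsm call`, `adb emu sms send`, and `adb emu power capacity` are standard Android emulator console commands forwarded via adb), a small Java helper could build and fire these commands mid-test:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Hypothetical "interrupt injector" for emulator-based test runs.
// Command construction is separated from execution so it can be
// exercised without a connected emulator.
public class EmulatorInterrupts {

    // Build the adb invocation that forwards a console command to the emulator.
    static List<String> emuCommand(String... consoleArgs) {
        List<String> cmd = new java.util.ArrayList<>(List.of("adb", "emu"));
        cmd.addAll(Arrays.asList(consoleArgs));
        return cmd;
    }

    static List<String> incomingCall(String number) {
        return emuCommand("gsm", "call", number);
    }

    static List<String> incomingSms(String number, String text) {
        return emuCommand("sms", "send", number, text);
    }

    static List<String> lowBattery(int percent) {
        return emuCommand("power", "capacity", String.valueOf(percent));
    }

    // Fire the interrupt; requires adb on PATH and a running emulator.
    static void run(List<String> cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) {
        // While the app under test is in the foreground, one would call e.g.:
        // run(incomingCall("5551234"));
        // run(lowBattery(5));
        System.out.println(incomingCall("5551234"));
    }
}
```

Triggering these during a scripted scenario lets the same functional test double as an interrupt-handling test.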

The bottom line is: automating tests on mobile is hard on its own (you need to handle detection of unique objects, dynamic events, and much more), but the tests we develop should also be relevant to the mobile world and combine mobile interrupts such as those mentioned above. Otherwise we can only assure that the application runs OK on the device; once interrupts start to show up, we have no coverage at all, because we isolated the test platforms from the “world”.

Another complexity of mobile automation (on top of the variety of platforms to support – iOS, Android, etc.) is the portability of the developed tests across mobile handsets. We already know that solutions such as Perfecto Mobile use the ScriptOnce keyword-based development language to overcome this and allow a develop-once, run-everywhere approach, and today we also see other tools, such as PhoneGap, Codename One, and Magic Software’s uniPaaS, which aim to cover development and automation in a cross-platform environment.

In future blogs I will cover each of the above cross-platform solutions, explain how they work, and describe their advantages.

A while ago I wrote a presentation about mobile test automation which is still relevant and complements this blog – see the link:


Eran Kinsbruner