Android Privacy Policy May Break Your Test Automation Scripts

Last month, Google announced plans to purge Play Store apps that do not include a privacy policy covering the security permissions the app requires upon installation.

Behind that requirement, Google is trying to give its users maximum transparency into what an app requires and what data it collects when they use it.

An example of a native app that has already implemented this requirement is the State Farm insurance app; see below.

So, with that simple request to mobile Android app developers come a few quality implications.

Immediate Implications and Requirements

  1. Revise and continuously maintain your test code
    1. The above screen obviously was not planned for in the last test automation cycle, which means a new cycle will get stuck and fail, since this is a new screen requesting a user action. Teams ought to develop new test steps that, upon initial app installation, verify the following: when the user taps Accept, the app launches successfully, and when the user taps Decline, the app closes (see the test sketch after this list).
    2. Coverage matrix implications: existing test suites should cover the above new scenarios on all supported platforms (device/OS combinations).
  2. Varying permissions across platforms
    1. Most apps will require unique permissions that are (hopefully!) actually used and required for the app to function (see the visual below from iUbenda).
    2. Different OS versions of iOS and Android might behave differently and support different security features, as in Android 6.0 and above (Doze, permission groups, etc.).
  3. Compatibility of device/OS features and permissions
    1. What happens in the above regard once an app requires even a normal-level permission, e.g. USE_FINGERPRINT? Since that permission resides in the normal group, it is granted automatically; however, what if the DUT (device under test) does not support the feature? How do teams differentiate test execution, in an automated way, based on device capability? Matching device features to test cases as part of a dynamic test execution can be a powerful agile capability, especially in the increasingly fragmented mobile market.
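
To make the first implication concrete, here is a minimal sketch of such test steps, assuming TestNG with the Appium Java client. The package name and the dialog's button IDs ("accept_button", "decline_button") are hypothetical and must be adapted to your app under test:

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.testng.Assert;
import org.testng.annotations.Test;

// Minimal sketch: validate the privacy-policy dialog shown on first launch.
// The driver is assumed to be initialized against a fresh install in a
// @BeforeMethod; all resource IDs below are hypothetical examples.
public class PrivacyPolicyDialogTest {

    private AndroidDriver driver;

    @Test
    public void acceptLaunchesApp() {
        driver.findElement(By.id("com.example.app:id/accept_button")).click();
        // After accepting, the app should reach its main screen
        Assert.assertEquals(driver.getCurrentPackage(), "com.example.app");
    }

    @Test
    public void declineClosesApp() {
        driver.findElement(By.id("com.example.app:id/decline_button")).click();
        // After declining, the app should no longer be in the foreground
        Assert.assertNotEquals(driver.getCurrentPackage(), "com.example.app");
    }
}
```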



As seen above, from a testing perspective, Android apps that use “dangerous permissions” require Dev/Test teams to develop and validate different use cases or device/OS behavior test cases on Android 6.0 and above (API level 23+) versus Android 5.1 and below (API level < 23), as sketched below.
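
One way to handle that split in a single suite is to branch on the platform version at runtime. Below is a sketch, not a definitive implementation: the `platformVersion` capability is standard Appium, while the permission-dialog button ID is the stock packageinstaller one found on many Android 6-9 devices and should be verified against your target devices:

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;

// Sketch: accept a runtime permission dialog only where one is expected.
// On Android 6.0+ (API 23+) dangerous permissions trigger runtime dialogs;
// on Android 5.1 and below they are granted at install time.
public class PermissionFlowHelper {

    public static void handleRuntimePermission(AndroidDriver driver) {
        String version = String.valueOf(
                driver.getCapabilities().getCapability("platformVersion")); // e.g. "6.0"
        int major = Integer.parseInt(version.split("\\.")[0]);

        if (major >= 6) {
            // Stock permission dialog on many Android 6-9 builds (verify per device)
            driver.findElement(
                    By.id("com.android.packageinstaller:id/permission_allow_button")).click();
        }
        // Below API 23 there is no dialog to handle; the test proceeds directly.
    }
}
```

A similar guard can key off device capabilities rather than OS version, for example by checking the output of `adb shell pm list features` for `android.hardware.fingerprint` before scheduling a USE_FINGERPRINT-dependent test on a given device.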



Introducing Reporting Test Driven Development (RTDD)

In the era of “[…] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep sight of the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and from my own experience as a practitioner, I constantly hear the following pain points:

  1. Test executions are not broken down by context and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge (e.g., which tests find more bugs than others?)
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. There are no on-demand quality dashboards reflecting app quality per CI job, per app build, per tested functional area, etc.


Aiming to address the above pains, which I am sure are not the only related ones, I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value at the end of each test cycle as well as earlier, during the test planning phase.

When teams leverage a test design pattern that assigns custom contextual tags to their tests, wrapping an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search”, and so forth, the test suites suddenly become better structured and easier to maintain, and can be included, excluded, or filtered at the end of each execution.
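
As a sketch of that include/exclude step, TestNG can be driven programmatically with group filters (the class and group names below are hypothetical illustrations):

```java
import org.testng.TestNG;

// Sketch: run only tests tagged "regression" while skipping those tagged
// "flaky". The same filter can equally live in a testng.xml file.
public class RegressionRunner {
    public static void main(String[] args) {
        TestNG tng = new TestNG();
        tng.setTestClasses(new Class[] { SearchTests.class, LoginFlowTests.class });
        tng.setGroups("regression");     // include only the "regression" group
        tng.setExcludedGroups("flaky");  // exclude the "flaky" group
        tng.run();
    }
}
```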

In addition, when the entire suite is customized with tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter them and drill down to the root cause more efficiently.

If you think about it, using annotations as a method to manage and filter test executions is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
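
For instance, a minimal TestNG sketch in that spirit (test names and groups are illustrative):

```java
import org.testng.annotations.Test;

// Tags (groups) and priorities declared at authoring time make the suite
// filterable later, e.g. running only the "sanity" group.
public class SearchTests {

    @Test(priority = 1, groups = { "sanity", "search" })
    public void searchFieldIsDisplayed() { /* ... */ }

    @Test(priority = 2, groups = { "regression", "search" })
    public void searchReturnsRelevantResults() { /* ... */ }
}
```

Such groups can then be included or excluded per run, for example via a `<groups>` section in testng.xml or TestNG's `-groups` command-line flag.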

Reverse engineering a large test suite is painful, hard to justify, and most often too late: by then the product is already out, and the teams are left struggling with the four pain points mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

The following table, which divides the various tags into three levels, can serve as a reference that can be used immediately, either through the built-in tagging and annotations of TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, imagine the same test suite tagged according to the above three categories (a code sketch follows the list):

  1. Execution level tags
    1. This tag can encapsulate the entire build or the CI-job-related testing activities, or it can differentiate the tests by the framework in which the scripts were developed. This is the highest classification level of tag you would use.
  2. Test suite level tags
    1. This is where you start breaking down your test factory according to more specific identifiers, such as your mobile environment or the high-level functionality under test.
  3. Logical test level tags
    1. These are the most granular tag identifiers, defined for each logical test step, to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
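
To make the three levels concrete, here is one possible mapping onto plain TestNG groups (all tag names are illustrative). Since class-level groups are inherited by every test method, the execution-level tag is applied automatically:

```java
import org.testng.annotations.Test;

// Execution-level tag on the class (inherited by all methods); suite-level
// and logical-level tags on each method.
@Test(groups = { "ci-nightly-build" })                     // execution level
public class LoginFlowTests {

    @Test(groups = { "android-regression", "login" })      // suite + logical level
    public void validLoginReachesHomeScreen() { /* ... */ }

    @Test(groups = { "android-regression", "login-negative" })
    public void lockedAccountShowsErrorMessage() { /* ... */ }
}
```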

As a reference implementation of an RTDD solution, in addition to the basic TestNG implementation (which can be very powerful when used correctly with its listeners, predefined tags, and more), I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards shown below and faster defect resolution, with drill-down by both test case and platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal GIT)
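
The embedded sample above is an image; as a rough approximation of what it shows, here is a hedged sketch against the open-source Reportium SDK's public API. The project name, tags, and flow are illustrative, and the exact method signatures should be checked against the SDK version you use:

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import com.perfecto.reportium.test.TestContext;
import com.perfecto.reportium.test.result.TestResultFactory;
import org.openqa.selenium.remote.RemoteWebDriver;

// Sketch: wrap a Selenium flow with reporting calls and context tags so the
// resulting report can be filtered by execution-, suite-, and test-level tags.
public class QuoteFlowReportingExample {

    public void run(RemoteWebDriver driver) {
        PerfectoExecutionContext context =
                new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                        .withProject(new Project("Insurance RWD", "1.0"))
                        .withContextTags("regression", "quote") // execution-level tags
                        .withWebDriver(driver)
                        .build();
        ReportiumClient reportium =
                new ReportiumClientFactory().createPerfectoReportiumClient(context);

        reportium.testStart("Get auto quote", new TestContext("sanity", "web"));
        try {
            reportium.stepStart("Open home page");
            driver.get("https://www.geico.com");
            reportium.stepEnd();

            reportium.testStop(TestResultFactory.createSuccess());
        } catch (Exception e) {
            reportium.testStop(TestResultFactory.createFailure(e.getMessage(), e));
        }
    }
}
```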


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or another format into a more customizable test report that, as demonstrated above, can help them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
  • Shift tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, platform, test result, or group
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained far more easily than before.

Happy Testing (as always)!