Android Privacy Policy May Break Your Test Automation Scripts

Last month, Google announced plans to purge Play Store apps that do not include a privacy policy covering the security permissions the app requests upon installation.

Behind that requirement, Google is trying to give its users maximum transparency into what the app requires and what data it collects while they use it.

An example of a native app that already implements this requirement is the State Farm insurance app; see below.

So, with that simple request to mobile Android app developers come a few quality implications.

Immediate Implications and Requirements

  1. Revise and continuously maintain your test code
    1. The above screen was obviously not planned for in the latest test automation cycle, which means a new cycle will get stuck and fail, since this is a new screen requesting a user action. Teams ought to develop new test steps that, upon initial app installation, verify the following: when the user clicks Accept, the app launches successfully, and when the user clicks Decline, the app closes (see the sketch after this list).
    2. Coverage matrix implications: existing test suites should cover this new scenario on the supported platforms – device/OS combinations.
  2. Varying permissions across platforms
    1. Most apps will require unique permissions that are (hopefully!) actually used and required for the app to function (see the visual below from iUbenda).
    2. Different OS versions of iOS and Android might behave differently and support different security features, such as Doze and permission groups in Android 6.0 and above.
  3. Compatibility of device/OS features and permissions
    1. What happens, in the above regard, once an app requires even a normal-level permission, e.g. USE_FINGERPRINT? Since this permission resides in the normal group, it is granted automatically – but what if the DUT (device under test) does not support the feature? How do teams differentiate test execution in an automated way according to device capability? Matching device features to test cases as part of dynamic test execution can be a powerful agile capability, especially in the increasingly fragmented mobile market.
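
To make the first point concrete, here is a minimal, hedged sketch of how the Accept/Decline check could look with the Appium Java client; the element IDs and activity name are hypothetical and must be replaced with your own app's identifiers.

import io.appium.java_client.android.AndroidDriver;
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;

public class PrivacyPolicyScreenTest {

    // Assume the driver is initialized elsewhere (e.g. in a @Before method)
    // against a fresh install of the app, so the privacy screen is shown first.
    private AndroidDriver driver;

    @Test
    public void acceptLaunchesApp() {
        // Tapping Accept should land the user on the app's main screen
        driver.findElement(By.id("privacy_accept")).click();
        Assert.assertTrue(driver.currentActivity().contains("MainActivity"));
    }

    @Test
    public void declineClosesApp() {
        // Tapping Decline should close the app, leaving no app activity in the foreground
        driver.findElement(By.id("privacy_decline")).click();
        Assert.assertFalse(driver.currentActivity().contains("MainActivity"));
    }
}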

 

 

As seen above, from a testing perspective, Android apps that use "dangerous" permissions require the Dev/Test teams to develop and validate different use cases or device/OS behavior tests on Android 6.0 and above compared to Android 5.1 and below (i.e. Android API level < 23).
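
For that Android 6.0+ split, one hedged approach in an Appium-based suite is to branch on the platform version reported by the driver and dismiss the runtime permission dialog only where it exists; the dialog button ID below matches stock Android 6/7 images and may differ on OEM builds.

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import java.util.List;

public class RuntimePermissionHelper {

    /** Dismisses the Android 6.0+ runtime permission dialog if it is shown. */
    public static void allowPermissionIfPrompted(AndroidDriver driver) {
        // platformVersion comes back as e.g. "7.1"; parsing may need hardening for real use
        String version = String.valueOf(
                driver.getCapabilities().getCapability("platformVersion"));
        int major = Integer.parseInt(version.split("\\.")[0]);

        if (major < 6) {
            return; // Android 5.1 and below: permissions were granted at install time
        }
        // Button ID of the stock Android 6/7 permission dialog; OEM builds may differ
        List<WebElement> allow = driver.findElements(
                By.id("com.android.packageinstaller:id/permission_allow_button"));
        if (!allow.isEmpty()) {
            allow.get(0).click();
        }
    }
}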

 

 

Introducing Reporting Test Driven Development (RTDD)

In the era of "[..] Driven Development" trends like BDD, TDD, and ATDD, it is also important to keep the end goal of testing in mind, and that is the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience, I constantly hear the following pains:

  1. Test executions are not contextually broken down, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. A lack of on-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc.

 


In an effort to address the above pains – and I'm sure they are not the only ones – I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value at the end of each test cycle as well as earlier, during the test planning phase.

When teams leverage a test design pattern that assigns their tests custom contextual tags – wrapping an entire test execution or a single test scenario with annotations like "Regression", "Login", "Search" and so forth – the test suites suddenly become better structured, more easily maintained, and easy to include/exclude and filter at the end of an execution.

In addition, when the entire suite is classified by tags and annotations, management can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter and drill down to the root cause in an easier and more efficient manner.

If you think about it, using annotations as a method to manage and filter test executions is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
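
For readers who cannot see the referenced image, a minimal TestNG sketch of the same idea follows; the class, test names and tag values are illustrative only.

import org.testng.annotations.Test;

public class LoginSuite {

    @Test(priority = 1, groups = {"regression", "login"})
    public void validLoginShowsDashboard() {
        // ... Selenium/Appium steps for a valid login ...
    }

    @Test(priority = 2, groups = {"regression", "search"})
    public void searchReturnsResults() {
        // ... steps for the search flow ...
    }

    @Test(priority = 3, groups = {"sanity"})
    public void logoutReturnsToLoginScreen() {
        // ... steps for logging out ...
    }
}

The groups can then be included or excluded per run, for example via the testng.xml groups element or the -groups command-line switch.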

Reverse-engineering a large existing test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we examine the following table, which divides various tags into three levels, it can serve as a reference that can be used immediately, either through the built-in tagging and annotations of TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, think about that exact test suite tagged according to the following three categories (a code sketch follows the list):

  1. Execution level tags
    1. This tag can encapsulate the entire build or CI-job-related testing activity, or it can differentiate the tests by the test framework in which you developed the scripts. That is the highest classification level of tag you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory down by more specific identifiers, such as your mobile environment, the high-level functionality under test, etc.
  3. Logical test level tags
    1. These are the most granular tag identifiers, defined per logical test step, that make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
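
As a rough illustration (not a prescribed naming scheme), the three levels can be encoded with nothing more than TestNG group names; the tag values below are invented examples.

import org.testng.annotations.Test;

// Execution-level tags: the CI job and framework the whole class belongs to
@Test(groups = {"ci-nightly-android", "framework-appium"})
public class CheckoutRegressionTests {

    // Suite-level tags: environment and functional area under test
    @Test(groups = {"env-android-7", "area-checkout"})
    public void guestCheckoutCompletesSuccessfully() {
        // Logical-test-level tags would be attached to the individual steps in the report,
        // e.g. "cart", "payment", "confirmation"
    }

    @Test(groups = {"env-android-7", "area-checkout", "negative"})
    public void declinedCardShowsErrorMessage() {
        // ...
    }
}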

As a reference implementation of an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful if used correctly with its listeners, predefined tags and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards shown below and faster defect resolution that drills down by both test case and platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal Git)

 

Digital Dashboard Example With Predefined ContextTags (source: Perfecto)

 

Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI/unit and other CI-related tests to extend a legacy test report, a TestNG report or another format into a more customizable test report that, as I've demonstrated above, can allow them to achieve the following outcomes:

  • Better structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triage and fix prioritization
  • Shift tag-based tests into planned test activities (CI, regression, specific functional area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result, or by group.
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile First Indexing”, from Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly tool and guidelines available. A very nice interactive version is available here (see the images at the bottom of this post), and there is also an API which, thanks to Google, users can exercise before they write any code. Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel, there is no point in sending the same URL from every execution, because the result would be the same
  • Google basically returns "MOBILE_FRIENDLY" or not; I suggest setting the assertion on that value
  • The current API differs from the UI in that it only provides the mobile-friendly result (the UI also gives mobile and desktop page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages, as Google probably doesn't have a site-to-site secure connection to your network.

 

For developers and testers who are short on time, testing mobile friendliness repeatedly will probably simply not happen. That's why I integrated the Google Mobile-Friendly API into Quantum:

  • Added 2 Gherkin commands
// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
// If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"
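
For the curious, below is a rough sketch of what a step definition behind "I check mobileFriendly URL" could look like; it is not the actual GoogleMobileFriendlyStepsDefs.java, the API key is a placeholder, and the endpoint should be verified against Google's current URL Testing Tools API documentation.

import cucumber.api.java.en.Then;
import org.junit.Assert;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class MobileFriendlyStepsDefs {

    // Public Mobile-Friendly Test API endpoint at the time of writing;
    // replace YOUR_API_KEY with a real key from the Google API console.
    private static final String ENDPOINT =
            "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=YOUR_API_KEY";

    @Then("^I check mobileFriendly URL \"(.*)\"$")
    public void checkMobileFriendly(String pageUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("{\"url\": \"" + pageUrl + "\"}").getBytes(StandardCharsets.UTF_8));
        }
        // Read the JSON response as a raw string; a JSON parser would be more robust
        String body;
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            body = scanner.useDelimiter("\\A").next();
        }
        // The response carries a mobileFriendliness verdict: MOBILE_FRIENDLY or NOT_MOBILE_FRIENDLY
        Assert.assertTrue("Page is not mobile friendly: " + pageUrl,
                body.contains("MOBILE_FRIENDLY") && !body.contains("NOT_MOBILE_FRIENDLY"));
    }
}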

 

That’s it.

Ideas for future improvement:

  • You can automate the validation such that every click would trigger a check with Google behind the scenes.

Just for fun, here are some more screenshots of the detailed analysis for NFL.com:

 

[Screenshots: Google Mobile-Friendly detailed analysis of NFL.com]

Criteria for Choosing the Right Open-Source Test Automation Tools

Last night I presented a session at a local Boston meetup hosted by BlazeMeter, together with my colleague Amir Rozenberg.

The subject was the shift from legacy to open-source frameworks: the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes three layers:

  1. Open-source connectivity to a lab
  2. Open-source test coverage capabilities (e.g. can an open-source framework support system-level testing, visual analysis, real environment settings and more?)
  3. Open-source reporting and analysis capabilities.

During the session, Amir also presented an open-source BDD/Cucumber based test framework called Quantum (http://projectquantom.io)

Full presentation slides can be found here:

Happy Reading

Eran & Amir

Model-Based Testing and Test Impact Analysis

In my previous blogs and over the years, I have already described how complicated, demanding and challenging the mobile space is; therefore, it seems obvious that there needs to be a structured method for building test automation and meeting test coverage goals for mobile apps.

While there are various tools and techniques, in this blog I would like to focus on a methodology that has been around for a while but was never adopted in a serious and scalable way by organizations, largely because it is extremely hard to accomplish and there are not sufficient non-proprietary, open-source tools out there that support it.

First things first, let's define what Model-Based Testing is:

Model-based testing (MBT) is an application of model-based design for designing, and optionally also executing, artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.


Fig 1: MBT workflow example. Source: Wikipedia

In the context of mobile, if we think about an end user's workflow in an application, it will usually start with a login to the app, then performing an action, going back, performing a secondary action, and often even a third action based on the output of the second. Complex, huh?
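
To make the modeling idea tangible, here is a toy sketch (screen names invented) that represents such a flow as a directed graph and enumerates paths from the Login screen; each resulting path is a candidate test case that an MBT tool would generate, usually far more cleverly.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TinyAppModel {

    // Each screen maps to the screens reachable from it
    private static final Map<String, List<String>> MODEL = new LinkedHashMap<>();
    static {
        MODEL.put("Login", List.of("Home"));
        MODEL.put("Home", List.of("Action A", "Action B"));
        MODEL.put("Action A", List.of("Home", "Action C"));
        MODEL.put("Action B", List.of("Home"));
        MODEL.put("Action C", List.of());
    }

    public static void main(String[] args) {
        List<List<String>> testCases = new ArrayList<>();
        walk("Login", new ArrayList<>(), testCases);
        testCases.forEach(path -> System.out.println("Test case: " + String.join(" -> ", path)));
    }

    // Depth-first walk that stops when a screen repeats, collecting each complete path
    private static void walk(String screen, List<String> path, List<List<String>> out) {
        if (path.contains(screen)) {           // avoid looping forever on back-navigation
            out.add(new ArrayList<>(path));
            return;
        }
        path.add(screen);
        List<String> next = MODEL.getOrDefault(screen, List.of());
        if (next.isEmpty()) {
            out.add(new ArrayList<>(path));    // leaf screen: the path is a full test case
        } else {
            for (String n : next) {
                walk(n, path, out);
            }
        }
        path.remove(path.size() - 1);
    }
}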

The importance of modeling a mobile application serves few business goals:

  1. App release velocity
  2. App testing coverage (use cases)
  3. App test automation coverage (%)
  4. Overall app quality
  5. Cross-team synchronization (Dev, Test, Business)

 

As already covered in an older blog I wrote, mobile apps behave differently and support different functionality depending on the platform they run on (e.g., not every iOS device supports both Touch ID and/or the 3D Touch gesture). Therefore, being able not only to model the app and generate the right test cases, but also to match these tests to the different platforms, can be key to achieving many of goals 1-5 above.

In the market, today there are various commercial tools that can assist in MBT like CA Agile Requirement Designer, Tricentis Tosca, and others.

Looking at an example provided by one of the commercial vendors in the market (Tricentis) shows a common workflow around MBT. A team aims to test an application; therefore, they scan it using the MBT tool to "learn" its use cases, capabilities, and other artifacts, and stack these into a common repository that the team can use to build test automation.


Fig 2: Tricentis Tosca MBT tool

In Fig 2, Tricentis examines a web page to learn all of its options, objects, and other data-related items. Once the app is scanned, it can easily be converted into a flow diagram that serves as the basis for test automation scenarios.

With the above goals in mind, it is important to understand that having an MBT tool that serves the automation team is a good thing, but only if it increases the team's efficiency, its release velocity, and the overall test automation coverage. If the output of such a tool were a long list of test cases that either does not cover the most important user flows or includes many duplicates, then it would not serve the purpose of MBT, but rather delay existing processes and add unnecessary work to teams that are already under pressure.

In addition to the above commercial tools, there is an older but free tool that enables Android MBT with Robotium, called MobiGuitar. This tool offers not just MBT capabilities but also code coverage driven by the generated test scripts.

A best practice in this regard would be to use an MBT tool that can generate all the application-related artifacts – including the app object repository and the full set of use cases – and allow all of that to be exported to leading open-source test automation frameworks like Selenium, Appium, and others.

Mobile Specific MBT – Best Practices and Examples

Drilling down into the workflow that CA recommends around MBT, it would look as follows – in reality, the steps below are easier said than done for a mobile app compared to web and desktop:

  1. The business analysts will create the story using tools like CA Agile Requirement Designer or such (see below more examples)
  2. The story is then passed to an ALM tool (e.g.: CA Agile Central [formerly Rally], Jira, etc.) for project tracking
  3. Teams use the MBT tools to collaborate
    1. The automation engineer adds the automation code snippets to the nodes where needed or adds additional nodes for automation.
    2. The programmer updates the model for technical specs or more technical details.
    3. The Test Data engineer assigns test data to the model
  4. Changes to the story are synchronized with the ALM Tool
  5. Test cases are synchronized with the ALM Tool
  6. The programmer completes coding
  7. The code is promoted from Dev to QA
  8. Testing begins
    1. The tester uses the test cases with test data from MBT tools for manual test case execution
    2. The automation scripts with test data are executed for functional and regression testing

To learn more about efficient MBT solutions and practices, please refer to these sources:

Introduction to Android Espresso Testing and Spoon

The Espresso UI test automation framework is Google's de-facto testing platform for Android app developers.

The ease with which it can be used from within the Android Studio and IntelliJ IDEA IDEs makes it a powerful tool and differentiates it from other open-source cross-platform solutions such as Appium, as well as from commercial tools.

Before drilling into basic setup and execution of an Espresso simple test, let’s first understand some of the basics:

  • Espresso is an Android only test automation framework (not cross platform like Appium/Selenium)
  • Espresso requires a separate APK package running in parallel with the application under test
  • Espresso is not a dev-language-agnostic framework like Appium (which supports Java, JS, Python, C#, Perl)

Positive Motivations to Use Espresso

  • The Espresso framework is embedded into the entire dev workflow and IDE, which raises adoption and leverage
  • Espresso can be used for a quick post-commit validation of a fix or new code implementation, and also as part of a larger test scale within the CI workflow.
  • Espresso provides fast feedback to its users, which is a big advantage, since it runs on the device/emulator side by side with the app
  • Espresso supports annotations to determine the test execution scope (small/medium/large), which organizes the overall testing cycle for both dev and test
  • Espresso has a unique synchronization mechanism at its core, making tests less flaky and more robust. It moves to the next test step in the code only once the view is available on the device screen, as opposed to other tools that can easily fail without explicit timers, validation points and more.

Basic Espresso Framework Methods:

The Espresso framework allows the automation developer to manipulate the test using three concepts:

  1. View Matchers
  2. View Actions
  3. View Assertions

[Image: Espresso onView() matcher, action, and assertion definitions]

As seen in the definition above, given an onView(xxx) of a specific object on the app screen, an Action is performed and an Assertion is made to validate the end result.
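
In code, the pattern looks roughly like the snippet below (using the current androidx.test packages; projects from the era of this post used android.support.test instead, and the resource ID here is invented):

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

public class EspressoPatternExample {

    // R is the app module's generated resource class, available on the instrumentation test classpath
    void basicPattern() {
        onView(withId(R.id.submit_button))      // View Matcher: locate the view
                .perform(click())               // View Action: interact with it
                .check(matches(isDisplayed())); // View Assertion: validate the result
    }
}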

Espresso Setup

The setup within Android Studio is quite simple, and there is plenty of documentation in the Google community around it.

The developer edits the build.gradle file of the application under test to include the Espresso framework dependency, the JUnit version, and the instrumentation runner (see the example below).

[Image: build.gradle example with the Espresso dependencies and instrumentation runner]

Once the above is done, it is time to create the test class for the corresponding app.

This class will need to import a few libraries that are required by the Espresso test (example below):

[Image: Espresso test class import statements]

Test Code Implementation

In order to develop Espresso UI automation, the developer must have the unique object identifiers of the application under test.

To study the app objects (which Espresso locates via Hamcrest matchers), the developer can use various methods:

  1. The uiautomatorviewer tool that ships with the Android SDK
  2. The R.java file, which is dynamically generated and automatically stores all resource IDs
  3. An object spy within tools that support Espresso (Perfecto and others).

Looking at a simple TipCalculator application, you can see through the UIAutomator viewer that the text box object ID is named bill_value.

[Image: UIAutomator viewer showing the bill_value object ID]

In the R.java file, it will look like this (choose whichever method you find most comfortable):

[Image: the bill_value resource ID as generated in R.java]

When implementing the Espresso test code, we leverage the object ID as part of the onView method to perform a click prior to entering an input value into that text box.

[Image: Espresso code sample – onView() with a click action]

In order to type a value into the above Total Bill text box, we use the second method provided by Espresso, that is, perform:

[Image: Espresso code sample – perform() with typeText()]

Once we are done with the action, we want to make sure that the result of that action is as expected, and this is where the developer uses the assertion method, check:

[Image: Espresso code sample – check() assertion]
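
Putting the three steps together, a hedged end-to-end sketch could look as follows; only the bill_value ID comes from the example above, while the activity class, the tip_value ID and the expected result are assumptions for illustration (androidx.test packages shown; older projects used android.support.test and ActivityTestRule):

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.SmallTest;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
@SmallTest
public class TipCalculatorTest {

    // Assumed launcher activity of the app under test
    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void enteringBillShowsCalculatedTip() {
        // 1. Match the Total Bill text box by its resource ID and click it
        onView(withId(R.id.bill_value)).perform(click());

        // 2. Type a bill amount and close the keyboard so the result is visible
        onView(withId(R.id.bill_value)).perform(typeText("100"), closeSoftKeyboard());

        // 3. Assert on the calculated tip (ID and expected value are assumptions)
        onView(withId(R.id.tip_value)).check(matches(withText("15.00")));
    }
}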

Finally, once the entire test suite is implemented and ready – running the test from Android Studio is very simple.

Select the test class from the Edit Configurations menu in Android Studio and choose Run. Then select your target (an ADB-connected device, cloud devices, or emulators).

[Image: running the Espresso test class from Android Studio]

At the end of a test, a basic test report will be provided to the user.

Running Espresso Tests in Parallel – Using Spoon

No test engineer or developer will rest easy unless they have validated the functionality of their app on multiple devices and emulators. For that, there is another widely used tool called Spoon (there are also cloud-based solutions, as mentioned above, that support parallel execution on real devices). This tool collects the test results from all the target devices (those visible via adb devices) and aggregates them into one HTML view that can be easily investigated.

[Image: Spoon aggregated HTML test report]

In order to leverage Spoon, please download the Spoon Gradle plugin and install it. Post-installation, configure it as follows:

[Image: Spoon Gradle plugin configuration]

By default, Spoon will run your tests on all ADB-connected devices; however, if you want to run on specific devices and skip others, in order to reproduce a specific defect on one device, you can configure Spoon accordingly:

[Image: Spoon configuration for running on specific devices]

Good Luck!

Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at Joe Colantonio's Automation Guild online conference – kudos to Joe for a great event!


Among the multiple interesting questions I got after my session – like what is the best test coverage for mobile projects? how do you design effective non-functional and performance testing for mobile and RWD? – I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior Driven Development), and this obviously triggered the question.

Definition: BDD and ATDD

ATDD Acceptance Test Driven Development

Based on Wikipedia's definition (referenced above), ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer’s needs prior to implementation and allow customers to be able to converse in their own domain language.

ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

BDD Behavior Driven Development

Again, based on Wikipedia's definition (referenced above), BDD is a software development process that emerged from test-driven development (TDD). Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at the agile-like practices of BDD, ATDD, and TDD is in the context of higher velocity and quality requirements.

Organizations are aiming to release to market faster, with great quality and sufficient test coverage, while at the same time, of course, meeting the business goals and customer satisfaction. To achieve these goals, teams ought to collaborate closely from the very beginning of the app development and design stages.

Once organizations have the customer product requirements, and they can start developing the product and the tests through user stories, acceptance criteria and the like, several goals can be met:

  • High customer-vendor alignment == Customer satisfaction
  • Faster time to market; the app is tested throughout the SDLC
  • Quality is in sync with customer needs and there are far fewer redundant tests
  • There are no communication gaps or barriers between Dev, Test, Marketing, and Management

 

Looking at the example below of BDD-based test automation code, it is very easy to understand the functionality and use cases under test, as well as the desired test outcome.

[Image: Quantum BDD test scenario code sample]

As can be seen in the screenshot above, the script installs and launches the TestApp.APK file on an available Samsung device, performs a successful login, and presses a menu item. As a final step, it also performs a mobile visual validation to assure that the test passes and, as an automation anchor, that the test code reached the expected screen.
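
Since that screenshot may not render here, below is a rough Java/Cucumber illustration of the kind of step definitions that could sit behind such a scenario; the Gherkin wording, locators and the DriverFactory-style injection are all hypothetical, and Quantum users get most of these steps out of the box.

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import io.appium.java_client.android.AndroidDriver;
import org.junit.Assert;
import org.openqa.selenium.By;

public class LoginStepsDefs {

    // Assumed to be injected by the framework's driver management
    private AndroidDriver driver;

    @Given("^I install and launch the application \"(.*)\"$")
    public void installAndLaunch(String apkPath) {
        driver.installApp(apkPath);   // install the APK on the allocated device
        driver.launchApp();           // bring the app to the foreground
    }

    @When("^I login with user \"(.*)\" and password \"(.*)\"$")
    public void login(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }

    @Then("^I should see the menu item \"(.*)\"$")
    public void verifyMenuItem(String label) {
        // A simple text check stands in here for the visual validation mentioned above
        Assert.assertFalse(driver.findElements(By.xpath("//*[@text='" + label + "']")).isEmpty());
    }
}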

It is important to mention that the test frameworks and tools that support TDD, ATDD, and BDD can in many cases be similar, and in our case above, one can still develop and test from a BDD or ATDD standpoint by using a Cucumber test automation framework (Cucumber, Quantum).

If we compare the above functional use case – or, as stated in the Cucumber language, "Scenario" – to a scenario that would fit an ATDD-based approach, we would most likely need to introduce the well-known "3 amigos" approach –> three perspectives: customer (what problem are we trying to solve?), development (how might we solve this problem?), and testing (what about…).

 

Since a real ATDD best practice would define Gherkin-like app scenarios before development even starts, the above BDD example would be a precondition test for the app development team, making sure that they develop against acceptance criteria, which in our example are a successful app install and login.

An additional example of an acceptance test that also involves a layer of login/registration, which I can reference, would look like this:

[Image: example acceptance test scenario involving login/registration]

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I responded at the event last week: BDD, ATDD, and TDD are all methods to better sync the various counterparts involved in shipping a working product to market faster, with higher quality and with the right functionality to meet the customer requirements. Implementing them using the Gherkin method makes a lot of sense due to the easy alignment and common language these counterparts use during the SDLC workflow.

Happy Testing!

What You Need To Know When Planning Your Test Lab in 2017

As we kick off 2017, I am thrilled to release the most up-to-date 6th edition of the Digital Test Coverage Index report, a guide to help you decide how to build your test lab. 2016 was an exciting year in the digital space, and as usual, Q4 market movement is sure to impact 2017 development and testing plans. And it doesn't appear that the market is slowing down, with continued innovation expected this year. In this post, I will summarize the key insights we saw last quarter, as well as a few important things projected for 2017 that should be applied when building your test lab.


Key Takeaways

  • Beta OS versions remain an important aspect of your test coverage strategy. With Apple releasing 5 different minor versions of iOS 10 since its release in September 2016, the iPhone/iOS 10 beta is a "must-include in your test lab" device/OS combination. On the browser side, Chrome and Firefox beta versions are also critical test targets for sustaining the quality of your mobile web/responsive websites.
  • The Android fragmentation trend is changing, with Google putting pressure on device manufacturers to keep pace with the latest OS versions. As evidence, we already see that Android 6.x has the greatest market share as of Q4 2016, with roughly 27%, followed by Android Lollipop. With Google releasing its first Pixel devices, the market is already starting to see a boost in Android 7 Nougat adoption, which is expected to grow to between 2-5% market share within Q1 2017.
  • Galaxy S7 and S7 Edge were a turning point for Samsung: over the last year, Samsung has seen a revenue slowdown due, in part, to competition from both Apple and emerging Android manufacturers OnePlus, Xiaomi, and Huawei. With the launch of the Samsung S7 & S7 Edge, the company is regaining its position. We can see in this edition of the Index (and the previous one) that Samsung is the leading brand in many countries, which should impact the test coverage plans in Brazil, India, the Netherlands, the UK, Germany and the U.S.
  • Mobile app engagement methods are evolving, with various enterprises counting on the mobile platform to drive more revenue and attract more users. We are seeing greater adoption of external application integration, either through dedicated OS-level applications like iOS iMessage or through other solutions like the Google app shortcuts that were recently introduced as part of Android 7.1. These changes represent a challenge from a testing perspective, since there are now additional outside-of-app dependencies that the Dev and QA teams need to manage.
  • Test lab size is expected to grow slightly YoY as the market matures: looking at the annual growth projection below, we see a slight growth in the need for a 10-, 25- and 32-device lab, based on new devices being introduced into the market faster than old devices are retired. What we see is an annual introduction of around 15 leading devices per year, with an average retirement of 5-7 per year (due to decreased usage, terminated support by vendors, etc.). Integrating these numbers into the 30%-80% model would bring the annual growth demonstrated in the following graph.

[Graph: projected annual growth in test lab size]

 

2017 Trends

As this is the first Index for 2017, here are the most important market events that will impact both Dev and QA teams in the digital space, in the categories of Mobile, Web or both.

New Players

The most significant player to join the mobile space in 2017 is Nokia. After struggling for many years to remain a relevant vendor, and being unsuccessful under the Windows Phone brand, Nokia is now back in the game with a new series of Android-based devices that are supposed to be introduced during MWC 2017. A second player that is going to penetrate the mobile market is Microsoft, which is supposed to introduce the first Microsoft Surface Phone during H1 2017.

Innovative Technologies

During 2017 we will definitely continue to see more IoT devices, smartwatches, and additional features coming from both Google and Apple in the mobile, automotive and smart home markets. In addition, we might see the first foldable touch smartphone released to the market by Samsung under the name "Samsung X". We should also see a growing trend of external app interfaces in various forms, such as bots, iMessage apps, app shortcuts and voice-based features. The market refers to these trends as a result of "app fatigue", which is causing organizations to innovate and change the way their end users interact with the apps and consume data. From a testing perspective, this is obviously a change from existing methods and will require re-thinking and new development of test cases. In a recent blog, I addressed the above – feel free to read more about it here.

Key Device Launches to Consider for an Updated Test Lab

Most of the below can be seen in the market calendar for 2017, but the highlights are listed here as well:

  • The Samsung S8/S8 Edge flagship devices are due by February 2017 and should be the successors to the highly successful S7/S7 Edge devices
  • The iPhone 8/iPhone 8 Plus, together with the iOS 11 launch in mid-September 2017, will mark the 10th anniversary of the Apple iPhone series. This launch is expected to be a groundbreaking one for iOS users.
  • The Huawei Mate 9/Mate 9 Pro – and, in general, the Huawei smartphone portfolio – is continuing its global growth. 2017 should continue that growth trend both in China and India, but also, as seen in this Index report, in many European countries where we are already seeing devices like the Huawei P8, P9, and others in use.

From a web perspective, we are not going to see any major surprises from the leading browsers like Chrome, Firefox, and Safari. However, for the Microsoft Edge browser, we expect a significant market share uptick as more and more users adopt Windows 10 and abandon legacy Windows OS machines.

[Image: 2017 market calendar for mobile and web]

 

In the Index report, you may find all the information necessary to better plan for 2017, as well as market calendars for both mobile and the web, plus a rich collection of insights and takeaways. DOWNLOAD HERE.

Happy Testing in 2017!

My 2017 Continuous Quality Predictions

A guest post by Amir Rozenberg, Sr. Director of Product Management at Perfecto Mobile, and Yoram (Perfecto CTO)
========================================================================
As 2016 winds down and we look into 2017, I'd like to share a few thoughts on trends in delivering high-quality digital applications. This post is organized in two parts: it starts with a collection of observations of key market trends and drivers, followed by the continuous quality implications. While this article focuses on examples and quotes from the banking vertical, the discussion is certainly applicable more broadly.

2017 – Year of accelerated digital transformation with user experience in focus:

Image courtesy: Banking Technology

 

 

    1. Increased formal digital engagement: consumers want independence and access – 'self-serve' or 'direct banking' in the banking space – at a time and location of their preference. As A.T. Kearney reports, many transactions done today by the bank will be done by the customer. That is a big opportunity that many banks capitalize on via their online apps.
    2. Informal digital presence: Implementation of multi-channel approach inclusive of social networks is proliferating as a complementary touch point with the customer. Activities include proactively scanning social networks for disgruntled customers and addressing their challenges individually, marketing and advertising new services and streamlining services. For example, allowing users to log into their online bank account using their social network presence. One bank reports a short-term marketing effort in those channels increased 13% mobile app enrollments, doubling their social activity following etc. (Source)
    3. Improved operating efficiency: Another strong driver for the digital transformation is introducing efficient processes and leveraging new channels to better manage expenditure. According to McKinsey, digital transformation can enable retail banks to increase revenues in upward of 30% and decrease expenditure by 20%-25%.
      In addition to slashing branches for efficient online service (Ally Bank: “Instead of spending money on expensive branches, we pass the savings on to you” ), DBS also treated customer care flows and improved their efficiency.
  • User Experience & efficiency: functional and delightful experience are top of mind for both customers as well as vendors: “Our customers don’t benchmark us against banks,” said Hari Gopalkrishnan, BOFA CIO of client-facing platform technology, in an interview with InformationWeek. “They benchmark us against Uber and Amazon.”. On the application side, there is a strong emphasis on the end user efficiency as they try to accomplish the task at hand. At DBS, 250 million customer hours wasted each year were saved in 2016 by improving bank-side processes and enabling more online self-serve transactions by customers.
    Further, investments are made in the area of streamlining user flows. One example is text entry replacement by using the onboard sensors: location-via GPS, check and barcode scanning via the camera, or speech dictation via Siri, Google speech etc. “Solutions that combine the ability to find, analyze and assemble data into formats that can be read in natural language will improve both the speed and the quality of business content delivery. Personal assistants such as Apple’s Siri and Microsoft’s Cortana — as well as IBM Watson, with its cognitive technology — provide richer and more interactive content.”- From Gartner’s report “Top Strategic Predictions for 2016 and Beyond: The Future Is a Digital Thing

Challenges & Implications

Having looked at some of the trends, the implications of accelerated digital transformation and the focus on user experience are now met with competitive pressure and the need to deliver product to market faster. Many organizations are adopting agile methodologies, and from a continuous quality perspective, let's discuss some challenges and implications:
  • (Simplified) automation at scale: with an ever-growing matrix of test cases and shrinking test cycles, I believe (/hope) attention will be given in 2017 to designing and implementing automation at scale in organizations. There are many challenges, such as the skill set of testers/developers, cross-team collaboration, tooling, timing, and budgets. But everyone needs to agree that keeping over 20%-25% of testing manual, or spending too much time on test script maintenance, simply blocks coverage, quality and eventually business success.
    • Always-on lab: A robust and stable lab is an absolute requirement to remove fragility, the biggest reason for test failure. Almost always this means a lab in the cloud: Device on a desk or a local lab will break the regression test in the critical moment.
    • Scripts: Need to be based on core functions which are robust, mature and reusable. Handle unplanned cases (popups), apply retry mechanism, apply baseline for the environment (Database in the right place, servers are operational, WiFi is on, no popups, etc.)
    • Switch to "always green" mode: if you need to review your results every morning and spend 1-2 hours on it, you're doing something wrong and you can't really scale your automation. Prefer green over coverage. A false negative is the worst disease of automation. Unless something really bad happens, your scripts should end with green status, no excuses.
    • Test automation framework: This is the building block that will drive sustainability and scale. There are many frameworks out there, some are offered as open source, some by system integrators. Here are some thoughts on selecting your test framework:
      1. Skill set and tools match: testers' skills vary. We typically see many manual testers who are supported by a core team of advanced coders. Those who code operate in Java, JavaScript, C#, Python, Ruby, etc. The foundation for automation at scale is a set of sustainable and reusable automation assets (so your time spent on maintenance is limited): a solid object repository, scripts, test orchestration and planning, etc. A good framework will allow multi-language support (in a way that supports your organization) and multiple skill levels: Java-like scripting for coders, BDD-like scripting (e.g. Cucumber) for those new to coding.
      2. Open source and modular: There are significant benefits to adopting technology with the wide community behind it. We recommend selecting a solution that is made of architectural components that are best in class in its function. Shameless plug: Perfecto and Infostretch came with an open source framework named Quantum. The objective is to provide a complete package where experienced as well as non-coders can write test scripts based on smart XPath and a smart object repository via Java and Cucumber. Test orchestration and reporting are also available via TestNG. The framework is made of best-of-breed open source components, we welcome the community and our customers to try it out and give feedback.
    • Efficient, role-driven reporting: considering automation at scale, it is mandatory to provide a strong reporting suite. The tester needs to quickly recognize trends in last night's regression test (hopefully made of thousands of executions), and drill down from the trends to a single test execution to analyze the root cause and compare it against yesterday's execution or another platform. By the same token, quality visibility ('build health') mandates transparency. (Another shameless plug:) Perfecto's new set of dashboards enables the application team as well as executives to understand build weaknesses and establish an informed go/no-go decision.
 Next, on the challenges list, let’s discuss the client side:
  • Client-side increased capabilities… and vulnerabilities: the focus on driving more functionality and streamlining the user experience drives a larger coverage matrix. We're seeing thick-client applications strengthening and enriching the experience. As such, expanded test coverage and process changes are needed:
    1. Coverage: The proliferation of using onboard sensors and connected devices (see below) will drive the need to expand the test environment and capabilities to include those. In 2016 we saw increased use of image injection scenarios as well as touch ID. I believe in 2017 speech input will gain momentum as well as ongoing innovation around augmented reality (perhaps less in banking, rather other verticals). All of these scenarios need to be covered.
      • Of particular interest is the IoT space: This is an area that’s been growing rapidly over the last few years, whether consumer products, medical or industrial applications. “The relationships between machines and people are becoming increasingly competitive, as smart machines acquire the capabilities to perform more and more daily activities“. In Gartner’s IoT forecast, we estimate that, by 2020, more than 35 billion things will be connected to the Internet. Particularly in banking, IoT represents an interesting opportunity. For example, authenticating the customer in the branch with biometric sensing accessories will streamline experiences and increase security. Other examples include contactless payments and access to account functions from a wearable accessory (Source)
      2. Accessibility: since 2015, over 240 businesses have been sued over accessibility compliance, according to the WSJ. TechCrunch's advice is to plan, design and implement accessibility in the app, and to work closely with counsel on the regulations. We too are seeing growing demand for accessibility-related coverage. This is certainly an area we're going to pay close attention to in the near future.
Lastly, process and maturity changes:
  • Process changes and (quality) maturity growth: As we work with our customer and the market is maturing, we are fortunate to observe market trends that are happening (some slower than others, but still)
    1. CoE collaboration with the app team: as agile is implemented at many of our customers, we're witnessing first hand the autonomy and independence driven by the application team. While the application team creates, builds and tests code, they may still need a centralized perspective on the quality practices and tooling needed for success. Some of these teams also consider, and are curious about, the application's usage and behavior in production (more below). Our recommendation to the various teams is to seek and drive collaboration: for example, establish a slim, robust and stable acceptance test to build a common language between the tests that run in the cycle and those that run after it.
    2. DevOps: teams are seeking efficiencies and transparency in managing quality across the SDLC. One area is shifting testing earlier in the cycle, covered nicely by my colleague Eran. The second is using the same (testing) tools and approach for production ('testing in production'). This approach reduces delays in time to launch (no need to wait until production monitoring scripts are created) and enables visibility into the usage, behavior and weaknesses of the app in production. I believe traditional production-dedicated APM tools will need to find ways to merge into the cycle to survive.
    3. New entrants in the developer/quality workflow: I believe new opportunities and startups will emerge in areas that simplify/automate testing and predict the impact of code changes in advance. Imagine proactive code scanning tools integrated with production insight that direct developers about the risk associated with the area of code you’re about to touch, or automated test code/plan generators. This area has plenty of room for growth.

 

Advanced areas

  • Shifting, even more, testing left: In further mature teams we find that automation drives further test cases into the nightly regressions test, because it provides high value (as opposed to finding bugs late) and it’s frankly, possible. The area of introducing real user conditions in the cycle provides critical insight. Other areas such as small-scale multi user test (for code efficiency and multi-threading behavior), some level of security, accessibility tests etc.
  • Transitioning testing to user journey: Lastly, an advanced topic I’d like to mention is changing the perspective from a matrix of test cases X browsers/devices/OS/version X real user conditions into diagonal, if you will, user journeys across platforms. To go by example, take a typical journey for bank loan research: consumers are likely to begin their engagement on a large screen where they research rates, terms etc. They may summarize findings and take decisions using excel (local/online). They may apply for a loan over their desktop browser or on their tablet, and then continue the interaction on their mobile device. In those ‘diagonal’ test journeys one could then classify different journeys into different personas: There’s the consumer, the loan officer, the customer care professional etc. All of whom go through different journeys on different screens. Being able to provide a quality score per build for specific persona’ journey would be very meaningful to the business to make decisions. The point being, in a limited time available for quality activities, one could consider creating user journeys across specific screens rather than trying the complete rows and columns across test cases and screens matrix.
To summarize, I see an exciting 2017 coming with lots of changes and innovation in delivering digital applications that work. Certainly looking forward to taking part!

How to Efficiently Test Your Mobile App for Battery Drain?

With my experience in the mobile space over the past two decades, I have rarely come across efficient mobile app testing that covers the app's resource usage as part of the overall test strategy and test plan.

Teams often focus on the app's usability, functionality, performance and security, and as long as the app does what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let's have a look at one of 2016's most popular mobile native apps: Pokemon Go. This app alone requires constant GPS location services, keeps the screen fully lit when in the foreground, operates the camera, plays sounds and renders 3D graphics content.

If we translate the above resource consumption into running this app on a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery.

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, so the device's battery will drain even faster.

In recent research done by AVAST, you can see two sets of the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show two sets of apps: one of apps that are usually launched at device startup, and a second of apps mostly launched by the users.

[Image: greediest Android apps that run at device startup (AVAST report)]

[Image: greediest Android apps launched by users (AVAST report)]

How to Test the App for Battery Drain?

Teams need to come as close as possible to their end users; this is a clear requirement in today's market. From a battery-drain testing perspective, this means that the test environment needs to mimic the real user in terms of device, OS, network conditions (2G, 3G, WiFi, roaming), popular apps installed and running in the background, and of course a varying set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across models and manufacturers. Each battery will obviously have a different, limited capacity. After a while, each device will also have degraded battery chemistry that impacts performance, how long it can last, and more. This is why a variety of new and legacy devices with different battery capacities needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it takes on a different angle that ought to be leveraged by the teams.

  • Listen to the market and end users

Since the market constantly changes, the "known state" and quality of your app, including battery consumption and other resource consumption, may change as well. This can happen due to the app performing differently on a new device you have no experience with, or due to a new OS version released to the market by Google or Apple – we have seen plenty of examples of that, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on OS beta versions and measure the app's battery consumption before the OS is released as GA – this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and get notified by end users about such issues (less preferred). Continuously including such tests on a refreshed device lab will reduce the risks and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risks.


Summary

In today's market, there is no good automation method for testing app battery drain; therefore, my recommendation is to maintain a plethora of devices in the lab with varying conditions, as mentioned above, and measure the battery drain through native tools on the devices as well as timer measurements. The tests should first run against the app on a clean device and then on a real end-user device.
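
That said, teams can still capture raw battery data around their existing automated runs. Below is a minimal sketch, assuming adb is on the PATH and a single device is connected; the package name is an example, and the dumpsys output still needs human interpretation.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BatteryStatsSampler {

    private static final String PACKAGE = "com.example.app"; // app under test (example)

    public static void main(String[] args) throws Exception {
        // Clear accumulated statistics so the dump reflects only the upcoming test session
        run("adb", "shell", "dumpsys", "batterystats", "--reset");

        // ... trigger the functional test run here (Espresso, Appium, etc.) ...

        // Dump battery statistics attributed to the app since the reset
        String stats = run("adb", "shell", "dumpsys", "batterystats", PACKAGE);
        System.out.println(stats);
    }

    private static String run(String... command) throws Exception {
        Process process = new ProcessBuilder(command).redirectErrorStream(true).start();
        StringBuilder output = new StringBuilder();
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append('\n');
            }
        }
        process.waitFor();
        return output.toString();
    }
}

Resetting the statistics before the run keeps the dump focused on the drain caused by the test session itself, which complements the manual, device-level measurements described above.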