Enabling Mobile Testing In a Fast-Growing DevOps Reality

Six months ago I launched my first book, “The Digital Quality Handbook”.

The book aims to address the key challenges in assuring high mobile (as well as web) quality by avoiding pitfalls that are common practice in the industry.

I have also recently joined the ISTQB working group to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In the book, I host top leaders from the industry who touch on the most important aspects of assuring quality in a DevOps reality.

The image above, taken from Amazon, shows my book recommended alongside the leading DevOps practitioner books; this is another strong validation of the book’s relevancy and value.

A few highlights from the book:

  1. Shifting quality left and right to cover as many tests as possible automatically throughout the release pipeline is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Maturia from InfoStretch and others provide practitioner-level insights and tips)
  2. Testing on the right platforms and OS versions is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments
    1. In the book I refer to this magazine, which I author on a quarterly basis, and I highly recommend subscribing to receive this free asset upon each release: http://info.perfectomobile.com/factors-magazine.html
  3. Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths (see the short sketch after this list). I also refer to a free online tool that can help score your object locators as part of your test automation development: http://xpathvalidator.projectquantum.io/
  4. Testing not only through the UI is another key to success: complementing UI testing with API-level testing can reduce testing time, provide faster feedback and deliver other value. This chapter was actually developed by my twin brother Lior Kinbruner 🙂 – worth checking it out!
  5. Performance testing and UX are another challenge and key to success. A full section of the book is dedicated to wind tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).
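
To make point 3 above concrete, here is a minimal sketch (in Java, against the Selenium/Appium WebDriver API) of a page object that keeps locators in one place and anchors them on unique IDs rather than brittle XPaths. The class name and element IDs are hypothetical placeholders, not taken from the book:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A minimal page object: locators live in one place and use stable, unique IDs,
// so when the UI changes the tests break in one spot rather than everywhere.
public class LoginPage {

    private final WebDriver driver;

    // Prefer unique resource IDs / accessibility IDs over positional XPath expressions.
    private final By username = By.id("com.example.app:id/user_name");       // hypothetical ID
    private final By password = By.id("com.example.app:id/password");        // hypothetical ID
    private final By loginButton = By.id("com.example.app:id/login_button"); // hypothetical ID

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
    }
}
```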

The book reached #1 among new best-selling books on Amazon and is still going strong today, more than six months later. As of today it is #43 in the overall Software Testing category, which is a great validation and honor for me and the contributors.

 

If you still haven’t got a copy of the book, I really encourage you to do so – I am already planning my next journey, so stay tuned 🙂


Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations are continuously developing new features, as well as new test code to cover those features.

What I’ve learned is that test code developers often don’t stop and look back at their existing test suites to validate whether the new tests being developed are a superset of existing ones. In addition, legacy tests become a continuous load and overhead on the length of your SDLC cycles if they are not maintained over time.


Many Owners of the Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone’s. Tests are executed throughout the pipeline, from Dev to integration and pre/post-production testing.

Using a smart tagging mechanism for your test scenarios (login), suites (App A) and types (unit, regression) can be a good step towards gaining control over your tests.
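
As a hedged illustration of what that can look like (the tag names here are invented examples), JUnit 5’s @Tag annotations cover the three levels mentioned above; TestNG groups or Cucumber tags offer the same idea, and your reporting layer can then slice executions by scenario, suite or type:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Example tags: suite ("appA"), scenario ("login") and type ("unit"/"regression").
// Build tools and reporting dashboards can filter runs by these tags,
// e.g. execute only the App A regression set in a nightly job.
@Tag("appA")
class LoginTests {

    @Test
    @Tag("login")
    @Tag("regression")
    void validCredentialsLandOnHomeScreen() {
        // ... drive the app through the login flow on a device and assert the result
    }

    @Test
    @Tag("login")
    @Tag("unit")
    void passwordValidatorRejectsShortPasswords() {
        // ... pure unit-level check, no device needed
    }
}
```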

Without some context, discipline and continuous, structured validation of the tests, it will become harder as you progress through your SDLC to debug, analyze and solve defects (it would be like finding the key in the visual mess below).

Find the Key in the Picture

Recommended Practices

  • Develop the tests with context, tags and proper annotations that will still make sense to you and your team even 12 months after the development day. Make sure that your execution reports then give you a way to filter on these annotations, so you can get a view of only a given functional area, platform, etc.
  • Match your device-under-test capabilities to the test code and application under test. Make sure that you focus, for example, your fingerprint-based tests only on the devices that support it (API XX and above); a small sketch follows this list.
  • Perform a test code review at an agreed-upon cadence. In such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. It gets harder to do as time progresses, so depending on your release cadence and test development maturity, set the right goals. More frequent reviews are better than fewer; each review will also be shorter and more efficient that way, since the delta between reviews is smaller.
  • Drive joint Dev, Test, Product and Marketing decisions based on data. When you can get quality analysis from your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are there platforms that are buggier than others, which tests take longer than others to finish, etc.
  • Optimize your CI and build-acceptance testing. Based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights into your tests, you can certify the most valuable and fastest tests for this CI stage, and by that shrink the overall process without compromising coverage.
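
For the capability-matching bullet above, one possible sketch in Java: read the platform version from the driver’s capabilities and skip (rather than fail) fingerprint tests on devices that cannot run them. The capability name, the minimum version and the parsing are assumptions to adapt to your own lab and framework:

```java
import org.junit.Assume;
import org.openqa.selenium.Capabilities;

// Helper called at the start of fingerprint-dependent tests:
// skip (not fail) the test on devices whose OS cannot support the feature.
public final class DeviceAssumptions {

    // Fingerprint APIs arrived with Android 6.0 (API 23); adjust the rule to your own needs.
    private static final int MIN_FINGERPRINT_MAJOR_VERSION = 6;

    private DeviceAssumptions() { }

    public static void assumeFingerprintSupported(Capabilities caps) {
        // e.g. "7.1.1" -> 7; "platformVersion" is the usual Appium capability name.
        String raw = String.valueOf(caps.getCapability("platformVersion"));
        int major = Integer.parseInt(raw.split("\\.")[0]);
        Assume.assumeTrue("Device OS too old for fingerprint tests",
                major >= MIN_FINGERPRINT_MAJOR_VERSION);
    }
}
```

In a JUnit test this would be called in a @Before method with the driver’s getCapabilities() as the argument, so unsupported devices report the test as skipped instead of failed.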


Bottom Line

A test is code, and just as you refactor, maintain, retire and improve your code, you should do the same with your tests. Make sure to always be in control of your tests, and by that, gain continuous control over the quality of your app.

Happy Testing!

Criteria for Choosing The Right Open-Source Test Automation Tools

Last night, together with my colleague Amir Rozenberg, I presented a session at a local Boston meetup hosted by BlazeMeter.

The subject was the shift from legacy to open-source frameworks, the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes three layers:

  1. Open-source connectivity to a lab
  2. Open-source test coverage capabilities (e.g. can an open-source framework support system-level testing, visual analysis, real environment settings and more?)
  3. Open-source reporting and analysis capabilities.

During the session, Amir also presented an open-source BDD/Cucumber-based test framework called Quantum (http://projectquantom.io).

Full presentation slides can be found here:

Happy Reading

Eran & Amir

How to Efficiently Test Your Mobile App for Battery Drain?

With my experience in the mobile space over the past two decades, I rarely come across efficient mobile app testing that covers the app’s resource usage as part of the overall test strategy and test plan.

Teams often focus on the app’s usability, functionality, performance and security, and as long as the app does what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let’s have a look at one of 2016’s most popular native mobile apps: Pokemon Go. This app alone requires GPS location services to be constantly active, keeps the screen fully lit when in the foreground, operates the camera, plays sounds and renders 3D graphics content.

If we translate the above resource consumption into running this app on a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery.

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, hence the battery drain of the device will be even faster.

Recent research by AVAST shows two sets of the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show two sets of apps: one of apps that usually launch at device startup, and a second of apps mostly launched by the users.

(Charts from the AVAST report: the greediest apps launched at device startup, and the greediest user-launched apps)

How to Test the App for Battery Drain?

Teams need to come as close as possible to their end users; this is a clear requirement in today’s market. From a battery-drain testing perspective, this means the test environment needs to mimic the real user: the device itself, the OS, the network conditions (2G, 3G, Wifi, roaming), popular apps installed and running in the background, and of course a varying set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across models and manufacturers. Each battery will have a different capacity. Each device, after a while, will have degraded battery chemistry that impacts its performance, how long it can last, and more. This is why a variety of new and legacy devices with different battery capacities needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it gets a different angle that ought to be leveraged by the teams.

  • Listen to the market and end users

Since the market constantly changes, the “known state” and quality of your app, including battery and other resource consumption, may change as well. This can happen because the app performs differently on a new device that you have no experience with, or because of a new OS version released to the market by Google or Apple; we have seen plenty of examples like that, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on OS beta versions and measure the app’s battery consumption before the OS is released as GA; this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and get notified by end users about such issues (less preferred). Continuously including such tests on a refreshed device lab will reduce the risks and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risks.


Summary

In today’s market there is no good automation method for testing app battery drain, so my recommendation is to maintain a plethora of devices in the lab with varying conditions as mentioned above, and measure the battery drain through the native battery apps on the devices as well as timer measurements. The tests should be run first against the app on a clean device and then on a real end-user device.
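
There is no single API that automates this end to end, but as a minimal sketch of the measurement part, an Android instrumentation test (Java) can record the device battery level before and after a scripted usage session, so the drain can be tracked per build and per device. The session length and the 5% budget below are invented example values:

```java
import android.content.Context;
import android.os.BatteryManager;
import android.support.test.InstrumentationRegistry;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(AndroidJUnit4.class)
public class BatteryDrainTest {

    // Battery level as a percentage (0-100); available on API 21 and above.
    private int batteryLevel(Context context) {
        BatteryManager bm = (BatteryManager) context.getSystemService(Context.BATTERY_SERVICE);
        return bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY);
    }

    @Test
    public void batteryDrainDuringTypicalSession() throws Exception {
        Context context = InstrumentationRegistry.getTargetContext();
        int before = batteryLevel(context);

        // ... drive a representative user journey here (UI flows, GPS, camera, audio, etc.)
        Thread.sleep(10 * 60 * 1000); // placeholder for a 10-minute scripted session

        int after = batteryLevel(context);
        // The 5% budget is an arbitrary example; derive real thresholds from your own baselines.
        assertTrue("App drained more than expected: " + (before - after) + "%",
                before - after <= 5);
    }
}
```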

Shifting Mobile App Quality Into the Dev Build Cycles

There’s no doubt that quality is becoming a joint feature-team responsibility, and with that in mind, it is not enough for traditional QA engineers to develop and execute test automation after a successful build; the growing expectation now is that the Dev team also takes part and includes as many tests as it can in its build cycles, per code commit.

Tests can be unit, functional, UI or even small-scale performance tests.

With that in mind, Dev teams need a convenient environment that allows them to perform these quality-related activities so they deliver better code faster!

Developers today are specifically challenged with the following:

  1. Solving issues that come from production or from their QA teams and require a specific device and/or environment that is usually not available to the dev team
  2. Validating newly developed apps, or features within apps, across different environments and devices as part of their dev process
  3. Lack of shared assets for the entire dev team
  4. Ability to get a “long USB cable” that enables full remote device capabilities & debugging

Perfecto has just made available, as part of its continuous quality lab in the cloud, a set of new tools and capabilities that address these requirements and enable Dev teams to accomplish their goals.

Perfecto’s DevTunnel solution for Android, part of the recent 9.4 release, is the first significant step toward helping developers run more tests as part of the build cycle.


With the above challenges and requirements in mind, Perfecto has developed a unique solution called “DevTunnel”, which gives developers enhanced remote access to mobile devices in the cloud so they can perform any operation they could have done if the devices were connected locally: things like debugging, running unit tests, testing UI at scale from within the IDE and more.


In addition, when it comes to Android dev activities, it’s clear that Android Studio & IntelliJ IDEA are the leading IDEs to work in. For that, Perfecto invested in developing a robust plugin that integrates nicely into the development workflow.

Espresso Framework

There’s no doubt that the Espresso test automation framework is being adopted by more and more developers, for reasons like:

  1. It is embedded into Android Studio, which plays an important role for Android developers.
  2. It’s very fast and easy to execute tests and receive feedback on Android devices.

Espresso can be used within the Perfecto lab today in the following two modes:

  • Locally – Execution through DevTunnel (see below)
  • Via Continuous Integration (CI) – using a command for Espresso test execution through Jenkins server

In the community series dedicated to DevTunnel, you can learn more about the capabilities and use cases, and get samples to get you started with the new capability.

To see this in action, please refer to the video playlist that demonstrates how to get started and install DevTunnel, use the Perfecto Lab within Android Studio with Espresso for testing and debugging purposes, and more.

 

Good Luck!

4 Benefits of Using the Espresso Test Automation Tool

If you’re an Android developer, you’re probably familiar with Google’s Espresso test automation framework. As an open-source tool, it’s very easy for developers to use and extend within their working environment (Android Studio IDE).

But before discussing the benefits of Espresso, let’s understand the motivations and pains developers and test automation engineers face today while trying to validate their Android application (APK) throughout the build/dev/test workflow.

  • Each build needs to be validated after code changes are made.
  • Dependencies on remote servers and other workstations for testing slow down the process.
  • Unit and functional tests need to be easy to execute from both an IDE and continuous integration perspective.
  • Apps need to be tested using the latest Android OS APIs that support new platform features and OS versions.
  • Testing needs to occur on both emulators and real devices.

In light of these challenges, it’s clear why adoption of the Espresso automation framework is high. Even though Espresso is an instrumentation-based test framework, it has many benefits for both developers and test automation engineers. It uses JUnit under the hood, so Espresso is easy to use within leading IDEs and provides useful testing annotations and assertions. It’s also fully integrated within the leading Google Android IDE – Android Studio.

Here are four main benefits of using Espresso:

1. Espresso workflow is simple to use

Espresso works by allowing developers to build a test suite as a stand-alone APK that can be installed on the target devices alongside the application under test and executed very quickly.
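
For illustration only (the activity, the R.id view IDs and the credentials below are hypothetical and belong to the app under test), a typical Espresso test looks like this; it compiles into the test APK and runs on the device next to the app:

```java
import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

@RunWith(AndroidJUnit4.class)
public class LoginScreenTest {

    // Launches the (hypothetical) LoginActivity from the app under test before each test.
    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void validLoginShowsHomeScreen() {
        onView(withId(R.id.username)).perform(typeText("demo@example.com"));
        onView(withId(R.id.password)).perform(typeText("secret"));
        onView(withId(R.id.login_button)).perform(click());
        // Espresso synchronizes with the UI thread, so no explicit waits are needed here.
        onView(withId(R.id.home_screen)).check(matches(isDisplayed()));
    }
}
```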

2. Fast and reliable feedback to developers

As developers try to accelerate deployment, Espresso gives them fast feedback on their code changes so they can move on to the next feature or defect fix; having a robust and fast test framework plays a key role here.

Espresso does not require any server (like Selenium Remote WebDriver) to communicate with; instead it runs side-by-side with the app and delivers very fast (minutes) test results to the developer.

3. Less mobile testing flakiness

Because Espresso offers a synchronized method of execution, the stability of the test cycle is very high. There’s a built-in mechanism in Espresso that, before moving to the next step in the test, validates that the element or object is actually displayed on the screen. This prevents test execution from breaking on “object not detected” and similar errors.

4. Developing Espresso test automation isn’t hard

Developing Espresso test automation is quite easy. It is based on Java and JUnit, which is a core skill set for any Android app developer. Because Espresso works seamlessly within the Android Studio IDE, there’s no setup or ramping up, and no “excuses” not to shift quality into the in-cycle stage of the app SDLC.

In addition to the above, there is of course the large community, powered by Google, that pushes the Espresso test automation framework forward and allows easy and fast ramp-up for newcomers.

Learn more using the Espresso Cheat sheet below:

Espresso Test Automation Framework

Perfecto offers support for the Android Studio IDE as well as the ability to install and launch an Espresso test suite (APK) on real devices in the cloud across various locations and user conditions. For more information, please refer to the Perfecto Community and search for “Android Studio” or “Espresso.”

7 Mobile Test Automation Best Practices

Developing a mobile test automation scenario isn’t that complicated. Developers and testers use a variety of commercial test automation frameworks or open source tools such as Selenium and Appium to do automation. However, when trying to execute these tests on real devices or integrate them into an Agile or CI (continuous integration) workflow, things get a little complicated.
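
As a reference point for readers newer to these tools, this is roughly what a minimal Appium-based Android script looks like in Java; the device name, app path, server URL and element ID below are placeholders to adapt to your own environment:

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class AppiumSmokeTest {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Pixel");                  // placeholder device
        caps.setCapability("app", "/path/to/app-under-test.apk");   // placeholder path

        // Appium server endpoint: local, or a cloud device lab URL.
        AndroidDriver driver = new AndroidDriver(new URL("http://localhost:4723/wd/hub"), caps);
        try {
            driver.findElement(By.id("com.example.app:id/login_button")).click(); // hypothetical ID
        } finally {
            driver.quit();
        }
    }
}
```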

The major challenges around mobile test automation

The essence of developing test automation is to be able to use and re-use scripts many times, across platforms and environments. Test automation should be as maintainable as possible, especially as new platforms and product features are released. Many organizations that develop test automation for their mobile apps face the following challenges:

  1. Executing the tests against a variety of real mobile devices
  2. Executing these tests in parallel
  3. Leveraging existing test code (re-usability) for new tests
  4. Including real end-user environments/conditions (changing network conditions, low battery) in the tests
  5. Overcoming unexpected interruptions (incoming call, apps running in background)
  6. Running these tests unattended — overnight, as part of a Jenkins CI job

These are just a few of the challenges organizations confront when trying to move beyond older SDLC processes and meet faster releases and enhanced Dev–>Build–>Deploy–>Test–>Deploy cycles.

7 practical test automation tips

Overcoming these challenges starts with a few changes to the overall mobile app dev and test processes.

Consider these seven recommendations for building sustainable unattended automation.

(Infographic: seven practical test automation tips)

The key to mobile test automation is to start with a small number of test cases, automate them, and assure that they are robust enough and can be executed in parallel and unattended. Only then should you invest more and grow the test suite.

An important question to ask at the start is: what should I be automating? Organizations often do not choose the right tests to automate, resulting in lost development time, weak ROI, and an over-reliance on manual testing.

To learn more about the 7 Ways to Overcome Test Automation Obstacles, please join us next week for a webinar hosted by myself, automation expert and author Daniel Knott, and Perfecto’s Director of Technology Uzi Eilon.

Tests to Include Within Automation Suite

When developing a mobile or desktop test automation plan, organizations often struggle with the right scope and coverage for the project.

In a previous post, I covered the test coverage recommendations for a mobile project, and now I would like to expand on the topic of which tests to automate.

Achieving release agility with high quality depends today, more than ever, on continuous testing, which is gained through proper test automation. However, automating every test scenario is neither feasible nor necessary to meet this goal.

In the table below we can see some very practical examples of test cases with various parameters, and a Y/N recommendation on whether to automate them or not.

As shown below, and as a rule for mobile, web and other projects alike, the key tests which should by definition be added to an automation suite (from an ROI and TTM perspective) are the ones that are:

  • Required to be executed against various data sets (see the short sketch after this list)
  • Tests which ought to run against multiple environments (devices, browsers, locations)
  • Complex test scenarios (these are time consuming and error prone when done manually)
  • Tedious and repetitive test cases (a must to automate)
  • Tests which depend on various aspects (other tests, other environments, etc.)
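
For the first bullet, a minimal JUnit 5 sketch of what “automate once, feed it many data sets” looks like; the pricing logic and values are invented for illustration:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CheckoutPricingTest {

    // One automated test, many data sets: exactly the kind of case worth automating.
    @ParameterizedTest
    @CsvSource({
            "1, 9.99, 9.99",    // single item
            "3, 9.99, 29.97",   // several items (example data)
            "0, 9.99, 0.00"     // empty cart
    })
    void totalPriceIsQuantityTimesUnitPrice(int quantity, double unitPrice, double expected) {
        assertEquals(expected, quantity * unitPrice, 0.01);
    }
}
```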

(Table: example test cases with various parameters and a Y/N recommendation on whether to automate)

Bottom line: automation is key in today’s digital world, but doing it right and wisely can shorten time to market, free up redundant resources, and avoid a lot of wasted R&D time chasing unimportant defects coming from irrelevant tests.

Happy Testing!

 

 

A Few Best Practices Around Mobile Testing and Agility

When looking at some key blockers for Dev and Test teams that are trying to either increase their existing test coverage or release more frequently without compromising quality, we see some common pitfalls which, with some planning in advance, can be unblocked.

Let’s look first at the core mobile testing pillars:

(Diagram: the core mobile testing pillars)

The boxes above represent either a full mobile app testing plan or a subset of one. Some of them can fit into a functional test cycle, some into regression or unit testing, and some can be pre-release acceptance tests.

Planning the test coverage and the contents of each iteration in the cycle is a critical task for the overall app life cycle velocity.

In order to meet both quality and velocity goals, Dev/Test/QE teams ought to include portions of tests in a model which is based on test stability.

Let me explain: when trying to include more tests than needed in a CI acceptance test cycle or a functional test cycle, without really debugging each of the tests on a few devices, there is a high risk of a few tests failing due to unexpected pop-ups, bugs in the tests, specific device issues, etc. Such tests will obviously damage and block the entire test cycle.

In order to have a fluent CI/automation cycle, the recommended practice is to start with a small but robust subset of tests which have already been executed a few times in the past on more than one real device, and have been debugged so there is a high probability of them not getting stuck. Only once this suite is “certified” as stable does it make sense to increase the scope of your cycle, with the right dependencies and validation points, and add more automated tests to the CI cycle.

Such a paced approach, which may seem trivial, does not happen in many organizations; therefore, as soon as a new device is introduced, or a new test is added to cover a new feature or screen, or simply when an unexpected device pop-up comes up, the CI process breaks.

This results in a slowdown of the process, delays in release and development tasks, and frustration.

To summarize:

  • Construct your CI and automation cycle and “certify” each test case; only once it is stable and can run unattended, add it to the acceptance test suite
  • Continuously debug your entire relevant test suite whenever a new feature, OS or device is introduced, to assure nothing breaks your process
    • Also assess how efficient the tests are at detecting bugs – the ones that keep running without adding value might be candidates for elimination, making room for newer and more efficient tests
  • Less == More –> Assess the most valuable tests, the ones likely to identify more bugs than others, and include them in the cycle; redundant tests just consume time and resources, and can put your entire cycle in danger
  • Make sure you can gain access to all of your devices under test (DUT) at all times, for development, debugging and continuous testing
  • Include sufficient debugging artifacts in your test code, whether through try/catch blocks, visual screen/scenario validation or other debugging logs, outputs and vitals (a small example follows)
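
As a small example of the last bullet, in Java against the WebDriver/Appium API: capture a screenshot when a step fails, so an unattended overnight run still leaves something to debug with. The output folder and file naming are placeholders:

```java
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public final class DebugArtifacts {

    private DebugArtifacts() { }

    // Call this from a catch block (or a test-watcher rule) so every failed step
    // of an unattended run leaves a screenshot behind for later analysis.
    public static void saveScreenshot(WebDriver driver, String testName) {
        try {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(),
                    Paths.get("target", testName + ".png"),   // placeholder output folder
                    StandardCopyOption.REPLACE_EXISTING);
        } catch (Exception e) {
            System.err.println("Could not capture screenshot: " + e.getMessage());
        }
    }
}
```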

Happy unattended testing!