Introducing Reporting Test Driven Development (RTDD)

In the era of “[..] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep sight of the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and from my own experience as a practitioner, I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. Producing on-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc. is hard


Introducing Reporting Test Driven Development (RTDD)

To address the above pains (which I am sure are not the only related ones), I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect value both at the end of each test cycle and earlier, during the test planning phase.

When teams leverage a test design pattern that assigns custom contextual tags to their tests – wrapping an entire test execution or a single test scenario with annotations like “Regression“, “Login“, “Search” and so forth – the test suites suddenly become better structured, easier to maintain, and can be included, excluded, and filtered through at the end of each execution.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter and drill down to the root cause more efficiently.

If you think about it, using annotations to manage and filter test executions is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)
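In that spirit, here is a minimal TestNG-with-Selenium sketch along the lines of the referenced example (the URLs, test names, and tags are hypothetical, and a local ChromeDriver is assumed):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class TaggedSeleniumTests {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver();
    }

    // priority controls execution order; groups act as the contextual tags
    @Test(priority = 1, groups = {"Login", "Regression"})
    public void loginTest() {
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @Test(priority = 2, groups = {"Search"})
    public void searchTest() {
        driver.get("https://example.com/search"); // hypothetical URL
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

Groups can then be included or excluded per run, for example via the <groups> element in testng.xml.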

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.

Reverse engineering a large test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

The following table divides various tags into three levels. It can serve as a reference that can be used immediately, either through the built-in tagging and annotation support of TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, think about the same test suite tagged according to the above three categories:

  1. Execution level tags
    1. This tag can encapsulate the entire build or CI job-related testing activities, or it can differentiate the tests by the test framework in which you developed the scripts. That’s the highest classification level of tags that you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory down by more specific identifiers, like your mobile environment, the high-level functionality under test, etc.
  3. Logical test level tags
    1. These are the most granular tag identifiers, which you would define per logical test step to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
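As a sketch of how these three levels could map onto plain TestNG (the tag names are hypothetical; note that TestNG groups stop at method granularity, so logical-step tags usually require a reporting SDK):

import org.testng.annotations.Test;

// execution-level tag at class scope applies to every method in the class
@Test(groups = {"ci-nightly-build"})
public class CheckoutSuite {

    // test-suite-level tags: environment and functional area under test
    @Test(groups = {"android", "checkout", "regression"})
    public void purchaseWithSavedCard() {
        // logical-test-level tags would be attached per step through a
        // reporting SDK (see the reference implementation below)
    }
}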

As a reference implementation of an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful when used correctly with its listeners, predefined tags, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described above.

When using such an SDK with your mobile or responsive web test suites, you achieve both the dashboards seen below and fast defect resolution that drills down by both test case and platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK [Source: My Personal GIT]
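For illustration, here is a minimal sketch of how such an SDK wires contextual tags into a Selenium test. The class and method names below follow the published examples of the open-source Reportium client, but treat them as assumptions and verify them against the SDK documentation:

import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import com.perfecto.reportium.test.TestContext;
import com.perfecto.reportium.test.result.TestResultFactory;
import org.openqa.selenium.WebDriver;

public class RtddReportingSketch {

    public void runTaggedTest(WebDriver driver) {
        PerfectoExecutionContext context =
                new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                        .withProject(new Project("RTDD Demo", "1.0")) // hypothetical project
                        .withContextTags("Regression", "Login")       // execution-level tags
                        .withWebDriver(driver)
                        .build();
        ReportiumClient reportium =
                new ReportiumClientFactory().createPerfectoReportiumClient(context);

        reportium.testStart("Login - valid credentials", new TestContext("Login", "Sanity"));
        reportium.stepStart("Navigate to login page"); // logical-step granularity
        // ... Selenium actions ...
        reportium.stepEnd();
        reportium.testStop(TestResultFactory.createSuccess());
    }
}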


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other format into a more customizable test report that, as demonstrated above, can help them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Use of tagging from early test authoring as a method for faster triaging and fix prioritization
  • Shifting tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easy filtering of big test data, with drill-down into specific failures per test, per platform, per test result, or by group
  • Elimination of flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained far more easily than before.

Happy Testing (as always)!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile-First Indexing”. From Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly Test tool and guidelines available. A very nice interactive version is available here (with images at the bottom of this post), and there is also an API (which, thanks to Google, users can exercise before they code). Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel, there is no point in sending the same URL from every execution, because the result will be the same
  • Google basically returns “MOBILE_FRIENDLY” or not; I suggest setting the assert on that (see the sketch after this list)
  • The current API differs from the UI in that it only provides the mobile-friendliness result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages, as Google probably doesn’t have a site-to-site secure connection into your network.
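To exercise the API outside of any framework, here is a minimal Java sketch. It assumes a Search Console API key in a GOOGLE_API_KEY environment variable and the URL Testing Tools mobileFriendlyTest:run endpoint (Java 11+ for java.net.http):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MobileFriendlyCheck {

    // assumption: an API key provisioned in the Google Cloud Console
    private static final String API_KEY = System.getenv("GOOGLE_API_KEY");

    public static boolean isMobileFriendly(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://searchconsole.googleapis.com/v1/"
                        + "urlTestingTools/mobileFriendlyTest:run?key=" + API_KEY))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"url\": \"" + url + "\"}"))
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        // match the exact verdict value so NOT_MOBILE_FRIENDLY does not pass
        return body.matches("(?s).*\"mobileFriendliness\"\\s*:\\s*\"MOBILE_FRIENDLY\".*");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isMobileFriendly("http://www.nfl.com") ? "PASS" : "FAIL");
    }
}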


For developers and testers who do not have time, repeatedly testing for mobile friendliness will probably simply not happen. That’s why I integrated the Google Mobile-Friendly API into Quantum:

  • Added two Gherkin commands:
// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
// If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"
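For reference, here is a sketch of what the step definition behind these commands might look like. Apart from the GoogleMobileFriendlyStepsDefs.java file name mentioned above, everything here – including the reuse of the hypothetical MobileFriendlyCheck helper from the earlier sketch – is an illustration, not the actual Quantum source:

import cucumber.api.java.en.Then;
import static org.junit.Assert.assertTrue;

public class GoogleMobileFriendlyStepsDefs {

    @Then("^I check mobileFriendly URL \"(.*?)\"$")
    public void checkMobileFriendlyUrl(String url) throws Exception {
        // delegate to the Mobile-Friendly API call shown earlier and
        // assert on the MOBILE_FRIENDLY verdict
        assertTrue("Page is not mobile friendly: " + url,
                MobileFriendlyCheck.isMobileFriendly(url));
    }
}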


That’s it.

Ideas for future improvement:

  • You can automate the validation such that every click would trigger a check with Google behind the scenes.

Just for fun, here are some more screenshots of the detailed analysis for NFL.com:

[Screenshots: detailed Mobile-Friendly Test analysis for NFL.com]

Criteria for Choosing the Right Open-Source Test Automation Tools

Last night, together with my colleague Amir Rozenberg, I presented a session at a local Boston meetup hosted by BlazeMeter.

The subject was the shift from legacy to open-source frameworks – the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes three layers:

  1. Open-source connectivity to a lab
  2. Open source and its test coverage capabilities (e.g. can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)
  3. Open-source reporting and analysis capabilities

During the session, Amir also presented Quantum (http://projectquantom.io), an open-source BDD/Cucumber-based test framework.

Full presentation slides can be found here:

Happy Reading

Eran & Amir

Shifting Mobile App Quality Into the Dev Build Cycles

There is no doubt that quality is becoming a joint feature-team responsibility. With that in mind, it is not enough for traditional QA engineers to develop and execute test automation after a successful build; the growing expectation is that the dev team also takes part and includes as many tests as it can in its build cycles, per each code commit.

Tests can be unit, functional, UI, or even small-scale performance tests.

With that in mind, the dev team needs a convenient environment that allows them to perform these quality-related activities so they can deliver better code, faster!

Developers today are specifically challenged with the following:

  1. Solving issues that come from production or from their QA teams and require a specific device and/or environment that is usually not available to the dev team
  2. Validating newly developed apps, or features within apps, across different environments and devices as part of the dev process
  3. Lack of shared assets for the entire dev team
  4. Getting a “long USB cable” that enables full remote device capabilities and debugging

Perfecto just made available, as part of its continuous quality lab in the cloud, a set of new tools and capabilities that address these requirements and enable dev teams to accomplish their goals.

Perfecto’s DevTunnel solution for Android, part of the recent 9.4 release, is the first significant step toward helping developers run more tests as part of the build cycle.


With the above challenges and requirements in mind, Perfecto has developed a unique solution called “DevTunnel”, which gives developers enhanced remote access to mobile devices in the cloud so they can perform any operation they could have performed if the devices were connected locally – things like debugging, running unit tests, testing UI at scale from within the IDE, and more.


In addition, when it comes to Android dev activities, it’s clear that Android Studio and IntelliJ IDEA are the leading IDEs. For that reason, Perfecto invested in developing a robust plugin that integrates nicely into the development workflow.

Espresso Framework

There is no doubt that the Espresso test automation framework is being adopted by more and more developers, for reasons such as:

  1. It is embedded into Android Studio and plays an important role for Android developers
  2. It is very fast and easy to execute, with quick feedback on Android devices
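For readers new to Espresso, here is a minimal test sketch (MainActivity and the view ids are hypothetical placeholders from the app under test; the package names follow the Android support-library versions current at the time of writing):

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

@RunWith(AndroidJUnit4.class)
public class LoginEspressoTest {

    // MainActivity and R.id.* come from the hypothetical app under test
    @Rule
    public ActivityTestRule<MainActivity> activityRule =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void validLoginShowsWelcome() {
        // type a user name, tap login, and verify the welcome message
        onView(withId(R.id.username)).perform(typeText("demo"));
        onView(withId(R.id.login)).perform(click());
        onView(withText("Welcome")).check(matches(isDisplayed()));
    }
}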

Espresso can be used within the Perfecto lab today in the following two modes:

  • Locally – execution through DevTunnel (see below)
  • Via continuous integration (CI) – using a command for Espresso test execution through a Jenkins server

In the community series dedicated to DevTunnel, you can learn more about its capabilities and use cases, and find samples to get you started with the new capability.

To see this in action, please refer to the video playlist that demonstrates how to get started and install DevTunnel, how to use the Perfecto lab within Android Studio with Espresso for testing and debugging purposes, and more.


Good Luck!

7 Mobile Test Automation Best Practices

Developing a mobile test automation scenario isn’t that complicated. Developers and testers use a variety of commercial test automation frameworks or open-source tools such as Selenium and Appium. However, when trying to execute these tests on real devices or integrate them into an Agile or CI (continuous integration) workflow, things get a little more complicated.

The major challenges around mobile test automation

The essence of test automation is the ability to use and re-use scripts many times, across platforms and environments. Test automation should be as maintainable as possible, especially as new platforms and product features are released. Many organizations that develop test automation for their mobile apps face the following challenges:

  1. Executing the tests against a variety of real mobile devices
  2. Executing these tests in parallel
  3. Leveraging existing test code (re-usability) for new tests
  4. Including real end-user environments/conditions (changing network conditions, low battery) in the tests
  5. Overcoming unexpected interruptions (incoming calls, apps running in the background)
  6. Running these tests unattended – overnight, as part of a Jenkins CI job

These are just a few of the challenges organizations confront when trying to move beyond older SDLC processes and meet faster releases and enhanced Dev–>Build–>Deploy–>Test–>Deploy cycles.
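For context, the kind of script these challenges apply to looks roughly like this minimal Appium sketch (the device name, app path, and server URL are assumptions for a locally running Appium server):

import io.appium.java_client.android.AndroidDriver;
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;

public class RealDeviceSmokeTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Galaxy S7");  // hypothetical device
        caps.setCapability("app", "/path/to/app.apk");  // hypothetical app path
        AndroidDriver driver =
                new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // ... test steps ...
        } finally {
            driver.quit(); // always release the device so unattended runs don't hang
        }
    }
}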

7 practical test automation tips

Overcoming these challenges starts with a few changes to the overall mobile app dev and test processes.

Consider these seven recommendations for building sustainable unattended automation.


The key to mobile test automation is to start with a small number of test cases, automate them, and make sure they are robust enough to be executed in parallel and unattended. Only then should you invest more and grow the test suite.

An important question to ask at the start is: what should I be automating? Organizations often do not choose the right tests to automate, resulting in lost development time, weak ROI, and an over-reliance on manual testing.

To learn more about the 7 ways to overcome test automation obstacles, please join us next week for a webinar hosted by me, automation expert and author Daniel Knott, and Perfecto’s Director of Technology, Uzi Eilon.

Tests to Include Within an Automation Suite

When developing a mobile or desktop test automation plan, organizations often struggle with setting the right scope and coverage for the project.

In a previous post, I covered the test coverage recommendations for a mobile project; now I would like to expand on the topic of which tests to automate.

Achieving release agility with high quality depends today, more than ever, on continuous testing, which is gained through proper test automation. However, automating every test scenario is neither feasible nor necessary to meet this goal.

In the table below we can see some very practical examples of test cases with various parameters and a Y/N recommendation on whether to automate them.

As shown below, and as a rule for mobile, web, and other projects, the key tests to add to an automation suite (from an ROI and TTM perspective) are those that are:

  • Required to be executed against various data sets (see the sketch after this list)
  • Meant to run against multiple environments (devices, browsers, locations)
  • Complex test scenarios (these are time-consuming and error-prone when done manually)
  • Tedious and repetitive test cases (a must to automate)
  • Dependent on various aspects (other tests, other environments, etc.)
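The first bullet – tests that must run against various data sets – is exactly what a data-driven harness addresses. A minimal TestNG sketch (the data and method names are hypothetical):

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
                {"user1", "pass1", true},
                {"user1", "wrong", false},
                {"", "", false},
        };
    }

    // one scripted flow, executed once per data row
    @Test(dataProvider = "credentials")
    public void loginTest(String user, String password, boolean shouldSucceed) {
        // ... drive the login flow and assert the expected outcome ...
    }
}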

[Table: example test cases with Y/N automation recommendations]

Bottom line: automation is key in today’s digital world, but doing it right and wisely can shorten time to market, cut redundant resources, and save a lot of R&D time otherwise wasted chasing unimportant defects coming from irrelevant tests.

Happy Testing!


A Few Best Practices Around Mobile Testing and Agility

When looking at the key blockers for dev and test teams that are trying to increase their existing test coverage, or to release more frequently without compromising quality, we see some common pitfalls that can be unblocked with some advance planning.

Let’s look first at the core mobile testing pillars:

[Figure: the core mobile testing pillars]

The boxes above represent either a full mobile app testing plan or a subset of one. Some of them can fit into a functional test cycle, some into regression or unit testing, and some can serve as pre-release acceptance tests.

Planning the test coverage and the contents of each iteration in the cycle is critical to the overall velocity of the app life cycle.

In order to meet both quality and velocity goals, dev/test/QE teams ought to include portions of tests in a model based on test stability.

Let me explain – when trying to include more tests than needed in a CI acceptance or functional test cycle, without really debugging each test on a few devices, there is a high risk that a few tests will fail due to unexpected pop-ups, bugs in the tests, device-specific issues, etc. – and such tests will obviously damage and block the entire test cycle.

In order to have a fluent CI/automation cycle, the recommended practice is to start with a small but robust subset that has already been executed a few times on more than one real device and debugged to a high probability of not getting stuck. Only once this suite has been “certified” as stable does it make sense to increase the scope of your cycle – with the right dependencies and validation points – and add more automated tests.

Such a paced approach, which may seem trivial, does not happen in many organizations. Therefore, as soon as a new device is introduced, a new test is added to cover a new feature or screen, or an unexpected device pop-up simply appears – the CI process breaks.

This results in a slowdown of the process, delays in release and development tasks, and frustration.

To summarize:

  • Construct your CI and automation cycle so that each test case is “certified”; only once it is stable and can run unattended should you add it to the acceptance test suite
  • Continuously debug your entire relevant test suite whenever a new feature, OS, or device is introduced, to assure nothing breaks your process
    • Also assess each test’s efficiency in detecting bugs – tests that keep running without adding value may be candidates for elimination, making room for newer and more efficient tests
  • Less == More –> assess the most valuable tests, the ones likely to identify more bugs than others, and include them in the cycle; redundant tests just consume time and resources, and can put your entire cycle in danger
  • Make sure you can gain access to all of your devices under test (DUT) at all times, for development, debugging, and continuous testing
  • Include sufficient debugging artifacts in your test code, whether through try/catch blocks, visual screen/scenario validation, or other debugging logs, outputs, and vitals (see the sketch after this list)
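As an example of the last bullet, here is a minimal Selenium sketch that captures a screenshot as a debugging artifact when a step fails (Apache Commons IO is assumed on the classpath; the artifacts path is hypothetical):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;

public class DebugArtifacts {

    public static void clickWithEvidence(WebDriver driver, By locator) throws Exception {
        try {
            driver.findElement(locator).click();
        } catch (WebDriverException e) {
            // persist a screenshot so the failure can be triaged offline
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            FileUtils.copyFile(shot, new File("artifacts/" + System.currentTimeMillis() + ".png"));
            throw e; // re-throw so the test is still reported as failed
        }
    }
}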

Happy unattended testing!

Selenium Is the New Testing Tool Standard

It seems the debate in the world of test automation tools is over.

If a few years back HP QTP/UFT (and WinRunner before it) was the standard and most commonly used tool for test automation in the QA space, those days are over.

The shift toward Agile, DevOps, and similar trends, together with a digital transformation that demands multi-platform testing of mobile, web, and IoT in very short time frames, has changed the tools landscape and the testing requirements.

See below a snapshot of the most-demanded testing tools, showing that the shift had already started in 2011, when Selenium passed the HP tools in market adoption.

[Chart: QTP vs. Selenium demand trend]

Source: http://www.seleniumguide.com/

The requirement today is that testing be done as early as possible in the project life cycle (SDLC), and to enforce this process, developers ought to play a significant role – testing is now developed and executed by all Agile team members, including developers, testers, ops people, and others.

For this shift and adoption to grow, the tools need to be tightly integrated into the developers’ environments (IDEs), which in the digital space might be Eclipse, Android Studio, Visual Studio, Xcode, or cross-platform IDEs like PhoneGap or Titanium.

An additional aspect of the adoption of test frameworks such as Selenium and Appium lies in their open-source nature. The flexibility of open-source tools, which developers can extend according to their needs, is a great advantage over closed testing tools such as UFT, which are disconnected from the IDE and development environments.

We shall continue to monitor the tools space, but it seems that open-source tools are becoming the standard for Agile and DevOps practitioners, who find them suitable for their shift-left activities, for keeping up with market dynamics and competition, and as great enablers of maintainable quality and velocity.

For a heads-up on the future of Selenium, and on the efforts to make the web browser drivers (Chrome, Firefox, IE, etc.) standard and managed by the browser vendors, refer to this great session (courtesy of Applitools):

http://testautomation.applitools.com/post/120437769417/the-future-of-selenium-andreas-tolfsen-video

Planning Mobile Test Coverage

In any conversation I take part in, the topic of test coverage comes up – and it is indeed a great challenge for businesses and practitioners alike, whether they are developers or testers (Agile, DevOps, Waterfall, etc.).

Before we get to the how, let’s understand the objectives and the definition of coverage.

Coverage Aspects:

  1. Device coverage
  2. Market coverage
  3. Test case/use case coverage
  4. Environment conditions coverage

When we mention device coverage, we should try to include several relevant factors, not just the DUT (device under test), because that alone is simply not enough.

Device Coverage

Proper device coverage should include a few important properties, and the more permutations you include in your test lab, the higher the coverage you will reach. Some of the MUST properties I would recommend having as part of the mix are:

  • Screen size & Resolution
  • PPI (Pixel per inch)
  • OSV (OS Version)

To that mix you need to add the leading market devices, as well as legacy devices that are still popular with many users in various geos (e.g. Samsung Galaxy S3, iPad 2), in order to cover both legacy and new OS versions on top of the device characteristics above.

Market Coverage

Let’s understand market coverage. This term refers to a combination of data sources, some of which teams may have access to and some not. It is typically a combination of leading market statistics and organizational web traffic or monitoring reports that highlight which platforms, browsers, etc. generate the most usage. By combining market and organizational data, teams can best match their target audience and test against what is right for their customers from a current-usage perspective, and in addition get market coverage of new and emerging devices/OS versions, allowing them to stay on top of market trends.

Another important aspect of coverage is, of course, the test cases themselves.

Test Case Coverage

Determining the right test cases to execute against each platform, in each test iteration throughout the SDLC (software development life cycle), is a crucial Agile enabler and an efficiency driver. When there is a robust automation foundation in the organization, teams can take advantage of it – and sometimes “fail” by overloading it with redundant or inefficient test cases that do not add the right value. The key to increasing test case coverage is to combine manual and automated testing (automating, of course, as much as possible), but to include only the robust, cross-platform automated and unit tests that are repeatable and valuable for a quick feedback loop between dev and QA, and to leave the platform-specific tests, corner cases, and the like to be done either manually or in a separate job/cycle, to assure a flawless CI/automation process.

Even with the above in mind, keep in mind that automation without ongoing maintenance and review of the test code will eventually fail, especially around mobile, due to constant platform-specific changes, newly added features, or unexpected pop-ups that may block automated tests from running end to end.

Test Environment

Last, for digital test coverage, the user experience and the environment in which the user operates are everything. Not covering the right environment will eventually waste testing and dev time, since these efforts will be run against the wrong, or “happy path only”, environment. A real mobile environment takes the following into account (see the sketch after this list):

  • Network conditions (2G, 3G, Wi-Fi)
  • Background applications running as “background noise” – consuming resources, taking over GPS or camera resources
  • Incoming calls/pop-ups
  • Screen orientation changes while the app is in the foreground
  • Location of the app
  • Locale & language

When taking all of the above into consideration, organizations can really build a test lab that provides sufficient coverage for their product, and can easily adjust the lab based on market and product dynamics.

Happy Testing!