Introducing Reporting Test Driven Development (RTDD)

In the era of “[…] Driven Development” trends such as BDD, TDD, and ATDD, it is also important to keep sight of the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and from my own experience as a practitioner, I constantly hear about the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage.
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms.
  4. Producing on-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc. is difficult.

 

Introducing Reporting Test Driven Development (RTDD)

Aiming to address the above pains – and I am sure they are not the only related ones – I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value both at the end of each test cycle and earlier, during the test planning phase.

When teams leverage a test design pattern that assigns custom contextual tags to their tests – tags that wrap an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search” and so forth – the test suites suddenly become better structured, easier to maintain, and simple to include, exclude, or filter through at the end of execution.

In addition, when the entire suite is customized with tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who receive the defect reports after execution can filter and drill down to the root cause more easily and efficiently.

If you think about it, using annotations to manage and filter test executions is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
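For illustration, here is a minimal sketch of what priority- and group-tagged tests might look like in TestNG with Selenium. The URLs, locators, and group names are assumptions made for the example, not taken from the Guru99 source:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTests {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Assumes chromedriver is available on the system path
        driver = new ChromeDriver();
    }

    // Higher-priority test, tagged so it can be filtered into a smoke run
    @Test(priority = 1, groups = { "smoke", "login" })
    public void validLogin() {
        driver.get("https://example.com/login"); // illustrative URL
        driver.findElement(By.id("username")).sendKeys("demoUser");
        driver.findElement(By.id("password")).sendKeys("demoPass");
        driver.findElement(By.id("loginButton")).click();
        Assert.assertTrue(driver.getCurrentUrl().contains("dashboard"));
    }

    // Lower-priority test, only picked up by the full regression group
    @Test(priority = 2, groups = { "regression", "login" })
    public void invalidLoginShowsError() {
        driver.get("https://example.com/login");
        driver.findElement(By.id("username")).sendKeys("demoUser");
        driver.findElement(By.id("password")).sendKeys("wrongPass");
        driver.findElement(By.id("loginButton")).click();
        Assert.assertTrue(driver.findElement(By.id("error")).isDisplayed());
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
```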

Reverse engineering a large test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four pains mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

The following table, which divides the various tags into three levels, can serve as a reference that can be used immediately, either through the built-in tagging and annotations that come with TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, imagine the exact same suite tagged according to the three categories above (a minimal code sketch follows the list):

  1. Execution-level tags
    1. This tag can encapsulate an entire build or CI-job-related set of testing activities, or it can differentiate tests by the test framework in which the scripts were developed. This is the highest classification level of tags you would use.
  2. Test-suite-level tags
    1. This is where you start breaking down your test factory according to more specific identifiers, such as the mobile environment, the high-level functionality under test, etc.
  3. Logical-test-level tags
    1. These are the most granular tag identifiers, defined for each logical test step, making it easy to filter results, triage failures, and plan ongoing regressions based on code changes.
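To make the three levels concrete, here is a minimal sketch of how they might map onto TestNG groups; the class, method, and group names are illustrative assumptions rather than a prescribed convention:

```java
import org.testng.annotations.Test;

// Execution-level tag: a class-level @Test applies the "ci-nightly" group
// (illustrative name) to every test method in the class.
@Test(groups = { "ci-nightly" })
public class SearchSuite {

    // Suite-level tag ("android-regression") plus logical, step-level tags
    @Test(groups = { "android-regression", "search", "autocomplete" })
    public void searchSuggestionsAppearWhileTyping() {
        // ... Selenium/Appium steps would go here ...
    }

    @Test(groups = { "android-regression", "search", "empty-results" })
    public void emptySearchShowsFriendlyMessage() {
        // ... Selenium/Appium steps would go here ...
    }
}
```

A testng.xml suite can then include or exclude any of these groups, so the same code base serves a quick CI run, a full regression, or a focused functional-area execution.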

As a reference implementation for an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful when used correctly with its listeners, predefined tags, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.
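As a rough illustration of the listener idea (this is plain TestNG, not the reporting SDK itself), the sketch below logs each test result together with its group tags so that failures can later be filtered by “Regression”, “Login”, and so on; the class name and output format are my own assumptions:

```java
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// A minimal tag-aware listener: every pass/fail is logged together with its
// TestNG groups, so results can later be filtered by tag.
// Register it via the @Listeners annotation or in testng.xml.
public class TagAwareReportListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        log("PASSED", result);
    }

    @Override
    public void onTestFailure(ITestResult result) {
        log("FAILED", result);
    }

    private void log(String status, ITestResult result) {
        String tags = String.join(", ", result.getMethod().getGroups());
        System.out.println(status + " | " + result.getName() + " | tags: " + tags);
    }
}
```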

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards seen below and fast defect resolution that drills down by test case and by platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal Git)

 

Digital Dashboard Example With Predefined ContextTags (source: Perfecto)

 

Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other report into a more customizable one that, as demonstrated above, can help them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Tagging from early test authoring as a method for faster triage and fix prioritization
  • Shifting tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easily filtering big test data and drilling down into specific failures per test, per platform, per test result, or by group
  • Eliminating flaky tests through high-quality visibility into failures

The result is a methodological RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Selenium Is the New Testing Tool Standard

Seems like the debate in the world of test automation tools is over.

If a few years back HP QTP/UFT (formerly WinRunner) was the standard and most commonly used test automation tool in the QA space, those days are over.

The shift toward Agile, DevOps, and similar trends, together with the digital transformation that requires multi-platform testing of mobile, web, and IoT in very short timeframes, has changed the tools landscape and the testing requirements.

See below a snapshot of the most in-demand testing tools, which shows that the shift started as early as 2011, when Selenium passed the HP tools in market adoption.

qtp vs selenium

Source: http://www.seleniumguide.com/

Today's requirement is that testing be done as early as possible in the software development life cycle (SDLC), and to enforce this process developers ought to play a significant role – testing is now developed and executed by all Agile team members, including developers, testers, ops people, and others.

For this shift and adoption to grow, the tools need to be tightly integrated into the developers' environments (IDEs), which in the digital space might be Eclipse, Android Studio, Visual Studio, Xcode, or cross-platform tools like PhoneGap or Titanium.

Another driver of adoption for test frameworks such as Selenium and Appium lies in their open-source nature. The flexibility of such open-source tools, which developers can extend according to their needs, is a great advantage compared to closed testing tools such as UFT, which are disconnected from the IDE and development environments.
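As a small example of that flexibility (sketched against the Selenium 2/3-era Java API; the class and listener names are my own), the snippet below wraps a driver with a custom event listener so that navigation and clicks are logged – the kind of extension a closed tool would not easily allow:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;

public class LoggingDriverExample {

    // A small custom listener: every navigation and click is logged.
    static class LoggingListener extends AbstractWebDriverEventListener {
        @Override
        public void afterNavigateTo(String url, WebDriver driver) {
            System.out.println("Navigated to: " + url);
        }

        @Override
        public void beforeClickOn(WebElement element, WebDriver driver) {
            System.out.println("Clicking: " + element.getTagName());
        }
    }

    public static void main(String[] args) {
        // Assumes a Firefox/geckodriver setup appropriate to your Selenium version
        WebDriver driver = new EventFiringWebDriver(new FirefoxDriver())
                .register(new LoggingListener());
        driver.get("https://www.example.com");
        driver.quit();
    }
}
```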

We shall continue to monitor the tools space, but it seems open-source tools are becoming the standard for Agile and DevOps practitioners, who find them suitable for their shift-left activities, for keeping up with market dynamics and competition, and as great enablers of quality and velocity.

For a heads-up on the future of Selenium, and on the efforts to make the web browser drivers (Chrome, Firefox, IE, etc.) standardized and maintained by the browser vendors, refer to this great session (courtesy of Applitools):

http://testautomation.applitools.com/post/120437769417/the-future-of-selenium-andreas-tolfsen-video

Best practices for iOS mobile application testing

Hi

iOS changed the mobility game, no doubt about it. It paved the way for the ‘mobile era’ by offering amazing functionality with a simple user experience. However, when it comes to testing and monitoring, working with iPhone/iPad mobile applications can be anything but simple…

As the iOS app market continues to produce record growth, challenges and complexities surrounding iOS application testing also continue to interfere with development. A key challenge of iOS testing is that, unlike the open-source Android OS, Apple iOS is a closed operating system. Added complexity during the development and testing stages arises with a closed system, since users can’t extract necessary data from low level objects, which are essential for test automation. So, what’s the best approach for getting the necessary level of access to the iOS device – rooting (jailbreaking) or compile-time source instrumentation? Should you base your testing on native objects or OCR-based screen analysis?

Let’s take a deeper look into some of these challenges and why a cloud-based hybrid approach is important to offer developers and testers the necessary coverage, capabilities and flexibility to deliver better iOS apps and deploy them with confidence.

Rooting (jailbreaking) vs. Source Instrumentation (compile-time)

There are two common methods used today in the mobile testing industry to address this challenge (i.e. access to the low level objects): rooting (jailbreaking) and source instrumentation (i.e. compile-time solution).

Jailbreaking refers to the process of removing the limitations placed by Apple on the iOS device in order to get low level (root) access to the operating system. This allows the tester to be able to recognize the objects within the application being tested.

Source Instrumentation is performed by compiling the application being tested with an additional piece of code that provides access (“back door”) to the low level OS for object recognition. This code enables the tester to execute the low level calls and get the Object ID’s from the operating systems (without the need to root/jailbreak the device).

The decision as to which approach to adopt depends strongly on several considerations (below are just a few):

1) The SDLC process in use

2) Corporate policies

3) The application under test

4) Frequency of testing

Perfecto Mobile provides its end users with the freedom to choose what fits them best, while taking into consideration the advantages and disadvantages of each approach. When customers need to quickly test either a new iOS version or a new iOS device, the jailbreaking approach is less suitable. In such a case, the compile-time method is preferable – even though it complicates the SDLC by introducing additional code to the application being tested.

On the other hand, using a jailbroken device lets you test the application with the exact code with which it will be released (compile-time instrumentation mandates that before store submission you remove the “back door” or be exposed to serious security issues). This eliminates the need for recompilation and intrusive operations that could potentially pose a risk to quality. Companies using a compile-time approach should also consider possible regulations (such as HIPAA) which mandate testing on the final binary (and not on a debug or test-friendly version, etc.).

The combined (hybrid) approach lets you choose which type of tests to implement on which iOS device according to the nature of your application, project needs, and policy. When the test devices are deployed and securely managed in a “private cloud” (such as that offered by Perfecto Mobile), such a configuration guarantees that the jailbreak method does not introduce any risks or abuse of the platform for non-testing purposes. The jailbroken device is used only for testing purposes in a closed and secure testing environment. This is analogous to the way iOS devices used for development carry a “developer signature,” and to the way Android devices used for development have more levels of access than those required during the normal ALM cycle.

The Need for a Hybrid Approach to Object Recognition

Testing a mobile application requires strong object recognition capabilities. Visual analysis alone might not be sufficient: for example, OCR technology can detect UI issues and glitches on the test devices, but cannot ensure 100% accuracy due to its heuristic nature. On the other hand, low-level objects might “miss” obvious problems that a visual analysis can easily detect. That’s why a hybrid approach incorporating both visual and native object analysis is imperative for covering all mobile business cases. Such an approach is supported by Perfecto Mobile.

Object level analysis vs. Visual analysis

The screenshot above shows the difference between object-level analysis and visual analysis (object-level analysis would not have detected the button overlapping the text).
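As a rough, tool-agnostic sketch of the hybrid idea (this is not Perfecto Mobile's own API; locator, class, and helper names are assumptions), the snippet below pairs a native object-level check with a screenshot capture that is then handed to a hypothetical visual-comparison step:

```java
import java.io.File;

import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class HybridCheckExample {

    // Illustrative hybrid check: the object layer confirms the element exists
    // and is enabled, while a screenshot is captured for visual comparison.
    public static void verifyLoginButton(WebDriver driver) {
        // 1) Object-level check: fast and precise, but blind to rendering glitches
        WebElement loginButton = driver.findElement(By.id("loginButton")); // illustrative locator
        if (!loginButton.isDisplayed() || !loginButton.isEnabled()) {
            throw new AssertionError("Login button not visible/enabled at the object level");
        }

        // 2) Visual-level check: capture the screen and hand it to an image
        //    comparison step (baseline diff, OCR, etc.) that can catch overlaps
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        compareAgainstBaseline(screenshot); // hypothetical helper, not a real library call
    }

    private static void compareAgainstBaseline(File screenshot) {
        // Placeholder for whichever visual-diff/OCR tool the project uses
        System.out.println("Captured screenshot for visual comparison: " + screenshot.getAbsolutePath());
    }
}
```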

The Perfecto Mobile Approach: Go Cloud, Go Hybrid

Perfecto Mobile’s experience as a market leader has taught us that the best approach is to present each customer with all possible alternatives, making them available in the cloud:

Real devices + emulators (in the cloud), OCR screen analysis + OS-level native objects (in the cloud), rooted/jailbroken devices + non-rooted/non-jailbroken devices (in the cloud).

With hundreds of thousands of automation hours running every month on our platform, we are well-positioned to suggest and guide, but not to “judge” what’s best for everyone…

Perfecto Mobile hybrid object support on a rooted android and a non-jailbroken iPhone

Regards.

Eran Kinsbruner

IDC 2012 report around mobile and additional post on mobile testing

Hi

Redirecting to two new posts I’ve made on the Perfecto Mobile blog:

http://blog.perfectomobile.com/2012/11/02/idc-report-mobile-operating-systems-stats-for-q32012/

http://blog.perfectomobile.com/2012/10/26/testing-mobile-enterprise-business-critical-applications/

Enjoy

Eran

Jamo automation solution

Hello

In this short post, I will give a brief taste of Jamo Solutions – a mobile test automation solution.

As you know, the mobile market is growing rapidly and so are the solutions.

In earlier posts we described some of the players in mobile automation and cloud, and one of them is Jamo:

http://www.jamosolutions.com/

Jamo provides mobile automation for the various mobile platforms on non-jailbroken and non-rooted handsets (Android, iOS, Windows, and BlackBerry). The solution is based on a plug-in installed into the leading IDEs (Eclipse, QTP, Visual Studio), which communicates with a Jamo device manager component on the PC and with a small agent running on the target device that enables real object recognition per platform. The add-on allows you to record/replay and retrieve the GUI objects from each target device (real objects) to be used in your automated scripts. The solution is secured per customer. According to the company spokesman, it is the solution of choice for government, military, banking, and other security-sensitive applications.

The solution is well proven and used by customers, and is deployed globally by various partners and integrators that work with Jamo, which is located in Belgium and Switzerland.

For more information you can contact the company's representatives and get trials, demos, and more.

Good Luck

Eran

Windows Phone 8 handsets are starting to appear

Hi,

As many anticipated, we are starting to see more and more investment in the new Windows Phone platform by many OEMs, not only Nokia, which has been collaborating with Microsoft for a while.

In this short post, I will list the upcoming Windows Phone 8 phones which you will soon start to see.

HTC:

HTC is announcing the launch of its new Windows Phone 8 phone called HTC Accord.

The phone comes with a 1.5 GHz dual-core Snapdragon processor, a 4.3″ screen, an 8 MP camera, an external microSD card slot, and NFC support; rumors also say that the phone will support LTE.

Read more at: http://www.htcaccord.com/

Samsung:

Samsung announced its new ATIV S Windows Phone 8 phone with the following characteristics: a Super AMOLED 4.8″ screen, a 1.5 GHz dual-core processor, a full-HD 8 MP rear camera with a 1.9 MP front camera, support for an external microSD card (which is new in the WP platform), and NFC support. The phone, running WP8, will support the Internet Explorer 10 browser, the Mobile Office suite, and the new SkyDrive cloud storage service.

The ATIV brand actually starts a new line of Samsung products for WP8 (ATIV Tab 10.1″, ATIV Smart PC, and more).

Read more at: http://www.samsung.com/global/ativ/ativ_s.html

Nokia:

Nokia, which is of course the WP platform pioneer, is also announcing two new phones running WP8: the Nokia Lumia 920 and the Nokia Lumia 820.

The Nokia Lumia 920 will come with a 4.5″ screen, and the Nokia Lumia 820 with a smaller 4.3″ screen.

The news around these two phones is their support for the new PureView camera technology, which for these two phones will provide 21-megapixel support.

Read more about these 2 new handsets at: http://www.theverge.com/2012/8/31/3281985/nokia-lumia-920-specs-pictures-leak

http://www.engadget.com/2012/08/31/nokia-lumia-820-920-leak/

Summary:

As I always say, the mobile world is dynamic and constantly changing. We are already seeing the biggest OEMs diving into the new Windows Phone platform, so it will be interesting to see how this change impacts the mobile market and the existing iOS, Android, and RIM platforms.

From a testing perspective, we also see a variety of new screen sizes, which is and will always be a challenge for testers and test automation (above we already mentioned 4.3″, 4.5″, and 4.8″). WP8 tablets, as you saw above, are also starting to be deployed, extending this platform's market.

Regards,

Eran Kinsbruner

Cross browser comparison (Focus on iOS)

Hi

It is a fact that more and more hybrid/web applications are being developed lately, including HTML5 applications and more.

The common assumption among application developers is that since it is a web application, it will run cross-platform without too much effort in QA and UI work.

That might be true in some simple cases; however, on top of the complications in testing web applications, we must not forget that each app needs to comply with the iOS and Android UI guidelines (icons, fonts, etc.), and another important point to keep in mind is cross-browser compatibility.

Each user (on Android or iOS, and soon Windows Phone) may choose the browser they prefer to surf with. That user will not change their browser, and will expect “your” application to run top-notch on it.

In this post I will not cover all existing iOS browsers (nor Android ones), only the leading ones. My recommendation is to prepare a similar testing matrix for the existing mobile web browsers and perform at least some level of sanity testing on each, to assure your application works properly and also meets the desired guidelines.

For iOS we are familiar with the following browsers (with the iTunes URLs for downloading each):

– Safari (the most common and default browser)

– Chrome (Google) – http://itunes.apple.com/us/app/chrome/id535886823?mt=8

– Dolphin – http://itunes.apple.com/us/app/dolphin-browser/id452204407?mt=8

– Opera Mini – http://itunes.apple.com/us/app/opera-mini-web-browser/id363729560?mt=8

– Mercury – http://itunes.apple.com/us/app/mercury-web-browser-most-advanced/id331012646?mt=8

Please see below some screenshots of the exact same web page (BBC News, a high-quality web site) rendered in the browsers above. There is not too much difference, which is good news :), but there are some look-and-feel differences – for other apps I am sure the situation will be different.

Safari Browser:

Google Chrome:

Mercury Browser:

Dolphin Browser:

Opera Mini:

Keep in mind that if the above appears quite different from one browser to another, it will introduce additional challenges for the automation team.

To sum up, the well-known mobile matrix of devices and OS versions will also extend to mobile browsers per platform – a vital thing to cover, which will also complicate automation.

Good Luck

Regards,

Eran

Picking the right handsets for your project

Hi

We all know that the mobile world is dynamic: plenty of new handsets are being shipped at the same time as we develop our product and test it on what we believe are the “hottest” handsets in the market.

It is clear that being agile and fast in the way we develop, test, and deploy our mobile products is key to staying attractive in the market; however, it is impossible to support all handsets and stay ahead of the market at the same time.

So – staying up to date in what you support is not simple, but it is possible.

When you start developing your product, keep in mind that by picking the “right” 10 handsets that are “hot” in the market you can reach coverage of roughly 50% of the market (note that there are lead devices that represent a whole family of handsets and can give you a lot of value when you test on them), and if you go up to ~30 devices you may reach up to ~80% market coverage.

How should you decide, then?

Picking handsets should combine the following two aspects:

– Market research

– Right family identification/lead devices

Market Research: The way to determine what is relevant in the market is to do some research and analysis – either through leading mobile blogs or, even simpler, by going through the world's leading mobile operators and seeing what they are currently selling (e.g. Vodafone Germany today lists at the top: Samsung Galaxy SIII, Samsung Galaxy SII, SEMC Xperia Arc S, etc. – http://shop.vodafone.de/Shop/smartphones/; if you go to Vodafone UK you will see mostly the same ones, as well as the HTC One X and others – http://www.vodafone.co.uk/brands/android/index.htm).

Building a matrix and unifying the handset lists across the world's leading carriers in Europe, the U.S., and Asia should give you the lead handsets you would like to support and test in the 3-6 months ahead (per OS – Android, iOS, Windows Phone, and BlackBerry).

Families: The family aspect should produce a subset of the above list. If, for example, you reached a common list of, say, 50 handsets, I am fairly certain the list can be cut in half by properly comparing the various handsets by their OS, screen resolution, and OEM (this can be done through sites like GSM Arena – http://www.gsmarena.com) and reducing the list to leads, members, and families.

Attached to this post is an up-to-date list of common handsets by OEM that are sold worldwide these days, to ease your pain 🙂 As you will see, there are a lot of similar handsets across all the large operators, which highlights the main devices to focus on.

MobileWorldHandsetsDistribution

P.S: With regards to the leading Android/iOS tablets these days:

iOS – iPad 2 and iPad 3

Android – Samsung Galaxy Tab 10.1, Motorola Xoom, Asus Nexus 7, Dell Streak 7, Samsung Galaxy Tab 7, Sony Tablet S, Asus Transformer TF300, Asus Transformer TF700

Regards,

Eran

Cross Platform mobile development tool – Titanium

All

This is a testing blog; however, I recently get a lot of questions about mobile development/porting for cross-platform projects (Android/iOS, etc.).

For these cases there are several good free tools out there that can be used, such as PhoneGap (http://phonegap.com/download/), eggPlant (http://www.testplant.com/products/eggplant/for-cross-platform-testing/), and Titanium.

In this short post I will just point you to the tool and give some very high-level details about it, so you can go and try it out for yourself.

Titanium is a free tool by Appcelerator (http://www.appcelerator.com/platform/titanium-sdk) that lets you quite quickly develop a cross-platform mobile application, which can then be deployed to iOS and Android phones/emulators as well as PC web browsers.

The tool allows you to develop in JavaScript and customize your resources depending on the operating system you wish to deploy to.

You need to have the Android SDK on your machine and point to its location in the Titanium settings to allow execution and debugging on the Android emulator.

Through the SDK you have options to create a new project from samples or templates (HTML5, tabbed app, and more).

Feel free to give it a try and comment

Regards,

Eran Kinsbruner