Trends in Cross Browser Testing and Web Development

Typically, I’ll write a lot about mobile app testing, tools, trends, coverage, and such.

In this blog, I actually wanted to share some up-to-date trends as I see them in the web landscape.

The web market has shifted a lot over the past years, alongside the mobile space. We see clear use of specific development languages and development frameworks, and of course specific test frameworks aimed at testing Angular, jQuery, Bootstrap, .NET, and other websites.

From a dev-language perspective, web front-end developers mostly use the following languages as part of their job:

Source: http://vintaytime.com/premium/top-programming-languages/

As a clear trend in web development, JavaScript is the leading language used by web developers. It’s actually not a huge surprise: if you look at the top frameworks used by these web developers, you will see quite a few that are based on JavaScript.

There is a recent trend of developers shifting to non-AngularJS web development frameworks like Aurelia, React, and Vue.JS, which are seeing growing usage and adoption due to considerations such as the following (a larger list of pros/cons is in source 1 below). With this trend in mind, and as you’ll read in my references below, the new solutions are still not as complete as AngularJS.

  • Shorter learning curve
  • Simple to use, clean
  • Flexibility
  • Lightweight compared to others (e.g., less than half the size of AngularJS)
  • Better performance
  • Easy to integrate with other front-end stack tools
  • Responsive server-side rendering (Vue.JS supports it, reduces time for users to see rendered content)
  • SEO Friendly
  • Good documentation and Community Support
  • Good debugging capabilities

Source 1: https://www.slant.co/topics/4306/~angular-js-alternatives

Source 2: https://w3techs.com/technologies/details/js-angularjs/all/all

Now that we have seen the leading web development languages and frameworks used these days, let’s drill down into what test automation engineers are adopting.

Selenium, without a doubt, is the leader and the base for most frameworks; however, even in this space we see new and innovative test frameworks such as Casper.JS, TestCafe, Buster.JS, and Nightwatch.JS, together with the traditional WebdriverIO and, of course, Protractor.

If we examine the below visual (Source: NPM Trends), there is clear market dominance by Selenium and by Protractor, which under the hood uses Selenium WebDriver and supports the Jasmine and Mocha tools.

The advantage of tools like Protractor is that they make it much easier to test websites developed in various frameworks like AngularJS, Vue.JS, etc. This allows test automation engineers to use them agnostically across multiple websites, regardless of the frameworks they are built with.

It is not as easy and rosy as I described above, but it does give a good head start when starting to build the test automation foundation.

There are a few other players in that space that are aimed at specific unit testing and headless browser testing (Phantom.JS, Casper.JS, JSDom, etc.).

As I blogged in the past, from a test automation strategy perspective, teams might find it beneficial and more complete to leverage a set of test frameworks rather than using only one. If the aim is to have non-UI headless browser testing together with unit testing as well as UI-based testing, then a combination of tools like Protractor, Casper.JS, and QUnit might be a valid approach.

I hope you find this post useful and that it helps you “swim” in the hectic tools landscape. As always, it is important to match the tool to the product requirements, development methodology (BDD, Agile, Waterfall, etc.), supported languages, and more.

Introducing Reporting Test Driven Development (RTDD)

In the era of “[..] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep in mind the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience, I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g., which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. On-demand quality dashboards that reflect the app quality per CI job, per app build, per tested functional area, etc. are missing

Introducing Reporting Test Driven Development (RTDD)

Aiming to address the above pains (which I’m sure are not the only related ones), I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind – that is, the test reports – they can collect the value at the end of each test cycle, as well as earlier, during the test planning phase.

When teams leverage a test design pattern that assigns their tests custom contextual tags that wrap an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search” and so forth, suddenly the test suites are better structured, easily maintained, and can be included/excluded and filtered at the end of execution.

In addition, when the entire suite is customized with tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who get the defect reports after execution can filter and drill down to the root cause in an easier, more efficient manner.

If you think about it, using annotations as a method to manage test executions and filter them is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
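
To illustrate, here is a minimal TestNG sketch in Java; the test names and group tags are hypothetical examples of mine, not taken from the Guru99 source:

import org.testng.annotations.Test;

public class LoginTests {

    // Group names double as contextual report tags
    @Test(priority = 1, groups = {"Regression", "Login"})
    public void validLogin() {
        // ... Selenium/Appium steps for a successful login
    }

    @Test(priority = 2, groups = {"Regression", "Search"})
    public void searchFromHomeScreen() {
        // ... Selenium/Appium steps for a search flow
    }
}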

Reverse engineering a large test suite is painful, hard to justify, and most often too late, since by then the product is already out in the market and the teams are left to struggle with the four pains mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we examine the following table, which divides various tags into three levels, it can serve as one reference that can be used immediately, either through the built-in tagging and annotations coming from TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, think about that exact test suite tagged according to the following three categories (see the sketch after the list):

  1. Execution-level tags
    1. This tag can encapsulate the entire build- or CI-job-related testing activity, or it can differentiate the tests by the test framework in which you developed the scripts. That’s the highest classification level of tags you would use.
  2. Test-suite-level tags
    1. This is where you start breaking your test factory down by more specific identifiers, like your mobile environment or the high-level functionality under test.
  3. Logical-test-level tags
    1. These are the most granular test tag identifiers, which you would define per each of your logical test steps to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
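
As a hedged illustration of these three levels using plain TestNG (all names below are hypothetical), the suite name can play the execution-level tag (e.g., the CI job), each test block a suite-level tag, and the group includes the logical test-level tags:

<!-- testng.xml: execution level = suite, suite level = <test>, logical level = groups -->
<suite name="CI-Nightly-Regression">
  <test name="Mobile-Android-Suite">
    <groups>
      <run>
        <include name="Login"/>
        <include name="Search"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
    </classes>
  </test>
</suite>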

As a reference implementation of an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful if used correctly with its listeners, pre-defined tags, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you achieve both the dashboards seen below and fast defect resolution that drills down by both test case and platform under test.

Code Sample: Using the Geico RWD Site with the Reporting TDD SDK (Source: my personal Git)


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI/unit and other CI-related tests to extend a legacy test report, a TestNG report, or another format into a more customizable test report that, as I’ve demonstrated above, can help them achieve the following outcomes:

  • Better structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
  • Shift tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result, or through groups
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodology-based RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile-First Indexing.” From Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly tool and guidelines available. A very nice interactive version is available here (see the images at the bottom of this post), and there is also an API which, thanks to Google, users can exercise before they code. Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel, there is no point in sending the same URL from every execution, because the result would be the same
  • Google basically returns “MOBILE_FRIENDLY” or not; I suggest setting the assert on that (see the sketch after this list)
  • The current API differs from the UI in that it only provides the mobile-friendly result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages, as Google probably doesn’t have a site-to-site secure connection with your network
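
If you want to exercise the API directly before wiring it into a framework, here is a rough Java sketch; the v1 URL Testing Tools endpoint and the API-key parameter are assumptions on my side, so verify them against Google’s current documentation:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class MobileFriendlyCheck {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and key placeholder – verify against Google's docs
        URL endpoint = new URL(
            "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=YOUR_API_KEY");
        HttpURLConnection con = (HttpURLConnection) endpoint.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write("{\"url\": \"http://www.nfl.com\"}".getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner s = new Scanner(con.getInputStream(), StandardCharsets.UTF_8.name())) {
            String body = s.useDelimiter("\\A").next();
            // Naive verdict check: the quoted value avoids also matching "NOT_MOBILE_FRIENDLY"
            System.out.println(body.contains("\"MOBILE_FRIENDLY\"") ? "PASS" : "FAIL:\n" + body);
        }
    }
}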


For developers and testers who do not have time, testing mobile friendliness repeatedly will probably simply not happen. That’s why I integrated the Google Mobile-Friendly API into Quantum:

  • Added 2 Gherkin commands
// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
// If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"


That’s it.

Ideas for future improvement:

  • You can automate the validation such that every click would trigger a check with Google behind the scenes.

Just for fun, here are some more screenshots of the detailed Mobile-Friendly analysis for NFL.com:

(Screenshots: Google Mobile-Friendly detailed analysis for NFL.com)

Model-Based Testing and Test Impact Analysis

In my previous blogs and over the years, I have already stated how complicated, demanding, and challenging the mobile space is; therefore, it seems obvious that there needs to be a structured method for building test automation and meeting test coverage goals for mobile apps.

While there are various tools and techniques, in this blog I would like to focus on a methodology that has been around for a while but was never adopted in a serious, scalable way by organizations, because it is extremely hard to accomplish and because there are not sufficient non-proprietary, open-source tools that support it.

First things first, let’s define what Model-Based Testing is.

Model-based testing (MBT) is an application of model-based design for designing, and optionally executing, artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.


Fig 1: MBT workflow example. Source: Wikipedia

In the context of mobile, if we think about an end-user workflow with an application, it will usually start with a login to the app, then performing an action, going back, performing a secondary action, and often even a third action based on the output of the second. Complex, huh?
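
To make the modeling idea concrete, here is a toy Java sketch of my own (not any vendor’s algorithm) that represents such a flow as a directed graph of hypothetical screens and enumerates the loop-free user paths an MBT tool might turn into test cases:

import java.util.*;

public class AppFlowModel {
    // Toy model: screen -> screens reachable from it
    static final Map<String, List<String>> MODEL = Map.of(
        "Login",   List.of("Home"),
        "Home",    List.of("Search", "Profile"),
        "Search",  List.of("Results"),
        "Results", List.of("Home"),
        "Profile", List.of("Home"));

    public static void main(String[] args) {
        walk("Login", new ArrayDeque<>(List.of("Login")));
    }

    // Depth-first enumeration; each printed path is a candidate test case
    static void walk(String screen, Deque<String> path) {
        System.out.println("Candidate test: " + String.join(" -> ", path));
        for (String next : MODEL.getOrDefault(screen, List.of())) {
            if (!path.contains(next)) {   // skip already-visited screens to avoid loops
                path.addLast(next);
                walk(next, path);
                path.removeLast();
            }
        }
    }
}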

Modeling a mobile application serves a few business goals:

  1. App release velocity
  2. App testing coverage (use cases)
  3. App test automation coverage (%)
  4. Overall app quality
  5. Cross-team synchronization (Dev, Test, Business)


As covered in an older blog I wrote, mobile apps behave differently and support different functionality based on the platform they run on (e.g., not every iOS device supports both Touch ID and/or the 3D Touch gesture). Therefore, being able not only to model the app and generate the right test cases, but also to match these tests across the different platforms, can be key to achieving many of goals 1-5 above.

In the market today, there are various commercial tools that can assist with MBT, like CA Agile Requirement Designer, Tricentis Tosca, and others.

Looking at an example provided by one of the commercial vendors in the market (Tricentis) shows a common workflow around MBT. A team aims to test an application; therefore, they scan it using the MBT tool to “learn” its use cases, capabilities, and other artifacts, and stack these into a common repository that the team can use to build test automation.


Fig 2: Tricentis Tosca MBTA tool

In Fig 2, Tricentis examines a web page to learn all of its options, objects, and other data-related items. Once the app is scanned, it can easily be converted into a flow diagram that serves as the basis for test automation scenarios.

With the above goals in mind, it is important to understand that having an MBT tool that serves the automation team is a good thing, but only if it increases the team’s efficiency, its release velocity, and the overall test automation coverage. If the output of such a tool were a long list of test cases that either does not cover the most important user flows or includes many duplicates, then it would not serve the purpose of MBT, but rather delay existing processes and add unnecessary work to teams that are already under pressure.

In addition to the above commercial tools, there is an older but free tool called MobiGuitar that enables Android MBT with Robotium. This tool offers not just MBT capabilities but also code coverage driven by the generated test scripts.

A best practice in that regard would probably be to use an MBT tool that can generate all the application-related artifacts – including the app object repository and the full set of use cases – and allow all of that to be exported to leading open-source test automation frameworks like Selenium, Appium, and others.

Mobile Specific MBT – Best Practices and Examples

Drilling down into a workflow that CA would recommend around MBT looks as follows (in reality, the steps below are easier said than done for a mobile app compared to web and desktop):

  1. The business analysts create the story using tools like CA Agile Requirement Designer or similar
  2. The story is then passed to an ALM tool (e.g., CA Agile Central [formerly Rally], Jira, etc.) for project tracking
  3. Teams use the MBT tools to collaborate
    1. The automation engineer adds the automation code snippets to the nodes where needed, or adds additional nodes for automation
    2. The programmer updates the model with technical specs or more technical details
    3. The test data engineer assigns test data to the model
  4. Changes to the story are synchronized with the ALM tool
  5. Test cases are synchronized with the ALM tool
  6. The programmer completes coding
  7. The code is promoted from Dev to QA
  8. Testing begins
    1. The tester uses the test cases with test data from the MBT tools for manual test case execution
    2. The automation scripts with test data are executed for functional and regression testing

To learn more about efficient MBT solutions and practices, please refer to these sources:

Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at Joe Colantonio’s AutomationGuild online conference – kudos to Joe for a great event!


Among the multiple interesting questions I got after my session – like “what is the best test coverage for mobile projects?” and “how do you design effective non-functional and performance testing for mobile and RWD?” – I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior Driven Development), and this obviously triggered the question.

Definition: BDD and ATDD

ATDD – Acceptance Test Driven Development

Based on Wikipedia’s definition (referenced above), ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer’s needs prior to implementation and allow customers to be able to converse in their own domain language.

ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

BDD – Behavior Driven Development

Again, based on Wikipedia’s definition (referenced above), BDD is a software development process that emerged from test-driven development (TDD). Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process for collaborating on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at the agile-like practices of BDD, ATDD, and TDD is in the context of higher velocity and quality requirements.

Organizations are aiming to release to market faster, with great quality and sufficient test coverage, while at the same time, of course, meeting the business goals and customer satisfaction. To achieve these goals, teams ought to collaborate closely from the very beginning of the app development and design stages.

Once organizations have the customer product requirements, and can start developing the product and the tests through user stories, acceptance criteria, and such, several goals can be met:

  • High customer-vendor alignment == customer satisfaction
  • Faster time to market; the app is tested along the SDLC
  • Quality is in sync with customer needs, and there are far fewer redundant tests
  • There are no communication gaps or barriers between Dev, Test, Marketing, and Management


Looking at the below example of BDD-based test automation code, it is very easy to understand the functionality and use cases under test, as well as the desired test outcome.

(Screenshot: Quantum BDD test scenario code)

As can be seen in the screenshot above, the script installs and launches the TestApp.APK file on an available Samsung device, performs a successful login, and presses a menu item. As a final step, it also performs a mobile visual validation to assure that the test passed and, as an automation anchor, that the test code reached the expected screen.

It is important to mention that the test frameworks and tools that support TDD, ATDD, and BDD can in many cases be similar; in our case above, one can still develop and test from a BDD or ATDD standpoint by using a Cucumber test automation framework (Cucumber, Quantum).

If we compare the above functional use case – or, as stated in the Cucumber language, “Scenario” – to a scenario that would fit an ATDD-based approach, we would most likely need to introduce the known “3 amigos” approach: the three perspectives of customer (what problem are we trying to solve?), development (how might we solve this problem?), and testing (what about…).


Since a real ATDD best practice will define Gherkin-like app scenarios before development even starts, the above BDD example would be a precondition test for the app development team, making sure they develop against acceptance criteria – in our example, a successful app install and login.

An additional example I can reference of an acceptance test that also involves a layer of login/registration would look like this:

(Slide: acceptance test example involving login/registration)

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I responded at the event last week: BDD, ATDD, and TDD are all methods to better sync the various counterparts involved in shipping a working product to the market faster, with higher quality, and with the right functionality to meet the customer requirements. Implementing them using the Gherkin method makes a lot of sense due to the easy alignment and common language these counterparts use during the SDLC workflow.

Happy Testing!

How to Efficiently Test Your Mobile App for Battery Drain?

With my experience in the mobile space over the past two decades, I rarely run across efficient mobile app testing that validates the app’s resource usage as part of the overall test strategy and test plan.

Teams often focus on the app’s usability, functionality, performance, and security, and as long as the app performs what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let’s have a look at one of 2016’s most popular native mobile apps: Pokemon Go. This mobile app alone requires constant GPS location services, keeps the screen fully lit when in the foreground, operates the camera, plays sounds, and renders 3D graphics content.

If we translate the above resource consumption to running this app on a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery.

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, so the device’s battery will drain even faster.

Recent research done by AVAST shows two sets of the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show two sets of apps: one of apps usually launched at device startup, and a second of apps mostly launched by the users.

(Charts: AVAST Q3 2016 battery-draining apps – launched at device startup vs. launched by users)

How to Test the App for Battery Drain?

Teams need to come as close as possible to their end users; this is a clear requirement in today’s market. This means that, from a battery-drain testing perspective, the test environment needs to mimic the real user from the perspective of the device, OS, network conditions (2G, 3G, WiFi, roaming), popular background apps installed and running on the device, and of course a varying set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across both models and manufacturers. Each battery will obviously have a different capacity from the others. Each device, after a while, will have degraded battery chemistry that impacts its performance, the duration it can last, and more. This is why a variety of new and legacy devices with different battery capacities needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it gets a different angle that ought to be leveraged by the teams.

  • Listen to the market and end users

Since the market constantly changes, the “known state” and quality of your app, including battery and other resource consumption, may change as well. This can happen due to the app performing differently on a new device that you have no experience with, or due to a new OS version released to the market by Google or Apple – we have seen plenty of examples of that, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on OS beta versions and measure the app’s battery consumption before the OS is released as GA – this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and get notified by end users about such issues (less preferred). Continuously running such tests on a refreshed device lab will reduce the risks and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risks.
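
One lightweight way to instrument such tests is to sample the device battery level before and after a scenario via adb. The Java sketch below is a minimal helper, assuming adb is on the PATH and a single connected device:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BatteryProbe {

    // Parses "level: NN" from the output of `adb shell dumpsys battery`
    static int batteryLevel() throws Exception {
        Process p = new ProcessBuilder("adb", "shell", "dumpsys", "battery").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("level:")) {
                    return Integer.parseInt(line.substring("level:".length()).trim());
                }
            }
        }
        throw new IllegalStateException("battery level not found in dumpsys output");
    }

    public static void main(String[] args) throws Exception {
        int before = batteryLevel();
        // ... run the functional scenario under test here ...
        int after = batteryLevel();
        System.out.println("Battery drained during the scenario: " + (before - after) + "%");
    }
}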


Summary

In today’s market, there is no good automated method for testing app battery drain; therefore, my recommendation is to maintain a plethora of devices in the lab under the varying conditions mentioned above, and to measure battery drain through native apps on the devices as well as through timer measurements. The tests should run first against the app on a clean device, and then on a real end-user device.

How to Adapt to the Increase in Mobile Testing Complexity Due to External App Context Features?

If you have followed my blogs, white papers, and webinars over the past years, you are already familiar with the most common known challenges around mobile app testing, such as:

  • Device/OS proliferation and market fragmentation
  • The ability to test the real end-user environment within and outside of the app
  • Testing both the visual aspects/UI and the native elements of the app
  • Keeping up with the agile release cadence while maintaining high app quality
  • Testing for a full digital experience across mobile, web, IoT, etc.


While the above are somewhat addressed by various tools, techniques, and guidelines, there is a growing trend on both the iOS and Android platforms that adds another layer of complexity for testers and developers. With iOS 10 and Android 7 as the latest OS releases, but also with earlier versions, we are starting to see more ways to engage with an app from outside of the app.


If we look at the recent change made in iOS 10 around iMessage, it is clear that Apple is trying to give mobile app developers better engagement with their end users even outside of the app itself. Heavy messaging users can remain in the same app/screen they are using and respond quickly to external app notifications in various ways.

This innovation is a clear continuation of the Force Touch (3D Touch) functionality introduced with iOS 9 and the iPhone 6S/6S Plus, which allows users to press the app icon without opening the full app and perform a quick action, like writing a new Facebook status, uploading an image to Facebook, or other app-related activities.

Add to the above capabilities the recent Android 7.1 App Shortcuts support, which allows users to create shortcuts on the device screen for app capabilities they commonly use. Another example you can refer to is the Android 7.0 split-window feature, which allows an app to consume 1/2 or 1/3 of the device screen while the remaining screen is allocated to a different app that might compete with yours for HW/system resources, etc.

So What Has Changed?

Quick answer – A lot 🙂

As I recently wrote in my blog around mobile test optimization, test planning for different mobile OS versions is becoming more and more complex, and it requires a solid methodology so teams can assign the right tests (manual/automated) to the right platforms based on the supported features of the app and the capabilities of the devices – testing app shortcuts (see the example below) is obviously irrelevant on Android 7.0 and below, so the test matrix/decision tree needs to accommodate this.

(Image: Android 7.1 App Shortcuts example)

To be able to test different app contexts, you need to make sure you have the following capabilities in place from a tooling perspective, and also include the following test scenarios in your test plan:

  1. Testing tools now must support both the app under test and the full device system, in order to engage with system popups, iMessage apps, the device screen for force-touch-based testing, etc.
  2. The test plan, in whatever tree or tool it is managed, ought to accommodate the variance between platforms and devices and allow relevant testing of apps -> features -> devices (see my referenced blog above for more insights)
  3. New test scenarios should be considered if your app leverages such capabilities:
    1. What happens when incoming events like calls or text messages occur while the app interacts within an iMessage/split-screen/shortcut context? Also, what happens when these apps receive other notifications (on the lock screen or within the unlocked device screen)?
    2. What happens to the app under degraded environment conditions, like loss of network connection or flight mode being on? Note that apps like iMessage rely on network availability (see the sketch after this list)
    3. If your app engages with a 3rd-party app, take into account that these apps are also exposed to defects that are not under your control – Facebook, iMessage, others. If they do not work well or crash, you need to simulate such a scenario early in your testing activities and understand the impact on your app and business
    4. Apps that work with iMessage, as an example, might require a different app submission process and might also be part of a separate binary build that needs to be tested properly – take this into account
    5. Since the above complexities all depend on the market and OS releases, make sure that any beta version that is released gets proper testing by your teams, to ensure no regressions occur
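
For the degraded-network scenario above (3.2), one hedged option on Android is to toggle connectivity from the test harness via adb around the steps under test; the svc commands below are standard, though their behavior varies by OS version and some devices require elevated permissions:

# Simulate loss of connectivity around a test step (Android, via adb)
adb shell svc wifi disable
adb shell svc data disable
# ... exercise the app under degraded conditions, assert graceful behavior ...
adb shell svc wifi enable
adb shell svc data enable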

I hope these insights can help you plan for a new trend that I see growing in the mobile space, one that IMO does add an extra layer of challenges to existing test plans.

Comments are always welcomed.

Happy Testing!

3 Ways to Make Mobile Manual Testing Less Painful

With 60% of the industry still functioning at 30% mobile test automation, it’s clear that manual testing is taking a major chunk of testing teams’ time. As we acknowledge the need for both manual and automated testing, and without drilling down into the caveats of manual testing, let’s understand how teams can reduce the time it takes, and even transition to a more automated approach to testing.

1. Manual and Automation Testing: Analyze Your Existing Test Suite

You and your testing team should be well-positioned to optimize your test suite in the following ways:
– Scope out irrelevant manual tests for specific test cycles (e.g., not all manual tests are required for each sanity or regression cycle)
– Eliminate tests that consistently pass and don’t add value
– Identify and consolidate duplicate tests
– Flag manual tests that are better suited for automation (e.g., data-driven tests or tests that rely on timely setup of environments)

The result should be a mixture of both manual and automation testing approaches, with the goal of shifting more of the testing toward automation.


2. Consider a Smooth Transition to “Easier” Test Frameworks

In most cases, the blocker to increasing test automation lies inside the QA team, and is often related to a skills gap. Today’s common mobile test automation tools are open source and require medium-to-high-level development skills in languages such as Java, C#, Python, and JavaScript. These skills are hard to find within traditional testing teams. On the other hand, if QA teams utilize alternatives such as Behavior Driven Development (BDD) solutions like Cucumber, they create an easier path toward automation by virtue of using a common language that is easy to get started with and to scale.
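
To illustrate why this lowers the barrier, here is a minimal sketch of how a plain-English Gherkin step can bind to Java code in a Cucumber-based framework; the step wording and page details are hypothetical examples of mine, not Quantum’s actual step library:

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import org.junit.Assert;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginStepDefs {
    private WebDriver driver;

    // Binds: Given I open browser to webpage "http://example.com"
    @Given("^I open browser to webpage \"([^\"]*)\"$")
    public void openBrowserTo(String url) {
        driver = new ChromeDriver();
        driver.get(url);
    }

    // Binds: Then the page title should contain "Welcome"
    @Then("^the page title should contain \"([^\"]*)\"$")
    public void verifyTitleContains(String expected) {
        Assert.assertTrue(driver.getTitle().contains(expected));
    }
}

A manual tester can then author new scenarios in plain Gherkin against these reusable steps, without touching the Java layer.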


3. Shift More Test Automation Inside the Dev Cycle

When thinking about your existing test automation code and the level of code coverage it provides, there may be a functional coverage overlap between the automated and the manual testing. If the automation scripts are shared across the SDLC and are executed post-commit on every build, this can shrink some of the manual validation work the testing team needs to do. Also, by joining forces with your development and test automation teams and having them help with test automation, you will decrease the workload and create shorter cycles, resulting in happier manual testers.

Bottom Line:

Most businesses will continue to have a mix of manual and automated testing. Manual testing will never go away, and in some cases it is even a product requirement. But as you optimize your overall testing strategy, investing in techniques like BDD can make things much easier for everyone involved with both manual and automated testing throughout the lifecycle.


4 Benefits of Using the Espresso Test Automation Tool

If you’re an Android developer, you’re probably familiar with Google’s Espresso test automation framework. As an open-source tool, it’s very easy for developers to use and extend within their working environment (Android Studio IDE).

But before discussing the benefits of Espresso, let’s understand the motivations and pains developers and test automation engineers face today while trying to validate their Android application (APK) throughout the build/dev/test workflow.

  • Each build needs to be validated after code changes are made.
  • Dependencies on remote servers and other workstations for testing slow down the process.
  • Unit and functional tests need to be easy to execute from both an IDE and continuous integration perspective.
  • Apps need to be tested using the latest Android OS APIs that support new platform features and OS versions.
  • Testing needs to occur on both emulators and real devices.

In light of these challenges, it’s clear why adoption of the Espresso automation framework is high. Even though Espresso is an instrumentation-based test framework, it has many benefits for both developers and test automation engineers. It uses JUnit under the hood, so Espresso is easy to use within leading IDEs, and it provides useful testing annotations and assertions. It’s also fully integrated into the leading Google Android IDE – Android Studio.

Here are four main benefits of using Espresso:

1. Espresso workflow is simple to use

The way Espresso works is by allowing developers to build a test suite as a stand-alone APK that can be installed on the target devices alongside the application under test and be executed very quickly.

2. Fast and reliable feedback to developers

As developers are trying to accelerate deployment, Espresso gives them fast feedback on their code changes so they can move on to the next feature or defect fix; having a robust and fast test framework plays a key role.

Espresso does not require any server (like Selenium Remote WebDriver) to communicate with; instead it runs side-by-side with the app and delivers very fast (minutes) test results to the developer.

3. Less mobile testing flakiness

Because Espresso offers a synchronized method of execution, the stability of the test cycle is very high. There is a built-in mechanism in Espresso that, prior to moving to the next step in the test, validates that the element or object is actually displayed on the screen. This prevents test executions from breaking on “object not detected” and similar errors.

4. Developing Espresso test automation isn’t hard

Developing Espresso test automation is quite easy. It is based on Java and JUnit, which are core skills for any Android app developer. Because Espresso works seamlessly within the Android Studio IDE, there is no setup or ramp-up and no “excuses” – quality can actually shift into the in-cycle stage of the app SDLC.
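
As a hedged illustration (the activity and view IDs below are hypothetical placeholders for your own app), a complete Espresso test can be as short as this, using the support-library APIs of that era:

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginEspressoTest {

    // Hypothetical activity under test
    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void validLoginShowsWelcome() {
        onView(withId(R.id.username)).perform(typeText("demo"), closeSoftKeyboard());
        onView(withId(R.id.password)).perform(typeText("secret"), closeSoftKeyboard());
        onView(withId(R.id.login_button)).perform(click());
        // Espresso waits for the UI thread to be idle before this check runs
        onView(withText("Welcome")).check(matches(isDisplayed()));
    }
}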

In addition to the above, there is of course the large community, powered by Google, that pushes the Espresso test automation framework forward and allows easy and fast ramp-up for newcomers.

Learn more using the Espresso Cheat sheet below:

(Image: Espresso cheat sheet)

Perfecto offers support for the Android Studio IDE, as well as the ability to install and launch an Espresso test suite (APK) on real devices in the cloud across various locations and user conditions. For more information, please refer to the Perfecto Community and search for “Android Studio” or “Espresso.”

Joe Colantonio’s Test Talk: Mobile Testing Coverage Optimization

How does a company nowadays put together a comprehensive test strategy for delivering high-quality experiences for its applications on any device? I think this is the question I get asked most frequently, and it is the biggest challenge in today’s market: how to tackle mobile testing and responsive web testing. The solution can be the difference between an app rated 1 star and an app rated 5 stars.


I had a lot of fun talking to Joe Colantonio from Test Talks about how to create a successful app, starting with my Digital Test Coverage Optimizer. Listen to the full talk to hear my ideas on moving from manual testing to automation, tracking the mobile market, the differences between testing on simulators and emulators versus real devices, and more.

https://joecolantonio.com/testtalks/110-mobile-testing-coverage-optimization-eran-kinsbruner/
