HTML5 Makes Mobile Testing More Encouraging For Gaming Apps

The need for mobile testing of gaming applications has long been apparent. With app stores for Android and iOS devices more or less overflowing with games of all different kinds, it stands to reason that a lot of developers are rushing the process. That is to say, there are a lot of cheaper or less refined games reaching the market in this category, and testing will reveal as much. As one article put it in an overarching write-up on the very concept of mobile testing, app store halls are now littered with thousands of one-star reviews that tell a tragic tale.

This is largely due to frantic developers hoping to be first to market with a given idea, or simply hoping to flood the market with multiple ideas, without worrying much about quality. However, it’s also because the coding languages that serve as the foundation of mobile games are varied and can be inconsistent. Certainly, a capable and creative developer can always find a way to design a beautiful and intuitive experience. But as technology improves and evolves, newer design methods will improve the overall quality of the mobile gaming market to a degree.

One thing to watch in this regard is HTML5. Hailed as a borderline revolutionary cross-platform design option for games, it has been somewhat slow to emerge in the mobile market in a big way. Casino games, in particular, are beginning to show its potential. As a popular gaming platform puts it, there are plenty of developers out there who are more than happy to indulge gamers’ craving for new experiences. In doing so, one method they’ve embraced is the use of HTML5 to put together slot games that exist at online casinos but can also be easily downloaded as high-quality apps. It’s a subtler, smoother alternative to adapting a game from scratch for mobile devices.

We’ve seen this same process occur with some one-off arcade games as well. Bejeweled, for instance, made such a seamless transition from browser to mobile devices because it was actually one of the earlier major HTML5 games. In fact, the same can be said of Angry Birds. Those are probably the two most high-profile examples, though there are other mobile games recognized for high quality and built on HTML5 as well.

This should gradually lead to a better selection of high-quality, high-performance mobile games, and while it’s difficult to make any kind of overarching statement about mobile game testing, one article in particular indicated that HTML5 has had an encouraging effect. Discussing a focus on mobile, the writers declared that they weren’t even bothering to test desktops because performance had been so consistently strong. They referred to this as a positive sign for the maturity of HTML5.

By extension, that means good things for the mobile gaming market.


Complementing Cross-Browser Testing with Headless Unit Testing Solutions

There is nothing new in the land of cross-browser testing: Selenium serves as the underlying API layer for leading frameworks including WebdriverIO, Protractor (Angular-based testing), NightWatchJS, RobotJS and many others.

For web application developers who require fast feedback after a code commit or bug resolution, there are various testing options. Some will quickly test manually on a set of local or cloud-based VMs, some will develop unit tests (QUnit etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated and easy way.

In a recent eBook that I developed, I cover 10 emerging cross-browser testing tools, with a set of considerations around how to choose the right one or the right mix of them.

As can be seen in the 10 tools shown above, the list is a mix of unit as well as E2E functional testing tools, mostly JavaScript-based.

Developers who would like their quick post-commit sanity checks to include a validation of the time it takes the site to load can easily add a PhantomJS-based test, like the sketch below, into their CI post-build acceptance testing, get that visibility after each successful build, match the result with a benchmark, and take decisions.
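As a reference, here is a minimal sketch of such a load-time check, modeled on the loadspeed.js example from the PhantomJS repository; the 3-second budget is an arbitrary benchmark that you would tune to your own site:

// loadtime.js – measure page load time and fail the build if it exceeds a budget
// Run with: phantomjs loadtime.js http://www.nfl.com 3000
var page = require('webpage').create(),
    system = require('system'),
    address = system.args[1],
    budgetMs = parseInt(system.args[2] || '3000', 10),
    start;

if (!address) {
    console.log('Usage: loadtime.js <URL> [budget-in-ms]');
    phantom.exit(1);
} else {
    // Log page errors as they occur, like the long error list seen on NFL.com below
    page.onError = function (msg) {
        console.log('PAGE ERROR: ' + msg);
    };
    start = Date.now();
    page.open(address, function (status) {
        var elapsed = Date.now() - start;
        if (status !== 'success') {
            console.log('FAIL: could not load ' + address);
            phantom.exit(1);
        } else {
            console.log('Load time: ' + elapsed + ' ms (budget: ' + budgetMs + ' ms)');
            // A non-zero exit code lets the CI post-build step mark the build as failed
            phantom.exit(elapsed > budgetMs ? 1 : 0);
        }
    });
}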

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load of 10 seconds, but I also identified a long list of errors thrown while the page loaded.

Another powerful capability tools like PhantomJS can offer is the ability both to capture a rendering of a web page at a pre-defined viewport, and to generate a page HAR file for network traffic analysis. (I am aware that it is not the newest tool, and that Google already provides a newer alternative, but this is still a valuable, free open-source tool that can add coverage capabilities to any web development team.)

So if, as an example, the load time and errors above turn on a red light regarding that site, then with 2 simple tests, which BTW PhantomJS provides in its starter kit on GitHub, the developer can address the above 2 use cases of HAR file generation and page rendering screenshots.
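Here is a minimal sketch of the page-rendering use case, along the lines of the rasterize.js example in the PhantomJS repository; the viewport size and output file name are arbitrary choices:

// screenshot.js – render a page at a pre-defined viewport and capture it
// Run with: phantomjs screenshot.js http://www.nfl.com nfl.png
var page = require('webpage').create(),
    system = require('system'),
    address = system.args[1],
    output = system.args[2] || 'screenshot.png';

// Pre-defined viewport; pick sizes that match your coverage matrix
page.viewportSize = { width: 1366, height: 768 };

page.open(address, function (status) {
    if (status !== 'success') {
        console.log('FAIL: could not load ' + address);
        phantom.exit(1);
    } else {
        page.render(output); // writes the screenshot file to disk
        phantom.exit(0);
    }
});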

The result of the above snippet is the screenshot below:

The HAR file creation is based on the HAR example from the same GitHub examples (netsniff.js), and results in the view below (I am using the HTTP Archive Viewer add-on for Chrome; it can be done just as simply with other HAR viewers):
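The sketch below is a condensed adaptation of that sample, collecting network traffic into a minimal HAR-like log; the full netsniff.js fills in all the mandatory fields that HAR viewers expect:

// har.js – condensed sketch of HAR generation (see netsniff.js for the complete version)
// Run with: phantomjs har.js http://www.nfl.com > nfl.har
var page = require('webpage').create(),
    system = require('system'),
    address = system.args[1],
    resources = {};

page.onResourceRequested = function (req) {
    resources[req.id] = { request: req, endReply: null };
};
page.onResourceReceived = function (res) {
    if (resources[res.id] && res.stage === 'end') {
        resources[res.id].endReply = res;
    }
};

page.open(address, function (status) {
    var entries = Object.keys(resources).map(function (id) {
        var r = resources[id];
        return {
            request: { method: r.request.method, url: r.request.url },
            response: r.endReply ? { status: r.endReply.status, bodySize: r.endReply.bodySize } : null
        };
    });
    console.log(JSON.stringify({ log: { version: '1.2', entries: entries } }, null, 2));
    phantom.exit(status === 'success' ? 0 : 1);
});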

Bottom Line

You can download my latest eBook and learn more, but in general: leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds its unique value. And the eBook is free!

Happy Testing!

Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations are continuing to develop features, as well as new test code to cover these features.

What I’ve learned is that test code developers do not always stop and look back into their existing test suites to validate whether the new tests being developed are somehow a superset of existing ones. In addition, legacy tests are a continuous load and overhead on the length of your SDLC cycles if they are not maintained over time.


Many Owners of the Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone’s. Tests are executed throughout the pipeline, from dev to integration and pre/post-production testing.

Use of a smart tagging mechanism for your test scenarios (e.g. login), suites (e.g. App A) and types (unit, regression) can be a good step towards gaining control over your tests, as in the sketch below.
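A sketch only, using Mocha-style tags embedded in the test titles (in Java, TestNG groups serve the same purpose); the suite and scenario names are illustrative:

// Tag-based test structure: tags live in the titles and can be filtered at run time
const assert = require('assert');

describe('@AppA @login', function () {
  it('@regression valid credentials land on the home screen', function () {
    // ...drive the app or site here; placeholder assertion
    assert.ok(true);
  });

  it('@smoke login page renders the username and password fields', function () {
    assert.ok(true);
  });
});

Running mocha --grep "@smoke" then executes only the smoke-tagged subset, and the same tags can later drive filtering in the execution reports.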

Without some context, discipline, and continuous structured validation of the tests, it will become harder as your SDLC progresses to debug, analyze and solve defects (it would be like finding the key in the visual mess below).

Find the Key in the Picture

Recommended Practices

  • Develop the tests with context, tags and proper annotations that will make sense to you and your team even 12 months from the development day. Make sure that your execution reports then give you a way to filter by these annotations, to view only a given functional area, platform etc.
  • Match your device-under-test capabilities to the test code and application under test. Make sure that you focus, e.g., your fingerprint-based tests only on the devices that support that feature (API XX and above); see the sketch after this list.
  • Perform a test code review at an agreed-upon interval – in such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas etc. It gets harder to do as time progresses, so depending on your release cadence and test development maturity, set the right goals – more reviews are better than fewer, and each review will be shorter and more efficient that way, since the delta between reviews will be smaller.
  • Drive joint Dev, Test, Product, Marketing decisions based on data – when you have the ability to get quality analysis from your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are there platforms that are buggier than others, which tests take longer than others to finish etc.
  • Optimize your CI and build-acceptance testing – based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights on your tests, you can decide and certify the most valuable and fastest tests to get into this CI testing, and thereby shrink the overall process without risking coverage.
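As referenced in the second bullet, here is a minimal sketch of capability matching, assuming the runner exports the device’s feature list (e.g. the output of adb shell pm list features) into a DEVICE_FEATURES environment variable before the suite starts:

// Skip fingerprint tests on devices that do not support the feature
const assert = require('assert');
const features = (process.env.DEVICE_FEATURES || '').split(',');

describe('@login @fingerprint', function () {
  beforeEach(function () {
    if (!features.includes('android.hardware.fingerprint')) {
      this.skip(); // recorded as skipped, not failed, in the execution report
    }
  });

  it('@regression unlocks the app with a fingerprint', function () {
    assert.ok(true); // ...the fingerprint flow would be driven here
  });
});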


Bottom Line

A test is code, and just as you refactor, maintain, retire and improve your code, you should do the same to your tests. Make sure to always be in control of your tests, and by that, gain control over the quality of your app in a continuous manner.

Happy Testing!

Trends in Cross Browser Testing and Web Development

Typically, I’ll write a lot on mobile app testing, tools, trends, coverage and such.

In this blog, I actually wanted to share some up-to-date trends as I see them in the web landscape.

The web market has shifted a lot over the past years, alongside the mobile space. We see clear use of specific development languages, development frameworks and of course specific test frameworks aimed at testing Angular, jQuery, Bootstrap, .NET and other websites.

From a dev language perspective, web front-end developers mostly use the following languages as part of their job:

Source: http://vintaytime.com/premium/top-programming-languages/

As a clear trend in web development, it shows that JavaScript is the leading language used by web developers. That’s actually not a huge surprise, since if you move on to the top frameworks used by these web developers, you will see quite a few that are based on JavaScript.

There is a trend seen recently of developers shifting to non-AngularJS web development frameworks like Aurelia, React, and Vue.js, which are seeing growing usage and adoption due to considerations such as the following (a larger list of pros and cons is in source 1 below). With this trend in mind, and as you’ll read in my references below, the new solutions are still not as complete as AngularJS is.

  • Shorter learning curve
  • Simple to use, clean
  • Flexibility
  • Lightweight compared to others (e.g. less than half the size of AngularJS)
  • Better performing
  • Easy to integrate with other front-end stack tools
  • Responsive server-side rendering (Vue.JS supports it, reduces time for users to see rendered content)
  • SEO Friendly
  • Good documentation and Community Support
  • Good debugging capabilities

Source 1: https://www.slant.co/topics/4306/~angular-js-alternatives

Source 2: https://w3techs.com/technologies/details/js-angularjs/all/all

Now that we have seen the leading web development languages and frameworks used these days, let’s drill down into what test automation engineers are adopting.

Selenium is without a doubt the leader and the base for most frameworks; however, even in this space we see new and innovative test frameworks such as CasperJS, TestCafe, Buster.JS and Nightwatch.js, together with the traditional WebdriverIO and of course Protractor.

If we examine the visual below (source: NPM Trends), there is clear market dominance by Selenium and by Protractor, which underneath its implementation uses Selenium WebDriver and supports the Jasmine and Mocha tools.

The advantage of tools like Protractor is that they much more easily support websites developed in various frameworks like AngularJS, Vue.js etc. Such an advantage allows test automation engineers to use them agnostically across multiple websites, regardless of the frameworks they are built with.

It is not all as easy and rosy as I described above, but it does give a good head start when starting to build the test automation foundation.

There are a few other players in that space that are aimed at specific unit testing and headless browser testing (PhantomJS, CasperJS, jsdom etc.).

As I blogged in the past, from a test automation strategy perspective, teams might find it beneficial and more complete to leverage a set of test frameworks rather than using only one. If the aim is to have non-UI headless browser testing together with unit testing and also UI-based testing, then a combination of tools like Protractor, CasperJS and QUnit might be a valid approach.

I hope you find this post useful, and can “swim” in the hectic tools landscape. As always, it is important to match the tool to the product requirements, development methodology (BDD, Agile, Waterfall etc.), supported languages and more.

Optimizing Android Test Automation Development

Now that we are a few weeks away from Google I/O, and we understand that the complex Android landscape is becoming even more complex, let’s explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

  • Application under test
  • Test code itself
  • Device/OS under test

I related back to my old patent that I jointly submitted years ago, in the days of J2ME, and also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap is growing from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization, nor can they support App Shortcuts (see the example below from the Google Photos app). These diffs introduce a challenge to dev and test teams that innovate and take advantage of these features, since the test code that runs against these features needs to be targeted only at devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don’t have a bulletproof, magic pill to address all challenges that may occur as a result of the above problem, I can surely recommend an approach as described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

  • Map the different features that your app supports or requires the users to grant permissions for
  • Examine your device test lab and filter the devices that do and do not support these specific features

To manage the above, teams can leverage the following:

  • Use an existing ADB command that extracts the supported features from the connected device(s):
    • adb shell
      • pm list features

After running the above command, you will get an output that looks like the below …

Compare The Outputs

Once you know your DUT’s capabilities, as well as your app’s features to be tested, you can run a simple output comparison and see what can and can’t be tested (a small sketch of such a comparison follows below). From that point, the optimization should be mostly manual – you will set up your test execution and CI in the lab accordingly. While it isn’t the simplest process, it still offers a sustainable approach, plus awareness for both dev and test teams, that can be useful throughout development, debugging and testing activities. In the visual below you can see a capabilities diff between a Samsung Note 5 running Android 7.0 (left column) and an older Samsung device running Android 5.x (right column). An immediate diff, out of a larger list that I have, shows the fingerprint functionality that is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning the feature testing across these 2 devices (this is just one example).
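Here is a small Node.js sketch of such a comparison, assuming two devices are connected over ADB and identified by their serial numbers:

// compare-features.js – diff "pm list features" output between two connected devices
// Usage: node compare-features.js <serial-1> <serial-2>
const { execSync } = require('child_process');

function features(serial) {
  const out = execSync(`adb -s ${serial} shell pm list features`).toString();
  return new Set(
    out.split('\n')
       .filter(line => line.startsWith('feature:'))
       .map(line => line.replace('feature:', '').trim())
  );
}

const [a, b] = process.argv.slice(2);
const fa = features(a), fb = features(b);

console.log(`Only on ${a}:`);
[...fa].filter(f => !fb.has(f)).forEach(f => console.log('  ' + f));
console.log(`Only on ${b}:`);
[...fb].filter(f => !fa.has(f)).forEach(f => console.log('  ' + f));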

Bottom Line

As Google continues to innovate and add more features, the existing devices and test frameworks will find it hard to close the gaps – a challenge that teams need to be aware of, plan for, and optimize around so that their release vehicles and velocity remain solid.

Happy Optimization!

How the “Digital Quality Handbook” Was Born

Travel back with me… to late September 2016. It’s the Jewish New Year, and I am in Boston, MA. As I celebrate the passing of another great year, I think to myself, “After being in the software quality space for nearly 20 years, isn’t it about time that I reach out to the community of thought leaders and influencers and create an asset that can fill a gap in the market that we can give back to the world?” A book. A practical book. A “how-to” for DevOps practitioners, designed to make them better, faster, and more… perfect(o).

You see, when it comes to assuring the quality of web, mobile and IoT apps, the market is still struggling with key questions around test coverage, automation best practices, optimization of test automation suites, accomplishing more tests within the software build pipeline, the practice of shifting left, and much, much more.

So, while my wife and children continued celebrating in the next room, I immediately (right then and there) started writing the intro to a book that would, eventually, bring together actionable ideas and practices from many of the world’s most recognized experts, thought leaders and influencers in the area of software quality.

To make it easier to both develop and consume the content, the book is set out in four logical sections:

  1. Introduction to continuous quality and the digital space
  2. Advanced test automation practices
  3. Achieving DevOps maturity in the digital era
  4. Expanding quality coverage with UX and non-functional testing

If you’re reading this article close to its post date, I’m currently down in Orlando, participating in a book signing at the StarEast testing conference. Danny McKeown from Paychex, one of the technical reviewers of the book, is with me, both participating in the signing and speaking at the event.

To name the market leaders who took part and contributed to this book:

  1. Microsoft (Donovan Brown)
  2. Applitools (Adam Carmi)
  3. TestFairy (Yair Bar-On)
  4. Applause (Doron Reuveni)
  5. CA & BlazeMeter (Jonathon Wright, Noga Cohen, Jacob Sharir)
  6. InfoStretch (Manish Mathuria)
  7. Rabobank (Wim Selles)
  8. Utopia Solutions (Lee Barnes)
  9. Angie Jones
  10. Jean Ann Harrison
  11. Lior Kinsbruner

And from Perfecto:

  1. Amir Rozenberg
  2. Roy Nuriel
  3. Paul Bruce
  4. Chris Willis
  5. Uzi Eilon
  6. Yoram Mizrachi
  7. Roi Carmel

Without this crew of contributors, the book wouldn’t be what it is today. Some of the contributed content includes:

  • The best way to include visual analysis testing as part of your test code, using any available open-source framework
  • How to develop API tests that complement your mobile UI test automation
  • How to include non-functional performance testing and UX as part of your overall test strategy
  • How to extend open-source tools like Protractor to better test your hybrid app
  • The bible of UX testing
  • What a valid and highly ranked XPath should look like (with a link to a free online tool that provides that rank for you)
  • How to include chatbot testing in your existing mobile testing plans
  • Where crowdsourced testing and beta testing fit in the overall SDLC strategy

Fun fact: We launched the book on Amazon on March 3rd. On March 5th, at approximately 2:51pm Eastern Time, the book was added to the Hot New Releases in the Software Testing sidebar, and made it to the #1 Bestseller slot in that same category. We took a screenshot. It really happened!

To get your own copy of the book, please refer to this URL – and if you find it valuable, feel free to share your feedback with me.

Happy Reading!

Recent Web Browser Quality Related Innovations

Yea, I know that my blog title is mobiletestingblog, but that’s not a mistake in the title 🙂

When it comes to web apps, there is no longer any distinction around which platform is used to consume content today, whether it’s a smartphone, a tablet or a desktop browser.

If your company is developing a web app or responsive website, these sites ought to be tested thoroughly against all of the above platforms. The majority of web traffic today, BTW, comes from mobile devices.

In general, it is good to know that, from a desktop browser market-share perspective, less familiar players such as UC Browser by Alibaba and the Samsung Internet browser hold a nice chunk of the market (globally) – so avoiding them as part of your test coverage matrix might not be a good strategy.

Source: http://gs.statcounter.com/browser-market-share

In general, the formula below is the one I would recommend for web testing these days; however, if web traffic analysis and supported geographies give you a requirement to target China, Europe, and others, then the above market-share metrics should be added to the mix, either in addition to the below or as an adjustment.

With that in mind, I wanted to highlight in this post some recent web-specific tools that are out there, free, and can be extremely useful for both developers and testers.

In Google Chrome 59 (the Beta is already available today!), Google is introducing a new built-in code coverage tool that allows both developers and testers to record on-screen activities and report back, in a nice dashboard, how much of the site content (JavaScript and more) was actually executed, with the aim of optimizing website quality, performance and much more.

From a user perspective, you only need to enable the Code Coverage option from within Chrome’s developer tools, so that it is added under the Sources menu, as seen below.

Once that is done, simply start capturing code coverage by clicking the Record button to get an output like the one below – simple, valuable, and unfortunately only available as a free, built-in solution within this browser, compared to Firefox/Safari and others 😦

I went and used this new tool on the Geico.com responsive site and nearly completed the most common transaction of quoting a new car insurance policy. At the end of the recording, I received the chart below which, as you’ll see, shows usage of only ~60% of the site’s JavaScript code in this journey.

When drilling down deeper into a specific .js source file, you can see the source highlighted in green/red where it is actually used and unused – this is what your web developers need to see and optimize wisely.

Let’s look at a key feature that was recently introduced in Firefox as well, and can be useful for both dev and testers.

Two weeks ago, Mozilla released Firefox 53, their first step in a new project called Quantum that aims to enhance performance, stability and more.

Among the innovations in that release are compact themes, usability features like reading time for a page, a new permissions model (see below), faster performance and a few other bug fixes for stability.


Detailed release notes on FF 53 can be found here: https://www.mozilla.org/en-US/firefox/53.0/releasenotes/.

In addition to the newly introduced features, and in case you’re not aware of them, Firefox offers quite useful developer tools, including an object inspector, performance monitor, debugger and network monitor, that can also enhance your overall web dev and test activities (see the examples below).

Performance Monitoring Tools From Within FireFox Developer Tools

Network Monitoring Options From Within FireFox Developer Tools

Bottom Line

With Chrome and Firefox being the leading desktop and mobile browsers, it is very important for web teams to continuously monitor the early releases from Google and Mozilla, and as soon as the first Beta or Dev branches are available to validate – do it. This can not only reveal regressions earlier, but might also, as mentioned in this blog, offer you some new productivity tools that can add value to your overall dev and test activities.

Android Privacy Policy May Break Your Test Automation Scripts

Last month, Google announced its plans to purge Play Store apps that do not include a privacy policy covering the security permissions the app requires upon installation.

Behind that requirement, Google is trying to provide its users maximum transparency about what the app requires and what data it collects when users consume the app.

An example of a native app that has already implemented that requirement is the State Farm insurance app; see below.

So, with that simple request to mobile Android app developers, there are a few quality implications.

Immediate Implications and Requirements

  1. Revise and continuously maintain your test code
    1. The above screen obviously was not planned for in the latest test automation cycle, which means that a new cycle will get stuck and fail, since this is a new screen with a request for a user action. Teams ought to develop new test steps that, upon initial app installation, test the following: when the user clicks Accept, the app launches successfully, while when the user clicks Decline, the app closes (see the sketch after this list).
    2. Coverage matrix implications: existing test suites should cover the above new scenarios on the supported platforms – device/OS combinations.
  2. Varying permissions across platforms
    1. Most apps will require unique permissions that are (hopefully!) actually used and required for the app to function (see the visual below from iUbenda).
    2. Different OS versions of iOS and Android might behave differently and support different security features, as in Android 6.0 and above (Doze, permission groups etc.).
  3. Compatibility of device/OS features and permissions
    1. What happens in the above regard once an app uses even a normal-group permission, e.g. USE_FINGERPRINT? Since this permission resides in the normal group, it will be granted automatically; however, what if the DUT (device under test) does not support this feature? How do teams differentiate, in an automated way, the test execution with regard to the device capability? Matching the device features to the test case as part of a dynamic test execution can be a powerful agile capability, especially in the growing, fragmented mobile market.
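To illustrate the first implication above, here is a minimal sketch of such a first-launch test step, written with WebdriverIO and Appium; the locators, application id and reset strategy are hypothetical placeholders for your app’s real ones:

// First-launch privacy dialog handling (WebdriverIO + Appium sketch).
// Assumes a fresh app install before each test (e.g. Appium's fullReset capability)
// and hypothetical accessibility ids for the dialog buttons.
const assert = require('assert');

describe('privacy policy dialog on first launch', function () {
  it('launches the app when the user clicks Accept', async function () {
    await $('~accept_button').click();
    assert.ok(await $('~home_screen').isDisplayed());
  });

  it('closes the app when the user clicks Decline', async function () {
    await $('~decline_button').click();
    // queryAppState: 4 = running in foreground; anything else means the app closed
    const state = await driver.queryAppState('com.example.app');
    assert.notStrictEqual(state, 4);
  });
});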


As seen above, from a testing perspective, Android apps that use “dangerous” permissions require dev/test teams to develop and validate varying use cases or device/OS behavior test cases when tested on Android 6.0 and above compared to Android 5.1 and below (i.e. Android API level < 23).


Introducing Reporting Test Driven Development (RTDD)

In the era of “[.. ] Driven Development” trends like BDD, TDD, and ATDD, it is also important to realize the end goal of testing, and that is the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience I constantly hear the following pains:

  1. Test executions are not contextually broken down, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests are finding more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. A lack of on-demand quality dashboards that reflect the app quality per CI job, per app build, per tested functional area etc.


Introducing Reporting Test Driven Development (RTDD)

As an aim to address the above pains, which I’m sure are not the only related ones, I came to an understanding: if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value at the end of each test cycle, as well as earlier, during the test planning phase.

When teams can leverage a test design pattern that assigns their tests custom contextual tags, wrapping an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search” and so forth – suddenly the test suites are better structured, more easily maintained, and can be included/excluded and filtered through at the end of an execution.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve an on-demand quality dashboard and be up to date with any given software iteration.

Finally, developers who get the defect reports after execution can filter and drill down into the root cause in an easier and more efficient manner.

If you think about the above, the use of annotations as a method to manage test execution and filter them is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.

Reverse-engineering a large test suite is painful, hard to justify and most often too late, since by then the product is already out there and the teams are left to struggle with the 4 consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we examine the following table, which divides various tags into 3 levels, it can serve as a reference that can be used immediately, either through the built-in tagging and annotations coming from TestNG or through other reporting solutions.

With the above table in mind, think about an existing test suite that you recently developed. Now, think about the exact same test suite, tag-based according to the above 3 categories:

  1. Execution level tags
    1. This tag can encapsulate the entire build’s or CI-job-related testing activities, or it can differentiate the tests by the test framework in which you developed the scripts. That’s the highest classification level of tags that you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory down into more specific identifiers, like your mobile environment, the high-level functionality under test etc.
  3. Logical test level tags
    1. These are the most granular test tag identifiers, which you would want to define per each of your logical test steps to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
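As a sketch only (the names are illustrative; in Java the same result comes from TestNG groups and listeners), the 3 levels could map onto a nested suite like this:

// Illustrative 3-level tagging in a Mocha-style suite
describe('@Build-1024 @Appium', function () {          // execution level: CI job/build, framework
  describe('@AndroidReal @Login', function () {        // test suite level: environment, functional area
    it('@regression @P1 logs in with valid credentials', function () {
      // ...logical test step body...
    });
  });
});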

As a reference implementation for an RTDD solution, in addition to the basic TestNG implementation (which can be very powerful if used correctly, with its listeners, pre-defined tags and more), I would like to refer you to an open-source reporting SDK that enables you to do exactly what is mentioned in the above post.

When using such an SDK with your mobile or responsive web test suites, you achieve both the dashboards seen below and fast defect resolution that drills down by both test case and platform under test.

Code Sample: Using the Geico RWD Site with the Reporting TDD SDK [Source: My Personal GIT]


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI/unit and other CI-related tests to extend a legacy test report, a TestNG report or another format into a more customizable test report that, as I’ve demonstrated above, can allow them to achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
  • Shift tag-based tests into planned test activities (CI, regression, specific functional area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result or through groups
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile-First Indexing”. From Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly test tool and guidelines available. A very nice interactive version is available here (with images at the bottom of this post), and there is also an API (which, thanks to Google, users can exercise first before they code). Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel, there’s no point in sending the same URL from every execution, because the result would be the same
  • Google basically returns “MOBILE_FRIENDLY” or not. I suggest setting the assert on that
  • The current API differs from the UI in that it only provides the mobile-friendliness result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages, as Google probably doesn’t have a site-to-site secure connection into your network.
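Before looking at the Quantum integration, here is a minimal Node.js sketch of calling the API directly; the endpoint and response field follow Google’s URL Testing Tools documentation at the time of writing, and YOUR_API_KEY is a placeholder:

// mobile-friendly.js – call the Mobile-Friendly Test API directly
const https = require('https');

const body = JSON.stringify({ url: 'http://www.nfl.com' });
const req = https.request(
  'https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=YOUR_API_KEY',
  { method: 'POST', headers: { 'Content-Type': 'application/json' } },
  function (res) {
    var data = '';
    res.on('data', function (chunk) { data += chunk; });
    res.on('end', function () {
      var result = JSON.parse(data);
      // As noted above, assert on the mobile-friendliness verdict
      console.log(result.mobileFriendliness === 'MOBILE_FRIENDLY' ? 'PASS' : 'FAIL');
    });
  }
);
req.write(body);
req.end();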


For developers and testers who do not have time, testing mobile-friendliness repeatedly will probably simply not happen. That’s why I integrated the Google Mobile-Friendly API into Quantum:

  • Added 2 Gherkin commands:
// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
// If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"


That’s it.


Ideas for future improvement:

  • You can automate the validation such that every click triggers a check with Google behind the scenes.

Just for fun, some more screenshots of the detailed analysis for NFL.com:


[Screenshots: detailed Mobile-Friendly analysis results for NFL.com]