Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at Joe Colantonio's Automation Guild online conference. Kudos to Joe for a great event!


Among the many interesting questions I received after my session (what is the best test coverage for mobile projects? how do you design effective non-functional and performance testing for mobile and RWD?), I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior-Driven Development), and this naturally triggered the question.

Definition: BDD and ATDD

ATDD: Acceptance Test-Driven Development

Based on Wikipedia's definition, ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer’s needs prior to implementation and allow customers to be able to converse in their own domain language.

ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

BDD: Behavior-Driven Development

Again, based on Wikipedia's definition, BDD is a software development process that emerged from test-driven development (TDD). It combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to give software development and management teams shared tools and a shared process for collaborating on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at these agile practices (BDD, ATDD, TDD) is in the context of ever-increasing velocity and quality requirements.

Organizations are aiming to release to market faster, with great quality and sufficient test coverage, while at the same time, of course, meeting business goals and keeping customers satisfied. To achieve these goals, teams ought to collaborate closely from the very beginning of the app design and development stages.

Once organizations have the customer's product requirements and can start developing both the product and the tests through user stories, acceptance criteria, and the like, several goals can be met:

  • High customer-vendor alignment == customer satisfaction
  • Faster time to market, since the app is tested throughout the SDLC
  • Quality that is in sync with customer needs, with far fewer redundant tests
  • No communication gaps or barriers between Dev, Test, Marketing, and Management

 

Looking at the example below of BDD-based test automation code, it is very easy to understand the functionality under test, the use cases, and the desired test outcome.

[Screenshot: Quantum BDD scenario code]

As can be seen in the screenshot above, the script installs and launches the TestApp.apk file on an available Samsung device, performs a successful login, and taps a menu item. As a final step, it performs a mobile visual validation to confirm that the test passed and, as an automation anchor, that the test code reached the expected screen.
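For readers without the screenshot handy, a Gherkin scenario along those lines might look like this (the step wording is illustrative, not the exact Quantum step definitions):

```gherkin
Feature: TestApp login

  Scenario: Install, launch, and log into TestApp on a Samsung device
    Given I install the application "TestApp.apk" on an available "Samsung" device
    And I launch the application
    When I log in with a valid username and password
    And I tap the menu item "Accounts"
    Then the "Accounts" screen should be displayed
    And the screen should match the stored visual baseline
```

The Then steps carry both the functional assertion and the visual checkpoint described above.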

It is important to mention that the frameworks and tools that support TDD, ATDD, and BDD are in many cases similar; in our case above, one can still develop and test from either a BDD or an ATDD standpoint by using a Cucumber test automation framework (Cucumber, Quantum).

If we compare the above functional use case (a "Scenario" in Cucumber terms) to a scenario that would fit an ATDD-based approach, we would most likely need to introduce the well-known "3 amigos" approach: the three perspectives of customer (what problem are we trying to solve?), development (how might we solve this problem?), and testing (what about...).

 

Since a true ATDD best practice determines Gherkin-like app scenarios before development even starts, the above BDD example would be a precondition test for the app development team, making sure they develop against acceptance criteria, which in our example are a successful app install and login.

An additional example of an acceptance test that also involves a login/register layer would look like this:

[Slide: acceptance test example, from "Effective Testing Practices in an Agile Environment"]
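In the same spirit, a hypothetical acceptance test for a registration flow, written as Gherkin acceptance criteria agreed before development starts, could read:

```gherkin
Feature: Account registration

  # Acceptance criteria agreed by the "3 amigos" before coding begins
  Scenario: New user registers and can log in
    Given the registration screen is open
    When the user registers with a unique email and a valid password
    Then a confirmation message is displayed
    And the user can log in with the new credentials
```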

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I responded at the event last week: BDD, ATDD, and TDD are all methods to better sync the various parties involved in shipping a working product to market faster, with higher quality, and with the right functionality to meet customer requirements. Implementing them using the Gherkin method makes a lot of sense because of the easy alignment and common language it gives these parties throughout the SDLC workflow.

Happy Testing!

What You Need To Know When Planning Your Test Lab in 2017

As we kick off 2017, I am thrilled to release the newest, 6th edition of the Digital Test Coverage Index report, a guide to help you decide how to build your test lab. 2016 was an exciting year in the digital space, and as usual, Q4 market movement is sure to impact 2017 development and testing plans. And the market doesn't appear to be slowing down, with continued innovation expected this year. In this post, I will summarize the key insights we saw last quarter, as well as a few important things projected for 2017 that should be applied when building your test lab.


Key Takeaways

  • Beta OS versions remain an important aspect of your test coverage strategy. With Apple releasing 5 different minor versions of iOS 10 since its release in September 2016, the iPhone/iOS 10 beta is a "must-include in your test lab" device/OS combination. On the browser side, Chrome and Firefox beta versions are also critical test targets for sustaining the quality of your mobile web/responsive websites.
  • The Android fragmentation trend is changing, with Google putting pressure on device manufacturers to keep pace with the latest OS versions. As evidence, Android 6.x already has the greatest market share as of Q4 2016, with roughly 27%, followed by Android Lollipop. With Google releasing its first Pixel devices, the market is already seeing a boost in Android 7 Nougat adoption, which is expected to grow to 2%-5% market share within Q1 2017.
  • Galaxy S7 and S7 Edge were a turning point for Samsung: Over the last year, Samsung has seen a revenue slowdown due, in part, to competition from both Apple and emerging Android manufacturers such as OnePlus, Xiaomi, and Huawei. With the launch of the Samsung S7 and S7 Edge, the company is regaining its position. We can see in this edition of the Index (and the previous one) that Samsung is the leading brand in many countries, which should impact test coverage plans in Brazil, India, the Netherlands, the UK, Germany, and the U.S.
  • Mobile app engagement methods are evolving, with various enterprises counting on the mobile platform to drive more revenue and attract more users. We are seeing greater adoption of external application integration, either through dedicated OS-level applications like iOS iMessage or through other solutions like the Google app shortcuts recently introduced as part of Android 7.1. These changes represent a challenge from a testing perspective, since there are now additional outside-of-app dependencies that the Dev and QA teams need to manage.
  • Test lab size is expected to grow slightly YoY as the market matures: Looking at the annual growth projection below, we see slight growth in the need for 10-, 25- and 32-device labs, based on new devices being introduced into the market faster than old devices are retired. What we see is an annual introduction of around 15 leading devices per year, with an average retirement of 5-7 per year (due to decreased usage, terminated vendor support, etc.). Integrating these numbers into the 30%-80% model yields the annual growth demonstrated in the following graph.

[Graph: projected annual test-lab growth]

 

2017 Trends

As this is the first Index for 2017, here are the most important market events that will impact both Dev and QA teams in the digital space, in the categories of Mobile, Web or both.

New Players

The most significant player to join the mobile space in 2017 is Nokia. After struggling for many years to remain a relevant vendor, and failing under the Windows Phone brand, Nokia is now back in the game with a new series of Android-based devices expected to be introduced during MWC 2017. A second player set to penetrate the mobile market is Microsoft, which is expected to introduce the first Microsoft Surface Phone during H1 2017.

Innovative Technologies

During 2017 we will certainly continue to see more IoT devices, smartwatches, and additional features coming from both Google and Apple in the mobile, automotive, and smart home markets. In addition, we might see the first foldable touch smartphone released to the market by Samsung under the name "Samsung X". We should also see a growing trend of external app interfaces in various forms, such as bots, iMessage apps, app shortcuts, and voice-based features. The market refers to these trends as a result of "app fatigue", which is causing organizations to innovate and change the way their end users interact with apps and consume data. From a testing perspective, this is obviously a change from existing methods and will require re-thinking and new development of test cases. I addressed this in a recent blog; feel free to read more about it here.

Key Device Launches to Consider for an Updated Test Lab

Most of the below can be seen in the market calendar for 2017, but the highlights are listed here as well:

  • Samsung S8/S8 Edge flagship devices are due from Samsung by February 2017 and should be the successors of the highly successful S7/S7 Edge devices.
  • iPhone 8/iPhone 8 Plus, together with the iOS 11 launch in mid-September 2017, will mark the 10th anniversary of the Apple iPhone series. This launch is expected to be a groundbreaking one for iOS users.
  • Huawei Mate 9/Mate 9 Pro, and the Huawei smartphone portfolio in general, is continuing its global growth. 2017 should continue the growth trend in China and India, but also, as seen in this Index report, in many European countries where devices like the Huawei P8, P9, and others are already in use.

From a web perspective, we are not going to see any major surprises from the leading browsers like Chrome, Firefox, and Safari. For the Microsoft Edge browser, however, we expect a significant market share uptick as more and more users adopt Windows 10 and abandon legacy Windows OS machines.

[Calendar: 2017 mobile and web market calendar]

 

In the Index report, you may find all the information necessary to better plan for 2017, as well as market calendars for both mobile and the web, plus a rich collection of insights and takeaways. DOWNLOAD HERE.

Happy Testing in 2017!

My 2017 Continuous Quality Predictions

A guest post by Amir Rozenberg, Sr. Director of Product Management at Perfecto Mobile, and Yoram (Perfecto CTO)
========================================================================
As 2016 winds down and we look into 2017, I'd like to share a few thoughts on trends in delivering high-quality digital applications. This post is organized in two parts: it starts with a collection of observations of key market trends and drivers, followed by the continuous quality implications. While this article focuses on examples and quotes from the banking vertical, the discussion is certainly applicable more broadly.

2017 – Year of accelerated digital transformation with user experience in focus:

[Image: The 3 Rs of retail banking: regulate, revise, re-envisage]
Image courtesy: Banking Technology

 

 

    1. Increased formal digital engagement: Consumers want independence and access, 'self-serve' or 'direct banking' in the banking space, at a time and location of their preference. As A.T. Kearney reports, many transactions done today by the bank will instead be done by the customer. That is a big opportunity that many banks are capitalizing on via their online apps.
    2. Informal digital presence: Implementation of a multi-channel approach, inclusive of social networks, is proliferating as a complementary touch point with the customer. Activities include proactively scanning social networks for disgruntled customers and addressing their challenges individually, marketing and advertising new services, and streamlining services, for example by allowing users to log into their online bank account using their social network identity. One bank reports that a short-term marketing effort in those channels increased mobile app enrollments by 13%, doubled its social following, etc. (Source)
    3. Improved operating efficiency: Another strong driver for the digital transformation is introducing efficient processes and leveraging new channels to better manage expenditure. According to McKinsey, digital transformation can enable retail banks to increase revenues by upward of 30% and decrease expenditure by 20%-25%.
      In addition to slashing branches in favor of efficient online service (Ally Bank: "Instead of spending money on expensive branches, we pass the savings on to you"), DBS also reworked customer care flows and improved their efficiency.
  • User experience and efficiency: Functional and delightful experiences are top of mind for customers and vendors alike: "Our customers don't benchmark us against banks," said Hari Gopalkrishnan, Bank of America's CIO of client-facing platform technology, in an interview with InformationWeek. "They benchmark us against Uber and Amazon." On the application side, there is a strong emphasis on end-user efficiency as users try to accomplish the task at hand. At DBS, 250 million customer hours previously wasted each year were saved in 2016 by improving bank-side processes and enabling more online self-serve transactions by customers.
    Further, investments are being made in streamlining user flows. One example is replacing text entry with the onboard sensors: location via GPS, check and barcode scanning via the camera, or speech dictation via Siri, Google speech, etc. "Solutions that combine the ability to find, analyze and assemble data into formats that can be read in natural language will improve both the speed and the quality of business content delivery. Personal assistants such as Apple's Siri and Microsoft's Cortana — as well as IBM Watson, with its cognitive technology — provide richer and more interactive content." (From Gartner's report "Top Strategic Predictions for 2016 and Beyond: The Future Is a Digital Thing")

Challenges & Implications

Having looked at some of the trends, the implications of accelerated digital transformation and the focus on user experience are now compounded by competitive pressure and the need to deliver products to market faster. Many organizations are adopting agile methodologies, so from a continuous quality perspective, let's discuss some challenges and implications:
  • (Simplified) automation at scale: With an ever-growing matrix of test cases and shrinking test cycles, I believe (and hope) attention will be given in 2017 to designing and implementing automation at scale in organizations. There are many challenges, such as the skill set of testers/developers, cross-team collaboration, tooling, timing, and budgets. But everyone needs to agree that keeping over 20%-25% of testing manual, or spending too much time on test script maintenance, simply blocks coverage, quality, and eventually business success.
    • Always-on lab: A robust and stable lab is an absolute requirement to remove fragility, the biggest reason for test failure. Almost always this means a lab in the cloud: a device on a desk or a local lab will break the regression test at the critical moment.
    • Scripts: These need to be based on core functions that are robust, mature, and reusable. They should handle unplanned cases (popups), apply a retry mechanism, and verify a baseline for the environment (database in the right place, servers operational, WiFi on, no popups, etc.).
    • Switch to "always green" mode: If you need to review your results every morning and spend 1-2 hours on it, you're doing something wrong and you can't really scale your automation. Prefer green over coverage. A false negative is the worst disease of automation. Unless something really bad happens, your scripts should end with a green status, no excuses.
    • Test automation framework: This is the building block that will drive sustainability and scale. There are many frameworks out there, some are offered as open source, some by system integrators. Here are some thoughts on selecting your test framework:
      1. Skill set and tools match: Testers' skills vary. We typically see many manual testers supported by a core team of advanced coders. Those who code operate in Java, JavaScript, C#, Python, Ruby, etc. The foundation for automation at scale is a set of sustainable and reusable automation assets (so your time spent on maintenance is limited): a solid object repository, scripts, test orchestration, planning, etc. A good framework will allow multi-language support (in a way that suits your organization) and multiple skill levels: Java-like scripting for coders, BDD-like scripting (e.g., Cucumber) for those new to coding.
      2. Open source and modular: There are significant benefits to adopting technology with a wide community behind it. We recommend selecting a solution made of architectural components that are best in class for their function. Shameless plug: Perfecto and Infostretch came out with an open-source framework named Quantum. The objective is to provide a complete package where experienced coders as well as non-coders can write test scripts, based on smart XPath and a smart object repository, via Java and Cucumber. Test orchestration and reporting are also available via TestNG. The framework is made of best-of-breed open-source components; we welcome the community and our customers to try it out and give feedback.
    • Efficient, role-driven reporting: Considering automation at scale, it is mandatory to provide a strong reporting suite. The tester needs to quickly recognize trends in last night's regression test (hopefully made of thousands of executions) and drill down from the trends to a single test execution to analyze the root cause and compare it against yesterday's execution or another platform. By the same token, quality visibility ('build health') mandates transparency. (Another shameless plug:) Perfecto's new set of dashboards enables the application team as well as executives to understand build weaknesses and make an informed go/no-go decision.
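The "apply retry mechanism" advice above can be sketched as a small, framework-agnostic helper (illustrative code; a real suite would also recover between attempts, e.g. dismiss popups, and log each retry):

```java
import java.util.concurrent.Callable;

// Sketch of a retry wrapper for fragile test steps (e.g., a tap that a popup
// occasionally interrupts): retry up to maxAttempts before failing the test.
public class Retry {
    public static <T> T withRetry(Callable<T> step, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.call();
            } catch (Exception e) {
                last = e; // here a real framework would recover, e.g. dismiss a popup
            }
        }
        throw last; // only fail after exhausting all attempts
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // A step that fails twice and then succeeds, as a flaky UI action might.
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new IllegalStateException("flaky step");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

Wrapping only the fragile steps, rather than rerunning entire scripts, keeps the regression green without hiding real failures.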
 Next, on the challenges list, let’s discuss the client side:
  • Client-side increased capabilities... and vulnerabilities: The push to drive more functionality and streamline the user experience drives a larger coverage matrix. We're seeing thick-client applications strengthening and enriching the experience. As such, expanded test coverage and process changes are needed:
    1. Coverage: The proliferation of onboard sensors and connected devices (see below) will drive the need to expand test environments and capabilities to include them. In 2016 we saw increased use of image injection scenarios as well as Touch ID. I believe that in 2017 speech input will gain momentum, as will ongoing innovation around augmented reality (perhaps less in banking than in other verticals). All of these scenarios need to be covered.
      • Of particular interest is the IoT space: This is an area that has been growing rapidly over the last few years, across consumer products and medical and industrial applications. "The relationships between machines and people are becoming increasingly competitive, as smart machines acquire the capabilities to perform more and more daily activities." Gartner's IoT forecast estimates that by 2020 more than 35 billion things will be connected to the Internet. In banking particularly, IoT represents an interesting opportunity. For example, authenticating the customer in the branch with biometric sensing accessories will streamline experiences and increase security. Other examples include contactless payments and access to account functions from a wearable accessory. (Source)
    2. Accessibility: Since 2015, over 240 businesses have been sued over accessibility compliance, according to the WSJ. TechCrunch's advice is to plan, design, and implement accessibility in the app, and to work closely with counsel on the regulations. We too are seeing growing demand for accessibility-related coverage. This is certainly an area we're going to pay close attention to in the near future.
Lastly, process and maturity changes:
  • Process changes and (quality) maturity growth: As we work with our customers and the market matures, we are fortunate to observe several trends taking hold (some more slowly than others):
    1. CoE collaboration with the app team: As agile is implemented at many of our customers, we're witnessing first-hand the autonomy and independence of the application team. While the application team creates, builds, and tests code, it may still need a centralized perspective on the quality practices and tooling needed for success. Some of these teams are also curious about application usage and behavior in production (more below). Our recommendation to the various teams is to seek and drive collaboration: for example, establish a slim, robust, and stable acceptance test to build a common language between the tests that run within the cycle and those that run after it.
    2. DevOps: Teams are seeking efficiency and transparency in managing quality across the SDLC. One area is shifting testing earlier in the cycle, covered nicely by my colleague Eran. The second is using the same (testing) tools and approach in production ('testing in production'). This approach reduces delays in time to launch (no need to wait until production monitoring scripts are created) and enables visibility into the usage, behavior, and weaknesses of the app in production. I believe traditional production-dedicated APM tools will need to find ways to merge into the cycle to survive.
    3. New entrants in the developer/quality workflow: I believe new opportunities and startups will emerge in areas that simplify or automate testing and predict the impact of code changes in advance. Imagine proactive code-scanning tools, integrated with production insight, that tell developers the risk associated with the area of code they're about to touch, or automated test code/plan generators. This area has plenty of room for growth.

 

Advanced areas

  • Shifting even more testing left: In more mature teams we find that automation drives further test cases into the nightly regression test, because it provides high value (as opposed to finding bugs late) and because it is, frankly, possible. Introducing real user conditions into the cycle provides critical insight. Other candidates include small-scale multi-user tests (for code efficiency and multi-threading behavior), some level of security testing, accessibility tests, etc.
  • Transitioning testing to user journeys: Lastly, an advanced topic I'd like to mention is changing the perspective from a matrix of test cases X browsers/devices/OS versions X real user conditions into, if you will, diagonal user journeys across platforms. Take a typical journey for bank loan research: consumers are likely to begin on a large screen, where they research rates, terms, etc. They may summarize findings and make decisions using Excel (local/online). They may apply for a loan on their desktop browser or tablet, and then continue the interaction on their mobile device. One could then classify these 'diagonal' test journeys into different personas: the consumer, the loan officer, the customer care professional, etc., all of whom go through different journeys on different screens. Being able to provide a per-build quality score for a specific persona's journey would be very meaningful for business decisions. The point being: in the limited time available for quality activities, one could consider covering user journeys across specific screens rather than attempting the complete rows-and-columns matrix of test cases and screens.
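To make the persona-journey idea concrete, here is a toy sketch of scoring one persona's journey per build (the journey steps and the scoring rule are illustrative assumptions, not a product feature; a real implementation would pull pass/fail results from the reporting system):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch: compute a per-build quality score for one persona's cross-screen
// journey, as the percentage of journey steps whose tests passed.
public class JourneyScore {
    public static double score(Map<String, Boolean> stepResults) {
        if (stepResults.isEmpty()) return 0.0;
        long passed = stepResults.values().stream().filter(p -> p).count();
        return 100.0 * passed / stepResults.size();
    }

    public static void main(String[] args) {
        // The "consumer researching a loan" persona across several screens.
        Map<String, Boolean> loanSeeker = new LinkedHashMap<>();
        loanSeeker.put("research rates on desktop browser", true);
        loanSeeker.put("summarize findings in spreadsheet", true);
        loanSeeker.put("apply for loan on tablet", true);
        loanSeeker.put("continue application on phone", false);
        System.out.println("Loan-seeker journey score: " + score(loanSeeker)); // 75.0
    }
}
```

A go/no-go decision could then weigh the score of each persona's journey rather than raw pass counts across the whole matrix.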
To summarize, I see an exciting 2017 coming with lots of changes and innovation in delivering digital applications that work. Certainly looking forward to taking part!

How to Efficiently Test Your Mobile App for Battery Drain?

With my experience in the mobile space over the past two decades, I rarely come across mobile app testing that validates the app's resource usage as part of the overall test strategy and test plan.

Teams often focus on app usability, functionality, performance, and security, and as long as the app does what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let's have a look at one of 2016's most popular native mobile apps: Pokemon Go. This app alone requires constantly active GPS location services, keeps the screen fully lit while in the foreground, operates the camera, plays sounds, and renders 3D graphics content.

If we translate that resource consumption to a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery (a drain rate of roughly 37.5% per hour).

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, so the device's battery will drain even faster.

Recent research by Avast identifies the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show two sets of apps: one of apps usually launched at device startup, and a second of apps mostly launched by users.

[Chart: greediest Android apps launched at device startup]

[Chart: greediest Android apps launched by users]

How to Test the App for Battery Drain?

Teams need to get as close as possible to their end users; this is a clear requirement in today's market. From a battery drain testing perspective, this means the test environment needs to mimic the real user's device: the OS, network conditions (2G, 3G, WiFi, roaming), popular background apps installed and running on the device, and of course a varied set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across both models and manufacturers, and each battery obviously has a different capacity. After a while, every device's battery chemistry degrades, which impacts performance, how long a charge lasts, and more. This is why a variety of new and legacy devices with different battery capacities and conditions needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it takes on a different angle that teams ought to leverage.

  • Listen to the market and end users

Since the market constantly changes, the "known state" and quality of your app, including battery and other resource consumption, may change as well. This can happen because the app performs differently on a new device that you have no experience with, or because of a new OS version released to the market by Google or Apple; we have seen plenty of such examples, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on beta OS versions and measure the app's battery consumption before the OS reaches GA; this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and be notified by end users about such issues (less preferred). Continuously including such tests on a refreshed device lab will reduce risk and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risk.


Summary

In today's market, there is no good fully automated method to test app battery drain, so my recommendation is to maintain a plethora of devices in the lab under the varying conditions mentioned above, and to measure battery drain through the devices' native tools as well as timer measurements. The tests should first run against the app on a clean device and then on a real end-user device.
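On Android, one low-tech way to take those timer measurements is to sample the battery level reported by `adb shell dumpsys battery` before and after a test run and compute the drain rate. A sketch (it assumes `adb` is on the PATH with one device attached for live sampling; the parsing and arithmetic run anywhere):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch: measure battery drain by sampling the level reported by
// `adb shell dumpsys battery` before and after a test scenario.
public class BatteryDrain {
    // Parse the "level: NN" line out of dumpsys battery output.
    public static int parseLevel(String dumpsysOutput) {
        for (String line : dumpsysOutput.split("\n")) {
            line = line.trim();
            if (line.startsWith("level:")) {
                return Integer.parseInt(line.substring("level:".length()).trim());
            }
        }
        throw new IllegalArgumentException("no battery level found");
    }

    // Percentage points of battery consumed per hour.
    public static double drainPerHour(int startLevel, int endLevel, double elapsedMinutes) {
        return (startLevel - endLevel) * 60.0 / elapsedMinutes;
    }

    // Live sampling helper; requires adb and a connected device.
    static String dumpsysBattery() throws Exception {
        Process p = new ProcessBuilder("adb", "shell", "dumpsys", "battery").start();
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) sb.append(line).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Offline demonstration on a canned dumpsys snippet.
        String sample = "Current Battery Service state:\n  level: 87\n  scale: 100\n";
        System.out.println("Parsed level: " + parseLevel(sample)); // 87
        // The Pokemon Go figure above: 100% -> 0% in 160 minutes.
        System.out.println("Drain: " + drainPerHour(100, 0, 160) + "% per hour"); // 37.5% per hour
    }
}
```

Sample once with `dumpsysBattery()` before the scenario and once after, feed both readings plus the elapsed time into `drainPerHour`, and compare clean-device runs against real-user-profile runs.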

Shifting Mobile App Quality Into the Dev Build Cycles

There is no doubt that quality is becoming a joint feature-team responsibility. With that in mind, it is not enough for traditional QA engineers to develop and execute test automation after a successful build; the growing expectation is that the dev team also takes part and includes as many tests as it can in the build cycle for each code commit.

Tests can be unit, functional, UI, or even small-scale performance tests.

With that in mind, dev teams need a convenient environment that allows them to perform these quality-related activities so they can deliver better code faster!

Developers today are specifically challenged with the following:

  1. Solving issues that come from production or from their QA teams and require a specific device and/or environment that is usually not available to the dev team
  2. Validating newly developed apps or features across different environments and devices as part of the dev process
  3. A lack of shared assets for the entire dev team
  4. The lack of a "long USB cable" that enables full remote device capabilities and debugging

Perfecto has just made available, as part of its continuous quality lab in the cloud, a set of new tools and capabilities that address these requirements and enable dev teams to accomplish their goals.

Perfecto's DevTunnel solution for Android, part of the recent 9.4 release, is the first significant step toward helping developers run more tests as part of the build cycle.


With the above challenges and requirements in mind, Perfecto has developed a unique solution called DevTunnel, which gives developers enhanced remote access to mobile devices in the cloud so they can perform any operation they could have performed if the devices were connected locally: debugging, running unit tests, testing UI at scale from within the IDE, and more.


In addition, when it comes to Android dev activities, it's clear that Android Studio and IntelliJ IDEA are the leading IDEs. For that reason, Perfecto invested in developing a robust plugin that integrates nicely into the development workflow.

Espresso Framework

There is no doubt that the Espresso test automation framework is seeing growing adoption among developers, for reasons such as:

  1. It is embedded into Android Studio, which plays an important role for Android developers.
  2. It is very fast, and it is easy to execute tests and receive feedback on Android devices.

Espresso can be used within the Perfecto lab today in the following two modes:

  • Locally – Execution through DevTunnel (see below)
  • Via Continuous Integration (CI) – executing Espresso tests through a Jenkins server
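For reference, a minimal Espresso test looks roughly like this (a sketch using the android.support.test packages of that era; the activity name and view IDs are illustrative placeholders for your app, and the test must run on an Android device or emulator, locally or through DevTunnel):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginTest {
    // LoginActivity and the R.id values below are hypothetical placeholders.
    @Rule
    public ActivityTestRule<LoginActivity> activity =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void successfulLoginShowsHomeScreen() {
        onView(withId(R.id.username)).perform(typeText("demo"));
        onView(withId(R.id.password)).perform(typeText("secret"));
        onView(withId(R.id.login_button)).perform(click());
        // Assert that the post-login screen is displayed.
        onView(withId(R.id.home_container)).check(matches(isDisplayed()));
    }
}
```

The same test class runs unchanged in both modes above: locally against a DevTunnel-attached cloud device, or at scale from a Jenkins job.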

In the community series dedicated to DevTunnel, you can learn more about its capabilities and use cases, and find samples to get started with the new capability.

To see this in action, refer to the video playlist that demonstrates how to get started and install DevTunnel, use the Perfecto lab within Android Studio with Espresso for testing and debugging purposes, and more.

 

Good Luck!

How To Adapt to Mobile Testing Complexity Increase Due to External App Context Features?

If you have followed my blogs, white papers and webinars over the past few years, you are already familiar with the most common challenges of mobile app testing, such as:

  • Device/OS proliferation and market fragmentation
  • Ability to test the real end-user environment within and outside of the app
  • Testing both visual aspects/UI as well as native elements of the app
  • Keeping up with the agile and release cadence while maintaining high app quality
  • Testing for a full digital experience across mobile, web, IoT, etc.

 

While the above challenges can largely be addressed with various tools, techniques and guidelines, there is a growing trend on both the iOS and Android platforms that adds another layer of complexity for testers and developers. With iOS 10 and Android 7 as the latest OS releases, but also with earlier versions, we are starting to see more ways to engage with an app from outside of the app itself.


If we look at the recent change made in iOS 10 around iMessage, it is clear that Apple is trying to enable mobile app developers to engage their end users even outside of the app itself. Heavy messaging users can remain in the app/screen they are using and respond quickly to other apps’ notifications in various ways.

This innovation is a clear continuation of the Force Touch (3D Touch) functionality introduced with iOS 9 and the iPhone 6S/6S Plus, which allows users to press the app icon without opening the full app and perform a quick action, such as writing a new Facebook status or uploading an image to Facebook.

Add to the above the recent Android 7.1 App Shortcuts support, which allows users to create shortcuts on the device screen for app capabilities they commonly use. Another example is the Android 7.0 split-window feature, which allows an app to consume one half or one third of the device screen while the rest is allocated to a different app that may compete with yours for hardware/system resources.

So What Has Changed?

Quick answer – A lot 🙂

As I recently wrote in my blog on mobile test optimization, test plans for different mobile OS versions are becoming more and more complex. Teams need a solid methodology for deferring the right tests (manual/automated) to the right platforms, based on the supported features of the app and the capabilities of the devices. Testing app shortcuts (see the example below), for instance, is obviously irrelevant on Android 7.0 and below, so the test matrix/decision tree needs to accommodate this.

appshortcuts
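That platform-dependent pruning can be captured in a tiny decision helper. The feature names and minimum OS versions below are illustrative assumptions for the sketch:

```python
# Minimal sketch of a test-matrix decision helper: skip tests for
# features the target Android version cannot support. The feature
# names and minimum versions below are illustrative assumptions.

MIN_ANDROID_VERSION = {
    "app_shortcuts": (7, 1),  # Android 7.1+
    "split_window": (7, 0),   # Android 7.0+
    "doze_mode": (6, 0),      # Android 6.0+
}

def is_test_applicable(feature, os_version):
    """Return True if the feature's tests should run on this OS version."""
    minimum = MIN_ANDROID_VERSION.get(feature)
    if minimum is None:
        return True  # no known constraint -> run everywhere
    # Compare only major.minor, e.g. "7.1.1" -> (7, 1)
    return tuple(int(p) for p in os_version.split(".")[:2]) >= minimum

print(is_test_applicable("app_shortcuts", "7.1"))  # True
print(is_test_applicable("app_shortcuts", "7.0"))  # False
```

A decision tree like this can be fed from the device list in the lab, so each run only schedules the tests that are relevant for each device.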

To be able to test different app contexts, make sure you have the following capabilities in place from a tool perspective, and include the following test scenarios in your test plan.

  1. Testing tools must now support both the app under test and the full device system, in order to engage with system popups, iMessage apps, the device screen for force-touch-based testing, etc.
  2. The test plan, in whatever tree or tool it is managed, ought to accommodate the variance between platforms and devices and allow relevant testing of apps–>features–>devices (see my referenced blog above for more insights)
  3. New test scenarios should be considered if your app leverages such capabilities:
    1. What happens when incoming events like calls or text messages occur while the app is being used within an iMessage/split-screen/shortcut context? Also, what happens when these apps receive other notifications (on the lock screen or within the unlocked device screen)?
    2. What happens to the app under degraded environment conditions, such as loss of network connection or flight mode being turned on? Note that apps like iMessage rely on network availability.
    3. If your app engages with a third-party app (Facebook, iMessage, others), keep in mind that these apps are also exposed to defects that are not under your control. If they do not work well or crash, you need to simulate such scenarios early in your testing activities and understand the impact on your app and business.
    4. Apps that work with iMessage, as an example, may require a different app submission process and may be part of a separate binary build that needs to be tested properly; take this into account.
    5. Since the above complexities all depend on the market and OS releases, make sure that any beta OS version that is released gets proper testing by your teams to ensure no regressions occur.
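The interruption scenarios above multiply quickly across app contexts, so it helps to generate the combinations explicitly rather than list them by hand. The context and event names here are illustrative, not an exhaustive or tool-specific list:

```python
# Sketch: enumerate interruption-test combinations for external app
# contexts. The contexts and interruption events below are
# illustrative assumptions for the example.

from itertools import product

CONTEXTS = ["imessage_extension", "split_screen", "app_shortcut"]
INTERRUPTIONS = ["incoming_call", "text_message", "network_loss", "flight_mode"]

def interruption_matrix(contexts=CONTEXTS, interruptions=INTERRUPTIONS):
    """Return every (context, interruption) pair the test plan should cover."""
    return [
        {"context": c, "event": e, "name": "{}__{}".format(c, e)}
        for c, e in product(contexts, interruptions)
    ]

matrix = interruption_matrix()
print(len(matrix))        # 12 scenarios from 3 contexts x 4 interruptions
print(matrix[0]["name"])  # imessage_extension__incoming_call
```

Generating the matrix as data also makes it easy to prune pairs that are irrelevant on a given platform before scheduling the run.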

I hope these insights help you plan for a trend that I see growing in the mobile space, one that IMO adds an extra layer of challenges to existing test plans.

Comments are always welcome.

Happy Testing!

Mobile Testing On Real Devices Vs. Emulators

Though it seems the debate over the importance of testing on real devices, and basing a Go/No-Go release decision only on real devices, is over, I am still being asked: why is it important to test on real devices? What are the emulators’ limitations?

In this blog I will try to summarize some key points and differences that may help answer these questions.

emulatorslimitations

End Users Use Real Devices, Not Emulators

A mobile app developed and deployed to the market isn’t intended to be used on desktops with mouse and keyboard, but on real devices with small screens, limited hardware, RAM, storage and many other unique attributes. Testing on a different target than the one end users will use simply exposes organizations to quality, security, performance and other risks.

End users engage with the application using unique gestures like Touch ID, Force Touch and voice commands, and they operate their mobile apps in conjunction with many other background apps and system processes. These conditions are simply either hard to mimic on emulators or not supported by them at all.

As seen in the visual above, emulators don’t carry the real hardware that a real device does; this includes the chipset, screen, sensors and so forth.

Platform OS Differences

Mobile devices run a different OS flavor than the one that runs on emulators. Think about a Samsung device, or devices launched by Verizon, T-Mobile, AT&T and other large carriers: the platform versions that run on these devices are far different from the ones that run on emulators.

Thinking about devices and carriers, note that real devices receive plenty of notifications: push notifications, location alerts, incoming text messages (WhatsApp etc.), Google Play/App Store app updates and so forth. These are not available on emulators, and by not testing under these real environment conditions, the test coverage is simply incomplete.

real_env_conditions

The above image was actually taken from my own device while I was travelling to New York last week. Look at the number of background pop-ups, notifications and real conditions (network, location, battery) while I was simply using the Waze app. This is a very common scenario for most end users of any mobile app. There is no way to mimic all of the above on emulators in real time, under real network conditions, etc.

Think also of varying network-condition simulation: transitioning from Wi-Fi to a real carrier network, then adding a complete loss of network connection, which impacts location, notifications and more.
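One way to make those transitions concrete is to model each run as a sequence of network profiles paired with the checks each profile implies. The profile names and check labels below are illustrative assumptions, not tool-specific settings:

```python
# Sketch: model network-condition transitions as data that a test
# harness could step through. Profile names and the checks mapped to
# them are illustrative assumptions.

TRANSITION_SCENARIOS = [
    ["wifi", "4g"],        # leaving Wi-Fi coverage for the carrier network
    ["4g", "none", "4g"],  # driving through a network dead zone
    ["wifi", "none"],      # losing connectivity entirely mid-session
]

# Verifications implied by entering each network profile.
CHECKS_BY_PROFILE = {
    "none": ["offline_indicator_shown", "data_queued_for_retry"],
    "4g": ["sync_resumes", "notifications_delivered"],
}

def checks_for(scenario):
    """Collect the checks a transition sequence should trigger."""
    checks = []
    for profile in scenario:
        checks.extend(CHECKS_BY_PROFILE.get(profile, []))
    return checks

print(checks_for(["wifi", "none"]))  # ['offline_indicator_shown', 'data_queued_for_retry']
```

Keeping the scenarios as data means the same app flow can be replayed under each transition without duplicating test code.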

Wasting time testing against the wrong platforms costs money, exposes risk and is inefficient.

Innovative Use Cases Simulation

With the mobile OS platforms recently released to the market, including Android 7.1.1 and iOS 10.x, we are starting to see a growing trend of apps being used in different contexts.

appshortcuts

With Android 7.1.1 we now see App Shortcuts (above image), which allow developers to create a shortcut to a specific feature of the application. This was already achievable with iOS 9 force-touch capabilities. Add use cases like iMessage apps, introduced in iOS 10, and the split window in Android 7.0, and you understand that an app can be engaged by the user either through part of the screen or from within a totally different app like iMessage.

With such complexities, not only are test plans getting more fragmented across devices and platforms, but the gap between what an emulator can offer developers and testers and what a real device in a real environment can is also growing.

Bottom Line

Developers may find value in using emulators at a given stage of the app, and I am not taking that away; testing on an emulator within the native IDEs in the early stages is great. However, when thinking about the complete SDLC, release criteria and test coverage, there is no doubt that real devices are the only way to go.

Don’t believe me? Ask Google: https://developer.android.com/studio/run/device.html

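For completeness, here is roughly what pointing an Appium test at a real device (rather than an emulator) looks like at the desired-capabilities level. The device name, UDID and app path are placeholders, not real endpoints:

```python
# Sketch of Appium desired capabilities targeting a real Android
# device vs. an emulator. Device name, UDID and app path below are
# placeholder values for illustration only.

REAL_DEVICE_CAPS = {
    "platformName": "Android",
    "platformVersion": "7.0",
    "deviceName": "Samsung Galaxy S7",  # placeholder model
    "udid": "0123456789ABCDEF",         # placeholder hardware serial
    "app": "/path/to/app.apk",          # placeholder path
}

EMULATOR_CAPS = dict(REAL_DEVICE_CAPS, deviceName="Android Emulator", udid=None)

def targets_real_device(caps):
    """Heuristic: an emulator target has no hardware serial (udid)."""
    return bool(caps.get("udid")) and "Emulator" not in caps.get("deviceName", "")

print(targets_real_device(REAL_DEVICE_CAPS))  # True
print(targets_real_device(EMULATOR_CAPS))     # False
```

The point of the sketch: switching a suite from emulators to real devices is mostly a capabilities change, so the test code itself stays the same while the coverage becomes real.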

Happy And REAL Device Testing 🙂

3 Ways to Make Mobile Manual Testing Less Painful

With 60% of the industry still functioning at 30% mobile test automation, it’s clear that manual testing takes a major chunk of a testing team’s time. As we acknowledge the need for both manual and automated testing, and without drilling down into the caveats of manual testing, let’s look at how teams can reduce the time it takes, and even transition to a more automated approach to testing.

1. Manual and Automation Testing: Analyze Your Existing Test Suite

You and your testing team should be well-positioned to optimize your test suite in one of the following ways:
– Scope out irrelevant manual tests per specific test cycles (e.g. not all manual tests are required for each sanity or regression test)
– Eliminate tests that consistently pass and don’t add value
– Identify and consolidate duplicate tests
– Suggest manual tests that are better-suited for automation (e.g. data driven tests or tests that rely on timely setup of environments)

The result should be a mixture of both manual and automation testing approaches, with the goal of shifting more of the testing toward automation.
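The suite-analysis step above can be partly automated. Here is a minimal sketch that flags always-passing tests and duplicates for review; the record fields and sample tests are illustrative assumptions:

```python
# Sketch of test-suite analysis: flag always-passing tests and
# duplicate tests (same steps) for review. The record structure and
# sample data below are illustrative assumptions.

from collections import Counter

def optimize_suite(results):
    """results: list of dicts with 'name', 'steps', and 'history' of pass/fail."""
    always_pass = [t["name"] for t in results
                   if t["history"] and all(r == "pass" for r in t["history"])]
    step_counts = Counter(tuple(t["steps"]) for t in results)
    duplicates = [t["name"] for t in results
                  if step_counts[tuple(t["steps"])] > 1]
    return {"always_pass": always_pass, "duplicates": duplicates}

suite = [
    {"name": "login_ok", "steps": ("open", "login"), "history": ["pass", "pass"]},
    {"name": "login_dup", "steps": ("open", "login"), "history": ["pass", "fail"]},
    {"name": "checkout", "steps": ("open", "buy"), "history": ["fail", "pass"]},
]
report = optimize_suite(suite)
print(report["always_pass"])  # ['login_ok']
print(report["duplicates"])   # ['login_ok', 'login_dup']
```

Flagged tests still need a human decision (retire, consolidate, or automate), but the analysis itself no longer has to be manual.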


2. Consider a Smooth Transition to “Easier” Test Frameworks

In most cases, the blocker to increasing test automation lies inside the QA team, and is often related to a skills gap. Today’s common mobile test automation tools are open source and require medium- to high-level development skills in languages such as Java, C#, Python and JavaScript. These skills are hard to find within traditional testing teams. On the other hand, if QA teams utilize alternatives such as Behavior Driven Development (BDD) solutions like Cucumber, they create an easier path toward automation by virtue of using a common language that is easy to start with and to scale.
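The core idea behind BDD frameworks like Cucumber is that plain-language steps map onto automation functions. This toy sketch shows the mechanism; the step patterns and app actions are hypothetical, not Cucumber's actual API:

```python
# Toy sketch of the BDD idea: plain-language steps are matched by
# pattern to automation functions. Step patterns and the fake "app"
# state below are hypothetical illustrations.

import re

STEP_REGISTRY = {}

def step(pattern):
    """Register an automation function for a plain-language step."""
    def decorator(func):
        STEP_REGISTRY[pattern] = func
        return func
    return decorator

@step(r'I launch the app')
def launch_app(ctx):
    ctx["screen"] = "home"

@step(r'I tap "(?P<button>[^"]+)"')
def tap(ctx, button):
    ctx["screen"] = button.lower()

@step(r'I should see the "(?P<screen>[^"]+)" screen')
def assert_screen(ctx, screen):
    assert ctx["screen"] == screen.lower(), ctx["screen"]

def run_scenario(steps):
    """Execute plain-language steps against the registered functions."""
    ctx = {}
    for line in steps:
        for pattern, func in STEP_REGISTRY.items():
            match = re.fullmatch(pattern, line)
            if match:
                func(ctx, **match.groupdict())
                break
        else:
            raise ValueError("No step definition for: " + line)
    return ctx

scenario = [
    'I launch the app',
    'I tap "Login"',
    'I should see the "Login" screen',
]
print(run_scenario(scenario))  # {'screen': 'login'}
```

Because the scenario itself is plain language, manual testers can write and review it while a smaller group maintains the step definitions underneath.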


3. Shift More Test Automation Inside the Dev Cycle

When thinking about your existing test automation code and the level of coverage it provides, there may be functional overlap between the automated and manual testing. If the automation scripts are shared across the SDLC and executed post-commit on every build, this can shrink some of the manual validation work the testing team needs to do. Also, by joining forces with your development and test automation teams and having them help with test automation, you will decrease the workload and create shorter cycles, resulting in happier manual testers.

Bottom Line:

Most businesses will continue to have a mix of manual and automation testing. Manual testing will never go away and in some cases, it is even a product requirement. But as you optimize your overall testing strategy, investing in techniques like BDD can make things much easier for everyone involved with both manual and automation testing throughout the lifecycle.


iOS 10 Is About to Disrupt Mobile Testing Plans

For background: with iOS 10, Apple has decided to make certain iPads, iPods and iPhones obsolete by capping the OS they can run at iOS 9.x.

The devices that will not be able to upgrade to iOS 10 and above are:

  • iPad 2
  • iPad 3rd gen
  • iPad mini
  • iPhone 4S
  • iPod touch 5th gen

Why is this a problem for developers and testers?

Let’s examine iOS 9 adoption. Based on the data below from Mixpanel, two weeks after iOS 9 launched on September 16, 2015, 40% of users had already upgraded to iOS 9. Today, iOS 9 adoption is close to 90%.

ios9_adoption

Now, here’s a quick look at market share numbers for iOS devices.

2016-phone_localytics

ipad2

Based on this data from Localytics and the recent Perfecto Digital Test Coverage Index report, it’s clear that at least the iPad 2, iPhone 4S and iPad mini are among the most-used devices in various markets, including the U.S. (see below).

index1-jpg

Implications for mobile testing plans

The information above means one thing: dev and test teams will need to support the new iPhone 7/7 Plus along with other Apple devices that can run iOS 10, while also reserving a portion of testing for iOS 9 on the older devices mentioned above. Unfortunately, this will create latency in testing activities. It will also require test automation, so tests can easily be executed on devices running both iOS 9 and iOS 10.
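Splitting the device pool into the two OS tracks can be done mechanically from the compatibility list. The device list comes from this post; the function and field names are illustrative:

```python
# Sketch: split a device pool into iOS 9-only and iOS 10-capable test
# groups. The incompatible-device list is from the post; everything
# else is an illustrative assumption.

IOS10_INCOMPATIBLE = {"iPad 2", "iPad 3rd gen", "iPad mini",
                      "iPhone 4S", "iPod touch 5th gen"}

def partition_devices(devices):
    """Return (ios9_only, ios10_capable) device-name lists."""
    ios9_only = [d for d in devices if d in IOS10_INCOMPATIBLE]
    ios10_capable = [d for d in devices if d not in IOS10_INCOMPATIBLE]
    return ios9_only, ios10_capable

pool = ["iPad 2", "iPhone 6S", "iPhone 7", "iPad mini"]
ios9_only, ios10_capable = partition_devices(pool)
print(ios9_only)       # ['iPad 2', 'iPad mini']
print(ios10_capable)   # ['iPhone 6S', 'iPhone 7']
```

The same automated suite can then be scheduled against both groups, which is exactly why automation matters here.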

Another side effect may be lower iOS 10 adoption, because large groups of users will simply be stuck on the iPad 2, iPad mini and iPhone 4S. We also see a clear market trend of older iPads remaining the most popular, as iPad users take longer to upgrade to new tablets than Android tablet users do.

The end result is we’re going to see growing iOS fragmentation in various markets and complex testing ahead.

Open source test framework implications

iOS 10 is not just disrupting the mobile device and OS landscape, it’s also impacting open-source test frameworks such as Appium.

As you can read in the threads below, iOS 10 is “breaking” Appium test framework functionality, impacting the installation and launching of IPA files. It is also causing issues when working with XCUITest and the iOS WebKit.

It’s worth noting that vendors such as Perfecto are able to overcome this challenge; they continue to support Appium test automation with iOS 10 on real iOS devices, for all beta versions and for the GA version from the first day it was publicly available.

3 Motivations That Made Me Switch From iOS to Android

As a mobile evangelist at Perfecto, I have been observing the entire mobile and web space for the past 10+ years, following major trends on both the device/hardware front and the platform/OS (operating system) front.

I was an Apple user for the past two years, using an iPhone 6 Plus for both my personal and my daily work activities. Last month I decided it was time for a change, and I replaced my iPhone with a Google Nexus 6P phablet.

Let me explain some of my reasons for the switch:

  1. Quality and Innovation
  2. Platform Restrictions
  3. Future Looking


Quality and Innovation

On the quality vs. innovation front, I found that, as a two-year trend, Apple’s iOS constantly struggled with quality, mostly at the expense of innovative features and end-user experience. Over the past two years Apple released ten versions of iOS 8, stopping at a stable GA of iOS 8.4.1, while for iOS 9 Apple released 10+ versions, stopping at the recent 9.3.5 GA release that addresses security issues. Compare this trend to the Android platform: Android 5.0 Lollipop was released in November 2014 and was enhanced until the latest version, 5.1.1 (roughly five versions in two years). Android 6.0 Marshmallow was released in October 2015 and has since had only one additional release, 6.0.1. Last month (August 22nd) Google released its new Nougat 7.0 release, available to users (like me) who hold a Nexus device. iOS 10 is just around the corner with the iPhone 7 devices, but based on the current trend and the numerous public beta versions, it seems no major changes are expected in the quality/release cadence.

In recent Android history we have seen major enhancements around sensor-based capabilities for payment and login, as well as UX (user experience) features such as multi-window support (see the image below) and Android Doze (a battery-saving capability). In iOS we also see enhancements around sensors, like force touch and Apple Pay; however, these features IMO come up short against the platform’s stability over the past 24 months and the platform constraints, which I’ll highlight in the next section.

20160823_142250 Screenshot_20160823-141941

Platform Restrictions

From an end-user perspective, some of the important platform features involve the ability to customize the UX and look and feel of one’s personal device, along with the ability to easily manage media files such as photos and music with reasonable storage availability. Apple’s flagship device with massive market share across regions is the iPhone 6/6S, with a default (non-expandable) storage of 16GB. I hardly know a person with this device/storage size who is happy with it and does not need to constantly delete files, cancel auto-saving of WhatsApp media files and the like. In addition, continuously working with the iTunes software as a dependency for media/song sync is a pain; I often found myself losing my favorite music files, or getting them duplicated, simply by having to switch from one PC to another (people do that, and there are procedures that might have prevented this outcome, but still). By comparison, most Android devices that do not come with an external storage option ship with 64GB of internal memory by default, and working with the music file system is a simple and straightforward task.

Switching from my iPhone and iTunes to a Nexus device, with my Gmail account, was a very simple thing to do: my music, photos and apps easily “followed” me to the Android device, which is already running Android 7 in a stable way.

iOS is not all bad, don’t get me wrong. From an adoption and device/OS fragmentation perspective, it is by far a better-managed platform than Android, whose latest GA version rolls out to non-Nexus devices (Samsung, for example) with a 4-6 month delay. In addition, iOS tablets still lead on that front: 4-6-year-old tablets like the iPad Air and iPad 2 are the most commonly used tablets in the market and can still run iOS 9. This is not the case with Android tablets, which tend to be replaced by their end users in a shorter period of time than iPads.

 


Future Looking

Looking ahead, my opinion is that Google will keep its global market-share advantage over Apple and will continue to innovate, with fewer quality-driven releases than Apple. 2017 is going to show us a continued battle between Android 7 and iOS 10 in a market that is becoming more and more digital and mobile-dependent. With this in mind, the challenges of quality, innovation and fewer restrictions will be even more critical, both to individual users and to large enterprises that are already fully digital today.

As an end user, I would look at both Google and Apple and examine how their overall digital strategies transform and enable easier connectivity with smart devices like watches, as well as less limited storage and device/OS customization. From a dev and test perspective, I assume we will continue to see growing adoption of open-source tools such as Espresso, XCUITest and Appium as a way of keeping up with the OS platform vendors. Only such open-source frameworks can easily and dynamically grow to support new features and functionality, compared with legacy/commercial tools, which are slower to introduce new APIs and capabilities into their solutions.