Wednesday, March 30, 2016

Free VRs to schools for a better future

The older I get, the more I feel that the current education system needs a massive improvement. The weakest point I can see is its inability to motivate students to learn. A classic would say: "A teacher can only show you the way; you have to take it." Unfortunately, this works only for students whose motivation comes from someone else (usually their parents). It would never work for students whose parents are not capable of explaining the importance of a proper education (usually such parents lack time, or knowledge). Therefore, teachers have to motivate students.

They will never succeed with the current means (books from the 80s and 90s). Interactive screens and smart devices for each student are just not enough. Teachers need to go beyond! They need means which would enable them to associate boring facts with emotions.

Virtual reality to rescue us all!

In my dreams, my children would take only one device to school instead of pens and notebooks (or their electronic counterparts): a VR set. During each class, they would connect to a host room created by the teacher, together with all the other students in the class.

With the help of gamification, they would be able to master the material far more quickly. Imagine that in a history class they could profoundly feel the atmosphere of a World War 2 battle (a Call of Duty-like FPS), or quickly meet people from very different cultures. In science class, they would be able to land on the moon. My favorite: in literature class, they would investigate the murder committed by Raskolnikov in a logical adventure.

Educational games can draw inspiration from MMORPGs: the best students would be able to lead the raid on the final boss of each class (e.g. Napoleon). They would gain some kind of gems (William Wallace's sword), which would give them an advantage in the next classes. Gained experience points would also influence their final exam. The VR simulation would be full of logic games and quizzes, which would require preparation at home. Why not have the final exam fully scripted as a test in VR?

Step by step, day by day, we can positively influence children from their very early days to their adulthood. I believe that this way there would be no Neo-Nazism, and young people would find faster what they adore and what they are keen on.

State-of-the-art games and VR headsets show us that this is not science fiction, and that we are already ready for it. Big game studios and VR producers just need to see it as a business opportunity, and maybe cooperate with governments to push this forward.

So, looking forward to a better future :)

Sunday, January 24, 2016

Appium workarounds #1

Appium is a great tool indeed. For Android, for example, it integrates projects like UI Automator and ChromeDriver, provides a server and a client API, and instruments emulators/real devices. As expected, all of these components have their own bugs. The following is a list of my workarounds, which should work reliably until fixed upstream (once I know the root cause, I will report them :)).

Components versions used:

Appium java-client: 3.2.0
Appium server: 1.4.16
ChromeDriver: 2.20
Emulator: Android 6.0, API level 23 with Intel x86 image

Clear password field reliably

Sometimes, password fields are not cleared reliably, and you just end up with the new password appended to the prefilled one. This is because UIAutomator currently cannot read a password field's value, and thus the automatic fallback (which attempts to clear the text field until its value is empty) fails.

What about having something like a PasswordField widget, with a clear method as follows:
public void clear() {
  for (int i = 0; i < passwordValue.length(); i++) {
    driver.pressKeyCode(67); // 67 = KEYCODE_DEL (backspace)
  }
}
It should be enough to tap the middle of the password field first, when passwords are not too long.

Find an element which just recently faded in

Sometimes, Appium was not able to find an element (e.g. android.widget.Button) which had just faded in. It was not a timing issue. It was non-deterministic, made tests flaky, and almost drove me nuts.

Calling driver.getPageSource() before attempting to find such an element solved my problem.

Inspired by this Appium bug report.

Set date picker field more reliably

There are multiple tutorials on how to set a Date Picker on Android. They simply advise calling WebElement#sendKeys on the day, month and year elements. Sometimes it just fails to completely clear the previous value, resulting in a wrong date being set. An easy solution is to set the new value multiple times:
WebElement picker = driver.findElement(PICKER_LOCATOR);

int counter = 0;
while (!picker.getText().equalsIgnoreCase(value) && counter < MAX_TRIES_TO_SET_VALUE) {
    picker.clear();
    picker.sendKeys(value); // confirm entered value
    counter++;
}
if (!picker.getText().equalsIgnoreCase(value)) {
    throw new IllegalStateException("It was not possible to set new value: " + value +
            " in Android picker after " + MAX_TRIES_TO_SET_VALUE + " tries.");
}
Normally this is not a good practice, and I try to avoid repeating an action until it is successful, as it introduces false positive results. However, as long as the problem is not in our own component, it is OK.

Sunday, January 10, 2016

Lesson learned from Google Test Automation Conference 2015

On November 10-11, 2015, the 9th GTAC took place. Although I was not there, I enjoyed it very much :) How come?

It is because of brilliant recordings:

  • recordings were available very soon after the event (2 weeks later)
  • great video & audio quality
  • audience questions were repeated by the moderator
  • but the most important part was the outstanding content
This way, one can enjoy talks given by professionals from big enterprises such as the Google YouTube team, the Google Chrome OS team, Twitter, Uber, Spotify, Netflix, LinkedIn, Lockheed Martin and more.

I often try to watch conference videos. However, I always give up before finishing them all, because they are usually publicly available at least half a year after the event, and thus often outdated. These talks were different though!

Following are my notes from each talk. Hopefully someone will find them useful, and will be encouraged to watch the full versions.

Keynote - Jürgen Allgayer (Google YouTube)

  • Cultural change, which consisted of:
    • take a SNAPSHOT of where we are (how many bugs occur in the staging phase, etc.)
    • make an SLA (what is our goal? the need for a tool which will tell: this team is this effective)
    • and agree on it in the whole organisation
    • continuously measure according to the defined SLA, to see where we are
    • how many manual tests? Where do we find bugs more often?
  • Goal: no manual regression testing, but manual exploratory testing instead.

The Uber Challenge of Cross-Application/Cross-Device Testing - Apple Chow & Bian Jiang

  • biggest challenge: two separate apps (driver and passenger), while the same scenario can be completed using both apps.
    • solution: an in-house framework called Octopus, which is capable of running two emulators and managing the communication between them
    • Octopus uses signaling to make sure test steps are executed in the right order -> asynchronous timeouts
    • Octopus focus: iOS, Android, parallel execution, signaling, extensibility (it does not matter which UI framework is used)
    • the communication is done through USB as the most reliable channel
    • sending files to communicate - most reliable
  • Why is the communication not mocked? Answer: This is part of your happy path, to finally ensure you are good to go. It does not replace your unit tests.
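The file-based signaling idea can be sketched in plain Java. This is a minimal illustration of the concept only; FileSignal and its methods are my own names, not Octopus's actual API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: one test process drops a signal file when it reaches
// a sync point; the other polls for the file with an asynchronous timeout.
public class FileSignal {
    private final Path signalFile;

    public FileSignal(Path dir, String name) {
        this.signalFile = dir.resolve(name);
    }

    // Called by the process that reached the sync point (e.g. the "driver" app).
    public void raise() throws IOException {
        Files.write(signalFile, "done".getBytes());
    }

    // Called by the waiting process (e.g. the "passenger" app); polls until
    // the file appears or the timeout elapses.
    public boolean await(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (Files.exists(signalFile)) {
                return true;
            }
            Thread.sleep(50);
        }
        return false; // asynchronous timeout, as mentioned in the talk
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("octopus-demo");
        FileSignal signal = new FileSignal(dir, "driver-arrived");
        signal.raise();                          // "driver" side reached the sync point
        System.out.println(signal.await(1000));  // "passenger" side observes it
    }
}
```

Files are attractive here precisely because, as the talk notes, transferring them over USB is the most reliable channel between the two devices.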

Robot Assisted Test Automation - Hans Kuosmanen & Natalia Leinonen (OptoFidelity)

  • When is robot-based testing needed?
    • complex interacting components and apps
    • testing reboot or early boot reliability
    • medical industry, safety
    • Chrome OS uses it

Mobile Game Test Automation Using Real Devices - Jouko Kaasila (Bitbar/Testdroid)

  • use OpenCV for image comparison
    • side note: OpenCV is capable of a lot of interesting things (object detection, machine learning, video analysis, GPU-accelerated computer vision), BSD license
  • parallel server side execution
  • Appium server, Appium client, OpenCV - all on one virtual machine instance
    • screenshots do not go through internet
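The screenshot-comparison step can be illustrated without OpenCV at all. The following is a naive stand-in of my own, assuming grayscale pixel arrays; OpenCV's real matchers are far more robust (scaling, rotation, template matching):

```java
// Naive stand-in for image comparison in game testing: mean absolute
// difference over grayscale pixel values. A low score means the captured
// screen matches the reference closely enough.
public class PixelDiff {
    public static double meanAbsDiff(int[] expected, int[] actual) {
        if (expected.length != actual.length) {
            throw new IllegalArgumentException("Images must have the same size");
        }
        long sum = 0;
        for (int i = 0; i < expected.length; i++) {
            sum += Math.abs(expected[i] - actual[i]);
        }
        return (double) sum / expected.length;
    }

    public static void main(String[] args) {
        int[] reference = {10, 20, 30, 40};   // reference screenshot pixels
        int[] screenshot = {12, 18, 30, 44};  // pixels captured on the device
        System.out.println(meanAbsDiff(reference, screenshot)); // 2.0
    }
}
```

Keeping this comparison on the same virtual machine as the Appium server is exactly why, as the talk points out, the screenshots never need to travel over the internet.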

Chromecast Test Automation - Brian Gogan (Google)

  • testing WIFI functionality in ''Test beds'' (if I heard the name correctly)
    • small faraday cage which can block signal
    • shield rooms
  • Bad WIFI network - software emulated (netem)
  • (6:35) things that went bad
    • test demand exceeded device supply
    • test results varying across devices (e.g. HDMI) - solutions: support groups in the device manager, add allocation wait time & alerts, SLA < 5 wait time for any device in any group, full traceability of device and test run
  • (6:44) things that went really wrong
    • unreliable devices, arbitrarily going offline for many reasons
      • fried hardware, overheating, loss of network connection, kernel bugs, broken recovery mechanisms, mutable MACs - solutions: monitoring, logging, redundancy, connectivity sanity checks at device allocation time, static IPs, quarantine broken devices, buy good hardware
  • the first prototype of the testing lab was built on cardboard

Using Robots for Android App Testing - Dr. Shauvik Roy Choudhary (Georgia Tech/Checkdroid)

  • 3 ways to navigate / explore app
    • random (Monkey)
    • model based
    • systematic exploration strategy
  • (12:00) - tools comparison

Your Tests Aren't Flaky - Alister Scott (Automattic)

  • A rerun culture is toxic.
  • There is no such thing as flakiness if you have a testable app.
  • Application test-ability is more than IDs for every element.
  • Application test-ability == Application usability.
  • How to kill flakiness
    • do not rerun tests, use flaky tests as an insight -> build test-ability
  • (16:10) - a very strong statement to fight flaky tests - I would make a big poster out of it and make it visible to all testers in the QA department :)
    • ''What I do have are a very particular set of skills, skills I have acquired over a very long testing career. Skills that make me a nightmare for flaky tests like you. I will look for you, I will find you, and I will kill you'' - Liam Neeson, Test Engineer

Large-Scale Automated Visual Testing - Adam Carmi (Applitools)

  • why not pixel-to-pixel comparison?
    • anti-aliasing differs on each machine - different algorithms used
    • same with pixel brightness
  • screenshot baseline maintenance should be codeless

Hands Off Regression Testing - Karin Lundberg (Twitter) and Puneet Khanduri (Twitter)

  • in-house project Diffy - diffs the responses from two production servers and a new candidate server
    • clearer from this slide
    • an interesting way to deal with the "noise" (timestamps, random numbers)
  • use production traffic by instrumenting clusters
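The noise-handling idea might be sketched like this. This is illustrative only; ResponseDiff and the field names are my assumptions, not Twitter's Diffy code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Sketch of the Diffy idea: diff the response from a production server
// against the response from a candidate server, while ignoring fields
// that are expected to differ on every request ("noise").
public class ResponseDiff {
    private static final Set<String> NOISY_FIELDS =
            new HashSet<>(Arrays.asList("timestamp", "requestId"));

    public static Map<String, String> diff(Map<String, String> prod,
                                           Map<String, String> candidate) {
        Map<String, String> differences = new LinkedHashMap<>();
        for (String key : prod.keySet()) {
            if (NOISY_FIELDS.contains(key)) {
                continue; // noise: timestamps, random numbers, etc.
            }
            String a = prod.get(key);
            String b = candidate.get(key);
            if (!Objects.equals(a, b)) {
                differences.put(key, a + " != " + b);
            }
        }
        return differences;
    }

    public static void main(String[] args) {
        Map<String, String> prod = new LinkedHashMap<>();
        prod.put("status", "OK");
        prod.put("timestamp", "1452420000");
        Map<String, String> cand = new LinkedHashMap<>();
        cand.put("status", "FAIL");
        cand.put("timestamp", "1452420099");
        // Only the real regression shows up; the timestamp noise is filtered.
        System.out.println(diff(prod, cand)); // {status=OK != FAIL}
    }
}
```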

Automated Accessibility Testing for Android Applications - Casey Burkhardt (Google)

  • we all have accessibility problems on a daily basis: driving a car, cooking
  • accessibility is about challenging developers' assumptions that the user can hear, see the content, interact with the app, distinguish colors
  • Android services: TalkBack, BrailleBack
  • (8:22) - common mistakes
  • (10:34) - accessibility test framework
    • can interact with Espresso

Statistical Data Sampling - Celal Ziftci (Google) and Ben Greenberg (MIT graduate student)

  • getting testing data from production
    • collecting logs from requests and responses
  • the whole production data set needs to be taken into consideration
    • they managed to reduce the sample to a minimum

Nest Automation Infrastructure - Usman Abdullah (Nest), Giulia Guidi (Nest) and Sam Gordon (Nest)

  • (4:15) - Challenges of IoT
    • coordinating sensors
    • battery powered devices
  • (4:55) - Solutions
  • motion detection challenges
    • end to end pipeline
    • reproducibility
    • test duration
  • Motion detection tested with camera in front of TV :)

Enabling Streaming Experiments at Netflix - Minal Mishra (Netflix)

  • Canary deployment for Web Apps
    • Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.
    • I knew the process under different names: Android staged rollout of the app, or phased rollout.
    • Danilo Sato describes this in more detail here.

Mock the Internet - Yabin Kang (LinkedIn)

  • Flashback proxy - their in-house project, which acts as a gateway proxy for three tier architecture communication with the outside world (external partners, Google, Facebook, etc.)
  • it works in record and replay modes
  • it can act as a proxy between components of the three-tier architecture, or as a proxy for mobile clients' communication
  • mocks the network layer

Effective Testing of a GPS Monitoring Station Receiver - Andrew Knodt (Lockheed Martin)

  • GPS can be divided into three segments:
    • user segment (mobile clients) which receives the signal
    • space segment - satellites
    • control segment - tells satellites what to do, 31 satellites currently operating
  • Monitoring station receiver, used in the control segment - measures the distance to each satellite

Automation on Wearable Devices - Anurag Routroy (Intel)

  • (3:00) - how to set up a real Android wearable device to test on
  • (7:00) - how to start an Appium session for a wearable device

Unified Infra and CI Integration Testing (Docker/Vagrant) - Maxim Guenis (Supersonic)

  • using Docker to create a database with pre-populated data (a MySQL snapshot), so each test session starts with fresh data
  • vagrant + docker
    • because they need iOS, Windows
  • not using Docker in production; it is not mature enough, and because of legacy code
  • docker plus Selenium
    • it handles Selenium server
    • good for CI
  • Docker runs inside Jenkins slaves, runs smoothly
  • running 100 browser instances simultaneously requires powerful workstations though
  • One Selenium Grid for each stack
  • static vs. dynamic program analysis
  • great book XUnit Test Patterns
  • copy-and-pasted tests increase "test debt"
  • verify part of test often helps to find similarities among tests, and later refactor them
  • Soot framework, an open-source library for analysis of Java bytecode (also Android), used for finding refactorable test methods

Coverage is Not Strongly Correlated with Test Suite Effectiveness - Laura Inozemtseva (University of Waterloo)

  • how can we estimate the fault-detection ability of a test suite? - mutation testing
    • good mutation candidates: change plus to minus, change constant values
  • mostly well-known and obvious facts presented
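The plus-to-minus mutation can be shown in a few lines. This is a toy illustration of my own; real tools such as PIT generate and run mutants automatically:

```java
// Mutation testing in miniature: a mutant swaps '+' for '-'. A test suite
// that "kills" the mutant (observes a different result) has real
// fault-detection power; one that merely covers the line may not.
public class MutationDemo {
    static int add(int a, int b) { return a + b; }       // original
    static int addMutant(int a, int b) { return a - b; } // mutated: + -> -

    public static void main(String[] args) {
        // A weak test: add(2, 0) == 2 holds for BOTH versions, so it covers
        // the line but does not kill the mutant.
        boolean weakKills = add(2, 0) != addMutant(2, 0);
        // A stronger test: add(2, 3) == 5 fails for the mutant (which yields -1).
        boolean strongKills = add(2, 3) != addMutant(2, 3);
        System.out.println(weakKills + " " + strongKills); // false true
    }
}
```

This is exactly why coverage alone correlates weakly with effectiveness: both tests above cover the same line, yet only one detects the injected fault.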

Fake Backends with RpcReplay - Matt Garrett (Google)

  • the problem with mocks/stubs: we need to ensure they are working, so we have to test them as well
  • they record requests and responses (RPC server), and serve them instead of starting expensive servers
  • a continuous job updates the RPC logs
  • as a bonus, no problems with broken dependencies: tests run against the last green microservices, so if one microservice is broken, devs are not blocked
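The record/replay idea might look roughly like this. A minimal sketch under my own naming; RpcLog is not Google's RpcReplay API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal record/replay sketch: in record mode, calls hit the real backend
// and responses are logged; in replay mode, logged responses are served
// without starting any expensive server.
public class RpcLog {
    private final Map<String, String> log = new HashMap<>();
    private boolean recording = true;

    public String call(String request, Function<String, String> realBackend) {
        if (recording) {
            String response = realBackend.apply(request);
            log.put(request, response); // a continuous job would refresh this log
            return response;
        }
        String replayed = log.get(request);
        if (replayed == null) {
            throw new IllegalStateException("No recorded response for: " + request);
        }
        return replayed;
    }

    public void switchToReplay() { recording = false; }

    public static void main(String[] args) {
        RpcLog rpc = new RpcLog();
        rpc.call("GET /user/42", req -> "{\"name\":\"alice\"}"); // record once
        rpc.switchToReplay();
        // Replayed from the log; the backend lambda is never invoked.
        System.out.println(rpc.call("GET /user/42", req -> "unused"));
    }
}
```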

Chrome OS Test Automation Lab - Simran Basi (Google) and Chris Sosa (Google)

  • Chrome OS development model
    • stable releases from branches, no development on branches, just cherry-picks from always stable trunk
    • all feature implementation and bug fixes on trunk first
  • using BuildBot - a CI framework
  • they are emulating changes in the distance from the WIFI router to the Chrome OS device
  • type of testing they are doing
  • they are using AutoTest
  • (19:40) Chrome OS partners' goals for testing: OEMs, SoC and HW component vendors, independent BIOS vendors
  • what kinds of bugs real devices found on top of emulators: WIFI, Bluetooth, kernel, touchpad, anything low-level

Sunday, July 19, 2015

Very bad network simulation for testing of mobile applications [PART 2]

In the previous post we talked about the need for a platform-independent, scriptable solution for testing your mobile applications under poor internet conditions. To complement the theory with something executable, this post introduces scripts (for Debian-like Linux), and a guide to set up your own WIFI access point, which simulates a slow, unreliable mobile internet connection. You will be able to connect with your Android, iOS, Windows, whatever devices, and see from your office how your apps adapt.

This tutorial will be divided into following sections:

  1. Failure of the first attempt to solve this problem.
  2. Obtaining the right USB WIFI dongle.
  3. Tutorial for creating an AP from your Linux based workstation.
  4. Script for changing the Quality of service (QoS) characteristics of your AP.
  5. Script for setting a particular QoS, simulating GPRS, EDGE, 3G, LTE, whatever networks.
  6. Example usage

Failure of the first attempt to solve this problem.

My first attempt did not end successfully. I am not saying it is the wrong way; I was just not able to make it work. The plan was to:

  • Buy WIFI dongle with ability to be in AP mode.
  • Virtualize OpenWRT (a small Linux distro, usually run on routers) in the VirtualBox.
  • Install on that virtual machine a Cellular-Data-Network-Simulator - which is capable of running on OpenWRT, and is built on well-known technologies: tc, iptables and CloudShark.
  • Connect with devices to that AP, and use CloudShark to sniff the network in order to see particular packets.
It looked promising. It would be just an integration of already existing parts, not reinventing the wheel. A fairy tale. It even worked. I found out that it would require some work to script the way devices connect to the Cellular-Data-Network-Simulator, and the way the QoS characteristics are changed in order to switch among 2G, 3G, etc. networks, but that was nothing impossible to overcome. The biggest problem I encountered after setting it up was the stability of the AP: it switched off the WIFI dongle at random intervals. I studied various OpenWRT log files, but did not find the root cause, hence I was not able to fix it. I needed to think of a different way. The following describes my second attempt, which finally worked.

Obtaining the right USB WIFI dongle.

First things first. Before buying a WIFI dongle, check its chipset and see whether it is supported by a Linux driver. I am using a TP-Link TL-W22N. Its AR9271 chipset is supported by the ath9k_htc driver.

Tutorial for creating an AP from your Linux based workstation.

Next, you will need to set up various things properly: hostapd, a DHCP server, the firewall. I followed this great post (automated in the install script for Debian-like systems here). In that install script, you can also spot a part (wifi_access_point) which enables you to start the AP as a service.

Script for changing the Quality of Service (QoS) characteristics of your AP.

Now you should be able to connect to the created AP with your devices. It should provide an Internet connection of similar quality to the one on your workstation. To simulate various cellular data networks, we need to limit it somehow.

The following script does it by setting various firewall rules. You will need to alter it a bit before using it.
  1. Set IF_IN to the name of the network interface dedicated to the created AP.
  2. Set IF_OUT to the name of the network interface by which your workstation is connected to the Internet.
  3. Set IP_IN to the IP address space which will be assigned to your connected devices (you chose this when setting up the DHCP server).
  4. Set IP_OUT to the IP address of your application's backend server.
Save the following script, and name it e.g.
#  tc uses the following units when passed as a parameter.
#  kbps: Kilobytes per second
#  mbps: Megabytes per second
#  kbit: Kilobits per second
#  mbit: Megabits per second
#  bps: Bytes per second
#       Amounts of data can be specified in:
#       kb or k: Kilobytes
#       mb or m: Megabytes
#       mbit: Megabits
#       kbit: Kilobits
#  To get the byte figure from bits, divide the number by 8.

# Name of the traffic control command.
TC=tc

# The network interfaces we're planning on limiting bandwidth.
IF_IN=  # interface dedicated to the created AP
IF_OUT= # interface by which the workstation is connected to the Internet

# IP addresses of the machines we are controlling
IP_IN=  # Host IP address space
IP_OUT= # the address of your backend server

# Filter options for limiting the intended interface.
U32_IN="$TC filter add dev $IF_IN protocol ip parent 1: prio 1 u32"
U32_OUT="$TC filter add dev $IF_OUT protocol ip parent 2: prio 1 u32"

start() {
    ping -c 1 $IP_OUT >/dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "Error:"
        echo "The IP address: $IP_OUT is not reachable!"
        echo "Check out the backend server address!"
        exit 1
    fi

    $TC qdisc add dev $IF_IN root handle 1: htb default 30
    # download bandwidth
    $TC class add dev $IF_IN parent 1: classid 1:1 htb rate "$1"
    $U32_IN match ip dst $IP_IN/24 flowid 1:1
    # in delay
    $TC qdisc add dev $IF_IN parent 1:1 handle 10: netem delay "$3" "$4" distribution normal
    # in packet loss
    $TC qdisc add dev $IF_IN parent 10: netem loss "$7"

    # upload bandwidth
    $TC qdisc add dev $IF_OUT root handle 2: htb default 20
    $TC class add dev $IF_OUT parent 2: classid 2:1 htb rate "$2"
    $U32_OUT match ip dst $IP_OUT/32 flowid 2:1
    # out delay
    $TC qdisc add dev $IF_OUT parent 2:1 handle 20: netem delay "$5" "$6" distribution normal
    $U32_OUT match ip dst $IP_OUT/32 flowid 20:
}

stop() {
    # Stop the bandwidth shaping.
    $TC qdisc del dev $IF_IN root
    $TC qdisc del dev $IF_OUT root
}

show() {
    # Display the traffic control status.
    echo "Interface for download:"
    $TC -s qdisc ls dev $IF_IN
    echo "Interface for upload:"
    $TC -s qdisc ls dev $IF_OUT
}

case "$1" in
start)
    if [ "$#" -ne 8 ]; then
        echo "ERROR: Illegal number of parameters"
        echo "Usage: $0 start [downloadLimit] [uploadLimit] [inDelayMax] [inDelayMin] [outDelayMax] [outDelayMin] [packetLossPercentage]"
        echo "[downloadLimit] See the man page of the tc command for supported formats, e.g. 1mbit."
        echo "[uploadLimit] The same as for downloadLimit applies here."
        echo "[inDelayMax] Max delay in milliseconds for requests outgoing from the AP."
        echo "[inDelayMin] Min in delay."
        echo "[outDelayMax] Max delay in milliseconds for requests outgoing to servers."
        echo "[outDelayMin] Min out delay."
        echo "[packetLossPercentage] The percentage of packets lost."
        echo "Example: $0 start 1mbit 1mbit 50ms 20ms 30ms 10ms 5%"
        exit 1
    fi
    echo "Starting shaping quality of service: "
    start $2 $3 $4 $5 $6 $7 $8
    echo "done"
    ;;
stop)
    echo "Stopping shaping quality of service: "
    stop
    echo "done"
    ;;
show)
    echo "Shaping quality of service status for $IF_IN and $IF_OUT:"
    show
    echo ""
    ;;
*)
    echo "Usage: $0 {start|stop|show}"
    ;;
esac

exit 0

Script for setting a particular QoS, simulating GPRS, EDGE, 3G, LTE, whatever networks.

Now that you have a script to limit the QoS characteristics of your created AP, you will need to do some measurements in order to have a clue what bandwidth, latency, and packet loss various cellular data networks have. You will need to find a way to measure these characteristics in the environment where your customers use your application.

The reason is that the same data network type (e.g. 3G) can have different QoS characteristics in different places. There are other factors in play as well: the mobile Internet provider, the hour of the day, city vs. village, the weather and the like. For the measurement I used handy mobile applications (for bandwidth and latency) and Fing as a double check, as it is able to ping any server you like.

Off-topic: Would it not be awesome if there were a web service which would give me the average QoS characteristics of any place in the world for a particular hour of the day, for a particular data carrier, for particular weather and other conditions? I submitted a bachelor thesis assignment, but so far no enrollment :) And it would be IMO quite easy to set up: a mobile application with gamification characteristics to collect the particular statistics, store them, and make them available via some REST endpoints.

Save the following script into the same directory where the previous script was saved. The measured values you can see are valid for an average morning in Brno, Czech Republic. The average was taken after one week of measuring.


QOS=    # path to the QoS script created in the previous section

echo -n "Shaping WIFI to "

case "$1" in
GPRS)
    echo "GPRS"
    $QOS stop > /dev/null 2>&1
    $QOS start 80kbit 20kbit 200ms 40ms 200ms 40ms 5%
    ;;
EDGE)
    echo "EDGE"
    $QOS stop > /dev/null 2>&1
    $QOS start 200kbit 260kbit 120ms 40ms 120ms 40ms 5%
    ;;
HSDPA)
    echo "HSDPA"
    $QOS stop > /dev/null 2>&1
    $QOS start 2400kbit 2400kbit 100ms 100ms 100ms 100ms 5%
    ;;
LTE)
    echo "LTE"
    ;;
FULL)
    echo "FULL"
    $QOS stop > /dev/null 2>&1
    ;;
DISABLED)
    echo "DISABLED"
    $QOS stop > /dev/null 2>&1
    $QOS start 1kbit 1kbit 5000ms 5000ms 5000ms 5000ms 5%
    ;;
*)
    echo "Usage: $0 {GPRS|EDGE|HSDPA|LTE|FULL|DISABLED}"
    ;;
esac

exit 0

Example usage

So if you followed the steps, you should now be able to:
  1. Start the AP by: service wifi_access_point start.
  2. Simulate e.g. EDGE, by issuing: EDGE
Ideas have no limits. Use these scripts to e.g. network stress-test your application (write a bash script which would randomly switch among all network types at random intervals), or use WireShark to go deeper and see the actual packets being transmitted. Your development team will love you if you attach to your bug report a saved transmission with packet-level information. Fixing tough, non-deterministic network issues becomes much easier.
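The random-switching idea in the parenthesis could be sketched, for instance, like this. The post suggests bash; this Java version only demonstrates the selection loop, and the script path in the comment is a placeholder, since the switching script's name is not given above:

```java
import java.util.Random;

// Sketch of the "randomly switch networks" stress idea: pick a random
// profile, apply it, sleep a random interval, repeat. The profile names
// match the QoS-switching script from this post.
public class NetworkChaos {
    static final String[] PROFILES = {"GPRS", "EDGE", "HSDPA", "LTE", "FULL", "DISABLED"};

    static String pickProfile(Random random) {
        return PROFILES[random.nextInt(PROFILES.length)];
    }

    public static void main(String[] args) throws InterruptedException {
        Random random = new Random();
        for (int i = 0; i < 3; i++) {
            String profile = pickProfile(random);
            System.out.println("Switching to " + profile);
            // To actually switch, exec the switching script (placeholder path):
            // new ProcessBuilder("./<switch-script>.sh", profile).inheritIO().start().waitFor();
            Thread.sleep(random.nextInt(1000)); // random interval; in real use, minutes
        }
    }
}
```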

Disclaimer: I am still improving the scripts; use them at your own risk :) Any feedback on how you utilized these scripts, or your improvements, would be deeply appreciated.

Thursday, June 25, 2015

Very bad network simulation for testing of mobile applications [PART 1]


Mobile internet is a must for smartphones. Most apps are somehow connected to a server, syncing every now and then - whether it is to just show an advertisement, to sync your local changes with your profile somewhere in the cloud, or maybe to protect the app from being distributed as a cracked copy without paying for it.

But there is also another category of mobile applications, which heavily depend on the Internet connection. One example is applications intended for communication. Let's consider, for instance, the PhoneX app.

All its features (secure calls, secure messaging and secure file transfer) require a decent Internet connection to work. And it does not end with the main features: everything from authentication through contact presence to server push notifications establishes TCP or UDP connections with the servers.

Disadvantages of the traditional way

With such applications, QE teams have to devote non-trivial effort to testing the application's functionality under various network conditions. There are various ways to simulate real user conditions. Firstly, one can buy SIM cards for all of one's devices, enable mobile data, and spend a lot of time travelling around the city. This method makes the testing environment the most realistic one, but one has to consider its downsides as well:

  • Out of reach of your computer, it is more difficult to automate some of the app routines while you are moving, to offload the mundane repetition of interactions with the app. In your office, it would be much easier to set up a script or to write a functional test which would send 200 subsequent messages or so.
  • Quality of service statistics vary significantly around the globe. And you do not have to go far: for example, 3G bandwidth, latency and jitter are quite different in two towns not far away from each other (100 km). Needless to say, some places can only dream about LTE, and these QoS characteristics also vary according to the hour of the day (you would not like testing at 1 AM somewhere on public transportation). Simulating all these different conditions in a laboratory would indeed be more efficient.
  • It would be more difficult to intercept the communication, e.g. with WireShark. That is sometimes handy when developers need to see the actual transmitted packets in order to fix an issue.
  • It is more reliable to save mobile system logs, such as logcat on Android, right to the computer. I do not know why, but it is often the case for me that some of the logs are missing when saving them to a file on the device (maybe some buffer limitation, who knows). I found it more reliable to have the phones connected to the computer and save such logs right there.
  • A total loss of connection, or loss of some of the packets, is easier to script in your testing laboratory than in the real world.
  • Users also use various WIFI APs, whose restrictions (e.g. isolation of clients) can badly affect your application's features.
  • The most obvious reason is the time spent moving around out there, compared with the time spent in the comfort of your air-conditioned office furnished with the most ergonomic seats out there.
For sure there are other reasons why I consider simulating a poor internet connection in the laboratory a better option than trying to reproduce the bugs outside. Please note, I am not saying that it can substitute all testing while you are moving with the device. I am just saying that it can replace most of the testing under various network conditions.

Next part

In the next part we will look into how to set up a WIFI Access Point, and some scripts which enable simulating a poor internet connection. The iOS platform has a solution for this already (Settings -> Developer -> Network Link Conditioner). Our solution will be platform-independent, and will solve all of the disadvantages described above. Stay tuned.

Tuesday, June 2, 2015

Recording tests on Android (neither root, nor KitKat required)

A test suite is good only when it provides good feedback. Testing mobile apps is cumbersome and far from robust (actually, all UI tests are like that). A meaningful test report is inevitable. That is why I really like to have the executions of my tests recorded. Such recordings are a great way to avoid a repeated execution of a test just to find out why it failed (repeated execution of tests should be avoided like the plague).

It is awesome that Google added native support for recording the screen of your Android 4.4.x+ device, but what about the other folks with lower Android versions? We cannot afford to test only on 4.4+, as it is wise to support at least 4.0+. A rooted device is not the answer for me either, as we need to test on real devices - devices which are actually used by our customers.

OK, all Android versions are capable of taking a screen capture, so why not use this feature? The following describes small bash scripts which, in simple words, create a video (actually a .gif with 2 fps) from such screen captures. It is then easy to use them to record your functional UI tests (showcased on Appium tests).

Firstly, the script which takes screenshots into a specified directory on your device until terminated:

DIR=    # directory on the device where the screenshots will be stored

adb -s $1 shell rm -r $DIR > /dev/null 2>&1
adb -s $1 shell mkdir $DIR > /dev/null 2>&1
for (( i=1; ; i++ ))
do
 name=`date +%s`
 adb -s $1 shell screencap -p "$DIR/$name.png"
done
You can try it by executing ./ [serialNumberOfDevice].

Secondly, the script which retrieves the taken screenshots from the device to your computer, resizes them to a smaller resolution, and finally creates an animated .gif:

DIR_REMOTE=    # directory on the device where the screenshots were stored

mkdir "$1"
cd "$1"
adb -s $1 pull $DIR_REMOTE
echo "Resizing screenshots to a smaller size!"
mogrify -resize 640x480 *.png
echo "Converting to .gif."
convert -delay 50 *.png "$1"-test.gif
echo "Clearing..."
cp "$1"-test.gif ..
cd ..
rm -rf "$1"
Try it by executing ./ [serialNumberOfDevice] [pathToDirectoryIntoWhichSaveScreenshots]. Just note that it uses ImageMagick and its sub-packages.

Here is an example of a .gif created by the scripts above, recorded while sending encrypted files through the PhoneX app for secure communication:
So we have some scripts to execute (indeed there are things to improve, parameter checking etc.). There are various ways to use them in your tests; it all depends on which testing framework you are using and in which language your tests are written. We use Appium and its Java client. The following shows executing the first script at the beginning of each test class:
public class AbstractTest {
    private Process takeScreenshotsProcess = null;

    protected void setupDevice1() throws Exception {
        takeScreenshotsProcess = startTakingOfScreenshots(DEVICE_1_NAME);
        // Appium API calls to set up the device for testing omitted for readability
    }

    protected Process startTakingOfScreenshots(String deviceName) throws Exception {
        String[] cmd = { "sh/", getDeviceSerialNumber(deviceName) };
        return Runtime.getRuntime().exec(cmd);
    }

    public void tearDown() {
        if (takeScreenshotsProcess != null) {
            takeScreenshotsProcess.destroy();
        }
    }
}
Hopefully the code above is somewhat self-explanatory. It starts taking screenshots before the Appium API calls prepare a device for testing (install the APK, etc.). The same pattern can be used for any number of devices.

The next step is to use the script at the end of your CI job (e.g. Jenkins). I prefer fine-grained CI jobs which are quick to execute, to provide fast feedback. Therefore each job is one test (or a matrix of tests), and that is why taking screenshots is started in the @Before method and terminated in the @After method.

Please bear in mind that the above are just examples. They need to be polished and altered to one's needs. Enjoy testing.

Friday, January 27, 2012

Migration to Arquillian - done

Or how the RichFaces functional test suites were migrated to the Arquillian framework.

Table of contents
  1. Migration motivation
  2. Arquillian Ajocado project set up
  3. Writing tests
  4. RichFaces Selenium vs. Ajocado API

Migration motivation
The initial reason for migrating was a problem with the Maven Cargo plugin and its support of JBoss AS 7. In a short time we also realized how many additional advantages Arquillian would bring to our project. My task was to prove this concept by porting the functional tests of the RichFaces showcase app to the Arquillian framework.

Our former functional test suite was written as Selenium tests; more precisely, we used our homemade framework (RichFaces Selenium)
on top of Selenium 1, from which Arquillian Ajocado was born. You can read more about RichFaces Selenium on its author's blog.

So the benefits of the new platform - Arquillian + Arquillian Ajocado - were pretty obvious:
  • support for various containers (JBoss AS 6.0, JBoss AS 7.0, Tomcat 7 and many others; see this for more)
  • some of them are managed by Arquillian, so starting, deploying etc. are done automatically, which makes them suitable for CI tools such as Jenkins
  • the Drone extension brings a type-safe Selenium 1.0 API by providing Ajocado, and also comes with Selenium 2.0 support and its WebDriver API
  • rapid test development with Ajocado
  • Ajocado's best feature is not only the type-safe API; it also fills in Selenium's gaps with very useful tools for testing Ajax requests, namely its waitAjax and guardXHR methods, which are so essential with AJAX frameworks like RichFaces
  • Arquillian's future support of mobile device testing, and WebDriver's current support of mobile device testing with its Android and iOS plugins
  • last but not least, Arquillian is an open-source project with quite a big community; it is evolving quickly, and as it goes with open source, when a feature is missing you can either easily develop it with the support of the community (which I found out for myself when I was developing the Tomcat managed container for Arquillian) or file a feature request.

The only drawback we were aware of was the API incompatibilities between RichFaces Selenium and Ajocado. I will return to them at the end.

Arquillian Ajocado project set up
The best way to set up an Arquillian project is described in the documentation. As recommended, it is good to configure it as a Maven project, a recommendation the RichFaces project already fulfilled.
In short, two configuration files need to be written or altered. Here are examples from the migrated RichFaces projects: pom.xml and arquillian.xml.

As you can read in the docs, the only things you need to add to your pom.xml are the Arquillian dependencies and some profiles, which represent the desired containers onto which the tested application will be deployed. There is also an option to run the tests inside these containers, but in our project it is enough to run them on the client.
An example of such dependencies:
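A minimal sketch of those dependencies (the artifact coordinates and versions are assumptions based on the Arquillian and Ajocado releases of that era; check the Arquillian docs for the exact, current ones):

```xml
<dependencies>
  <!-- Arquillian TestNG integration (version is an assumption) -->
  <dependency>
    <groupId>org.jboss.arquillian.testng</groupId>
    <artifactId>arquillian-testng-container</artifactId>
    <version>1.0.0.CR7</version>
    <scope>test</scope>
  </dependency>
  <!-- Ajocado: type-safe Selenium 1 API, pulled in through Drone (version is an assumption) -->
  <dependency>
    <groupId>org.jboss.arquillian.ajocado</groupId>
    <artifactId>arquillian-ajocado-testng</artifactId>
    <version>1.0.0.CR3</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```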

With this, you bring into your project all the required Ajocado dependencies, and also the WebDriver object. Of course you have other options, like using the JUnit test framework instead of TestNG. For the complete setup, again please see the corresponding docs.

The next XML snippet is the required Maven profile, which represents the container into which our application under test will be deployed. This is an example for the JBoss AS 7.1.0.CR1b Arquillian container.
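A sketch of such a profile, matching the behaviour described below (the distribution GAV, the plugin wiring and the unpacked directory name are my assumptions; see the Arquillian docs for the exact form):

```xml
<profile>
  <id>jbossas-managed-7-1</id>
  <dependencies>
    <!-- Arquillian container adapter for managed JBoss AS 7 -->
    <dependency>
      <groupId>org.jboss.as</groupId>
      <artifactId>jboss-as-arquillian-container-managed</artifactId>
      <version>7.1.0.CR1b</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <!-- download and unzip the JBoss AS distribution into target/ -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>unpack-jbossas</id>
            <phase>process-test-classes</phase>
            <goals><goal>unpack</goal></goals>
            <configuration>
              <artifactItems>
                <artifactItem>
                  <groupId>org.jboss.as</groupId>
                  <artifactId>jboss-as-dist</artifactId>
                  <version>7.1.0.CR1b</version>
                  <type>zip</type>
                  <outputDirectory>${project.build.directory}</outputDirectory>
                </artifactItem>
              </artifactItems>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <!-- select the arquillian.xml qualifier and point Arquillian at the unpacked server -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <systemPropertyVariables>
            <arquillian.launch>jbossas-managed-7-1</arquillian.launch>
          </systemPropertyVariables>
          <environmentVariables>
            <JBOSS_HOME>${project.build.directory}/jboss-as-7.1.0.CR1b</JBOSS_HOME>
          </environmentVariables>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```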
Note that 7.1.0.Final is going to be released soon (7 February 2012), and then it will not take much time to release the Arquillian managed dependency as well. So in order to use newer versions of the container, please check out the available Maven dependencies (JBoss Nexus) or the JBoss AS download page and change the version accordingly.
When this profile is executed in the standard way, mvn -Pjbossas-managed-7-1, the distribution of JBoss AS is downloaded from the Maven repo and unzipped into the target directory.
With the help of the Surefire plugin we then set the system property arquillian.launch, which selects the right configuration from arquillian.xml. Indeed, you can achieve the same with -Darquillian.launch=[correspondingArquillianXMLQualifier]. And lastly, we set up the JBOSS_HOME environment variable to tell Arquillian where our managed container is installed. The same thing can be achieved by setting the jbossHome property correctly in arquillian.xml.

The last required config file is arquillian.xml, placed on the classpath, so the ideal place for it is src/test/resources. An example which sets up the config for the above-mentioned JBoss AS, for Ajocado, for the Selenium server and for WebDriver would look like:

<arquillian xmlns="" xmlns:xsi="">

  <engine>
    <property name="maxTestClassesBeforeRestart">10</property>
  </engine>

  <container qualifier="jbossas-managed-7-1">
    <configuration>
      <property name="javaVmArguments">-Xms1024m -Xmx1024m -XX:MaxPermSize=512m</property>
      <property name="serverConfig">standalone-full.xml</property>
    </configuration>
    <protocol type="jmx-as7">
      <property name="executionType">REMOTE</property>
    </protocol>
  </container>

  <extension qualifier="selenium-server">
    <property name="browserSessionReuse">true</property>
    <property name="port">8444</property>
  </extension>

  <extension qualifier="ajocado">
    <property name="browser">*firefox</property>
    <property name="contextRoot">http://localhost:8080/</property>
    <property name="seleniumTimeoutAjax">7000</property>
    <property name="seleniumMaximize">true</property>
    <property name="seleniumPort">8444</property>
    <property name="seleniumHost">localhost</property>
  </extension>

  <extension qualifier="webdriver">
    <property name="implementationClass">org.openqa.selenium.firefox.FirefoxDriver</property>
  </extension>

</arquillian>

In this file we defined that at most 10 test classes will be executed before the container is restarted. This is a workaround for the famous OutOfMemoryError: PermGen space thrown after multiple deployments on containers. The number 10 was chosen by running the suite multiple times, and for now it seems to be the optimal number of test classes, but you should test your own suite and choose your own number. See also MaxPermSize, set for all containers to quite a big chunk of memory, also because of the above-mentioned error.
The next configuration is for the JBoss AS managed container; jbossHome is omitted here since we are setting JBOSS_HOME in pom.xml. Then there are additional JVM arguments, mainly increasing the permanent generation size and the heap size. These settings seem to be the most effective: we manage to run 10 test classes, and run them quickly.
The configuration for the Selenium server consists of the property browserSessionReuse, which determines whether the same browser session should be reused, in other words whether a new browser is started after each test. The true value accelerates the tests quite dramatically. The Selenium server needs to run for Ajocado tests; for WebDriver it does not. For further configuration options, please see the docs.

There is also a need to alter your Java code to run with Arquillian. If you follow the docs, you will be successful for sure. I am just providing our approach.
We have one base class which is common to the whole test suite. It contains a method for deploying the application under test. We are deploying the whole application for all test classes, which will probably be replaced by deploying only what is needed, with the help of the ShrinkWrap project. This improvement should accelerate the testing.
Since we want to write both Ajocado and WebDriver tests, we provide two classes which particular tests will extend. It is not possible at the moment to have Ajocado and WebDriver objects simultaneously accessible from the same test class.

Writing tests
Writing tests with Ajocado and Arquillian is really simple and fast. That is because Ajocado targets rapid development with its object-oriented API as much as possible. With these features and modern IDE code completion, writing tests is more pleasure than struggle. This is an example of such a test; let's examine it further.

So, as I mentioned above, to run a test successfully, there should be:

Method for deploying application under test:

@Deployment(testable = false)
public static WebArchive createTestArchive() {
    WebArchive war = ShrinkWrap.createFromZipFile(WebArchive.class, new File("target/showcase.war"));
    return war;
}
We are deploying a war which was copied into the project build directory with the help of the Maven dependency plugin. This method is located in the parent of the test, as it is the same for all tests. The argument in the annotation stands for running the tests on the client rather than on the server side.

Method for loading correct page on the browser:

@BeforeMethod(groups = { "arquillian" })
public void loadPage() {
    String addition = getAdditionToContextRoot();
    this.contextRoot = getContextRoot();, "/showcase/", addition));
}

We have set up our tests so that the correct page is loaded according to the test class name; the method getAdditionToContextRoot deals with that. The context root can be set in arquillian.xml, and finally you just load the page as you are used to from Selenium 1, but again, instead of a String you are using a higher-level object. Just note that if you are using testng.xml for including some test groups, you need to add your before/after methods to the group arquillian, and also include this group in the particular testng.xml.
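How getAdditionToContextRoot derives the path from the class name is not shown here; the following is a hypothetical sketch of such a mapping (the class name TestSimplePoll, the stripping rule and the resulting path pattern are all my assumptions, not the suite's actual code):

```java
public class PathFromClassName {

    // hypothetical mapping: "TestSimplePoll" -> "richfaces/simplePoll.jsf"
    static String additionFor(String testClassName) {
        // drop the "Test" prefix if present
        String base = testClassName.startsWith("Test")
                ? testClassName.substring("Test".length())
                : testClassName;
        // lower-case the first letter to match a page-file naming convention
        String page = Character.toLowerCase(base.charAt(0)) + base.substring(1);
        return "richfaces/" + page + ".jsf";
    }

    public static void main(String[] args) {
        System.out.println(additionFor("TestSimplePoll"));   // richfaces/simplePoll.jsf
    }
}
```

Deriving the URL from the class name keeps test classes free of hard-coded paths, which is what makes the shared @BeforeMethod possible.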

protected JQueryLocator commandButton = jq("input[type=submit]");
protected JQueryLocator input = jq("input[type=text]");
protected JQueryLocator outHello = jq("#out");

public void testTypeSomeCharactersAndClickOnTheButton() {
    // type a string, click on the button, check the outHello
    String testString = "Test string";

    // write something to the input
    selenium.typeKeys(input, testString);

    // check whether an AJAX request was fired after the click
    guard(selenium, RequestType.XHR).click(commandButton);

    String expectedOutput = "Hello " + testString + " !";
    assertEquals(selenium.getText(outHello), expectedOutput, "The output should be: " + expectedOutput);
}
I think the test is pretty much self-explanatory. Here you can also see the differences between Selenium 1 and Ajocado. Focus mainly on the fact that Ajocado uses various objects instead of just Strings for everything. In this example it is JQueryLocator, which provides a convenient and fast way of locating page elements by jQuery selectors.

RichFaces Selenium vs. Ajocado API
The API differences were one of the last problems we had. As we were using RichFaces Selenium, which has a very similar API to Ajocado, the migration could be done automatically. However, at first I had to migrate the whole suite manually to see the differences exactly. With the list of API changes I was able to develop a small Java app to automate this migration in the future. It was created mainly for our purposes, as there were new tests to migrate each day, but you can adapt the app for your purposes too. It is nothing big; it could probably easily be done in bash, but I am quite weak in bash scripting, so I did it in Java.

For the complete Ajocado vs. RichFaces Selenium differences, please visit the mentioned app's sources, where you can find them all. Here I am providing just the most important ones:

RichFaces Selenium → Ajocado
  • ElementLocator.getAsString → ElementLocator.getRawLocator
  • AjaxSeleniumProxy.getInstance() → AjaxSeleniumContext.getProxy()
  • SystemProperties → SystemPropertiesConfiguration; its methods are no longer static, for example seleniumDebug is retrieved this way: AjocadoConfigurationContext.getProxy().isSeleniumDebug()
  • JQueryLocator.getNthOccurence → JQueryLocator.get
  • RetrieverFactory.RETRIEVE_TEXT → TextRetriever.getInstance()
  • getNthChildElement(i) → removed; can be replaced by new JQueryLocator(SimplifiedFormat.format("{0}:nth-child({1})", something.getRawLocator(), index))
  • RequestTypeGuard → RequestGuardInterceptor
  • RequestTypeGuardFactory → RequestGuardFactory
  • RequestInterceptor → RequestGuard
  • CommandInterceptionException → CommandInterceptorException
  • keyPress(String) → keyPress(char)
  • keyPressNative(String) → keyPressNative(int), so it is now possible to use KeyEvent static fields directly
  • isNotDisplayed → elementNotVisible; what is important about the whole displayed vs. visible change is that the visible methods fail when the element is not present, while the displayed methods return true, so keep this in mind while using the visible methods and consider checking elementPresent first
  • selenium.isDisplayed → selenium.isVisible
  • selenium.getRequestInterceptor() → selenium.getRequestGuard()
  • clearRequestTypeDone() → clearRequestDone()
  • waitXhr, waitHttp → guard(selenium, RequestType.XHR), guard(selenium, RequestType.HTTP)
  • FrameLocator → now has two implementations, FrameIndexLocator and FrameDOMLocator
  • ElementLocator methods → almost all methods were removed; only a few remain, since ElementLocator now implements the Iterator interface and they can easily be replaced with it. An example is here (see the initializeStateDataFromRow method)
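The nth-child replacement from the list above can be illustrated with plain string formatting (SimplifiedFormat's {0}-style placeholders are replaced here by String.format, and the selector div.row is just an example, not from the suite):

```java
public class NthChildExample {

    // builds the jQuery selector that the getNthChildElement(i) replacement produces,
    // e.g. "div.row" + index 3 -> "div.row:nth-child(3)"
    static String nthChild(String rawLocator, int index) {
        return String.format("%s:nth-child(%d)", rawLocator, index);
    }

    public static void main(String[] args) {
        System.out.println(nthChild("div.row", 3));   // div.row:nth-child(3)
    }
}
```

Wrapping the resulting string in a new JQueryLocator then gives back a type-safe locator, which is the whole point of the Ajocado API.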