User Information

Contents

Installation

This page describes the three methods of installing Workload Automation 3: using pip to install the latest release, installing the latest development version from GitHub, or building an image from the provided Dockerfile.

Prerequisites

Operating System

WA runs on a native Linux install. It has been tested on recent Ubuntu releases, but other recent Linux distributions should work as well. It should run on either a 32-bit or 64-bit OS, provided the correct versions of the dependencies (see below) are installed. Officially, other environments are not supported. WA has been known to run on Linux virtual machines and in Cygwin environments, though additional configuration may be required in both cases (known issues include making sure USB/serial connections are passed to the VM, and the wrong python/pip binaries being picked up in Cygwin). WA should work on other Unix-based systems such as BSD or Mac OS X, but it has not been tested in those environments. WA does not run on Windows (though it should be possible to get limited functionality with minimal porting effort).

Note

If you plan to run Workload Automation on Linux devices only, just SSH is required; the Android SDK is optional and only needed if you wish to run WA on Android devices at a later time. Then follow the steps below to install the necessary Python packages to set up WA.

However, you would be starting off with a limited number of workloads that will run on Linux devices.

Android SDK

To interact with Android devices you will need to have the Android SDK with at least one platform installed. To install it, download the ADT Bundle from here. Extract it and add <path_to_android_sdk>/sdk/platform-tools and <path_to_android_sdk>/sdk/tools to your PATH. To test that you’ve installed it properly, run adb version. The output should be similar to this:

adb version
Android Debug Bridge version 1.0.39

Once that is working, run

android update sdk

This will open up a dialog box listing available android platforms and corresponding API levels, e.g. Android 4.3 (API 18). For WA, you will need at least API level 18 (i.e. Android 4.3), though installing the latest is usually the best bet.

Optionally (but recommended), you should also set ANDROID_HOME to point to the install location of the SDK (i.e. <path_to_android_sdk>/sdk).
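
For example, you could add something like the following to your shell profile (the SDK path below is a placeholder; substitute your actual install location):

export ANDROID_HOME=<path_to_android_sdk>/sdk
export PATH=$ANDROID_HOME/platform-tools:$ANDROID_HOME/tools:$PATH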

Python

Workload Automation 3 currently supports Python 3.5+

Note

If your system’s default python version is still Python 2, please replace the commands listed here with their Python3 equivalent (e.g. python3, pip3 etc.)

pip

pip is the recommended package manager for Python. It is not part of the standard Python distribution and needs to be installed separately. On Ubuntu and similar distributions, this may be done with APT:

sudo apt-get install python-pip

Note

Some versions of pip (in particular v1.5.4, which comes with Ubuntu 14.04) are known to set the wrong permissions when installing packages, resulting in WA failing to import them. To avoid this, it is recommended that you update pip and setuptools before proceeding with installation:

sudo -H pip install --upgrade pip
sudo -H pip install --upgrade setuptools

If you do run into this issue after already installing some packages, you can resolve it by running

sudo chmod -R a+r /usr/local/lib/python3.X/dist-packages
sudo find /usr/local/lib/python3.X/dist-packages -type d -exec chmod a+x {} \;

(The paths above will work for Ubuntu; they may need to be adjusted for other distros).

Python Packages

Note

pip should automatically download and install missing dependencies, so if you’re using pip, you can skip this section. However, some of the packages that will be installed have C extensions and will require Python development headers to build. You can get those by installing the python-dev package with apt on Ubuntu (or the equivalent for your distribution).
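
On Ubuntu, that would look something like the following (if your default python is Python 3, the package may instead be called python3-dev):

sudo apt-get install python-dev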

Workload Automation 3 depends on the following additional libraries:

  • pexpect

  • docutils

  • pySerial

  • pyYAML

  • python-dateutil

  • louie

  • pandas

  • devlib

  • wrapt

  • requests

  • colorama

  • future

You can install these with pip:

sudo -H pip install pexpect
sudo -H pip install pyserial
sudo -H pip install pyyaml
sudo -H pip install docutils
sudo -H pip install python-dateutil
sudo -H pip install devlib
sudo -H pip install pandas
sudo -H pip install louie
sudo -H pip install wrapt
sudo -H pip install requests
sudo -H pip install colorama
sudo -H pip install future

Some of these may also be available in your distro’s repositories, e.g.

sudo apt-get install python-serial

Distro package versions tend to be older, so pip installation is recommended. However, pip will always download and try to build the source, so in some situations distro binaries may provide an easier fallback. Please also note that distro package names may differ from pip package names.

Optional Python Packages

Note

Unlike the mandatory dependencies in the previous section, pip will not install these automatically, so you will have to explicitly install them if/when you need them.

In addition to the mandatory packages listed in the previous sections, some WA functionality (e.g. certain plugins) may have additional dependencies. Since they are not necessary to be able to use most of WA, they are not made mandatory in order to simplify the initial WA installation. If you try to use a plugin that has additional, unmet dependencies, WA will tell you before starting the run, and you can install them then. They are listed here for those that would rather install them upfront (e.g. if you’re planning to use WA in an environment that may not always have Internet access).

  • nose

  • mock

  • daqpower

  • sphinx

  • sphinx_rtd_theme

  • psycopg2-binary

Installing

Installing the latest released version from PyPI (Python Package Index):

sudo -H pip install wlauto

This will install WA along with its mandatory dependencies. If you would like to install all optional dependencies at the same time, do the following instead:

sudo -H pip install wlauto[all]

Alternatively, you can also install the latest development version from GitHub (you will need git installed for this to work):

git clone git@github.com:ARM-software/workload-automation.git workload-automation
cd workload-automation
sudo -H python setup.py install

Note

Please note that installing from GitHub with pip will most likely result in an older and incompatible version of devlib being installed alongside WA. If you wish to use pip, please also manually install the latest version of devlib.

Note

Please note that while a requirements.txt is included, this is designed to be a reference of known working packages rather than to be used as part of a standard installation. The version restrictions in place as part of setup.py should automatically ensure the correct packages are installed; however, if you encounter issues, please try updating/downgrading to the package versions listed within.

If the above succeeds, try

wa --version

Hopefully, this should output something along the lines of

"Workload Automation version $version".

Dockerfile

As an alternative, we also provide a Dockerfile that will create an image called wadocker, which is preconfigured to run WA and devlib. Please note that the build process automatically accepts the licenses for the Android SDK, so please be sure that you are willing to accept these prior to building and running the image in a container.

The Dockerfile can be found in the “extras” directory or online at https://github.com/ARM-software/workload-automation/blob/next/extras/Dockerfile, which contains additional information about how to build and use the file.

(Optional) Post Installation

Some WA plugins have additional dependencies that need to be satisfied before they can be used. Not all of these can be provided with WA and so will need to be supplied by the user. They should be placed into ~/.workload_automation/dependencies/<extension name> so that WA can find them (you may need to create the directory if it doesn’t already exist). You only need to provide the dependencies for workloads you want to use.

APK Files

APKs are application packages used by Android. One needs to be installed on the device when running an ApkWorkload or a derivative. Please check the workload description using the show command to see which version of the APK the UI automation has been tested with, and place the APK in the corresponding workload’s dependency directory. Automation may also work with other versions (especially if it’s only a minor or revision difference; major version differences are more likely to contain incompatible UI changes), but this has not been tested. As a general rule we do not guarantee support for the latest version of an app, and apps are updated on an as-needed basis. We do, however, attempt to maintain backwards compatibility with the previous major release; beyond that, support will likely be dropped.
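
For example, to check the tested APK version for a workload and then place a matching APK into its dependency directory (the workload name and APK file name below are purely illustrative):

wa show geekbench
mkdir -p ~/.workload_automation/dependencies/geekbench
cp /path/to/geekbench.apk ~/.workload_automation/dependencies/geekbench/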

Gaming Workloads

Some workloads (games, demos, etc.) cannot be automated using Android’s UIAutomator framework because they render the entire UI inside a single OpenGL surface. For these, an interaction session needs to be recorded so that it can be played back by WA. These recordings are device-specific, so they would need to be done for each device you’re planning to use. The tool for doing this is revent, and it is packaged with WA. You can find instructions on how to use it in the How To section.

This is the list of workloads that rely on such recordings:

  • angrybirds_rio

  • templerun2

Maintaining Centralized Assets Repository

If there are multiple users within an organization that may need to deploy assets for WA plugins, that organization may wish to maintain a centralized repository of assets that individual WA installs will be able to automatically retrieve asset files from as they are needed. This repository can be any directory on a network filer that mirrors the structure of ~/.workload_automation/dependencies, i.e. has subdirectories named after the plugins whose assets they contain. Individual WA installs can then set the remote_assets_path setting in their config to point to the local mount of that location.
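
For example, an individual install’s config.yaml might contain an entry along these lines (the path is just an illustration of a local mount point):

remote_assets_path: /mnt/wa-assets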

(Optional) Uninstalling

If you have installed Workload Automation via pip and wish to remove it, run this command to uninstall it:

sudo -H pip uninstall wa

Note

This will not remove any user configuration (e.g. the ~/.workload_automation directory)

(Optional) Upgrading

To upgrade Workload Automation to the latest version via pip, run:

sudo -H pip install --upgrade --no-deps wa

User Guide

This guide will show you how to quickly start running workloads using Workload Automation 3.


Install

Note

This is a quick summary. For more detailed instructions, please see the Installation section.

Make sure you have Python 3.5+ and a recent Android SDK with API level 18 or above installed on your system. A complete install of the Android SDK is required, as WA uses a number of its utilities, not just adb. For the SDK, make sure that either ANDROID_HOME environment variable is set, or that adb is in your PATH.

Note

If you plan to run Workload Automation on Linux devices only, just SSH is required; the Android SDK is optional and only needed if you wish to run WA on Android devices at a later time.

However, you would be starting off with a limited number of workloads that will run on Linux devices.

In addition to the base Python install, you will also need to have pip (Python’s package manager) installed as well. This is usually a separate package.

Once you have those, you can install WA with:

sudo -H pip install wlauto

This will install Workload Automation on your system, along with its mandatory dependencies.

Alternatively, we provide a Dockerfile which can be used to create a Docker image for running WA along with its dependencies. More information can be found in the Installation section.

(Optional) Verify installation

Once WA has been installed, try executing

wa -h

You should see a help message outlining available subcommands.

(Optional) APK files

A large number of WA workloads are installed as APK files. These cannot be distributed with WA and so you will need to obtain those separately.

For more details, please see the installation section.

List Command

In order to get started with WA, we first need to find out what is available to use. To do this we can use the list command, followed by the type of plugin that you wish to see.

For example, to see what workloads are available along with a short description of each, run:

wa list workloads

This will give output in the format of:

   adobereader:    The Adobe Reader workflow carries out the following typical
                   productivity tasks.
    androbench:    Executes storage performance benchmarks
angrybirds_rio:    Angry Birds Rio game.
        antutu:    Executes Antutu 3D, UX, CPU and Memory tests
     applaunch:    This workload launches and measures the launch time of applications
                   for supporting workloads.
   benchmarkpi:    Measures the time the target device takes to run and complete the
                   Pi calculation algorithm.
     dhrystone:    Runs the Dhrystone benchmark.
     exoplayer:    Android ExoPlayer
     geekbench:    Geekbench provides a comprehensive set of benchmarks engineered to
                   quickly and accurately measure
                   processor and memory performance.
  #..

The same syntax can be used to display commands, energy_instrument_backends, instruments, output_processors, resource_getters, and targets. Once you have found the plugin you are looking for, you can use the show command to display more detailed information. Alternatively, please see the Plugin Reference for an online version.
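
For example, to list the available instruments:

wa list instruments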

Show Command

If you want to learn more information about a particular plugin, such as the parameters it supports, you can use the “show” command:

wa show dhrystone

If you have pandoc installed on your system, this will display man page-like description of the plugin, and the parameters it supports. If you do not have pandoc, you will instead see the same information as raw restructured text.

Configure Your Device

There are multiple options for configuring your device depending on your particular use case.

You can either add your configuration to the default configuration file, config.yaml, under the $WA_USER_DIRECTORY/ directory, or you can specify it in the config section of your agenda directly.

Alternatively, if you are using multiple devices, you may want to create a separate config file for each of the devices you will be using. This allows you to specify which device you would like to use for a particular run by passing the corresponding file as an argument with the -c flag:

wa run dhrystone -c my_device.yaml

By default WA will use the “most specific” configuration available; for example, any configuration specified inside an agenda will override a passed configuration file, which will in turn override the default configuration file.

Note

For more information about configuring your device please see Setting Up A Device.

Android

By default, the device WA will use is set to ‘generic_android’. WA is configured to work with a generic Android device through adb. If you only have one device listed when you execute adb devices, and your device has a standard Android configuration, then no extra configuration is required.

However, if your device is connected via network, you will have to manually execute adb connect <device ip> (or specify this in your agenda) so that it appears in the device listing.
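
For example (using a placeholder IP address), after connecting the device should show up in the listing:

adb connect 192.168.0.100
adb devices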

If you have multiple devices connected, you will need to tell WA which one you want it to use. You can do that by setting device in the device_config section.

# ...

device_config:
        device: 'abcdef0123456789'
        # ...
# ...

Linux

First, set the device to ‘generic_linux’

# ...
  device: 'generic_linux'
# ...

Find the device_config section and add these parameters

# ...

device_config:
        host: '192.168.0.100'
        username: 'root'
        password: 'password'
        # ...
# ...

Parameters:

  • Host is the IP of your target Linux device

  • Username is the user for the device

  • Password is the password for the device

Enabling and Disabling Augmentations

Augmentations are the collective name for “instruments” and “output processors” in WA3.

Some augmentations are enabled by default after your initial install of WA; these are specified in the config.yaml file located in your WA_USER_DIRECTORY, typically ~/.workload_automation.

Note

Some Linux devices may not be able to run certain augmentations provided by WA (e.g. cpufreq is disabled or unsupported by the device).

# ...

augmentations:
    # Records the time it took to run the workload
    - execution_time

    # Collects /proc/interrupts before and after execution and does a diff.
    - interrupts

    # Collects the contents of /sys/devices/system/cpu before and after
    # execution and does a diff.
    - cpufreq

    # Generate a txt file containing general status information about
    # which runs failed and which were successful.
    - status

    # ...

If you only want to keep the ‘execution_time’ instrument enabled, you can comment out the rest of the listed augmentations to disable them.
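
The relevant part of config.yaml might then look something like this (based on the default list shown above):

augmentations:
    - execution_time
    # - interrupts
    # - cpufreq
    # - status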

This should give you basic functionality. If you are working with a development board or you need some advanced functionality additional configuration may be required. Please see the device setup section for more details.

Note

In WA2, ‘Instrumentation’ and ‘Result Processors’ were divided into their own sections in the agenda. In WA3 they now fall under the same category of ‘augmentations’. For compatibility the old naming structure is still valid, however using the new entry names is recommended.

Running Your First Workload

The simplest way to run a workload is to specify it as a parameter to WA’s run sub-command:

wa run dhrystone

You will see INFO output from WA as it executes each stage of the run. A completed run output should look something like this:

INFO     Creating output directory.
INFO     Initializing run
INFO     Connecting to target
INFO     Setting up target
INFO     Initializing execution context
INFO     Generating jobs
INFO         Loading job wk1 (dhrystone) [1]
INFO     Installing instruments
INFO     Installing output processors
INFO     Starting run
INFO     Initializing run
INFO         Initializing job wk1 (dhrystone) [1]
INFO     Running job wk1
INFO         Configuring augmentations
INFO         Configuring target for job wk1 (dhrystone) [1]
INFO         Setting up job wk1 (dhrystone) [1]
INFO         Running job wk1 (dhrystone) [1]
INFO         Tearing down job wk1 (dhrystone) [1]
INFO         Completing job wk1
INFO     Job completed with status OK
INFO     Finalizing run
INFO         Finalizing job wk1 (dhrystone) [1]
INFO     Done.
INFO     Run duration: 9 seconds
INFO     Ran a total of 1 iterations: 1 OK
INFO     Results can be found in wa_output

Once the run has completed, you will find a directory called wa_output in the location where you invoked wa run. Within this directory, you will find a “results.csv” file which will contain the results obtained for dhrystone, as well as a “run.log” file containing detailed log output for the run. You will also find a sub-directory called ‘wk1-dhrystone-1’ that contains the results for that iteration. Finally, you will find various additional information in the wa_output/__meta subdirectory, for example information extracted from the target and a copy of the agenda file. The contents of iteration-specific subdirectories will vary from workload to workload, and, along with the contents of the main output directory, will depend on the augmentations that were enabled for that run.

The run sub-command takes a number of options that control its behaviour; you can view those by executing wa run -h. Please see the Commands section for details.

Create an Agenda

Simply running a single workload is normally of little use. Typically, you would want to specify several workloads, set up the device state and, possibly, enable additional augmentations. To do this, you would need to create an “agenda” for the run that outlines everything you want WA to do.

Agendas are written using the YAML markup language. A simple agenda might look like this:

config:
        augmentations:
            - ~execution_time
            - targz
        iterations: 2
workloads:
        - memcpy
        - name: dhrystone
          params:
                mloops: 5
                threads: 1

This agenda:

  • Specifies two workloads: memcpy and dhrystone.

  • Specifies that dhrystone should run in one thread and execute five million loops.

  • Specifies that each of the two workloads should be run twice.

  • Enables the targz output processor, in addition to the output processors enabled in the config.yaml.

  • Disables the execution_time instrument, if it is enabled in the config.yaml.

An agenda can be created using WA’s create command or in a text editor and saved as a YAML file.

For more options please see the Defining Experiments With an Agenda documentation.

Using the Create Command

The easiest way to create an agenda is to use the ‘create’ command. For more in-depth information please see the Create Command documentation.

In order to populate the agenda with relevant information, you can supply all of the plugins you wish to use as arguments to the command. For example, if we want to create an agenda file for running dhrystone on a generic_android device, with the execution_time and trace-cmd instruments enabled and the metrics displayed using the csv output processor, we would use the following command:

wa create agenda generic_android dhrystone execution_time trace-cmd csv -o my_agenda.yaml

This will produce a my_agenda.yaml file containing all the relevant configuration for the specified plugins, along with their default values, as shown below:

config:
    augmentations:
    - execution_time
    - trace-cmd
    - csv
    iterations: 1
    device: generic_android
    device_config:
        adb_server: null
        big_core: null
        core_clusters: null
        core_names: null
        device: null
        disable_selinux: true
        executables_directory: null
        load_default_modules: true
        logcat_poll_period: null
        model: null
        modules: null
        package_data_directory: /data/data
        shell_prompt: !<tag:wa:regex> '8:^.*(shell|root)@.*:/\S* [#$] '
        working_directory: null
    execution_time: {}
    trace-cmd:
        buffer_size: null
        buffer_size_step: 1000
        events:
        - sched*
        - irq*
        - power*
        - thermal*
        functions: null
        no_install: false
        report: true
        report_on_target: false
    csv:
        extra_columns: null
        use_all_classifiers: false
workloads:
-   name: dhrystone
    params:
        cleanup_assets: true
        delay: 0
        duration: 0
        mloops: 0
        taskset_mask: 0
        threads: 4

Run Command

These examples show some useful options that can be used with WA’s run command.

Once we have created an agenda, we can pass it as an argument to the run command, e.g.:

wa run <path/to/agenda> (e.g. wa run ~/myagenda.yaml)

By default WA will use the “wa_output” directory to store its output; to redirect the output to a different directory we can use:

wa run dhrystone -d my_output_directory

We can also tell WA to use additional config files by supplying the -c argument. One use case for passing additional config files is if you have multiple devices you wish to test with WA: you can store the relevant device configuration in individual config files and then pass the file corresponding to the device you wish to use for that particular test.

Note

As previously mentioned, any more specific configuration present in the agenda file will overwrite the corresponding config parameters specified in the config file(s).

wa run -c myconfig.yaml ~/myagenda.yaml

To use the same output directory but overwrite the existing contents to store new dhrystone results, we can specify the -f argument:

wa run -f dhrystone

To display verbose output while running memcpy:

wa run --verbose memcpy

Output

The output directory will contain subdirectories for each job that was run, which will in turn contain the generated metrics and artifacts for each job. The directory will also contain a run.log file containing the complete log output for the run, and a __meta directory with the configuration and metadata for the run. Metrics are serialized inside result.json files inside each job’s subdirectory. There may also be a __failed directory containing failed attempts for jobs that have been re-run.

Augmentations may add additional files at the run or job directory level. The default configuration has the status and csv augmentations enabled, which generate a status.txt containing a status summary for the run and individual jobs, and a results.csv containing metrics from all jobs in a CSV table, respectively.
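
As a rough sketch (the exact contents depend on the workloads and augmentations used), a run of a single dhrystone job with the default configuration might produce a layout along these lines:

wa_output/
    __meta/
    run.log
    status.txt
    results.csv
    wk1-dhrystone-1/
        result.json
        ...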

See Output Directory Structure for more information.

In order to make it easier to access WA results from scripts, WA provides an API that parses the contents of the output directory:

>>> from wa import RunOutput
>>> ro = RunOutput('./wa_output')
>>> for job in ro.jobs:
...     if job.status != 'OK':
...         print('Job "{}" did not complete successfully: {}'.format(job, job.status))
...         continue
...     print('Job "{}":'.format(job))
...     for metric in job.metrics:
...         if metric.units:
...             print('\t{}: {} {}'.format(metric.name, metric.value, metric.units))
...         else:
...             print('\t{}: {}'.format(metric.name, metric.value))
...
Job "wk1-dhrystone-1":
        thread 0 score: 20833333
        thread 0 DMIPS: 11857
        thread 1 score: 24509804
        thread 1 DMIPS: 13950
        thread 2 score: 18011527
        thread 2 DMIPS: 10251
        thread 3 score: 26371308
        thread 3 DMIPS: 15009
        time: 1.001251 seconds
        total DMIPS: 51067
        total score: 89725972
        execution_time: 1.4834280014 seconds

See Output for details.

Uninstall

If you have installed Workload Automation via pip, then run this command to uninstall it:

sudo pip uninstall wa

Note

It will not remove any user configuration (e.g. the ~/.workload_automation directory).

Upgrade

To upgrade Workload Automation to the latest version via pip, run:

sudo pip install --upgrade --no-deps wa

How Tos

Defining Experiments With an Agenda

An agenda specifies what is to be done during a Workload Automation run, including which workloads will be run, with what configuration, which augmentations will be enabled, etc. Agenda syntax is designed to be both succinct and expressive.

Agendas are specified using YAML notation. It is recommended that you familiarize yourself with the linked page.

Specifying which workloads to run

The central purpose of an agenda is to specify what workloads to run. A minimalist agenda contains a single entry at the top level called “workloads” that maps onto a list of workload names to run:

workloads:
        - dhrystone
        - memcpy
        - rt_app

This specifies a WA run consisting of the dhrystone workload, followed by memcpy, followed by rt_app, using the augmentations specified in config.yaml (see the Configuration section).

Note

If you’re familiar with YAML, you will recognize the above as a single-key associative array mapping onto a list. YAML has two notations for both associative arrays and lists: block notation (seen above) and also in-line notation. This means that the above agenda can also be written in a single line as

workloads: [dhrystone, memcpy, rt-app]

(with the list in-lined), or

{workloads: [dhrystone, memcpy, rt-app]}

(with both the list and the associative array in-line). WA doesn’t care which of the notations is used as they all get parsed into the same structure by the YAML parser. You can use whatever format you find easier/clearer.

Note

WA plugin names are case-insensitive, and dashes (-) and underscores (_) are treated identically. So all of the following entries specify the same workload: rt_app, rt-app, RT-app.

Multiple iterations

There will normally be some variability in workload execution when running on a real device. In order to quantify it, multiple iterations of the same workload are usually performed. You can specify the number of iterations for each workload by adding an iterations field to the workload specifications (or “specs”):

workloads:
        - name: dhrystone
          iterations: 5
        - name: memcpy
          iterations: 5
        - name: cyclictest
          iterations: 5

Now that we’re specifying both the workload name and the number of iterations in each spec, we have to explicitly name each field of the spec.

It is often the case that, as in the example above, you will want to run all workloads for the same number of iterations. Rather than having to specify it for each and every spec, you can do so with a single entry by adding iterations to the config section of your agenda:

config:
        iterations: 5
workloads:
        - dhrystone
        - memcpy
        - cyclictest

If the same field is defined both in the config section and in a spec, then the value in the spec will overwrite the config value. For example, suppose we wanted to run all our workloads for five iterations, except cyclictest, which we want to run for ten (e.g. because we know it to be particularly unstable). This can be specified like this:

config:
        iterations: 5
workloads:
        - dhrystone
        - memcpy
        - name: cyclictest
          iterations: 10

Again, because we are now specifying two fields for the cyclictest spec, we have to explicitly name them.

Configuring Workloads

Some workloads accept configuration parameters that modify their behaviour. These parameters are specific to a particular workload and can alter the workload in any number of ways, e.g. set the duration for which to run, or specify a media file to be used, etc. The vast majority of workload parameters will have some default value, so it is only necessary to specify the name of the workload in order for WA to run it. However, sometimes you want more control over how a workload runs.

For example, by default, dhrystone will execute 10 million loops across four threads. Suppose your device has six cores available and you want the workload to load them all. You also want to increase the total number of loops accordingly to 15 million. You can specify this using dhrystone’s parameters:

config:
        iterations: 5
workloads:
        - name: dhrystone
          params:
                threads: 6
                mloops: 15
        - memcpy
        - name: cyclictest
          iterations: 10

Note

You can find out what parameters a workload accepts by looking it up in the Workloads section, or by using WA itself with the “show” command:

wa show dhrystone

See the Commands section for details.

In addition to configuring the workload itself, we can also specify configuration for the underlying device; this can be done by setting runtime parameters in the workload spec. Explicit runtime parameters have been exposed for configuring cpufreq, hotplug and cpuidle; for more detailed information see the Runtime Parameters section. For example, suppose we want to ensure the maximum score for our benchmarks at the expense of power consumption, so we want to set the cpufreq governor to “performance” and enable all of the cpus on the device (assuming there are 8 cpus available). This can be done like this:

config:
        iterations: 5
workloads:
        - name: dhrystone
          runtime_params:
                governor: performance
                num_cores: 8
          workload_params:
                threads: 6
                mloops: 15
        - memcpy
        - name: cyclictest
          iterations: 10

I’ve renamed params to workload_params for clarity, but that wasn’t strictly necessary as params is interpreted as workload_params inside a workload spec.

Runtime parameters do not automatically reset at the end of workload spec execution, so all subsequent iterations will also be affected unless they explicitly change the parameter (in the example above, the performance governor will also be used for memcpy and cyclictest). There are two ways around this: either set the reboot_policy WA setting (see the Configuration section) such that the device gets rebooted between job executions, thus being returned to its initial state, or set the default runtime parameter values in the config section of the agenda so that they get set for every spec that doesn’t explicitly override them.
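
A sketch of the second approach, assuming runtime parameters are accepted in the config section as described above (here the performance governor becomes the default for every spec unless a spec overrides it):

config:
        iterations: 5
        runtime_params:
                governor: performance
workloads:
        - dhrystone
        - memcpy
        - cyclictest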

If additional configuration of the device is required that is not exposed via the built-in runtime parameters, you can write a value to any file exposed on the device using sysfile_values. For example, we could have performed the same configuration manually (assuming we have a big.LITTLE system, and our cores 0-3 and 4-7 are in two separate DVFS domains, so setting the governor for cpu0 and cpu4 will affect all our cores), e.g.

config:
        iterations: 5
workloads:
        - name: dhrystone
          runtime_params:
                sysfile_values:
                    /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
                    /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor: performance
                    /sys/devices/system/cpu/cpu0/online: 1
                    /sys/devices/system/cpu/cpu1/online: 1
                    /sys/devices/system/cpu/cpu2/online: 1
                    /sys/devices/system/cpu/cpu3/online: 1
                    /sys/devices/system/cpu/cpu4/online: 1
                    /sys/devices/system/cpu/cpu5/online: 1
                    /sys/devices/system/cpu/cpu6/online: 1
                    /sys/devices/system/cpu/cpu7/online: 1
          workload_params:
                threads: 6
                mloops: 15
        - memcpy
        - name: cyclictest
          iterations: 10

Here, we’re specifying a sysfile_values runtime parameter for the device. For more information please see setting sysfiles.

APK Workloads

WA has various resource getters that can be configured to locate APK files, but for most people APK files should be kept in the $WA_USER_DIRECTORY/dependencies/SOME_WORKLOAD/ directory (by default ~/.workload_automation/dependencies/SOME_WORKLOAD/). The WA_USER_DIRECTORY environment variable can be used to change the location of this directory. The APK files need to be put into the corresponding directories for the workloads they belong to. The name of the file can be anything, but as explained below it may need to contain certain pieces of information.
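
For example, to relocate the directory (the path below is just an illustration):

export WA_USER_DIRECTORY=/data/wa_user_directory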

All ApkWorkloads have parameters that affect the way in which APK files are resolved: exact_abi, force_install and prefer_host_package. Their exact behaviours are outlined below.

exact_abi

If this setting is enabled, WA’s resource resolvers will require the native code present in the APK to match the device’s ABI exactly. By default this setting is disabled, since most APKs will work across all devices. You may wish to enable this feature when working with devices that support multiple ABIs (such as 64-bit devices that can run 32-bit APK files) and you are specifically trying to test one or the other.

force_install

If this setting is enabled, WA will always use the APK file on the host and re-install it on every iteration. If there is no APK on the host that is a suitable version and/or ABI for the workload, WA will error when force_install is enabled.

prefer_host_package

This parameter is used to specify a preference between the host and target versions of the app. When set to True, WA will prefer the host-side version of the APK: it will check whether the host has an APK that meets the version requirements of the workload. If so, and the target already has the same version installed, nothing will be done; otherwise WA will overwrite the target’s installed application with the host version. If the host is missing the APK, or it does not meet the version requirements, WA will fall back to the app on the target, provided it is present and a suitable version. When this parameter is set to False, WA will prefer to use the version already on the target if it meets the workload’s version requirements; if it does not, WA will fall back to searching the host for the correct version. In both modes, if neither the host nor the target have a suitable version, WA will produce an error and will not run the workload.

version

This parameter is used to specify which version of the workload’s UI automation should be used. Some workloads, e.g. geekbench, support multiple versions with drastically different UIs. An APK’s version will be automatically extracted, so it is possible to have APKs for multiple versions of a workload present on the host and to select which one is used for a particular job by specifying the relevant version in your agenda.

variant_name

Some workloads use variants of APK files; this is usually the case with web browser APK files. These work in exactly the same way as the version.
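
As a sketch of how these parameters might be set in an agenda (the workload and the version value are purely illustrative; check wa show <workload name> for the parameters a given workload actually supports):

workloads:
        - name: geekbench
          params:
                version: '4'
                exact_abi: true
                force_install: true
                prefer_host_package: false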

IDs and Labels

It is possible to list multiple specs with the same workload in an agenda. You may wish to do this if you want to run a workload with different parameter values or under different runtime configurations of the device. The workload name therefore does not uniquely identify a spec. To be able to distinguish between different specs (e.g. in reported results), each spec has an ID which is unique across all specs within an agenda (and therefore within a single WA run). If an ID isn’t explicitly specified using the id field (note that the field name is in lower case), one will be automatically assigned to the spec at the beginning of the WA run based on the position of the spec within the list. The first spec without an explicit ID will be assigned ID wk1, the second spec without an explicit ID will be assigned ID wk2, and so forth.

Numerical IDs aren’t particularly easy to deal with, which is why it is recommended that, for non-trivial agendas, you manually set the IDs to something more meaningful (or use labels; see below). An ID can be pretty much anything that will pass through the YAML parser. The only requirement is that it is unique within the agenda. However, it is usually better to keep them reasonably short (they don’t need to be globally unique), and to stick with alphanumeric characters and underscores/dashes. While WA can handle other characters as well, getting too adventurous with your IDs may cause issues further down the line when processing WA output (e.g. when uploading it to a database that may have its own restrictions).

In addition to IDs, you can also specify labels for your workload specs. These are similar to IDs, but do not have the uniqueness restriction. If specified, labels will be used by some output processors instead of (or in addition to) the workload name. For example, the csv output processor will put the label in the “workload” column of the CSV file.

It is up to you how you choose to use IDs and labels. WA itself doesn’t expect any particular format (apart from uniqueness for IDs). Below is the earlier example, updated to specify explicit IDs and to label the dhrystone spec to reflect the parameters used.

config:
        iterations: 5
workloads:
        - id: 01_dhry
          name: dhrystone
          label: dhrystone_15over6
          runtime_params:
                cpu0_governor: performance
          workload_params:
                threads: 6
                mloops: 15
        - id: 02_memc
          name: memcpy
        - id: 03_cycl
          name: cyclictest
          iterations: 10

Classifiers

Classifiers can be used in two distinct ways. The first is to supply them in an agenda as a set of key-value pairs which can be used to help identify sub-tests of a run; for example, if you have multiple sections in your agenda running your workloads at different frequencies, you might want to set a classifier specifying which frequencies are being used. These can then be utilized later, for example with the csv output processor with use_all_classifiers set to True: this will add additional columns to the output file for each of the classifier keys that have been specified, allowing for quick comparison.

An example agenda is shown here:

config:
    augmentations:
        - csv
    iterations: 1
    device: generic_android
    csv:
        use_all_classifiers: True
sections:
    - id: max_speed
      runtime_parameters:
          frequency: 1700000
      classifiers:
          freq: 1700000
    - id: min_speed
      runtime_parameters:
          frequency: 200000
      classifiers:
          freq: 200000
workloads:
-   name: recentfling

The other way that they can be used is by being automatically added by some workloads to identify their result metrics and artifacts. For example, some workloads perform multiple tests within the same execution run and therefore will use classifiers to differentiate between them; e.g. the recentfling workload will use classifiers to distinguish between which loop a particular result is for, or whether it is an average across all loops run.

The output from the agenda above will produce a csv file similar to what is shown below. Some columns have been omitted for clarity; however, as can be seen, the custom frequency classifier column has been added and populated, along with the loop classifier added by the workload.

id              | workload      | metric                    | freq      | loop    | value ‖
max_speed-wk1   | recentfling   | 90th Percentile           | 1700000   | 1       | 8     ‖
max_speed-wk1   | recentfling   | 95th Percentile           | 1700000   | 1       | 9     ‖
max_speed-wk1   | recentfling   | 99th Percentile           | 1700000   | 1       | 16    ‖
max_speed-wk1   | recentfling   | Jank                      | 1700000   | 1       | 11    ‖
max_speed-wk1   | recentfling   | Jank%                     | 1700000   | 1       | 1     ‖
# ...
max_speed-wk1   | recentfling   | Jank                      | 1700000   | 3       | 1     ‖
max_speed-wk1   | recentfling   | Jank%                     | 1700000   | 3       | 0     ‖
max_speed-wk1   | recentfling   | Average 90th Percentile   | 1700000   | Average | 7     ‖
max_speed-wk1   | recentfling   | Average 95th Percentile   | 1700000   | Average | 8     ‖
max_speed-wk1   | recentfling   | Average 99th Percentile   | 1700000   | Average | 14    ‖
max_speed-wk1   | recentfling   | Average Jank              | 1700000   | Average | 6     ‖
max_speed-wk1   | recentfling   | Average Jank%             | 1700000   | Average | 0     ‖
min_speed-wk1   | recentfling   | 90th Percentile           | 200000    | 1       | 7     ‖
min_speed-wk1   | recentfling   | 95th Percentile           | 200000    | 1       | 8     ‖
min_speed-wk1   | recentfling   | 99th Percentile           | 200000    | 1       | 14    ‖
min_speed-wk1   | recentfling   | Jank                      | 200000    | 1       | 5     ‖
min_speed-wk1   | recentfling   | Jank%                     | 200000    | 1       | 0     ‖
# ...
min_speed-wk1   | recentfling   | Jank                      | 200000    | 3       | 5     ‖
min_speed-wk1   | recentfling   | Jank%                     | 200000    | 3       | 0     ‖
min_speed-wk1   | recentfling   | Average 90th Percentile   | 200000    | Average | 7     ‖
min_speed-wk1   | recentfling   | Average 95th Percentile   | 200000    | Average | 8     ‖
min_speed-wk1   | recentfling   | Average 99th Percentile   | 200000    | Average | 13    ‖
min_speed-wk1   | recentfling   | Average Jank              | 200000    | Average | 4     ‖
min_speed-wk1   | recentfling   | Average Jank%             | 200000    | Average | 0     ‖

Sections

It is a common requirement to be able to run the same set of workloads under different device configurations. E.g. you may want to investigate the impact of changing a particular setting to different values on benchmark scores, or to quantify the impact of enabling a particular feature in the kernel. WA allows this by defining “sections” of configuration within an agenda.

For example, suppose that we want to measure the impact of using three different cpufreq governors on two benchmarks. We could create six separate workload specs and set the governor runtime parameter for each entry. However, this introduces a lot of duplication; and what if we want to change the spec configuration? We would have to change it in multiple places, running the risk of forgetting one.

A better way is to keep the two workload specs and define a section for each governor:

config:
        iterations: 5
        augmentations:
            - ~cpufreq
            - csv
        sysfs_extractor:
                paths: [/proc/meminfo]
        csv:
            use_all_classifiers: True
sections:
        - id: perf
          runtime_params:
                cpu0_governor: performance
        - id: inter
          runtime_params:
                cpu0_governor: interactive
        - id: sched
          runtime_params:
                cpu0_governor: sched
workloads:
        - id: 01_dhry
          name: dhrystone
          label: dhrystone_15over6
          workload_params:
                threads: 6
                mloops: 15
        - id: 02_memc
          name: memcpy
          augmentations: [sysfs_extractor]

A section, just like a workload spec, needs to have a unique ID. Apart from that, a “section” is similar to the config section we’ve already seen: everything that goes into a section will be applied to each workload spec. Workload specs defined under the top-level workloads entry will be executed for each of the sections listed under sections.

Note

It is also possible to have a workloads entry within a section, in which case, those workloads will only be executed for that specific section.

In order to maintain the uniqueness requirement of workload spec IDs, they will be namespaced under each section by prepending the section ID to the spec ID with a dash. So in the agenda above, we no longer have a workload spec with ID 01_dhry; instead there are three specs with IDs perf-01_dhry, inter-01_dhry and sched-01_dhry.

Note that the config section still applies to every spec in the agenda. So the precedence order is – spec settings override section settings, which in turn override global settings.

Section Groups

Section groups are a way of grouping sections together and are used to produce a cross product of each of the different groups. This can be useful when you want to run a set of experiments with all the available combinations without having to specify each combination manually.

For example, if we want to investigate the differences between running at the maximum and minimum frequency with both the maximum and minimum number of cpus online, we can create an agenda as follows:

sections:
  - id: min_freq
    runtime_parameters:
        freq: min
    group: frequency
  - id: max_freq
    runtime_parameters:
        freq: max
    group: frequency

  - id: min_cpus
    runtime_parameters:
        cpus: 1
    group: cpus
  - id: max_cpus
    runtime_parameters:
        cpus: 8
    group: cpus

workloads:
-  dhrystone

This will result in four jobs being generated, one for each of the possible combinations:

min_freq-min_cpus-wk1 (dhrystone)
min_freq-max_cpus-wk1 (dhrystone)
max_freq-min_cpus-wk1 (dhrystone)
max_freq-max_cpus-wk1 (dhrystone)

Each of the generated jobs will automatically have a classifier added for each group, with the associated section ID as the value.

# ... e.g. iterating over the jobs of a RunOutput, as shown in the Output section
print('Job ID: {}'.format(job.id))
print('Classifiers:')
for k, v in job.classifiers.items():
    print('  {}: {}'.format(k, v))

Job ID: min_freq-min_cpus-wk1
Classifiers:
    frequency: min_freq
    cpus: min_cpus

Augmentations

Augmentations are plugins that augment the execution of workload jobs with additional functionality; usually, that takes the form of generating additional metrics and/or artifacts, such as traces or logs. There are two types of augmentations:

Instruments

These “instrument” a WA run in order to change its behaviour (e.g. introducing delays between successive job executions), or to collect additional measurements (e.g. energy usage). Some instruments may depend on particular features being enabled on the target (e.g. cpufreq), or on additional hardware (e.g. energy probes).

Output processors

These post-process metrics and artifacts generated by workloads or instruments, as well as target metadata collected by WA, in order to generate additional metrics and/or artifacts (e.g. generating statistics or reports). Output processors are also used to export WA output externally (e.g. upload to a database).

The main practical difference between instruments and output processors is that the former rely on an active connection to the target to function, whereas the latter only operate on previously collected results and metadata. This means that output processors can be run “off-line” using the wa process command.
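
For example, to re-run the enabled output processors over an existing output directory:

wa process wa_output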

Both instruments and output processors are configured in the same way in the agenda, which is why they are grouped together into “augmentations”. Augmentations are enabled by listing them under the augmentations entry in a config file or the config section of the agenda.

config:
        augmentations: [trace-cmd]

The code above illustrates an agenda entry that enables the trace-cmd instrument.

If you have multiple augmentations entries (e.g. both in your config file and in the agenda), then they will be combined, so that the final set of augmentations for the run will be their union.

Note

WA2 did not have augmentations, and instead supported “instrumentation” and “result_processors” as distinct configuration entries. For compatibility, these entries are still supported in WA3; however, they should be considered deprecated, and their use is discouraged.

Configuring augmentations

Most augmentations take parameters that modify their behavior. The parameters available for a particular augmentation can be viewed using the wa show <augmentation name> command. This will also show the default values used. Values for these parameters can be specified by creating an entry with the augmentation’s name and specifying parameter values under it.

config:
        augmentations: [trace-cmd]
        trace-cmd:
                events: ['sched*', 'power*', irq]
                buffer_size: 100000

The code above specifies values for the events and buffer_size parameters of the trace-cmd instrument, as well as enabling it.

You may specify configuration for the same augmentation in multiple locations (e.g. your config file and the config section of the agenda). These entries will be combined to form the final configuration for the augmentation used during the run. If different values for the same parameter are present in multiple entries, the ones “more specific” to a particular run will be used (e.g. values in the agenda will override those in the config file).

Note

Creating an entry for an augmentation alone does not enable it! You must list it under augmentations in order for it to be enabled for a run. This makes it easier to quickly enable and disable augmentations with complex configurations, and also allows defining “static” configuration in the top-level config, without actually enabling the augmentation for all runs.

Disabling augmentations

Sometimes, you may wish to disable an augmentation for a particular run, but you want to keep it enabled in general. You could modify your config file to temporarily disable it. However, you must then remember to re-enable it afterwards. This could be inconvenient and error prone, especially if you’re running multiple experiments in parallel and only want to disable the augmentation for one of them.

Instead, you can explicitly disable an augmentation by specifying its name prefixed with a tilde (~) inside augmentations.

config:
        augmentations: [trace-cmd, ~cpufreq]

The code above enables the trace-cmd instrument and disables the cpufreq instrument (which is enabled in the default config).

If you want to start the configuration for an experiment from a “blank slate” and want to disable all previously-enabled augmentations, without necessarily knowing what they are, you can use the special ~~ entry.

config:
        augmentations: [~~, trace-cmd, csv]

The code above disables all augmentations enabled up to that point, and enables trace-cmd and csv for this run.

Note

The ~~ only disables augmentations from previously-processed sources. Its ordering in the list does not matter. For example, specifying augmentations: [trace-cmd, ~~, csv] will have exactly the same effect as above – i.e. both trace-cmd and csv will be enabled.

Workload-specific augmentation

It is possible to enable or disable (but not configure) augmentations at workload or section level, as well as in the global config, in which case, the augmentations would only be enabled/disabled for that workload/section. If the same augmentation is enabled at one level and disabled at another, as with all WA configuration, the more specific settings will take precedence over the less specific ones (i.e. workloads override sections that, in turn, override global config).

Augmentations Example

config:
        augmentations: [~~, fps]
        trace-cmd:
                events: ['sched*', 'power*', irq]
                buffer_size: 100000
        file_poller:
                files:
                        - /sys/class/thermal/thermal_zone0/temp
sections:
        - classifiers:
                type: energy
          augmentations: [energy_measurement]
        - classifiers:
                type: trace
          augmentations: [trace-cmd, file_poller]
workloads:
        - gmail
        - geekbench
        - googleplaybooks
        - name: dhrystone
          augmentations: [~fps]

The example above shows an experiment that runs a number of workloads in order to evaluate their thermal impact and energy usage. All previously-configured augmentations are disabled with ~~, so that only the configuration specified in this agenda is enabled. Since most of the workloads are “productivity” use cases that do not generate their own metrics, the fps instrument is enabled to get some meaningful performance metrics for them; the only exception is dhrystone, which is a benchmark that reports its own metrics and has no GUI, so the instrument is disabled for it using ~fps.

Each workload will be run in two configurations: once, to collect energy measurements, and once to collect thermal data and kernel trace. Trace can give insight into why a workload is using more or less energy than expected, but it can be relatively intrusive and might impact absolute energy and performance metrics, which is why it is collected separately. Classifiers are used to separate metrics from the two configurations in the results.

Other Configuration

As mentioned previously, the config section in an agenda can contain anything that can be defined in config.yaml. Certain configuration (e.g. run_name) makes more sense to define in an agenda than in a config file. Refer to the Configuration section for details.

config:
        project: governor_comparison
        run_name: performance_vs_interactive

        device: generic_android
        reboot_policy: never

        iterations: 5
        augmentations:
            - ~cpufreq
            - csv
        sysfs_extractor:
                paths: [/proc/meminfo]
        csv:
            use_all_classifiers: True
sections:
        - id: perf
          runtime_params:
                cpu0_governor: performance
        - id: inter
          runtime_params:
                cpu0_governor: interactive
workloads:
        - id: 01_dhry
          name: dhrystone
          label: dhrystone_15over6
          workload_params:
                threads: 6
                mloops: 15
        - id: 02_memc
          name: memcpy
          augmentations: [sysfs_extractor]
        - id: 03_cycl
          name: cyclictest
          iterations: 10

Setting Up A Device

WA should work with most Android devices out-of-the box, as long as the device is discoverable by adb (i.e. gets listed when you run adb devices). For USB-attached devices, that should be the case; for network devices, adb connect would need to be invoked with the IP address of the device. If there is only one device connected to the host running WA, then no further configuration should be necessary (though you may want to tweak some Android settings).

If you have multiple devices connected, have a non-standard Android build (e.g. on a development board), or want to use some of the more advanced WA functionality, further configuration will be required.

Android

General Device Setup

You can specify the device interface by setting the device setting in a config file or section. Available interfaces can be viewed by running the wa list targets command. If you don’t see your specific platform listed (which is likely unless you’re using one of the Arm-supplied platforms), then you should use the generic_android interface (this is what is used by the default config).

device: generic_android

The device interface may be configured through the device_config setting, whose value is a dict mapping setting names to their values. Some of the most common parameters you might want to change are outlined below.

device

If you have multiple Android devices connected to the host machine, you will need to set this to indicate to WA which device you want it to use. This will be the adb name that is displayed when running adb devices.

working_directory

WA needs a “working” directory on the device which it will use for collecting traces, caching assets it pushes to the device, etc. By default, it will create one under /sdcard which should be mapped and writable on standard Android builds. If this is not the case for your device, you will need to specify an alternative working directory (e.g. under /data/local).

load_default_modules

A number of “default” modules (e.g. for the cpufreq subsystem) are loaded automatically, unless explicitly disabled. If you encounter an issue with one of the modules then this setting can be set to False and any specific modules that you require can be requested via the modules entry.

modules

A list of additional modules to be installed for the target. Devlib implements functionality for particular subsystems as modules. If additional modules need to be loaded, they may be specified using this parameter.

Please see the devlib documentation for information on the available modules.
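
For example, a minimal device_config sketch that disables the default modules and requests only the cpufreq module (shown purely as an illustration; load the modules your workloads actually need):

device_config:
        load_default_modules: False
        modules: ['cpufreq']
        # ...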

core_names

core_names should be a list of core names matching the order in which they are exposed in sysfs. For example, Arm TC2 SoC is a 2x3 big.LITTLE system; its core_names would be ['a7', 'a7', 'a7', 'a15', 'a15'], indicating that cpu0-cpu2 in cpufreq sysfs structure are A7’s and cpu3 and cpu4 are A15’s.

Note

This should not usually need to be provided as it will be automatically extracted from the target.

A typical device_config inside config.yaml may look something like

device_config:
        device: 0123456789ABCDEF
# ...

or a more specific config could be:

device_config:
        device: 0123456789ABCDEF
        working_directory: '/sdcard/wa-working'
        load_default_modules: True
        modules: ['hotplug', 'cpufreq']
        core_names : ['a7', 'a7', 'a7', 'a15', 'a15']
        # ...
Configuring Android

There are a few additional tasks you may need to perform once you have a device booted into Android (especially if this is an initial boot of a fresh OS deployment):

  • You have gone through FTU (first time usage) on the home screen and in the apps menu.

  • You have disabled the screen lock.

  • You have set sleep timeout to the highest possible value (30 mins on most devices).

  • You have set the locale language to “English” (this is important for some workloads in which UI automation looks for specific text in UI elements).

Juno Setup

Note

At the time of writing, the Android software stack on Juno was still very immature. Some workloads may not run, and there may be stability issues with the device.

The full software stack can be obtained from Linaro:

https://releases.linaro.org/android/images/lcr-reference-juno/latest/

Please follow the instructions on the “Binary Image Installation” tab on that page. More up-to-date firmware and kernel may also be obtained by registered members from ARM Connected Community: http://www.arm.com/community/ (though this is not guaranteed to work with the Linaro file system).

UEFI

Juno uses UEFI to boot the kernel image. UEFI supports multiple boot configurations, and presents a menu on boot to select from (in the default configuration it will automatically boot the first entry in the menu if not interrupted before a timeout). WA will look for a specific entry in the UEFI menu ('WA' by default, but that may be changed by setting uefi_entry in the device_config). When following the UEFI instructions on the above Linaro page, please make sure to name the entry appropriately (or to correctly set the uefi_entry).
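
For example, if your UEFI entry is named something other than 'WA', a device_config sketch along these lines (the entry name is illustrative) would point WA at it:

device_config:
        uefi_entry: 'wa-linaro'
        # ...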

There are two supported ways for Juno to discover kernel images through UEFI. It can either load them from NOR flash on the board, or from the boot partition on the file system. The setup described on the Linaro page uses the boot partition method.

If WA does not find the UEFI entry it expects, it will create one. However, it will assume that the kernel image resides in NOR flash, which means it will not work with the Linaro file system. So if you’re replicating the Linaro setup exactly, you will need to create the entry manually, as outlined on the above-linked page.

Rebooting

At the time of writing, normal Android reboot did not work properly on Juno Android, causing the device to crash into an irrecoverable state. Therefore, WA will perform a hard reset to reboot the device. It will attempt to do this by toggling the DTR line on the serial connection to the device. In order for this to work, you need to make sure that SW1 configuration switch on the back panel of the board (the right-most DIP switch) is toggled down.

Linux

General Device Setup

You can specify the device interface by setting the device setting in a config file or section. Available interfaces can be viewed by running the wa list targets command. If you don’t see your specific platform listed (which is likely unless you’re using one of the Arm-supplied platforms), then you should use the generic_linux interface.

device: generic_linux

The device interface may be configured through the device_config setting, whose value is a dict mapping setting names to their values. Some of the most common parameters you might want to change are outlined below.

host

This should be either the DNS name or IP address of the device.

username

The login name of the user on the device that WA will use. This user should have a home directory (unless an alternative working directory is specified using the working_directory config – see below), and, for full functionality, the user should have sudo rights (WA will be able to use sudo-less accounts, but some instruments or workloads may not work).

password

Password for the account on the device. Either this or a keyfile (see below) must be specified.

keyfile

If key-based authentication is used, this may be used to specify the SSH identity file instead of the password.

property_files

This is a list of paths that will be pulled for each WA run into the __meta subdirectory in the results. The intention is to collect meta-data about the device that may aid in reproducing the results later. The paths specified do not have to exist on the device (they will be ignored if they do not). The default list is ['/proc/version', '/etc/debian_version', '/etc/lsb-release', '/etc/arch-release']
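
For example, to pull additional metadata files on each run, property_files can be set explicitly (the list below is illustrative):

device_config:
        property_files: ['/proc/version', '/proc/cpuinfo', '/etc/lsb-release']
        # ...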

In addition, working_directory, core_names, modules etc. can also be specified and have the same meaning as for Android devices (see above).

A typical device_config inside config.yaml may look something like

device_config:
        host: 192.168.0.7
        username: guest
        password: guest
        # ...

Chrome OS

General Device Setup

You can specify the device interface by setting the device setting in a config file or section. Available interfaces can be viewed by running the wa list targets command. If you don’t see your specific platform listed (which is likely unless you’re using one of the Arm-supplied platforms), then you should use the generic_chromeos interface.

device: generic_chromeos

The device interface may be configured through the device_config setting, whose value is a dict mapping setting names to their values. The ChromeOS target is essentially the same as a Linux device and requires a similar setup; however, it also optionally supports connecting to an Android container running on the device, which will be automatically detected if present. If the device supports Android applications then the Android configuration is also supported. In order to support this, WA will open 2 connections to the device: one via SSH to the main OS, and another via ADB to the Android container, where a limited subset of functionality can be performed.

In order to distinguish between the two connections, some of the Android-specific configuration has been renamed to reflect the destination.

android_working_directory

WA needs a “working” directory on the device which it will use for collecting traces, caching assets it pushes to the device, etc. By default, it will create one under /sdcard which should be mapped and writable on standard Android builds. If this is not the case for your device, you will need to specify an alternative working directory (e.g. under /data/local).

A typical device_config inside config.yaml for a ChromeOS device may look something like

device_config:
        host: 192.168.0.7
        username: root
        android_working_directory: '/sdcard/wa-working'
        # ...

Note

This assumes that your Chromebook is in developer mode and is configured to run an SSH server with the appropriate ssh keys added to the authorized_keys file on the device.

Adding a new target interface

If you are working with a particularly unusual device (e.g. an early-stage development board) or need to be able to handle some quirk of your Android build, the configuration available in the generic_android interface may not be enough for you. In that case, you may need to write a custom interface for your device. A device interface is an Extension (a plug-in) type in WA and is implemented similarly to other extensions (such as workloads or instruments). Please refer to the adding a custom target section for information on how this may be done.

Automating GUI Interactions With Revent

Overview and Usage

The revent utility can be used to record and later play back a sequence of user input events, such as key presses and touch screen taps. This is an alternative to Android UI Automator for providing automation for workloads.

Using revent with workloads

Some workloads (pretty much all games) rely on recorded revents for their execution. ReventWorkloads require between 1 and 4 revent files to be run. There is one mandatory recording, run, for performing the actual execution of the workload; the remaining stages are optional. setup can be used to perform the initial setup (navigating menus, selecting game modes, etc.). extract_results can be used to perform any actions after the main stage of the workload, for example navigating to a results or summary screen of the app. Finally, teardown can be used to perform any final actions, for example exiting the app.

Because revents are very device-specific*, these files would need to be recorded for each device.

The files must be called <device name>.(setup|run|extract_results|teardown).revent, where <device name> is the name of your device (as defined by the model name of your device which can be retrieved with adb shell getprop ro.product.model or by the name attribute of your customized device class).

WA will look for these files in two places: <installdir>/wa/workloads/<workload name>/revent_files and $WA_USER_DIRECTORY/dependencies/<workload name>. The first location is primarily intended for revent files that come with WA (and if you did a system-wide install, you’ll need sudo to add files there), so it’s probably easier to use the second location for the files you record. Also, if revent files for a workload exist in both locations, the files under $WA_USER_DIRECTORY/dependencies will be used in favour of those installed with WA.

*

It’s not just about screen resolution – the event codes may be different even if devices use the same screen.
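
As an illustration, assuming your device model is reported as Nexus10 (a hypothetical example) and you have recorded the setup and run stages for the angrybirds_rio workload, the files could be placed under the default user directory like this:

mkdir -p ~/.workload_automation/dependencies/angrybirds_rio
cp Nexus10.setup.revent Nexus10.run.revent ~/.workload_automation/dependencies/angrybirds_rio/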

Recording

WA features a record command that will automatically deploy and start revent on the target device.

If you want to simply record a single recording on the device then the following command can be used which will save the recording in the current directory:

wa record

There is one mandatory stage called ‘run’ and 3 optional stages: ‘setup’, ‘extract_results’ and ‘teardown’, which are used for playback of a workload. The different stages are distinguished by the suffix in the recording file path. To facilitate creating these recordings, you can specify --setup, --extract-results, --teardown or --all to indicate which stages you would like to create recordings for, and the appropriate file name will be generated.

You can also directly specify a workload to create recordings for, and WA will walk you through the relevant steps. For example, if we wanted to create recordings for the Angrybirds Rio workload, we can specify the workload with the -w flag. In this case WA can be used to automatically deploy and launch the workload and record the setup (-s), run (-r) and teardown (-t) stages for the workload. In order to do this we would use the following command, with an example output shown below:

wa record -srt -w angrybirds_rio
INFO     Setting up target
INFO     Deploying angrybirds_rio
INFO     Press Enter when you are ready to record SETUP...
[Pressed Enter]
INFO     Press Enter when you have finished recording SETUP...
[Pressed Enter]
INFO     Pulling '<device_model>.setup.revent' from device
INFO     Press Enter when you are ready to record RUN...
[Pressed Enter]
INFO     Press Enter when you have finished recording RUN...
[Pressed Enter]
INFO     Pulling '<device_model>.run.revent' from device
INFO     Press Enter when you are ready to record TEARDOWN...
[Pressed Enter]
INFO     Press Enter when you have finished recording TEARDOWN...
[Pressed Enter]
INFO     Pulling '<device_model>.teardown.revent' from device
INFO     Tearing down angrybirds_rio
INFO     Recording(s) are available at: '$WA_USER_DIRECTORY/dependencies/angrybirds_rio/revent_files'

Once you have made your desired recordings, you can either manually play back individual recordings using the replay command or, with the recordings in the appropriate dependencies location, simply run the workload using the run command and then all the available recordings will be played back automatically.

For more information on available arguments please see the Record command.

Note

By default revent recordings are not portable across devices and therefore will require recording for each new device you wish to use the workload on. Alternatively a “gamepad” recording mode is also supported. This mode requires a gamepad to be connected to the device when recording but the recordings produced in this mode should be portable across devices.

Replaying

If you want to replay a single recorded file, you can use wa replay providing it with the file you want to replay. An example of the command output is shown below:

wa replay my_recording.revent
INFO     Setting up target
INFO     Pushing file to target
INFO     Starting replay
INFO     Finished replay

If you are using a device that supports Android you can optionally specify a package name to launch before replaying the recording.
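
For example (the package name below is illustrative):

wa replay -p com.example.mygame my_recording.revent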

If you have recorded the required files for your workload and have placed them in the appropriate location (or specified the workload during recording) then you can simply run the relevant workload and your recordings will be replayed at the appropriate times automatically.

For more information please see the Replay command.

Revent vs UiAutomator

In general, Android UI Automator is the preferred way of automating user input for Android workloads because, unlike revent, UI Automator does not depend on a particular screen resolution, and so is more portable across different devices. It also gives better control and can potentially be faster for doing UI manipulations, as input events are scripted based on the available UI elements, rather than generated by human input.

On the other hand, revent can be used to manipulate pretty much any workload, whereas UI Automator only works for Android UI elements (such as text boxes or radio buttons), which makes the latter useless for things like games. Recording a revent sequence is also faster than writing automation code (on the other hand, one would need to maintain a different revent log for each screen resolution).

Note

For ChromeOS targets, UI Automator can only be used with Android applications and not the ChromeOS host applications themselves.

User Reference

Configuration

Agenda

An agenda can be thought of as a way to define an experiment as it specifies what is to be done during a Workload Automation run. This includes which workloads will be run, with what configuration and which augmentations will be enabled, etc. Agenda syntax is designed to be both succinct and expressive and is written using YAML notation.

There are three valid top level entries which are: config, workloads, sections.

An example agenda can be seen here:

config:                     # General configuration for the run
    user_directory: ~/.workload_automation/
    default_output_directory: 'wa_output'
    augmentations:          # A list of all augmentations to be enabled and disabled.
    - trace-cmd
    - csv
    - ~dmesg                # Disable the dmesg augmentation

    iterations: 1           # How many iterations to run each workload by default

    device: generic_android
    device_config:
        device: R32C801B8XY # The adb name of our device we want to run on
        disable_selinux: true
        load_default_modules: true
        package_data_directory: /data/data

    trace-cmd:              # Provide config for the trace-cmd augmentation.
        buffer_size_step: 1000
        events:
        - sched*
        - irq*
        - power*
        - thermal*
        no_install: false
        report: true
        report_on_target: false
    csv:                    # Provide config for the csv augmentation
        use_all_classifiers: true

sections:                   # Configure what sections we want and their settings
    - id: LITTLES           # Run workloads just on the LITTLE cores
      runtime_parameters:   # Supply RT parameters to be used for this section
            num_little_cores: 4
            num_big_cores: 0

    - id: BIGS               # Run workloads just on the big cores
      runtime_parameters:    # Supply RT parameters to be used for this section
            num_big_cores: 4
            num_little_cores: 0

workloads:                  # List which workloads should be run
-   name: benchmarkpi
    augmentations:
        - ~trace-cmd        # Disable the trace-cmd instrument for this workload
    iterations: 2           # Override the global number of iteration for this workload
    params:                 # Specify workload parameters for this workload
        cleanup_assets: true
        exact_abi: false
        force_install: false
        install_timeout: 300
        markers_enabled: false
        prefer_host_package: true
        strict: false
        uninstall: false
-   dhrystone               # Run the dhrystone workload with all default config

This agenda will result in a total of 6 jobs being executed on our Android device: 4 of them running the BenchmarkPi workload with its customized workload parameters and 2 running dhrystone with its default configuration. The first 3 will run only on the LITTLE cores and the latter 3 only on the big cores. For all of the jobs executed, the output will be processed by the csv processor (plus any additional processors enabled in the default config file); however, trace data will only be collected for the dhrystone jobs.

config

This section is used to provide overall configuration for WA and its run. The config section of an agenda will be merged with any other configuration files provided (including the default config file), with the most specific configuration taking precedence (see Config Merging for more information). The only restriction is that run_name can only be specified in the config section of an agenda, as this would not make sense to set as a default.

Within this section there are multiple distinct types of configuration that can be provided. However, in addition to the options listed here, all configuration that is available for sections can also be entered here and will be globally applied.

Configuration

The first is configuration of the behaviour of WA and how the run as a whole will behave. The most common options that you may want to specify are:

device

The name of the ‘device’ that you wish to perform the run on. This name is a combination of a devlib Platform and Target. To see the available options please use wa list targets.

device_config

This is a dict mapping allowing you to configure which target to connect to (e.g. host for an SSH connection or device to specify an ADB name) as well as to configure other options for the device, for example the working_directory or the list of modules to be loaded onto the device. (For more information please see here.)

execution_order

Defines the order in which the agenda spec will be executed.

reboot_policy

Defines when during execution of a run a Device will be rebooted.

max_retries

The maximum number of times failed jobs will be retried before giving up.

allow_phone_home

Prevent running any workloads that are marked with ‘phones_home’.

For more information and a full list of these configuration options please see Run Configuration and Meta Configuration.

Plugins
augmentations

Specify a list of which augmentations should be enabled (or if prefixed with a ~, disabled).

Note

While augmentations can be enabled and disabled on a per workload basis, they cannot yet be re-configured part way through a run and the configuration provided as part of an agenda config section or separate config file will be used for all jobs in a WA run.

<plugin_name>

You can also use this section to supply configuration for specific plugins, such as augmentations, workloads, resource getters etc. To do this the plugin name you wish to configure should be provided as an entry in this section and should contain a mapping of configuration options to their desired settings. If configuration is supplied for a plugin that is not currently enabled then it will simply be ignored. This allows for plugins to be temporarily removed without also having to remove their configuration, or to provide a set of defaults for a plugin which can then be overridden.

<global_alias>

Some plugins provide global aliases which can set one or more configuration options at once, and these can also be specified here. For example if you specify a value for the entry remote_assets_url this will set the URL the http resource getter will use when searching for any missing assets.
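
For example, a config sketch setting this alias might look like the following (the URL is illustrative):

config:
    remote_assets_url: https://my.assets.server/wa-assets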


workloads

Here you can specify a list of workloads to be run. If you wish to run a workload with all default values then you can specify the workload name directly as an entry, otherwise a dict mapping should be provided. Any settings provided here will be the most specific and therefore override any other more generalised configuration for that particular workload spec. The valid entries are as follows:

workload_name

(Mandatory) The name of the workload to be run

iterations

Specify how many iterations the workload should be run

label

Similar to IDs but do not have the uniqueness restriction. If specified, labels will be used by some output processors instead of (or in addition to) the workload name. For example, the csv output processor will put the label in the “workload” column of the CSV file.

augmentations

The instruments and output processors to enable (or disable using a ~) during this workload.

classifiers

Classifiers allow you to tag metrics from this workload spec; these are often used to help identify what runtime parameters were used when post-processing results.

workload_parameters

Any parameters to configure that particular workload in a dict form.

Alias: workload_params

Note

You can see available parameters for a given workload with the show command or look it up in the Plugin Reference.

runtime_parameters

A dict mapping of any runtime parameters that should be set for the device for that particular workload. For available parameters please see runtime parameters.

Alias: runtime_parms

Note

Unless specified elsewhere these configurations will not be undone once the workload has finished. I.e. if the frequency of a core is changed it will remain at that frequency until otherwise changed.

Note

There is also a shorter params alias available; however, this alias will be interpreted differently depending on whether it is used in a workload entry, in which case it will be interpreted as workload_params, or at global config or section (see below) level, in which case it will be interpreted as runtime_params.
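
A minimal sketch illustrating the two interpretations (the parameter values are illustrative):

config:
    params:                 # interpreted as runtime_params at this level
        brightness: 100
workloads:
    - name: dhrystone
      params:               # interpreted as workload_params within a workload entry
          threads: 2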


sections

Sections are used for grouping sets of configuration together in order to reduce the need for duplicated configuration (for more information please see Sections). Each section specified will be applied to each entry in the workloads section. The valid configuration entries are the same as the "workloads" section as mentioned above, except you can additionally specify:

workloads

An entry which can be provided with the same configuration entries as the workloads top level entry.
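
For example, a section sketch with its own nested workloads entry (the id and values are illustrative):

sections:
    - id: low_freq
      runtime_parameters:
            frequency: min
      workloads:
            - memcpy
            - name: dhrystone
              iterations: 2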


Run Configuration

In addition to specifying run execution parameters through an agenda, the behaviour of WA can be modified through configuration file(s). The default configuration file is ~/.workload_automation/config.yaml (the location can be changed by setting the WA_USER_DIRECTORY environment variable, see the Environment Variables section below). This file will be created when you first run WA if it does not already exist. This file must always exist and will always be loaded. You can add to or override the contents of that file on invocation of Workload Automation by specifying an additional configuration file using the --config option. Variables with specific names will be picked up by the framework and used to modify the behaviour of Workload Automation, e.g. the iterations variable might be specified to tell WA how many times to run each workload.


execution_order:

type: 'str'

Defines the order in which the agenda spec will be executed. At the moment, the following execution orders are supported:

"by_iteration"

The first iteration of each workload spec is executed one after the other, so all workloads are executed before proceeding on to the second iteration, e.g. A1 B1 C1 A2 C2 A3 (where A, B and C are workload specs configured for three, one and two iterations respectively). This is the default if no order is explicitly specified.

In the case of multiple sections, this will spread them out, such that specs from the same section are further apart. E.g. given sections X and Y, global specs A and B, and two iterations, this will run

X.A1, Y.A1, X.B1, Y.B1, X.A2, Y.A2, X.B2, Y.B2
"by_section"

Same as "by_iteration", however this will group specs from the same section together, so given sections X and Y, global specs A and B, and two iterations, this will run

X.A1, X.B1, Y.A1, Y.B1, X.A2, X.B2, Y.A2, Y.B2
"by_workload"

All iterations of the first spec are executed before moving on to the next spec. E.g:

X.A1, X.A2, Y.A1, Y.A2, X.B1, X.B2, Y.B1, Y.B2
"random"

Execution order is entirely random.

allowed values: 'by_iteration', 'by_section', 'by_workload', 'random'

default: 'by_iteration'

reboot_policy:

type: 'RebootPolicy'

This defines when during execution of a run the Device will be rebooted. The possible values are:

"as_needed"

The device will only be rebooted if the need arises (e.g. if it becomes unresponsive).

"never"

The device will never be rebooted.

"initial"

The device will be rebooted when the execution first starts, just before executing the first workload spec.

"each_job"

The device will be rebooted before each new job.

"each_spec"

The device will be rebooted before running a new workload spec.

Note

This acts the same as each_job when execution order is set to by_iteration

"run_completion"

The device will be rebooted after the run has been completed.

allowed values: 'never', 'as_needed', 'initial', 'each_spec', 'each_job', 'run_completion'

default: 'as_needed'

device:

type: 'str'

This setting defines what specific Device subclass will be used to interact with the connected device. Obviously, this must match your setup.

default: 'generic_android'

retry_on_status:

type: 'list_of_Enums'

This is a list of statuses on which a job will be considered to have failed and will be automatically retried up to max_retries times. This defaults to ["FAILED", "PARTIAL"] if not set. Possible values are:

"OK"

This iteration has completed and no errors have been detected

"PARTIAL"

One or more instruments have failed (the iteration may still be running).

"FAILED"

The workload itself has failed.

"ABORTED"

The user interrupted the workload.

allowed values: RUNNING, OK, PARTIAL, FAILED, ABORTED, SKIPPED

default: ['FAILED', 'PARTIAL']

max_retries:

type: 'integer'

The maximum number of times failed jobs will be retried before giving up.

Note

This number does not include the original attempt

default: 2

bail_on_init_failure:

type: 'boolean'

When jobs fail during their main setup and run phases, WA will continue attempting to run the remaining jobs. However, by default, if they fail during their early initialization phase, the entire run will end without continuing to run jobs. Setting this to False means that WA will instead skip all the jobs from the job spec that failed, but continue attempting to run others.

default: True

bail_on_job_failure:

type: 'boolean'

When a job fails during its run phase, WA will attempt to retry the job, then continue with remaining jobs after. Setting this to True means WA will skip remaining jobs and end the run if a job has retried the maximum number of times, and still fails.

default: False

allow_phone_home:

type: 'boolean'

Setting this to False prevents running any workloads that are marked with ‘phones_home’, meaning they are at risk of exposing information about the device to the outside world. For example, some benchmark applications upload device data to a database owned by the maintainers.

This can be used to minimise the risk of accidentally running such workloads when testing confidential devices.

default: True


Meta Configuration

There are also a couple of settings that are used to provide additional metadata for a run. These may get picked up by instruments or output processors to attach context to results.

user_directory:

type: 'expanded_path'

Path to the user directory. This is the location WA will look for user configuration, additional plugins and plugin dependencies.

default: '~/.workload_automation'

assets_repository:

type: 'str'

The local mount point for the filer hosting WA assets.

default: ''

logging:

type: 'LoggingConfig'

WA logging configuration. This should be a dict with a subset of the following keys:

:normal_format: Logging format used for console output
:verbose_format: Logging format used for verbose console output
:file_format: Logging format used for run.log
:color: If ``True`` (the default), console logging output will
        contain bash color escape codes. Set this to ``False`` if
        console output will be piped somewhere that does not know
        how to handle those.

default:

{
    file_format: %(asctime)s %(levelname)-8s %(name)s: %(message)s,
    verbose_format: %(asctime)s %(levelname)-8s %(name)s: %(message)s,
    regular_format: %(levelname)-8s %(message)s,
    color: True
}
verbosity:

type: 'integer'

Verbosity of console output.

default: 0

default_output_directory:

type: 'str'

The default output directory that will be created if not specified when invoking a run.

default: 'wa_output'

extra_plugin_paths:

type: 'list_of_strs'

A list of additional paths to scan for plugins.


Environment Variables

In addition to standard configuration described above, WA behaviour can be altered through environment variables. These can determine where WA looks for various assets when it starts.

WA_USER_DIRECTORY

This is the location WA will look for config.yaml, plugins, dependencies, and it will also be used for local caches, etc. If this variable is not set, the default location is ~/.workload_automation (this is created when WA is installed).

Note

This location must be writable by the user who runs WA.
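
For example, to point WA at an alternative user directory for a particular session (the path is illustrative):

export WA_USER_DIRECTORY=~/wa_workspaces/experiment1
wa run dhrystone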

WA_LOG_BUFFER_CAPACITY

Specifies the capacity (in log records) for the early log handler which is used to buffer log records until a log file becomes available. If this is not set, the default value of 1000 will be used. This should be sufficient for most scenarios; however, it may need to be increased, e.g. if the plugin loader scans a very large number of locations. It may also be set to a lower value to reduce WA’s memory footprint on memory-constrained hosts.
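
For example (the value below is illustrative):

export WA_LOG_BUFFER_CAPACITY=4000
wa run my_agenda.yaml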


Runtime Parameters

Runtime parameters are options that can be specified to automatically configure the device at runtime. They can be specified at the global level in the agenda or for individual workloads.

Example

Say we want to perform an experiment on an Android big.LITTLE device to compare the power consumption between the big and LITTLE clusters running the dhrystone and benchmarkpi workloads. Assuming we have additional instrumentation active for this device that can measure the power the device is consuming, to reduce external factors we want to ensure that airplane mode is turned on for all our tests and that the screen is off only for our dhrystone run. We will then run 2 sections which will each enable a single cluster on the device, set the cores to their maximum frequency and disable all available idle states.

config:
    runtime_parameters:
          airplane_mode: true
#..
workloads:
        - name: dhrystone
          iterations: 1
          runtime_parameters:
                screen_on: false
                unlock_screen: 'vertical'
        - name: benchmarkpi
          iterations: 1
sections:
        - id: LITTLES
          runtime_parameters:
                num_little_cores: 4
                little_governor: userspace
                little_frequency: max
                little_idle_states: none
                num_big_cores: 0

        - id: BIGS
          runtime_parameters:
                num_big_cores: 4
                big_governor: userspace
                big_frequency: max
                big_idle_states: none
                num_little_cores: 0
HotPlug

Parameters:

num_cores

An int that specifies the total number of cpu cores to be online.

num_<core_name>_cores

An int that specifies the total number of cores of that particular type to be online. The target will be queried and, if the core_names can be determined, a parameter for each of the unique core names will be available.

cpu<core_no>_online

A boolean that specifies whether that particular cpu, e.g. cpu0, will be online.

If big.LITTLE is detected for the device, an additional 2 parameters are available:

num_big_cores

An int that specifies the total number of big cpu cores to be online.

num_little_cores

An int that specifies the total number of little cpu cores to be online.

Note

Please note that if the device in question is operating its own dynamic hotplugging then WA may be unable to set the CPU state, or its changes may be overridden. Unfortunately, the method of disabling dynamic hotplugging will vary from device to device.

CPUFreq
frequency

An int that can be used to specify a frequency for all cores if there are common frequencies available.

Note

When setting the frequency, if the governor is not set to userspace then WA will attempt to set the maximum and minimum frequencies to mimic the desired behaviour.

max_frequency

An int that can be used to specify a maximum frequency for all cores if there are common frequencies available.

min_frequency

An int that can be used to specify a minimum frequency for all cores if there are common frequencies available.

governor

A string that can be used to specify the governor for all cores if there are common governors available.

gov_tunables

A dict that can be used to specify governor tunables for all cores. Unlike the other common parameters, these are not validated at the beginning of the run; therefore incorrect values will cause an error during runtime.

<core_name>_frequency

An int that can be used to specify a frequency for cores of a particular type e.g. ‘A72’.

<core_name>_max_frequency

An int that can be used to specify a maximum frequency for cores of a particular type e.g. ‘A72’.

<core_name>_min_frequency

An int that can be used to specify a minimum frequency for cores of a particular type e.g. ‘A72’.

<core_name>_governor

A string that can be used to specify the governor for cores of a particular type e.g. ‘A72’.

<core_name>_gov_tunables

A dict that can be used to specify governor tunables for cores of a particular type e.g. ‘A72’. These are not validated at the beginning of the run; therefore incorrect values will cause an error during runtime.

cpu<no>_frequency

An int that can be used to specify a frequency for a particular core e.g. ‘cpu0’.

cpu<no>_max_frequency

An int that can be used to specify a maximum frequency for a particular core e.g. ‘cpu0’.

cpu<no>_min_frequency

An int that can be used to specify a minimum frequency for a particular core e.g. ‘cpu0’.

cpu<no>_governor

A string that can be used to specify the governor for a particular core e.g. ‘cpu0’.

cpu<no>_gov_tunables

A dict that can be used to specify governor tunables for a particular core e.g. ‘cpu0’. These are not validated at the beginning of the run; therefore incorrect values will cause an error during runtime.
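
As an illustration, a runtime_parameters sketch combining a per-cpu governor with its (unvalidated) tunables might look like this (the governor and tunable values are illustrative and target-dependent):

runtime_parameters:
      cpu0_governor: ondemand
      cpu0_gov_tunables:
            sampling_rate: 20000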

If big.LITTLE is detected for the device an additional set of parameters are available:

big_frequency

An int that can be used to specify a frequency for the big cores.

big_max_frequency

An int that can be used to specify a maximum frequency for the big cores.

big_min_frequency

An int that can be used to specify a minimum frequency for the big cores.

big_governor

A string that can be used to specify the governor for the big cores.

big_gov_tunables

A dict that can be used to specify governor tunables for the big cores. These are not validated at the beginning of the run; therefore incorrect values will cause an error during runtime.

little_frequency

An int that can be used to specify a frequency for the little cores.

little_max_frequency

An int that can be used to specify a maximum frequency for the little cores.

little_min_frequency

An int that can be used to specify a minimum frequency for the little cores.

little_governor

A string that can be used to specify the governor for the little cores.

little_gov_tunables

A dict that can be used to specify governor tunables for the little cores. These are not validated at the beginning of the run; therefore incorrect values will cause an error during runtime.

CPUIdle
idle_states

A string or list of strings which can be used to specify which idle states should be enabled for all cores if there are common idle states available. ‘all’ and ‘none’ are also valid entries as a shorthand.

<core_name>_idle_states

A string or list of strings which can be used to specify which idle states should be enabled for cores of a particular type e.g. ‘A72’. ‘all’ and ‘none’ are also valid entries as a shorthand.

cpu<no>_idle_states

A string or list of strings which can be used to specify which idle states should be enabled for a particular core e.g. ‘cpu0’. ‘all’ and ‘none’ are also valid entries as a shorthand.

If big.LITTLE is detected for the device, an additional set of parameters is available:

big_idle_states

A string or list of strings which can be used to specify which idle states should be enabled for the big cores. ‘all’ and ‘none’ are also valid entries as a shorthand.

little_idle_states

A string or list of strings which can be used to specify which idle states should be enabled for the little cores. ‘all’ and ‘none’ are also valid entries as a shorthand.

Android Specific Runtime Parameters
brightness

An int between 0 and 255 (inclusive) to specify the brightness the screen should be set to. Defaults to 127.

airplane_mode

A boolean to specify whether airplane mode should be enabled for the device.

rotation

A string to specify the screen orientation for the device. Valid entries are NATURAL, LEFT, INVERTED, RIGHT.

screen_on

A boolean to specify whether the device’s screen should be turned on. Defaults to True.

unlock_screen

A string to specify how the device’s screen should be unlocked. Unlocking the screen is disabled by default. vertical, diagonal and horizontal are the supported values (see devlib.AndroidTarget.swipe_to_unlock()). Note that unlocking succeeds only when no passcode is set. Since unlocking the screen requires turning on the screen, this option overrides the value of the screen_on option.

Setting Sysfiles

In order to perform additional configuration of a target the sysfile_values runtime parameter can be used. The value for this parameter is a mapping (an associative array, in YAML) of file paths onto values that should be written into those files. sysfile_values is the only runtime parameter that is available for any (Linux) device. Other runtime parameters will depend on the specifics of the device used (e.g. its CPU cores configuration) as detailed above.

Note

By default WA will attempt to verify that any sysfile values were written correctly by reading the node back and comparing the two values. If you do not wish this check to happen, for example if the node you are writing to is write-only, you can append an ! to the file path to disable this verification.

For example, the following configuration could be used to enable and verify that cpu0 is online; however, it will not attempt to check that its governor has been set to userspace:

- name: dhrystone
  runtime_params:
        sysfile_values:
              /sys/devices/system/cpu/cpu0/online: 1
              /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor!: userspace

Configuration Merging

WA configuration can come from various sources of increasing priority, as well as being specified in a generic or specific manner. For example, WA’s global config file would be considered the least specific, whereas the parameters of a workload in an agenda would be the most specific. WA has two rules for the priority of configuration:

  • Configuration from higher priority sources overrides configuration from lower priority sources.

  • More specific configuration overrides less specific configuration.

There is a situation where these two rules come into conflict: when a generic configuration is given in a config source of high priority and a specific configuration is given in a config source of lower priority. In this situation it is not possible to know the end user’s intention, and WA will raise an error.

This functionality allows for defaults for plugins, targets etc. to be configured at a global level and then seamlessly overridden without the need to remove the high-level configuration.

Dependent on specificity, configuration parameters from different sources will have different inherent priorities. Within an agenda, the configuration in “workload” entries will be more specific than “sections” entries, which in turn are more specific than parameters in the “config” entry.

Configuration Includes

It is possible to include other files in your config files and agendas. This is done by specifying include# (note the trailing hash) as a key in one of the mappings, with the value being the path to the file to be included. The path must be either absolute, or relative to the location of the file it is being included from (not to the current working directory). The path may also include ~ to indicate the current user’s home directory.

The include is performed by removing the include# key and loading the contents of the specified file into the mapping that contained it. In cases where the mapping already contains a key to be loaded, values will be merged using the usual merge method (for overwrites, values in the mapping take precedence over those from the included files).

Below is an example of an agenda that includes other files. The assumption is that all of those files are in one directory:

# agenda.yaml
config:
   augmentations: [trace-cmd]
   include#: ./my-config.yaml
sections:
   - include#: ./section1.yaml
   - include#: ./section2.yaml
include#: ./workloads.yaml
# my-config.yaml
augmentations: [cpufreq]
# section1.yaml
runtime_parameters:
   frequency: max
# section2.yaml
runtime_parameters:
   frequency: min
# workloads.yaml
workloads:
   - dhrystone
   - memcpy

The above is equivalent to having a single file like this:

# agenda.yaml
config:
   augmentations: [cpufreq, trace-cmd]
sections:
   - runtime_parameters:
        frequency: max
   - runtime_parameters:
        frequency: min
workloads:
   - dhrystone
   - memcpy

Some additional details about the implementation and its limitations:

  • The include# must be a key in a mapping, and the contents of the included file must be a mapping as well; it is not possible to include a list (e.g. in the examples above, the workloads: part must be in the included file).

  • Being a key in a mapping, there can only be one include# entry per block.

  • The included file must have a .yaml extension.

  • Nested inclusions are allowed. I.e. included files may themselves include files; in such cases the included paths must be relative to that file, and not the “main” file.


Commands

Installing the wa package will add the wa command to your system, which you can run from anywhere. This has a number of sub-commands, which can be viewed by executing

wa -h

Individual sub-commands are discussed in detail below.

Run

The most common sub-command you will use is run. This will run the specified workload(s) and process the resulting output. This takes a single mandatory argument which specifies what you want WA to run. This could be either a workload name, or a path to an “agenda” file that allows you to specify multiple workloads as well as a lot of additional configuration (see the Defining Experiments With an Agenda section for details). Executing

wa run -h

will display help for this subcommand that will look something like this:

usage: wa run [-h] [-c CONFIG] [-v] [--version] [-d DIR] [-f] [-i ID]
      [--disable INSTRUMENT]
      AGENDA

Execute automated workloads on a remote device and process the resulting
output.

positional arguments:
  AGENDA                Agenda for this workload automation run. This defines
                        which workloads will be executed, how many times, with
                        which tunables, etc. See example agendas in
                        /usr/local/lib/python3.X/dist-packages/wa for an
                        example of how this file should be structured.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        specify an additional config.yaml
  -v, --verbose         The scripts will produce verbose output.
  --version             show program's version number and exit
  -d DIR, --output-directory DIR
                        Specify a directory where the output will be
                        generated. If the directory already exists, the script
                        will abort unless -f option (see below) is used, in
                        which case the contents of the directory will be
                        overwritten. If this option is not specified, then
                        wa_output will be used instead.
  -f, --force           Overwrite output directory if it exists. By default,
                        the script will abort in this situation to prevent
                        accidental data loss.
  -i ID, --id ID        Specify a workload spec ID from an agenda to run. If
                        this is specified, only that particular spec will be
                        run, and other workloads in the agenda will be
                        ignored. This option may be used to specify multiple
                        IDs.
  --disable INSTRUMENT  Specify an instrument or output processor to disable
                        from the command line. This equivalent to adding
                        "~{metavar}" to the instruments list in the
                        agenda. This can be used to temporarily disable a
                        troublesome instrument for a particular run without
                        introducing permanent change to the config (which one
                        might then forget to revert). This option may be
                        specified multiple times.

List

This lists all plugins of a particular type. For example

wa list instruments

will list all instruments currently included in WA. The list will consist of plugin names and short descriptions of the functionality they offer e.g.

#..
           cpufreq:    Collects dynamic frequency (DVFS) settings before and after
                       workload execution.
             dmesg:    Collected dmesg output before and during the run.
energy_measurement:    This instrument is designed to be used as an interface to
                       the various energy measurement instruments located
                       in devlib.
    execution_time:    Measure how long it took to execute the run() methods of
                       a Workload.
       file_poller:    Polls the given files at a set sample interval. The values
                       are output in CSV format.
               fps:    Measures Frames Per Second (FPS) and associated metrics for
                       a workload.
#..

You can use the same syntax to quickly display information about commands, energy_instrument_backends, instruments, output_processors, resource_getters, targets and workloads.

Show

This will show detailed information about a plugin (workloads, targets, instruments etc.), including a full description and any relevant parameters/configuration that are available. For example, executing

wa show benchmarkpi

will produce something like:

benchmarkpi
-----------

Measures the time the target device takes to run and complete the Pi
calculation algorithm.

http://androidbenchmark.com/howitworks.php

from the website:

The whole idea behind this application is to use the same Pi calculation
algorithm on every Android Device and check how fast that process is.
Better calculation times, conclude to faster Android devices. This way you
can also check how lightweight your custom made Android build is. Or not.

As Pi is an irrational number, Benchmark Pi does not calculate the actual Pi
number, but an approximation near the first digits of Pi over the same
calculation circles the algorithms needs.

So, the number you are getting in milliseconds is the time your mobile device
takes to run and complete the Pi calculation algorithm resulting in a
approximation of the first Pi digits.

parameters
~~~~~~~~~~

cleanup_assets : boolean
    If ``True``, if assets are deployed as part of the workload they
    will be removed again from the device as part of finalize.

    default: ``True``

package_name : str
    The package name that can be used to specify
    the workload apk to use.

install_timeout : integer
    Timeout for the installation of the apk.

    constraint: ``value > 0``

    default: ``300``

version : str
    The version of the package to be used.

variant : str
    The variant of the package to be used.

strict : boolean
    Whether to throw an error if the specified package cannot be found
    on host.

force_install : boolean
    Always re-install the APK, even if matching version is found already installed
    on the device.

uninstall : boolean
    If ``True``, will uninstall workload's APK as part of teardown.'

exact_abi : boolean
    If ``True``, workload will check that the APK matches the target
    device ABI, otherwise any suitable APK found will be used.

markers_enabled : boolean
    If set to ``True``, workloads will insert markers into logs
    at various points during execution. These markers may be used
    by other plugins or post-processing scripts to provide
    measurements or statistics for specific parts of the workload
    execution.

Note

You can also use this command to view global settings by using wa show settings

Create

This aids in the creation of new WA-related objects, for example agendas and workloads. For more detailed information on creating workloads please see the adding a workload section.

As an example, to create an agenda that will run the dhrystone and memcpy workloads, use the status and hwmon augmentations, run each test 3 times and save the result into the file my_agenda.yaml, the following command can be used:

wa create agenda dhrystone memcpy status hwmon -i 3 -o my_agenda.yaml

Which will produce something like:

config:
    augmentations:
    - status
    - hwmon
    status: {}
    hwmon: {}
    iterations: 3
workloads:
-   name: dhrystone
    params:
        cleanup_assets: true
        delay: 0
        duration: 0
        mloops: 0
        taskset_mask: 0
        threads: 4
-   name: memcpy
    params:
        buffer_size: 5242880
        cleanup_assets: true
        cpus: null
        iterations: 1000

This will be populated with default values which can then be customised for the particular use case.

Additionally the create command can be used to initialize (and update) a Postgres database which can be used by the postgres output processor.

Most of the database connection parameters have a default value; however, they can be overridden via command line arguments. When initializing the database, WA will also save the supplied parameters into the default user config file so that they do not need to be specified each time the output processor is used.

As an example, if we had a database server running at 10.0.0.2 using the standard port, we could use the following command to initialize a database for use with WA:

wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd

This will log into the database server with the supplied credentials and create a database (defaulting to ‘wa’) and will save the configuration to the ~/.workload_automation/config.yaml file.

With updates to WA there may be changes to the database schema used. In this case the create command can also be used with the -U flag to update the database to use the new schema as follows:

wa create database -a 10.0.0.2 -u my_username -p Pa55w0rd -U

This will upgrade the database sequentially until the database schema is using the latest version.

Process

This command allows output processors to be run on data that was produced by a previous run.

There are 2 ways of specifying which processors you wish to use: either by passing them directly as arguments to the process command with the --processor argument, or by providing an additional config file with the --config argument. Please note that by default the process command will not rerun processors that have already been run during the run; in order to force a rerun of the processors you can specify the --force argument.
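
For example, to force the csv processor to be re-run on an existing output directory (the directory name below is illustrative):

wa process --processor csv --force wa_output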

Additionally if you have a directory containing multiple run directories you can specify the --recursive argument which will cause WA to walk the specified directory processing all the WA output sub-directories individually.

As an example, if we have performed multiple experiments and have the various WA output directories in our my_experiments directory, and we now want to process the outputs with a tool that only supports CSV files, we can easily generate CSV files for all the runs contained in our directory using the csv processor with the following command:

wa process -r -p csv my_experiments

Record

This command simplifies the process of recording revent files. It will automatically deploy revent and has options to automatically open apps and record specified stages of a workload. Revent allows you to record raw inputs such as screen swipes or button presses. This can be useful for recording inputs for workloads such as games that don’t have XML UI layouts that can be used with UIAutomator. As a drawback, revent recordings are specific to the device type they were recorded on. WA uses two parts for the names of revent recordings, in the format {device_name}.{suffix}.revent:

  • device_name can either be specified manually with the -d argument or it can be automatically determined. On Android devices it will be obtained from build.prop; on Linux devices it is obtained from /proc/device-tree/model.

  • suffix is used by WA to determine which part of the app execution the recording is for; currently these are either setup, run, extract_results or teardown. All stages except run are optional for playback. To specify which stages should be recorded use the -s, -r, -e or -t arguments respectively, or optionally -a to indicate that all stages should be recorded.

The full set of options for this command are:

usage: wa record [-h] [-c CONFIG] [-v] [--version] [-d DEVICE] [-o FILE] [-s]
                 [-r] [-e] [-t] [-a] [-C] [-p PACKAGE | -w WORKLOAD]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        specify an additional config.yaml
  -v, --verbose         The scripts will produce verbose output.
  --version             show program's version number and exit
  -d DEVICE, --device DEVICE
                        Specify the device on which to run. This will take
                        precedence over the device (if any) specified in
                        configuration.
  -o FILE, --output FILE
                        Specify the output file
  -s, --setup           Record a recording for setup stage
  -r, --run             Record a recording for run stage
  -e, --extract_results Record a recording for extract_results stage
  -t, --teardown        Record a recording for teardown stage
  -a, --all             Record recordings for available stages
  -C, --clear           Clear app cache before launching it
  -p PACKAGE, --package PACKAGE
                        Android package to launch before recording
  -w WORKLOAD, --workload WORKLOAD
                        Name of a revent workload (mostly games)
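
For example, to record separate setup and run stage recordings for an Android app, clearing its cache and launching it automatically (the package name below is just a placeholder), you could run something like:

wa record -s -r -C -p com.example.game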

For more information please see Revent Recording.

Replay

Alongside record, WA also has a command to play back a single recorded revent file. It behaves similarly to the record command, taking a subset of the same options and allowing you to automatically launch a package on the device.

usage: wa replay [-h] [-c CONFIG] [-v] [--debug] [--version] [-p PACKAGE] [-C]
             revent

positional arguments:
  revent                The name of the file to replay

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        specify an additional config.py
  -v, --verbose         The scripts will produce verbose output.
  --debug               Enable debug mode. Note: this implies --verbose.
  --version             show program's version number and exit
  -p PACKAGE, --package PACKAGE
                        Package to launch before recording
  -C, --clear           Clear app cache before launching it
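
For example, to launch an app (the package name below is a placeholder) and replay a previously recorded run stage, you could run something like:

wa replay -p com.example.game mydevice.run.revent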

For more information please see Revent Replaying.


Output Directory Structure

This is an overview of WA output directory structure.

Note

In addition to the files and subdirectories described here, other content may be present in the output directory for a run, depending on the enabled augmentations.

Overview

The output directory will contain a subdirectory for every job iteration that was run, as well as some additional entries. The following diagram illustrates the typical structure of a WA output directory:

wa_output/
├── __meta/
│   ├── config.json
│   ├── jobs.json
│   ├── raw_config
│   │   ├── cfg0-config.yaml
│   │   └── agenda.yaml
│   ├── run_info.json
│   └── target_info.json
├── __failed/
│   └── wk1-dhrystone-1-attempt1
├── wk1-dhrystone-1/
│   └── result.json
├── wk1-dhrystone-2/
│   └── result.json
├── wk2-memcpy-1/
│   └── result.json
├── wk2-memcpy-2/
│   └── result.json
├── result.json
└── run.log

This is the directory structure that would be generated after running two iterations each of dhrystone and memcpy workloads with no augmentations enabled, and with the first attempt at the first iteration of dhrystone having failed.

You may notice that a number of directories named wk*-x-x were generated in the output directory structure. Each of these directories represents a job; the directory name is formed from the job's id, workload label and iteration number (see job execution output subdirectory below).

Output Directory Entries

result.json

Contains a JSON structure describing the result of the execution, including collected metrics and artifacts. There will be one for each job execution, and one for the overall run. The run result.json will only contain metrics/artifacts for the run as a whole, and will not contain results for individual jobs.

You typically would not access result.json files directly. Instead you would either enable augmentations to format the results in an easier-to-manage form (such as a CSV table), or use Output to access the results from scripts.
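
As a rough illustration of the scripted route, the sketch below assumes that the RunOutput class can be imported from the top-level wa package and that it exposes the jobs and metrics attributes described in the Output documentation; consult that reference for the exact API in your WA version.

import sys

from wa import RunOutput  # assumed import location; see the Output reference

# Load a WA output directory (e.g. ./wa_output) passed as the first argument.
run_output = RunOutput(sys.argv[1])

# Each job execution has its own result.json; iterate over its metrics.
for job in run_output.jobs:
    for metric in job.metrics:
        print(job.id, metric.name, metric.value, metric.units)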

run.log

This is a log of everything that happened during the run, including all interactions with the target, and all the decisions made by the framework. The output is equivalent to what you would see on the console when running with --verbose option.

Note

WA source contains a syntax file for Vim that will color the initial part of each log line, in a similar way to what you see on the console. This may be useful for quickly spotting error and warning messages when scrolling through the log.

https://github.com/ARM-software/workload-automation/blob/next/extras/walog.vim

__meta

This directory contains configuration and run metadata. See Configuration and Metadata below for details.

__failed

This directory will only be present if one or more job executions have failed and were re-run. It contains the output directories for the failed attempts.

job execution output subdirectory

Each subdirectory will be named <job id>-<workload label>-<iteration number>, and will, at minimum, contain a result.json (see above). Additionally, it may contain raw output from the workload, and any additional artifacts (e.g. traces) generated by augmentations. Finally, if the workload execution failed, WA may gather some additional logging (such as the UI state at the time of failure) and place it here.

Configuration and Metadata

As stated above, the __meta directory contains run configuration and metadata. Typically, you would not access these files directly, but would use Output to query the metadata.

For more details about WA configuration see Configuration.

config.json

Contains the overall run configuration, such as target interface configuration and job execution order, as well as various “meta-configuration” settings, such as the default output path, verbosity level, and logging format.

jobs.json

Final configuration for all jobs, including enabled augmentations, workload and runtime parameters, etc.

raw_config

This directory contains copies of the config file(s) and the agenda that were parsed in order to generate the configuration for this run. Each config file is prefixed with cfg<N>-, where <N> is a number indicating the order (with respect to the other config files) in which it was parsed, e.g. cfg0-config.yaml is always a copy of $WA_USER_DIRECTORY/config.yaml. The one file without a prefix is the agenda.

run_info.json

Run metadata, e.g. start/end timestamps and duration.

target_info.json

Extensive information about the target. This includes information about the target’s CPU configuration, kernel and userspace versions, etc. The exact content will vary depending on the target type (Android vs Linux) and what could be accessed on a particular device (e.g. if /proc/config.gz exists on the target, the kernel config will be included).
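
Both run_info.json and target_info.json are plain JSON, so if you do want to inspect them directly they can be pretty-printed with standard tools; for example (assuming a run directory called wa_output):

python -m json.tool wa_output/__meta/target_info.json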