wlauto.result_processors package

Submodules

wlauto.result_processors.cpustate module

class wlauto.result_processors.cpustate.CpuStatesProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Process power ftrace to produce CPU state and parallelism stats.

    Parses trace-cmd output to extract power events and uses those to generate
    statistics about parallelism and frequency/idle core residency.

    .. note:: The trace-cmd instrument must be enabled and configured to collect
       at least ``power:cpu_idle`` and ``power:cpu_frequency`` events.
       Reporting should also be enabled (it is by default), as ``cpustate``
       parses the text version of the trace. Finally, the device should have
       the ``cpuidle`` module installed.

    This generates two reports for the run:

    *parallel.csv*

        Shows what percentage of time was spent with N cores active (for N
        from 0 to the total number of cores), for each cluster and for the
        system as a whole. It contains the following columns:

        :workload: The workload label.
        :iteration: The iteration that was run.
        :cluster: The cluster for which statistics are reported. The value
            ``"all"`` indicates that the row reports statistics for the whole
            system.
        :number_of_cores: The number of cores active. ``0`` indicates the
            cluster was idle.
        :total_time: Total time spent in this state during workload execution.
        :%time: Percentage of total workload execution time spent in this state.
        :%running_time: Percentage of the time the cluster was active (i.e.
            excluding time the cluster was idling) spent in this state.

    *cpustate.csv*

        Shows the percentage of time a core spent in a particular power state.
        The first column names the state and is followed by a column for each
        core. Power states include the available DVFS frequencies (for
        heterogeneous systems, this is the union of frequencies supported by
        the different core types) and idle states. Some shallow states (e.g.
        ARM WFI) will consume different amounts of power depending on the
        current OPP; for such states, there will be an entry for each OPP.
        ``"unknown"`` indicates the percentage of time for which a state could
        not be established from the trace. This is usually due to the core
        state being unknown at the beginning of the trace, but may also be
        caused by dropped events in the middle of the trace.
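To satisfy the note above, both the ``trace-cmd`` instrument and this processor need to be enabled in the agenda. A plausible snippet is shown below; the ``trace_events`` setting for selecting ftrace events is an assumption about your WA version's trace-cmd parameter names, so verify it against your installation:

```yaml
config:
  instrumentation: [trace-cmd]
  result_processors: [cpustates]
  # Assumed global alias for the trace-cmd events parameter:
  trace_events: ['power:cpu_idle', 'power:cpu_frequency']
```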
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'cpustates'
parameters =

    :modules: list
    :first_cluster_state: integer, default ``2``
    :first_system_state: integer, default ``3``
    :write_iteration_reports: boolean, default ``False``
    :use_ratios: boolean, default ``False``
    :create_timeline: boolean, default ``True``
    :create_utilization_timeline: boolean, default ``False``
    :start_marker_handling: str, default ``'try'``, allowed values: ``'ignore'``, ``'try'``, ``'error'``
    :no_idle: boolean, default ``False``
process_iteration_result(result, context)[source]
process_run_result(result, context)[source]
set_initial_state(context)[source]
validate(*args, **kwargs)

wlauto.result_processors.dvfs module

class wlauto.result_processors.dvfs.DVFS(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
calculate()[source]
core_modules = []
description =

    Reports DVFS state residency data from ftrace power events.

    This generates a ``dvfs.csv`` in the top-level results directory that, for
    each workload iteration, reports the percentage of time each CPU core
    spent in each of the DVFS frequency states (P-states), as well as the
    percentage of time spent idle, during the execution of the workload.

    .. note:: The ``trace-cmd`` instrument *MUST* be enabled in the
       instrumentation, and at least the ``'power*'`` events must be enabled.
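As with ``cpustates``, this processor depends on trace-cmd being enabled with the power events. A plausible agenda snippet follows; the ``trace_events`` name is an assumption about the trace-cmd instrument's parameter alias, so check it against your WA version:

```yaml
config:
  instrumentation: [trace-cmd]
  result_processors: [dvfs]
  # Assumed global alias for the trace-cmd events parameter:
  trace_events: ['power*']
```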
finalize(*args, **kwargs)
flush_parse_initialize()[source]

Store the state and cpu_id for each timestamp from trace.txt, and flush all values for the next iteration.

generate_csv(context)[source]

Generate ``dvfs.csv`` with the state, frequency, and cores.

get_cluster(cpu_id, state)[source]
get_cluster_freq()[source]
get_state_name(state)[source]
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'dvfs'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
parse()[source]

Parse trace.txt, storing the timestamp, state, and cpu_id of each event:
---------------------------------------------------------------------------------
                    |timestamp|                       |state|        |cpu_id|
<idle>-0     [001]   294.554380: cpu_idle:             state=4294967295 cpu_id=1
<idle>-0     [001]   294.554454: power_start:          type=1 state=0 cpu_id=1
<idle>-0     [001]   294.554458: cpu_idle:             state=0 cpu_id=1
<idle>-0     [001]   294.554464: power_end:            cpu_id=1
<idle>-0     [001]   294.554471: cpu_idle:             state=4294967295 cpu_id=1
<idle>-0     [001]   294.554590: power_start:          type=1 state=0 cpu_id=1
<idle>-0     [001]   294.554593: cpu_idle:             state=0 cpu_id=1
<idle>-0     [001]   294.554636: power_end:            cpu_id=1
<idle>-0     [001]   294.554639: cpu_idle:             state=4294967295 cpu_id=1
<idle>-0     [001]   294.554669: power_start:          type=1 state=0 cpu_id=1
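The trace lines above can be tokenised with a short regex. The sketch below is purely illustrative of what ``parse()`` extracts (``parse_trace``, ``CPU_IDLE_RE``, and ``IDLE_EXIT`` are hypothetical names, not part of the module):

```python
import re

# Match a cpu_idle event line, capturing timestamp, state, and cpu_id.
CPU_IDLE_RE = re.compile(
    r'\s(?P<ts>\d+\.\d+):\s+cpu_idle:\s+state=(?P<state>\d+)\s+cpu_id=(?P<cpu>\d+)'
)

# state=4294967295 (0xFFFFFFFF) in the trace marks a CPU leaving idle.
IDLE_EXIT = 4294967295

def parse_trace(lines):
    """Yield (timestamp, state, cpu_id) for each cpu_idle event."""
    for line in lines:
        match = CPU_IDLE_RE.search(line)
        if match:
            yield (float(match.group('ts')),
                   int(match.group('state')),
                   int(match.group('cpu')))

sample = [
    "<idle>-0     [001]   294.554380: cpu_idle:             state=4294967295 cpu_id=1",
    "<idle>-0     [001]   294.554458: cpu_idle:             state=0 cpu_id=1",
]
events = list(parse_trace(sample))
```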
percentage()[source]

Normalize the result with total execution time.

populate(time1, time2)[source]
process_iteration_result(result, context)[source]

Parse trace.txt for each iteration, calculate the DVFS residency states/frequencies, dump the results to CSV, and flush the data for the next iteration.

unique_freq()[source]

Determine the unique frequencies and states.

update_cluster_freq(state, cpu_id)[source]

Update the cluster frequency and current cluster

update_state(state, cpu_id)[source]

Update the state of each core in every cluster. This is done for each timestamp.

validate(*args, **kwargs)

wlauto.result_processors.mongodb module

class wlauto.result_processors.mongodb.MongodbUploader(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Uploads run results to a MongoDB instance.

    MongoDB is a popular document-based data store (NoSQL database).
export_iteration_result(result, context)[source]
export_run_result(result, context)[source]
finalize(*args, **kwargs)
generate_bundle(context)[source]

The bundle will contain files generated during the run that have not already been processed. This includes all files for which there isn’t an explicit artifact, as well as “raw” artifacts that aren’t uploaded individually. Basically, this ensures that everything that is not explicitly marked as an “export” (which means it is guaranteed not to contain information that is not accessible from other artifacts/scores) is available in the DB. The bundle is compressed, so it shouldn’t take up too much space; however, this also means that it’s not easy to query for or retrieve individual files (a trade-off between space and convenience).

gridfs_directory_exists(path)[source]
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'mongodb'
parameters =

    :modules: list
    :uri: str
    :host: str, mandatory, default ``'localhost'``
    :port: integer, mandatory, default ``27017``
    :db: str, mandatory, default ``'wa'``
    :extra_params: dict, default ``{}``
    :authentication: dict, default ``{}``
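A plausible agenda fragment enabling the uploader with the parameters above; it assumes the usual WA convention of configuring an extension under a section named after it (the host name is hypothetical):

```yaml
config:
  result_processors: [mongodb]
  mongodb:
    host: mongo.example.com   # hypothetical server
    port: 27017
    db: wa
```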
upload_artifact(context, artifact)[source]
validate(*args, **kwargs)

wlauto.result_processors.notify module

class wlauto.result_processors.notify.NotifyProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Display a desktop notification when the run finishes.

    Notifications only work on Linux systems. This uses the generic
    freedesktop notification specification. For this result processor to
    work, you need to have python-notify installed on your system.
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'notify'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
process_run_result(result, context)[source]
validate(*args, **kwargs)

wlauto.result_processors.sqlite module

class wlauto.result_processors.sqlite.SqliteResultProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Stores results in an SQLite database.

    This may be used to accumulate the results of multiple runs in a single
    file.
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'sqlite'
parameters =

    :modules: list
    :database: str, global alias ``sqlite_database``
    :overwrite: boolean, default ``False``, global alias ``sqlite_overwrite``
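To accumulate several runs into one file, point ``database`` at a fixed path and leave ``overwrite`` disabled. A plausible agenda fragment (the database path is hypothetical, and the section-per-extension layout is an assumption about your WA version):

```yaml
config:
  result_processors: [sqlite]
  sqlite:
    database: ~/wa_results.db   # hypothetical path, shared across runs
    overwrite: false            # append rather than replace existing results
```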
process_iteration_result(result, context)[source]
process_run_result(result, context)[source]
validate(*args, **kwargs)

wlauto.result_processors.standard module

This module contains a few “standard” result processors that write results to text files in various formats.

class wlauto.result_processors.standard.CsvReportProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

Creates a results.csv in the output directory containing results for all iterations in CSV format, each line containing a single metric.
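Since each line holds a single metric, the file can be consumed with a plain CSV reader. The column layout below is a hypothetical example consistent with the one-metric-per-row description; the actual header may differ between WA versions:

```python
import csv
import io

# Hypothetical results.csv content: one metric per row, as described above.
SAMPLE = """workload,iteration,metric,value,units
dhrystone,1,execution_time,12.5,seconds
dhrystone,1,score,42,
"""

def read_results(text):
    """Return each metric row as a dict keyed by the header columns."""
    return list(csv.DictReader(io.StringIO(text)))

rows = read_results(SAMPLE)
```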

aliases = AC([])
artifacts = AC([])
core_modules = []
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'csv'
parameters =

    :modules: list
    :use_all_classifiers: boolean, default ``False``, global alias ``use_all_classifiers``
    :extra_columns: list_of_strs
process_iteration_result(result, context)[source]
process_run_result(result, context)[source]
validate(*args, **kwargs)
class wlauto.result_processors.standard.JsonReportProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

Creates a results.json in the output directory containing results for all iterations in JSON format.

aliases = AC([])
artifacts = AC([])
core_modules = []
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'json'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
process_run_result(result, context)[source]
validate(*args, **kwargs)
class wlauto.result_processors.standard.StandardProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Creates a ``result.txt`` file for every iteration that contains metrics
    for that iteration.

    The metrics are written in ::

        metric = value [units]

    format.
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'standard'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
process_iteration_result(result, context)[source]
validate(*args, **kwargs)
class wlauto.result_processors.standard.SummaryCsvProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

Similar to the csv result processor, but contains only the workloads’ summary metrics.

aliases = AC([])
artifacts = AC([])
core_modules = []
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'summary_csv'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
process_run_result(result, context)[source]
validate(*args, **kwargs)

wlauto.result_processors.status module

class wlauto.result_processors.status.StatusTxtReporter(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Outputs a txt file containing general status information about which runs
    failed and which were successful.
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'status'
parameters = AC(["Param({'kind': <type 'list'>, 'mandatory': None, 'name': 'modules', 'constraint': None, 'default': None, 'allowed_values': None, 'global_alias': None, 'override': False})"])
process_run_result(result, context)[source]
validate(*args, **kwargs)

wlauto.result_processors.syeg module

class wlauto.result_processors.syeg.SyegResult(max_iter)[source]

Bases: object

average
best
deviation
run_values
suite_version
class wlauto.result_processors.syeg.SyegResultProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Generates a CSV results file in the format expected by the SYEG toolchain.

    Multiple iterations are parsed into columns; additional columns are added
    for the mean and standard deviation, the number of threads is appended to
    metric names (where applicable), and some metadata is added based on
    external mapping files.
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'syeg_csv'
parameters =

    :modules: list
    :outfile: str, default ``'syeg_out.csv'``
process_run_result(result, context)[source]
validate(*args, **kwargs)

wlauto.result_processors.uxperf module

class wlauto.result_processors.uxperf.UxPerfResultProcessor(**kwargs)[source]

Bases: wlauto.core.result.ResultProcessor

aliases = AC([])
artifacts = AC([])
core_modules = []
description =

    Parse logcat for UX_PERF markers to produce performance metrics for
    workload actions using the specified instrumentation.

    An action represents a series of UI interactions to capture.

    .. note:: The UX_PERF markers are turned off by default and must be
       enabled in an agenda file by setting ``markers_enabled`` for the
       workload to ``True``.
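Since the markers default to off, both the processor and the per-workload ``markers_enabled`` setting need to appear in the agenda. A plausible fragment (the workload name is hypothetical; only ``markers_enabled`` is taken from the description above):

```yaml
config:
  result_processors: [uxperf]

workloads:
  - name: some_apk_workload       # hypothetical UX_PERF-instrumented workload
    workload_parameters:
      markers_enabled: true       # required, since markers are off by default
```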
export_iteration_result(result, context)[source]
finalize(*args, **kwargs)
initialize(*args, **kwargs)
kind = 'result_processor'
name = 'uxperf'
parameters =

    :modules: list
    :add_timings: boolean, default ``True``
    :add_frames: boolean, default ``False``
    :drop_threshold: numeric, default ``5``
    :generate_csv: boolean, default ``True``
validate(*args, **kwargs)

Module contents