High-level Benchmarks#

datapipe_testbench.auto_benchmark#

Defines the AutoBenchmark class.

class AutoBenchmark[source]#

Bases: Benchmark

A Benchmark requiring only minimal configuration.

Attributes:
datalevel : str

dl0, dl1, dl2 (dl1images not currently supported)

col_lists : list[tuple]

List of tuples of columns; each tuple defines one group of columns.

chunk_sizeint

Chunk size when reading input event data.

custom_cols : dict[str, Callable]

Custom columns to generate.

custom_axis : dict[str, axis.Axis]

Override the automatic axis definition for columns.
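As an illustration of how these attributes fit together, a configuration might look like the sketch below. All values are assumptions made for this example, except the datalevel strings, which are listed in the attribute description above:

```python
# Hypothetical AutoBenchmark configuration values. Everything here is an
# assumption for illustration, except the datalevel strings ("dl0", "dl1",
# "dl2"), which come from the attribute documentation above.
config = {
    "datalevel": "dl2",                             # one of "dl0", "dl1", "dl2"
    "col_lists": [("true_energy", "reco_energy")],  # columns grouped per tuple
    "chunk_size": 10_000,                           # events read per chunk
    "custom_cols": {"abs_energy": abs},             # name -> Callable (assumed shape)
}
```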

compare_to_reference(metric_store_list: list[MetricsStore], result_store: ResultStore)[source]#

Perform the comparison study on metrics associated with a set of experiments.

This function takes a list of MetricsStores that have been previously filled by datapipe_testbench.benchmark.Benchmark.generate_metrics(), generates plots and comparison results, and writes the output to a ResultStore. If there is more than one MetricsStore to compare, the first one is considered the _reference_.

Parameters:
metric_store_list: list[MetricsStore]

The list of MetricsStores to compare, the first of which is the _reference_.

result_store: ResultStore

Where the results of the comparison study are stored.

generate_metrics(metric_store: MetricsStore)[source]#

Produce metrics for this benchmark for later comparison.

Called once per benchmark per MetricsStore, i.e. for one set of input events; defines how to transform the events into metrics stored in a MetricsStore that can later be compared.

Parameters:
metric_store: MetricsStore

Where to store the metrics generated by this Benchmark. It must be initialized with a datapipe_testbench.inputdataset.InputDataset containing the inputs to use for the transformation into metrics.

make_report(result_store: ResultStore)[source]#

Make a report for the benchmark using the results of the comparisons.

The result_store must already contain the outputs of compare_to_reference().

Parameters:
result_store : ResultStore

Contains the input comparisons and is where the report will be stored.

plot_all(mstore: MetricsStore, rstore: ResultStore)[source]#

Plot all histograms related to the current benchmark from one metric store.

This is a convenience function that essentially calls compare_to_reference() with a single metric store.

Parameters:
mstore : MetricsStore

Input MetricsStore (contains all histogram intermediate files).

rstore : ResultStore

Output ResultStore (will contain all plot files).
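The documented call order (generate_metrics() once per MetricsStore, then compare_to_reference() with the reference store first, then make_report()) can be sketched as below. This is a hypothetical illustration: the Stub* classes are stand-ins written for this example only, and the real datapipe_testbench constructors and internals may differ. Only the method names and signatures come from the documentation above.

```python
# Stand-in classes mimicking the documented Benchmark workflow; their
# constructors and attributes are assumptions made for this sketch.
class StubMetricsStore:
    def __init__(self, name):
        self.name = name
        self.metrics = {}


class StubResultStore:
    def __init__(self):
        self.results = []


class StubBenchmark:
    """Mimics the AutoBenchmark interface described above."""

    def generate_metrics(self, metric_store):
        # Placeholder metric; the real implementation fills histograms
        # from the store's InputDataset.
        metric_store.metrics["n_events"] = 42

    def compare_to_reference(self, metric_store_list, result_store):
        reference = metric_store_list[0]  # first store is the reference
        for store in metric_store_list[1:]:
            result_store.results.append((reference.name, store.name))

    def make_report(self, result_store):
        return f"{len(result_store.results)} comparison(s)"


bench = StubBenchmark()
stores = [StubMetricsStore("reference"), StubMetricsStore("candidate")]
for store in stores:                        # once per set of input events
    bench.generate_metrics(store)
rstore = StubResultStore()
bench.compare_to_reference(stores, rstore)  # first store is the reference
report = bench.make_report(rstore)
```

With a single MetricsStore, plot_all(mstore, rstore) plays the same role as the compare_to_reference() step above.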