datapipe_testbench.benchmarks.dl3#

Benchmarks using DL3 information, e.g. IRFs.

class IRFBenchmark(do_irf_sensitivity=False)[source]#

Bases: Benchmark

Top level performance benchmarks.

Covers requirements: PROG-0020, PROG-0030, PROG-0040, PROG-0050, PROG-0060, PROG-0070, PROG-0080, PROG-0090

For generation, requires:

* GADF IRF files
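The intended call sequence — fill one MetricsStore per set of inputs via generate_metrics(), then hand the stores to compare_to_reference() with the first store acting as the reference — can be sketched as follows. All classes and metric names here are minimal hypothetical stand-ins for illustration, not the real datapipe_testbench API:

```python
# Hypothetical stand-ins for MetricsStore / ResultStore, only to
# illustrate the generate-then-compare call sequence.
class MetricsStore:
    def __init__(self):
        self.metrics = {}


class ResultStore:
    def __init__(self):
        self.results = {}


class ToyIRFBenchmark:
    """Sketch of the generate/compare pattern described above."""

    def generate_metrics(self, metric_store):
        # The real benchmark would derive this from GADF IRF files;
        # here we record a placeholder value.
        metric_store.metrics["effective_area"] = 1.0

    def compare_to_reference(self, metric_store_list, result_store):
        # The first store in the list is treated as the reference.
        reference = metric_store_list[0]
        for store in metric_store_list[1:]:
            ratio = (store.metrics["effective_area"]
                     / reference.metrics["effective_area"])
            result_store.results["effective_area_ratio"] = ratio


bench = ToyIRFBenchmark()
ref_store, test_store = MetricsStore(), MetricsStore()
bench.generate_metrics(ref_store)   # one call per MetricsStore
bench.generate_metrics(test_store)

results = ResultStore()
bench.compare_to_reference([ref_store, test_store], results)
```

The separation between metric generation and comparison lets each input dataset be processed once, with comparison studies re-run cheaply against the stored metrics.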

compare_to_reference(metric_store_list: list[MetricsStore], result_store: ResultStore)[source]#

Perform the comparison study on metrics associated with a set of experiments.

This function takes a list of MetricsStores that have been previously filled by datapipe_testbench.benchmark.Benchmark.generate_metrics(), generates plots and comparison results, and writes the output to a ResultStore. If there is more than one MetricsStore to compare, the first one is considered the _reference_.

Parameters:
metric_store_list: list[MetricsStore]

The list of MetricsStores to compare, the first of which is the _reference_.

result_store: ResultStore

Where the results of the comparison study are stored.

generate_metrics(metric_store: MetricsStore)[source]#

Produce metrics for this benchmark for later comparison.

Called once per benchmark per MetricsStore, i.e. for one set of input events; defines how to transform the events into metrics stored in a MetricsStore that can later be compared.

Parameters:
metric_store: MetricsStore

Where to store the metrics generated by this Benchmark. It must be initialized with a datapipe_testbench.inputdataset.InputDataset containing the inputs to use for the transformation into metrics.

outputs() → dict[source]#

Return a mapping of key to Path for all outputs that this Benchmark generates.
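A plausible shape for such a mapping is sketched below; the keys and file paths are purely illustrative assumptions, not the benchmark's actual outputs:

```python
from pathlib import Path


def outputs() -> dict:
    """Sketch of a key -> Path output mapping (hypothetical names)."""
    return {
        "sensitivity_plot": Path("plots/sensitivity.png"),
        "angular_resolution_plot": Path("plots/angular_resolution.png"),
    }


out = outputs()
```

Keying outputs by name rather than path lets downstream tooling look up a specific artifact without knowing the on-disk layout.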

property required_inputs: set[str]#

Return the set of required keys in the InputDataset that this Benchmark needs.

These are checked before generating metrics.
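A pre-generation check of this kind might look like the sketch below. InputDataset, check_inputs, and the key name "irf_file" are hypothetical stand-ins chosen for illustration, not the real datapipe_testbench API:

```python
class InputDataset:
    """Hypothetical stand-in holding named inputs."""

    def __init__(self, inputs: dict):
        self.inputs = inputs

    def keys(self) -> set[str]:
        return set(self.inputs)


# Assumed required key; the real set comes from the Benchmark property.
REQUIRED_INPUTS = {"irf_file"}


def check_inputs(dataset: InputDataset) -> None:
    """Raise if the dataset lacks any key the benchmark requires."""
    missing = REQUIRED_INPUTS - dataset.keys()
    if missing:
        raise KeyError(f"InputDataset missing required inputs: {sorted(missing)}")


check_inputs(InputDataset({"irf_file": "irf.fits.gz"}))  # passes silently
```

Failing fast here avoids running a full metric-generation pass only to discover a missing input midway through.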