API Reference#
High-level API#
datapipe_testbench - CTA benchmark system.
Licensed under a 3-clause BSD-style license; see LICENSE
- class InputDataset(name: str, dl0: Path | None = None, dl1: Path | None = None, dl1_images: Path | None = None, dl2: Path | None = None, dl2_signal: Path | None = None, dl2_background: Path | None = None, dl3_irf: Path | None = None, dl3_benchmark: Path | None = None)[source]#
Bases: object

An InputDataset defines common inputs that can be used by Benchmarks.
- property dtypes#
Return list of field names.
- generate_all_metrics(input_dataset_list: list[InputDataset], benchmark_list: list[Benchmark], experiments_path: Path | str, skip_existing=False) list[MetricsStore][source]#
Generate all metrics for a given list of inputs and benchmarks.
This calls datapipe_testbench.benchmark.Benchmark.generate_metrics for each input dataset. The output will be stored in experiments_path, with subdirectories created using the name field of each InputDataset. The subdirectories will contain the generated metrics.
- Parameters:
- input_dataset_list: list[InputDataset]
Input information for each experiment
- benchmark_list: list[Benchmark]
Which benchmarks to generate metrics for
- experiments_path: Path | str
Where to write out the metrics. Inside, a subdirectory will be generated for each InputDataset
- skip_existing: bool
If True, don’t re-generate metrics that already exist. Note that this does not detect changes in the Benchmark options, it only checks if the required outputs already exist.
- Returns:
- list[MetricsStore]:
List of generated metrics, one per InputDataset. These can also be loaded again at a future time.
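The skip_existing behavior described above can be sketched with plain pathlib. The helper below is hypothetical (the real output layout and file extension may differ); it only illustrates that a benchmark is skipped when its expected output already exists under the per-dataset subdirectory, regardless of whether the Benchmark options changed:

```python
import tempfile
from pathlib import Path


def metrics_exist(experiments_path: Path, dataset_name: str, benchmark_name: str) -> bool:
    """Hypothetical sketch of the skip_existing check: only tests for the
    presence of the expected output file, not for changed Benchmark options."""
    return (experiments_path / dataset_name / f"{benchmark_name}.h5").exists()


with tempfile.TemporaryDirectory() as tmp:
    experiments = Path(tmp)
    # generate_all_metrics creates one subdirectory per InputDataset.name
    (experiments / "prod5_baseline").mkdir()
    (experiments / "prod5_baseline" / "angular_resolution.h5").touch()

    found = metrics_exist(experiments, "prod5_baseline", "angular_resolution")
    missing = metrics_exist(experiments, "prod5_baseline", "energy_resolution")
```

With skip_existing=True, the first benchmark would be skipped and the second regenerated.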
- rename_telescope_type(old_name: str, new_name: str) None[source]#
Rename a telescope type, to handle cases where the name changed in the simulations.
Get the current list of re-mappings with print_telescope_type_transforms()
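Conceptually, the re-mappings form a dictionary from old to new telescope type names. A minimal sketch of that idea (the telescope type strings here are invented examples, and the real implementation may store the transforms differently):

```python
# module-level registry of name transforms (sketch, not the real storage)
_TRANSFORMS: dict[str, str] = {}


def rename_telescope_type(old_name: str, new_name: str) -> None:
    """Register a rename for a telescope type whose name changed in the simulations."""
    _TRANSFORMS[old_name] = new_name


def apply_transforms(tel_type: str) -> str:
    """Map a telescope type through the registered renames, if any."""
    return _TRANSFORMS.get(tel_type, tel_type)


rename_telescope_type("MST_MST_NectarCam", "MST_NectarCam")
old = apply_transforms("MST_MST_NectarCam")
unchanged = apply_transforms("LST_LST_LSTCam")
```

Types with no registered transform pass through unchanged.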
- run_comparison_study(name: str, experiment_names: list[str], benchmark_list: list[Benchmark], experiments_path: Path | str, studies_path: Path | str, metadata: dict | None = None) ResultStore[source]#
Generate output plots comparing the list of Metrics for the given Benchmarks.
The outputs will be written to {studies_path}/{name}/*.
- Parameters:
- name: str
Name of this study
- experiment_names: list[str]
List of names of experiments within experiments_path. If you used generate_all_metrics(), these are the name fields of the InputDatasets used to write them.
- benchmark_list: list[Benchmark]
List of Benchmarks to generate plots for.
- experiments_path: Path | str
Base path of where your experiments are stored
- studies_path: Path | str
Base path where study results are stored. The output will be in a subdirectory of this path named after name.
- metadata: dict | None
Any other metadata you want to store with this study
- Returns:
- ResultStore:
The store containing the generated study outputs
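The relationship between experiment_names and the on-disk layout can be sketched as follows. The experiment names and paths here are invented examples; the point is that run_comparison_study reads the stored metrics from one subdirectory per experiment name under experiments_path:

```python
from pathlib import Path

# Hypothetical experiment names; with generate_all_metrics() these would be
# the `name` fields of the InputDatasets used to produce the metrics.
experiment_names = ["prod5_zenith20", "prod5_zenith40"]
experiments_path = Path("experiments")

# run_comparison_study would look for stored metrics in these subdirectories:
metric_dirs = [experiments_path / name for name in experiment_names]
```

The comparison plots themselves would then land under {studies_path}/{name}/, as described above.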
Low-level API#
- Benchmarks and Metrics
- High-level Benchmarks
- Reporting and Plotting
- Input/Output
- Utilities
- Visualization