ctf4science.eval_module.save_results

ctf4science.eval_module.save_results(dataset_name: str, method_name: str, batch_id: str, pair_id: int | str, config: dict[str, Any], predictions: ndarray, results: dict[str, float] | None = None) → Path

Save configuration, predictions, and optional evaluation results for a run.

Writes config.yaml, predictions.npy, and optionally evaluation_results.yaml under results/{dataset_name}/{method_name}/{batch_id}/pair{pair_id}/.

Parameters:
dataset_name : str

Name of the dataset.

method_name : str

Name of the method or model.

batch_id : str

Batch identifier and folder name for the batch run.

pair_id : int or str

Sub-dataset (pair) identifier.

config : dict

Configuration dictionary used for the run.

predictions : ndarray

Predicted data array to save.

results : dict, optional

Evaluation results (metric name -> score). If None, no evaluation_results.yaml is written.

Returns:
pathlib.Path

Path to the directory containing the run results.
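The directory layout and file-writing behavior described above can be sketched as follows. This is a simplified illustration, not the library's implementation: the function name `save_results_sketch` and the `root` parameter are introduced here for the example (the real function writes under `results/` directly), and PyYAML and NumPy are assumed available.

```python
from pathlib import Path
from typing import Any, Optional

import numpy as np
import yaml


def save_results_sketch(
    dataset_name: str,
    method_name: str,
    batch_id: str,
    pair_id,
    config: dict,
    predictions: np.ndarray,
    results: Optional[dict] = None,
    root: Path = Path("results"),  # added for the sketch; the real function uses results/
) -> Path:
    # Mirror the documented layout:
    # results/{dataset_name}/{method_name}/{batch_id}/pair{pair_id}/
    run_dir = root / dataset_name / method_name / batch_id / f"pair{pair_id}"
    run_dir.mkdir(parents=True, exist_ok=True)

    # Always write the run configuration and predictions.
    with open(run_dir / "config.yaml", "w") as f:
        yaml.safe_dump(config, f)
    np.save(run_dir / "predictions.npy", predictions)

    # evaluation_results.yaml is written only when results are provided.
    if results is not None:
        with open(run_dir / "evaluation_results.yaml", "w") as f:
            yaml.safe_dump(results, f)

    return run_dir
```

Calling the sketch with `results=None` produces a run directory containing only `config.yaml` and `predictions.npy`; passing a metric dictionary adds `evaluation_results.yaml` alongside them, and the returned `Path` points at the `pair{pair_id}` directory.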