ctf4science.eval_module
Evaluation module for CTF models; provides evaluation metrics and routines for CTF datasets.
This module handles evaluation of CTF models against a hidden test set. It also assesses model stability by running models multiple times with different random seeds.
Functions

- Compute the natural log of the averaged PSD over the last k time steps.
- Compute the averaged power spectral density over the last k time steps for the given modes.
- Evaluate the prediction using specified metrics; ground truth is loaded internally.
- Evaluate the prediction against a provided truth array using specified metrics.
- Evaluate the predictions from a Kaggle CSV file.
- Extract metric values from batch results in the order defined by the dataset config.
- Compute the long-time forecast score for dynamical systems (histogram-based).
- Compute the long-time forecast score for spatio-temporal systems (spectral).
- Compute the reconstruction score (relative L2 over full trajectory, as percentage).
- Save configuration, predictions, and optional evaluation results for a run.
- Compute the short-time forecast score (relative L2 over first k steps, as percentage).
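The averaged-PSD entries above can be sketched roughly as follows. This is a minimal illustration, not the package's actual implementation: the function names, the shape convention (space on the first axis, time on the last), and the `eps` floor in the log are all assumptions.

```python
import numpy as np

def averaged_psd(data, k):
    """Averaged power spectral density over the last k time steps.

    Assumes data has shape (n_space, n_time). The PSD is taken along
    the spatial axis for each of the last k snapshots, then averaged
    over those snapshots.
    """
    last = data[:, -k:]                           # last k snapshots
    psd = np.abs(np.fft.fft(last, axis=0)) ** 2   # power spectrum per snapshot
    return psd.mean(axis=1)                       # average over time

def log_averaged_psd(data, k, eps=1e-12):
    """Natural log of the averaged PSD; eps guards against log(0)."""
    return np.log(averaged_psd(data, k) + eps)
```

Mode selection (the "given modes" in the summary) would amount to indexing the returned spectrum before or after averaging.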
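The histogram-based long-time forecast score compares the statistics of a long trajectory rather than its pointwise values. A plausible sketch, under the assumption that the score is one minus half the L1 distance between normalized histograms (so 1.0 means identical distributions, 0.0 means disjoint ones); the function name, bin count, and distance choice are illustrative, not the package's definition:

```python
import numpy as np

def histogram_score(pred, truth, bins=50):
    """Similarity of long-time value distributions via histograms.

    Builds density histograms of predicted and true values over a
    common range and returns 1 - 0.5 * L1 distance between them.
    """
    lo = min(pred.min(), truth.min())
    hi = max(pred.max(), truth.max())
    hp, _ = np.histogram(pred, bins=bins, range=(lo, hi), density=True)
    ht, edges = np.histogram(truth, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return 1.0 - 0.5 * width * np.abs(hp - ht).sum()
```

A distributional score like this is insensitive to phase drift, which is why it suits chaotic dynamical systems where pointwise errors grow regardless of model quality.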
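The short-time forecast and reconstruction scores are both described as relative L2 errors expressed as percentages, differing only in the window compared. A minimal sketch of that shared computation; the function name, the time-on-last-axis convention, and the optional `k` parameter are assumptions for illustration:

```python
import numpy as np

def relative_l2_percent(pred, truth, k=None):
    """Relative L2 error between prediction and truth, in percent.

    If k is given, only the first k time steps are compared
    (short-time forecast score); otherwise the full trajectory is
    used (reconstruction score). Time is assumed on the last axis.
    """
    if k is not None:
        pred, truth = pred[..., :k], truth[..., :k]
    return 100.0 * np.linalg.norm(pred - truth) / np.linalg.norm(truth)
```

For example, a prediction uniformly 10% above the truth yields a score of 10.0.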