Evaluation Submodule
SRToolkit.evaluation
Classes and functions for evaluating symbolic regression approaches.
Modules:
| Name | Description |
|---|---|
| parameter_estimator | ParameterEstimator — fits free constants in expressions and ranks them by RMSE. |
| sr_evaluator | SR_evaluator and SR_results — expression evaluation and result management. |
| result_augmentation | ResultAugmenter implementations that post-process results with LaTeX, simplified forms, RMSE, BED, and R² scores. |
| callbacks | SRCallbacks and CallbackDispatcher — event-driven hooks for monitoring and early stopping during evaluation. |
BestExpressionFound
dataclass
Fired when a new best expression is found during evaluation.
Attributes:
| Name | Type | Description |
|---|---|---|
| experiment_id | str | Identifier of the current experiment. |
| expression | str | String representation of the new best expression. |
| error | float | Error value of the new best expression. |
| evaluation_number | int | Total number of evaluate_expr calls made at the time this event is fired. |
CallbackDispatcher
Manages multiple SRCallbacks instances and dispatches events to all of them.
Examples:
>>> dispatcher = CallbackDispatcher()
>>> dispatcher.add(EarlyStoppingCallback(threshold=1e-6))
>>> len(dispatcher._callbacks)
1
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| callbacks | Optional[List[SRCallbacks]] | Initial list of callbacks. Defaults to an empty list. | None |
Source code in SRToolkit/evaluation/callbacks.py
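A typical way to wire a dispatcher into an evaluator, shown as a minimal sketch (the SR_evaluator import path is assumed from the module layout; the progress bar is drawn on the console and not shown):
>>> import numpy as np
>>> from SRToolkit.evaluation.callbacks import CallbackDispatcher, EarlyStoppingCallback, ProgressBarCallback
>>> from SRToolkit.evaluation.sr_evaluator import SR_evaluator  # import path assumed
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> dispatcher = CallbackDispatcher([EarlyStoppingCallback(threshold=1e-6)])
>>> dispatcher.add(ProgressBarCallback(desc="demo"))
>>> se = SR_evaluator(X, y, seed=42)
>>> se.set_callbacks(dispatcher)
>>> _ = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])  # both callbacks receive the events
>>> len(dispatcher.get_callbacks())
2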
get_callbacks
Returns the list of callbacks.
Returns:
| Type | Description |
|---|---|
| List[SRCallbacks] | A list of SRCallbacks instances in this dispatcher. |
add
Add a callback to the dispatcher.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| callback | SRCallbacks | The SRCallbacks instance to add. | required |
remove
Remove a callback from the dispatcher.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| callback | SRCallbacks | The SRCallbacks instance to remove. | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If the given callback is not registered in this dispatcher. |
Source code in SRToolkit/evaluation/callbacks.py
on_expr_evaluated
Dispatch to all callbacks and aggregate the stop signal.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExprEvaluated | Data about the evaluated expression. | required |
Returns:
| Type | Description |
|---|---|
| bool | The aggregated stop signal from all callbacks. |
Source code in SRToolkit/evaluation/callbacks.py
on_best_expression
Dispatch to all callbacks and aggregate the stop signal.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | BestExpressionFound | Data about the new best expression. | required |
Returns:
| Type | Description |
|---|---|
| bool | The aggregated stop signal from all callbacks. |
Source code in SRToolkit/evaluation/callbacks.py
on_experiment_start
Dispatch to all callbacks.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExperimentEvent | Data about the experiment that is about to begin. | required |
Source code in SRToolkit/evaluation/callbacks.py
on_experiment_end
Dispatch to all callbacks.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExperimentEvent | Data about the experiment that just ended. | required |
| results | EvalResult | Final EvalResult for this experiment. | required |
Source code in SRToolkit/evaluation/callbacks.py
EarlyStoppingCallback
Bases: SRCallbacks
Stops the search when the best expression error falls below a threshold.
Examples:
>>> cb = EarlyStoppingCallback(threshold=1e-6)
>>> cb.on_best_expression(BestExpressionFound("", "X_0", 1e-7, 42))
False
>>> cb.on_best_expression(BestExpressionFound("", "X_0", 1e-5, 43))
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| threshold | Optional[float] | Error value below which the search is stopped. | required |
Source code in SRToolkit/evaluation/callbacks.py
ExperimentEvent
dataclass
ExperimentEvent(dataset_name: str, approach_name: str, max_evaluations: Optional[int], success_threshold: Optional[float], seed: Optional[int])
Fired at experiment start and end.
Attributes:
| Name | Type | Description |
|---|---|---|
| dataset_name | str | Name of the dataset being evaluated. |
| approach_name | str | Name of the SR approach being run. |
| max_evaluations | Optional[int] | Maximum number of evaluations allowed for this experiment. |
| success_threshold | Optional[float] | Error threshold for success, or None if not set. |
| seed | Optional[int] | Random seed used for this experiment, or None if not set. |
ExprEvaluated
dataclass
ExprEvaluated(expression: str, error: float, evaluation_number: int, experiment_id: str, is_new_best: bool)
Fired after each expression is evaluated by evaluate_expr.
Attributes:
| Name | Type | Description |
|---|---|---|
| expression | str | String representation of the evaluated expression. |
| error | float | Error value returned by the ranking function (RMSE or BED). |
| evaluation_number | int | Total number of evaluate_expr calls made so far, including cache hits. |
| experiment_id | str | Identifier of the current experiment. |
| is_new_best | bool | Whether this expression is the new best found so far. |
LoggingCallback
Bases: SRCallbacks
Logs each new best expression to stdout or a file.
log_file may contain placeholders that are resolved at experiment start
using fields from ExperimentEvent.
Available placeholders: {dataset_name}, {approach_name}, and {seed}. Using per-experiment placeholders (e.g. {seed}) gives each job its own file, which is the recommended approach for parallel execution.
When multiple jobs share the same resolved file path, writes are protected
by fcntl.flock (POSIX advisory locking) so concurrent processes on
Linux / macOS do not corrupt each other's output. On Windows or network
filesystems where flock is unavailable the lock is silently skipped.
Examples:
>>> cb = LoggingCallback()
>>> cb.on_best_expression(BestExpressionFound("Nguyen-1_ProGED_42", "X_0+C", 0.001, 10))
[Experiment Nguyen-1_ProGED_42] New best: X_0+C (error=1.000000e-03)
>>> cb = LoggingCallback(log_file="logs/{dataset_name}_{seed}.log")
>>> cb.on_experiment_start(ExperimentEvent(dataset_name="test", max_evaluations=10, seed=1,
... success_threshold=0, approach_name="ta"))
>>> cb._resolved_log_file
'logs/test_1.log'
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| log_file | Optional[str] | Destination for log messages. If None, messages are printed to stdout. | None |
Source code in SRToolkit/evaluation/callbacks.py
ProgressBarCallback
Bases: SRCallbacks
Displays a tqdm progress bar that updates after each expression evaluation.
Examples:
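A minimal usage sketch (X and y as in the SR_evaluator examples below; the tqdm bar is drawn on the console and therefore not shown):
>>> cb = ProgressBarCallback(desc="MyApproach on my dataset")
>>> se = SR_evaluator(X, y, max_evaluations=100, seed=42)
>>> se.set_callbacks(cb)
>>> _ = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])  # the bar advances after each evaluation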
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| desc | Optional[str] | Description label shown on the progress bar. If None, a default description is used. | None |
Source code in SRToolkit/evaluation/callbacks.py
SRCallbacks
Bases: ABC
Abstract base class for SR evaluation callbacks.
Implement only the methods you need. Return False from
on_expr_evaluated or
on_best_expression
to request early stopping; return True or None to continue.
Examples:
>>> class PrintBestCallback(SRCallbacks):
... def on_best_expression(self, event):
... print(f"New best: {event.expression} (error={event.error:.4g})")
>>> cb = PrintBestCallback()
>>> cb.on_best_expression(BestExpressionFound("", "X_0+C", 0.01, 5))
New best: X_0+C (error=0.01)
on_expr_evaluated
Called after each expression is evaluated.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExprEvaluated | Data about the evaluated expression. | required |
Returns:
| Type | Description |
|---|---|
| Optional[bool] | False to request early stopping; True or None to continue. |
Source code in SRToolkit/evaluation/callbacks.py
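As an illustration of the stop-signal convention, here is a minimal, hypothetical callback (the name BudgetCallback and its max_calls parameter are not part of the library) that requests early stopping once a fixed number of evaluate_expr calls has been made:
>>> class BudgetCallback(SRCallbacks):
...     def __init__(self, max_calls):
...         self.max_calls = max_calls
...     def on_expr_evaluated(self, event):
...         # False requests early stopping once the budget is spent
...         return event.evaluation_number < self.max_calls
>>> cb = BudgetCallback(max_calls=2)
>>> cb.on_expr_evaluated(ExprEvaluated("X_0", 0.5, 1, "exp-1", False))
True
>>> cb.on_expr_evaluated(ExprEvaluated("X_1", 0.4, 2, "exp-1", True))
False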
on_best_expression
Called when a new best expression is found.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | BestExpressionFound | Data about the new best expression. | required |
Returns:
| Type | Description |
|---|---|
| Optional[bool] | False to request early stopping; True or None to continue. |
Source code in SRToolkit/evaluation/callbacks.py
on_experiment_start
Called before an experiment starts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExperimentEvent | Data about the experiment that is about to begin. | required |
on_experiment_end
Called after an experiment completes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| event | ExperimentEvent | Data about the experiment that just ended. | required |
| results | EvalResult | Final EvalResult for this experiment. | required |
Source code in SRToolkit/evaluation/callbacks.py
to_dict
Serialise this callback to a JSON-safe dictionary.
The default implementation stores only the fully-qualified class path. Override in subclasses to include constructor parameters so that from_dict can reconstruct a functionally identical instance.
Returns:
| Type | Description |
|---|---|
| dict | A JSON-safe dict with at least an entry identifying the fully-qualified class path. |
Source code in SRToolkit/evaluation/callbacks.py
from_dict
classmethod
Reconstruct a callback from a serialised dictionary.
The default implementation calls cls() with no arguments. Override in
subclasses that require constructor parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| d | dict | Dictionary produced by to_dict. | required |
Returns:
| Type | Description |
|---|---|
| SRCallbacks | A new instance of this callback class. |
Source code in SRToolkit/evaluation/callbacks.py
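A sketch of how a callback with constructor parameters might implement serialization (ThresholdCallback is hypothetical, and the exact keys written by the default to_dict are not reproduced here, only extended):
>>> class ThresholdCallback(SRCallbacks):
...     def __init__(self, threshold):
...         self.threshold = threshold
...     def to_dict(self):
...         d = super().to_dict()  # default stores the fully-qualified class path
...         d["threshold"] = self.threshold
...         return d
...     @classmethod
...     def from_dict(cls, d):
...         return cls(threshold=d["threshold"])
>>> cb = ThresholdCallback(1e-6)
>>> ThresholdCallback.from_dict(cb.to_dict()).threshold
1e-06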
ParameterEstimator
ParameterEstimator(X: ndarray, y: ndarray, symbol_library: SymbolLibrary = SymbolLibrary.default_symbols(), seed: Optional[int] = None, **kwargs: Unpack[EstimationSettings])
Fits free constants in symbolic expressions by minimizing RMSE against target values.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> pe = ParameterEstimator(X, y)
>>> rmse, constants = pe.estimate_parameters(["C", "*", "X_1", "-", "X_0"])
>>> print(rmse < 1e-6)
True
>>> print(1.99 < constants[0] < 2.01)
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | Input data of shape (n_samples, n_features). | required |
| y | ndarray | Target values of shape (n_samples,). | required |
| symbol_library | SymbolLibrary | Symbol library defining the token vocabulary. Defaults to SymbolLibrary.default_symbols. | default_symbols() |
| seed | Optional[int] | Random seed for reproducible constant initialization. | None |
| **kwargs | Unpack[EstimationSettings] | Optional estimation settings; see EstimationSettings for the supported keys. | {} |
Attributes:
| Name | Type | Description |
|---|---|---|
| symbol_library | | The symbol library used. |
| X | | Input data. |
| y | | Target values. |
| seed | | Random seed. |
| estimation_settings | | Active settings dict, merged from defaults and **kwargs. |
Source code in SRToolkit/evaluation/parameter_estimator.py
estimate_parameters
Fit free constants in expr by minimizing RMSE against the target values.
Expressions that exceed max_constants or max_expr_length immediately
return (NaN, []). Expressions with no free constants are evaluated directly
without running the optimizer.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> pe = ParameterEstimator(X, y)
>>> rmse, constants = pe.estimate_parameters(["C", "*", "X_1", "-", "X_0"])
>>> print(rmse < 1e-6)
True
>>> print(1.99 < constants[0] < 2.01)
True
>>> # Constant-free expressions are evaluated directly
>>> rmse, constants = pe.estimate_parameters(["X_1", "-", "X_0"])
>>> constants.size
0
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| expr | Union[List[str], Node] | Expression as a token list in infix notation or a Node tree. | required |
Returns:
| Type | Description |
|---|---|
| Tuple[float, ndarray] | A 2-tuple (rmse, constants): the RMSE of the fitted expression and the fitted constant values. |
Source code in SRToolkit/evaluation/parameter_estimator.py
BED
Bases: ResultAugmenter
Computes BED for the top models using a separate evaluator (e.g. a held-out test set).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluator | SR_evaluator | SR_evaluator used to score the models. Must be initialized with ranking_function="bed". | required |
| scope | str | Which expressions to score: "top" or "all". | 'top' |
| name | str | Key used in the augmentations dict. | 'BED' |
Raises:
| Type | Description |
|---|---|
| Exception | If the provided evaluator is not configured for BED scoring. |
Source code in SRToolkit/evaluation/result_augmentation.py
write_results
Write BED scores into results and its models.
Stores {"best_expr_bed": ...} in
EvalResult augmentations and
{"bed": ...} in each model's augmentations when scope is "top" or "all".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/result_augmentation.py
format_eval_result
classmethod
Format experiment-level BED data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "best_expr_bed" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
format_model_result
classmethod
Format per-model BED data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "bed" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
to_dict
Creates a dictionary representation of the BED augmenter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Base path under which the evaluator's data is saved to disk. | required |
| name | str | Name/identifier used when saving the evaluator's data to disk. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
from_dict
staticmethod
Creates an instance of the BED augmenter from a dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | A dictionary containing the necessary information to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| BED | An instance of the BED augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
R2
Bases: ResultAugmenter
Computes R² for the top models using a separate evaluator (e.g. a held-out test set).
The same evaluator instance can be shared with RMSE to avoid loading test data twice.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluator | SR_evaluator | SR_evaluator used to score the models. Must be initialized with target values y. | required |
| scope | str | Which expressions to score: "top" or "all". | 'top' |
| name | str | Key used in the augmentations dict. | 'R2' |
Raises:
| Type | Description |
|---|---|
| Exception | If the provided evaluator is not configured for R² scoring. |
Source code in SRToolkit/evaluation/result_augmentation.py
write_results
Write R² scores into results and its models.
Stores {"best_expr_r^2": ...} in
EvalResult augmentations and
{"r^2": ..., "parameters_r^2": ...} in each model's augmentations when scope
is "top" or "all".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/result_augmentation.py
format_eval_result
classmethod
Format experiment-level R² data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "best_expr_r^2" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
format_model_result
classmethod
Format per-model R² data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "r^2" and "parameters_r^2" entries. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string with R² and fitted parameters. |
Source code in SRToolkit/evaluation/result_augmentation.py
to_dict
Creates a dictionary representation of the R2 augmenter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Base path under which the evaluator's data is saved to disk. | required |
| name | str | Name/identifier used when saving the evaluator's data to disk. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
from_dict
staticmethod
Creates an instance of the R2 augmenter from a dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | A dictionary containing the necessary information to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| R2 | An instance of the R2 augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
RMSE
Bases: ResultAugmenter
Computes RMSE for the top models using a separate evaluator (e.g. a held-out test set).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| evaluator | SR_evaluator | SR_evaluator used to score the models. Must be initialized with target values y. | required |
| scope | str | Which expressions to score: "top" or "all". | 'top' |
| name | str | Key used in the augmentations dict. | 'RMSE' |
Raises:
| Type | Description |
|---|---|
| Exception | If the provided evaluator is not configured for RMSE scoring. |
Source code in SRToolkit/evaluation/result_augmentation.py
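A minimal sketch of re-scoring results on a held-out split (X_test and y_test are made-up arrays; the stored keys follow the write_results description below):
>>> import numpy as np
>>> from SRToolkit.evaluation.result_augmentation import RMSE
>>> from SRToolkit.evaluation.sr_evaluator import SR_evaluator  # import path assumed
>>> X_train, y_train = np.array([[1, 2], [8, 4], [5, 4], [7, 9]]), np.array([3, 0, 3, 11])
>>> X_test, y_test = np.array([[2, 3], [4, 1]]), np.array([4, -2])
>>> se = SR_evaluator(X_train, y_train, seed=42)
>>> _ = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> results.augment([RMSE(evaluator=SR_evaluator(X_test, y_test, seed=42))])
>>> "min_error" in results[0].augmentations["RMSE"]
True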
write_results
Write RMSE scores into results and its models.
Stores {"min_error": ...} in
EvalResult augmentations and
{"error": ..., "parameters": ...} in each model's augmentations when scope
is "top" or "all".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/result_augmentation.py
format_eval_result
classmethod
Format experiment-level RMSE data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "min_error" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
format_model_result
classmethod
Format per-model RMSE data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "error" and "parameters" entries. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string with RMSE and fitted parameters. |
Source code in SRToolkit/evaluation/result_augmentation.py
to_dict
Creates a dictionary representation of the RMSE augmenter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Base path under which the evaluator's data is saved to disk. | required |
| name | str | Name/identifier used when saving the evaluator's data to disk. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
from_dict
staticmethod
Creates an instance of the RMSE augmenter from a dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | A dictionary containing the necessary information to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| RMSE | An instance of the RMSE augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
EvalResult
dataclass
EvalResult(min_error: float, best_expr: str, num_evaluated: int, evaluation_calls: int, top_models: List[ModelResult], all_models: List[ModelResult], approach_name: str, success: bool, dataset_name: Optional[str] = None, metadata: Optional[dict] = None, augmentations: Dict[str, Dict[str, Any]] = dict())
Result for a single SR experiment, as returned by SR_results[i].
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05,
... best_expr="X_0",
... num_evaluated=500,
... evaluation_calls=612,
... top_models=[model],
... all_models=[model],
... approach_name="MyApproach",
... success=True,
... )
>>> result.min_error
0.05
>>> result.success
True
>>> result.dataset_name is None
True
Attributes:
| Name | Type | Description |
|---|---|---|
| min_error | float | Lowest error achieved across all evaluated expressions. |
| best_expr | str | String representation of the best expression found. |
| num_evaluated | int | Number of unique expressions evaluated. |
| evaluation_calls | int | Number of times evaluate_expr was called, including cache hits. |
| top_models | List[ModelResult] | Top-k models sorted by error. |
| all_models | List[ModelResult] | All evaluated models sorted by error. |
| approach_name | str | Name of the SR approach, or empty string if not provided. |
| success | bool | Whether the minimum error fell below the success threshold. |
| dataset_name | Optional[str] | Name of the dataset, extracted from metadata. |
| metadata | Optional[dict] | Remaining metadata dict after the dataset name has been extracted. |
| augmentations | Dict[str, Dict[str, Any]] | Per-augmenter data keyed by augmenter name. Populated by ResultAugmenter subclasses via add_augmentation. |
add_augmentation
Attach augmentation data produced by a ResultAugmenter to this result.
If name is already present in augmentations, a numeric suffix is appended (name_1, name_2, …) to avoid overwriting existing data.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> result.add_augmentation("complexity", {"value": 3}, "ComplexityAugmenter")
>>> result.augmentations["complexity"]["value"]
3
>>> result.add_augmentation("complexity", {"value": 5}, "ComplexityAugmenter")
>>> "complexity_1" in result.augmentations
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Key under which the augmentation is stored in augmentations. | required |
| data | Dict[str, Any] | Arbitrary dict of augmentation data. A _type entry is added automatically. | required |
| aug_type | str | Augmenter class name, stored as the _type entry of the data. | required |
Source code in SRToolkit/utils/types.py
to_dict
Serialize this evaluation result to a JSON-safe dictionary.
NumPy arrays and scalars within nested ModelResult entries are
converted to native Python types so the result can be passed directly
to json.dump.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> d = result.to_dict()
>>> d["min_error"]
0.05
>>> d["approach_name"]
'MyApproach'
>>> len(d["top_models"])
1
Returns:
| Type | Description |
|---|---|
| dict | A JSON-safe dictionary suitable for passing to json.dump or from_dict. |
Source code in SRToolkit/utils/types.py
from_dict
staticmethod
Reconstruct an EvalResult from a dictionary produced by to_dict.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> result2 = EvalResult.from_dict(result.to_dict())
>>> result2.min_error
0.05
>>> result2.best_expr
'X_0'
>>> len(result2.top_models)
1
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | Dictionary representation of an EvalResult, as produced by to_dict. | required |
Returns:
| Type | Description |
|---|---|
| EvalResult | The reconstructed EvalResult. |
Source code in SRToolkit/utils/types.py
ExpressionSimplifier
ExpressionSimplifier(symbol_library: SymbolLibrary, scope: str = 'top', verbose: bool = False, name: str = 'ExpressionSimplifier')
Bases: ResultAugmenter
Algebraically simplifies expressions inside the results using SymPy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
symbol_library
|
SymbolLibrary
|
Symbol library used by the simplifier to resolve token types. |
required |
scope
|
str
|
Which expressions to simplify.
|
'top'
|
verbose
|
bool
|
If |
False
|
name
|
str
|
Key used in
|
'ExpressionSimplifier'
|
Source code in SRToolkit/evaluation/result_augmentation.py
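A minimal sketch of simplifying stored expressions (import paths for SymbolLibrary and SR_evaluator are assumed; the simplified keys are stored only when SymPy succeeds, so no output is asserted):
>>> from SRToolkit.evaluation.result_augmentation import ExpressionSimplifier
>>> from SRToolkit.utils import SymbolLibrary  # import path assumed
>>> se = SR_evaluator(X, y, seed=42)           # X, y as in the SR_evaluator examples below
>>> _ = se.evaluate_expr(["C", "+", "C", "*", "X_0"])
>>> results = se.get_results(top_k=1)
>>> results.augment([ExpressionSimplifier(SymbolLibrary.default_symbols(2))])
>>> aug = results[0].augmentations.get("ExpressionSimplifier", {})  # contains "simplified_best_expr" when simplification succeeds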
write_results
Write simplified expressions into results and its models.
Stores {"simplified_best_expr": ...} in
EvalResult augmentations if
simplification succeeds. Also stores {"simplified_expr": ...} in each model's
augmentations when scope is "top" or "all".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/result_augmentation.py
format_eval_result
classmethod
Format experiment-level simplification data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "simplified_best_expr" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
format_model_result
classmethod
Format per-model simplification data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "simplified_expr" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
to_dict
Creates a dictionary representation of the ExpressionSimplifier augmenter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Unused and ignored. | required |
| name | str | Unused and ignored. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
from_dict
staticmethod
Creates an instance of the ExpressionSimplifier augmenter from a dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | A dictionary containing the necessary information to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| ExpressionSimplifier | An instance of the ExpressionSimplifier augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
ExpressionToLatex
ExpressionToLatex(symbol_library: SymbolLibrary, scope: str = 'top', verbose: bool = False, name: str = 'ExpressionToLatex')
Bases: ResultAugmenter
Converts expressions inside the results to LaTeX strings.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| symbol_library | SymbolLibrary | Symbol library used to produce LaTeX templates for each token. | required |
| scope | str | Which expressions to convert: "top" or "all". | 'top' |
| verbose | bool | If True, prints additional diagnostic output. | False |
| name | str | Key used in the augmentations dict. | 'ExpressionToLatex' |
Source code in SRToolkit/evaluation/result_augmentation.py
write_results
Write LaTeX representations into results and its models.
Stores {"best_expr_latex": ...} in
EvalResult augmentations.
Also stores {"expr_latex": ...} in each model's augmentations when
scope is "top" or "all".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/result_augmentation.py
format_eval_result
classmethod
Format experiment-level LaTeX augmentation data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "best_expr_latex" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
format_model_result
classmethod
Format per-model LaTeX augmentation data for display.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Augmentation dict containing the "expr_latex" entry. | required |
Returns:
| Type | Description |
|---|---|
| str | A human-readable string, or empty string if no data is present. |
Source code in SRToolkit/evaluation/result_augmentation.py
to_dict
Creates a dictionary representation of the ExpressionToLatex augmenter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Unused and ignored. | required |
| name | str | Unused and ignored. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
from_dict
staticmethod
Creates an instance of the ExpressionToLatex augmenter from a dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | A dictionary containing the necessary information to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| ExpressionToLatex | An instance of the ExpressionToLatex augmenter. |
Source code in SRToolkit/evaluation/result_augmentation.py
ModelResult
dataclass
ModelResult(expr: List[str], error: float, parameters: Optional[ndarray] = None, augmentations: Dict[str, Dict[str, Any]] = dict())
A single model entry in EvalResult.top_models and EvalResult.all_models.
Examples:
>>> result = ModelResult(expr=["C", "*", "X_0"], error=0.42)
>>> result.expr
['C', '*', 'X_0']
>>> result.error
0.42
>>> result.parameters is None
True
Attributes:
| Name | Type | Description |
|---|---|---|
| expr | List[str] | Token list representing the expression, e.g. ["C", "*", "X_0"]. |
| error | float | Numeric error under the ranking function (RMSE or BED). |
| parameters | Optional[ndarray] | Fitted constant values. Present for RMSE ranking only, None otherwise. |
| augmentations | Dict[str, Dict[str, Any]] | Per-augmenter data keyed by augmenter name. Populated by ResultAugmenter subclasses via add_augmentation. |
add_augmentation
Attach augmentation data produced by a ResultAugmenter to this result.
If name is already present in augmentations, a numeric suffix is appended (name_1, name_2, …) to avoid overwriting existing data.
Examples:
>>> result = ModelResult(expr=["X_0"], error=0.1)
>>> result.add_augmentation("latex", {"value": "$X_0$"}, "LaTeXAugmenter")
>>> result.augmentations["latex"]["value"]
'$X_0$'
>>> result.add_augmentation("latex", {"value": "$X_0$"}, "LaTeXAugmenter")
>>> "latex_1" in result.augmentations
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Key under which the augmentation is stored in augmentations. | required |
| data | Dict[str, Any] | Arbitrary dict of augmentation data. A _type entry is added automatically. | required |
| aug_type | str | Augmenter class name, stored as the _type entry of the data. | required |
Source code in SRToolkit/utils/types.py
to_dict
Serialize this model result to a JSON-safe dictionary.
NumPy arrays and scalars are converted to native Python types so the
result can be passed directly to json.dump.
Examples:
>>> result = ModelResult(expr=["X_0", "+", "C"], error=0.25)
>>> d = result.to_dict()
>>> d["expr"]
['X_0', '+', 'C']
>>> d["error"]
0.25
>>> d["parameters"] is None
True
Returns:
| Type | Description |
|---|---|
| dict | A JSON-safe dictionary suitable for passing to json.dump or from_dict. |
Source code in SRToolkit/utils/types.py
from_dict
staticmethod
Reconstruct a ModelResult from a dictionary produced by to_dict.
Examples:
>>> result = ModelResult(expr=["X_0", "+", "C"], error=0.25)
>>> result2 = ModelResult.from_dict(result.to_dict())
>>> result2.expr
['X_0', '+', 'C']
>>> result2.error
0.25
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | Dictionary representation of a ModelResult, as produced by to_dict. | required |
Returns:
| Type | Description |
|---|---|
| ModelResult | The reconstructed ModelResult. |
Source code in SRToolkit/utils/types.py
ResultAugmenter
Bases: ABC
Base class for result augmenters. Subclasses implement write_results to compute and store additional data in an EvalResult via add_augmentation.
For concrete implementations, see result_augmentation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Identifier used as the key in the augmentations dict. | required |
Source code in SRToolkit/evaluation/sr_evaluator.py
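A minimal sketch of a custom augmenter (the class ExpressionLength and its stored keys are hypothetical; the base-class constructor is assumed to accept the name listed above):
>>> class ExpressionLength(ResultAugmenter):
...     def __init__(self, name="ExpressionLength"):
...         super().__init__(name)  # assumed constructor signature
...     def write_results(self, results):
...         # experiment-level data (the documented pattern uses self._type; a literal class name is used here for simplicity)
...         results.add_augmentation(self.name, {"best_expr_length": len(results.best_expr)}, "ExpressionLength")
...         # per-model data
...         for model in results.top_models:
...             model.add_augmentation(self.name, {"length": len(model.expr)}, "ExpressionLength")
...     def to_dict(self, base_path, name):
...         return {"name": self.name}
>>> results.augment([ExpressionLength()])  # `results` as produced by SR_evaluator.get_results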
write_results
abstractmethod
Compute and write augmentation data into results and its models.
Call results.add_augmentation(self.name, data, self._type) for experiment-level
data and model.add_augmentation(self.name, data, self._type) for per-model data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | EvalResult | The EvalResult to augment. | required |
Source code in SRToolkit/evaluation/sr_evaluator.py
to_dict
abstractmethod
Transforms the augmenter into a dictionary. This is used for saving the augmenter to disk.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | The base path used for saving the data inside the augmenter, if needed. | required |
| name | str | The name/identifier used by the augmenter for saving. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the augmenter. |
Source code in SRToolkit/evaluation/sr_evaluator.py
format_eval_result
classmethod
Returns a formatted string for experiment-level augmentation data.
Subclasses override this for custom formatting. The data dict is the inner
augmentation dictionary (includes _type).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | The augmentation data dictionary. | required |
Returns:
| Type | Description |
|---|---|
| str | A formatted string, or empty string if no relevant data exists. |
Source code in SRToolkit/evaluation/sr_evaluator.py
format_model_result
classmethod
Returns a formatted string for a single model's augmentation data.
Subclasses override this for custom formatting. The data dict is the inner
augmentation dictionary (includes _type).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | The augmentation data dictionary. | required |
Returns:
| Type | Description |
|---|---|
| str | A formatted string, or empty string if no relevant data exists. |
Source code in SRToolkit/evaluation/sr_evaluator.py
from_dict
staticmethod
Creates an instance of the ResultAugmenter class from the dictionary with the relevant data.
Subclasses should override this method if they support serialization. The default
implementation raises NotImplementedError, allowing custom augmenters to skip
serialization if not needed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | The dictionary containing the data needed to recreate the augmenter. | required |
Returns:
| Type | Description |
|---|---|
| ResultAugmenter | An instance of the ResultAugmenter class with the same configuration as in the data dictionary. |
Raises:
| Type | Description |
|---|---|
| NotImplementedError | If the subclass does not implement this method. |
Source code in SRToolkit/evaluation/sr_evaluator.py
SR_evaluator
SR_evaluator(X: ndarray, y: Optional[ndarray] = None, symbol_library: SymbolLibrary = SymbolLibrary.default_symbols(), max_evaluations: int = -1, success_threshold: Optional[float] = None, ranking_function: str = 'rmse', ground_truth: Optional[Union[List[str], Node, ndarray]] = None, seed: Optional[int] = None, metadata: Optional[dict] = None, **kwargs: Unpack[EstimationSettings])
Evaluates symbolic regression expressions and ranks them by RMSE or Behavioral Expression Distance (BED).
Previously evaluated expressions are cached so repeated calls with the same expression are free. Results are collected via get_results.
Note
Determining whether two expressions are semantically equivalent is undecidable.
Random sampling, parameter fitting, and numerical errors all make the
success_threshold only a proxy for success — we recommend inspecting the best
expression manually.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> print(rmse < 1e-6)
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | Input data of shape (n_samples, n_features). | required |
| y | Optional[ndarray] | Target values of shape (n_samples,). | None |
| symbol_library | SymbolLibrary | Symbol library defining the token vocabulary. Defaults to SymbolLibrary.default_symbols. | default_symbols() |
| max_evaluations | int | Maximum number of expressions to evaluate. | -1 |
| success_threshold | Optional[float] | Error value below which an expression is considered successful. If None, no success threshold is applied. | None |
| ranking_function | str | Ranking function used to score expressions, either "rmse" or "bed". | 'rmse' |
| ground_truth | Optional[Union[List[str], Node, ndarray]] | Ground truth used for BED; required when ranking_function is "bed". | None |
| seed | Optional[int] | Random seed for reproducible sampling. | None |
| metadata | Optional[dict] | Optional dict with information about this evaluation (e.g. dataset name, seed). If a "dataset_name" entry is present, it is used as the dataset name in the results. | None |
| **kwargs | Unpack[EstimationSettings] | Optional settings from EstimationSettings; see EstimationSettings for the supported keys. | {} |
Attributes:
| Name | Type | Description |
|---|---|---|
| models | | Cached ModelResult for every evaluated expression, keyed by the concatenated token string. |
| invalid | | Token strings of expressions that raised an exception during evaluation. |
| ground_truth | | The target expression passed at construction (BED mode). |
| gt_behavior | | Pre-computed behavior matrix for the ground truth (BED mode). |
| max_evaluations | | Maximum number of expressions to evaluate. |
| bed_evaluation_parameters | | Active BED evaluation settings dict. |
| metadata | | Metadata dict passed at construction. |
| symbol_library | | The symbol library used. |
| total_evaluations | | Number of times evaluate_expr has been called, including cache hits. |
| seed | | Random seed. |
| parameter_estimator | | ParameterEstimator instance used in RMSE mode. |
| ranking_function | | Active ranking function ("rmse" or "bed"). |
| success_threshold | | Error threshold for determining success. |
Source code in SRToolkit/evaluation/sr_evaluator.py
set_callbacks
Register callbacks for monitoring and early stopping.
A single SRCallbacks instance is automatically wrapped in a CallbackDispatcher.
Examples:
>>> from SRToolkit.evaluation.callbacks import EarlyStoppingCallback
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y)
>>> se.set_callbacks(EarlyStoppingCallback(threshold=1e-6))
>>> se._callbacks is not None
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| callbacks | Optional[Union[SRCallbacks, CallbackDispatcher]] | A CallbackDispatcher or a single SRCallbacks instance. | None |
Source code in SRToolkit/evaluation/sr_evaluator.py
evaluate_expr
Evaluates an expression in infix notation and stores the result in memory to prevent re-evaluation.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y, seed=42)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> print(rmse < 1e-6)
True
>>> X = np.array([[0, 1], [0, 2], [0, 3]])
>>> y = np.array([2, 3, 4])
>>> se = SR_evaluator(X, y, seed=42, success_threshold=-1)
>>> rmse = se.evaluate_expr(["C", "+", "C", "*", "C", "+", "X_0", "*", "X_1", "/", "X_0"], simplify_expr=True)
>>> print(rmse < 1e-6)
True
>>> list(se.models.keys())[0]
'C+X_1'
>>> print(0.99 < se.models["C+X_1"].parameters[0] < 1.01)
True
>>> # Evaluating invalid expression returns nan and adds it to invalid list
>>> print(se.evaluate_expr(["C", "*", "X_1", "X_0"]))
nan
>>> se.invalid
['C*X_1X_0']
>>> X = np.random.rand(10, 2) - 0.5
>>> gt = ["X_0", "+", "C"]
>>> se = SR_evaluator(X, ground_truth=gt, ranking_function="bed", seed=42)
>>> print(se.evaluate_expr(["C", "+", "X_1"]) < 1)
True
>>> # When evaluating using BED as the ranking function, the error depends on the scale of output of the
>>> # ground truth. Because of stochasticity of BED, error might be high even when expressions match exactly.
>>> print(se.evaluate_expr(["C", "+", "X_0"]) < 0.2)
True
>>> # X can also be sampled from a domain by providing domain_bounds
>>> se = SR_evaluator(X, ground_truth=gt, ranking_function="bed", domain_bounds=[(-1, 1), (-1, 1)], seed=42)
>>> print(se.evaluate_expr(["C", "+", "X_0"]) < 0.2)
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| expr | Union[List[str], Node] | Expression as a token list in infix notation or a Node tree. | required |
| simplify_expr | bool | If True, the expression is algebraically simplified before evaluation and stored under its simplified form. | False |
| verbose | int | Verbosity level (0 = silent). | 0 |
Returns:
| Type | Description |
|---|---|
| float | The error of the expression under the active ranking function: RMSE when ranking_function="rmse", BED when ranking_function="bed". |
Source code in SRToolkit/evaluation/sr_evaluator.py
get_results
get_results(approach_name: str = '', top_k: int = 20, results: Optional[SR_results] = None) -> SR_results
Returns the results of the equation discovery / symbolic regression evaluation.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> print(results[0].num_evaluated)
1
>>> print(results[0].evaluation_calls)
1
>>> print(results[0].best_expr)
C*X_1-X_0
>>> print(results[0].min_error < 1e-6)
True
>>> print(1.99 < results[0].top_models[0].parameters[0] < 2.01)
True
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| approach_name | str | The name of the approach used to discover the equations. | '' |
| top_k | int | The number of top results to include in the output. | 20 |
| results | Optional[SR_results] | An SR_results object containing the results of a previous evaluation. If provided, the results of the current evaluation are appended to it; otherwise, a new SR_results object is created. | None |
Returns:
| Type | Description |
|---|---|
| SR_results | An SR_results instance with the results of the evaluation. |
Source code in SRToolkit/evaluation/sr_evaluator.py
to_dict
Creates a dictionary representation of the SR_evaluator.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| base_path | str | Base path under which the evaluator's data is saved to disk. | required |
| name | str | Name/identifier used when saving the evaluator's data to disk. | required |
Returns:
| Type | Description |
|---|---|
| dict | A dictionary containing the necessary information to recreate the evaluator from disk. |
Source code in SRToolkit/evaluation/sr_evaluator.py
from_dict
staticmethod
Reconstruct an SR_evaluator from a dictionary produced by to_dict.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | dict | Dictionary representation of the evaluator, as produced by to_dict. | required |
Returns:
| Type | Description |
|---|---|
| SR_evaluator | The reconstructed SR_evaluator. |
Raises:
| Type | Description |
|---|---|
| ValueError | If the dictionary does not contain the data needed to reconstruct the evaluator. |
Source code in SRToolkit/evaluation/sr_evaluator.py
SR_results
Container for SR experiment results, typically obtained via SR_evaluator.get_results.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y, seed=42)
>>> _ = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> print(results[0].best_expr)
C*X_1-X_0
>>> print(results[0].min_error < 1e-6)
True
>>> len(results)
1
Attributes:
| Name | Type | Description |
|---|---|---|
| results | | List of EvalResult instances, one per experiment. |
Source code in SRToolkit/evaluation/sr_evaluator.py
add_results
add_results(models: Dict[str, ModelResult], top_k: int, total_evaluations: int, success_threshold: Optional[float], approach_name: str, metadata: Optional[dict] = None) -> None
Adds the results of an evaluation to the results object.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| models | Dict[str, ModelResult] | A dictionary mapping expressions to their evaluation results. | required |
| top_k | int | The number of top results to include in the output. | required |
| total_evaluations | int | The total number of evaluations performed during the evaluation. | required |
| success_threshold | Optional[float] | The success threshold used to determine whether the evaluation was successful. | required |
| approach_name | str | The name of the approach used to discover the equations. | required |
| metadata | Optional[dict] | A dictionary containing additional metadata about the evaluation. | None |
Source code in SRToolkit/evaluation/sr_evaluator.py
print_results
print_results(experiment_number: Optional[int] = None, detailed: bool = False, model_scope: Literal['best', 'top', 'all'] = 'top', augmentations: Optional[List[str]] = None)
Prints the results of the SR_evaluator.
Displays the minimum error, best expression, evaluation counts, success status,
metadata, and approach name. When detailed is True, also prints per-model
information. Augmentation data is formatted by the corresponding
ResultAugmenter subclass,
looked up from the global registry via the _type field stored in each
augmentation entry.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y, seed=42)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> results.print_results()
=== Experiment 1/1 ===
Best expression: C*X_1-X_0
Error: ...
Evaluated: 1 expressions | Calls: 1 | Success: ...
>>> results.print_results(detailed=True, experiment_number=0)
Best expression: C*X_1-X_0
Error: ...
Evaluated: 1 expressions | Calls: 1 | Success: ...
Models:
C*X_1-X_0 (error=..., params=...)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| experiment_number | Optional[int] | Number of the experiment to print. If None, prints all. | None |
| detailed | bool | If True, prints per-model information. | False |
| model_scope | Literal['best', 'top', 'all'] | Which models to show when detailed is True: "best" shows only the best model, "top" the top-k models, and "all" every evaluated model. | 'top' |
| augmentations | Optional[List[str]] | Filter which augmenters to display by name. If None, all augmentations present in the data are shown. | None |
Source code in SRToolkit/evaluation/sr_evaluator.py
augment
augment(augmenters: Union[List[ResultAugmenter], ResultAugmenter], experiment_number: Optional[int] = None) -> None
Applies the given ResultAugmenter instances to the stored results. Augmenters add post-hoc information such as LaTeX representations, simplified expressions, or R² scores.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y, seed=42)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> from SRToolkit.evaluation.result_augmentation import ExpressionToLatex
>>> results.augment([ExpressionToLatex(SymbolLibrary.default_symbols(2))])
>>> results[0].augmentations["ExpressionToLatex"]["best_expr_latex"]
'$C_{0} \\cdot X_{1} - X_{0}$'
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| augmenters | Union[List[ResultAugmenter], ResultAugmenter] | A ResultAugmenter or a list of ResultAugmenter objects to apply to the results. | required |
| experiment_number | Optional[int] | If provided, apply augmenters only to this experiment's result. If None, apply to all results. | None |
Source code in SRToolkit/evaluation/sr_evaluator.py
__add__
Returns a new SR_results object that is the concatenation of the current SR_results object with the other SR_results object.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| other | SR_results | SR_results object to concatenate with the current SR_results object. | required |
Returns:
| Type | Description |
|---|---|
| SR_results | A new SR_results object containing the concatenated results. |
Source code in SRToolkit/evaluation/sr_evaluator.py
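A minimal sketch of merging per-seed results into one container (data reused from the examples above):
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9]])
>>> y = np.array([3, 0, 3, 11])
>>> se1 = SR_evaluator(X, y, seed=1)
>>> _ = se1.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> se2 = SR_evaluator(X, y, seed=2)
>>> _ = se2.evaluate_expr(["X_1", "-", "X_0"])
>>> combined = se1.get_results(top_k=1) + se2.get_results(top_k=1)
>>> len(combined)
2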
__iadd__
In-place concatenation of SR_results objects.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| other | SR_results | SR_results object to concatenate with the current SR_results object. | required |
Returns:
| Type | Description |
|---|---|
| SR_results | self, with the other results appended. |
Source code in SRToolkit/evaluation/sr_evaluator.py
__getitem__
Returns the results of the experiment with the given index.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> result_of_first_experiment = results[0]
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| item | int | The index of the experiment. | required |
Returns:
| Type | Description |
|---|---|
| EvalResult | The results of the experiment with the given index. |
Source code in SRToolkit/evaluation/sr_evaluator.py
__len__
Returns the number of results stored in the results object. Usually, each result corresponds to a single experiment.
Examples:
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y)
>>> rmse = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> len(results)
1
Returns:
| Type | Description |
|---|---|
| int | The number of results stored in the results object. |
Source code in SRToolkit/evaluation/sr_evaluator.py
save
Saves the results to a specific file or directory as JSON.
If path is an existing directory, it writes results.json inside it.
If path is a file path, it must end with the .json extension.
Examples:
>>> import tempfile
>>> X = np.array([[1, 2], [8, 4], [5, 4], [7, 9], ])
>>> y = np.array([3, 0, 3, 11])
>>> se = SR_evaluator(X, y, seed=42)
>>> _ = se.evaluate_expr(["C", "*", "X_1", "-", "X_0"])
>>> results = se.get_results(top_k=1)
>>> with tempfile.TemporaryDirectory() as tmpdir:
... results.save(tmpdir + "/my_results/results.json")
... loaded = SR_results.load(tmpdir + "/my_results/results.json")
... print(loaded[0].best_expr)
C*X_1-X_0
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Directory path or specific .json file path. | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If the path is a file with an extension other than .json. |
| OSError | If the directory cannot be created. |
Source code in SRToolkit/evaluation/sr_evaluator.py
load
staticmethod
Load results previously saved with save.
If path is a directory, it looks for results.json inside it.
If path is a file, it must end with the .json extension.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Directory path containing results.json, or a specific .json file path. | required |
Returns:
| Type | Description |
|---|---|
| SR_results | A new SR_results instance with the loaded data. |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If the specified file or directory does not exist. |
| ValueError | If the file extension is not .json, or if the file does not contain valid SR_results data. |