Types
SRToolkit.utils.types
Shared type definitions, constants, and result dataclasses for the SRToolkit package.
Defines symbol-type constants (VAR, CONST, FN, OP, LIT),
EstimationSettings for parameter estimation configuration, and ModelResult /
EvalResult for representing SR experiment outcomes.
EstimationSettings
Bases: TypedDict
Shared settings for parameter estimation and BED evaluation.
Passed as **kwargs to SR_dataset, SR_evaluator, and
ParameterEstimator. All fields are optional.
Examples:
>>> settings: EstimationSettings = {"method": "L-BFGS-B", "max_iter": 200}
>>> settings.get("method")
'L-BFGS-B'
>>> settings.get("tol", 1e-6)
1e-06
Attributes:

| Name | Type | Description |
|---|---|---|
| `method` | `str` | Optimization algorithm for parameter fitting. |
| `tol` | `float` | Termination tolerance for the optimizer. |
| `gtol` | `float` | Gradient-norm termination tolerance. |
| `max_iter` | `int` | Maximum optimizer iterations. |
| `constant_bounds` | `Union[Tuple[float, float]]` | Bounds on constant values during fitting. |
| `initialization` | `str` | Constant initialization strategy. |
| `max_constants` | `int` | Maximum number of free constants permitted in a single expression. |
| `max_expr_length` | `int` | Maximum expression length in tokens. |
| `num_points_sampled` | `int` | Number of domain points used when evaluating expression behavior for BED. |
| `bed_X` | `Optional[ndarray]` | Fixed evaluation points for BED. |
| `num_consts_sampled` | `int` | Number of constant vectors sampled per expression for BED. |
| `domain_bounds` | `Optional[List[Tuple[float, float]]]` | Per-variable `(lower, upper)` bounds on the input domain. |
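Because `EstimationSettings` is a `TypedDict` whose fields are all optional, caller-supplied settings can be layered over defaults with a plain dict merge before being passed on as `**kwargs`. A minimal sketch of that pattern (the field subset and the default values here are illustrative, not SRToolkit's actual defaults):

```python
from typing import Tuple, TypedDict


# Illustrative mirror of a few EstimationSettings fields; total=False
# makes every key optional, matching the documented behavior.
class EstimationSettings(TypedDict, total=False):
    method: str
    tol: float
    max_iter: int
    constant_bounds: Tuple[float, float]


# Hypothetical defaults for this sketch -- not the library's real values.
DEFAULTS: EstimationSettings = {"method": "L-BFGS-B", "tol": 1e-6, "max_iter": 100}


def resolve(overrides: EstimationSettings) -> EstimationSettings:
    # Later keys win in a dict merge, so user overrides beat defaults.
    return {**DEFAULTS, **overrides}


settings = resolve({"max_iter": 200})
print(settings["method"], settings["max_iter"])  # L-BFGS-B 200
```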
ModelResult
dataclass
ModelResult(expr: List[str], error: float, parameters: Optional[ndarray] = None, augmentations: Dict[str, Dict[str, Any]] = dict())
A single model entry in EvalResult.top_models and EvalResult.all_models.
Examples:
>>> result = ModelResult(expr=["C", "*", "X_0"], error=0.42)
>>> result.expr
['C', '*', 'X_0']
>>> result.error
0.42
>>> result.parameters is None
True
Attributes:

| Name | Type | Description |
|---|---|---|
| `expr` | `List[str]` | Token list representing the expression, e.g. `["C", "*", "X_0"]`. |
| `error` | `float` | Numeric error under the ranking function (RMSE or BED). |
| `parameters` | `Optional[ndarray]` | Fitted constant values. Present for RMSE ranking only. |
| `augmentations` | `Dict[str, Dict[str, Any]]` | Per-augmenter data keyed by augmenter name. Populated by `ResultAugmenter` subclasses via `add_augmentation`. |
add_augmentation
Attach augmentation data produced by a `ResultAugmenter` to this result.
If `name` is already present in `augmentations`, a numeric suffix is
appended (`name_1`, `name_2`, …) to avoid overwriting existing data.
Examples:
>>> result = ModelResult(expr=["X_0"], error=0.1)
>>> result.add_augmentation("latex", {"value": "$X_0$"}, "LaTeXAugmenter")
>>> result.augmentations["latex"]["value"]
'$X_0$'
>>> result.add_augmentation("latex", {"value": "$X_0$"}, "LaTeXAugmenter")
>>> "latex_1" in result.augmentations
True
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Key under which the augmentation is stored in `augmentations`. | required |
| `data` | `Dict[str, Any]` | Arbitrary dict of augmentation data. | required |
| `aug_type` | `str` | Augmenter class name. | required |
Source code in SRToolkit/utils/types.py
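The collision rule described above (suffix `name_1`, `name_2`, … when a key already exists) can be sketched as a small helper. This mirrors the documented behavior rather than SRToolkit's actual implementation, and storing `aug_type` inside the entry is an assumption of the sketch:

```python
from typing import Any, Dict


def add_augmentation(store: Dict[str, Dict[str, Any]], name: str,
                     data: Dict[str, Any], aug_type: str) -> str:
    """Insert data under name, suffixing name_1, name_2, ... on collision."""
    key, i = name, 0
    while key in store:          # walk suffixes until a free key is found
        i += 1
        key = f"{name}_{i}"
    store[key] = {**data, "aug_type": aug_type}
    return key


augs: Dict[str, Dict[str, Any]] = {}
add_augmentation(augs, "latex", {"value": "$X_0$"}, "LaTeXAugmenter")
add_augmentation(augs, "latex", {"value": "$X_0$"}, "LaTeXAugmenter")
print(sorted(augs))  # ['latex', 'latex_1']
```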
to_dict
Serialize this model result to a JSON-safe dictionary.
NumPy arrays and scalars are converted to native Python types so the
result can be passed directly to json.dump.
Examples:
>>> result = ModelResult(expr=["X_0", "+", "C"], error=0.25)
>>> d = result.to_dict()
>>> d["expr"]
['X_0', '+', 'C']
>>> d["error"]
0.25
>>> d["parameters"] is None
True
Returns:

| Type | Description |
|---|---|
| `dict` | A JSON-safe dictionary suitable for passing to `json.dump`. |
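The NumPy-to-native conversion that makes the dictionary JSON-safe can be shown in isolation. This is a sketch of the pattern, not SRToolkit's `to_dict` itself:

```python
import json

import numpy as np

params = np.array([1.5, 2.0])

# ndarray -> list and NumPy scalar -> float, so json.dumps succeeds;
# json.dumps would raise TypeError on raw NumPy types.
d = {
    "expr": ["X_0", "+", "C"],
    "error": float(np.float64(0.25)),
    "parameters": params.tolist(),
}
text = json.dumps(d)
print(text)
```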
from_dict
staticmethod
Reconstruct a `ModelResult` from a dictionary produced by `to_dict`.
Examples:
>>> result = ModelResult(expr=["X_0", "+", "C"], error=0.25)
>>> result2 = ModelResult.from_dict(result.to_dict())
>>> result2.expr
['X_0', '+', 'C']
>>> result2.error
0.25
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `dict` | Dictionary representation of a `ModelResult`. | required |

Returns:

| Type | Description |
|---|---|
| `ModelResult` | The reconstructed `ModelResult`. |
EvalResult
dataclass
EvalResult(min_error: float, best_expr: str, num_evaluated: int, evaluation_calls: int, top_models: List[ModelResult], all_models: List[ModelResult], approach_name: str, success: bool, dataset_name: Optional[str] = None, metadata: Optional[dict] = None, augmentations: Dict[str, Dict[str, Any]] = dict())
Result for a single SR experiment, as returned by SR_results[i].
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05,
... best_expr="X_0",
... num_evaluated=500,
... evaluation_calls=612,
... top_models=[model],
... all_models=[model],
... approach_name="MyApproach",
... success=True,
... )
>>> result.min_error
0.05
>>> result.success
True
>>> result.dataset_name is None
True
Attributes:

| Name | Type | Description |
|---|---|---|
| `min_error` | `float` | Lowest error achieved across all evaluated expressions. |
| `best_expr` | `str` | String representation of the best expression found. |
| `num_evaluated` | `int` | Number of unique expressions evaluated. |
| `evaluation_calls` | `int` | Total number of evaluation calls made; may exceed `num_evaluated` when expressions repeat. |
| `top_models` | `List[ModelResult]` | Top-k models sorted by error. |
| `all_models` | `List[ModelResult]` | All evaluated models sorted by error. |
| `approach_name` | `str` | Name of the SR approach, or empty string if not provided. |
| `success` | `bool` | Whether the evaluation completed successfully. |
| `dataset_name` | `Optional[str]` | Name of the dataset, extracted from metadata. |
| `metadata` | `Optional[dict]` | Remaining metadata dict after `dataset_name` is extracted. |
| `augmentations` | `Dict[str, Dict[str, Any]]` | Per-augmenter data keyed by augmenter name. Populated by `ResultAugmenter` subclasses via `add_augmentation`. |
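The difference between `num_evaluated` and `evaluation_calls` (612 calls vs. 500 unique expressions in the example above) can be made concrete with a memoizing evaluation loop. This is a hypothetical sketch, not SRToolkit code, and the error metric is a stand-in:

```python
from typing import Dict, List


def run(exprs: List[str]) -> Dict[str, int]:
    cache: Dict[str, float] = {}
    calls = 0
    for e in exprs:
        calls += 1                    # every request counts as a call
        if e not in cache:            # only unseen expressions are evaluated
            cache[e] = float(len(e))  # stand-in for a real error metric
    return {"num_evaluated": len(cache), "evaluation_calls": calls}


stats = run(["X_0", "X_0 + C", "X_0"])  # one duplicate expression
print(stats)  # {'num_evaluated': 2, 'evaluation_calls': 3}
```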
add_augmentation
Attach augmentation data produced by a `ResultAugmenter` to this result.
If `name` is already present in `augmentations`, a numeric suffix is
appended (`name_1`, `name_2`, …) to avoid overwriting existing data.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> result.add_augmentation("complexity", {"value": 3}, "ComplexityAugmenter")
>>> result.augmentations["complexity"]["value"]
3
>>> result.add_augmentation("complexity", {"value": 5}, "ComplexityAugmenter")
>>> "complexity_1" in result.augmentations
True
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Key under which the augmentation is stored in `augmentations`. | required |
| `data` | `Dict[str, Any]` | Arbitrary dict of augmentation data. | required |
| `aug_type` | `str` | Augmenter class name. | required |
to_dict
Serialize this evaluation result to a JSON-safe dictionary.
NumPy arrays and scalars within nested `ModelResult` entries are
converted to native Python types so the result can be passed directly
to `json.dump`.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> d = result.to_dict()
>>> d["min_error"]
0.05
>>> d["approach_name"]
'MyApproach'
>>> len(d["top_models"])
1
Returns:

| Type | Description |
|---|---|
| `dict` | A JSON-safe dictionary suitable for passing to `json.dump`. |
from_dict
staticmethod
Reconstruct an `EvalResult` from a dictionary produced by `to_dict`.
Examples:
>>> model = ModelResult(expr=["X_0"], error=0.05)
>>> result = EvalResult(
... min_error=0.05, best_expr="X_0", num_evaluated=10,
... evaluation_calls=10, top_models=[model], all_models=[model],
... approach_name="MyApproach", success=True,
... )
>>> result2 = EvalResult.from_dict(result.to_dict())
>>> result2.min_error
0.05
>>> result2.best_expr
'X_0'
>>> len(result2.top_models)
1
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `dict` | Dictionary representation of an `EvalResult`. | required |

Returns:

| Type | Description |
|---|---|
| `EvalResult` | The reconstructed `EvalResult`. |
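A typical persistence workflow pairs these methods: serialize with `to_dict`, write with `json.dump`, then rebuild with `from_dict` after `json.load`. The file handling can be sketched generically; a plain dict stands in for `EvalResult.to_dict()` output so the sketch runs standalone:

```python
import json
import os
import tempfile

# Stand-in for the dict produced by EvalResult.to_dict().
result_dict = {"min_error": 0.05, "best_expr": "X_0", "num_evaluated": 10}

path = os.path.join(tempfile.mkdtemp(), "result.json")
with open(path, "w") as f:
    json.dump(result_dict, f)        # to_dict() output is JSON-safe

with open(path) as f:
    loaded = json.load(f)            # feed this to EvalResult.from_dict()

print(loaded == result_dict)  # True
```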