# SR Dataset

`SRToolkit.dataset.sr_dataset`

Dataset wrapper for a single symbolic regression problem.

## SR_dataset

```python
SR_dataset(
    X: ndarray,
    symbol_library: SymbolLibrary,
    ranking_function: str = "rmse",
    y: Optional[ndarray] = None,
    max_evaluations: int = -1,
    ground_truth: Optional[Union[List[str], Node, ndarray]] = None,
    original_equation: Optional[str] = None,
    success_threshold: Optional[float] = None,
    seed: Optional[int] = None,
    dataset_metadata: Optional[dict] = None,
    dataset_name: str = "unnamed",
    **kwargs: Unpack[EstimationSettings],
)
```
Wraps input data and evaluation settings for a single symbolic regression problem.
Examples:

```python
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> dataset = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                      y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y", success_threshold=1e-6)
>>> evaluator = dataset.create_evaluator()
>>> bool(evaluator.evaluate_expr(["sin", "(", "X_0", ")"]) < dataset.success_threshold)
False
>>> bool(evaluator.evaluate_expr(["u-", "C", "*", "X_1", "+", "X_0"]) < dataset.success_threshold)
True
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `ndarray` | Input data of shape `(n_samples, n_features)`. | *required* |
| `symbol_library` | `SymbolLibrary` | The symbol library defining the token vocabulary. | *required* |
| `ranking_function` | `str` | Ranking function used to score expressions. | `'rmse'` |
| `y` | `Optional[ndarray]` | Target values used for parameter estimation. | `None` |
| `max_evaluations` | `int` | Maximum number of expressions to evaluate. Values less than 0 mean no limit. | `-1` |
| `ground_truth` | `Optional[Union[List[str], Node, ndarray]]` | The ground truth expression, as a list of tokens in infix notation, a `Node` tree, or a numpy array of behavior vectors (see `create_behavior_matrix`). | `None` |
| `original_equation` | `Optional[str]` | Human-readable string of the original equation (e.g. `"z = x + y"`). | `None` |
| `success_threshold` | `Optional[float]` | Error threshold below which an expression is considered successful. | `None` |
| `seed` | `Optional[int]` | Random seed for reproducibility. | `None` |
| `dataset_metadata` | `Optional[dict]` | Optional dictionary of metadata about the dataset (e.g. citation, variable names). | `None` |
| `dataset_name` | `str` | Name for this dataset. | `'unnamed'` |
| `**kwargs` | `Unpack[EstimationSettings]` | Optional estimation settings passed to `SR_evaluator`; see `EstimationSettings` for the supported keys. | `{}` |
Source code in SRToolkit/dataset/sr_dataset.py
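The default `'rmse'` ranking function scores an expression by its root-mean-square error against `y`. The sketch below is an illustrative pure-Python version of that criterion, not SRToolkit's actual implementation; it uses the example data from the doctest above.

```python
import math

def rmse(predictions, targets):
    """Root-mean-square error: sqrt(mean((prediction - target)^2))."""
    assert len(predictions) == len(targets)
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    )

# For the example dataset X = [[1, 2], [3, 4], [5, 6]], y = [3, 7, 11],
# the ground-truth expression X_0 + X_1 reproduces y exactly:
preds = [x0 + x1 for x0, x1 in [(1, 2), (3, 4), (5, 6)]]
print(rmse(preds, [3, 7, 11]))  # 0.0
```

An expression whose RMSE falls below `success_threshold` is the condition the doctest checks with `evaluate_expr(...) < dataset.success_threshold`.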
## evaluate_approach

```python
evaluate_approach(
    sr_approach: SR_approach,
    num_experiments: int = 1,
    top_k: int = 20,
    initial_seed: Optional[int] = None,
    results: Optional[SR_results] = None,
    callbacks: Optional[Union[SRCallbacks, CallbackDispatcher, List[SRCallbacks]]] = None,
    verbose: bool = True,
    adaptation_path: Optional[str] = None,
) -> SR_results
```
Evaluates an SR approach on this dataset.
Examples:

```python
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> dataset = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                      y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y")
>>> results = dataset.evaluate_approach(my_approach, num_experiments=5)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sr_approach` | `SR_approach` | The SR approach to evaluate. | *required* |
| `num_experiments` | `int` | Number of independent experiments (runs) to perform. | `1` |
| `top_k` | `int` | Number of top expressions to retain per experiment. | `20` |
| `initial_seed` | `Optional[int]` | Seed for random number generation. | `None` |
| `results` | `Optional[SR_results]` | Existing `SR_results` object to append results to. | `None` |
| `callbacks` | `Optional[Union[SRCallbacks, CallbackDispatcher, List[SRCallbacks]]]` | A single `SRCallbacks` instance, a list of `SRCallbacks`, or a `CallbackDispatcher` for monitoring and controlling the search. | `None` |
| `verbose` | `bool` | If `True`, progress information is printed. | `True` |
| `adaptation_path` | `Optional[str]` | Path used to save/load the adapted state for approaches that support adaptation. | `None` |
Returns:

| Type | Description |
|---|---|
| `SR_results` | An `SR_results` object containing results from all experiments. |
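The `num_experiments`/`top_k`/`initial_seed` parameters describe a common experiment-loop pattern: derive a deterministic seed per run and keep only the best-scoring expressions from each run. The sketch below illustrates that pattern with hypothetical stand-in data; it is not SRToolkit's internal loop, and the per-run seed derivation (`initial_seed + i`) is an assumption.

```python
import heapq
import random

def run_experiments(candidate_scores, num_experiments=1, top_k=20, initial_seed=None):
    """Sketch of an evaluate_approach-style loop: each experiment gets a
    deterministic seed derived from initial_seed, and only the top_k
    lowest-error expressions are retained per run."""
    all_results = []
    for i in range(num_experiments):
        seed = None if initial_seed is None else initial_seed + i
        rng = random.Random(seed)
        # Stand-in for one search run: sample candidate (expression, error) pairs.
        sampled = rng.sample(candidate_scores, k=min(len(candidate_scores), 50))
        # Retain the top_k best (lowest-error) expressions, sorted ascending.
        all_results.append(heapq.nsmallest(top_k, sampled, key=lambda pair: pair[1]))
    return all_results

candidates = [(f"expr_{i}", i * 0.1) for i in range(100)]
runs = run_experiments(candidates, num_experiments=3, top_k=5, initial_seed=42)
```

With a fixed `initial_seed`, repeated calls reproduce the same per-run results, which is the point of seeding the experiments independently.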
## create_evaluator

```python
create_evaluator(metadata: Optional[Dict[str, Any]] = None, seed: Optional[int] = None) -> SR_evaluator
```
Creates an instance of the SR_evaluator class from this dataset.
Examples:

```python
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> dataset = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                      y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y", success_threshold=1e-6)
>>> evaluator = dataset.create_evaluator()
>>> float(evaluator.evaluate_expr(["sin", "(", "X_0", ")"]))
8.05645397...
>>> float(evaluator.evaluate_expr(["X_1", "+", "X_0"]))
0.0...
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metadata` | `Optional[Dict[str, Any]]` | Optional dictionary of metadata to attach to the evaluator (e.g. model name, seed). Dataset metadata is merged in automatically. | `None` |
| `seed` | `Optional[int]` | Seed for the random number generator. | `None` |
Returns:

| Type | Description |
|---|---|
| `SR_evaluator` | A configured `SR_evaluator` ready to evaluate expressions against this dataset. |

Raises:

| Type | Description |
|---|---|
| `Exception` | If `SR_evaluator` cannot be instantiated with the current dataset settings. |
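The note that dataset metadata "is merged in automatically" describes a dictionary merge between the dataset's own metadata and the metadata passed to `create_evaluator`. A minimal sketch of that merge, assuming caller-supplied keys take precedence (the actual precedence in SRToolkit may differ):

```python
def merge_metadata(dataset_metadata, call_metadata):
    """Combine dataset-level metadata with per-evaluator metadata.
    Assumption: keys passed at create_evaluator time override
    dataset-level keys with the same name."""
    merged = dict(dataset_metadata or {})
    merged.update(call_metadata or {})
    return merged

merged = merge_metadata({"citation": "Feynman", "seed": 0},
                        {"model": "gp", "seed": 1})
```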
## __str__
Returns a string describing this dataset.
The string describes the target expression, symbols that should be used, and the success threshold. It also includes any constraints that should be followed when evaluating a model on this dataset. These constraints include the maximum number of expressions to evaluate, the maximum length of the expression, and the maximum number of constants allowed in the expression. If the symbol library contains a symbol for constants, the string also includes the range of constants.
For other metadata, please refer to the attribute self.dataset_metadata.
Examples:

```python
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> dataset = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                      y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y", success_threshold=1e-6)
>>> str(dataset)
'Dataset for target expression z = x + y. When evaluating your model on this dataset, you should limit your generative model to only produce expressions using the following symbols: +, -, *, /, ^, u-, sqrt, sin, cos, exp, tan, arcsin, arccos, arctan, sinh, cosh, tanh, floor, ceil, ln, log, ^-1, ^2, ^3, ^4, ^5, pi, e, C, X_0, X_1.\nExpressions will be ranked based on the RMSE ranking function.\nExpressions are deemed successful if the root mean squared error is less than 1e-06. However, we advise that you check the best performing expressions manually to ensure they are correct.\nDataset uses the default limitations (extra arguments) from the SR_evaluator.The expressions in the dataset can contain constants/free parameters.\nFor other metadata, please refer to the attribute self.dataset_metadata.'
```
Returns:

| Type | Description |
|---|---|
| `str` | A string describing this dataset. |
## to_dict
Creates a dictionary representation of this dataset. This is mainly used for saving the dataset to disk.
Examples:

```python
>>> import tempfile
>>> tmpdir = tempfile.mkdtemp()
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> dataset = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                      y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y", success_threshold=1e-6)
>>> dataset_dict = dataset.to_dict(tmpdir)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_path` | `str` | The path to the directory where the data in the dataset should be saved. | *required* |
Returns:

| Type | Description |
|---|---|
| `dict` | A dictionary representation of this dataset. |
## from_dict

*staticmethod*
Creates an instance of the SR_dataset class from its dictionary representation. This is mainly used for loading the dataset from disk.
Examples:

```python
>>> import tempfile, os
>>> tmpdir = tempfile.mkdtemp()
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> ds = SR_dataset(X, SymbolLibrary.default_symbols(2), ground_truth=["X_0", "+", "X_1"],
...                 y=np.array([3, 7, 11]), max_evaluations=10000, original_equation="z = x + y", success_threshold=1e-6)
>>> dataset_dict = ds.to_dict(tmpdir)
>>> dataset = SR_dataset.from_dict(dataset_dict)
>>> dataset.X.shape
(3, 2)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `d` | `dict` | Dictionary representation of the dataset, as produced by `to_dict`. | *required* |
Returns:

| Type | Description |
|---|---|
| `SR_dataset` | A new `SR_dataset` instance. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the dictionary representation is invalid. |
| `Exception` | If the dataset file or ground truth file cannot be loaded. |
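The `to_dict`/`from_dict` pair implements a common persistence pattern: bulky array data is written to files under `base_path`, while the returned dictionary keeps only file paths plus lightweight settings, so the dict itself stays serializable. The sketch below illustrates that pattern generically; the function names, file layout, and key names are hypothetical and not SRToolkit's actual format.

```python
import json
import os
import tempfile

def dataset_to_dict(X, y, settings, base_path):
    """Write array data under base_path; return a small dict that
    references it by path (hypothetical layout, not SRToolkit's)."""
    os.makedirs(base_path, exist_ok=True)
    data_path = os.path.join(base_path, "data.json")
    with open(data_path, "w") as f:
        json.dump({"X": X, "y": y}, f)
    return {"data_path": data_path, **settings}

def dataset_from_dict(d):
    """Reload the array data referenced by the dict and recover settings."""
    with open(d["data_path"]) as f:
        data = json.load(f)
    settings = {k: v for k, v in d.items() if k != "data_path"}
    return data["X"], data["y"], settings

# Round trip: save, then reconstruct.
tmpdir = tempfile.mkdtemp()
d = dataset_to_dict([[1, 2], [3, 4]], [3, 7], {"dataset_name": "example"}, tmpdir)
X, y, settings = dataset_from_dict(d)
```

This is why `from_dict` can raise on a bad dictionary (missing or wrong keys) and separately on a dataset file that exists in the dict but cannot be loaded from disk.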