# Dataset References

## dataset

### Dataset

Bases: `Generic[DatasetConfigType, FlagType, FlagIndexType]`, `ABC`
Abstract base class for all datasets in the MESQUAL framework.
The Dataset class provides the fundamental interface for data access and manipulation in MESQUAL. It implements the core principle "Everything is a Dataset" where individual scenarios, collections of scenarios, and scenario comparisons all share the same unified interface.
Key Features

- Unified `.fetch(flag)` interface for data access
- Attribute management for scenario metadata
- KPI calculation integration
- Database caching support
- Dot notation fetching via the `dotfetch` property
- Type-safe generic implementation
Class Type Parameters:

| Name | Bound or Constraints | Description | Default |
|---|---|---|---|
| `DatasetConfigType` | | Configuration class for dataset behavior | required |
| `FlagType` | | Type used for data flag identification (typically str) | required |
| `FlagIndexType` | | Flag index implementation for flag mapping | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| `name` | `str` | Human-readable identifier for the dataset |
| `kpi_collection` | `KPICollection` | Collection of KPIs associated with this dataset |
| `dotfetch` | `_DotNotationFetcher` | Enables dot notation data access |
Example:
>>> # Basic usage pattern
>>> data = dataset.fetch('buses_t.marginal_price')
>>> flags = dataset.accepted_flags
>>> if dataset.flag_is_accepted('generators_t.p'):
... gen_data = dataset.fetch('generators_t.p')
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 70-460.
#### accepted_flags `abstractmethod` `property`

`accepted_flags: set[FlagType]`
Set of all flags accepted by this dataset.
This abstract property must be implemented by all concrete dataset classes to define which data flags can be fetched from the dataset.
Returns:

| Type | Description |
|---|---|
| `set[FlagType]` | Set of flags that can be used with the fetch() method |
Example:
>>> print(dataset.accepted_flags)
{'buses', 'buses_t.marginal_price', 'generators', 'generators_t.p', ...}
#### __init__

`__init__(name: str = None, parent_dataset: Dataset = None, flag_index: FlagIndexType = None, attributes: dict = None, database: Database = None, config: DatasetConfigType = None)`
Initialize a new Dataset instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Human-readable identifier. If None, auto-generates from class name | `None` |
| `parent_dataset` | `Dataset` | Optional parent dataset for hierarchical relationships | `None` |
| `flag_index` | `FlagIndexType` | Index for mapping and validating data flags | `None` |
| `attributes` | `dict` | Dictionary of metadata attributes for the dataset | `None` |
| `database` | `Database` | Optional database for caching expensive computations | `None` |
| `config` | `DatasetConfigType` | Configuration object controlling dataset behavior | `None` |
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 106-135.
#### add_kpis

`add_kpis(kpis: Iterable[KPI | KPIFactory | Type[KPI]])`
Add multiple KPIs to this dataset's KPI collection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `kpis` | `Iterable[KPI \| KPIFactory \| Type[KPI]]` | Iterable of KPI instances, factories, or classes to add | required |
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 152-160.
#### add_kpi

`add_kpi(kpi: KPI | KPIFactory | Type[KPI])`
Add a single KPI to this dataset's KPI collection.
Automatically handles different KPI input types by converting factories and classes to KPI instances.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `kpi` | `KPI \| KPIFactory \| Type[KPI]` | KPI instance, factory, or class to add | required |
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 162-178.
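The normalization described above (converting factories and classes into KPI instances) can be sketched in plain Python. The `KPI`, `KPIFactory`, and `normalize_kpi` names below are illustrative stand-ins, not the actual MESQUAL implementation:

```python
from dataclasses import dataclass
from typing import Callable, Type, Union

# Hypothetical stand-ins for MESQUAL's KPI and KPIFactory types,
# used only to illustrate the normalization logic.
class KPI:
    def __init__(self, name: str = "unnamed"):
        self.name = name

@dataclass
class KPIFactory:
    build: Callable[[], KPI]

def normalize_kpi(kpi: Union[KPI, KPIFactory, Type[KPI]]) -> KPI:
    """Convert a KPI instance, factory, or class into a KPI instance."""
    if isinstance(kpi, KPI):
        return kpi                    # already an instance, pass through
    if isinstance(kpi, KPIFactory):
        return kpi.build()            # call the factory to build an instance
    if isinstance(kpi, type) and issubclass(kpi, KPI):
        return kpi()                  # instantiate the class directly
    raise TypeError(f"Unsupported KPI input: {kpi!r}")
```

This is why `add_kpi` and `add_kpis` can accept any of the three input forms interchangeably.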
#### get_accepted_flags_containing_x

`get_accepted_flags_containing_x(x: str, match_case: bool = False) -> set[FlagType]`
Find all accepted flags containing a specific substring.
Useful for discovering related data flags or filtering flags by category.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `str` | Substring to search for in flag names | required |
| `match_case` | `bool` | If True, performs case-sensitive search | `False` |
Returns:

| Type | Description |
|---|---|
| `set[FlagType]` | Set of accepted flags containing the substring |
Example:
>>> ds = PyPSADataset()
>>> ds.get_accepted_flags_containing_x('generators')
{'generators', 'generators_t.p', 'generators_t.efficiency', ...}
>>> ds.get_accepted_flags_containing_x('BUSES', match_case=True)
set() # Empty because case doesn't match
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 235-259.
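The filtering semantics can be sketched in a few lines of plain Python. The `flags_containing` helper below is a hypothetical illustration of the case-handling behavior, not the MESQUAL implementation:

```python
def flags_containing(accepted_flags: set, x: str, match_case: bool = False) -> set:
    """Return the subset of accepted_flags containing the substring x."""
    if match_case:
        # Case-sensitive: compare the raw strings.
        return {flag for flag in accepted_flags if x in flag}
    # Case-insensitive (the default): lowercase both sides before comparing.
    needle = x.lower()
    return {flag for flag in accepted_flags if needle in flag.lower()}
```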
#### flag_is_accepted

`flag_is_accepted(flag: FlagType) -> bool`

Boolean check whether a flag is accepted by the Dataset.

This method can optionally be overridden in child classes to apply custom logic instead of checking against the explicit set of accepted_flags.
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 261-268.
#### fetch

`fetch(flag: FlagType, config: dict | DatasetConfigType = None, **kwargs) -> Series | DataFrame`
Fetch data associated with a specific flag.
This is the primary method for data access in MESQUAL datasets. It provides a unified interface for retrieving data regardless of the underlying source or dataset type. The method includes automatic caching, post-processing, and configuration management.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `flag` | `FlagType` | Data identifier flag (must be in accepted_flags) | required |
| `config` | `dict \| DatasetConfigType` | Optional configuration to override dataset defaults. Can be a dict or DatasetConfig instance. | `None` |
| `**kwargs` | | Additional keyword arguments passed to the underlying data fetching implementation | `{}` |
Returns:

| Type | Description |
|---|---|
| `Series \| DataFrame` | DataFrame or Series containing the requested data |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the flag is not accepted by this dataset |
Examples:
>>> # Basic usage
>>> prices = dataset.fetch('buses_t.marginal_price')
>>>
>>> # With custom configuration
>>> prices = dataset.fetch('buses_t.marginal_price', config={'use_database': False})
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 278-322.
#### flag_must_be_accepted

`flag_must_be_accepted(method)`
Decorator that validates flag acceptance before method execution.
Ensures that only accepted flags are processed by dataset methods, providing clear error messages for invalid flag usage.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `method` | | The method to decorate | required |
Returns:

| Type | Description |
|---|---|
| | Decorated method that validates flag acceptance |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the flag is not accepted by the dataset |
Source code in `submodules/mesqual/mesqual/datasets/dataset.py`, lines 22-42.
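A validation decorator of this kind can be sketched as follows. This is a minimal illustrative version (the `ToyDataset` class is a stand-in, and the exact error message of the real decorator may differ):

```python
import functools

def flag_must_be_accepted(method):
    """Raise ValueError before running `method` if the flag is not accepted."""
    @functools.wraps(method)
    def wrapper(self, flag, *args, **kwargs):
        if flag not in self.accepted_flags:
            raise ValueError(
                f"Flag {flag!r} is not accepted by {type(self).__name__}; "
                f"accepted flags are {sorted(self.accepted_flags)}"
            )
        return method(self, flag, *args, **kwargs)
    return wrapper

class ToyDataset:
    """Hypothetical dataset used only to demonstrate the decorator."""
    accepted_flags = {"buses", "generators"}

    @flag_must_be_accepted
    def fetch(self, flag):
        return f"data for {flag}"
```

Using `functools.wraps` preserves the decorated method's name and docstring, which keeps generated documentation intact.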
## dataset_collection

### DatasetCollection

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `Dataset[DatasetConfigType, FlagType, FlagIndexType]`, `ABC`
Abstract base class for collections of datasets.
DatasetCollection extends the Dataset interface to handle multiple child datasets while maintaining the same unified API. This enables complex hierarchical structures where collections themselves can be treated as datasets.
Key Features
- Inherits all Dataset functionality
- Manages collections of child datasets
- Provides iteration and access methods
- Aggregates accepted flags from all children
- Supports KPI operations across all sub-datasets
Class Type Parameters:

| Name | Bound or Constraints | Description | Default |
|---|---|---|---|
| `DatasetType` | | Type of datasets that can be collected | required |
| `DatasetConfigType` | | Configuration class for dataset behavior | required |
| `FlagType` | | Type used for data flag identification | required |
| `FlagIndexType` | | Flag index implementation for flag mapping | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| `datasets` | `list[DatasetType]` | List of child datasets in this collection |
Note
This class follows the "Everything is a Dataset" principle, allowing collections to be used anywhere a Dataset is expected.
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 24-193.
#### fetch_merged

`fetch_merged(flag: FlagType, config: dict | DatasetConfigType = None, keep_first: bool = True, **kwargs) -> Series | DataFrame`
Fetch method that merges dataframes from all child datasets, similar to DatasetMergeCollection.
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 177-186.
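The `keep_first` semantics can be illustrated with a plain-Python sketch over key/value fragments (the real method operates on pandas objects; `merge_fragments` below is a hypothetical simplification):

```python
def merge_fragments(fragments: list, keep_first: bool = True) -> dict:
    """Merge key/value fragments from several child datasets.

    With keep_first=True, the first fragment providing a key wins;
    with keep_first=False, later fragments overwrite earlier ones.
    """
    merged = {}
    for fragment in fragments:
        for key, value in fragment.items():
            if keep_first and key in merged:
                continue  # an earlier dataset already provided this key
            merged[key] = value
    return merged
```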
### DatasetLinkCollection

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `DatasetCollection[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`
Links multiple datasets to provide unified data access with automatic routing.
DatasetLinkCollection acts as a unified interface to multiple child datasets, automatically routing data requests to the appropriate child dataset that accepts the requested flag. This is the foundation for platform datasets that combine multiple data interpreters.
Key Features
- Automatic flag routing to appropriate child dataset
- Bidirectional parent-child relationships
- First-match-wins routing strategy
- Overlap detection and warnings
- Maintains all Dataset interface compatibility
Routing Logic
When fetch() is called, the collection iterates through its child datasets in order and returns data from the first dataset that accepts the flag.
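This first-match-wins routing can be sketched in plain Python. The `ToyInterpreter` class and `route_fetch` function are illustrative stand-ins, not MESQUAL API:

```python
class ToyInterpreter:
    """Minimal stand-in for a child dataset exposing accepted_flags."""
    def __init__(self, name: str, flags: set):
        self.name = name
        self.accepted_flags = set(flags)

    def fetch(self, flag: str) -> str:
        return f"{self.name}:{flag}"

def route_fetch(children: list, flag: str) -> str:
    """Return data from the first child dataset that accepts the flag."""
    for child in children:
        if flag in child.accepted_flags:
            return child.fetch(flag)
    raise ValueError(f"No child dataset accepts flag {flag!r}")
```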
Example:
>>> # Platform dataset with multiple interpreters
>>> link_collection = DatasetLinkCollection([
... ModelInterpreter(network),
... TimeSeriesInterpreter(network),
... ObjectiveInterpreter(network)
... ])
>>> # Automatically routes to appropriate interpreter
>>> buses = link_collection.fetch('buses') # -> ModelInterpreter
>>> prices = link_collection.fetch('buses_t.marginal_price') # -> TimeSeriesInterpreter
Warning
If multiple child datasets accept the same flag, only the first one will be used. The constructor logs warnings for such overlaps.
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 196-285.
#### get_dataset_by_type

`get_dataset_by_type(ds_type: type[Dataset]) -> DatasetType`

Returns the instance of the child dataset that matches ds_type.
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 280-285.
### DatasetMergeCollection

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `DatasetCollection[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`

The fetch method merges fragmented datasets for the same flag, e.g.:

- fragmented simulation runs, e.g. CW1, CW2, CW3, CWn
- fragmented data sources, e.g. a mapping from an Excel file combined with a model from a simulation platform
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 288-330.
### DatasetConcatCollection

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `DatasetCollection[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`
Concatenates data from multiple datasets with MultiIndex structure.
DatasetConcatCollection is fundamental to MESQUAL's multi-scenario analysis capabilities. It fetches the same flag from multiple child datasets and concatenates the results into a single DataFrame/Series with an additional index level identifying the source dataset.
Key Features
- Automatic MultiIndex creation with dataset names
- Configurable concatenation axis and level positioning
- Preserves all dimensional relationships
- Supports scenario and comparison collections
- Enables unified analysis across multiple datasets
MultiIndex Structure
The resulting data structure includes an additional index level (typically named 'dataset') that identifies the source dataset for each data point.
Example:
>>> # Collection of scenario datasets
>>> scenarios = DatasetConcatCollection([
... PyPSADataset(base_network, name='base'),
... PyPSADataset(high_res_network, name='high_res'),
... PyPSADataset(low_gas_network, name='low_gas')
... ])
>>>
>>> # Fetch creates MultiIndex DataFrame
>>> prices = scenarios.fetch('buses_t.marginal_price')
>>> print(prices.columns.names)
['dataset', 'Bus'] # Original Bus index + dataset level
>>>
>>> # Access specific scenario data
>>> base_prices = prices['base']
>>>
>>> # Analyze across scenarios
>>> mean_prices = prices.mean() # Mean across all scenarios
Source code in `submodules/mesqual/mesqual/datasets/dataset_collection.py`, lines 333-456.
## dataset_comparison

### DatasetComparison

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `DatasetCollection[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`
Computes and provides access to differences between two datasets.
DatasetComparison is a core component of MESQUAL's scenario comparison capabilities. It automatically calculates deltas, ratios, or side-by-side comparisons between a variation dataset and a reference dataset, enabling systematic analysis of scenario differences.
Key Features
- Automatic delta computation between datasets
- Multiple comparison types (DELTA, VARIATION, BOTH)
- Handles numeric and non-numeric data appropriately
- Preserves data structure and index relationships
- Configurable unchanged value handling
- Inherits full Dataset interface
Comparison Types
- DELTA: Variation - Reference (default)
- VARIATION: Returns variation data with optional NaN for unchanged values
- BOTH: Side-by-side variation and reference data
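The three comparison modes can be sketched over plain aligned mappings (the real class works on pandas objects and the `ComparisonTypeEnum`; the `compare` function below is a hypothetical simplification using strings for the mode names):

```python
import math

def compare(variation: dict, reference: dict, comparison_type: str = "DELTA",
            replace_unchanged_values_by_nan: bool = False):
    """Compare two aligned value mappings using the three comparison modes."""
    if comparison_type == "DELTA":
        # Element-wise difference: variation - reference.
        return {k: variation[k] - reference[k] for k in variation}
    if comparison_type == "VARIATION":
        if replace_unchanged_values_by_nan:
            # Keep only changed values; mask unchanged ones with NaN.
            return {k: (math.nan if variation[k] == reference[k] else variation[k])
                    for k in variation}
        return dict(variation)
    if comparison_type == "BOTH":
        # Side-by-side: return both datasets unchanged.
        return {"variation": dict(variation), "reference": dict(reference)}
    raise ValueError(f"Unknown comparison_type: {comparison_type!r}")
```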
Attributes:

| Name | Type | Description |
|---|---|---|
| `variation_dataset` | | The dataset representing the scenario being compared |
| `reference_dataset` | | The dataset representing the baseline for comparison |
Example:
>>> # Compare high renewable scenario to base case
>>> comparison = DatasetComparison(
... variation_dataset=high_res_dataset,
... reference_dataset=base_dataset
... )
>>>
>>> # Get price differences
>>> price_deltas = comparison.fetch('buses_t.marginal_price')
>>>
>>> # Get both datasets side-by-side (often used to show model changes)
>>> price_both = comparison.fetch('buses', comparison_type=ComparisonTypeEnum.BOTH)
>>>
>>> # Highlight only changes (often used to show model changes)
>>> price_changes = comparison.fetch('buses', replace_unchanged_values_by_nan=True)
Source code in `submodules/mesqual/mesqual/datasets/dataset_comparison.py`, lines 21-324.
#### fetch

`fetch(flag: FlagType, config: dict | DatasetConfigType = None, comparison_type: ComparisonTypeEnum = DELTA, replace_unchanged_values_by_nan: bool = False, fill_value: float | int | None = None, **kwargs) -> Series | DataFrame`
Fetch comparison data between variation and reference datasets.
Extends the base Dataset.fetch() method with comparison-specific parameters for controlling how the comparison is computed and formatted.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `flag` | `FlagType` | Data identifier flag to fetch from both datasets | required |
| `config` | `dict \| DatasetConfigType` | Optional configuration overrides | `None` |
| `comparison_type` | `ComparisonTypeEnum` | How to compare the datasets: DELTA = variation - reference (default); VARIATION = variation data only, optionally with NaN for unchanged values; BOTH = concatenated variation and reference data | `DELTA` |
| `replace_unchanged_values_by_nan` | `bool` | If True, replaces values that are identical between datasets with NaN (useful for highlighting changes) | `False` |
| `fill_value` | `float \| int \| None` | Value to use for missing data in subtraction operations | `None` |
| `**kwargs` | | Additional arguments passed to child dataset fetch methods | `{}` |
Returns:

| Type | Description |
|---|---|
| `Series \| DataFrame` | DataFrame or Series with comparison results |
Example:
>>> # Basic delta comparison
>>> deltas = comparison.fetch('buses_t.marginal_price')
>>>
>>> # Highlight only changed values
>>> changes_only = comparison.fetch(
... 'buses_t.marginal_price',
... replace_unchanged_values_by_nan=True
... )
>>>
>>> # Side-by-side comparison
>>> both = comparison.fetch(
... 'buses_t.marginal_price',
... comparison_type=ComparisonTypeEnum.BOTH
... )
Source code in `submodules/mesqual/mesqual/datasets/dataset_comparison.py`, lines 108-162.
## platform_dataset

### PlatformDataset

Bases: `Generic[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `DatasetLinkCollection[DatasetType, DatasetConfigType, FlagType, FlagIndexType]`, `ABC`
Base class for platform-specific datasets with automatic interpreter management.
PlatformDataset provides the foundation for integrating MESQUAL with specific energy modeling platforms (PyPSA, PLEXOS, etc.). It manages a registry of data interpreters and automatically instantiates them to handle different types of platform data.
Key Features
- Automatic interpreter registration and instantiation
- Type-safe interpreter management through generics
- Flexible argument passing to interpreter constructors
- Support for study-specific interpreter extensions
- Unified data access through DatasetLinkCollection routing
Architecture
- Uses DatasetLinkCollection for automatic flag routing
- Manages interpreter registry at class level
- Auto-instantiates all registered interpreters on construction
- Supports inheritance and interpreter registration on subclasses
Class Type Parameters:

| Name | Bound or Constraints | Description | Default |
|---|---|---|---|
| `DatasetType` | | Base type for all interpreters (must be Dataset subclass) | required |
| `DatasetConfigType` | | Configuration class for dataset behavior | required |
| `FlagType` | | Type used for data flag identification | required |
| `FlagIndexType` | | Flag index implementation for flag mapping | required |
Class Attributes

- `_interpreter_registry`: List of registered interpreter classes
Usage Pattern
- Create platform dataset class inheriting from PlatformDataset
- Define get_child_dataset_type() to specify interpreter base class
- Create interpreter classes inheriting from the base interpreter
- Register interpreters using @PlatformDataset.register_interpreter
- Instantiate platform dataset - interpreters are auto-created
Example:
>>> # Define platform dataset
>>> class PyPSADataset(PlatformDataset[PyPSAInterpreter, ...]):
... @classmethod
... def get_child_dataset_type(cls):
... return PyPSAInterpreter
...
>>> # Register core interpreters
>>> @PyPSADataset.register_interpreter
... class PyPSAModelInterpreter(PyPSAInterpreter):
... @property
... def accepted_flags(self):
... return {'buses', 'generators', 'lines'}
...
>>> @PyPSADataset.register_interpreter
... class PyPSATimeSeriesInterpreter(PyPSAInterpreter):
... @property
... def accepted_flags(self):
... return {'buses_t.marginal_price', 'generators_t.p'}
...
>>> # Register study-specific interpreter
>>> @PyPSADataset.register_interpreter
... class CustomVariableInterpreter(PyPSAInterpreter):
... @property
... def accepted_flags(self):
... return {'custom_metric'}
...
>>> # Use platform dataset
>>> dataset = PyPSADataset(network=my_network)
>>> buses = dataset.fetch('buses') # Routes to ModelInterpreter
>>> prices = dataset.fetch('buses_t.marginal_price') # Routes to TimeSeriesInterpreter
>>> custom = dataset.fetch('custom_metric') # Routes to CustomVariableInterpreter
Notes
- Interpreters are registered at class level and shared across instances
- Registration order affects routing (last registered = first checked)
- All registered interpreters are instantiated for each platform dataset
- Constructor arguments are automatically extracted and passed to interpreters
Source code in `submodules/mesqual/mesqual/datasets/platform_dataset.py`, lines 43-234.
#### register_interpreter `classmethod`

`register_interpreter(interpreter: Type[DatasetType]) -> Type['DatasetType']`
Register a data interpreter class with this platform dataset.
This method is typically used as a decorator to register interpreter classes that handle specific types of platform data. Registered interpreters are automatically instantiated when the platform dataset is created.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `interpreter` | `Type[DatasetType]` | Interpreter class that must inherit from get_child_dataset_type() | required |
Returns:

| Type | Description |
|---|---|
| `Type['DatasetType']` | The interpreter class (unchanged) to support decorator usage |
Raises:

| Type | Description |
|---|---|
| `TypeError` | If interpreter doesn't inherit from the required base class |
Example:
>>> @PyPSADataset.register_interpreter
... class CustomInterpreter(PyPSAInterpreter):
... @property
... def accepted_flags(self):
... return {'custom_flag'}
...
... def _fetch(self, flag, config, **kwargs):
... return compute_custom_data()
Source code in `submodules/mesqual/mesqual/datasets/platform_dataset.py`, lines 149-181.