
AreaBorder Variable Accounting

AreaBorderVariableCalculatorBase

Bases: ABC, AreaBorderNamingConventions

Abstract base class for calculating energy variables at area border level.

This base class provides functionality for aggregating line-level energy data (flows, capacities, price spreads) to area border level. An area border represents the interface between two areas (countries, bidding zones, etc.).

The class handles the complex mapping from transmission lines to area borders, including proper handling of line directionality. Lines are classified as either "up" or "down" relative to the border direction based on their node endpoints.

Border directionality:

- "Up" direction: From area_from to area_to (as defined in border naming)
- "Down" direction: From area_to to area_from
- Line direction is determined by comparing line endpoints to border areas

Parameters:

    area_border_model_df (DataFrame, required): DataFrame containing area border definitions. Index should be border identifiers (e.g., 'DE-FR', 'FR-BE').
    line_model_df (DataFrame, required): DataFrame containing transmission line information. Must include node_from_col and node_to_col columns.
    node_model_df (DataFrame, required): DataFrame containing node information with area assignments. Must include area_column for mapping nodes to areas.
    area_column (str, required): Column name in node_model_df containing area assignments.
    node_from_col (str, default 'node_from'): Column name in line_model_df for line starting node.
    node_to_col (str, default 'node_to'): Column name in line_model_df for line ending node.

Attributes:

    area_border_model_df: Border model DataFrame
    line_model_df: Line model DataFrame
    node_model_df: Node model DataFrame
    area_column: Name of area assignment column
    node_from_col: Name of line from-node column
    node_to_col: Name of line to-node column
    node_to_area_map: Dictionary mapping node IDs to area names

Raises:

    ValueError: If required columns are missing from input DataFrames

Example:

>>> import pandas as pd
>>> # Define borders between areas
>>> border_model = pd.DataFrame(index=['DE-FR', 'FR-BE'])
>>> 
>>> # Define transmission lines  
>>> line_model = pd.DataFrame({
...     'node_from': ['DE1', 'FR1'],
...     'node_to': ['FR1', 'BE1'],
...     'capacity': [1000, 800]
... }, index=['Line1', 'Line2'])
>>> 
>>> # Node-to-area mapping
>>> node_model = pd.DataFrame({
...     'country': ['DE', 'FR', 'BE']
... }, index=['DE1', 'FR1', 'BE1'])
>>> 
>>> # Subclass for specific calculation
>>> class MyBorderCalculator(AreaBorderVariableCalculatorBase):
...     @property
...     def variable_name(self):
...         return "my_variable"
...     def calculate(self, **kwargs):
...         return pd.DataFrame()
>>> 
>>> calculator = MyBorderCalculator(
...     border_model, line_model, node_model, 'country'
... )
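
Building on this setup, a minimal sketch of how the directionality convention applies (assuming the naming convention splits 'DE-FR' into area_from='DE' and area_to='FR', as described above):

>>> lines_up, lines_down = calculator.get_border_lines_in_topological_up_and_down_direction('DE-FR')
>>> lines_up    # Line1 runs DE1 -> FR1, i.e. area_from -> area_to ("up")
['Line1']
>>> lines_down  # no line runs from a French node to a German node in this toy model
[]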
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_base.py
class AreaBorderVariableCalculatorBase(ABC, AreaBorderNamingConventions):
    """Abstract base class for calculating energy variables at area border level.

    This base class provides functionality for aggregating line-level energy data 
    (flows, capacities, price spreads) to area border level. An area border represents
    the interface between two areas (countries, bidding zones, etc.).

    The class handles the complex mapping from transmission lines to area borders,
    including proper handling of line directionality. Lines are classified as either
    "up" or "down" relative to the border direction based on their node endpoints.

    Border directionality:
    - "Up" direction: From area_from to area_to (as defined in border naming)
    - "Down" direction: From area_to to area_from
    - Line direction is determined by comparing line endpoints to border areas

    Args:
        area_border_model_df: DataFrame containing area border definitions.
            Index should be border identifiers (e.g., 'DE-FR', 'FR-BE').
        line_model_df: DataFrame containing transmission line information.
            Must include node_from_col and node_to_col columns.
        node_model_df: DataFrame containing node information with area assignments.
            Must include area_column for mapping nodes to areas.
        area_column: Column name in node_model_df containing area assignments.
        node_from_col: Column name in line_model_df for line starting node.
        node_to_col: Column name in line_model_df for line ending node.

    Attributes:
        area_border_model_df: Border model DataFrame
        line_model_df: Line model DataFrame  
        node_model_df: Node model DataFrame
        area_column: Name of area assignment column
        node_from_col: Name of line from-node column
        node_to_col: Name of line to-node column
        node_to_area_map: Dictionary mapping node IDs to area names

    Raises:
        ValueError: If required columns are missing from input DataFrames

    Example:

        >>> import pandas as pd
        >>> # Define borders between areas
        >>> border_model = pd.DataFrame(index=['DE-FR', 'FR-BE'])
        >>> 
        >>> # Define transmission lines  
        >>> line_model = pd.DataFrame({
        ...     'node_from': ['DE1', 'FR1'],
        ...     'node_to': ['FR1', 'BE1'],
        ...     'capacity': [1000, 800]
        ... }, index=['Line1', 'Line2'])
        >>> 
        >>> # Node-to-area mapping
        >>> node_model = pd.DataFrame({
        ...     'country': ['DE', 'FR', 'BE']
        ... }, index=['DE1', 'FR1', 'BE1'])
        >>> 
        >>> # Subclass for specific calculation
        >>> class MyBorderCalculator(AreaBorderVariableCalculatorBase):
        ...     @property
        ...     def variable_name(self):
        ...         return "my_variable"
        ...     def calculate(self, **kwargs):
        ...         return pd.DataFrame()
        >>> 
        >>> calculator = MyBorderCalculator(
        ...     border_model, line_model, node_model, 'country'
        ... )
    """

    def __init__(
        self,
        area_border_model_df: pd.DataFrame,
        line_model_df: pd.DataFrame,
        node_model_df: pd.DataFrame,
        area_column: str,
        node_from_col: str = 'node_from',
        node_to_col: str = 'node_to'
    ):
        """Initialize the area border variable calculator.

        Args:
            area_border_model_df: DataFrame with border definitions
            line_model_df: DataFrame with line information including endpoints
            node_model_df: DataFrame with node-to-area mapping
            area_column: Column name for area assignments in node_model_df
            node_from_col: Column name for line starting node in line_model_df
            node_to_col: Column name for line ending node in line_model_df

        Raises:
            ValueError: If required columns are missing from DataFrames
        """
        super().__init__(area_column)
        self.area_border_model_df = area_border_model_df
        self.line_model_df = line_model_df
        self.node_model_df = node_model_df
        self.area_column = area_column
        self.node_from_col = node_from_col
        self.node_to_col = node_to_col
        self.node_to_area_map = self._create_node_to_area_map()
        self._validate_inputs()

    def _validate_inputs(self):
        """Validate input parameters during initialization.

        Raises:
            ValueError: If required columns are missing from DataFrames
        """
        if self.area_column not in self.node_model_df.columns:
            raise ValueError(f"Column '{self.area_column}' not found in node_model_df")
        if self.node_from_col not in self.line_model_df.columns:
            raise ValueError(f"Column '{self.node_from_col}' not found in line_model_df")
        if self.node_to_col not in self.line_model_df.columns:
            raise ValueError(f"Column '{self.node_to_col}' not found in line_model_df")

    def _create_node_to_area_map(self) -> dict[Hashable, str]:
        """Create a mapping dictionary from node IDs to area names.

        Returns:
            Dictionary with node IDs as keys and area names as values.
            Nodes with NaN area assignments will have NaN values.
        """
        return self.node_model_df[self.area_column].to_dict()

    def get_border_lines_in_topological_up_and_down_direction(self, border_id: str) -> tuple[list[Hashable], list[Hashable]]:
        """Get transmission lines for a border classified by topological direction.

        This method identifies which transmission lines connect the two areas of a border
        and classifies them based on their topological direction relative to the border.

        Border directionality logic:
        - "Up" direction: Lines where node_from is in area_from and node_to is in area_to
        - "Down" direction: Lines where node_from is in area_to and node_to is in area_from

        This classification is essential for correctly aggregating directional quantities
        like power flows, where the sign and direction matter for market analysis.

        Args:
            border_id: Border identifier (e.g., 'DE-FR') that will be decomposed
                into area_from and area_to using the naming convention.

        Returns:
            Tuple containing two lists:
            - lines_up: Line IDs for lines in the "up" direction
            - lines_down: Line IDs for lines in the "down" direction

        Example:

            >>> # For border 'DE-FR'
            >>> lines_up, lines_down = calculator.get_border_lines_in_topological_up_and_down_direction('DE-FR')
            >>> # lines_up: Lines from German nodes to French nodes  
            >>> # lines_down: Lines from French nodes to German nodes
        """
        area_from, area_to = self.decompose_area_border_name_to_areas(border_id)
        nodes_in_area_from = self.node_model_df.loc[self.node_model_df[self.area_column] == area_from].index.to_list()
        nodes_in_area_to = self.node_model_df.loc[self.node_model_df[self.area_column] == area_to].index.to_list()
        lines_up = self.line_model_df.loc[
                self.line_model_df[self.node_from_col].isin(nodes_in_area_from)
                & self.line_model_df[self.node_to_col].isin(nodes_in_area_to)
            ].index.to_list()
        lines_down = self.line_model_df.loc[
                self.line_model_df[self.node_from_col].isin(nodes_in_area_to)
                & self.line_model_df[self.node_to_col].isin(nodes_in_area_from)
            ].index.to_list()
        return lines_up, lines_down

    @abstractmethod
    def calculate(self, **kwargs) -> pd.DataFrame:
        """Calculate the border variable. Must be implemented by subclasses.

        This method should contain the specific logic for aggregating line-level
        data to border level for the particular variable type. The implementation
        will vary based on the variable (flows, capacities, prices, etc.) and
        should handle directional aggregation appropriately.

        Args:
            **kwargs: Variable-specific parameters for the calculation

        Returns:
            DataFrame with border-level aggregated data. Index should be datetime
            for time series data, columns should be border identifiers.

        Raises:
            NotImplementedError: This is an abstract method
        """
        pass

    @property
    @abstractmethod
    def variable_name(self) -> str:
        """Name of the variable being calculated.

        This property should return a descriptive name for the variable being
        calculated by this calculator. Used for naming output columns and logging.

        Returns:
            String name of the variable (e.g., 'border_flow', 'border_capacity')
        """
        pass

    def _validate_time_series_data(self, df: pd.DataFrame, data_name: str):
        """Validate that time series data has appropriate datetime index.

        Logs warnings if the data doesn't have a DatetimeIndex, which may indicate
        data formatting issues or non-time-series data being used inappropriately.

        Args:
            df: DataFrame to validate
            data_name: Descriptive name of the data for logging purposes
        """
        if not isinstance(df.index, pd.DatetimeIndex):
            logger.warning(f"{data_name} does not have DatetimeIndex")

variable_name abstractmethod property

variable_name: str

Name of the variable being calculated.

This property should return a descriptive name for the variable being calculated by this calculator. Used for naming output columns and logging.

Returns:

    str: String name of the variable (e.g., 'border_flow', 'border_capacity')

__init__

__init__(area_border_model_df: DataFrame, line_model_df: DataFrame, node_model_df: DataFrame, area_column: str, node_from_col: str = 'node_from', node_to_col: str = 'node_to')

Initialize the area border variable calculator.

Parameters:

    area_border_model_df (DataFrame, required): DataFrame with border definitions
    line_model_df (DataFrame, required): DataFrame with line information including endpoints
    node_model_df (DataFrame, required): DataFrame with node-to-area mapping
    area_column (str, required): Column name for area assignments in node_model_df
    node_from_col (str, default 'node_from'): Column name for line starting node in line_model_df
    node_to_col (str, default 'node_to'): Column name for line ending node in line_model_df

Raises:

    ValueError: If required columns are missing from DataFrames
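
As a quick illustration of this validation, a minimal sketch reusing the toy frames from the class-level example above ('from_bus' is a deliberately wrong column name, used here only for illustration; construction otherwise proceeds as in that example):

>>> try:
...     MyBorderCalculator(border_model, line_model, node_model, 'country',
...                        node_from_col='from_bus')
... except ValueError as err:
...     print(err)
Column 'from_bus' not found in line_model_df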

Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_base.py
def __init__(
    self,
    area_border_model_df: pd.DataFrame,
    line_model_df: pd.DataFrame,
    node_model_df: pd.DataFrame,
    area_column: str,
    node_from_col: str = 'node_from',
    node_to_col: str = 'node_to'
):
    """Initialize the area border variable calculator.

    Args:
        area_border_model_df: DataFrame with border definitions
        line_model_df: DataFrame with line information including endpoints
        node_model_df: DataFrame with node-to-area mapping
        area_column: Column name for area assignments in node_model_df
        node_from_col: Column name for line starting node in line_model_df
        node_to_col: Column name for line ending node in line_model_df

    Raises:
        ValueError: If required columns are missing from DataFrames
    """
    super().__init__(area_column)
    self.area_border_model_df = area_border_model_df
    self.line_model_df = line_model_df
    self.node_model_df = node_model_df
    self.area_column = area_column
    self.node_from_col = node_from_col
    self.node_to_col = node_to_col
    self.node_to_area_map = self._create_node_to_area_map()
    self._validate_inputs()

get_border_lines_in_topological_up_and_down_direction

get_border_lines_in_topological_up_and_down_direction(border_id: str) -> tuple[list[Hashable], list[Hashable]]

Get transmission lines for a border classified by topological direction.

This method identifies which transmission lines connect the two areas of a border and classifies them based on their topological direction relative to the border.

Border directionality logic:

- "Up" direction: Lines where node_from is in area_from and node_to is in area_to
- "Down" direction: Lines where node_from is in area_to and node_to is in area_from

This classification is essential for correctly aggregating directional quantities like power flows, where the sign and direction matter for market analysis.

Parameters:

    border_id (str, required): Border identifier (e.g., 'DE-FR') that will be decomposed into area_from and area_to using the naming convention.

Returns:

    tuple[list[Hashable], list[Hashable]]: Tuple containing two lists:

    - lines_up: Line IDs for lines in the "up" direction
    - lines_down: Line IDs for lines in the "down" direction

Example:

>>> # For border 'DE-FR'
>>> lines_up, lines_down = calculator.get_border_lines_in_topological_up_and_down_direction('DE-FR')
>>> # lines_up: Lines from German nodes to French nodes  
>>> # lines_down: Lines from French nodes to German nodes
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_base.py
def get_border_lines_in_topological_up_and_down_direction(self, border_id: str) -> tuple[list[Hashable], list[Hashable]]:
    """Get transmission lines for a border classified by topological direction.

    This method identifies which transmission lines connect the two areas of a border
    and classifies them based on their topological direction relative to the border.

    Border directionality logic:
    - "Up" direction: Lines where node_from is in area_from and node_to is in area_to
    - "Down" direction: Lines where node_from is in area_to and node_to is in area_from

    This classification is essential for correctly aggregating directional quantities
    like power flows, where the sign and direction matter for market analysis.

    Args:
        border_id: Border identifier (e.g., 'DE-FR') that will be decomposed
            into area_from and area_to using the naming convention.

    Returns:
        Tuple containing two lists:
        - lines_up: Line IDs for lines in the "up" direction
        - lines_down: Line IDs for lines in the "down" direction

    Example:

        >>> # For border 'DE-FR'
        >>> lines_up, lines_down = calculator.get_border_lines_in_topological_up_and_down_direction('DE-FR')
        >>> # lines_up: Lines from German nodes to French nodes  
        >>> # lines_down: Lines from French nodes to German nodes
    """
    area_from, area_to = self.decompose_area_border_name_to_areas(border_id)
    nodes_in_area_from = self.node_model_df.loc[self.node_model_df[self.area_column] == area_from].index.to_list()
    nodes_in_area_to = self.node_model_df.loc[self.node_model_df[self.area_column] == area_to].index.to_list()
    lines_up = self.line_model_df.loc[
            self.line_model_df[self.node_from_col].isin(nodes_in_area_from)
            & self.line_model_df[self.node_to_col].isin(nodes_in_area_to)
        ].index.to_list()
    lines_down = self.line_model_df.loc[
            self.line_model_df[self.node_from_col].isin(nodes_in_area_to)
            & self.line_model_df[self.node_to_col].isin(nodes_in_area_from)
        ].index.to_list()
    return lines_up, lines_down

calculate abstractmethod

calculate(**kwargs) -> DataFrame

Calculate the border variable. Must be implemented by subclasses.

This method should contain the specific logic for aggregating line-level data to border level for the particular variable type. The implementation will vary based on the variable (flows, capacities, prices, etc.) and should handle directional aggregation appropriately.

Parameters:

    **kwargs: Variable-specific parameters for the calculation

Returns:

    DataFrame: DataFrame with border-level aggregated data. Index should be datetime for time series data, columns should be border identifiers.

Raises:

    NotImplementedError: This is an abstract method

Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_base.py
@abstractmethod
def calculate(self, **kwargs) -> pd.DataFrame:
    """Calculate the border variable. Must be implemented by subclasses.

    This method should contain the specific logic for aggregating line-level
    data to border level for the particular variable type. The implementation
    will vary based on the variable (flows, capacities, prices, etc.) and
    should handle directional aggregation appropriately.

    Args:
        **kwargs: Variable-specific parameters for the calculation

    Returns:
        DataFrame with border-level aggregated data. Index should be datetime
        for time series data, columns should be border identifiers.

    Raises:
        NotImplementedError: This is an abstract method
    """
    pass

BorderCapacityCalculator

Bases: AreaBorderVariableCalculatorBase

Calculates aggregated transmission capacities for area borders.

This calculator aggregates line-level transmission capacities to border level, handling bidirectional capacity data with proper directional aggregation.

Example:

>>> from mesqual.energy_data_handling.network_lines_data import NetworkLineCapacitiesData
>>> import pandas as pd
>>> 
>>> # Create capacity data
>>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
>>> capacities = NetworkLineCapacitiesData(
...     capacities_up=pd.DataFrame({...}),
...     capacities_down=pd.DataFrame({...})
... )
>>> 
>>> calculator = BorderCapacityCalculator(
...     area_border_model_df, line_model_df, node_model_df, 'country'
... )
>>> 
>>> # Calculate capacities for up direction (area_from → area_to)
>>> up_capacities = calculator.calculate(capacities, direction='up')
>>> print(up_capacities)
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_capacity_calculator.py
class BorderCapacityCalculator(AreaBorderVariableCalculatorBase):
    """Calculates aggregated transmission capacities for area borders.

    This calculator aggregates line-level transmission capacities to border level,
    handling bidirectional capacity data with proper directional aggregation.

    Example:

        >>> from mesqual.energy_data_handling.network_lines_data import NetworkLineCapacitiesData
        >>> import pandas as pd
        >>> 
        >>> # Create capacity data
        >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
        >>> capacities = NetworkLineCapacitiesData(
        ...     capacities_up=pd.DataFrame({...}),
        ...     capacities_down=pd.DataFrame({...})
        ... )
        >>> 
        >>> calculator = BorderCapacityCalculator(
        ...     area_border_model_df, line_model_df, node_model_df, 'country'
        ... )
        >>> 
        >>> # Calculate capacities for up direction (area_from → area_to)
        >>> up_capacities = calculator.calculate(capacities, direction='up')
        >>> print(up_capacities)
    """

    @property
    def variable_name(self) -> str:
        return "border_capacity"

    def calculate(
            self,
            line_capacity_data: NetworkLineCapacitiesData,
            direction: Literal['up', 'down'] = 'up'
    ) -> pd.DataFrame:
        """Aggregate line-level transmission capacities to border level.

        Sums transmission capacities of all lines belonging to each border,
        respecting the specified direction and handling bidirectional capacity data.
        Lines are aggregated based on their topological relationship to the border.

        Direction logic:
        - 'up': Capacities for flows from area_from to area_to
        - 'down': Capacities for flows from area_to to area_from

        For each border, the method:
        1. Identifies lines in 'up' and 'down' topological directions
        2. Selects appropriate capacity data based on requested direction
        3. Sums capacities across all border lines
        4. Handles missing data by excluding unavailable lines

        Args:
            line_capacity_data: NetworkLineCapacitiesData containing bidirectional
                capacity time series. Must include capacities_up and capacities_down
                DataFrames with line IDs as columns and timestamps as index.
            direction: Direction for capacity aggregation:
                - 'up': Sum capacities for area_from → area_to flows
                - 'down': Sum capacities for area_to → area_from flows

        Returns:
            DataFrame with border-level capacity aggregations. Index matches the 
            input capacity data, columns are border identifiers. Values represent
            total transmission capacity in MW for each border and timestamp.

        Raises:
            ValueError: If direction is not 'up' or 'down'

        Example:

            >>> # Calculate up-direction capacities (exports from area_from)
            >>> up_caps = calculator.calculate(capacity_data, direction='up')
            >>> 
            >>> # Calculate down-direction capacities (imports to area_from)  
            >>> down_caps = calculator.calculate(capacity_data, direction='down')
            >>> 
            >>> print(f"DE→FR capacity: {up_caps.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
        """
        self._validate_time_series_data(line_capacity_data.capacities_up, "capacities_up")
        self._validate_time_series_data(line_capacity_data.capacities_down, "capacities_down")

        border_capacities = {}

        for border_id, border in self.area_border_model_df.iterrows():
            lines_up, lines_down = self.get_border_lines_in_topological_up_and_down_direction(border_id)

            if not lines_up and not lines_down:
                # No lines found for this border - create empty series
                index = line_capacity_data.capacities_up.index
                border_capacities[border_id] = pd.Series(index=index, dtype=float)
                continue

            if direction == 'up':
                # For 'up' direction: use up capacities of lines_up + down capacities of lines_down
                capacity_parts = []
                if lines_up:
                    available_lines_up = [line for line in lines_up if line in line_capacity_data.capacities_up.columns]
                    if available_lines_up:
                        capacity_parts.append(line_capacity_data.capacities_up[available_lines_up])

                if lines_down:
                    available_lines_down = [line for line in lines_down if line in line_capacity_data.capacities_down.columns]
                    if available_lines_down:
                        capacity_parts.append(line_capacity_data.capacities_down[available_lines_down])

            elif direction == 'down':
                # For 'down' direction: use down capacities of lines_up + up capacities of lines_down
                capacity_parts = []
                if lines_up:
                    available_lines_up = [line for line in lines_up if line in line_capacity_data.capacities_down.columns]
                    if available_lines_up:
                        capacity_parts.append(line_capacity_data.capacities_down[available_lines_up])

                if lines_down:
                    available_lines_down = [line for line in lines_down if line in line_capacity_data.capacities_up.columns]
                    if available_lines_down:
                        capacity_parts.append(line_capacity_data.capacities_up[available_lines_down])
            else:
                raise ValueError(f"Unknown capacity direction: {direction}. Must be 'up' or 'down'")

            # Combine and sum capacities
            if capacity_parts:
                all_capacities = pd.concat(capacity_parts, axis=1)
                border_capacities[border_id] = all_capacities.sum(axis=1)
            else:
                # No capacity data available for any lines
                index = line_capacity_data.capacities_up.index
                border_capacities[border_id] = pd.Series(index=index, dtype=float)

        result = pd.DataFrame(border_capacities)
        result.columns.name = self.border_identifier
        return result

calculate

calculate(line_capacity_data: NetworkLineCapacitiesData, direction: Literal['up', 'down'] = 'up') -> DataFrame

Aggregate line-level transmission capacities to border level.

Sums transmission capacities of all lines belonging to each border, respecting the specified direction and handling bidirectional capacity data. Lines are aggregated based on their topological relationship to the border.

Direction logic:

- 'up': Capacities for flows from area_from to area_to
- 'down': Capacities for flows from area_to to area_from

For each border, the method:

1. Identifies lines in 'up' and 'down' topological directions
2. Selects appropriate capacity data based on requested direction
3. Sums capacities across all border lines
4. Handles missing data by excluding unavailable lines

Parameters:

    line_capacity_data (NetworkLineCapacitiesData, required): NetworkLineCapacitiesData containing bidirectional capacity time series. Must include capacities_up and capacities_down DataFrames with line IDs as columns and timestamps as index.
    direction (Literal['up', 'down'], default 'up'): Direction for capacity aggregation:
        - 'up': Sum capacities for area_from → area_to flows
        - 'down': Sum capacities for area_to → area_from flows

Returns:

    DataFrame: DataFrame with border-level capacity aggregations. Index matches the input capacity data, columns are border identifiers. Values represent total transmission capacity in MW for each border and timestamp.

Raises:

    ValueError: If direction is not 'up' or 'down'

Example:

>>> # Calculate up-direction capacities (exports from area_from)
>>> up_caps = calculator.calculate(capacity_data, direction='up')
>>> 
>>> # Calculate down-direction capacities (imports to area_from)  
>>> down_caps = calculator.calculate(capacity_data, direction='down')
>>> 
>>> print(f"DE→FR capacity: {up_caps.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_capacity_calculator.py
def calculate(
        self,
        line_capacity_data: NetworkLineCapacitiesData,
        direction: Literal['up', 'down'] = 'up'
) -> pd.DataFrame:
    """Aggregate line-level transmission capacities to border level.

    Sums transmission capacities of all lines belonging to each border,
    respecting the specified direction and handling bidirectional capacity data.
    Lines are aggregated based on their topological relationship to the border.

    Direction logic:
    - 'up': Capacities for flows from area_from to area_to
    - 'down': Capacities for flows from area_to to area_from

    For each border, the method:
    1. Identifies lines in 'up' and 'down' topological directions
    2. Selects appropriate capacity data based on requested direction
    3. Sums capacities across all border lines
    4. Handles missing data by excluding unavailable lines

    Args:
        line_capacity_data: NetworkLineCapacitiesData containing bidirectional
            capacity time series. Must include capacities_up and capacities_down
            DataFrames with line IDs as columns and timestamps as index.
        direction: Direction for capacity aggregation:
            - 'up': Sum capacities for area_from → area_to flows
            - 'down': Sum capacities for area_to → area_from flows

    Returns:
        DataFrame with border-level capacity aggregations. Index matches the 
        input capacity data, columns are border identifiers. Values represent
        total transmission capacity in MW for each border and timestamp.

    Raises:
        ValueError: If direction is not 'up' or 'down'

    Example:

        >>> # Calculate up-direction capacities (exports from area_from)
        >>> up_caps = calculator.calculate(capacity_data, direction='up')
        >>> 
        >>> # Calculate down-direction capacities (imports to area_from)  
        >>> down_caps = calculator.calculate(capacity_data, direction='down')
        >>> 
        >>> print(f"DE→FR capacity: {up_caps.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
    """
    self._validate_time_series_data(line_capacity_data.capacities_up, "capacities_up")
    self._validate_time_series_data(line_capacity_data.capacities_down, "capacities_down")

    border_capacities = {}

    for border_id, border in self.area_border_model_df.iterrows():
        lines_up, lines_down = self.get_border_lines_in_topological_up_and_down_direction(border_id)

        if not lines_up and not lines_down:
            # No lines found for this border - create empty series
            index = line_capacity_data.capacities_up.index
            border_capacities[border_id] = pd.Series(index=index, dtype=float)
            continue

        if direction == 'up':
            # For 'up' direction: use up capacities of lines_up + down capacities of lines_down
            capacity_parts = []
            if lines_up:
                available_lines_up = [line for line in lines_up if line in line_capacity_data.capacities_up.columns]
                if available_lines_up:
                    capacity_parts.append(line_capacity_data.capacities_up[available_lines_up])

            if lines_down:
                available_lines_down = [line for line in lines_down if line in line_capacity_data.capacities_down.columns]
                if available_lines_down:
                    capacity_parts.append(line_capacity_data.capacities_down[available_lines_down])

        elif direction == 'down':
            # For 'down' direction: use down capacities of lines_up + up capacities of lines_down
            capacity_parts = []
            if lines_up:
                available_lines_up = [line for line in lines_up if line in line_capacity_data.capacities_down.columns]
                if available_lines_up:
                    capacity_parts.append(line_capacity_data.capacities_down[available_lines_up])

            if lines_down:
                available_lines_down = [line for line in lines_down if line in line_capacity_data.capacities_up.columns]
                if available_lines_down:
                    capacity_parts.append(line_capacity_data.capacities_up[available_lines_down])
        else:
            raise ValueError(f"Unknown capacity direction: {direction}. Must be 'up' or 'down'")

        # Combine and sum capacities
        if capacity_parts:
            all_capacities = pd.concat(capacity_parts, axis=1)
            border_capacities[border_id] = all_capacities.sum(axis=1)
        else:
            # No capacity data available for any lines
            index = line_capacity_data.capacities_up.index
            border_capacities[border_id] = pd.Series(index=index, dtype=float)

    result = pd.DataFrame(border_capacities)
    result.columns.name = self.border_identifier
    return result

BorderFlowCalculator

Bases: AreaBorderVariableCalculatorBase

Calculates aggregated power flows for area borders.

This calculator aggregates line-level power flows to border level, handling bidirectional flow data and transmission losses. It can aggregate both sent and received flows, accounting for the losses that occur between sending and receiving ends, and it supports multiple output formats, including directional flows and net flows.

Flow aggregation logic:

- Lines and flows are classified as "up" or "down" based on topological direction
- Flows are aggregated respecting directionality and loss conventions
- Net flows represent the algebraic sum (up_flow - down_flow)

Example:

>>> from mesqual.energy_data_handling.network_lines_data import NetworkLineFlowsData
>>> calculator = BorderFlowCalculator(
...     area_border_model_df, line_model_df, node_model_df, 'country'
... )
>>> # Calculate net sent flows (before losses)
>>> net_flows = calculator.calculate(flow_data, flow_type='sent', direction='net')
>>> print(net_flows)
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_flow_calculator.py
class BorderFlowCalculator(AreaBorderVariableCalculatorBase):
    """Calculates aggregated power flows for area borders.

    This calculator aggregates line-level power flows to border level, handling
    bidirectional flow data and transmission losses.
    The calculator can aggregate both sent and received flows, accounting for
    transmission losses that occur between sending and receiving ends. It supports
    multiple output formats including directional flows and net flows.

    Flow aggregation logic:
    - Lines and flows are classified as "up" or "down" based on topological direction
    - Flows are aggregated respecting directionality and loss conventions
    - Net flows represent the algebraic sum (up_flow - down_flow)

    Example:

        >>> from mesqual.energy_data_handling.network_lines_data import NetworkLineFlowsData
        >>> calculator = BorderFlowCalculator(
        ...     area_border_model_df, line_model_df, node_model_df, 'country'
        ... )
        >>> # Calculate net sent flows (before losses)
        >>> net_flows = calculator.calculate(flow_data, flow_type='sent', direction='net')
        >>> print(net_flows)
    """

    @property
    def variable_name(self) -> str:
        return "border_flow"

    def calculate(
        self,
        line_flow_data: NetworkLineFlowsData,
        flow_type: Literal['sent', 'received'] = 'sent',
        direction: Literal['up', 'down', 'net'] = 'net'
    ) -> pd.DataFrame:
        """Aggregate line-level power flows to border level.

        Sums power flows of all lines belonging to each border, respecting flow
        directionality and transmission loss conventions. The aggregation handles
        both pre-loss (sent) and post-loss (received) flows.

        Flow type selection:
        - 'sent': Flows before transmission losses (injected into lines)  
        - 'received': Flows after transmission losses (withdrawn from lines)

        Direction options:
        - 'up': Flows from area_from to area_to only
        - 'down': Flows from area_to to area_from only
        - 'net': Net flows (up - down), positive means net export from area_from

        The method handles missing data by preserving NaN values when all 
        constituent flows are missing for a given timestamp.

        Args:
            line_flow_data: NetworkLineFlowsData containing bidirectional flow
                time series. Must include sent_up, received_up, sent_down, and
                received_down DataFrames with line IDs as columns.
            flow_type: Type of flows to aggregate:
                - 'sent': Pre-loss flows (power injected into transmission)
                - 'received': Post-loss flows (power withdrawn after losses)
            direction: Flow direction to calculate:
                - 'up': Flows from area_from → area_to
                - 'down': Flows from area_to → area_from  
                - 'net': Net flows (up - down)

        Returns:
            DataFrame with border-level flow aggregations. Index matches input
            flow data, columns are border identifiers. Values represent power
            flows in MW. For net flows, positive values indicate net export
            from area_from to area_to.

        Raises:
            ValueError: If flow_type not in ['sent', 'received'] or direction
                not in ['up', 'down', 'net']

        Example:

            >>> # Calculate net sent flows (most common use case)
            >>> net_sent = calculator.calculate(flows, 'sent', 'net')
            >>> 
            >>> # Calculate received flows in up direction only
            >>> up_received = calculator.calculate(flows, 'received', 'up')
            >>> 
            >>> print(f"DE→FR net flow: {net_sent.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
        """
        # Validate inputs
        if flow_type not in ['sent', 'received']:
            raise ValueError(f"Unknown flow_type: {flow_type}. Must be 'sent' or 'received'")
        if direction not in ['up', 'down', 'net']:
            raise ValueError(f"Unknown flow direction: {direction}. Must be 'up', 'down', or 'net'")

        self._validate_time_series_data(line_flow_data.sent_up, "sent_up")
        self._validate_time_series_data(line_flow_data.received_up, "received_up")

        border_flows = {}

        for border_id, border in self.area_border_model_df.iterrows():
            lines_up, lines_down = self.get_border_lines_in_topological_up_and_down_direction(border_id)

            if not lines_up and not lines_down:
                # No lines for this border - create empty series
                index = line_flow_data.sent_up.index
                border_flows[border_id] = pd.Series(index=index, dtype=float)
                continue

            # Select appropriate flow data based on flow_type
            if flow_type == 'sent':
                flow_data_up = line_flow_data.sent_up
                flow_data_down = line_flow_data.sent_down
            else:  # flow_type == 'received'
                flow_data_up = line_flow_data.received_up  
                flow_data_down = line_flow_data.received_down

            # Aggregate flows by direction relative to border
            flow_parts_up = []
            flow_parts_down = []

            if lines_up:
                # Lines in topological "up" direction
                available_lines_up = [line for line in lines_up if line in flow_data_up.columns]
                if available_lines_up:
                    flow_parts_up.append(flow_data_up[available_lines_up])

            if lines_down:  
                # Lines in topological "down" direction contribute to opposite border flow
                available_lines_down = [line for line in lines_down if line in flow_data_down.columns]
                if available_lines_down:
                    flow_parts_up.append(flow_data_down[available_lines_down])

            if lines_down:
                # Lines in topological "down" direction  
                available_lines_down = [line for line in lines_down if line in flow_data_up.columns]
                if available_lines_down:
                    flow_parts_down.append(flow_data_up[available_lines_down])

            if lines_up:
                # Lines in topological "up" direction contribute to opposite border flow
                available_lines_up = [line for line in lines_up if line in flow_data_down.columns]
                if available_lines_up:
                    flow_parts_down.append(flow_data_down[available_lines_up])

            # Sum flows for each direction
            if flow_parts_up:
                flows_up_combined = pd.concat(flow_parts_up, axis=1)
                flow_up = flows_up_combined.sum(axis=1)
                flow_up[flows_up_combined.isna().all(axis=1)] = np.nan
            else:
                flow_up = pd.Series(index=line_flow_data.sent_up.index, dtype=float)

            if flow_parts_down:
                flows_down_combined = pd.concat(flow_parts_down, axis=1)  
                flow_down = flows_down_combined.sum(axis=1)
                flow_down[flows_down_combined.isna().all(axis=1)] = np.nan
            else:
                flow_down = pd.Series(index=line_flow_data.sent_up.index, dtype=float)

            # Select final output based on direction parameter
            if direction == 'up':
                border_flows[border_id] = flow_up
            elif direction == 'down':
                border_flows[border_id] = flow_down
            else:  # direction == 'net'
                flow_net = flow_up.subtract(flow_down, fill_value=0)
                # Preserve NaN when both directions are NaN
                flow_net[flow_up.isna() & flow_down.isna()] = np.nan
                border_flows[border_id] = flow_net

        result = pd.DataFrame(border_flows)
        result.columns.name = self.border_identifier
        return result

calculate

calculate(line_flow_data: NetworkLineFlowsData, flow_type: Literal['sent', 'received'] = 'sent', direction: Literal['up', 'down', 'net'] = 'net') -> DataFrame

Aggregate line-level power flows to border level.

Sums power flows of all lines belonging to each border, respecting flow directionality and transmission loss conventions. The aggregation handles both pre-loss (sent) and post-loss (received) flows.

Flow type selection:

- 'sent': Flows before transmission losses (injected into lines)
- 'received': Flows after transmission losses (withdrawn from lines)

Direction options:

- 'up': Flows from area_from to area_to only
- 'down': Flows from area_to to area_from only
- 'net': Net flows (up - down), positive means net export from area_from

The method handles missing data by preserving NaN values when all constituent flows are missing for a given timestamp.

Parameters:

    line_flow_data (NetworkLineFlowsData, required): NetworkLineFlowsData containing bidirectional flow time series. Must include sent_up, received_up, sent_down, and received_down DataFrames with line IDs as columns.
    flow_type (Literal['sent', 'received'], default 'sent'): Type of flows to aggregate:
        - 'sent': Pre-loss flows (power injected into transmission)
        - 'received': Post-loss flows (power withdrawn after losses)
    direction (Literal['up', 'down', 'net'], default 'net'): Flow direction to calculate:
        - 'up': Flows from area_from → area_to
        - 'down': Flows from area_to → area_from
        - 'net': Net flows (up - down)

Returns:

    DataFrame: DataFrame with border-level flow aggregations. Index matches input flow data, columns are border identifiers. Values represent power flows in MW. For net flows, positive values indicate net export from area_from to area_to.

Raises:

    ValueError: If flow_type is not in ['sent', 'received'] or direction is not in ['up', 'down', 'net']

Example:

>>> # Calculate net sent flows (most common use case)
>>> net_sent = calculator.calculate(flows, 'sent', 'net')
>>> 
>>> # Calculate received flows in up direction only
>>> up_received = calculator.calculate(flows, 'received', 'up')
>>> 
>>> print(f"DE→FR net flow: {net_sent.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_flow_calculator.py
def calculate(
    self,
    line_flow_data: NetworkLineFlowsData,
    flow_type: Literal['sent', 'received'] = 'sent',
    direction: Literal['up', 'down', 'net'] = 'net'
) -> pd.DataFrame:
    """Aggregate line-level power flows to border level.

    Sums power flows of all lines belonging to each border, respecting flow
    directionality and transmission loss conventions. The aggregation handles
    both pre-loss (sent) and post-loss (received) flows.

    Flow type selection:
    - 'sent': Flows before transmission losses (injected into lines)  
    - 'received': Flows after transmission losses (withdrawn from lines)

    Direction options:
    - 'up': Flows from area_from to area_to only
    - 'down': Flows from area_to to area_from only
    - 'net': Net flows (up - down), positive means net export from area_from

    The method handles missing data by preserving NaN values when all 
    constituent flows are missing for a given timestamp.

    Args:
        line_flow_data: NetworkLineFlowsData containing bidirectional flow
            time series. Must include sent_up, received_up, sent_down, and
            received_down DataFrames with line IDs as columns.
        flow_type: Type of flows to aggregate:
            - 'sent': Pre-loss flows (power injected into transmission)
            - 'received': Post-loss flows (power withdrawn after losses)
        direction: Flow direction to calculate:
            - 'up': Flows from area_from → area_to
            - 'down': Flows from area_to → area_from  
            - 'net': Net flows (up - down)

    Returns:
        DataFrame with border-level flow aggregations. Index matches input
        flow data, columns are border identifiers. Values represent power
        flows in MW. For net flows, positive values indicate net export
        from area_from to area_to.

    Raises:
        ValueError: If flow_type not in ['sent', 'received'] or direction
            not in ['up', 'down', 'net']

    Example:

        >>> # Calculate net sent flows (most common use case)
        >>> net_sent = calculator.calculate(flows, 'sent', 'net')
        >>> 
        >>> # Calculate received flows in up direction only
        >>> up_received = calculator.calculate(flows, 'received', 'up')
        >>> 
        >>> print(f"DE→FR net flow: {net_sent.loc['2024-01-01 12:00', 'DE-FR']:.0f} MW")
    """
    # Validate inputs
    if flow_type not in ['sent', 'received']:
        raise ValueError(f"Unknown flow_type: {flow_type}. Must be 'sent' or 'received'")
    if direction not in ['up', 'down', 'net']:
        raise ValueError(f"Unknown flow direction: {direction}. Must be 'up', 'down', or 'net'")

    self._validate_time_series_data(line_flow_data.sent_up, "sent_up")
    self._validate_time_series_data(line_flow_data.received_up, "received_up")

    border_flows = {}

    for border_id, border in self.area_border_model_df.iterrows():
        lines_up, lines_down = self.get_border_lines_in_topological_up_and_down_direction(border_id)

        if not lines_up and not lines_down:
            # No lines for this border - create empty series
            index = line_flow_data.sent_up.index
            border_flows[border_id] = pd.Series(index=index, dtype=float)
            continue

        # Select appropriate flow data based on flow_type
        if flow_type == 'sent':
            flow_data_up = line_flow_data.sent_up
            flow_data_down = line_flow_data.sent_down
        else:  # flow_type == 'received'
            flow_data_up = line_flow_data.received_up  
            flow_data_down = line_flow_data.received_down

        # Aggregate flows by direction relative to border
        flow_parts_up = []
        flow_parts_down = []

        if lines_up:
            # Lines in topological "up" direction
            available_lines_up = [line for line in lines_up if line in flow_data_up.columns]
            if available_lines_up:
                flow_parts_up.append(flow_data_up[available_lines_up])

        if lines_down:  
            # Lines in topological "down" direction contribute to opposite border flow
            available_lines_down = [line for line in lines_down if line in flow_data_down.columns]
            if available_lines_down:
                flow_parts_up.append(flow_data_down[available_lines_down])

        if lines_down:
            # Lines in topological "down" direction  
            available_lines_down = [line for line in lines_down if line in flow_data_up.columns]
            if available_lines_down:
                flow_parts_down.append(flow_data_up[available_lines_down])

        if lines_up:
            # Lines in topological "up" direction contribute to opposite border flow
            available_lines_up = [line for line in lines_up if line in flow_data_down.columns]
            if available_lines_up:
                flow_parts_down.append(flow_data_down[available_lines_up])

        # Sum flows for each direction
        if flow_parts_up:
            flows_up_combined = pd.concat(flow_parts_up, axis=1)
            flow_up = flows_up_combined.sum(axis=1)
            flow_up[flows_up_combined.isna().all(axis=1)] = np.nan
        else:
            flow_up = pd.Series(index=line_flow_data.sent_up.index, dtype=float)

        if flow_parts_down:
            flows_down_combined = pd.concat(flow_parts_down, axis=1)  
            flow_down = flows_down_combined.sum(axis=1)
            flow_down[flows_down_combined.isna().all(axis=1)] = np.nan
        else:
            flow_down = pd.Series(index=line_flow_data.sent_up.index, dtype=float)

        # Select final output based on direction parameter
        if direction == 'up':
            border_flows[border_id] = flow_up
        elif direction == 'down':
            border_flows[border_id] = flow_down
        else:  # direction == 'net'
            flow_net = flow_up.subtract(flow_down, fill_value=0)
            # Preserve NaN when both directions are NaN
            flow_net[flow_up.isna() & flow_down.isna()] = np.nan
            border_flows[border_id] = flow_net

    result = pd.DataFrame(border_flows)
    result.columns.name = self.border_identifier
    return result

BorderPriceSpreadCalculator

Bases: AreaBorderVariableCalculatorBase

Calculates electricity price spreads between areas for each border.

This calculator computes price differences between connected areas. Price spreads are fundamental indicators in electricity markets for:

- Market integration analysis (zero spreads indicate perfect coupling)
- Congestion identification (non-zero spreads suggest transmission constraints)
- Arbitrage opportunity assessment (price differences drive trading incentives)
- Market efficiency evaluation (persistent spreads may indicate inefficiencies)
- Cross-border flow direction prediction (flows typically follow price gradients)

The calculator supports multiple spread calculation methods (Spread Types):

- 'raw': price_to - price_from (preserves direction and sign)
- 'absolute': |price_to - price_from| (magnitude only)
- 'directional_up': max(price_to - price_from, 0) (only positive spreads)
- 'directional_down': max(price_from - price_to, 0) (only negative spreads as positive)

Attributes:

    variable_name (str): Returns 'price_spread' for identification

Example:

>>> import pandas as pd
>>> import numpy as np
>>>
>>> # Create sample area price data
>>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
>>> area_prices = pd.DataFrame({
...     'DE': np.random.uniform(40, 80, 24),  # German prices
...     'FR': np.random.uniform(35, 75, 24),  # French prices
...     'BE': np.random.uniform(45, 85, 24)   # Belgian prices
... }, index=time_index)
>>>
>>> # Set up border model and calculator (see base class docs for setup)
>>> calculator = BorderPriceSpreadCalculator(
...     border_model_df, line_model_df, node_model_df, 'country'
... )
>>>
>>> # Calculate raw price spreads
>>> raw_spreads = calculator.calculate(area_prices, spread_type='raw')
>>> print(f"Average spread DE-FR: {raw_spreads['DE-FR'].mean():.2f} EUR/MWh")
>>>
>>> # Calculate all spread types at once
>>> all_spreads = calculator.calculate_all_spread_types(area_prices)
>>> print(all_spreads.head())
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_price_spread_calculator.py
class BorderPriceSpreadCalculator(AreaBorderVariableCalculatorBase):
    """Calculates electricity price spreads between areas for each border.

    This calculator computes price differences between connected areas.
    Price spreads are fundamental indicators in electricity markets for:
    - Market integration analysis (zero spreads indicate perfect coupling)
    - Congestion identification (non-zero spreads suggest transmission constraints)
    - Arbitrage opportunity assessment (price differences drive trading incentives)
    - Market efficiency evaluation (persistent spreads may indicate inefficiencies)
    - Cross-border flow direction prediction (flows typically follow price gradients)

    The calculator supports multiple spread calculation methods (Spread Types):
    - 'raw': price_to - price_from (preserves direction and sign)
    - 'absolute': |price_to - price_from| (magnitude only)
    - 'directional_up': max(price_to - price_from, 0) (only positive spreads)
    - 'directional_down': max(price_from - price_to, 0) (only negative spreads as positive)

    Attributes:
        variable_name (str): Returns 'price_spread' for identification

    Example:

        >>> import pandas as pd
        >>> import numpy as np
        >>>
        >>> # Create sample area price data
        >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
        >>> area_prices = pd.DataFrame({
        ...     'DE': np.random.uniform(40, 80, 24),  # German prices
        ...     'FR': np.random.uniform(35, 75, 24),  # French prices
        ...     'BE': np.random.uniform(45, 85, 24)   # Belgian prices
        ... }, index=time_index)
        >>>
        >>> # Set up border model and calculator (see base class docs for setup)
        >>> calculator = BorderPriceSpreadCalculator(
        ...     border_model_df, line_model_df, node_model_df, 'country'
        ... )
        >>>
        >>> # Calculate raw price spreads
        >>> raw_spreads = calculator.calculate(area_prices, spread_type='raw')
        >>> print(f"Average spread DE-FR: {raw_spreads['DE-FR'].mean():.2f} EUR/MWh")
        >>>
        >>> # Calculate all spread types at once
        >>> all_spreads = calculator.calculate_all_spread_types(area_prices)
        >>> print(all_spreads.head())
    """

    @property
    def variable_name(self) -> str:
        return "price_spread"

    def calculate(
        self,
        area_price_df: pd.DataFrame,
        spread_type: Literal['raw', 'absolute', 'directional_up', 'directional_down'] = 'raw'
    ) -> pd.DataFrame:
        """Calculate electricity price spreads between connected market areas.

        Computes price differences across transmission borders using the specified
        calculation method. Price spreads are calculated as directional differences
        based on the border naming convention (area_from → area_to).

        The calculation handles missing area data gracefully by excluding borders
        where either area lacks price data. This is common when analyzing subsets
        of larger energy systems or when dealing with data availability issues.

        Args:
            area_price_df (pd.DataFrame): Time series of area-level electricity prices.
                - Index: DateTime index for time series analysis
                - Columns: Area identifiers matching border area names
                - Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)
                - Example shape: (8760 hours, N areas) for annual analysis

            spread_type (Literal): Method for calculating price spreads.
                - 'raw': Directional price differences (default, preserves sign)
                - 'absolute': Magnitude of price differences (always non-negative)
                - 'directional_up': Only spreads where price_to > price_from
                - 'directional_down': Only spreads where price_from > price_to

        Returns:
            pd.DataFrame: Border-level price spreads with temporal dimension.
                - Index: Same as input area_price_df (typically DatetimeIndex)
                - Columns: Border identifiers (e.g., 'DE-FR', 'FR-BE')
                - Column name: Set to self.border_identifier for consistency
                - Values: Price spreads in same units as input prices
                - Missing data: NaN where area price data is unavailable

        Raises:
            ValueError: If spread_type is not one of the supported options

        Example:

            >>> import pandas as pd
            >>> import numpy as np
            >>>
            >>> # Create hourly price data for German and French markets
            >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
            >>> prices = pd.DataFrame({
            ...     'DE': [45.2, 43.1, 41.8, 39.5, 38.2, 42.1, 52.3, 65.4,
            ...            72.1, 68.9, 64.2, 58.7, 55.1, 53.8, 56.2, 61.4,
            ...            67.8, 74.2, 69.1, 64.3, 58.9, 52.1, 48.7, 46.3],
            ...     'FR': [42.8, 41.2, 39.1, 37.8, 36.4, 40.3, 49.8, 62.1,
            ...            68.9, 65.2, 61.4, 56.8, 53.2, 51.9, 54.1, 58.7,
            ...            64.3, 70.8, 66.2, 61.1, 56.3, 49.8, 46.1, 43.9]
            ... }, index=time_index)
            >>>
            >>> # Calculate raw spreads (FR - DE for DE-FR border)
            >>> raw_spreads = calculator.calculate(prices, 'raw')
            >>> print(f"Average DE-FR spread: {raw_spreads['DE-FR'].mean():.2f} EUR/MWh")
            >>> # Output: Average DE-FR spread: -2.55 EUR/MWh (German prices higher)
            >>>
            >>> # Calculate absolute spreads for congestion analysis
            >>> abs_spreads = calculator.calculate(prices, 'absolute')
            >>> print(f"Average absolute spread: {abs_spreads['DE-FR'].mean():.2f} EUR/MWh")
            >>> # Output: Average absolute spread: 2.55 EUR/MWh
            >>>
            >>> # Analyze directional spreads for flow prediction
            >>> up_spreads = calculator.calculate(prices, 'directional_up')
            >>> down_spreads = calculator.calculate(prices, 'directional_down')
            >>> print(f"Hours with FR > DE prices: {(up_spreads['DE-FR'] > 0).sum()}")
            >>> print(f"Hours with DE > FR prices: {(down_spreads['DE-FR'] > 0).sum()}")
        """
        self._validate_time_series_data(area_price_df, 'area_price_df')

        spreads = {}

        for border_id, border in self.area_border_model_df.iterrows():
            area_from = border[self.source_area_identifier]
            area_to = border[self.target_area_identifier]

            if area_from in area_price_df.columns and area_to in area_price_df.columns:
                price_from = area_price_df[area_from]
                price_to = area_price_df[area_to]

                raw_spread = price_to - price_from

                if spread_type == 'raw':
                    spreads[border_id] = raw_spread
                elif spread_type == 'absolute':
                    spreads[border_id] = raw_spread.abs()
                elif spread_type == 'directional_up':
                    spreads[border_id] = raw_spread.clip(lower=0)
                elif spread_type == 'directional_down':
                    spreads[border_id] = (-1 * raw_spread).clip(lower=0)
                else:
                    raise ValueError(f"Unknown spread_type: {spread_type}")

        result = pd.DataFrame(spreads)
        result.columns.name = self.border_identifier
        return result

    def calculate_all_spread_types(self, area_price_df: pd.DataFrame) -> pd.DataFrame:
        """Calculate all price spread types simultaneously for comprehensive analysis.

        Returns a MultiIndex DataFrame with all four spread calculation methods
        (raw, absolute, directional_up, directional_down) in a single DataFrame,
        providing a complete view of price relationships across all borders.

        Args:
            area_price_df (pd.DataFrame): Time series of area-level electricity prices.
                Same format as required by the calculate() method:
                - Index: DateTime index for temporal analysis
                - Columns: Area identifiers matching border definitions
                - Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)

        Returns:
            pd.DataFrame: MultiIndex DataFrame with comprehensive spread analysis.
                - Index: Same temporal index as input area_price_df
                - Columns: MultiIndex with two levels:
                    - Level 0 ('spread_type'): ['raw', 'absolute', 'directional_up', 'directional_down']
                    - Level 1 (border_identifier): Border names (e.g., 'DE-FR', 'FR-BE')
                - Values: Price spreads in same units as input prices
                - Structure: (time_periods, spread_types × borders)

        Example:

            >>> import pandas as pd
            >>> import numpy as np
            >>>
            >>> # Create sample price data
            >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
            >>> prices = pd.DataFrame({
            ...     'DE': np.random.uniform(40, 80, 24),
            ...     'FR': np.random.uniform(35, 75, 24),
            ...     'BE': np.random.uniform(45, 85, 24)
            ... }, index=time_index)
            >>>
            >>> # Calculate all spread types
            >>> all_spreads = calculator.calculate_all_spread_types(prices)
            >>> print(all_spreads.columns.names)
            >>> # Output: ['spread_type', 'country_border']
            >>>
            >>> # Access specific spread types
            >>> raw_spreads = all_spreads['raw']
            >>> absolute_spreads = all_spreads['absolute']
            >>>
            >>> # Analyze spread statistics by type
            >>> spread_stats = all_spreads.groupby(level='spread_type', axis=1).mean()
            >>> print(spread_stats)
            >>>
            >>> # Compare directional flows
            >>> up_flows = all_spreads['directional_up'].sum(axis=1)
            >>> down_flows = all_spreads['directional_down'].sum(axis=1)
            >>> net_spread_pressure = up_flows - down_flows
            >>>
            >>> # Identify hours with high price volatility
            >>> high_volatility_hours = absolute_spreads.mean(axis=1) > 10  # EUR/MWh threshold
            >>> print(f"Hours with high spread volatility: {high_volatility_hours.sum()}")
        """
        results = {}
        for spread_type in ['raw', 'absolute', 'directional_up', 'directional_down']:
            results[spread_type] = self.calculate(area_price_df, spread_type)

        return pd.concat(results, axis=1, names=['spread_type'])

calculate

calculate(area_price_df: DataFrame, spread_type: Literal['raw', 'absolute', 'directional_up', 'directional_down'] = 'raw') -> DataFrame

Calculate electricity price spreads between connected market areas.

Computes price differences across transmission borders using the specified calculation method. Price spreads are calculated as directional differences based on the border naming convention (area_from → area_to).

The calculation handles missing area data gracefully by excluding borders where either area lacks price data. This is common when analyzing subsets of larger energy systems or when dealing with data availability issues.
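
In practice, "excluding borders" means the returned DataFrame simply has no column for a border whose areas are not both present in area_price_df. A small sketch of that behaviour, reusing the calculator and area_prices from the class-level example above (so the 'DE-FR' and 'FR-BE' borders are assumed):

>>> partial_prices = area_prices[['DE', 'FR']]   # drop BE prices
>>> spreads = calculator.calculate(partial_prices, spread_type='raw')
>>> list(spreads.columns)   # FR-BE is excluded because BE has no price data
['DE-FR']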

Parameters:

Name Type Description Default
area_price_df DataFrame

Time series of area-level electricity prices.

- Index: DateTime index for time series analysis
- Columns: Area identifiers matching border area names
- Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)
- Example shape: (8760 hours, N areas) for annual analysis

required
spread_type Literal

Method for calculating price spreads.

- 'raw': Directional price differences (default, preserves sign)
- 'absolute': Magnitude of price differences (always non-negative)
- 'directional_up': Only spreads where price_to > price_from
- 'directional_down': Only spreads where price_from > price_to

'raw'

Returns:

Type Description
DataFrame

pd.DataFrame: Border-level price spreads with temporal dimension.

- Index: Same as input area_price_df (typically a DatetimeIndex)
- Columns: Border identifiers (e.g., 'DE-FR', 'FR-BE')
- Column name: Set to self.border_identifier for consistency
- Values: Price spreads in the same units as the input prices
- Missing data: NaN where area price data is unavailable

Raises:

Type Description
ValueError

If spread_type is not one of the supported options

Example:

>>> import pandas as pd
>>> import numpy as np
>>>
>>> # Create hourly price data for German and French markets
>>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
>>> prices = pd.DataFrame({
...     'DE': [45.2, 43.1, 41.8, 39.5, 38.2, 42.1, 52.3, 65.4,
...            72.1, 68.9, 64.2, 58.7, 55.1, 53.8, 56.2, 61.4,
...            67.8, 74.2, 69.1, 64.3, 58.9, 52.1, 48.7, 46.3],
...     'FR': [42.8, 41.2, 39.1, 37.8, 36.4, 40.3, 49.8, 62.1,
...            68.9, 65.2, 61.4, 56.8, 53.2, 51.9, 54.1, 58.7,
...            64.3, 70.8, 66.2, 61.1, 56.3, 49.8, 46.1, 43.9]
... }, index=time_index)
>>>
>>> # Calculate raw spreads (FR - DE for DE-FR border)
>>> raw_spreads = calculator.calculate(prices, 'raw')
>>> print(f"Average DE-FR spread: {raw_spreads['DE-FR'].mean():.2f} EUR/MWh")
>>> # Output: Average DE-FR spread: -2.55 EUR/MWh (German prices higher)
>>>
>>> # Calculate absolute spreads for congestion analysis
>>> abs_spreads = calculator.calculate(prices, 'absolute')
>>> print(f"Average absolute spread: {abs_spreads['DE-FR'].mean():.2f} EUR/MWh")
>>> # Output: Average absolute spread: 2.55 EUR/MWh
>>>
>>> # Analyze directional spreads for flow prediction
>>> up_spreads = calculator.calculate(prices, 'directional_up')
>>> down_spreads = calculator.calculate(prices, 'directional_down')
>>> print(f"Hours with FR > DE prices: {(up_spreads['DE-FR'] > 0).sum()}")
>>> print(f"Hours with DE > FR prices: {(down_spreads['DE-FR'] > 0).sum()}")
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_price_spread_calculator.py
def calculate(
    self,
    area_price_df: pd.DataFrame,
    spread_type: Literal['raw', 'absolute', 'directional_up', 'directional_down'] = 'raw'
) -> pd.DataFrame:
    """Calculate electricity price spreads between connected market areas.

    Computes price differences across transmission borders using the specified
    calculation method. Price spreads are calculated as directional differences
    based on the border naming convention (area_from → area_to).

    The calculation handles missing area data gracefully by excluding borders
    where either area lacks price data. This is common when analyzing subsets
    of larger energy systems or when dealing with data availability issues.

    Args:
        area_price_df (pd.DataFrame): Time series of area-level electricity prices.
            - Index: DateTime index for time series analysis
            - Columns: Area identifiers matching border area names
            - Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)
            - Example shape: (8760 hours, N areas) for annual analysis

        spread_type (Literal): Method for calculating price spreads.
            - 'raw': Directional price differences (default, preserves sign)
            - 'absolute': Magnitude of price differences (always non-negative)
            - 'directional_up': Only spreads where price_to > price_from
            - 'directional_down': Only spreads where price_from > price_to

    Returns:
        pd.DataFrame: Border-level price spreads with temporal dimension.
            - Index: Same as input area_price_df (typically DatetimeIndex)
            - Columns: Border identifiers (e.g., 'DE-FR', 'FR-BE')
            - Column name: Set to self.border_identifier for consistency
            - Values: Price spreads in same units as input prices
            - Missing data: NaN where area price data is unavailable

    Raises:
        ValueError: If spread_type is not one of the supported options

    Example:

        >>> import pandas as pd
        >>> import numpy as np
        >>>
        >>> # Create hourly price data for German and French markets
        >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
        >>> prices = pd.DataFrame({
        ...     'DE': [45.2, 43.1, 41.8, 39.5, 38.2, 42.1, 52.3, 65.4,
        ...            72.1, 68.9, 64.2, 58.7, 55.1, 53.8, 56.2, 61.4,
        ...            67.8, 74.2, 69.1, 64.3, 58.9, 52.1, 48.7, 46.3],
        ...     'FR': [42.8, 41.2, 39.1, 37.8, 36.4, 40.3, 49.8, 62.1,
        ...            68.9, 65.2, 61.4, 56.8, 53.2, 51.9, 54.1, 58.7,
        ...            64.3, 70.8, 66.2, 61.1, 56.3, 49.8, 46.1, 43.9]
        ... }, index=time_index)
        >>>
        >>> # Calculate raw spreads (FR - DE for DE-FR border)
        >>> raw_spreads = calculator.calculate(prices, 'raw')
        >>> print(f"Average DE-FR spread: {raw_spreads['DE-FR'].mean():.2f} EUR/MWh")
        >>> # Output: Average DE-FR spread: -2.55 EUR/MWh (German prices higher)
        >>>
        >>> # Calculate absolute spreads for congestion analysis
        >>> abs_spreads = calculator.calculate(prices, 'absolute')
        >>> print(f"Average absolute spread: {abs_spreads['DE-FR'].mean():.2f} EUR/MWh")
        >>> # Output: Average absolute spread: 2.55 EUR/MWh
        >>>
        >>> # Analyze directional spreads for flow prediction
        >>> up_spreads = calculator.calculate(prices, 'directional_up')
        >>> down_spreads = calculator.calculate(prices, 'directional_down')
        >>> print(f"Hours with FR > DE prices: {(up_spreads['DE-FR'] > 0).sum()}")
        >>> print(f"Hours with DE > FR prices: {(down_spreads['DE-FR'] > 0).sum()}")
    """
    self._validate_time_series_data(area_price_df, 'area_price_df')

    spreads = {}

    for border_id, border in self.area_border_model_df.iterrows():
        area_from = border[self.source_area_identifier]
        area_to = border[self.target_area_identifier]

        if area_from in area_price_df.columns and area_to in area_price_df.columns:
            price_from = area_price_df[area_from]
            price_to = area_price_df[area_to]

            raw_spread = price_to - price_from

            if spread_type == 'raw':
                spreads[border_id] = raw_spread
            elif spread_type == 'absolute':
                spreads[border_id] = raw_spread.abs()
            elif spread_type == 'directional_up':
                spreads[border_id] = raw_spread.clip(lower=0)
            elif spread_type == 'directional_down':
                spreads[border_id] = (-1 * raw_spread).clip(lower=0)
            else:
                raise ValueError(f"Unknown spread_type: {spread_type}")

    result = pd.DataFrame(spreads)
    result.columns.name = self.border_identifier
    return result

calculate_all_spread_types

calculate_all_spread_types(area_price_df: DataFrame) -> DataFrame

Calculate all price spread types simultaneously for comprehensive analysis.

Returns a MultiIndex DataFrame with all four spread calculation methods (raw, absolute, directional_up, directional_down) in a single DataFrame, providing a complete view of price relationships across all borders.
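
The MultiIndex columns come from pd.concat with the names argument, as in the source below. A small standalone sketch of that pattern, using toy single-column frames unrelated to any real border data:

>>> import pandas as pd
>>> parts = {
...     'raw': pd.DataFrame({'DE-FR': [1.0, -2.0]}),
...     'absolute': pd.DataFrame({'DE-FR': [1.0, 2.0]})
... }
>>> combined = pd.concat(parts, axis=1, names=['spread_type'])
>>> # Dict keys become the outer 'spread_type' level; border names stay on the inner level
>>> combined['raw']['DE-FR'].tolist()
[1.0, -2.0]
>>> combined['absolute']['DE-FR'].tolist()
[1.0, 2.0]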

Parameters:

Name Type Description Default
area_price_df DataFrame

Time series of area-level electricity prices. Same format as required by the calculate() method:

- Index: DateTime index for temporal analysis
- Columns: Area identifiers matching border definitions
- Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)

required

Returns:

Type Description
DataFrame

pd.DataFrame: MultiIndex DataFrame with comprehensive spread analysis.

- Index: Same temporal index as input area_price_df
- Columns: MultiIndex with two levels:
    - Level 0 ('spread_type'): ['raw', 'absolute', 'directional_up', 'directional_down']
    - Level 1 (border_identifier): Border names (e.g., 'DE-FR', 'FR-BE')
- Values: Price spreads in the same units as the input prices
- Structure: (time_periods, spread_types × borders)

Example:

>>> import pandas as pd
>>> import numpy as np
>>>
>>> # Create sample price data
>>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
>>> prices = pd.DataFrame({
...     'DE': np.random.uniform(40, 80, 24),
...     'FR': np.random.uniform(35, 75, 24),
...     'BE': np.random.uniform(45, 85, 24)
... }, index=time_index)
>>>
>>> # Calculate all spread types
>>> all_spreads = calculator.calculate_all_spread_types(prices)
>>> print(all_spreads.columns.names)
>>> # Output: ['spread_type', 'country_border']
>>>
>>> # Access specific spread types
>>> raw_spreads = all_spreads['raw']
>>> absolute_spreads = all_spreads['absolute']
>>>
>>> # Analyze spread statistics by type
>>> spread_stats = all_spreads.groupby(level='spread_type', axis=1).mean()
>>> print(spread_stats)
>>>
>>> # Compare directional flows
>>> up_flows = all_spreads['directional_up'].sum(axis=1)
>>> down_flows = all_spreads['directional_down'].sum(axis=1)
>>> net_spread_pressure = up_flows - down_flows
>>>
>>> # Identify hours with high price volatility
>>> high_volatility_hours = absolute_spreads.mean(axis=1) > 10  # EUR/MWh threshold
>>> print(f"Hours with high spread volatility: {high_volatility_hours.sum()}")
Source code in submodules/mesqual/mesqual/energy_data_handling/area_accounting/border_variable_price_spread_calculator.py
def calculate_all_spread_types(self, area_price_df: pd.DataFrame) -> pd.DataFrame:
    """Calculate all price spread types simultaneously for comprehensive analysis.

    Returns a MultiIndex DataFrame with all four spread calculation methods
    (raw, absolute, directional_up, directional_down) in a single DataFrame,
    providing a complete view of price relationships across all borders.

    Args:
        area_price_df (pd.DataFrame): Time series of area-level electricity prices.
            Same format as required by the calculate() method:
            - Index: DateTime index for temporal analysis
            - Columns: Area identifiers matching border definitions
            - Values: Prices in consistent units (e.g., EUR/MWh, USD/MWh)

    Returns:
        pd.DataFrame: MultiIndex DataFrame with comprehensive spread analysis.
            - Index: Same temporal index as input area_price_df
            - Columns: MultiIndex with two levels:
                - Level 0 ('spread_type'): ['raw', 'absolute', 'directional_up', 'directional_down']
                - Level 1 (border_identifier): Border names (e.g., 'DE-FR', 'FR-BE')
            - Values: Price spreads in same units as input prices
            - Structure: (time_periods, spread_types × borders)

    Example:

        >>> import pandas as pd
        >>> import numpy as np
        >>>
        >>> # Create sample price data
        >>> time_index = pd.date_range('2024-01-01', periods=24, freq='h')
        >>> prices = pd.DataFrame({
        ...     'DE': np.random.uniform(40, 80, 24),
        ...     'FR': np.random.uniform(35, 75, 24),
        ...     'BE': np.random.uniform(45, 85, 24)
        ... }, index=time_index)
        >>>
        >>> # Calculate all spread types
        >>> all_spreads = calculator.calculate_all_spread_types(prices)
        >>> print(all_spreads.columns.names)
        >>> # Output: ['spread_type', 'country_border']
        >>>
        >>> # Access specific spread types
        >>> raw_spreads = all_spreads['raw']
        >>> absolute_spreads = all_spreads['absolute']
        >>>
        >>> # Analyze spread statistics by type
        >>> spread_stats = all_spreads.groupby(level='spread_type', axis=1).mean()
        >>> print(spread_stats)
        >>>
        >>> # Compare directional flows
        >>> up_flows = all_spreads['directional_up'].sum(axis=1)
        >>> down_flows = all_spreads['directional_down'].sum(axis=1)
        >>> net_spread_pressure = up_flows - down_flows
        >>>
        >>> # Identify hours with high price volatility
        >>> high_volatility_hours = absolute_spreads.mean(axis=1) > 10  # EUR/MWh threshold
        >>> print(f"Hours with high spread volatility: {high_volatility_hours.sum()}")
    """
    results = {}
    for spread_type in ['raw', 'absolute', 'directional_up', 'directional_down']:
        results[spread_type] = self.calculate(area_price_df, spread_type)

    return pd.concat(results, axis=1, names=['spread_type'])