formulations#

class gamspy.formulations.AvgPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | tuple[int, int] | None = None, padding: int = 0, name_prefix: str | None = None)[source]#

Bases: object

Formulation generator for 2D Avg Pooling in GAMS.

Parameters:
container: Container

Container that will contain the new variable and equations.

kernel_size: int | tuple[int, int]

Filter size

stride: int | tuple[int, int] | None

Stride in the avg pooling; it is equal to kernel_size if not provided

padding: int | tuple[int, int]

Amount of padding to be added to input, by default 0

name_prefix: str | None

Prefix for generated GAMSPy symbols, by default None which means random prefix. Using the same name_prefix in different formulations causes name conflicts. Do not use the same name_prefix again.

Methods

__call__(input[, propagate_bounds])

Forward pass your input, generate output and equations required for calculating the average pooling.

Examples

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 avg pooling
>>> ap1 = gp.formulations.AvgPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = ap1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
__call__(input: Parameter | Variable, propagate_bounds: bool = True) tuple[Variable, list[Equation]][source]#

Forward pass your input, generate output and equations required for calculating the average pooling. Unlike min or max pooling, avg pooling does not require binary variables or a big-M formulation. If propagate_bounds is True, bounds for the output variable are also set based on the input. Returns the output variable and the list of equations required for the avg pooling formulation.

Parameters:
input: gp.Parameter | gp.Variable

input to the avg pooling 2d layer, must be in shape (batch x in_channels x height x width)

propagate_bounds: bool

If True, it will set the bounds for the output variable based on the input. Default value: True

Returns:
tuple[gp.Variable, list[gp.Equation]]
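
A minimal sketch of a call with bound propagation disabled; the output variable then carries no bounds derived from the input (setup mirrors the class example above):

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> ap = gp.formulations.AvgPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = ap(inp, propagate_bounds=False)
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]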
class gamspy.formulations.Conv2d(container: Container, in_channels: int, out_channels: int, kernel_size: int | tuple[int, int], stride: int | tuple[int, int] = 1, padding: int | tuple[int, int] | Literal['same', 'valid'] = 0, bias: bool = True, name_prefix: str | None = None)[source]#

Bases: object

Formulation generator for 2D Convolution symbol in GAMS. It can be used to embed convolutional layers of trained neural networks in your problem. It can also be used to embed convolutional layers when you need weights as variables.

Parameters:
container: Container

Container that will contain the new variable and equations.

in_channels: int

Number of channels in the input

out_channels: int

Number of channels in the output

kernel_size: int | tuple[int, int]

Filter size

stride: int | tuple[int, int]

Stride in the convolution, by default 1

padding: int | tuple[int, int] | Literal["same", "valid"]

Specifies the amount of padding to apply to the input, by default 0. If an integer is provided, that padding is applied to both the height and width. If a tuple of two integers is given, the first value determines the padding for the top and bottom, while the second value sets the padding for the left and right. It is also possible to provide the string literals "same" and "valid". "same" pads the input so the output has the same shape as the input. "valid" is the same as no padding.

bias: bool

Whether bias is added after the convolution, by default True

name_prefix: str | None

Prefix for generated GAMSPy symbols, by default None which means random prefix. Using the same name_prefix in different formulations causes name conflicts. Do not use the same name_prefix again.

Methods

__call__(input[, propagate_bounds])

Forward pass your input, generate output and equations required for calculating the convolution.

load_weights(weight[, bias])

Mark Conv2d as parameter and load weights from NumPy arrays.

make_variable()

Mark Conv2d as variable.

Examples

>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> w1 = np.random.rand(2, 1, 3, 3)
>>> b1 = np.random.rand(2)
>>> m = gp.Container()
>>> # in_channels=1, out_channels=2, kernel_size=3x3
>>> conv1 = gp.formulations.Conv2d(m, 1, 2, 3)
>>> conv1.load_weights(w1, b1)
>>> # 10 images, 1 channel, 24 by 24
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = conv1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 2, 22, 22]
__call__(input: Parameter | Variable, propagate_bounds: bool = True) tuple[Variable, list[Equation]][source]#

Forward pass your input, generate output and equations required for calculating the convolution. If propagate_bounds is True, the input is of type variable, and load_weights was called, then the bounds of the input are propagated to the output.

Parameters:
input: gp.Parameter | gp.Variable

input to the conv layer, must be in shape (batch x in_channels x height x width)

propagate_bounds: bool = True

If True, propagate bounds of the input to the output. Otherwise, the output variable is unbounded.
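
A brief sketch, assuming weights are already loaded, showing how padding="same" (described above) keeps the spatial dimensions of the output equal to those of the input:

>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # in_channels=1, out_channels=2, kernel_size=3x3, "same" padding keeps 24x24
>>> conv1 = gp.formulations.Conv2d(m, 1, 2, 3, padding="same")
>>> conv1.load_weights(np.random.rand(2, 1, 3, 3), np.random.rand(2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = conv1(inp)
>>> [len(x) for x in out.domain]
[10, 2, 24, 24]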

load_weights(weight: ndarray, bias: ndarray | None = None) None[source]#

Mark Conv2d as parameter and load weights from NumPy arrays. After this is called make_variable cannot be called. Use this when you already have the weights of your convolutional layer.

Parameters:
weight: np.ndarray

Conv2d layer weights in shape (out_channels x in_channels x kernel_size[0] x kernel_size[1])

bias: np.ndarray | None

Conv2d layer bias in shape (out_channels, ), only required when bias=True during initialization

make_variable() None[source]#

Mark Conv2d as variable. After this is called load_weights cannot be called. Use this when you need to learn the weights of your convolutional layer in your optimization model.

This does not initialize the weights; it is highly recommended that you set initial values for the weight and bias variables.
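
A minimal sketch of the weights-as-variables workflow; here the input is supplied as data (building the Parameter directly from a NumPy array is an assumption of this sketch, not a requirement of the method):

>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> conv1 = gp.formulations.Conv2d(m, 1, 2, 3)
>>> conv1.make_variable()
>>> # input provided as data; the kernel weights become decision variables
>>> inp = gp.Parameter(m, domain=dim((5, 1, 8, 8)), records=np.random.rand(5, 1, 8, 8))
>>> out, eqs = conv1(inp)
>>> [len(x) for x in out.domain]
[5, 2, 6, 6]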

class gamspy.formulations.Linear(container: Container, in_features: int, out_features: int, bias: bool = True, name_prefix: str | None = None)[source]#

Bases: object

Formulation generator for Linear layer in GAMS.

Parameters:
container: Container

Container that will contain the new variable and equations.

in_features: int

Input feature size

out_features: int

Output feature size

bias: bool = True

Whether bias is added after the linear transformation, by default True

name_prefix: str | None

Prefix for generated GAMSPy symbols, by default None which means random prefix. Using the same name_prefix in different formulations causes name conflicts. Do not use the same name_prefix again.

Methods

__call__(input[, propagate_bounds])

Forward pass your input, generate output and equations required for calculating the linear transformation.

load_weights(weight[, bias])

Mark Linear as parameter and load weights from NumPy arrays.

make_variable()

Mark Linear layer as variable.

Examples

>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> l1 = gp.formulations.Linear(m, 128, 64)
>>> w = np.random.rand(64, 128)
>>> b = np.random.rand(64)
>>> l1.load_weights(w, b)
>>> x = gp.Variable(m, "x", domain=dim([10, 128]))
>>> y, set_y = l1(x)
>>> [d.name for d in y.domain]
['DenseDim10_1', 'DenseDim64_1']
__call__(input: Parameter | Variable, propagate_bounds: bool = True) tuple[Variable, list[Equation]][source]#

Forward pass your input, generate output and equations required for calculating the linear transformation. If propagate_bounds is True, the input is of type variable, and load_weights was called, then the bounds of the input are propagated to the output.

Parameters:
input: gp.Parameter | gp.Variable

input to the linear layer, must be in shape (* x in_features)

propagate_bounds: bool = True

If True, propagate bounds of the input to the output. Otherwise, the output variable is unbounded.
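
A short sketch of the "(* x in_features)" convention: any number of leading batch dimensions is preserved and only the last dimension is transformed (weights assumed to be loaded as in the class example):

>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> l1 = gp.formulations.Linear(m, 128, 64)
>>> l1.load_weights(np.random.rand(64, 128), np.random.rand(64))
>>> # two leading batch dimensions, feature dimension last
>>> x = gp.Variable(m, domain=dim([10, 5, 128]))
>>> y, eqs = l1(x)
>>> [len(d) for d in y.domain]
[10, 5, 64]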

load_weights(weight: ndarray, bias: ndarray | None = None) None[source]#

Mark Linear as parameter and load weights from NumPy arrays. After this is called make_variable cannot be called. Use this when you already have the weights of your Linear layer.

Parameters:
weight: np.ndarray

Linear layer weights in shape (out_features x in_features)

bias: np.ndarray | None

Linear layer bias in shape (out_features, ), only required when bias=True during initialization

make_variable() None[source]#

Mark Linear layer as variable. After this is called load_weights cannot be called. Use this when you need to learn the weights of your linear layer in your optimization model.

This does not initialize the weights; it is highly recommended that you set initial values for the weight and bias variables.

class gamspy.formulations.MaxPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | None = None, padding: int = 0, name_prefix: str | None = None)[source]#

Bases: _MPool2d

Formulation generator for 2D Max Pooling in GAMS.

Parameters:
container: Container

Container that will contain the new variable and equations.

kernel_size: int | tuple[int, int]

Filter size

stride: int | tuple[int, int] | None

Stride in the max pooling; it is equal to kernel_size if not provided

padding: int | tuple[int, int]

Amount of padding to be added to input, by default 0

name_prefix: str | None

Prefix for generated GAMSPy symbols, by default None which means random prefix. Using the same name_prefix in different formulations causes name conflicts. Do not use the same name_prefix again.

Methods

__call__(input[, big_m, propagate_bounds])

Forward pass your input, generate output and equations required for calculating the max pooling.

Examples

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 max pooling
>>> mp1 = gp.formulations.MaxPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = mp1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
__call__(input: Parameter | Variable, big_m: int = 1000, propagate_bounds: bool = True) tuple[Variable, list[Equation]][source]#

Forward pass your input, generate output and equations required for calculating the max pooling. Returns the output variable and the list of equations required for the max pooling formulation. If propagate_bounds is True, bounds for the output variable are also set based on the input, and those bounds are used to compute the big-M value required for the pooling operation.

Parameters:
input: gp.Parameter | gp.Variable

input to the max pooling 2d layer, must be in shape (batch x in_channels x height x width)

big_m: int

Big M value that is required for the pooling operation. Default value: 1000.

propagate_bounds: bool

If True, it will set the bounds for the output variable based on the input. Default value: True

Returns:
tuple[gp.Variable, list[gp.Equation]]
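
A minimal sketch passing an explicit big_m value together with input bounds; as described above, the bounds are also used when computing the big-M for the pooling operation:

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> mp1 = gp.formulations.MaxPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> inp.lo[...] = 0
>>> inp.up[...] = 1
>>> out, eqs = mp1(inp, big_m=100)
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]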
class gamspy.formulations.MinPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | None = None, padding: int = 0, name_prefix: str | None = None)[source]#

Bases: _MPool2d

Formulation generator for 2D Min Pooling in GAMS.

Parameters:
container: Container

Container that will contain the new variable and equations.

kernel_size: int | tuple[int, int]

Filter size

stride: int | tuple[int, int] | None

Stride in the min pooling; it is equal to kernel_size if not provided

padding: int | tuple[int, int]

Amount of padding to be added to input, by default 0

name_prefix: str | None

Prefix for generated GAMSPy symbols, by default None which means random prefix. Using the same name_prefix in different formulations causes name conflicts. Do not use the same name_prefix again.

Methods

__call__(input[, big_m, propagate_bounds])

Forward pass your input, generate output and equations required for calculating the min pooling.

Examples

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 min pooling
>>> mp1 = gp.formulations.MinPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = mp1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
__call__(input: Parameter | Variable, big_m: int = 1000, propagate_bounds: bool = True) tuple[Variable, list[Equation]][source]#

Forward pass your input, generate output and equations required for calculating the min pooling. Returns the output variable and the list of equations required for the min pooling formulation. If propagate_bounds is True, bounds for the output variable are also set based on the input, and those bounds are used to compute the big-M value required for the pooling operation.

Parameters:
input: gp.Parameter | gp.Variable

input to the min pooling 2d layer, must be in shape (batch x in_channels x height x width)

big_m: int

Big M value that is required for the pooling operation. Default value: 1000.

propagate_bounds: bool

If True, it will set the bounds for the output variable based on the input. Default value: True

Returns:
tuple[gp.Variable, list[gp.Equation]]
gamspy.formulations.flatten_dims(x: Variable | Parameter, dims: list[int], propagate_bounds: bool = True) tuple[Parameter | Variable, list[Equation]][source]#

Flatten domains indicated by dims into a single domain. If propagate_bounds is True, and x is of type variable, the bounds of the input variable are propagated to the output.

Parameters:
x: gp.Variable | gp.Parameter

Input to be flattened

dims: list[int]

List of integers indicating indices of the domains to be flattened. Must be consecutive indices.

propagate_bounds: bool, optional

Propagate bounds from the input to the output variable. Default is True.

Examples

>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = gp.formulations.flatten_dims(inp, [2, 3])
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 576]
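
The flattened dims need not be the trailing ones; a brief additional sketch, reusing the container and input above, flattens the leading batch and channel dimensions instead:

>>> out2, eqs2 = gp.formulations.flatten_dims(inp, [0, 1])
>>> [len(x) for x in out2.domain]
[10, 24, 24]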
gamspy.formulations.pwl_convexity_formulation(input_x: Variable, x_points: Sequence[int | float], y_points: Sequence[int | float], using: Literal['binary', 'sos2'] = 'binary', bound_left: bool = True, bound_right: bool = True) tuple[Variable, list[Equation]][source]#

This function implements a piecewise linear function using the convexity formulation. Given an input (independent) variable input_x, along with the defining x_points and corresponding y_points of the piecewise function, it constructs the dependent variable y and formulates the equations necessary to define the function.

Here is the convexity formulation:

\[
\begin{aligned}
x &= \sum_{i}{x\_points_i * \lambda_i}\\
y &= \sum_{i}{y\_points_i * \lambda_i}\\
\sum_{i}{\lambda_i} &= 1\\
\lambda_i &\in SOS2
\end{aligned}
\]

By default, SOS2 variables are implemented using binary variables. See Modeling disjunctive constraints with a logarithmic number of binary variables and constraints. However, you can switch to SOS2 (Special Ordered Set Type 2) variables by setting the using parameter to "sos2".

The implementation handles discontinuities in the function. To represent a discontinuity at a specific point x_i, include x_i twice in the x_points array with corresponding values in y_points. For example, if x_points = [1, 3, 3, 5] and y_points = [10, 30, 50, 70], the function allows y to take either 30 or 50 when x = 3. Note that discontinuities always introduce additional binary variables, regardless of the value of the using argument.

It is possible to disallow a specific range by including None in both x_points and the corresponding y_points. For example, with x_points = [1, 3, None, 5, 7] and y_points = [10, 35, None, -20, 40], the range between 3 and 5 is disallowed for input_x.

However, x_points cannot start or end with a None value, and a None value cannot be followed by another None. Additionally, if x_i is None, then y_i must also be None. Similar to the discontinuities, disallowed ranges always introduce additional binary variables, regardless of the value of the using argument.

The input variable input_x is restricted to the range defined by x_points unless bound_left or bound_right is set to False. Setting either to False introduces SOS1 type variables. When input_x is not bounded, the first and/or the last line segments are treated as if they were extended.

Returns the dependent variable y and the equations required to model the piecewise linear relationship.

Parameters:
input_x: gp.Variable

Independent variable of the piecewise linear function

x_points: typing.Sequence[int | float]

Break points of the piecewise linear function in the x-axis

y_points: typing.Sequence[int | float]

Break points of the piecewise linear function in the y-axis

using: str = "binary"

Which type of variable is used to implement the piecewise function, either "binary" or "sos2"

bound_left: bool = True

Whether input_x should be limited to start from x_points[0]

bound_right: bool = True

Whether input_x should be limited to end at x_points[-1]

Returns:
tuple[gp.Variable, list[Equation]]

Examples

>>> from gamspy import Container, Variable, Set
>>> from gamspy.formulations import pwl_convexity_formulation
>>> m = Container()
>>> x = Variable(m, "x")
>>> y, eqs = pwl_convexity_formulation(x, [-1, 4, 10, 10, 20], [-2, 8, 15, 17, 37])
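
A further sketch combining the features described above: a discontinuity at x = 3 and a disallowed range between 5 and 7 (breakpoint values chosen purely for illustration):

>>> x2 = Variable(m, "x2")
>>> y2, eqs2 = pwl_convexity_formulation(x2, [1, 3, 3, 5, None, 7, 9], [10, 30, 50, 70, None, 40, 45])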
gamspy.formulations.pwl_interval_formulation(input_x: Variable, x_points: Sequence[int | float], y_points: Sequence[int | float], bound_left: bool = True, bound_right: bool = True) tuple[Variable, list[Equation]][source]#

This function implements a piecewise linear function using the intervals formulation. Given an input (independent) variable input_x, along with the defining x_points and corresponding y_points of the piecewise function, it constructs the dependent variable y and formulates the equations necessary to define the function.

Here is the interval formulation:

\[
\begin{aligned}
\lambda_i &\geq b_i * LB_i \quad \forall{i}\\
\lambda_i &\leq b_i * UB_i \quad \forall{i}\\
\sum_{i}{b_i} &= 1\\
x &= \sum_{i}{\lambda_i}\\
y &= \sum_{i}{(\lambda_i * slope_i) + (b_i * offset_i)}\\
b_i &\in \{0, 1\} \quad \forall{i}
\end{aligned}
\]

The implementation handles discontinuities in the function. To represent a discontinuity at a specific point x_i, include x_i twice in the x_points array with corresponding values in y_points. For example, if x_points = [1, 3, 3, 5] and y_points = [10, 30, 50, 70], the function allows y to take either 30 or 50 when x = 3. Note that discontinuities introduce additional binary variables.

It is possible to disallow a specific range by including None in both x_points and the corresponding y_points. For example, with x_points = [1, 3, None, 5, 7] and y_points = [10, 35, None, -20, 40], the range between 3 and 5 is disallowed for input_x.

However, x_points cannot start or end with a None value, and a None value cannot be followed by another None. Additionally, if x_i is None, then y_i must also be None. Similar to the discontinuities, disallowed ranges always introduce additional binary variables.

The input variable input_x is restricted to the range defined by x_points unless bound_left or bound_right is set to False. Setting either to False introduces SOS1 type variables. When input_x is not bounded, the first and/or the last line segments are treated as if they were extended.

Returns the dependent variable y and the equations required to model the piecewise linear relationship.

Parameters:
input_x: gp.Variable

Independent variable of the piecewise linear function

x_points: typing.Sequence[int | float]

Break points of the piecewise linear function in the x-axis

y_points: typing.Sequence[int | float]

Break points of the piecewise linear function in the y-axis

bound_left: bool = True

Whether input_x should be limited to start from x_points[0]

bound_right: bool = True

Whether input_x should be limited to end at x_points[-1]

Returns:
tuple[gp.Variable, list[Equation]]

Examples

>>> from gamspy import Container, Variable, Set
>>> from gamspy.formulations import pwl_interval_formulation
>>> m = Container()
>>> x = Variable(m, "x")
>>> y, eqs = pwl_interval_formulation(x, [-1, 4, 10, 10, 20], [-2, 8, 15, 17, 37])
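
A further sketch with the right bound relaxed, as described above; input_x may then exceed x_points[-1] and the last segment is treated as extended (values chosen purely for illustration):

>>> x2 = Variable(m, "x2")
>>> y2, eqs2 = pwl_interval_formulation(x2, [0, 5, 10], [0, 10, 30], bound_right=False)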