formulations#
- class gamspy.formulations.AvgPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | tuple[int, int] | None = None, padding: int = 0)[source]#
Bases:
object
Formulation generator for 2D Avg Pooling in GAMS.
- Parameters:
- container : Container
Container that will contain the new variable and equations.
- kernel_size : int | tuple[int, int]
Filter size
- stride : int | tuple[int, int] | None
Stride of the avg pooling; equal to kernel_size if not provided
- padding : int | tuple[int, int]
Amount of padding to be added to the input, by default 0
Methods
__call__
(input)Forward pass your input, generate output and equations required for calculating the average pooling.
Examples
>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 avg pooling
>>> ap1 = gp.formulations.AvgPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = ap1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
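The output spatial size above follows the usual pooling shape arithmetic: floor((size + 2*padding - kernel) / stride) + 1 per dimension, with stride defaulting to the kernel size. A small illustrative helper (not part of gamspy; the function name is ours) that reproduces the [10, 1, 12, 12] example:

```python
import math

def pool2d_out_shape(h, w, kernel_size, stride=None, padding=0):
    """Spatial output shape of a 2D pooling layer.

    Mirrors the shape arithmetic applied to the last two domains of the
    input; illustrative only, not part of the gamspy API.
    """
    kh, kw = (kernel_size, kernel_size) if isinstance(kernel_size, int) else kernel_size
    if stride is None:
        stride = (kh, kw)  # stride defaults to the kernel size
    sh, sw = (stride, stride) if isinstance(stride, int) else stride
    ph, pw = (padding, padding) if isinstance(padding, int) else padding
    out_h = math.floor((h + 2 * ph - kh) / sh) + 1
    out_w = math.floor((w + 2 * pw - kw) / sw) + 1
    return out_h, out_w

print(pool2d_out_shape(24, 24, (2, 2)))  # (12, 12), as in the example above
```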
- __call__(input: Parameter | Variable) tuple[Variable, list[Equation]] [source]#
Forward pass your input, generating the output and the equations required for calculating the average pooling. Unlike min or max pooling, avg pooling does not require binary variables or a big-M formulation. Returns the output variable and the list of equations required for the avg pooling formulation.
- Parameters:
- input : gp.Parameter | gp.Variable
input to the avg pooling 2d layer, must be in shape (batch x in_channels x height x width)
- Returns:
- tuple[gp.Variable, list[gp.Equation]]
- class gamspy.formulations.Conv2d(container: Container, in_channels: int, out_channels: int, kernel_size: int | tuple[int, int], stride: int | tuple[int, int] = 1, padding: int | tuple[int, int] | Literal['same', 'valid'] = 0, bias: bool = True)[source]#
Bases:
object
Formulation generator for 2D Convolution symbol in GAMS. It can be used to embed convolutional layers of trained neural networks in your problem. It can also be used to embed convolutional layers when you need weights as variables.
- Parameters:
- container : Container
Container that will contain the new variable and equations.
- in_channels : int
Number of channels in the input
- out_channels : int
Number of channels in the output
- kernel_size : int | tuple[int, int]
Filter size
- stride : int | tuple[int, int]
Stride in the convolution, by default 1
- padding : int | tuple[int, int] | Literal["same", "valid"]
Specifies the amount of padding to apply to the input, by default 0. If an integer is provided, that padding is applied to both the height and width. If a tuple of two integers is given, the first value determines the padding for the top and bottom, while the second value sets the padding for the left and right. It is also possible to provide the string literals "same" and "valid". "same" pads the input so the output has the same shape as the input. "valid" is the same as no padding.
- bias : bool
Whether bias is added after the convolution, by default True
Methods
__call__
(input)Forward pass your input, generate output and equations required for calculating the convolution.
load_weights
(weight[, bias])Mark Conv2d as parameter and load weights from NumPy arrays.
make_variable
()Mark Conv2d as variable.
Examples
>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> w1 = np.random.rand(2, 1, 3, 3)
>>> b1 = np.random.rand(2)
>>> m = gp.Container()
>>> # in_channels=1, out_channels=2, kernel_size=3x3
>>> conv1 = gp.formulations.Conv2d(m, 1, 2, 3)
>>> conv1.load_weights(w1, b1)
>>> # 10 images, 1 channel, 24 by 24
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = conv1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 2, 22, 22]
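The equations Conv2d generates encode an ordinary 2D cross-correlation over the (batch x in_channels x height x width) layout. A plain-NumPy reference (illustrative only; gamspy emits symbolic equations rather than computing numbers) that reproduces the [10, 2, 22, 22] shape from the example:

```python
import numpy as np

def conv2d_reference(x, w, b=None, stride=(1, 1)):
    """Plain-NumPy 2D cross-correlation in the layout Conv2d expects:
    x: (batch, in_channels, H, W), w: (out_channels, in_channels, kh, kw).
    Illustrative reference only, not part of the gamspy API.
    """
    n, cin, h, wid = x.shape
    cout, cin_w, kh, kw = w.shape
    assert cin == cin_w, "in_channels of input and weights must match"
    sh, sw = stride
    oh, ow = (h - kh) // sh + 1, (wid - kw) // sw + 1
    out = np.zeros((n, cout, oh, ow))
    for i in range(oh):
        for j in range(ow):
            # multiply each kernel-sized patch with every output filter
            patch = x[:, :, i * sh:i * sh + kh, j * sw:j * sw + kw]
            out[:, :, i, j] = np.einsum("ncij,ocij->no", patch, w)
    if b is not None:
        out += b[None, :, None, None]  # one bias per output channel
    return out

x = np.random.rand(10, 1, 24, 24)
w = np.random.rand(2, 1, 3, 3)
print(conv2d_reference(x, w).shape)  # (10, 2, 22, 22), as in the example
```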
- __call__(input: Parameter | Variable) tuple[Variable, list[Equation]] [source]#
Forward pass your input, generate output and equations required for calculating the convolution.
- Parameters:
- input : gp.Parameter | gp.Variable
input to the conv layer, must be in shape (batch x in_channels x height x width)
- load_weights(weight: ndarray, bias: ndarray | None = None) None [source]#
Mark Conv2d as parameter and load weights from NumPy arrays. After this is called, make_variable cannot be called. Use this when you already have the weights of your convolutional layer.
- Parameters:
- weight : np.ndarray
Conv2d layer weights in shape (out_channels x in_channels x kernel_size[0] x kernel_size[1])
- bias : np.ndarray | None
Conv2d layer bias in shape (out_channels, ), only required when bias=True during initialization
- make_variable() None [source]#
Mark Conv2d as variable. After this is called, load_weights cannot be called. Use this when you need to learn the weights of your convolutional layer in your optimization model.
This does not initialize the weights; it is highly recommended that you set initial values for the weight and bias variables.
- class gamspy.formulations.Linear(container: Container, in_features: int, out_features: int, bias: bool = True)[source]#
Bases:
object
Formulation generator for Linear layer in GAMS.
- Parameters:
- container : Container
Container that will contain the new variable and equations.
- in_features : int
Input feature size
- out_features : int
Output feature size
- bias : bool
Whether bias is added after the linear transformation, by default True
Methods
__call__
(input)Forward pass your input, generate output and equations required for calculating the linear transformation.
load_weights
(weight[, bias])Mark Linear as parameter and load weights from NumPy arrays.
make_variable
()Mark Linear layer as variable.
Examples
>>> import gamspy as gp
>>> import numpy as np
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> l1 = gp.formulations.Linear(m, 128, 64)
>>> w = np.random.rand(64, 128)
>>> b = np.random.rand(64)
>>> l1.load_weights(w, b)
>>> x = gp.Variable(m, "x", domain=dim([10, 128]))
>>> y, set_y = l1(x)
>>> [d.name for d in y.domain]
['DenseDim10_1', 'DenseDim64_1']
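Numerically, the generated equations correspond to the affine map y = x @ W^T + b, assuming the weight convention stated below (W shaped out_features x in_features). A NumPy sketch of that correspondence, matching the shapes in the example:

```python
import numpy as np

# Illustrative numeric counterpart of the Linear formulation
# (assumed convention: W is (out_features, in_features)).
x = np.random.rand(10, 128)   # batch of 10, in_features=128
W = np.random.rand(64, 128)   # (out_features, in_features)
b = np.random.rand(64)        # one bias per output feature
y = x @ W.T + b
print(y.shape)  # (10, 64), matching ['DenseDim10_1', 'DenseDim64_1']
```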
- __call__(input: Parameter | Variable) tuple[Variable, list[Equation]] [source]#
Forward pass your input, generate output and equations required for calculating the linear transformation.
- Parameters:
- input : gp.Parameter | gp.Variable
input to the linear layer, must be in shape (* x in_features)
- load_weights(weight: ndarray, bias: ndarray | None = None) None [source]#
Mark Linear as parameter and load weights from NumPy arrays. After this is called, make_variable cannot be called. Use this when you already have the weights of your Linear layer.
- Parameters:
- weight : np.ndarray
Linear layer weights in shape (out_features x in_features)
- bias : np.ndarray | None
Linear layer bias in shape (out_features, ), only required when bias=True during initialization
- make_variable() None [source]#
Mark Linear layer as variable. After this is called, load_weights cannot be called. Use this when you need to learn the weights of your linear layer in your optimization model.
This does not initialize the weights; it is highly recommended that you set initial values for the weight and bias variables.
- class gamspy.formulations.MaxPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | None = None, padding: int = 0)[source]#
Bases:
_MPool2d
Formulation generator for 2D Max Pooling in GAMS.
- Parameters:
- container : Container
Container that will contain the new variable and equations.
- kernel_size : int | tuple[int, int]
Filter size
- stride : int | tuple[int, int] | None
Stride of the max pooling; equal to kernel_size if not provided
- padding : int | tuple[int, int]
Amount of padding to be added to the input, by default 0
Methods
__call__
(input[, big_m])Forward pass your input, generate output and equations required for calculating the max pooling.
Examples
>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 max pooling
>>> mp1 = gp.formulations.MaxPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = mp1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
- __call__(input: Parameter | Variable, big_m: int = 1000) tuple[Variable, list[Equation]] [source]#
Forward pass your input, generate output and equations required for calculating the max pooling. Returns the output variable and the list of equations required for the max pooling formulation.
- Parameters:
- input : gp.Parameter | gp.Variable
input to the max pooling 2d layer, must be in shape (batch x in_channels x height x width)
- big_m : int
Big M value that is required for the pooling operation. Default value: 1000.
- Returns:
- tuple[gp.Variable, list[gp.Equation]]
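One standard big-M encoding of a per-window maximum (the exact equations gamspy emits may differ) uses a binary b_i per window cell: y >= x_i for all i, y <= x_i + M * (1 - b_i), and sum(b_i) = 1, so the selected cell's upper bound pins y to the maximum. A small checker sketching why those constraints force y = max(window), assuming big_m exceeds the spread of the values:

```python
def check_big_m_max(window, big_m=1000):
    """Verify that y = max(window) satisfies a standard big-M max encoding.

    Illustrative sketch only; not the equations gamspy generates.
    Assumes big_m is larger than the spread of the window values.
    """
    y = max(window)
    k = window.index(y)
    b = [int(i == k) for i in range(len(window))]  # binary selects the max cell
    lower_ok = all(y >= x for x in window)                    # y >= x_i
    upper_ok = all(y <= x + big_m * (1 - bi)                  # y <= x_i + M(1-b_i)
                   for x, bi in zip(window, b))
    return lower_ok and upper_ok and sum(b) == 1

print(check_big_m_max([3.0, 7.5, -2.0, 7.5]))  # True
```

A big_m that is too small can cut off the true maximum, while a very large one weakens the LP relaxation, which is why the default of 1000 is worth tuning to your variable bounds.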
- class gamspy.formulations.MinPool2d(container: Container, kernel_size: int | tuple[int, int], stride: int | None = None, padding: int = 0)[source]#
Bases:
_MPool2d
Formulation generator for 2D Min Pooling in GAMS.
- Parameters:
- container : Container
Container that will contain the new variable and equations.
- kernel_size : int | tuple[int, int]
Filter size
- stride : int | tuple[int, int] | None
Stride of the min pooling; equal to kernel_size if not provided
- padding : int | tuple[int, int]
Amount of padding to be added to the input, by default 0
Methods
__call__
(input[, big_m])Forward pass your input, generate output and equations required for calculating the min pooling.
Examples
>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> # 2x2 min pooling
>>> mp1 = gp.formulations.MinPool2d(m, (2, 2))
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = mp1(inp)
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 12, 12]
- __call__(input: Parameter | Variable, big_m: int = 1000) tuple[Variable, list[Equation]] [source]#
Forward pass your input, generate output and equations required for calculating the min pooling. Returns the output variable and the list of equations required for the min pooling formulation.
- Parameters:
- input : gp.Parameter | gp.Variable
input to the min pooling 2d layer, must be in shape (batch x in_channels x height x width)
- big_m : int
Big M value that is required for the pooling operation. Default value: 1000.
- Returns:
- tuple[gp.Variable, list[gp.Equation]]
- gamspy.formulations.flatten_dims(x: Variable | Parameter, dims: list[int]) tuple[Parameter | Variable, list[Equation]] [source]#
Flatten domains indicated by dims into a single domain.
- Parameters:
- x : gp.Variable | gp.Parameter
Input to be flattened
- dims : list[int]
List of integers indicating indices of the domains to be flattened. Must be consecutive indices.
Examples
>>> import gamspy as gp
>>> from gamspy.math import dim
>>> m = gp.Container()
>>> inp = gp.Variable(m, domain=dim((10, 1, 24, 24)))
>>> out, eqs = gp.formulations.flatten_dims(inp, [2, 3])
>>> type(out)
<class 'gamspy._symbols.variable.Variable'>
>>> [len(x) for x in out.domain]
[10, 1, 576]
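The numeric analogue of flattening domains 2 and 3 is a NumPy reshape that collapses those consecutive axes into one (illustrative only; gamspy relabels domains and emits matching equations rather than moving data):

```python
import numpy as np

# Collapse consecutive axes 2 and 3 into a single axis,
# mirroring flatten_dims(inp, [2, 3]) on a (10, 1, 24, 24) input.
a = np.arange(10 * 1 * 24 * 24).reshape(10, 1, 24, 24)
flat = a.reshape(10, 1, 24 * 24)
print(flat.shape)  # (10, 1, 576), matching the example's output domain
```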