Layers

Each layer in the neural net corresponds to a struct that simultaneously specifies: 1) the operation being carried out in the layer (recorded in the type of the struct) and 2) the parameters for the operation (recorded in the values of the fields of the struct).

When we pass an input array of real numbers to a layer struct, we get an output array of real numbers that is the result of the layer operating on the input.

Conversely, when we pass an input array of JuMP variables, we get an output array of JuMP variables, with the appropriate mixed-integer constraints (as determined by the layer) imposed between the input and output.
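
A minimal sketch of this duality, using a small fully-connected layer (the sizes and the unattached JuMP model are purely illustrative):

using JuMP, MIPVerify

# Weight matrix of size (input_size, output_size) and bias of length output_size.
layer = Linear(ones(3, 2), zeros(2))

# Real input: the layer evaluates numerically.
y_numeric = layer([1.0, 2.0, 3.0])    # 2-element array of real numbers

# JuMP input: the same layer returns affine expressions in the variables,
# with mixed-integer constraints added for layers (such as ReLU) that need them.
m = Model()
@variable(m, x[1:3])
y_symbolic = layer(x)                 # 2-element array of affine expressions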

Public Interface

MIPVerify.Conv2d – Type
struct Conv2d{T<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, U<:Union{Real, JuMP.VariableRef, JuMP.AffExpr}, V<:Integer} <: Layer

Represents 2-D convolution operation.

p(x) is shorthand for conv2d(x, p) when p is an instance of Conv2d.

Fields:

  • filter

  • bias

  • stride

  • padding
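
A construction sketch (filter and input sizes are illustrative; as with tf.nn.conv2d, the input is in (batch, height, width, channels) order):

using MIPVerify

# 3x3 filter mapping 1 input channel to 16 output channels,
# with one bias per output channel and a stride of 1.
conv_layer = Conv2d(rand(3, 3, 1, 16), rand(16), 1)

# The output has 16 channels, with spatial size determined by the
# stride and padding.
output = conv_layer(rand(1, 28, 28, 1))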

source
MIPVerify.Linear – Type
struct Linear{T<:Real, U<:Real} <: Layer

Represents matrix multiplication.

p(x) is shorthand for matmul(x, p) when p is an instance of Linear.

Fields:

  • matrix

  • bias

source
MIPVerify.MaskedReLU – Type
struct MaskedReLU{T<:Real} <: Layer

Represents a masked ReLU activation, with mask controlling how the ReLU is applied to each output.

p(x) is shorthand for masked_relu(x, p.mask) when p is an instance of MaskedReLU.

Fields:

  • mask

  • tightening_algorithm
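
A construction sketch (the mask values are illustrative; roughly, negative entries fix the corresponding output to zero, positive entries pass the input through, and zero entries apply an ordinary ReLU; see masked_relu for the exact convention):

using MIPVerify

# One mask entry per unit, typically derived from precomputed activation bounds.
masked_layer = MaskedReLU([-1.0, 0.0, 1.0])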

source
MIPVerify.Pool – Type
struct Pool{N} <: Layer

Represents a pooling operation.

p(x) is shorthand for pool(x, p) when p is an instance of Pool.

Fields:

  • strides

  • pooling_function
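
A sketch of a 2x2 max-pooling layer (the MaxPool convenience constructor and the array sizes are illustrative assumptions):

using MIPVerify

# Pool over non-overlapping 2x2 windows in the two spatial dimensions,
# leaving the batch and channel dimensions untouched.
pool_layer = MaxPool((1, 2, 2, 1))

output = pool_layer(rand(1, 4, 4, 1))   # size (1, 2, 2, 1)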

source
MIPVerify.ReLU – Type
struct ReLU <: Layer

Represents a ReLU operation.

p(x) is shorthand for relu(x) when p is an instance of ReLU.
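
For real-valued input this is simply the elementwise rectifier (a minimal sketch):

using MIPVerify

relu_layer = ReLU()
relu_layer([-1.5, 0.0, 2.0])   # [0.0, 0.0, 2.0]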

source
MIPVerify.SkipBlock – Type
struct SkipBlock <: Layer

A layer that implements a skip connection pattern, commonly used in residual networks (ResNets).

A SkipBlock takes multiple input tensors and applies a corresponding transformation layer to each input. The outputs of these transformations are then combined through element-wise addition.

When used within a SkipSequential, a SkipBlock processes inputs from the most recent layers in the network. If the block has n layers, it will take the outputs from the last n layers as input, in order. For example, with a 2-layer SkipBlock:

  • The second layer processes the output from the immediately preceding layer
  • The first layer processes the output from two layers back

Typically used in conjunction with SkipSequential to create networks with skip connections, where the outputs from earlier layers can bypass intermediate layers and be combined with later layer outputs.

Example:

skip_block = SkipBlock([
    Linear(input_size_1, output_size),  # Transform from one path
    Linear(input_size_2, output_size)   # Transform from another path
])

Fields:

  • layers
source

Internal

MIPVerify.conv2d – Method
conv2d(input, params)

Computes the result of convolving input with the filter and bias stored in params.

Mirrors tf.nn.conv2d from the tensorflow package, with strides = [1, params.stride, params.stride, 1].

Supports three types of padding:

  • 'same': Specify via SamePadding(). Padding is added so that the output has the same size as the input.
  • 'valid': Specify via ValidPadding(). No padding is added.
  • 'fixed': Specify via:
    • A single integer, interpreted as padding for both axes
    • A tuple of two integers, interpreted as (ypadding, xpadding)
    • A tuple of four integers, interpreted as (top, bottom, left, right)
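
The padding variants above might be specified as follows (a sketch; the four-argument constructor and the filter/bias sizes are illustrative):

using MIPVerify

filter = rand(3, 3, 1, 4)
bias = rand(4)

Conv2d(filter, bias, 1, SamePadding())      # 'same' padding
Conv2d(filter, bias, 1, ValidPadding())     # 'valid' padding
Conv2d(filter, bias, 1, 1)                  # fixed: pad both axes by 1
Conv2d(filter, bias, 1, (1, 2))             # fixed: (ypadding, xpadding)
Conv2d(filter, bias, 1, (1, 1, 2, 2))       # fixed: (top, bottom, left, right)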

Throws

  • AssertionError if input and filter are not compatible.
source
MIPVerify.matmul – Method
matmul(x, params)

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias.
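
A small numeric check of this convention (sizes are illustrative):

using MIPVerify

W = [1.0 2.0; 3.0 4.0; 5.0 6.0]   # size (3, 2): 3 inputs, 2 outputs
b = [10.0, 20.0]
x = [1.0, 1.0, 1.0]

# true: the result pre-multiplies x by transpose(W) and adds b
MIPVerify.matmul(x, Linear(W, b)) == transpose(W) * x + b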

source
MIPVerify.matmul – Method
matmul(x, params)

Computes the result of pre-multiplying x by the transpose of params.matrix and adding params.bias. We write the computation out by hand when working with JuMPLinearType so that we are able to simplify the output as the computation is carried out.

source
MIPVerify.getoutputsize – Method
getoutputsize(input_array, strides)

For pooling operations on an array, returns the expected size of the output array.
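
For example (a sketch; exact return container aside, each dimension of the input is divided by the corresponding stride):

using MIPVerify

# Pooling a (1, 4, 4, 2) array with strides (1, 2, 2, 1) halves the two
# spatial dimensions and leaves the batch and channel dimensions unchanged,
# giving an output size of (1, 2, 2, 2).
MIPVerify.getoutputsize(rand(1, 4, 4, 2), (1, 2, 2, 1))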

source
MIPVerify.getpoolview – Method
getpoolview(input_array, strides, output_index)

For pooling operations on an array, returns a view of the parent array corresponding to the output_index in the output array.

source
MIPVerify.getsliceindex – Method
getsliceindex(input_array_size, stride, output_index)

For pooling operations on an array where a given element in the output array corresponds to equal-sized blocks in the input array, returns (for a given dimension) the index range in the input array corresponding to a particular index output_index in the output array.

Returns an empty array if the output_index does not correspond to any input indices.

Arguments

  • stride::Integer: the size of the operating blocks along the active dimension.
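
For example, for a dimension of size 5 pooled with stride 2 (illustrative; only the index ranges, not the exact return container, are the point here):

using MIPVerify

MIPVerify.getsliceindex(5, 2, 1)   # covers input indices 1 and 2
MIPVerify.getsliceindex(5, 2, 3)   # covers only input index 5 (the last block is truncated)
MIPVerify.getsliceindex(5, 2, 4)   # no corresponding input indices; empty
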
source
MIPVerify.pool – Method
pool(input, params)

Computes the result of applying the pooling function params.pooling_function to non-overlapping cells of input with sizes specified in params.strides.

source
MIPVerify.poolmap – Method
poolmap(f, input_array, strides)

Returns output from applying f to subarrays of input_array, with the windows determined by the strides.

source