#### Machine Learning (ML)


SAME and VALID padding are two common padding conventions used in CNN (Convolutional Neural Network) models. These padding types are mainly used in TensorFlow.

The general meaning of SAME and VALID padding is:

• SAME: Pads the input so that the output has (roughly) the same size as the input (provided stride = 1); the exact padding amount depends on the framework (see below)
• VALID: No padding is applied (all pad values are set to 0)

Note: SAME padding is defined differently in different frameworks. Below, we explain the meaning/calculation of SAME padding in Keras and in TensorFlow.

For SAME padding in Keras, the following will be the output size:

• Output height = (Input height) / (stride height) + 1
• Output width = (Input width) / (stride width) + 1

The actual formula for the output size is as follows:

• Output height = (Input height + padding height top + padding height bottom - kernel height) / (stride height) + 1
• Output width = (Input width + padding width left + padding width right - kernel width) / (stride width) + 1

The above formula has been taken from the article on Output size for Convolution.

SAME padding (total of both sides) = kernel size

If the kernel size is odd, then the padding on the top (in height) and on the left (in width) gets the extra value.

In short,

• Padding height (top) = CEILING(kernel height / 2)
• Padding height (bottom) = FLOOR(kernel height / 2)
• Padding width (left) = CEILING(kernel width / 2)
• Padding width (right) = FLOOR(kernel width / 2)
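The split above can be sketched in plain Python (a minimal sketch; the helper name is illustrative, not a framework API):

```python
import math

def same_pad_split(kernel):
    # Top/left gets CEILING(k / 2), bottom/right gets FLOOR(k / 2),
    # so the two sides always sum to the kernel size
    beg = math.ceil(kernel / 2)
    end = kernel // 2
    return beg, end

print(same_pad_split(3))  # (2, 1)
print(same_pad_split(4))  # (2, 2)
```

For an odd kernel the first (top/left) side gets the extra value, matching the convention described above.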

Note that:

• CEILING(N/2) + FLOOR(N/2) = N
• There is no general SAME padding in PyTorch (recent versions accept padding='same' in nn.Conv2d, but only for stride = 1).
• SAME padding is mainly defined in TensorFlow and Keras.
• The meaning of SAME padding in TensorFlow differs from the Keras convention described above.
• To convert a TensorFlow model to a PyTorch model, we need to handle SAME padding in TensorFlow as explicit padding in PyTorch.

The following illustrates the process in PyTorch (a minimal sketch for stride = 1; the sizes and channel counts are example values):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 28, 28)  # (batch, channels, height, width)
k = 3  # kernel size
pad_total = k - 1  # TensorFlow SAME total padding for stride = 1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg  # extra value goes to bottom/right
x = F.pad(x, (pad_beg, pad_end, pad_beg, pad_end))  # (left, right, top, bottom)
print(nn.Conv2d(3, 8, (k, k))(x).shape)  # torch.Size([1, 8, 28, 28])


For SAME padding in TensorFlow, the following will be the output size:

• Output height = CEILING(Input height / stride height)
• Output width = CEILING(Input width / stride width)

With stride = 1, the output size equals the input size.

The actual formula for the output size is as follows:

• Output height = (Input height + padding height top + padding height bottom - kernel height) / (stride height) + 1
• Output width = (Input width + padding width left + padding width right - kernel width) / (stride width) + 1

The above formula has been taken from the article on Output size for Convolution.

SAME padding (total of both sides) = MAX((Output height - 1) × stride + kernel size - Input height, 0), where Output height = CEILING(Input height / stride). For stride = 1, this reduces to kernel size - 1.

If the total padding is odd, then in TensorFlow the bottom (in height) and the right (in width) get the extra value.
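The TensorFlow calculation can be sketched in plain Python (an illustrative helper, not a TensorFlow API):

```python
import math

def tf_same_pad(in_size, kernel, stride):
    # TensorFlow SAME: output size = CEILING(input / stride)
    out_size = math.ceil(in_size / stride)
    # Total padding needed to produce that output size (never negative)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_beg = pad_total // 2        # top / left
    pad_end = pad_total - pad_beg   # bottom / right gets the extra value
    return pad_beg, pad_end

print(tf_same_pad(28, 3, 1))  # (1, 1)
print(tf_same_pad(28, 3, 2))  # (0, 1)
```

Note how for stride 2 only one side is padded: the extra value goes to the end (bottom/right).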

Hence, note the difference between SAME padding in Keras and in TensorFlow.

More generally, the definition of SAME padding differs across frameworks like TensorFlow, Keras and Theano.

For VALID padding, the following will be the output size:

• Output height = FLOOR((Input height - kernel height) / stride height) + 1
• Output width = FLOOR((Input width - kernel width) / stride width) + 1

The actual formula for the output size is as follows:

• Output height = (Input height + padding height top + padding height bottom - kernel height) / (stride height) + 1
• Output width = (Input width + padding width left + padding width right - kernel width) / (stride width) + 1

The above formula has been taken from the article on Output size for Convolution.

VALID padding (both sides) = 0
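The VALID output size can be checked with the formula above (a minimal sketch; the sizes are example values):

```python
def valid_out(in_size, kernel, stride):
    # No padding: windows that do not fit are simply dropped (FLOOR)
    return (in_size - kernel) // stride + 1

print(valid_out(28, 3, 1))  # 26
print(valid_out(28, 3, 2))  # 13
```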

tf.nn.conv2d(
input, filters, strides, padding, data_format='NHWC', dilations=None,
name=None
)


The padding attribute takes 3 different values:

• VALID
• SAME
• EXPLICIT

With VALID padding in TensorFlow, TF sets the pad values to 0 and does not perform any padding.

With SAME padding in TensorFlow, the exact pad values are calculated based on the formula presented above.

When EXPLICIT padding is used in TensorFlow, we need to specify the four pad values ourselves, and the op (Conv2D, for example) will use these values directly. (In the Python API, this is done by passing a list instead of a string to the padding argument.)

The pad values are passed as follows (for data_format='NHWC'):

[[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]
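With explicit padding, the general output-size formula applies directly. For example, explicit pads of [[0, 0], [1, 1], [1, 1], [0, 0]] with a 3×3 kernel and stride 1 reproduce SAME behaviour on a 28×28 input (a minimal pure-Python check, not a TensorFlow API):

```python
def conv_out(in_size, kernel, stride, pad_beg, pad_end):
    # General formula: (input + pad_beg + pad_end - kernel) / stride + 1
    return (in_size + pad_beg + pad_end - kernel) // stride + 1

print(conv_out(28, 3, 1, 1, 1))  # 28 -- matches SAME padding
print(conv_out(28, 3, 1, 0, 0))  # 26 -- matches VALID padding
```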


#### Ue Kiao, PhD

Ue Kiao is a Technical Author and Software Developer with a B.Sc. in Computer Science from National Taiwan University and a PhD in Algorithms from Tokyo Institute of Technology, and a Researcher at TaoBao.
