
VGG54 and VGG22 are perceptual loss metrics that compare a super-resolved image with its high-resolution reference by measuring the distance between feature maps generated by the VGG19 neural network model.

These metrics were first introduced in the paper "**Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network**" by Christian Ledig and colleagues at Twitter, published in 2017.

## VGG54

VGG54 is defined as the loss equal to the Euclidean distance between the φ_{5,4} feature maps of the super-resolved image and of the high-resolution reference image, where the feature maps are extracted from a VGG19 network pre-trained on ImageNet.

φ_{i,j} is defined as the feature map obtained by the j^{th} convolution (after activation) before the i^{th} max-pooling layer within the VGG19 network.
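In the paper, the VGG loss for indices (i, j) is the mean squared Euclidean distance between these feature maps, where W_{i,j} and H_{i,j} are the dimensions of the feature map, I^{HR} is the high-resolution reference image, and G_θ(I^{LR}) is the image produced by the generator from the low-resolution input:

```latex
l_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j}}
\sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}
\left( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}(G_{\theta}(I^{LR}))_{x,y} \right)^2
```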

So, VGG54 is the Euclidean distance between the feature maps produced by the 4^{th} convolution before the 5^{th} max-pooling layer of the VGG19 network.

## VGG22

Similar to VGG54, VGG22 is defined as the loss equal to the Euclidean distance between the φ_{2,2} feature maps of the super-resolved image and of the high-resolution reference image, extracted from the same pre-trained VGG19 network.

So, VGG22 is the Euclidean distance between the feature maps produced by the 2^{nd} convolution before the 2^{nd} max-pooling layer of the VGG19 network.

With this article at OpenGenus, you must have the complete idea of VGG54 and VGG22 metrics.