VGG54 and VGG22

VGG54 and VGG22 are loss metrics that compare a super-resolved image with its high-resolution reference by considering the feature maps generated by the VGG19 neural network model.

These losses were first introduced in the paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Christian Ledig et al. from Twitter, published in 2017.

VGG54

VGG54 is defined as the loss equal to the Euclidean distance between the φ5,4 feature maps of the high-resolution reference image and of the super-resolved image produced by the SRGAN generator, where the feature maps come from the pre-trained VGG19 network.

φi,j is defined as the feature map obtained after the jth convolution (after activation) and before the ith max-pooling layer within the VGG19 network.
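In the paper, the corresponding VGG loss for indices i, j is written as the mean squared difference between these feature maps for the high-resolution reference image I^HR and the super-resolved image produced by the generator G from the low-resolution input I^LR. In LaTeX notation:

l^{SR}_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \Big( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}\big(G_{\theta_G}(I^{LR})\big)_{x,y} \Big)^2

where W_{i,j} and H_{i,j} are the width and height of the feature maps φi,j.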

So, VGG54 is the Euclidean distance between the feature maps obtained after the 4th convolution and before the 5th max-pooling layer in the VGG19 network.
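As a minimal sketch of how such a loss can be computed in practice, the snippet below assumes PyTorch with a recent torchvision (0.13+). The class name VGGFeatureLoss is an illustrative choice, not from the paper, and the layer indices follow torchvision's ordering of vgg19().features; verify them against your torchvision version.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights


class VGGFeatureLoss(nn.Module):
    """Euclidean (MSE) distance between VGG19 feature maps of two images.

    With torchvision's layer ordering of vgg19().features:
      layer_index=36 -> phi_{5,4} (VGG54): output of the 4th conv (after ReLU)
                        before the 5th max-pooling layer.
      layer_index=9  -> phi_{2,2} (VGG22): output of the 2nd conv (after ReLU)
                        before the 2nd max-pooling layer.
    """

    def __init__(self, layer_index: int = 36):
        super().__init__()
        # Truncate the ImageNet pre-trained VGG19 feature extractor at the chosen layer.
        self.features = vgg19(weights=VGG19_Weights.DEFAULT).features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # VGG19 stays fixed; only the SR generator is trained
        self.mse = nn.MSELoss()

    def forward(self, sr_image: torch.Tensor, hr_image: torch.Tensor) -> torch.Tensor:
        # sr_image: output of the super-resolution generator, shape (N, 3, H, W)
        # hr_image: ground-truth high-resolution image, same shape and value range
        return self.mse(self.features(sr_image), self.features(hr_image))
```

Note that nn.MSELoss averages over all elements of the feature maps (including channels), which matches the paper's formulation up to a constant scaling factor.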

VGG22

Similar to VGG54, VGG22 is defined as the loss equal to the Euclidean distance between the φ2,2 feature maps of the high-resolution reference image and of the super-resolved image produced by the SRGAN generator.

So, VGG22 is the Euclidean distance between the feature maps obtained after the 2nd convolution and before the 2nd max-pooling layer in the VGG19 network.
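Reusing the hypothetical VGGFeatureLoss class sketched above (and its imports), the two metrics differ only in where the VGG19 network is truncated:

```python
# VGG54: truncate after the 4th conv + ReLU of the 5th block (torchvision index 35)
vgg54 = VGGFeatureLoss(layer_index=36)

# VGG22: truncate after the 2nd conv + ReLU of the 2nd block (torchvision index 8)
vgg22 = VGGFeatureLoss(layer_index=9)

sr_batch = torch.rand(1, 3, 96, 96)  # stand-in for the generator output
hr_batch = torch.rand(1, 3, 96, 96)  # stand-in for the HR reference
print(vgg54(sr_batch, hr_batch).item(), vgg22(sr_batch, hr_batch).item())
```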

With this article at OpenGenus, you must have the complete idea of the VGG54 and VGG22 loss metrics.
