VGG54 and VGG22


VGG54 and VGG22 are perceptual loss metrics that compare a super-resolved image against its high-resolution reference by measuring the distance between feature maps produced by the VGG19 neural network.

These metrics were first introduced in the paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Christian Ledig and colleagues at Twitter, published in 2017.

VGG54

VGG54 is defined as the loss equal to the Euclidean distance between the ϕ5,4 feature maps of the super-resolved image and of the high-resolution reference image, where the feature maps are extracted with the pre-trained VGG19 network used in SRGAN.

ϕi,j is defined as the feature map obtained after the jth convolution (after activation) and before the ith MaxPool layer in the VGG19 network.
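In the paper's notation, the loss for a given i,j is the normalized squared Euclidean distance between the two feature maps, where W and H are the feature map dimensions, I^HR is the high-resolution reference and G(I^LR) is the super-resolved output of the generator:

```latex
l^{SR}_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j}}
\sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}
\left( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}(G_{\theta_G}(I^{LR}))_{x,y} \right)^2
```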

So, VGG54 is the Euclidean distance between the feature maps taken after the 4th convolution and before the 5th MaxPool in the VGG19 network.

VGG22

Similar to VGG54, VGG22 is defined as the loss equal to the Euclidean distance between the ϕ2,2 feature maps of the super-resolved image and of the high-resolution reference image, extracted with the same pre-trained VGG19 network.

So, VGG22 is the Euclidean distance between the feature maps taken after the 2nd convolution and before the 2nd MaxPool in the VGG19 network.

With this article at OpenGenus, you must have the complete idea of VGG54 and VGG22 metrics.

Jonathan Buss

Associate Professor at University of Waterloo | BSc in Computing from California Institute of Technology, PhD from Massachusetts Institute of Technology (MIT)

Improved & Reviewed by: OpenGenus Foundation