Many breakthroughs in the speed and accuracy of single image super-resolution (SISR) have been achieved. One of the biggest remaining challenges is recovering finer texture details when super-resolution is applied at large upscaling factors. A typical solution to SISR uses a convolutional neural network (CNN); however, new approaches based on generative adversarial networks (GANs) are now also popular. The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. In this work, we present an evaluation of SRResNet and SRGAN. SRResNet is a deep residual network and SRGAN is a generative adversarial network for image super-resolution (SR). SRResNet is able to recover reasonable-quality, photo-realistic textures from heavily downsampled images. SRGAN is capable of inferring photo-realistic natural images at high upscaling factors. This is achieved using a perceptual loss function that consists of an adversarial loss and a content loss. The adversarial loss pushes the solution toward the natural image manifold using a discriminator network trained to differentiate between super-resolved images and original photo-realistic images. The content loss is motivated by perceptual similarity instead of similarity in pixel space.
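To make the structure of this perceptual loss concrete, the sketch below shows one plausible PyTorch formulation (not the authors' reference implementation): the content loss is a mean-squared error computed over feature maps from a fixed, pre-trained VGG19 network rather than over raw pixels, and the adversarial loss rewards the generator for producing images the discriminator labels as real. The particular VGG layer, the adversarial weighting, and the class and argument names are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights


class PerceptualLoss(nn.Module):
    """Perceptual loss = content loss (VGG feature-space MSE) + weighted adversarial loss.

    This is a minimal sketch; layer choice and weighting are assumptions, not
    the paper's exact configuration.
    """

    def __init__(self, adv_weight: float = 1e-3):
        super().__init__()
        # Fixed, pre-trained VGG19 truncated at an intermediate layer acts as the
        # feature extractor for the content loss; its weights are never updated.
        vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.adv_weight = adv_weight

    def forward(self, sr, hr, disc_logits_on_sr):
        # Content loss: distance between VGG feature maps of the super-resolved
        # image (sr) and the ground-truth high-resolution image (hr), i.e.
        # perceptual similarity instead of pixel-space similarity.
        content = self.mse(self.vgg(sr), self.vgg(hr))
        # Adversarial loss: encourage the generator to produce images that the
        # discriminator classifies as real (target label 1).
        adversarial = self.bce(disc_logits_on_sr,
                               torch.ones_like(disc_logits_on_sr))
        return content + self.adv_weight * adversarial
```

In this framing, the discriminator is trained separately with the usual real-versus-fake objective; only the generator's update uses the combined loss above.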
Author: Sebastien Strban