1 Introduction

The compact camera sensors found in low-end devices such as mobile phones have come a long way in the past few years. Given adequate lighting conditions, they are able to reproduce unprecedented levels of detail and color. Yet despite their ubiquity, accounting for the vast majority of photographs taken worldwide, they still fall short of DSLR cameras in image quality. These professional-grade instruments have many advantages, including better color reproduction, less noise thanks to larger sensor sizes, and better automatic tuning of shooting parameters.

Furthermore, many photographs were taken in the past decade using significantly inferior hardware, for example with early digital cameras or early 2010s smartphones. These do not hold up well to our contemporary tastes and are limited in artistic quality by their technical shortcomings.

The previous work by Ignatov et al. [8] that this paper builds upon proposes a neural-network-powered solution to the aforementioned problems. They use a dataset composed of image patches from various outdoor scenes captured simultaneously by cell phone cameras and a DSLR. They pose an image translation problem: the low-quality phone image is fed into a residual convolutional neural network (CNN) that generates an output image which, once the network is trained, should be perceptually close to the high-quality DSLR target image.

In this work, we take a closer look at the problem of translating poor-quality photographs from an iPhone 3GS into high-quality, DSLR-like photos, since this is the most dramatic increase in quality attempted by Ignatov et al. [8]. The computational requirements of this baseline model, however, are quite high (20 s on a high-end CPU and 3.7 GB of RAM for an HD-resolution image). Using a modified generator architecture, we propose a way to decrease this cost while maintaining or improving the resulting image quality.

2 Related Work

A considerable body of work is dedicated to automatic photo enhancement. However, it has traditionally focused on a single specific subproblem, such as super-resolution, denoising, deblurring, or colorization. All of these subproblems are tackled simultaneously when we generate plausible high-quality photos from low-end ones. Furthermore, these older works are commonly trained with artifacts that have been artificially applied to the target image dataset. Recreating and simulating all the flaws of one camera given a picture from another is close to impossible; therefore, to achieve real-world photo enhancement, we use the photos simultaneously captured by the capture rig of Ignatov et al. [8]. Despite their limitations, the related works contain many useful ideas, which we briefly review in this section.

Image super-resolution is the task of increasing the resolution of an image; models are usually trained with down-scaled versions of the target images as inputs. Many prior works tackle this problem with CNNs of progressively larger and more complex nature [4, 14, 18, 20, 22, 23]. Initially, a simple pixel-wise mean squared error (MSE) loss was often used to guarantee high fidelity of the reconstructed images, but it frequently led to blurry results due to uncertainty in pixel intensity space. Recent works [2] aim at perceptual quality and employ losses based on VGG layers [12], as well as generative adversarial networks (GANs) [5, 15], which seem to be well suited to generating plausible-looking, realistic high-frequency details.

In image colorization, the aim is to hallucinate color for each pixel, given only its luminosity. Models are trained on images whose color has been artificially removed. Isola et al. [11] achieve state-of-the-art performance using a GAN to solve the more general problem of image-to-image translation.

Image deblurring and dehazing aim to remove optical distortions from photos that were taken out of focus, while the camera was moving, or of faraway geographical or astronomical features. The neural models employed are CNNs, typically trained on images with artificially added blur or haze, using an MSE loss function [3, 7, 16, 17, 19]. Recently, datasets with both hazy and haze-free images were introduced [1], and solutions such as that of Ki et al. [13] were proposed, which use a GAN in addition to L1 and perceptual losses. Similar techniques are effective for image denoising as well [21, 24, 25, 27].

2.1 General Purpose Image-to-Image Translation and Enhancement

The use of GANs has progressed towards the development of general-purpose image-to-image translation. Isola et al. [11] propose a conditional GAN architecture for paired data, in which the discriminator is conditioned on the input image. Zhu et al. [28] relax this requirement, introducing the cycle consistency loss, which allows the GAN to train on unpaired data. These two approaches work on many surprising datasets; however, the resulting image quality is too low for our purpose of photo-realistic image enhancement. This is why Ignatov et al. introduce paired [8] and unpaired [9] GAN architectures that are specially designed for this purpose.

2.2 Dataset

The DPED dataset [8] consists of photos taken simultaneously by three different cell phone cameras and a Canon 70D DSLR camera. The photographs are aligned and cut into 100 \(\times \) 100 pixel patches, which are then compared so that patches differing too much are rejected. In this work, only the iPhone 3GS data is considered, which results in 160k pairs of image patches.

2.3 Baseline

As a baseline, the residual network with 4 blocks and 64 channels from Ignatov et al. [8] is used.

Since using a simple pixel-wise distance metric does not yield the intended perceptual quality results, the output of the network is evaluated using four carefully designed loss functions.

The generated image is compared to the target high-quality DSLR image using the color loss and the content loss. We use the same four losses and training setup as the baseline in this work (Fig. 1).

Fig. 1. The overall architecture of the DPED baseline [8].

Color Loss. The color loss is computed by applying a Gaussian blur to both the source and target images, followed by an MSE function. Let X and Y be the original images; then \(X _ { b }\) and \(Y _ { b }\) are their blurred versions, obtained as

$$\begin{aligned} X _ { b } ( i , j ) = \sum _ { k , l } X ( i + k , j + l ) \cdot G ( k , l ), \end{aligned}$$
(1)

where G is the 2D Gaussian blur operator

$$\begin{aligned} G ( k , l ) = A \exp \left( - \frac{ \left( k - \mu _ { x } \right) ^ { 2 } }{ 2 \sigma _ { x } } - \frac{ \left( l - \mu _ { y } \right) ^ { 2 } }{ 2 \sigma _ { y } } \right) . \end{aligned}$$
(2)

The color loss can then be written as

$$\begin{aligned} \mathcal { L } _ { \mathrm { color } } ( X , Y ) = \left\| X _ { b } - Y _ { b } \right\| _ { 2 } ^ { 2 }. \end{aligned}$$
(3)

We use the same parameters as defined in [8], namely \(A = 0.053 , \mu _ { x , y } = 0\), and \(\sigma _ { x , y } = 3\).
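For illustration, the following is a minimal PyTorch sketch of this color loss. The kernel size of 21 and the depthwise-convolution implementation are our own assumptions rather than details taken from [8], and the MSE is mean-reduced for convenience.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=21, sigma=3.0, amplitude=0.053):
    # Separable 2D kernel following Eq. (2) with mu_x = mu_y = 0.
    # The kernel size (21) is an assumption, not taken from [8].
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2.0 * sigma))
    return amplitude * torch.outer(g, g)

def color_loss(x, y, kernel):
    # x, y: (N, 3, H, W) image tensors; kernel: (k, k) Gaussian from above.
    c, k = x.shape[1], kernel.shape[-1]
    weight = kernel.to(device=x.device, dtype=x.dtype).expand(c, 1, k, k).contiguous()
    # Depthwise convolution blurs each color channel independently, Eq. (1).
    x_b = F.conv2d(x, weight, padding=k // 2, groups=c)
    y_b = F.conv2d(y, weight, padding=k // 2, groups=c)
    # Eq. (3): squared L2 distance between the blurred images (mean-reduced here).
    return F.mse_loss(x_b, y_b)
```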

Content Loss. The content loss is computed by comparing the two images after they have been passed through a certain number of layers of the VGG-19 network. This is superior to a pixel-wise loss such as per-pixel MSE because it more closely resembles human perception [8, 26], abstracting away negligible details such as a small shift in pixels. It also helps preserve the semantics of the image. It is defined as

$$\begin{aligned} \mathcal { L } _ { \mathrm { content } } = \frac{ 1 }{ C _ { j } H _ { j } W _ { j } } \left\| \psi _ { j } \left( F _ { \mathbf { W } } \left( I _ { s } \right) \right) - \psi _ { j } \left( I _ { t } \right) \right\| \end{aligned}$$
(4)

where \(\psi _ { j } (\cdot )\) is the feature map of the VGG-19 network after its j-th convolutional layer, \(C _ { j }\), \(H _ { j }\), and \(W _ { j }\) are the number of channels, height, and width of this map, and \(F _ { \mathbf { W } } \left( I _ { s } \right) \) denotes the enhanced image.
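As an illustration, here is a minimal PyTorch sketch of such a VGG-based content loss. The use of torchvision's pretrained VGG-19, the layer index assumed to correspond to relu_5_4, and the omission of ImageNet input normalization are our own assumptions, not details of the baseline implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

class ContentLoss(torch.nn.Module):
    # Eq. (4): distance between VGG-19 feature maps of the enhanced and target images.
    def __init__(self, layer_index=35):  # index 35 is assumed to be relu_5_4 in torchvision's vgg19
        super().__init__()
        features = vgg19(pretrained=True).features[:layer_index + 1]
        for p in features.parameters():
            p.requires_grad_(False)  # VGG-19 acts as a fixed feature extractor
        self.features = features.eval()

    def forward(self, enhanced, target):
        psi_e = self.features(enhanced)  # psi_j(F_W(I_s))
        psi_t = self.features(target)    # psi_j(I_t)
        # Mean over all C_j * H_j * W_j elements; [8] may use a slightly different norm.
        return F.mse_loss(psi_e, psi_t)
```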

Texture Loss. One important loss, which technically makes this network a GAN, is the texture loss [8]. Here, the output images are not directly compared to the targets; instead, a discriminator network is tasked with telling apart real DSLR images from fake, generated ones. During training, the discriminator's weights are optimized for maximum classification accuracy, while the generator's weights are optimized in the opposite direction, to minimize the discriminator's accuracy and thereby produce convincing fake images.

Before being fed to the discriminator, the images are first converted to grayscale, as this loss specifically targets texture. It can be written as

$$\begin{aligned} \mathcal {L}_{\text {texture}} = -\sum _{i} \log {D(F_\mathbf{W }(I_s), I_t)}, \end{aligned}$$
(5)

where \(F_\mathbf{W }\) and D denote the generator and discriminator networks, respectively.
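A minimal sketch of the generator-side part of this loss is shown below. The discriminator is a placeholder that is assumed to output the probability that its first (grayscale) argument is a real DSLR image, and the grayscale weights are standard luma coefficients rather than values from [8].

```python
import torch

def to_grayscale(img):
    # img: (N, 3, H, W); ITU-R BT.601 luma weights (an assumption, not from [8]).
    w = torch.tensor([0.299, 0.587, 0.114], device=img.device).view(1, 3, 1, 1)
    return (img * w).sum(dim=1, keepdim=True)

def texture_loss(discriminator, enhanced, target):
    # Eq. (5), generator side: push the discriminator towards labeling the
    # grayscale enhanced image as a real DSLR image.
    fake = to_grayscale(enhanced)
    real = to_grayscale(target)
    p_real = discriminator(fake, real)       # assumed to return P(fake is "real")
    return -torch.log(p_real + 1e-8).mean()  # epsilon added for numerical stability
```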

Total Variation Loss. A total variation loss is also included, so as to encourage the output image to be spatially smooth, and to reduce noise.

$$\begin{aligned} \mathcal {L}_{\text {tv}} = \frac{1}{C H W} \Vert \nabla _x F_\mathbf{W }(I_s) + \nabla _y F_\mathbf{W }(I_s)\Vert \end{aligned}$$
(6)

Again, C, H, and W are the number of channels, height, and width of the generated image \(F_\mathbf{W }(I_s)\). It is given a low weight overall.
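A minimal sketch of this total variation term is given below; the squared-gradient form and the inclusion of the batch dimension in the normalization are our own assumptions, and the exact norm used in [8] may differ.

```python
import torch

def total_variation_loss(img):
    # img: (N, C, H, W) generated image F_W(I_s); Eq. (6) up to the choice of norm.
    n, c, h, w = img.shape
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]   # horizontal finite differences
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]   # vertical finite differences
    return (dx.pow(2).sum() + dy.pow(2).sum()) / (n * c * h * w)
```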

Total Loss. The total loss is a weighted sum of all the aforementioned losses.

$$\begin{aligned} \mathcal {L}_{\text {total}} = \mathcal {L}_{\text {content}} + 0.4 \cdot \mathcal {L}_{\text {texture}} + 0.1 \cdot \mathcal {L}_{\text {color}} + 400 \cdot \mathcal {L}_{\text {tv}}. \end{aligned}$$
(7)

Ignatov et al. [8] use the \(\textit{relu}\_5\_4\) layer of the VGG-19 network, and mention that the above coefficients were chosen in experiments run on the DPED dataset.
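Combining the sketches above, the weighted sum of Eq. (7) could look as follows; the function signatures are those of our illustrative sketches, not of the reference implementation.

```python
def total_loss(enhanced, target, content_loss, discriminator, color_kernel):
    # Eq. (7): weighted sum of the four losses sketched above.
    l_content = content_loss(enhanced, target)
    l_texture = texture_loss(discriminator, enhanced, target)
    l_color = color_loss(enhanced, target, color_kernel)
    l_tv = total_variation_loss(enhanced)
    return l_content + 0.4 * l_texture + 0.1 * l_color + 400.0 * l_tv
```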

3 Experiments and Results

3.1 Experiments

Adjusting Residual CNN Parameters. In order to gain an understanding of the performance properties of the DPED model [8], the baseline's residual CNN was modified in three ways: the number of filters (or channels) in each layer, the kernel size of each filter, and the total number of residual blocks. While reducing the number of blocks was effective at improving runtime performance, and reducing the number of features even more so, this came at a large cost in image quality. Kernel sizes of \(5\times 5\) were also attempted instead of \(3\times 3\), but did not provide quality improvements sufficient to justify their computational cost.

In Fig. 2 and Table 1, a frontier can be seen, beyond which this simple architecture tuning cannot reach. More sophisticated improvements must therefore be explored.

Parametric ReLU. Parametric ReLU [6] is an activation function defined as

$$\begin{aligned} \text {PReLU}\left( y _ { i } \right) = \left\{ \begin{array} { l l } { y _ { i } , } &{} { \text{ if } y _ { i } > 0 } \\ { a _ { i } y _ { i } , } &{} { \text{ if } y _ { i } \le 0 } \end{array} \right. \end{aligned}$$
(8)

where \(y _ { i }\) is the i-th element of the feature vector, and \(a _ { i }\) is the i-th element of the learned PReLU parameter vector. This permits the network to learn the slope of the activation function for negative inputs instead of fixing it at a constant 0. In theory, this should let the network learn faster, prevent ReLU units from going dormant, and give the network more expressive power at a small computational cost.
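In a framework such as PyTorch, swapping ReLU for PReLU inside a residual block is a one-line change per activation. The block below is only an illustrative sketch, with the 64-channel width matching the baseline.

```python
import torch.nn as nn

# Illustrative residual-block body with a learnable negative slope per channel.
prelu_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.PReLU(num_parameters=64),   # one learned slope a_i per feature map
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.PReLU(num_parameters=64),
)
```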

Fig. 2. Speedup (relative to the baseline) vs. MS-SSIM results on DPED test images, obtained by adjusting the residual CNN parameters. Key: {kernel size, channels, blocks}. The proposed method is shown for reference. All models were trained for 25k iterations, except for the proposed model, which was trained for 40k.

In practice, however (see an example in Table 2), this cost was higher than hoped, and the change did not perceptibly increase image quality.

Strided and Transposed Convolutions. In order to reduce the computation time more drastically, a change to the original architecture was implemented in which the spatial resolution of the feature maps is halved, and subsequently halved again, using strided convolutional layers. Each of these strided layers simultaneously doubles the number of feature maps, as suggested by Johnson et al. [12].

This down-sampling operation is followed by two residual blocks at the new, \(4\times \) reduced resolution, and then by transposed (fractionally strided) convolution layers, which scale the feature maps back up to their original resolution using trainable up-sampling convolutions.

Table 1. Average PSNR/SSIM results on DPED test images, using the original residual CNN architecture with adjusted parameters. 25k iterations, batch size 50.

At each resolution, the earlier feature maps of the same resolution are added to the new maps through skip connections, making it easier for the network to learn simple, non-destructive transformations such as the identity function.

This new architecture introduced slight checkerboard artifacts related to the upscaling process, but overall it allowed for a much faster model without the loss in quality associated with the more straightforward approaches described previously. Table 2 summarizes the quantitative results for several configurations.

3.2 Results

The best result we achieved was with this new strided approach. The generator architecture is shown in Fig. 3. We chose a kernel size of \(3\times 3\), except in the strided convolutional layers, where we opted for \(4\times 4\) instead, in order to mitigate the checkerboard artifacts. The number of feature maps starts at 16 and increases up to 64 in the middle of the network. We trained the network for 40k iterations using an Adam optimizer and a batch size of 50.
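For concreteness, the following PyTorch sketch is an encoder-decoder generator consistent with this description (\(4\times 4\) strided downsampling from 16 to 64 channels, two residual blocks at quarter resolution, transposed-convolution upsampling, and additive skip connections). Details such as the activation functions, the output nonlinearity, and the number of full-resolution layers are our own assumptions rather than the exact architecture of Fig. 3.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual (skip) connection

class StridedGenerator(nn.Module):
    # Schematic sketch of the proposed generator; layer details are assumptions.
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True))
        # Each 4x4 strided convolution halves H and W and doubles the channel count.
        self.down1 = nn.Sequential(nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        # Two residual blocks at the 4x reduced resolution.
        self.body = nn.Sequential(ResBlock(64), ResBlock(64))
        # Transposed 4x4 convolutions scale the feature maps back up.
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.tail = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        f0 = self.head(x)        # full resolution, 16 channels
        f1 = self.down1(f0)      # 1/2 resolution, 32 channels
        f2 = self.down2(f1)      # 1/4 resolution, 64 channels
        f2 = self.body(f2)       # residual blocks carry their own skips
        u1 = self.up1(f2) + f1   # skip connection at 1/2 resolution
        u0 = self.up2(u1) + f0   # skip connection at full resolution
        return torch.sigmoid(self.tail(u0))  # output in [0, 1] (an assumption)
```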

Table 2. Average PSNR/SSIM results on DPED test images, using the proposed strided architecture with varying parameters. The best configuration we propose, line 3, was chosen as a compromise between quality and speed.

Our network takes only 3.2 s of CPU time to enhance a \(1280\times 720\) px image, compared to the baseline's 20.5 s. This represents a 6.3-fold speedup. Additionally, the amount of RAM required is reduced from 3.7 GB to 2.3 GB.

Fig. 3. The generator architecture of the proposed method. Discriminator and losses are the same as in the baseline.

As part of the PIRM 2018 challenge on perceptual image enhancement on smartphones [10], a user study was conducted in which 2000 people were asked to rate the visual results (photos) of the solutions submitted by challenge participants. Users rated each photo on a scale from 1 (low quality) to 4 (high quality). The average of all user ratings for a solution was then taken as its mean opinion score (MOS).

Fig. 4. Visual assessment. From left to right: the input test image from the iPhone 3GS, the output of the baseline model, the output of our model, and the (cropped) ground truth photograph from the DSLR camera.

With an MOS of 2.6523, our submission (see Table 3) scored significantly higher than the DPED baseline (2.4411) and was second only to the winning submission, which scored 2.6804. The submission was evaluated on a different test set, which partially explains its lower PSNR and MS-SSIM scores. It should be noted that the submission shares the same architecture as this paper's main result, but was trained for only 33k iterations.

Table 3. PIRM 2018 challenge final ranking of teams and baselines [10]

Differences between the DPED baseline and our result are somewhat subtle. Our model produces noticeably fewer colored artifacts around hard edges (e.g. Fig. 4, first row, first zoom box), more accurate colors (e.g. the sky in the first row, second box), and reduced noise in smooth shadows (last row, second box). In dense foliage (middle row, first box), it produces more realistic textures than the baseline. Contrast, especially in vertical features (middle row, third box), is often less pronounced, but this comes with the advantage of fewer grid-like artifacts. For more visual results of our method, we refer the reader to the Appendix.

While this subjective evaluation clearly favors our method, the PSNR and MS-SSIM scores comparing the generated images to the target DSLR photos are less conclusive. PSNR and MS-SSIM appear to be only weakly correlated with MOS [10]. Better perceptual quality metrics, including ones that require no reference images, may be a promising component of future work.

4 Conclusion

Thanks to strided convolutions, a promising architecture was found in the quest for efficient photo enhancement on mobile hardware. Our model produces clear, detailed images exceeding the quality of the baseline, while requiring only 16% of the baseline's computation time.

Even though further speed improvements will certainly appear in future work, as evidenced by the PIRM 2018 challenge results [10], it is reassuring to conclude that convolutional neural network-based image enhancement can already produce high-quality results with performance acceptable for mobile devices.