Abstract
Deep generative models are rapidly gaining traction in medical imaging. Nonetheless, most generative architectures struggle to capture the underlying probability distributions of volumetric data, exhibit convergence problems, and offer no robust indices of model uncertainty. By comparison, the autoregressive generative model PixelCNN can be extended to volumetric data with relative ease; it directly attempts to learn the true underlying probability distribution, and it admits a Bayesian reformulation that provides a principled framework for reasoning about model uncertainty.
Our contributions in this paper are twofold: first, we extend PixelCNN to work with volumetric brain magnetic resonance imaging data. Second, we show that reformulating this model to approximate a deep Gaussian process yields a measure of uncertainty that improves the performance of semi-supervised learning, in particular classification performance in settings where the proportion of labelled data is low. We quantify this improvement across classification, regression, and semantic segmentation tasks, training and testing on clinical magnetic resonance brain imaging data comprising T1-weighted and diffusion-weighted sequences.
1 Introduction
There are two common problems with discriminative learning: class imbalance and sparse labels. These problems are particularly prevalent in medical imaging, where expert labels are costly to obtain and pathological cases are comparatively rare in clinical data. Semi-supervised learning offers a partial solution, and can itself be improved by using deep generative models to learn representations of the data in which generalisable decision boundaries are easier to identify [7].
Variational autoencoders (VAEs), generative adversarial networks (GANs), and autoregressive (AR) models are the leading architectures for deep generative modelling. Unfortunately, their application to volumetric data has so far proved challenging, owing to poor convergence and mode dropping in the case of GANs [8], and to potentially inaccurate error bounds and inappropriate independence assumptions in the case of VAEs [8]. The use of generative modelling with high-resolution 3D data therefore remains only tentatively explored.
Our contributions are as follows: in Sect. 3 we show how the 2D generative model PixelCNN [10] can be extended to work efficiently with volumetric data. We call the resulting model 3DPixelCNN. Furthermore, we incorporate the architectural changes suggested in [4] so that we can compute voxel-wise measures of uncertainty with little computational overhead. In Sect. 4 we show the benefits of using these uncertainty measures and 3DPixelCNN’s hidden layer activations, in semi-supervised scenarios where labelled data is limited. Our evaluation incorporates three tasks: semantic segmentation of acute stroke lesions on diffusion weighted imaging (DWI) and age regression and sex classification on grey matter tissue compartments extracted from T1-weighted magnetic resonance imaging (MRI). Code available at https://github.com/guilherme-pombo/3DPixelCNN.
2 Related Work
2.1 Generative Models for Brain Imaging
We are interested in modelling \(p(\varvec{x})\), the probability distribution for the stochastic process that generates our brain volumes. In the context of brain imaging, we have a likelihood model \(p_{\theta }\), where the parameters \(\theta \) are found by maximising the following objective:
\(\theta ^{*} = \arg \max _{\theta } \sum _{i=1}^{N} \log p_{\theta }(x_{i})\)
here, \(x_{1},...,x_{N}\) are the training volumes, which we assume have been sampled i.i.d. from \(p(\varvec{x})\). In medical imaging it is common to process volumes as 2D slices to reduce processing time and memory consumption. However, in order to utilise all of the information in \(x_{i}\), and to demonstrate the feasibility of 3DPixelCNN, we use a fully 3D model.
To the best of our knowledge, [11] is the only work prior to ours to train a generative model on high-resolution 3D brain imagery. They model the (relatively low-detail) computed tomography (CT) modality using an approximation to a deep Gaussian process (c.f. Sect. 2.3) and an Autoencoder (AE). In the present article we also use this approximation but with a generative model that has increased representational power. We describe this model in the following section.
2.2 PixelRNN
In [10], the authors show how to model \(p(\varvec{x})\) autoregressively, by modelling the joint distribution of pixels in an image using recurrent neural networks. They treat their (2D) images, with dimensions \(M \times N\), as a one-dimensional sequence of length \(MN\), and they write the product of the conditional distributions over pixels as:
\(p(\varvec{x}) = \prod _{i=1}^{MN} p(x_{i} \mid x_{1}, \ldots , x_{i-1})\)
This model is comparatively slow because RNNs are difficult to parallelise, so the authors approximate it with much faster standard convolutional networks. To ensure that the receptive field around each pixel includes only the pixels on which its probability is conditioned (thus avoiding future context), they add masks to the convolutions. However, the bounded nature of this ‘masked’ convolutional architecture causes a significant part of the input image to be ignored: a triangular pattern of omitted pixels they call the ‘blind spot’. To remedy this, the authors of [9] use two masked streams instead of one, which they call ‘stacks’: the first conditions on the current row so far (the ‘horizontal’ stack) and the second conditions on all rows above (the ‘vertical’ stack).
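As an illustration (our own sketch, not the code of [10] or [9]), a single-mask causal 2D convolution can be written as follows; stacking layers of this kind is exactly what produces the blind spot that the two-stack design removes:

```python
import torch
import torch.nn as nn


class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution; type 'A' also hides the centre pixel."""

    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kh, kw = self.kernel_size
        mask = torch.ones(1, 1, kh, kw)
        mask[:, :, kh // 2, kw // 2 + (mask_type == "B"):] = 0  # centre row, at/right of centre
        mask[:, :, kh // 2 + 1:, :] = 0                         # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask        # re-apply the causal mask at every call
        return super().forward(x)


# first layer uses mask 'A', subsequent layers use mask 'B'
layer = MaskedConv2d("A", in_channels=1, out_channels=16, kernel_size=7, padding=3)
out = layer(torch.randn(1, 1, 64, 64))
```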
The greater computational efficiency of PixelCNN compared with PixelRNN carries a cost in reconstruction quality. However, it has been shown [9] that this can be ameliorated by replacing the rectified linear units between the convolutional stacks with a gated activation unit. This results in a better emulation of a long short-term memory (LSTM) gate. This use of both the convolutional stacks and the gated unit has enabled PixelCNN to match PixelRNN’s reconstruction quality, whilst maintaining computational feasibility.
2.3 Dropout as a Bayesian Approximation
Unlike VAEs, AR models are not Bayesian by construction, and they do not produce implicit or explicit estimates of model uncertainty. In [4], Gal and Ghahramani show that simply incorporating Dropout [13] in every layer of any given neural network allows it to perform approximate Bayesian inference, without harming performance. Once these changes are made, the standard deviation of a sufficiently large batch of forward passes, with dropout kept active at test time, yields a robust measure of model uncertainty.
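A minimal sketch of this Monte Carlo dropout procedure (the interface is assumed; `model` is any network containing dropout layers):

```python
import torch


def mc_dropout_stats(model, x, n_passes=20):
    """Summarise n_passes stochastic forward passes with dropout kept active."""
    model.train()                     # keeps dropout sampling on (in a network with
                                      # batch normalisation one would enable only the
                                      # dropout modules)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_passes)], dim=0)
    return samples.mean(dim=0), samples.std(dim=0)   # voxel-wise mean and uncertainty
```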
In [14] it is shown that, since natural images exhibit strong spatial correlation, the feature map activations of a convolutional network are also strongly correlated, so applying standard element-wise Dropout to convolutional feature maps is ill-advised. Hence, they propose a new dropout method, SpatialDropout, whereby for a given convolutional feature tensor of size \( \text {H} \times \text {W} \times \text {D} \times \text {channels} \), a mask of size \(1 \times 1 \times 1 \times \text {channels} \) is applied, dropping entire channels rather than individual activations.
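In PyTorch, for example, this behaviour is available for volumetric feature maps through nn.Dropout3d, which zeroes entire channels of an (N, channels, D, H, W) tensor rather than individual voxels (a usage sketch, not the paper's code):

```python
import torch
import torch.nn as nn

spatial_dropout = nn.Dropout3d(p=0.15)        # drops whole feature channels
features = torch.randn(1, 32, 52, 64, 52)     # (batch, channels, D, H, W)
dropped = spatial_dropout(features)           # each channel is zeroed as a unit
```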
3 Methods
To extend the PixelCNN solution to volumetric data we must first solve the blind spot problem in 3D (cf. Sect. 2.2). Consider our model processing an \(M \times N \times K\) volume, and currently calculating the conditional distribution of the voxel with coordinates \((R,C,D)\), which we denote \(x_{R, C, D}\). We must now use three stacks (cf. Sect. 2.2): horizontal, depth and vertical.
The horizontal stack conditions on the current depth channel and takes as input the output of the previous horizontal stack gate, as well as the output of the depth and vertical stacks. The set of voxels it considers is \(\{x_{R, C, d} | d \in \{1, \ldots , D-1 \} \}\). In turn, the depth stack conditions on all the entries to the left of the current voxel, but does not go up any rows. It takes as input the output of the previous depth gate, as well as the output of the vertical stack. Its receptive field grows in a 2D rectangular fashion, defined by the set \(\{x_{R, c, d} | c \in \{1, \ldots , C-1 \}, d \in \{1, \ldots , K \}\}\). Finally, the vertical stack conditions on all the rows and columns in the level above the current voxel and does not require any masking. Its output is fed into the horizontal and depth stacks and its receptive field grows as a cuboid, defined by the set \(\{x_{r, c, d} | r \in \{1, \ldots , R-1\}, c \in \{1, \ldots , N\}, d \in \{1, \ldots , K \} \}\).
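A minimal sketch (our own illustration, not the released 3DPixelCNN code) of masked 3D convolutions whose receptive fields match the three stacks just described; tensors are laid out as (N, channels, rows, cols, depth):

```python
import torch
import torch.nn as nn


def stack_mask(kernel_size, stack):
    """Binary kernel mask realising the receptive field of one stack."""
    k = kernel_size
    r0 = c0 = d0 = k // 2                       # kernel centre = current voxel
    mask = torch.zeros(1, 1, k, k, k)
    if stack == "vertical":                     # all rows above, any col/depth
        mask[:, :, :r0, :, :] = 1               # (the paper realises this stack without
                                                #  masking, via shifted convolutions)
    elif stack == "depth":                      # same row, columns to the left, any depth
        mask[:, :, r0, :c0, :] = 1
    elif stack == "horizontal":                 # same row and column, previous depth slices
        mask[:, :, r0, c0, :d0] = 1
    return mask


class MaskedConv3d(nn.Conv3d):
    def __init__(self, stack, in_ch, out_ch, kernel_size=3):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.register_buffer("mask", stack_mask(kernel_size, stack))

    def forward(self, x):
        self.weight.data *= self.mask           # enforce the stack's causality
        return super().forward(x)
```

The outputs of the three stacks can then be combined through the gated unit described next.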
These stacks ensure our convolution operations have the correct receptive fields. To reiterate, using only regular convolutions would lead to a bounded receptive field, which in turn would lead to the omission of several voxels from the calculation of the conditional distribution (a pyramidal ‘blind spot’). These stacks are represented in Fig. 1. We use the gated activation unit from [9] to combine the information from the different stacks efficiently. We first add the stacks together and split the result channel-wise into two halves, \(W_1\) and \(W_2\). The gated activation unit is then calculated as \(\tanh (W_1) \odot sigmoid(W_2)\), where \(\odot \) is the Hadamard product. After each gate we have a skip connection [5] to the next stack in the model. After the first layer, as in [12], we also add a residual connection [5] from each gated unit to the next. SpatialDropout is applied after every convolution operator so that we can approximate a deep Gaussian process (see Sect. 2.3). Model statistics are derived at test time from batches of multiple forward passes with dropout enabled. We denote the mean and standard deviation of these batches by \(\varvec{\mu }\) and \(\varvec{\sigma }\) respectively.
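A minimal sketch of the gated activation unit (shapes are assumptions, not the released implementation): the stack outputs are summed, split channel-wise, and gated with tanh and sigmoid, with SpatialDropout keeping the layer usable for the Monte Carlo procedure of Sect. 2.3.

```python
import torch
import torch.nn as nn

spatial_drop = nn.Dropout3d(p=0.15)


def gated_unit(stack_sum):
    """stack_sum: (N, 2C, rows, cols, depth) sum of the vertical, depth and horizontal stacks."""
    w1, w2 = stack_sum.chunk(2, dim=1)                        # channel-wise split into halves
    return spatial_drop(torch.tanh(w1) * torch.sigmoid(w2))   # Hadamard product


# e.g. combining the three stacks of the previous sketch, with a residual path:
# out = previous_gate_output + gated_unit(vertical + depth + horizontal)
```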
We train our 3DPixelCNN models using a continuous negative log likelihood (NLL), and evaluate using log likelihood. We used a continuous rather than a discrete NLL because it has been shown [12] that treating pixel intensities as discrete emission probabilities performs poorly for large images, resulting in noisy, speckled reconstructions. We trained for 20 epochs using the Adam optimiser [6]. The initial learning rate was 0.001, the batch size was 1 and the dropout rate was 0.15 (dropout rates between 0.1 and 0.2 are recommended in [4]). Our model has five layers with the structure depicted in Fig. 1. We use kernel sizes of \(3\times 3\times 3\) for all non-masked convolutions in the network. We could have incorporated downsampling as in [12], but we leave this for future work.
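The stated training configuration can be sketched as follows; the continuous likelihood is illustrated here with a per-voxel Gaussian NLL head, which is an assumption on our part, as the exact parameterisation is not specified above.

```python
import torch
import torch.nn.functional as F


def train(model, train_loader, epochs=20, lr=1e-3):
    """Adam, learning rate 0.001, batch size 1 (one volume per loader item), 20 epochs."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for volume in train_loader:
            mean, log_var = model(volume)           # assumed per-voxel predictive head
            loss = F.gaussian_nll_loss(mean, volume, log_var.exp())
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```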
4 Experiments and Results
Data: We use two separate datasets. One is a collection of routinely acquired DWI from patients evaluated for acute stroke at our clinic. This comprises 1333 scans with evidence of an acute ischaemic lesion, and 982 scans with no evidence of an acute lesion but variable presence of chronic vascular disease. The volumes we use consist of the b1000 sequence non-linearly registered to MNI space with unified segmentation [1]. A manually-curated binary mask delineating the area of ischaemic damage is our ground truth for lesion semantic segmentation [15]. We also use a manually curated mask to remove any voxels outside of the brain.
The second dataset consists of 13287 SPM grey matter (GM) tissue compartments from MRIs obtained from UK Biobank and from routinely acquired clinical imaging at UCLH. The GM segmentations were derived using the methods of [1]. Sex and age are known for all patients and were used to evaluate models on classification and regression tasks. For both modalities we reduced the computational burden (due to time constraints) by downsampling the volumes, using bilinear resampling, to 3 mm resolution (\(52\times 64\times 52\) voxels).
Image Reconstructions: For each volume in the DWI and GM datasets, we produce its reconstruction, and then generate \(\varvec{\mu }\) and \(\varvec{\sigma }\) by performing \(T=20\) forward passes with dropout left on (cf. Sect. 2.3).
We use a train/validation/test split of 80/10/10. The best log likelihoods obtained by the model for volume reconstruction on the 3 mm test sets are 0.360 for the DWI data and 0.105 for the GM data. Our model outperforms the Bayesian AE from [11], which achieves 0.378 on DWI and 0.222 on GM. Notice that on the more detailed modality (T1-GM) our model performs 111% better.
In order to produce uncertainty estimates (\(\varvec{\sigma }\)) for DWI, we trained our 3DPixelCNN only on data with no evidence of stroke lesion, i.e. from the distribution \(p(\varvec{x}|\text {no lesion})\). Therefore, when producing \(\varvec{\sigma }\) for lesioned data, the uncertainty masks provide a measure of the distance from the lesioned brain to the expected distribution of non-lesioned brains. We use a simple segmentation strategy on the volumes: thresholding at the average intensity of the volume \(x_i\), which we denote \(\tau (x_i)\). On the DWI ischaemic stroke lesion test set, applying this strategy to the raw volumes yields a Dice coefficient of 14.7%, whereas applying it to \(\varvec{\sigma }\) yields 23.7%. The same strategy applied to the Bayesian AE, \(|x_i - AE(x_i)|\) (see [11] for more details), yields 17.3%. This provides early confirmation that the uncertainty estimates of generative models capture useful task-independent signal.
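As an illustration, the thresholding strategy \(\tau \) and the Dice evaluation described above can be written as follows (a minimal sketch; implementation details beyond the mean-intensity threshold are assumptions):

```python
import torch


def tau(volume):
    """Binary mask: voxels whose intensity exceeds the volume's mean intensity."""
    return (volume > volume.mean()).float()


def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    intersection = (pred * target).sum()
    return (2 * intersection / (pred.sum() + target.sum() + eps)).item()


# e.g. dice(tau(sigma_map), lesion_mask) scores the unsupervised lesion segmentation
```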
Figure 2 shows a representative selection of reconstructions of GM volumes and unsupervised lesion masks produced using \(\tau (x_i)\). Notice in the MRI reconstruction that, when the original image is corrupted, the 3DPixelCNN model acts as a super-resolution mechanism, further showing that the model has learnt \(p(\varvec{x})\) and is not simply memorising the training set.
Fig. 2. (a) From left to right: (1) the slice through the axial plane with the greatest area of lesion, (2) the stroke label map, (3) \(\tau (x_i)\), (4) \(|x_i - AE(x_i)|\), (5) \(\tau (\varvec{\sigma })\); \(\varvec{\sigma }\) helps capture the tightest bound on the lesion. (b) Axial slices of (1) the original volume, (2) the 3DPixelCNN reconstruction and (3) the Bayesian AE reconstruction (the last volume suffered an acquisition problem and we use it to test 3DPixelCNN’s ability to super-resolve).
Semi-supervised Learning: To test whether our uncertainty measures improve supervised tasks, we use the DWI dataset to evaluate models on semantic segmentation, and the GM dataset to evaluate regression and classification.
For the segmentation task we use a 3D U-Net [2] as the baseline. As the DWI dataset is not yet public, there are no state-of-the-art results against which we can compare. For the age regression and sex classification tasks we use the architecture from [3] as our baseline, which we call ASC, adding only L2 regularisation and Dropout to ensure better generalisation. All models are trained with early stopping on the validation set, the criterion being 20 successive epochs without a reduction in validation error. The models are trained in 5-fold fashion (80/10/10 split) for added statistical resilience. We compare the models’ Dice scores on the semantic segmentation task, their mean absolute errors on the age regression task and their binary accuracy on the sex classification task, all evaluated on the test set.
Figure 3 shows mean model performance with error bars for three different types of input to both the 3D U-Net and ASC classifiers: (1) the original volumes alone (red: \(\varvec{\chi }\)); (2) the original data concatenated with \(\varvec{\mu }\) and \(\varvec{\sigma }\) (blue: \(\varvec{\xi }\)); for the Bayesian AE we concatenate \(\varvec{\mu }\) and \(|x_i - AE(x_i)|\); (3) the activations of the penultimate convolutional layer of the 3DPixelCNN (black/green: \(\varvec{\psi }\)); for the Bayesian AE we use its latent space.
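For concreteness, a sketch of how the three input variants could be assembled (the tensor shapes and channel counts are assumptions consistent with the 3 mm volumes and the figures quoted below):

```python
import torch

D, H, W = 52, 64, 52                           # 3 mm volume dimensions
volume = torch.randn(1, 1, D, H, W)            # original volume
mu = torch.randn(1, 1, D, H, W)                # MC-dropout mean reconstruction
sigma = torch.randn(1, 1, D, H, W).abs()       # MC-dropout voxel-wise uncertainty
hidden = torch.randn(1, 10, D, H, W)           # penultimate 3DPixelCNN activations

chi = volume                                   # input (1): raw data only
xi = torch.cat([volume, mu, sigma], dim=1)     # input (2): 3 channels
psi = hidden                                   # input (3): hidden-layer features
```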
When using 3DPixelCNN, we notice that performance with \(\varvec{\xi }\) was significantly better than with \(\varvec{\chi }\) for all dataset sizes and tasks tested. For sex classification and age regression, using \(\varvec{\psi }\) results in better performance than both \(\varvec{\chi }\) and \(\varvec{\xi }\). We speculate that this is because the embeddings, which are higher-dimensional (10 vs 3 channels), comprise a decomposition of the data from which useful decision boundaries can be more readily identified, although this extra dimensionality comes at the cost of greater GPU memory requirements. For lesion segmentation, on the other hand, using \(\varvec{\xi }\) performs better than using either \(\varvec{\chi }\) or \(\varvec{\psi }\).
For semantic segmentation using \(\varvec{\xi }\), the increase is most noticeable at smaller \(N\) with an improvement of 0.082 (25.6%) in Dice coefficient for \(N < 500\) and an average increase of 0.056 (15.2%) for all \(N\). Using \(\varvec{\psi }\) provides less of a performance gain, with an average increase in Dice of 0.025 (6.9%). For age regression and sex classification, we notice a steady increase in performance when using \(\varvec{\xi }\), with an average error reduction of 0.30 years (3.98%) and accuracy increase of 1.87%, respectively. Using \(\varvec{\psi }\), on the other hand, results in an average error reduction of 0.68 years (9.09%) for age regression and accuracy increase of 3.36%, for sex classification. Using the Bayesian AE’s \(\varvec{\xi }\) results in a performance degradation of at least 2% for all tasks, compared to using the original volume. We suspect this is because here \(\varvec{\xi }\) is relatively noisy, as can be seen in Fig. 2. On the other hand, using the latent space, \(\varvec{\psi }\), results in an average 5.6% increase for the age regression task and a 2.2% increase for sex classification. The Bayesian AE’s latent space degraded performance for the semantic segmentation task.
Clearly, 3DPixelCNN’s uncertainty measures help most with semantic segmentation. They seem to be most useful for tasks with more localised signal (lesion segmentation) as opposed to global signal. We speculate this is because in the lesioned brains \(\varvec{\sigma }\) is more focused on the lesion, since we had the generative model learn \(p(\varvec{x}|\text {no lesion})\), whereas the uncertainty maps are much noisier for volumes with less obvious abnormalities, since there the 3DPixelCNN learnt only \(p(\varvec{x})\). We hypothesise that these uncertainty measures are also helpful in the presence of artefacts (as can be seen in Fig. 2), which is why they also helped for tasks on less abnormal brains.
5 Conclusion
We have presented the first implementation of a volumetric neural network-based autoregressive model. We have shown that it is a method that can capture the richness of a complicated 3D probability distribution and is therefore well-suited to medical imaging. By augmenting labelled data with measures of uncertainty derived from unsupervised models, we saw improved performance in every supervised task we carried out. For tasks on brains without gross abnormalities, we found it was better to use 3DPixelCNN’s penultimate layer activations than the uncertainty estimates. For lesion detection, we found that the uncertainty measures provided a bigger performance increase, which is of more utility in the medical imaging domain.
References
Ashburner, J., et al.: Unified segmentation. Neuroimage 26(3), 839–851 (2005)
Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46723-8_49
Cole, J.H., et al.: Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. NeuroImage 163, 115–124 (2017)
Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML, pp. 1050–1059 (2016)
He, K., et al.: Deep residual learning for image recognition. CoRR (2015)
Kingma, D.P., et al.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Kingma, D.P., et al.: Semi-supervised learning with deep generative models. In: NIPS, pp. 3581–3589 (2014)
Kingma, D.P., et al.: Glow: generative flow with invertible 1x1 convolutions. In: NIPS, pp. 10236–10245 (2018)
van den Oord, A., et al.: Conditional image generation with PixelCNN decoders. In: NIPS, pp. 4790–4798 (2016)
van den Oord, A., et al.: Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759 (2016)
Pawlowski, N., et al.: Unsupervised lesion detection in brain CT using Bayesian convolutional autoencoders (2018)
Salimans, T., et al.: PixelCNN++. In: ICLR (2017)
Srivastava, N., et al.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014)
Tompson, J., et al.: Efficient object localization using convolutional networks. In: CVPR, pp. 648–656 (2015)
Xu, T., et al.: High-dimensional therapeutic inference in the focally damaged human brain. Brain 141, 48–54 (2017)
Acknowledgments
This research has been conducted using the UK Biobank Resource under Application Number 16273. This work is supported by the EPSRC-funded UCL CDT in Medical Imaging (EP/L016478/1), the Department of Health’s NIHR-funded BRC at UCLH and the Wellcome Trust.