Abstract
A holistic understanding of the dual-view transformation (DVT) is an enabling technique for computer-aided diagnosis (CAD) of breast lesions in mammograms, e.g., micro-calcification (\(\mu \)C) or mass matching and dual-view feature extraction. Learning a complete DVT usually relies on dense supervision that indicates, for each tissue in one view, its corresponding tissue in the other view. Since such dense supervision is infeasible to obtain in practice, sparse supervision from a few traceable lesion tissues across the two views is a natural alternative, but it leads to a defective DVT that dramatically limits the performance of existing CAD systems. To address this problem, our solution is simple but very effective: densify the existing sparse supervision by synthesizing lesions across the two views. Specifically, a Gaussian model is first employed to capture the spatial relationship of real lesions across the two views, guiding the proposed LT-GAN on where to synthesize fake lesions. The proposed LT-GAN not only synthesizes visually realistic lesions but also guarantees appearance consistency across views. Finally, denser supervision is composed from both real and synthetic lesions, enabling robust DVT learning. Experimental results show that a DVT can be learned from our densified supervision, yielding cross-view \(\mu \)C matching performance on the INbreast and CBIS-DDSM datasets that is superior to state-of-the-art methods.
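As a rough illustration of the first step only, the sketch below is not the authors' code: the function names, the use of normalized lesion-center coordinates, and the choice of a single 4-D joint Gaussian over paired CC/MLO positions are all assumptions. It shows how a Gaussian fitted to the cross-view spatial relationship of real lesions could be used to sample candidate locations at which fake lesions might be synthesized in the other view.

    # Hedged sketch (not the paper's implementation): fit a Gaussian to the
    # cross-view spatial relationship of real lesion centers and sample
    # plausible MLO positions for synthetic lesions, given a CC position.
    import numpy as np

    def fit_cross_view_gaussian(cc_xy, mlo_xy):
        """Fit a joint Gaussian over paired CC/MLO lesion coordinates (N, 2) each."""
        pairs = np.hstack([cc_xy, mlo_xy])                     # shape (N, 4)
        mean = pairs.mean(axis=0)
        cov = np.cov(pairs, rowvar=False) + 1e-6 * np.eye(4)   # regularize
        return mean, cov

    def sample_mlo_given_cc(mean, cov, cc_point, n_samples=5, rng=None):
        """Sample candidate MLO locations from the conditional Gaussian p(mlo | cc)."""
        rng = np.random.default_rng() if rng is None else rng
        mu_c, mu_m = mean[:2], mean[2:]
        S_cc, S_cm = cov[:2, :2], cov[:2, 2:]
        S_mc, S_mm = cov[2:, :2], cov[2:, 2:]
        K = S_mc @ np.linalg.inv(S_cc)
        cond_mean = mu_m + K @ (np.asarray(cc_point) - mu_c)
        cond_cov = S_mm - K @ S_cm
        return rng.multivariate_normal(cond_mean, cond_cov, size=n_samples)

    # Toy example: paired lesion centers from a few annotated cases.
    cc = np.array([[0.42, 0.55], [0.60, 0.48], [0.35, 0.62]])
    mlo = np.array([[0.40, 0.30], [0.58, 0.25], [0.33, 0.38]])
    mean, cov = fit_cross_view_gaussian(cc, mlo)
    candidates = sample_mlo_given_cc(mean, cov, cc_point=(0.50, 0.50))
    print(candidates)  # candidate MLO positions for placing a synthesized lesion

The sampled positions would then be handed to the lesion-synthesis network (LT-GAN in the paper) so that fake lesion pairs land at spatially plausible, mutually consistent locations in the two views.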
J. Xian and Z. Wang are the co-first authors.
Acknowledgement
This work was supported by the National Natural Science Foundation of China (61872417, 62061160490), the project of the Wuhan Science and Technology Bureau (2020010601012167), the Open Project of the Wuhan National Laboratory for Optoelectronics (2018WNLOKF025), and the Fundamental Research Funds for the Central Universities (2021XXJS033).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Xian, J., Wang, Z., Cheng, K.-T., Yang, X. (2021). Towards Robust Dual-View Transformation via Densifying Sparse Supervision for Mammography Lesion Matching. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12905. Springer, Cham. https://doi.org/10.1007/978-3-030-87240-3_34
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87239-7
Online ISBN: 978-3-030-87240-3