Abstract
We present a general framework for medical image segmentation from limited supervision, reducing the reliance on fully and densely labeled data. Our method, called TriMix, is simple: it jointly trains three diverse models and adopts a mix-augmentation scheme. TriMix imposes consistency under a more challenging perturbation, i.e., the combination of data augmentation and model diversity, on top of the tri-training framework. This straightforward strategy makes TriMix a strong and general learner that handles limited supervision with different kinds of imperfect labels. We conduct extensive experiments to demonstrate TriMix's generality on semi- and weakly-supervised segmentation tasks. Compared to task-specific state-of-the-art methods, TriMix achieves competitive performance and sometimes surpasses them by a large margin. The code is available at https://github.com/MoriLabNU/TriMix.
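To make the idea concrete, below is a minimal sketch of the kind of tri-model, mix-augmented consistency training the abstract describes: three networks are trained jointly, unlabeled images are mixed (here with a simple CutMix-style paste), pseudo-labels from the unmixed views are mixed with the same mask, and each model is cross-supervised by the other two. This is an illustrative assumption, not the authors' released implementation (see the GitHub link above); the toy network, the mixing function, and the loss weighting `lambda_u` are all placeholders.

```python
# Hypothetical sketch of tri-model, mix-augmented consistency training in the
# spirit of TriMix; architectures, mixing, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_seg_net(num_classes=2):
    # Toy fully convolutional network standing in for a real segmentation model.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, num_classes, 1),
    )

def cutmix(x1, x2):
    # Paste a random rectangle from x2 into x1; return the mixed image and mask.
    _, _, h, w = x1.shape
    mask = torch.zeros(1, 1, h, w, device=x1.device)
    ch, cw = h // 2, w // 2
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    mask[..., top:top + ch, left:left + cw] = 1.0
    return x1 * (1 - mask) + x2 * mask, mask

models = [tiny_seg_net() for _ in range(3)]
opt = torch.optim.SGD([p for m in models for p in m.parameters()],
                      lr=0.01, momentum=0.9)

def train_step(x_lab, y_lab, x_unl_a, x_unl_b, lambda_u=1.0):
    # Supervised term: every model fits the labeled batch.
    sup = sum(F.cross_entropy(m(x_lab), y_lab) for m in models)

    # Unsupervised term: pseudo-labels from the two unmixed views are combined
    # with the same mask as the images, then each model learns on the mixed
    # image from the other two models' pseudo-labels (cross supervision).
    x_mix, mask = cutmix(x_unl_a, x_unl_b)
    with torch.no_grad():
        pseudo = [
            ((1 - mask) * m(x_unl_a) + mask * m(x_unl_b)).argmax(dim=1)
            for m in models
        ]
    unsup = 0.0
    for i, m in enumerate(models):
        logits = m(x_mix)
        for j, pl in enumerate(pseudo):
            if i != j:  # no self-supervision on a model's own pseudo-labels
                unsup = unsup + F.cross_entropy(logits, pl)

    loss = sup + lambda_u * unsup
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random tensors standing in for a real data loader.
x_lab = torch.randn(2, 1, 64, 64)
y_lab = torch.randint(0, 2, (2, 64, 64))
x_unl_a, x_unl_b = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
print(train_step(x_lab, y_lab, x_unl_a, x_unl_b))
```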
Acknowledgement
This work was supported by JSPS KAKENHI Grant Numbers 21K19898 and 17H00867 and JST CREST Grant Number JPMJCR20D5, Japan.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zheng, Z., Hayashi, Y., Oda, M., Kitasaka, T., Mori, K. (2023). TriMix: A General Framework for Medical Image Segmentation from Limited Supervision. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13846. Springer, Cham. https://doi.org/10.1007/978-3-031-26351-4_12
DOI: https://doi.org/10.1007/978-3-031-26351-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26350-7
Online ISBN: 978-3-031-26351-4