Abstract
The generalization ability of fetal head segmentation methods degrades when data are acquired with different machines, settings, and operators. To preserve generalization, we proposed a Fourier domain adaptation (FDA) method based on amplitude and phase to improve segmentation performance on multi-source ultrasound data. Given a source/target image pair, the Fourier-domain information was first obtained using the fast Fourier transform. Second, the target information was mapped into the source Fourier domain through a phase adjustment parameter α and an amplitude adjustment parameter β. Third, the target image and the preprocessed source image obtained through the inverse discrete Fourier transform were used as inputs to the segmentation network. Finally, the Dice loss was computed to adjust α and β. Compared with existing transform methods, the proposed method achieved the best performance. The adaptive-FDA method provides a solution for the automatic preprocessing of multi-source data, and experimental results show that it quantitatively improves segmentation accuracy and model generalization.
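For illustration, the following is a minimal sketch of the amplitude/phase mapping step described above, written in NumPy under several assumptions: the function name fda_transform, the use of centered low-frequency windows whose sizes are scaled by α (phase/content) and β (amplitude/style), and grayscale inputs of equal shape are illustrative choices, not the authors' exact implementation. The Dice loss shown is the generic soft-Dice formulation.

```python
import numpy as np

def fda_transform(src, tgt, alpha=0.1, beta=0.1):
    """Map target Fourier-domain information onto a source image.

    alpha scales a centered low-frequency window in which the target PHASE
    replaces the source phase; beta does the same for the AMPLITUDE spectrum.
    Both images are expected to be 2D float arrays of identical shape.
    """
    # Forward FFT, shifted so that low frequencies sit at the center
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt, pha_tgt = np.abs(fft_tgt), np.angle(fft_tgt)

    h, w = src.shape
    cy, cx = h // 2, w // 2

    # Swap the low-frequency amplitude band (style); window half-size ~ beta
    rb = int(min(h, w) * beta / 2)
    if rb > 0:
        amp_src[cy - rb:cy + rb, cx - rb:cx + rb] = \
            amp_tgt[cy - rb:cy + rb, cx - rb:cx + rb]

    # Swap the low-frequency phase band (content); window half-size ~ alpha
    ra = int(min(h, w) * alpha / 2)
    if ra > 0:
        pha_src[cy - ra:cy + ra, cx - ra:cx + ra] = \
            pha_tgt[cy - ra:cy + ra, cx - ra:cx + ra]

    # Inverse transform back to the image domain
    fft_mix = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fft_mix)))

def dice_loss(pred, mask, eps=1e-6):
    """Generic soft Dice loss, the criterion used to tune alpha and beta."""
    inter = np.sum(pred * mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(mask) + eps)
```

In the adaptive variant, α and β are adjusted against the Dice loss of the downstream segmentation network; a simple grid search over both parameters is one way such tuning could be realized.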
Funding
This research was funded by the Science and Technology Program of Guangzhou (202201010544) (JB), National Key Research and Development Project (2019YFC0120100, 2019YFC0121907, and 2019YFC0121904) (HW, JB, and YL), Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization (2021B1212040007), Guangdong Health Technology Promotion Project (2022 No. 132) (GC), and the National Natural Science Foundation of China (61901192) (JB).
Ethics declarations
Ethics approval
The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and was approved by the Medical Ethics Committee of Nanfang Hospital of Southern Medical University (No. NFCE-2019-024).
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.

Supplemental Fig. 1
With different parameters (i.e., the content mapping parameter α and the style mapping parameter β), the source image is mapped toward the target image to generate the preprocessed source images. α (left to right) and β (top to bottom) vary from 0 to 0.25, and the final preprocessed source image is marked with a green rectangle. (PNG 1200 kb)
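As a rough indication, the α–β grid of preprocessed images could be generated as follows; this sketch assumes the illustrative fda_transform function given after the Abstract, and uses random arrays as stand-ins for real source/target frames.

```python
import numpy as np

# Stand-ins for a source/target ultrasound frame (real images would be
# loaded and converted to grayscale float arrays of the same shape).
rng = np.random.default_rng(0)
src = rng.random((256, 256))
tgt = rng.random((256, 256))

# Sweep alpha (content/phase) and beta (style/amplitude) over [0, 0.25],
# mirroring the grid layout of Supplemental Fig. 1.
alphas = np.linspace(0.0, 0.25, 6)   # columns, left to right
betas = np.linspace(0.0, 0.25, 6)    # rows, top to bottom
grid = [[fda_transform(src, tgt, a, b) for a in alphas] for b in betas]
```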

Supplemental Fig. 2
Visualization of the t-SNE embedding of features from the different adopted datasets (A, B, C, D). (a) Distribution of the original multi-source data. (b) Distribution of the multi-source data after Adaptive-FDA (migration of data from A to D). The transparent circular area highlights the change in the distribution of dataset A before and after the FDA migration. After Adaptive-FDA, the distribution of the preprocessed source dataset is similar to that of the target dataset. (PNG 273 kb)
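A minimal sketch of how such an embedding could be produced, assuming per-image feature vectors have already been extracted (e.g., from the segmentation encoder) and that scikit-learn and matplotlib are available; the function name plot_tsne is illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, dataset_ids):
    """features: (N, D) feature matrix; dataset_ids: one label ('A'-'D') per row."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    ids = np.asarray(dataset_ids)
    for name in np.unique(ids):
        sel = ids == name
        plt.scatter(emb[sel, 0], emb[sel, 1], s=8, label=str(name))
    plt.legend(title="dataset")
    plt.xlabel("t-SNE dim 1")
    plt.ylabel("t-SNE dim 2")
    plt.show()
```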

Supplemental Fig. 3
Qualitative comparison of the generalization results of different methods for fetal head image segmentation. The green and blue contours indicate the predicted fetal head boundaries, and the red contours represent the ground truth. A#1 denotes the qualitative comparison for one of the paradigms on test dataset A; A#1, A#2, and A#3 correspond to the paradigms A/B, A/C, and A/D, respectively, and the other panels follow the same convention. (PNG 3200 kb)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhou, M., Wang, C., Lu, Y. et al. The segmentation effect of style transfer on fetal head ultrasound image: a study of multi-source data. Med Biol Eng Comput 61, 1017–1031 (2023). https://doi.org/10.1007/s11517-022-02747-1