Early detection of breast cancer is important to reduce morbidity and mortality. Access to breast imaging is limited in low- and middle-income countries compared to high-income countries, which contributes to advanced-stage breast cancer presentation and poor survival. Pocket-sized portable ultrasound devices, also known as point-of-care ultrasound (POCUS), aided by decision support using deep learning-based algorithms for lesion classification, could be a cost-effective way to enable access to breast imaging in low-resource settings. A previous study using convolutional neural networks (CNNs) to classify breast cancer in conventional ultrasound (US) images showed promising results. The aim of the present study was to classify POCUS breast images. A POCUS data set containing 1100 breast images was collected. To increase the size of the data set, a Cycle-Consistent Generative Adversarial Network (CycleGAN) was trained on US images to generate synthetic POCUS images. A CNN was implemented, trained, validated and tested on POCUS images. To improve performance, the CNN was trained with different combinations of data consisting of POCUS images, US images, CycleGAN-generated POCUS images and spatial augmentation. The best result was achieved by a CNN trained on a combination of POCUS images and CycleGAN-generated POCUS images with spatial augmentation. This combination achieved an AUC with a 95% confidence interval of 93.5%–96.6%.
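The best-performing data combination above (real POCUS images plus CycleGAN-generated POCUS images, with spatial augmentation) can be sketched as follows. This is a minimal illustration only: images are stand-in NumPy arrays, the array shapes and image counts are hypothetical, and the study's actual CycleGAN and CNN models are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two of the image sources described above:
# the collected POCUS data set and CycleGAN-generated synthetic POCUS
# images (random arrays here; shapes and counts are illustrative).
pocus = rng.random((1100, 64, 64))           # collected POCUS images
synthetic_pocus = rng.random((500, 64, 64))  # CycleGAN-generated images

def spatial_augment(images, rng):
    """Simple spatial augmentation: random horizontal/vertical flips
    and random 90-degree rotations, applied independently per image."""
    out = []
    for img in images:
        if rng.random() < 0.5:
            img = np.fliplr(img)
        if rng.random() < 0.5:
            img = np.flipud(img)
        img = np.rot90(img, k=rng.integers(0, 4))
        out.append(img)
    return np.stack(out)

# Combine real and synthetic POCUS images, then augment spatially.
train_set = np.concatenate([pocus, synthetic_pocus])
train_set = spatial_augment(train_set, rng)
print(train_set.shape)  # (1600, 64, 64)
```

In practice, augmentation of this kind is usually applied on the fly during training rather than once up front, but the combined data set idea is the same.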