Using deep learning to detect oesophageal lesions in PET-CT
Presentation + Paper | 15 March 2019

Abstract
PET-CT scans using 18F-FDG are increasingly used to detect cancer, but interpretation can be challenging due to non-specific uptake and complex anatomical structures nearby. To aid this process, we investigate the potential of automated detection of lesions in 18F-FDG scans using deep learning tools. A 5-layer convolutional neural network (CNN) with 2×2 kernels, rectified linear unit (ReLU) activations and two dense layers was trained to detect cancerous lesions in 2D axial image segments from PET scans. Pre-contoured scans from a retrospective cohort study of 480 oesophageal cancer patients were split 80:10:10 into training, validation and test sets. These were then used to generate a total of ~14,000 45×45-pixel image segments, where tumor-present segments were centered on the marked lesion and tumor-absent segments were randomly located outside the marked lesion. ROC curves generated from the training and validation datasets produced an average AUC of ~95%.
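The sketch below is a rough illustration, not the authors' code, of the kind of binary tumor-present/absent classifier the abstract describes: five convolutional layers with 2×2 kernels and ReLU activations, two dense layers, and 45×45-pixel 2D PET image segments as input. The filter counts, pooling placement, optimizer and loss are assumptions not stated in the abstract; only the kernel size, activation, layer counts and input size come from the text.

```python
# Hypothetical sketch of the CNN described in the abstract (Keras/TensorFlow).
# Architectural details not given in the abstract (filter counts, pooling,
# optimizer, loss) are assumed here for the sake of a runnable example.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def build_segment_classifier(input_shape=(45, 45, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Five 2x2 convolutional layers with ReLU, per the abstract.
        layers.Conv2D(16, (2, 2), activation="relu"),
        layers.Conv2D(16, (2, 2), activation="relu"),
        layers.MaxPooling2D((2, 2)),              # pooling placement assumed
        layers.Conv2D(32, (2, 2), activation="relu"),
        layers.Conv2D(32, (2, 2), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (2, 2), activation="relu"),
        layers.Flatten(),
        # Two dense layers; the final unit estimates P(tumor present).
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer="adam",                         # optimizer assumed
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model


if __name__ == "__main__":
    model = build_segment_classifier()
    model.summary()
    # Dummy arrays standing in for the ~14,000 labelled 45x45 PET segments.
    x = np.random.rand(32, 45, 45, 1).astype("float32")
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```

In practice the labelled segments would be drawn from the pre-contoured scans (tumor-present patches centered on the marked lesion, tumor-absent patches sampled elsewhere) and split 80:10:10 by patient before training, with ROC/AUC computed on the held-out sets.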
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
I. Ackerley, R. Smith, J. Scuffham, M. Halling-Brown, E. Lewis, E. Spezi, V. Prakash M.D., and K. Wells "Using deep learning to detect oesophageal lesions in PET-CT", Proc. SPIE 10953, Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, 109530S (15 March 2019); https://doi.org/10.1117/12.2511738
KEYWORDS: Tumors, Data modeling, 3D modeling, Image segmentation, Positron emission tomography, Cancer, Machine learning
