Abstract:
Detecting lesions from computed tomography (CT) scans relies on two aspects of the input: intra-slice texture information from the key slice and inter-slice structural context information from the adjacent slices. However, most existing methods ignore the correlation and complementarity between texture and structural information, resulting in an unexpected loss of performance. In this paper, a novel Composite Context Fusion Network (CCF-Net) is proposed to jointly model intra-slice and inter-slice features and demonstrate the effectiveness of the two-stream framework. To extract both texture and structural information, two streams of 2D and 3D convolutional modules are employed in each stage. Moreover, a Composite Fusion architecture equipped with Inter-slice Correlative Fusion (ICF) modules is proposed to achieve stage-by-stage feature fusion, mining and exchanging information between texture-aware and context-aware features. Extensive experiments show that the proposed CCF-Net achieves state-of-the-art detection performance on the multi-disease CT lesion detection task and significantly surpasses the baseline methods.
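The abstract does not specify implementation details, but the two-stream, stage-by-stage design can be illustrated with a minimal PyTorch sketch: one 2D convolutional block processes the key slice for texture, one 3D convolutional block processes the slab of adjacent slices for structure, and a fusion module exchanges information between the two streams at the end of the stage. All class names, channel sizes, and the gating-style fusion used here are assumptions for illustration, not the authors' ICF design.

import torch
import torch.nn as nn

class InterSliceFusion(nn.Module):
    """Hypothetical stand-in for an ICF-style module: exchanges information
    between the 2D texture stream and the 3D context stream."""
    def __init__(self, channels):
        super().__init__()
        # gates deciding how much of the other stream's signal to mix in (assumed mechanism)
        self.gate_2d = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_3d = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_2d, feat_3d):
        # feat_2d: (B, C, H, W) key-slice texture features
        # feat_3d: (B, C, D, H, W) adjacent-slice context features
        context = feat_3d.mean(dim=2)                                   # collapse the slice axis -> (B, C, H, W)
        fused_2d = feat_2d + self.gate_2d(context) * context            # texture stream absorbs context
        fused_3d = feat_3d + (self.gate_3d(feat_2d) * feat_2d).unsqueeze(2)  # context stream absorbs texture
        return fused_2d, fused_3d

class TwoStreamStage(nn.Module):
    """One stage of a two-stream backbone: a 2D block for intra-slice texture,
    a 3D block for inter-slice structure, followed by cross-stream fusion."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block_2d = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.block_3d = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))
        self.fusion = InterSliceFusion(out_ch)

    def forward(self, key_slice, slab):
        feat_2d = self.block_2d(key_slice)   # (B, out_ch, H, W)
        feat_3d = self.block_3d(slab)        # (B, out_ch, D, H, W)
        return self.fusion(feat_2d, feat_3d)

if __name__ == "__main__":
    stage = TwoStreamStage(in_ch=1, out_ch=16)
    key_slice = torch.randn(2, 1, 64, 64)     # key CT slice
    slab = torch.randn(2, 1, 7, 64, 64)       # key slice plus adjacent slices
    f2d, f3d = stage(key_slice, slab)
    print(f2d.shape, f3d.shape)               # (2, 16, 64, 64) and (2, 16, 7, 64, 64)

In the paper's Composite Fusion architecture this kind of exchange is applied stage by stage, so each deeper stage operates on features that already blend texture and structural context.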
Date of Conference: 19-22 September 2021
Date Added to IEEE Xplore: 23 August 2021