Abstract:
Real-time and robust surgical instrument segmentation is an important problem in endoscopic vision. We propose an instrument segmentation method that fuses convolutional neural network (CNN) predictions with kinematic pose information. First, the CNN model ToolNet-C is designed; it cascades a convolutional feature extractor trained on a large set of unlabeled images with a pixel-wise segmentor trained on a small set of labeled images. Second, the silhouette of the instrument body is projected onto the endoscopic image based on the measured kinematic pose. Third, a particle filter with a shape-matching likelihood and weight suppression is proposed for data fusion; its estimate refines the kinematic pose. The refined pose determines an accurate silhouette mask, which is the final segmentation output. Experiments are conducted with a surgical navigation system, several animal-tissue backgrounds, and a debrider instrument.
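The third step of the abstract describes fusing the CNN prediction with the measured kinematic pose via a particle filter whose likelihood scores how well the projected silhouette matches the CNN output. The sketch below illustrates that general idea in Python; the 2-D pose parameterization, the rectangular silhouette model, the IoU-based likelihood, and the percentile-based weight cap are all assumptions for illustration, not the paper's actual formulation.

```python
# Minimal sketch (not the authors' implementation): refine a noisy kinematic
# pose with a particle filter whose likelihood compares a projected instrument
# silhouette against a CNN segmentation mask. Pose model, silhouette shape,
# and likelihood are illustrative assumptions.
import numpy as np

def render_silhouette(pose, shape=(120, 160)):
    """Project a toy rectangular instrument body at (x, y, angle) into a binary mask."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    x0, y0, theta = pose
    # Rotate pixel coordinates into the instrument frame.
    c, s = np.cos(theta), np.sin(theta)
    u = c * (xx - x0) + s * (yy - y0)
    v = -s * (xx - x0) + c * (yy - y0)
    return (np.abs(u) < 40) & (np.abs(v) < 6)   # 80 x 12 px instrument body

def shape_matching_likelihood(pose, cnn_mask):
    """Assumed likelihood: intersection-over-union between silhouette and CNN mask."""
    sil = render_silhouette(pose, cnn_mask.shape)
    inter = np.logical_and(sil, cnn_mask).sum()
    union = np.logical_or(sil, cnn_mask).sum() + 1e-9
    return inter / union

def refine_pose(kinematic_pose, cnn_mask, n_particles=300, noise=(5.0, 5.0, 0.05)):
    """Sample particles around the measured kinematic pose, return a weighted mean."""
    rng = np.random.default_rng(0)
    particles = kinematic_pose + rng.normal(0.0, noise, size=(n_particles, 3))
    weights = np.array([shape_matching_likelihood(p, cnn_mask) for p in particles])
    # Crude stand-in for the paper's weight suppression: cap dominant weights.
    weights = np.minimum(weights, np.percentile(weights, 95))
    weights /= weights.sum() + 1e-12
    return particles.T @ weights   # refined (x, y, angle) estimate

if __name__ == "__main__":
    true_pose = np.array([80.0, 60.0, 0.3])
    cnn_mask = render_silhouette(true_pose)            # stand-in for CNN output
    measured = true_pose + np.array([8.0, -6.0, 0.1])  # biased kinematic reading
    refined = refine_pose(measured, cnn_mask)
    print("measured:", measured, "refined:", refined)
```

In this toy setup the refined pose moves from the biased kinematic reading toward the pose that best overlaps the CNN mask, which is the role the abstract assigns to the particle-filter fusion step.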
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019