
Road Segmentation Based on Hybrid Convolutional Network for High-Resolution Visible Remote Sensing Image



Abstract:

Road segmentation plays an important role in many applications, such as intelligent transportation systems and urban planning. Various road segmentation methods have been proposed for visible remote sensing images, especially the popular convolutional neural network-based methods. However, high-accuracy road segmentation from high-resolution visible remote sensing images remains challenging due to the complex backgrounds and multiscale roads in these images. To handle this problem, a hybrid convolutional network (HCN), which fuses multiple subnetworks, is proposed in this letter. The HCN contains a fully convolutional network, a modified U-Net, and a VGG subnetwork, which produce a coarse-grained, a medium-grained, and a fine-grained road segmentation map, respectively. The HCN then uses a shallow convolutional subnetwork to fuse these multigrained segmentation maps into the final road segmentation. Benefiting from the multigrained segmentation, our HCN shows impressive results on both multiscale roads and complex backgrounds. Four testing indicators, pixel accuracy, mean accuracy, mean region intersection over union (IU), and frequency weighted IU, are computed to evaluate the proposed HCN on two testing data sets. Compared with five state-of-the-art road segmentation methods, our HCN achieves higher segmentation accuracy.
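
The abstract describes the overall structure (three parallel subnetworks whose maps are fused by a shallow convolutional subnetwork) but gives no layer configurations. The following PyTorch sketch illustrates that fusion pattern only; the module names (ShallowFusion, HCN), the 16-channel hidden layer, the single-channel branch outputs, and the stand-in branches are all illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn


class ShallowFusion(nn.Module):
    """Hypothetical shallow convolutional fusion head: stacks the three
    per-branch road maps along the channel axis and reduces them to a
    single fused segmentation map (road logits)."""

    def __init__(self, in_maps: int = 3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_maps, 16, kernel_size=3, padding=1),  # assumed width
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, coarse, medium, fine):
        # Each input: (N, 1, H, W) road-score map from one subnetwork.
        x = torch.cat([coarse, medium, fine], dim=1)
        return self.fuse(x)


class HCN(nn.Module):
    """Sketch of the hybrid structure: an FCN, a modified U-Net, and a
    VGG-style subnetwork run in parallel on the same image, and their
    coarse-, medium-, and fine-grained maps are fused. The branch
    modules are passed in; each is assumed to map (N, 3, H, W) images
    to (N, 1, H, W) road maps."""

    def __init__(self, fcn_branch, unet_branch, vgg_branch):
        super().__init__()
        self.fcn = fcn_branch
        self.unet = unet_branch
        self.vgg = vgg_branch
        self.fusion = ShallowFusion()

    def forward(self, image):
        coarse = self.fcn(image)    # coarse-grained map
        medium = self.unet(image)   # medium-grained map
        fine = self.vgg(image)      # fine-grained map
        return self.fusion(coarse, medium, fine)


if __name__ == "__main__":
    # Single-conv stand-ins so the sketch runs end to end.
    def stub():
        return nn.Conv2d(3, 1, kernel_size=3, padding=1)

    model = HCN(stub(), stub(), stub())
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])
```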
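
The four testing indicators can all be computed from a single confusion matrix, assuming the usual definitions (as popularized in the semantic segmentation literature). A minimal NumPy sketch, with function names chosen here for illustration:

```python
import numpy as np


def confusion_matrix(pred, gt, n_classes: int = 2):
    """Accumulate an n_classes x n_classes confusion matrix.

    Entry [i, j] counts pixels whose ground-truth class is i and whose
    predicted class is j. For binary road segmentation, n_classes=2.
    """
    mask = (gt >= 0) & (gt < n_classes)
    idx = n_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)


def segmentation_scores(cm):
    """Pixel accuracy, mean accuracy, mean IU, and frequency weighted IU.

    Assumes every class appears in the ground truth (nonzero row sums).
    """
    tp = np.diag(cm)              # correctly classified pixels per class
    gt_total = cm.sum(axis=1)     # ground-truth pixels per class
    pred_total = cm.sum(axis=0)   # predicted pixels per class
    union = gt_total + pred_total - tp

    pixel_acc = tp.sum() / cm.sum()
    mean_acc = np.mean(tp / gt_total)
    iu = tp / union
    mean_iu = np.mean(iu)
    freq = gt_total / cm.sum()
    fw_iu = (freq * iu).sum()
    return pixel_acc, mean_acc, mean_iu, fw_iu
```

In practice the confusion matrices of all test images are summed before scoring, so that frequency weighted IU reflects class frequencies over the whole test set rather than per image.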
Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 16, Issue: 4, April 2019)
Page(s): 613 - 617
Date of Publication: 21 November 2018
