
Do we really need more training data for object localization


Abstract:

The key factor in training a good neural network lies in both model capacity and large-scale training data. As more datasets become available nowadays, one may wonder whether the success of deep learning derives from data augmentation alone. In this paper, we propose a new dataset, the Extended ImageNet Classification (EIC) dataset, built on the original ILSVRC CLS 2012 set, to investigate whether more training data is the crucial factor. We address the problem of object localization: given an image, a set of boxes (also called anchors) is generated to localize multiple instances. Unlike previous work that places all anchors at the last layer, we distribute boxes of different sizes across feature maps of various resolutions in the network, since small anchors are more easily identified at the larger spatial resolutions of the shallow layers. Inspired by the hourglass work, we apply a conv-deconv network architecture to generate object proposals. The motivation is to fully leverage high-level summarized semantics and to use their up-sampled versions to guide the local details in the low-level maps. Experimental results demonstrate the effectiveness of this design. Based on the newly proposed dataset, we find that more data can enhance the average recall, but a more balanced data distribution among categories yields better results even with fewer training samples.
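
The abstract outlines two design choices: anchors of different sizes are assigned to feature maps of different resolutions, and a conv-deconv (hourglass-style) path upsamples high-level semantics to refine the lower-level maps. Below is a minimal sketch of that idea in PyTorch; the layer names, channel counts, and per-level heads are illustrative assumptions and not the authors' actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDeconvProposalNet(nn.Module):
    """Hypothetical conv-deconv proposal network: small anchors are predicted
    on high-resolution (shallow) maps, large anchors on low-resolution (deep)
    maps, and deep semantics are upsampled to guide the shallow maps."""

    def __init__(self, anchors_per_level: int = 3):
        super().__init__()
        # Encoder: progressively downsample (shallow -> deep).
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: project upsampled deep features to match the shallower maps.
        self.dec2 = nn.Conv2d(256, 128, 3, padding=1)
        self.dec1 = nn.Conv2d(128, 64, 3, padding=1)
        # Per-level heads: 1 objectness score + 4 box offsets per anchor.
        self.head_small = nn.Conv2d(64, anchors_per_level * 5, 1)    # small anchors, high res
        self.head_medium = nn.Conv2d(128, anchors_per_level * 5, 1)  # medium anchors
        self.head_large = nn.Conv2d(256, anchors_per_level * 5, 1)   # large anchors, low res

    def forward(self, x):
        c1 = self.enc1(x)   # 1/2 resolution
        c2 = self.enc2(c1)  # 1/4 resolution
        c3 = self.enc3(c2)  # 1/8 resolution
        # Upsample high-level features and fuse them with the lower-level maps.
        p2 = F.relu(self.dec2(F.interpolate(c3, size=c2.shape[-2:], mode="nearest")) + c2)
        p1 = F.relu(self.dec1(F.interpolate(p2, size=c1.shape[-2:], mode="nearest")) + c1)
        # Anchors of different sizes are predicted at different resolutions.
        return self.head_small(p1), self.head_medium(p2), self.head_large(c3)

if __name__ == "__main__":
    net = ConvDeconvProposalNet()
    small, medium, large = net(torch.randn(1, 3, 256, 256))
    print(small.shape, medium.shape, large.shape)  # per-level proposal maps

The key point of the sketch is the split of anchor sizes across levels: each head emits proposal maps at its own resolution, so small objects are handled where spatial detail is preserved, while the upsampled deep features carry the summarized semantics down to those shallow maps.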
Date of Conference: 17-20 September 2017
Date Added to IEEE Xplore: 22 February 2018
Electronic ISSN: 2381-8549
Conference Location: Beijing, China

