Abstract
State-of-the-art object classifiers, finetuned on a domain-specific dataset from a model pretrained on a large dataset such as ImageNet, can accurately classify well-localized object images. However, such classifiers often fail on poorly localized images: images with excessive surrounding context, heavily occluded or partially visible objects, and off-center objects. In this paper, we propose a two-stage training scheme to improve the classification of such noisy detections, which are often produced by low-compute algorithms, such as motion-based background removal, that run on the edge. The proposed pipeline first trains a classifier from scratch with extreme image augmentation, then finetunes it in a second stage. The first stage incorporates substantial contextual information around each object, given access to the corresponding full images. This stage works very well for classifying poorly localized input images but generates many false positives by classifying non-object images as objects. To reduce these false positives, the second stage trains on the tight ground-truth bounding boxes (as is done traditionally), using the model from the first stage as the initialization and adjusting its weights very slowly. To demonstrate the efficacy of our approach, we curated a new classification dataset for poorly localized images: the noisy PASCAL VOC 2007 test dataset. Using this dataset, we show that the proposed two-stage training scheme significantly improves the accuracy of the trained classifier on both well-localized and poorly localized object images.
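Below is a minimal sketch of how such a two-stage pipeline could be implemented in PyTorch. The `expanded_crop` augmentation, the `run_stage` helper, the data loaders, and all learning rates and epoch counts are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of the two-stage scheme described in the abstract, assuming a
# PyTorch setup; crop-expansion factor, learning rates, and epoch counts
# are placeholders rather than the paper's reported hyperparameters.
import torch
import torch.nn as nn
import torchvision


def expanded_crop(image, box, max_expand=2.0):
    """Stage-1 style augmentation: crop a region that keeps the object but
    adds a random amount of surrounding context from the full image."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    s = 1.0 + (max_expand - 1.0) * torch.rand(1).item()  # random expansion factor
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    left = max(0, int(cx - w * s / 2))
    top = max(0, int(cy - h * s / 2))
    right = min(image.width, int(cx + w * s / 2))
    bottom = min(image.height, int(cy + h * s / 2))
    return image.crop((left, top, right, bottom))


def run_stage(model, loader, lr, epochs, device="cpu"):
    """Shared training loop; only the data (loose vs. tight crops) and the
    learning rate differ between the two stages."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model


# Stage 1: train from scratch on context-heavy (expanded) crops.
model = torchvision.models.resnet18(num_classes=20)
# model = run_stage(model, loose_crop_loader, lr=1e-2, epochs=90)   # hypothetical loader

# Stage 2: finetune the stage-1 model on tight ground-truth crops with a
# much smaller learning rate so its weights are adjusted very slowly.
# model = run_stage(model, tight_crop_loader, lr=1e-4, epochs=20)   # hypothetical loader
```

A backbone other than ResNet-18 (e.g., a mobile-friendly architecture) could be dropped in without changing the two-stage structure.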