
Training Wide Residual Hashing from Scratch

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12307)

Abstract

Deep supervised hashing aims to encode high-dimensional data into compact, low-dimensional hash codes, which is important for solving large-scale image retrieval problems. In recent years, deep supervised hashing methods have achieved promising results. However, these methods fall short of expectations unless they are fine-tuned from off-the-shelf networks pre-trained on large-scale classification datasets. To cope with this problem, we rethink the fine-tuning paradigm in deep supervised hashing and propose training deep supervised hashing networks from scratch. Based on this new perspective, we propose a wide residual hashing model trained from scratch, which greatly reduces training time and model size. To the best of our knowledge, our method is the first framework that can train deep hashing networks from scratch without losing performance. Trained from scratch, the model has far fewer parameters, and our results are superior to those of state-of-the-art hashing algorithms. We hope the insights in this paper will open a new avenue for learning deep hash codes from scratch and will transfer to further tasks not explored in this work.
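The retrieval pipeline the abstract refers to can be sketched in a few lines. This is an illustrative toy example, not the authors' model: a deep hashing network maps each image to a real-valued embedding (here simulated with random vectors), the embedding is binarized into a compact hash code, and retrieval ranks database items by Hamming distance to the query code.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(embeddings):
    """Turn real-valued network outputs into {0, 1} hash codes."""
    return (np.asarray(embeddings) > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    distances = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(distances, kind="stable")

# Stand-in for network outputs: 5 database items with 16-bit codes.
db_embeddings = rng.standard_normal((5, 16))
db_codes = binarize(db_embeddings)

# A query identical to item 3 has Hamming distance 0 and is retrieved first.
ranking = hamming_rank(db_codes[3], db_codes)
print(ranking)
```

The point of hashing for retrieval is that the Hamming-distance comparison above runs on bit vectors, so matching against millions of database codes is far cheaper than comparing real-valued embeddings.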

This work was supported by the National Natural Science Foundation of China under Grant 61806220 and the Frontier Science and Technology Innovation Project of Army Engineering University of PLA.


Notes

  1. http://www.cs.toronto.edu/~kriz/cifar.html.


Author information

Corresponding author

Correspondence to Yang Li.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Y., Wang, J., Miao, Z., Wang, J., Zhang, R. (2020). Training Wide Residual Hashing from Scratch. In: Peng, Y., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2020. Lecture Notes in Computer Science, vol 12307. Springer, Cham. https://doi.org/10.1007/978-3-030-60636-7_20


  • DOI: https://doi.org/10.1007/978-3-030-60636-7_20


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60635-0

  • Online ISBN: 978-3-030-60636-7

  • eBook Packages: Computer Science
