
Residual Vector Product Quantization for Approximate Nearest Neighbor Search

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13280)

Abstract

Product quantization is a popular approach to approximate nearest neighbor search: it decomposes the vector space into the Cartesian product of several subspaces and constructs a separate codebook for each subspace. The construction of the codebooks dominates the quantization error, which directly impacts retrieval accuracy. In this paper, we propose a novel quantization method, residual vector product quantization (RVPQ), which builds a residual hierarchy consisting of several ordered residual codebooks for each subspace. The proposed method minimizes the quantization error by jointly optimizing all the codebooks in each subspace with an efficient mini-batch stochastic gradient descent algorithm. Furthermore, an efficient encoding method based on H-variable beam search is proposed to reduce the computational complexity of encoding with negligible loss of accuracy. Extensive experiments show that the proposed method outperforms the state of the art in retrieval accuracy while retaining comparable computational complexity.
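
To make the structure described in the abstract concrete, the sketch below is a minimal, illustrative residual-product quantizer in Python: each subspace gets several ordered residual codebooks, and encoding keeps the H best partial codes per layer (beam search) rather than a purely greedy assignment. This is not the authors' RVPQ implementation: the class and parameter names (RVPQSketch, n_subspaces, n_layers, n_centroids, beam_width) are invented for illustration, and the codebooks are trained here with plain k-means on successive residuals instead of the paper's joint mini-batch SGD optimization.

```python
# Illustrative sketch only (assumed names, not the authors' implementation):
# a residual product quantizer with per-subspace residual codebooks and
# beam-search encoding, trained with a toy k-means on successive residuals.
import numpy as np


class RVPQSketch:
    def __init__(self, n_subspaces=4, n_layers=2, n_centroids=256, beam_width=4):
        self.M = n_subspaces   # number of product subspaces
        self.L = n_layers      # ordered residual codebooks per subspace
        self.K = n_centroids   # centroids per codebook
        self.H = beam_width    # beam width kept during encoding
        self.codebooks = None  # array of shape (M, L, K, d // M)

    def fit(self, X, n_iter=20, seed=0):
        """Train L residual codebooks per subspace with a toy k-means
        (fine for small training sets; requires len(X) >= n_centroids)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        ds = d // self.M
        self.codebooks = np.zeros((self.M, self.L, self.K, ds), dtype=np.float32)
        for m in range(self.M):
            residual = X[:, m * ds:(m + 1) * ds].astype(np.float32).copy()
            for l in range(self.L):
                C = residual[rng.choice(n, self.K, replace=False)].copy()
                for _ in range(n_iter):
                    assign = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
                    for k in range(self.K):
                        pts = residual[assign == k]
                        if len(pts):
                            C[k] = pts.mean(0)
                self.codebooks[m, l] = C
                # subtract the assigned centroids; the next layer quantizes what is left
                assign = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
                residual = residual - C[assign]
        return self

    def encode(self, x):
        """Encode one vector, keeping the H best partial codes per layer."""
        ds = x.shape[0] // self.M
        codes = np.zeros((self.M, self.L), dtype=np.int64)
        for m in range(self.M):
            target = x[m * ds:(m + 1) * ds].astype(np.float32)
            beam = [(target, [])]  # (remaining residual, chosen code indices)
            for l in range(self.L):
                candidates = []
                for res, prefix in beam:
                    d2 = ((res[None, :] - self.codebooks[m, l]) ** 2).sum(-1)
                    for k in np.argsort(d2)[: self.H]:
                        candidates.append((res - self.codebooks[m, l, k], prefix + [int(k)]))
                # keep the H partial encodings with the smallest residual energy
                candidates.sort(key=lambda c: float((c[0] ** 2).sum()))
                beam = candidates[: self.H]
            codes[m] = beam[0][1]
        return codes
```

A toy run might look like `codes = RVPQSketch(n_centroids=32).fit(X).encode(X[0])`, where `X` is a small float32 matrix with at least 32 rows and a dimensionality divisible by `n_subspaces`.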


Notes

  1. All datasets used in this section are available at http://corpus-texmex.irisa.fr/; a minimal loader sketch follows below.
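
The TEXMEX benchmarks hosted at that URL (e.g. SIFT1M, GIST1M) distribute base, query, and learning vectors in the .fvecs format: each vector is stored as a little-endian int32 dimension d followed by d float32 components. The reader below is a short sketch under that format assumption, not code from the paper; the path in the usage comment is hypothetical.

```python
# Minimal .fvecs reader for the TEXMEX datasets (format assumption:
# each vector is an int32 dimension d followed by d float32 values).
import numpy as np


def read_fvecs(path):
    """Load an .fvecs file into an (n, d) float32 array."""
    raw = np.fromfile(path, dtype=np.float32)
    d = raw[:1].view(np.int32)[0]                 # dimension stored as int32 before each vector
    return raw.reshape(-1, d + 1)[:, 1:].copy()   # drop the per-vector dimension column


# Hypothetical usage:
# base = read_fvecs("sift/sift_base.fvecs")       # e.g. shape (1000000, 128) for SIFT1M
```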


Acknowledgement

The work was supported by the National Natural Science Foundation of China (No. 61702130), Guangxi Natural Science Foundation (Nos. 2020GXNSFAA297186, 2020GXNSFAA159137), Guangxi Project of technology base and special talent (No. AD19110022), Guangxi Science and Technology Major Project (No. 2018AA32001).

Author information

Corresponding author

Correspondence to Zhi Xu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, Z., Niu, L., Meng, R., Zhao, L., Ji, J. (2022). Residual Vector Product Quantization for Approximate Nearest Neighbor Search. In: Gama, J., Li, T., Yu, Y., Chen, E., Zheng, Y., Teng, F. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2022. Lecture Notes in Computer Science (LNAI), vol 13280. Springer, Cham. https://doi.org/10.1007/978-3-031-05933-9_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-05933-9_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05932-2

  • Online ISBN: 978-3-031-05933-9

  • eBook Packages: Computer Science, Computer Science (R0)
