
On the Vulnerability of Hyperdimensional Computing-Based Classifiers to Adversarial Attacks

  • Conference paper
Network and System Security (NSS 2020)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12570)

Abstract

Hyperdimensional computing (HDC) has emerged as a brain-inspired in-memory computing architecture, exhibiting ultra-high energy efficiency, low latency, and strong robustness against hardware-induced bit errors. Nonetheless, state-of-the-art designs for HDC classifiers are mostly security-oblivious, raising concerns about their safety and immunity to adversarial inputs. In this paper, we study for the first time adversarial attacks on HDC classifiers and highlight that HDC classifiers can be vulnerable to even minimally perturbed adversarial samples. Specifically, using handwritten digit classification as an example, we construct an HDC classifier and formulate a grey-box attack problem, where the attacker’s goal is to mislead the target HDC classifier into producing erroneous prediction labels while keeping the amount of added perturbation noise as small as possible. Then, we propose a modified genetic algorithm to generate adversarial samples within a reasonably small number of queries, and further apply critical gene crossover and perturbation adjustment to limit the amount of perturbation noise. Our results show that adversarial images can mislead the HDC classifier into producing wrong prediction labels with high probability (i.e., 78% when the HDC classifier uses a fixed majority rule for decision).
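The attack described above is query-based: the adversary repeatedly submits candidate images and observes only the predicted labels. The following is a rough, hypothetical sketch of such a genetic attack loop; the function names, fitness choice, and hyperparameters are illustrative assumptions, not the paper's exact \(\mathsf {GA}\)-\(\mathsf {CGC}\)-\(\mathsf {PA}\) algorithm.

```python
import numpy as np

def genetic_attack(image, query_model, true_label, pop_size=20,
                   generations=100, eps=0.1, mutation_rate=0.05, seed=0):
    """Evolve bounded perturbations until the queried model mislabels the image.

    Illustrative sketch only: the real attack additionally uses critical gene
    crossover and perturbation adjustment to keep the noise small.
    """
    rng = np.random.default_rng(seed)
    # Initialise a population of random perturbations bounded by eps.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)
    for _ in range(generations):
        candidates = np.clip(image + pop, 0.0, 1.0)
        labels = np.array([query_model(c) for c in candidates])
        # Success: some candidate already fools the classifier.
        hits = np.flatnonzero(labels != true_label)
        if hits.size:
            return candidates[hits[0]]
        # Fitness: prefer smaller perturbations (a stand-in for the paper's
        # objective of limiting perturbation noise).
        fitness = -np.linalg.norm(pop.reshape(pop_size, -1), axis=1)
        parents = pop[np.argsort(fitness)[-(pop_size // 2):]]
        # Uniform crossover between random parent pairs, then mutation.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size,) + image.shape) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        mutate = rng.random(children.shape) < mutation_rate
        children[mutate] += rng.uniform(-eps, eps, size=int(mutate.sum()))
        pop = np.clip(children, -eps, eps)
    return None  # attack failed within the query budget
```

Because the loop only ever calls `query_model` on candidate inputs, it matches the grey-box setting: no gradients or internal hypervectors of the HDC classifier are needed.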

References

  1. Ge, L., Parhi, K.K.: Classification using hyperdimensional computing: a review. IEEE Circuits Syst. Mag. 20(2), 30–47 (2020)

  2. Karunaratne, G., Le Gallo, M., Cherubini, G., Benini, L., Rahimi, A., Sebastian, A.: In-memory hyperdimensional computing. Nat. Electron., June 2020

  3. Kanerva, P.: Hyperdimensional computing: an introduction to computing in distributed representation. Cogn. Comput. 1, 139–159 (2009)

  4. Imani, M., Morris, J., Messerly, J., Shu, H., Deng, Y., Rosing, T.: BRIC: locality-based encoding for energy-efficient brain-inspired hyperdimensional computing. In: DAC (2019)

  5. Imani, M., Huang, C., Kong, D., Rosing, T.: Hierarchical hyperdimensional computing for energy efficient classification. In: DAC (2018)

  6. Benatti, S., Montagna, F., Kartsch, V., Rahimi, A., Rossi, D., Benini, L.: Online learning and classification of EMG-based gestures on a parallel ultra-low power platform using hyperdimensional computing. IEEE Trans. Biomed. Circuits Syst. 13(3), 516–528 (2019)

  7. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  8. Imani, M., Hwang, J., Rosing, T., Rahimi, A., Rabaey, J.M.: Low-power sparse hyperdimensional encoder for language recognition. IEEE Design Test 34(6), 94–101 (2017)

  9. Chang, C.-Y., Chuang, Y.-C., Wu, A.-Y.A.: Task-projected hyperdimensional computing for multi-task learning. In: Artificial Intelligence Applications and Innovations (2020)

  10. Chang, E., Rahimi, A., Benini, L., Wu, A.A.: Hyperdimensional computing-based multimodality emotion recognition with physiological signals. In: IEEE International Conference on Artificial Intelligence Circuits and Systems (2019)

  11. Kleyko, D., Osipov, E., Papakonstantinou, N., Vyatkin, V.: Hyperdimensional computing in industrial systems: the use-case of distributed fault isolation in a power plant. IEEE Access 6, 30766–30777 (2018)

  12. Burrello, A., Schindler, K., Benini, L., Rahimi, A.: Hyperdimensional computing with local binary patterns: one-shot learning of seizure onset and identification of ictogenic brain regions using short-time iEEG recordings. IEEE Trans. Biomed. Eng. 67(2), 601–613 (2020)

  13. Mitrokhin, A., Sutor, P., Fermüller, C., Aloimonos, Y.: Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active perception. Sci. Robot. 4(30), 1–10 (2019)

  14. Plate, T.A.: Holographic reduced representations. IEEE Trans. Neural Netw. 6(3), 623–641 (1995)

  15. Frady, E.P., Kleyko, D., Sommer, F.T.: A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput. 30(6), 1449–1513 (2018)

  16. Kleyko, D., Rahimi, A., Rachkovskij, D., Osipov, E., Rabaey, J.: Classification and recall with binary hyperdimensional computing: tradeoffs in choice of density and mapping characteristics. IEEE Trans. Neural Netw. Learn. Syst. 29, 1–19 (2018)

  17. LeCun, Y., Cortes, C., Burges, C.J.C.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/

  18. Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of Machine Learning. MIT Press (2018)

  19. Bhandari, D., Murthy, C., Pal, S.K.: Genetic algorithm with elitist model and its convergence. Int. J. Pattern Recognit. Artif. Intell. 10(06), 731–747 (1996)

  20. Ren, K., Zheng, T., Qin, Z., Liu, X.: Adversarial attacks and defenses in deep learning. Engineering 6(3), 346–360 (2020)

  21. Alzantot, M., Sharma, Y., Chakraborty, S., Zhang, H., Hsieh, C.-J., Srivastava, M.B.: GenAttack: practical black-box attacks with gradient-free optimization. In: Genetic and Evolutionary Computation Conference (2019)

  22. Liu, X., Luo, Y., Zhang, X., Zhu, Q.: A black-box attack on neural networks based on swarm evolutionary algorithm. Comput. Secur. 85, 89–106 (2019)

  23. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018)

  24. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: S&P (2017)

  25. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: AsiaCCS (2017)

  26. Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.-J.: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: AISec (2017)

  27. Tu, C.-C., et al.: AutoZOOM: autoencoder-based zeroth order optimization method for attacking black-box neural networks. In: AAAI (2019)

  28. Brendel, W., Rauber, J., Bethge, M.: Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: ICLR (2018)

  29. Narodytska, N., Kasiviswanathan, S.: Simple black-box adversarial attacks on deep neural networks. In: CVPR Workshops (2017)

  30. Xu, W., Qi, Y., Evans, D.: Automatically evading classifiers: a case study on PDF malware classifiers. In: NDSS (2016)

  31. Khaleghi, B., Imani, M., Rosing, T.: Prive-HD: privacy-preserved hyperdimensional computing. In: DAC (2020)

  32. Imani, M., et al.: SemiHD: semi-supervised learning using hyperdimensional computing. In: ICCAD (2019)

  33. Imani, M., Messerly, J., Wu, F., Pi, W., Rosing, T.: A binary learning framework for hyperdimensional computing. In: DATE (2019)

  34. Imani, M., Rahimi, A., Kong, D., Rosing, T., Rabaey, J.M.: Exploring hyperdimensional associative memory. In: HPCA (2017)

  35. Imani, M., Salamat, S., Gupta, S., Huang, J., Rosing, T.: FACH: FPGA-based acceleration of hyperdimensional computing by reducing computational complexity. In: ASPDAC (2019)

  36. Salamat, S., Imani, M., Khaleghi, B., Rosing, T.: F5-HD: fast flexible FPGA-based framework for refreshing hyperdimensional computing. In: FPGA (2019)


Author information

Correspondence to Shaolei Ren.

Appendix: Additional Results

Query Count for Attacks on HDC Classifier with FMR. We calculate the query counts for the successfully attacked images and show the results in a box plot in Fig. 6. The median query count for every digit is below 5,000, which is reasonably good query efficiency for black-/grey-box attacks [21].

Fig. 6. Box plot of query count needed by \(\mathsf {GA}\)-\(\mathsf {CGC}\)-\(\mathsf {PA}\) for the HDC classifier with FMR. Each box shows the minimum, 25th percentile, median, 75th percentile, and maximum, excluding outliers.

Table 2. Perturbation for Images Shown in Fig. 8 and Fig. 9. The values for Fig. 9 are shown in parentheses.

Adversarial Examples. Finally, we visually show some adversarial examples for the HDC classifier with FMR. In the hard case, the benign images would have 100% per-image accuracy had the HDC classifier used RMR. In the vulnerable case, the benign images are correctly classified by the HDC classifier with FMR, but would have less than 100% per-image accuracy had the classifier used RMR. That is, the vulnerable images are borderline images that are already hard for the HDC classifier to classify correctly.
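The per-image accuracy used to separate the hard and vulnerable cases can be pictured as the success rate of a randomised classifier over repeated runs on one image. The sketch below is a hypothetical illustration of that notion; the names and the toy decision rule are assumptions, not the paper's RMR implementation.

```python
import numpy as np

def per_image_accuracy(classify_once, image, true_label, trials=1000, seed=0):
    """Fraction of randomised inference runs that return the true label.

    classify_once(image, rng) models one run of a classifier whose decision
    rule depends on randomness (e.g., a random majority rule).
    """
    rng = np.random.default_rng(seed)
    hits = sum(classify_once(image, rng) == true_label for _ in range(trials))
    return hits / trials
```

Under this reading, a "hard" benign image scores 1.0 (every randomised run is correct), while a "vulnerable" one scores below 1.0, i.e., it already sits near the decision boundary.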

The benign images, perturbation noise, and adversarial images for the hard and vulnerable cases are shown in Fig. 8 and Fig. 9, respectively. The amount of perturbation noise for the two cases is given in Table 2.

It is more difficult to launch successful attacks in the hard case than in the vulnerable case. Thus, as expected, the perturbation noise added by \(\mathsf {GA}\)-\(\mathsf {CGC}\)-\(\mathsf {PA}\) in the hard case is generally larger than in the vulnerable case. In particular, in the vulnerable case, the adversarial image is almost indistinguishable from the corresponding benign image to human perception. This is also reflected in the perturbation noise figures and in Table 2.
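This excerpt does not state the exact metric behind the perturbation values in Table 2, but a common way to quantify such noise is the ℓ2 or ℓ∞ norm of the pixel-wise difference between the benign and adversarial images; the helper below is an illustrative assumption, not the paper's definition.

```python
import numpy as np

def perturbation_norms(benign, adversarial):
    """l2 and l-infinity norms of the perturbation between two images."""
    diff = np.asarray(adversarial, dtype=float) - np.asarray(benign, dtype=float)
    return {"l2": float(np.linalg.norm(diff)),
            "linf": float(np.abs(diff).max())}
```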

Fig. 7. Attacks on the HDC classifier with RMR. The first row shows the original images, the second row the perturbation noise added by the attacker, and the third row the adversarial images; the corresponding misclassified labels are given at the top of each image.

Fig. 8. Attacks on the HDC classifier with FMR (hard). The first row shows the original images, the second row the perturbation noise added by the attacker, and the third row the adversarial images; the corresponding misclassified labels are given at the top of each image.

Fig. 9. Attacks on the HDC classifier with FMR (vulnerable). The first row shows the original images, the second row the perturbation noise added by the attacker, and the third row the adversarial images; the corresponding misclassified labels are given at the top of each image.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, F., Ren, S. (2020). On the Vulnerability of Hyperdimensional Computing-Based Classifiers to Adversarial Attacks. In: Kutyłowski, M., Zhang, J., Chen, C. (eds) Network and System Security. NSS 2020. Lecture Notes in Computer Science, vol. 12570. Springer, Cham. https://doi.org/10.1007/978-3-030-65745-1_22

  • DOI: https://doi.org/10.1007/978-3-030-65745-1_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65744-4

  • Online ISBN: 978-3-030-65745-1

  • eBook Packages: Computer Science, Computer Science (R0)
