Abstract
Hyperdimensional computing (HDC) has emerged as a brain-inspired in-memory computing architecture, exhibiting ultra-high energy efficiency, low latency, and strong robustness against hardware-induced bit errors. Nonetheless, state-of-the-art HDC classifier designs are mostly security-oblivious, raising concerns about their safety and immunity to adversarial inputs. In this paper, we study for the first time adversarial attacks on HDC classifiers and highlight that HDC classifiers can be vulnerable to even minimally perturbed adversarial samples. Specifically, using handwritten digit classification as an example, we construct an HDC classifier and formulate a grey-box attack problem, in which the attacker's goal is to mislead the target HDC classifier into producing erroneous prediction labels while keeping the amount of added perturbation noise as small as possible. We then propose a modified genetic algorithm to generate adversarial samples within a reasonably small number of queries, and further apply critical gene crossover and perturbation adjustment to limit the amount of perturbation noise. Our results show that adversarial images can mislead the HDC classifier into producing wrong prediction labels with high probability (i.e., 78% when the HDC classifier uses a fixed majority rule for decision).
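To make the target of the attack concrete, the HDC classification pipeline can be sketched as follows. This is a minimal illustrative implementation, assuming a simple position-based bipolar encoding and dot-product similarity; the paper's actual encoder, dimensionality, and decision rule may differ, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (a common choice in HDC)
N_PIXELS = 28 * 28  # MNIST-sized input

# One random bipolar "position" hypervector per pixel (illustrative encoding).
POS = rng.choice([-1, 1], size=(N_PIXELS, D))

def encode(img):
    """Bundle the position hypervectors of a binarized image's active pixels."""
    s = POS[img.reshape(-1) > 0].sum(axis=0)
    return np.where(s >= 0, 1, -1)  # re-binarize; ties broken toward +1

def train(images, labels, n_classes):
    """Class hypervectors: bundle the encodings of all training images per class."""
    acc = np.zeros((n_classes, D))
    for img, y in zip(images, labels):
        acc[y] += encode(img)
    return np.where(acc >= 0, 1, -1)

def classify(img, class_hvs):
    """Predict the class with the most similar hypervector (max dot product,
    equivalent to min Hamming distance for bipolar vectors)."""
    return int(np.argmax(class_hvs @ encode(img)))
```

Because prediction reduces to a nearest-hypervector lookup, an attacker who can query `classify` only needs to nudge the encoded image across a similarity boundary, which is what the genetic attack below exploits.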
References
Ge, L., Parhi, K.K.: Classification using hyperdimensional computing: a review. IEEE Circuits Syst. Mag. 20(2), 30–47 (2020)
Karunaratne, G., Le Gallo, M., Cherubini, G., Benini, L., Rahimi, A., Sebastian, A.: In-memory hyperdimensional computing. Nat. Electron. 3, 327–337 (2020)
Kanerva, P.: Hyperdimensional computing: an introduction to computing in distributed representation. Cogn. Comput. 1, 139–159 (2009)
Imani, M., Morris, J., Messerly, J., Shu, H., Deng, Y., Rosing, T.: BRIC: Locality-based encoding for energy-efficient brain-inspired hyperdimensional computing. In: DAC (2019)
Imani, M., Huang, C., Kong, D., Rosing, T.: Hierarchical hyperdimensional computing for energy efficient classification. In: DAC (2018)
Benatti, S., Montagna, F., Kartsch, V., Rahimi, A., Rossi, D., Benini, L.: Online learning and classification of EMG-based gestures on a parallel ultra-low power platform using hyperdimensional computing. IEEE Trans. Biomed. Circuits Syst. 13(3), 516–528 (2019)
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org
Imani, M., Hwang, J., Rosing, T., Rahimi, A., Rabaey, J.M.: Low-power sparse hyperdimensional encoder for language recognition. IEEE Design Test 34(6), 94–101 (2017)
Chang, C.-Y., Chuang, Y.-C., Wu, A.-Y.A.: Task-projected hyperdimensional computing for multi-task learning. In: Artificial Intelligence Applications and Innovations (2020)
Chang, E., Rahimi, A., Benini, L., Wu, A.A.: Hyperdimensional computing-based multimodality emotion recognition with physiological signals. In: IEEE International Conference on Artificial Intelligence Circuits and Systems (2019)
Kleyko, D., Osipov, E., Papakonstantinou, N., Vyatkin, V.: Hyperdimensional computing in industrial systems: the use-case of distributed fault isolation in a power plant. IEEE Access 6, 30766–30777 (2018)
Burrello, A., Schindler, K., Benini, L., Rahimi, A.: Hyperdimensional computing with local binary patterns: one-shot learning of seizure onset and identification of ictogenic brain regions using short-time iEEG recordings. IEEE Trans. Biomed. Eng. 67(2), 601–613 (2020)
Mitrokhin, A., Sutor, P., Fermüller, C., Aloimonos, Y.: Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active perception. Sci. Robotics 4(30), 1–10 (2019)
Plate, T.A.: Holographic reduced representations. IEEE Trans. Neural Networks 6(3), 623–641 (1995)
Frady, E.P., Kleyko, D., Sommer, F.T.: A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput. 30(6), 1449–1513 (2018)
Kleyko, D., Rahimi, A., Rachkovskij, D., Osipov, E., Rabaey, J.: Classification and recall with binary hyperdimensional computing: tradeoffs in choice of density and mapping characteristics. IEEE Trans. Neural Netw. Learn. Syst. 29, 1–19 (2018)
LeCun, Y., Cortes, C., Burges, C.J.C.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/
Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of Machine Learning. MIT Press (2018)
Bhandari, D., Murthy, C., Pal, S.K.: Genetic algorithm with elitist model and its convergence. Int. J. Pattern Recognit. Artif. Intell. 10(06), 731–747 (1996)
Ren, K., Zheng, T., Qin, Z., Liu, X.: Adversarial attacks and defenses in deep learning. Engineering 6(3), 346–360 (2020)
Alzantot, M., Sharma, Y., Chakraborty, S., Zhang, H., Hsieh, C.-J., Srivastava, M.B.: GenAttack: practical black-box attacks with gradient-free optimization. In: Genetic and Evolutionary Computation Conference (2019)
Liu, X., Luo, Y., Zhang, X., Zhu, Q.: A black-box attack on neural networks based on swarm evolutionary algorithm. Comput. Secur. 85, 89–106 (2019)
Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: S&P (2017)
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: AsiaCCS (2017)
Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.-J.: Zoo: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: AISec (2017)
Tu, C.-C., et al.: AutoZoom: autoencoder-based zeroth order optimization method for attacking black-box neural networks. In: AAAI (2019)
Brendel, W., Rauber, J., Bethge, M.: Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: ICLR (2018)
Narodytska, N., Kasiviswanathan, S.: Simple black-box adversarial attacks on deep neural networks. In: CVPR Workshops (2017)
Xu, W., Qi, Y., Evans, D.: Automatically evading classifiers: a case study on PDF malware classifiers. In: NDSS (2016)
Khaleghi, B., Imani, M., Rosing, T.: Prive-HD: privacy-preserved hyperdimensional computing. In: DAC (2020)
Imani, M., et al.: SemiHD: semi-supervised learning using hyperdimensional computing. In: ICCAD (2019)
Imani, M., Messerly, J., Wu, F., Pi, W., Rosing, T.: A binary learning framework for hyperdimensional computing. In: DATE (2019)
Imani, M., Rahimi, A., Kong, D., Rosing, T., Rabaey, J.M.: Exploring hyperdimensional associative memory. In: HPCA (2017)
Imani, M., Salamat, S., Gupta, S., Huang, J., Rosing, T.: Fach: FPGA-based acceleration of hyperdimensional computing by reducing computational complexity. In: ASPDAC (2019)
Salamat, S., Imani, M., Khaleghi, B., Rosing, T.: F5-HD: fast flexible FPGA-based framework for refreshing hyperdimensional computing. In: FPGA (2019)
Appendix: Additional Results
Query Count for Attacks on HDC Classifier with FMR. We calculate the query counts for the successfully attacked images and show the results in a box plot in Fig. 6. The median query count for every digit is below 5,000, which is a reasonably good query efficiency for black-/grey-box attacks [21].
Adversarial Examples. Finally, we visually present some adversarial examples for the HDC classifier with FMR. In the hard case, benign images would have a 100% per-image accuracy had the HDC classifier used RMR. In the vulnerable case, benign images are correctly classified by the HDC classifier with FMR, but would have less than 100% per-image accuracy had the classifier used RMR. That is, the vulnerable images are borderline images that are already hard for the HDC classifier to classify correctly.
The benign images, perturbation noise, and adversarial images for hard and vulnerable cases are shown in Fig. 8 and Fig. 9, respectively. Also, we give the amount of perturbation noises for the two cases in Table 2.
It is more difficult to launch successful attacks in the hard case than in the vulnerable case. Thus, as expected, the perturbation noise added by \(\mathsf {GA}\)-\(\mathsf {CGC}\)-\(\mathsf {PA}\) in the hard case is generally larger than in the vulnerable case. In particular, in the vulnerable case, the adversarial image is almost indistinguishable from the corresponding benign image to human perception. This is also reflected in the perturbation noise figures and in Table 2.
© 2020 Springer Nature Switzerland AG
Cite this paper
Yang, F., Ren, S. (2020). On the Vulnerability of Hyperdimensional Computing-Based Classifiers to Adversarial Attacks. In: Kutyłowski, M., Zhang, J., Chen, C. (eds) Network and System Security. NSS 2020. Lecture Notes in Computer Science(), vol 12570. Springer, Cham. https://doi.org/10.1007/978-3-030-65745-1_22
Print ISBN: 978-3-030-65744-4
Online ISBN: 978-3-030-65745-1