DOI: 10.1145/3400302.3415696
Research Article · Public Access

SynergicLearning: neural network-based feature extraction for highly-accurate hyperdimensional learning

Published: 17 December 2020

ABSTRACT

Machine learning models differ in accuracy, computational/memory complexity, training time, and adaptability, among other characteristics. For example, neural networks (NNs) are well known for their high accuracy, which stems from the quality of their automatic feature extraction, while brain-inspired hyperdimensional (HD) learning models are known for their fast training, computational efficiency, and adaptability. This work presents a hybrid, synergic machine learning model that excels at all of these characteristics and is suitable for incremental, online learning on a chip. The proposed model comprises an NN and a classifier: the NN acts as a feature extractor and is trained specifically to work well with the classifier, which employs the HD computing framework. This work also presents a parameterized hardware implementation of the feature extraction and classification components, along with a compiler that maps any given NN and/or classifier onto this hardware. The proposed hybrid model matches the accuracy of NNs (within ±1%) while achieving at least 10% higher accuracy than HD learning models. Additionally, the end-to-end hardware realization of the hybrid model improves power efficiency by 1.60x and latency by 2.13x compared to state-of-the-art, high-performance HD learning implementations. These results have profound implications for the application of such synergic models to challenging cognitive tasks.
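
To make the pipeline concrete, the following is a minimal sketch of the NN-plus-HD idea described above, not the authors' implementation: the `FeatureExtractor` architecture, the hypervector dimensionality `D = 10000`, the random-projection encoding, and the ISOLET-style input size are all illustrative assumptions.

```python
import torch
import torch.nn as nn

D = 10_000          # hypervector dimensionality (assumed; typical in HD computing)
NUM_CLASSES = 26    # e.g., 26 spoken letters, ISOLET-style (assumed)
FEAT_DIM = 128      # width of the NN feature layer (assumed)

class FeatureExtractor(nn.Module):
    """Small MLP standing in for the NN-based feature extractor."""
    def __init__(self, in_dim: int, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Fixed random projection: one simple way to encode real-valued NN
# features into bipolar {-1, +1} hypervectors.
proj = torch.randn(FEAT_DIM, D)

def encode(features: torch.Tensor) -> torch.Tensor:
    return torch.sign(features @ proj)          # (batch, D)

def train_hd(hvs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Bundle (sum) each class's hypervectors into a class prototype.
    This single additive pass is what makes HD (re)training cheap."""
    prototypes = torch.zeros(NUM_CLASSES, D)
    prototypes.index_add_(0, labels, hvs)       # labels must be int64
    return prototypes

def classify(hvs: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Predict the class whose prototype is most cosine-similar."""
    sims = nn.functional.cosine_similarity(
        hvs.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)

# Usage with random stand-in data (617 features per sample, as in ISOLET):
x = torch.randn(32, 617)
y = torch.randint(0, NUM_CLASSES, (32,))
extractor = FeatureExtractor(617)
with torch.no_grad():
    protos = train_hd(encode(extractor(x)), y)
    preds = classify(encode(extractor(x)), protos)
```

Note that once the prototypes exist, new examples can be folded in with another `index_add_` call, with no gradient-based retraining; this additive update is what gives HD classifiers their quick, incremental on-chip learning, while the NN front end supplies the feature quality the abstract credits for the accuracy gains.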


Published in

ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design
November 2020, 1396 pages
ISBN: 9781450380263
DOI: 10.1145/3400302
General Chair: Yuan Xie

            Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 457 of 1,762 submissions, 26%

