
Accelerating Hyperdimensional Computing on FPGAs by Exploiting Computational Reuse


Abstract:

Brain-inspired hyperdimensional (HD) computing emulates cognition by computing with long vectors. HD computing consists of two main modules: encoding and associative search. The encoding module maps inputs into high-dimensional vectors, called hypervectors. The associative search finds the closest match between the trained model (a set of class hypervectors) and a query hypervector by calculating a similarity metric. To perform reasoning for practical classification problems, HD needs to store a non-binary model and use costly similarity metrics such as cosine. In this article, we propose an FPGA-based acceleration of HD computing that exploits computational reuse (HD-Core), which significantly improves the computation efficiency of both the encoding and associative search modules. We observe that consecutive inputs have high similarity, which can be used to reduce the complexity of the encoding step: the previously encoded hypervector is reused to eliminate redundant operations when encoding the current input. HD-Core additionally eliminates the majority of multiplication operations by clustering the class hypervector values and sharing the clustered values among all class hypervectors. Our evaluations on several classification problems show that HD-Core provides 4.4x energy efficiency improvement and 4.8x speedup over an optimized GPU implementation while ensuring the same quality of classification. HD-Core provides 2.4x higher throughput than the state-of-the-art FPGA implementation; on average, 40 percent of this improvement comes directly from enabling computation reuse in the encoding module, and the rest comes from computation reuse in the associative search module.
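To make the two reuse ideas concrete, the following Python sketch illustrates them; it is not the paper's implementation, and the record-based encoder, dimensionality, number of quantization levels, and clustering scheme are illustrative assumptions. It shows how a previously encoded hypervector can be updated incrementally when only a few input features change, and how clustering class hypervector values replaces most per-dimension multiplications in the similarity search.

# A minimal Python sketch of the two reuse ideas, assuming a record-based
# encoder (ID * level hypervectors) and cosine-style similarity; all
# parameters below are illustrative, not the paper's actual configuration.
import numpy as np

D = 10_000          # hypervector dimensionality (assumed)
NUM_FEATURES = 64   # number of input features (assumed)
NUM_LEVELS = 16     # feature-value quantization levels (assumed)
rng = np.random.default_rng(0)

# One random bipolar ID hypervector per feature position.
ID = rng.choice([-1, 1], size=(NUM_FEATURES, D))
# One random bipolar level hypervector per quantized feature value.
LEVEL = rng.choice([-1, 1], size=(NUM_LEVELS, D))

def quantize(x):
    """Map features in [0, 1) to integer level indices."""
    return np.clip((x * NUM_LEVELS).astype(int), 0, NUM_LEVELS - 1)

def encode(x):
    """Full encoding: sum of ID[i] * LEVEL[q[i]] over all features."""
    q = quantize(x)
    return np.sum(ID * LEVEL[q], axis=0)

def encode_with_reuse(x, x_prev, h_prev):
    """Reuse the previous encoding: recompute only the features that changed."""
    q, q_prev = quantize(x), quantize(x_prev)
    h = h_prev.copy()
    for i in np.nonzero(q != q_prev)[0]:
        h -= ID[i] * LEVEL[q_prev[i]]   # drop the stale contribution
        h += ID[i] * LEVEL[q[i]]        # add the updated contribution
    return h

def cluster_model(classes, k=8):
    """Quantize class hypervector values to k shared values (simple uniform binning)."""
    centers = np.linspace(classes.min(), classes.max(), k)
    idx = np.abs(classes[..., None] - centers).argmin(axis=-1)
    return centers, idx

def similarity_with_reuse(query, centers, idx):
    """Dot product per class using one multiplication per shared value."""
    sims = []
    for idx_row in idx:                         # one row of value indices per class
        partial = np.zeros(len(centers))
        np.add.at(partial, idx_row, query)      # group-sum query elements by shared value
        sims.append(partial @ centers)          # k multiplications instead of D
    return np.array(sims)

# Example: classify a stream of inputs, reusing the previous encoding.
classes = rng.normal(size=(10, D))              # stand-in trained class hypervectors
centers, idx = cluster_model(classes)
x_prev = rng.random(NUM_FEATURES)
h_prev = encode(x_prev)
x = x_prev.copy()
x[:4] = rng.random(4)                           # consecutive input differs in a few features
h = encode_with_reuse(x, x_prev, h_prev)
sims = similarity_with_reuse(h, centers, idx) / np.linalg.norm(h)  # cosine up to class norms
print(int(np.argmax(sims)))

In this sketch, encode_with_reuse touches only the features whose quantized values changed relative to the previous input, and similarity_with_reuse performs one multiplication per shared value per class rather than one per dimension, mirroring the two sources of computational reuse described in the abstract.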
Published in: IEEE Transactions on Computers (Volume: 69, Issue: 8, 01 August 2020)
Page(s): 1159 - 1171
Date of Publication: 06 May 2020
