
A Design Strategy for the Efficient Implementation of Random Basis Neural Networks on Resource-Constrained Devices

Neural Processing Letters

Abstract

The deployment of connectionist models on resource-constrained, low-power embedded systems raises specific implementation issues. The paper presents a design strategy, aimed at low-end reconfigurable devices, for implementing the prediction operation supported by a single hidden-layer feedforward neural network (SLFN). The paper first shows that considerable efficiency gains can be obtained when hard-limiter thresholding operators implement the neurons' activation functions. Second, the analysis highlights the advantages of random basis networks, whose memory requirements are limited. Finally, the paper presents two architectural approaches to the effective support of SLFNs on CPLDs and low-end FPGAs; the alternatives differ in how they trade off area utilization against latency. Experiments confirm the effectiveness of both schemes, yielding two viable implementation options that satisfy the respective constraints, namely effective area utilization or low latency.
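To make the prediction operation concrete, the sketch below illustrates an SLFN forward pass in which the hidden activations are hard-limit thresholds and the random hidden weights are regenerated on the fly from a pseudo-random seed instead of being stored, which is what keeps the memory footprint of a random basis network small. This is a minimal illustration under stated assumptions, not the paper's architecture: the function names, the {-1, +1} weight encoding, and the 16-bit LFSR generator are all hypothetical choices made for the example.

```python
# Illustrative sketch of SLFN prediction with hard-limit activations and a
# pseudo-random hidden basis regenerated from a seed (not the paper's design).

def lfsr16(state):
    """One step of a classic 16-bit Fibonacci LFSR (taps 16, 14, 13, 11)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return bit, ((state >> 1) | (bit << 15)) & 0xFFFF

def hidden_weights(seed, n_hidden, n_inputs):
    """Regenerate the random hidden-layer weights in {-1, +1} from the seed."""
    state, rows = seed, []
    for _ in range(n_hidden):
        row = []
        for _ in range(n_inputs):
            bit, state = lfsr16(state)
            row.append(1 if bit else -1)
        rows.append(row)
    return rows

def predict(x, seed, beta, n_hidden):
    """SLFN forward pass: hard-limit hidden layer, then trained output weights beta."""
    W = hidden_weights(seed, n_hidden, len(x))
    # Hard-limiter activation: 1 if the weighted sum is non-negative, else 0.
    h = [1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) >= 0 else 0 for w in W]
    return sum(b * h_j for b, h_j in zip(beta, h))

# Toy usage: 3 inputs, 4 hidden neurons, output weights beta from an offline training step.
print(predict([0.2, -1.0, 0.5], seed=0xACE1, beta=[0.3, -0.1, 0.7, 0.05], n_hidden=4))
```

Loosely speaking, the two nested loops over hidden neurons and inputs mirror the design space discussed in the abstract: unrolling them in hardware spends area to reduce latency, while iterating over a single multiply-accumulate unit saves area at the cost of more cycles.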

Acknowledgements

The authors acknowledge financial support from Compagnia di San Paolo, Grant Number: 2017.0559, ID ROL: 19795.

Author information

Corresponding author

Correspondence to Edoardo Ragusa.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Ragusa, E., Gianoglio, C., Zunino, R. et al. A Design Strategy for the Efficient Implementation of Random Basis Neural Networks on Resource-Constrained Devices. Neural Process Lett 51, 1611–1629 (2020). https://doi.org/10.1007/s11063-019-10165-y
