
A piecewise weight update rule for a supervised training of cortical algorithms

Original Article · Neural Computing and Applications

Abstract

First introduced by Mountcastle, cortical algorithms (CA) are positioned to outperform second-generation artificial neural networks thanks to their ability to hierarchically store sequences of patterns in an invariant form. Despite their closer resemblance to the human cortex and their hypothesized performance advantage, CA adoption as a deep learning approach remains limited in energy-aware environments because of the high computational complexity of their training. Motivated to reduce the complexity of supervised CA training on limited hardware resources, we propose in this paper a piecewise linear, or polygonal, weight update rule for supervised CA training based on a linearization of the exponential function. As shown by our simulation results on 12 publicly available databases and by our error-bound proofs, the proposed rule reduces CA training time by a factor of 3 at the expense of a 0.5% degradation in accuracy. A simpler approximation relying on the asymptotes at 0 and infinity reduces training time by a factor of 3.5, at the cost of a 1.49% reduction in accuracy.
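The abstract does not give the closed form of the CA weight update, so the sketch below is only a hedged illustration of the two approximation ideas it names, applied to the representative function f(x) = exp(-x): a polygonal interpolant precomputed at a handful of breakpoints, and a cruder two-piece rule built from the asymptotes at 0 and infinity. The input range, the segment count, and the choice of exp(-x) itself are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch: replace the exponential inside a weight update with
# cheaper approximations. The paper's exact update rule is not given in the
# abstract, so f(x) = exp(-x) on x >= 0 stands in for it here.

# Polygonal (piecewise linear) approximation: tabulate exp once at a few
# breakpoints, then evaluate by linear interpolation (no exp at runtime).
BREAKPOINTS = np.linspace(0.0, 6.0, 13)   # assumed range and segment count
VALUES = np.exp(-BREAKPOINTS)

def exp_neg_polygonal(x):
    """Piecewise linear approximation of exp(-x); clamps outside [0, 6]."""
    return np.interp(x, BREAKPOINTS, VALUES)

def exp_neg_asymptotic(x):
    """Two-piece rule from the asymptotes: the tangent 1 - x at x = 0 and
    the horizontal asymptote 0 as x -> infinity, i.e. max(0, 1 - x)."""
    return np.maximum(0.0, 1.0 - x)

if __name__ == "__main__":
    x = np.linspace(0.0, 6.0, 7)
    print("exact     :", np.round(np.exp(-x), 4))
    print("polygonal :", np.round(exp_neg_polygonal(x), 4))
    print("asymptotic:", np.round(exp_neg_asymptotic(x), 4))
```

Once the breakpoint table is built, each update costs a segment lookup and one multiply-add instead of an exp() call; this is the kind of saving behind the reported 3x and 3.5x training-time reductions, with the cruder asymptotic rule trading more accuracy for more speed.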




Acknowledgements

This work was supported by (1) MER, a partnership between Intel Corporation and King Abdulaziz City for Science and Technology (KACST), Saudi Arabia, to conduct and promote research in the Middle East, (2) the University Research Board at the American University of Beirut and (3) the National Center for Scientific Research (NCSR) Lebanon. The authors are also grateful for the insights of Chris Wilkerson of Intel Corporation in Hillsboro, OR.

Author information

Correspondence to Mariette Awad.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Hajj, N., Awad, M. A piecewise weight update rule for a supervised training of cortical algorithms. Neural Comput & Applic 31, 1915–1930 (2019). https://doi.org/10.1007/s00521-017-3167-5

