Abstract
Behavioural synthesis automates the design process by generating task-specific hardware for either FPGA/SoC platforms or custom silicon devices such as ASICs. The flows of relevant commercial tools can bring significant benefits to software developers with no hardware design expertise. In this work, our Custom Coprocessor Compilations (CCC) high-level synthesis tool is leveraged to synthesize an FPGA design for stochastic gradient descent (SGD), a cornerstone optimization method in today's deep neural networks. A simple multilayer perceptron (MLP) that solves the 3-input XOR problem is implemented and transformed into a Register Transfer Level (RTL) VHDL hardware microarchitecture using the CCC hardware synthesizer. The produced VHDL is subsequently verified for correct functionality in GNU Ada. The results validate our motivation: accelerated performance targeted at low-power, autonomous devices.
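For readers without access to the hardware flow, the training procedure the abstract describes can be sketched in software. The following NumPy script is a minimal, illustrative reference model, not the paper's implementation: the layer sizes, learning rate, initialization, and seed are our assumptions. It trains a one-hidden-layer MLP with per-sample SGD on the 3-input XOR (odd parity) function.

```python
import numpy as np

# All 8 binary 3-bit inputs and their odd-parity (3-input XOR) labels.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))   # input -> hidden weights (8 hidden units, illustrative)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

mse_before = np.mean((forward(X)[1] - y) ** 2)

lr = 0.5
for epoch in range(5000):
    for i in rng.permutation(len(X)):          # "stochastic": one sample per update
        x, t = X[i:i + 1], y[i:i + 1]
        h, out = forward(x)
        # Backward pass for squared-error loss, via the chain rule.
        d_out = (out - t) * out * (1 - out)    # gradient at the output unit
        d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.ravel()
        W1 -= lr * x.T @ d_h;   b1 -= lr * d_h.ravel()

mse_after = np.mean((forward(X)[1] - y) ** 2)
pred = (forward(X)[1] > 0.5).astype(int)       # thresholded network outputs
```

Such a floating-point software model is typically used as the golden reference against which a fixed-point RTL microarchitecture is verified.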
Data availability
The data that support the findings of this study are not openly available for reasons of patent protection; they are available from the corresponding author upon reasonable request, in a controlled-access repository where relevant.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Amanatidis, D., Dossis, M. Behavioural synthesis of SGD using the CCC framework: a simple XOR-solving MLP. Appl Intell 52, 15226–15236 (2022). https://doi.org/10.1007/s10489-022-03376-9