OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator

Abstract

The training process for spiking neural networks can be very computationally intensive. Approaches such as evolutionary algorithms may require evaluating thousands or millions of candidate solutions. In this work, we propose using neuromorphic cores implemented on a Xilinx Zynq system-on-chip to accelerate, and improve the energy efficiency of, the evaluation step of an evolutionary training approach. We demonstrate that this can significantly reduce the energy required to evolve a network, with some cases showing greater than a tenfold improvement compared to a CPU-only system.
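
For orientation, the abstract describes offloading the fitness-evaluation step of an evolutionary algorithm to neuromorphic cores on the SoC. Below is a minimal sketch of how such a hardware-in-the-loop evaluation loop might be structured; all names, the evaluate_on_core interface, and the selection/mutation details are illustrative assumptions, not the authors' implementation.

# Illustrative sketch: evolutionary loop whose candidate evaluation is
# dispatched to hardware cores (the core interface here is hypothetical
# and stubbed out on the CPU).
import random

POP_SIZE = 64
GENERATIONS = 100

def random_network():
    # Stand-in for a randomly initialized SNN genome (e.g., weights/delays).
    return [random.uniform(-1.0, 1.0) for _ in range(128)]

def mutate(genome, rate=0.05):
    # Perturb a fraction of genes with small Gaussian noise.
    return [g + random.gauss(0.0, 0.1) if random.random() < rate else g
            for g in genome]

def evaluate_on_core(genome, core_id):
    # Placeholder for configuring a neuromorphic core on the SoC,
    # streaming input spikes, and reading back a fitness score.
    # Here it is just a dummy CPU-side score.
    return -sum(abs(g) for g in genome)

def evaluate_population(population, num_cores=4):
    # Dispatch candidates round-robin across the available cores;
    # on real hardware these evaluations could run in parallel.
    return [evaluate_on_core(g, i % num_cores)
            for i, g in enumerate(population)]

def evolve():
    population = [random_network() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = evaluate_population(population)
        ranked = sorted(zip(scores, population), key=lambda p: p[0], reverse=True)
        parents = [g for _, g in ranked[:POP_SIZE // 2]]
        offspring = [mutate(random.choice(parents))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring
    return max(zip(evaluate_population(population), population),
               key=lambda p: p[0])

if __name__ == "__main__":
    best_score, best_genome = evolve()
    print("best fitness:", best_score)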

Authors:
Mitchell, Parker [1]; Schuman, Catherine [1]
  1. ORNL
Publication Date:
July 1, 2021
Research Org.:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
OSTI Identifier:
1827021
DOE Contract Number:  
AC05-00OR22725
Resource Type:
Conference
Resource Relation:
Conference: International Conference on Neuromorphic Systems (ICONS) 2021, Knoxville, Tennessee, United States of America, July 27-29, 2021
Country of Publication:
United States
Language:
English

Citation Formats

Mitchell, Parker, and Schuman, Catherine. Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator. United States: N. p., 2021. Web. doi:10.1145/3477145.3477150.
Mitchell, Parker, & Schuman, Catherine. Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator. United States. https://doi.org/10.1145/3477145.3477150
Mitchell, Parker, and Schuman, Catherine. 2021. "Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator". United States. https://doi.org/10.1145/3477145.3477150. https://www.osti.gov/servlets/purl/1827021.
@article{osti_1827021,
title = {Low Power Hardware-In-The-Loop Neuromorphic Training Accelerator},
author = {Mitchell, Parker and Schuman, Catherine},
abstractNote = {The training process for spiking neural networks can be very computationally intensive. Approaches such as evolutionary algorithms may require evaluating thousands or millions of candidate solutions. In this work, we propose using neuromorphic cores implemented on a Xilinx Zynq system on chip to accelerate and improve the energy efficiency of the evaluation step of an evolutionary training approach. We demonstrate this can significantly reduce the required energy to evolve a network with some cases showing greater than 10 times improvement as compared to a CPU-only system.},
doi = {10.1145/3477145.3477150},
url = {https://www.osti.gov/biblio/1827021},
place = {United States},
year = {2021},
month = {7}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
