
A 1.15-TOPS 6.57-TOPS/W Neural Network Processor for Multi-Scale Object Detection With Reduced Convolutional Operations



Abstract:

For automated-driving vehicles, we present a 40-nm dedicated object detection processor that uses only three operations: 3 × 3 convolution, 1 × 1 convolution, and 4 × 4 deconvolution. Multi-scale object detection with high recognition accuracy is achieved through the deconvolution operation and feature-map concatenation. The input memory for feature maps is 8 bits wide, and the multiplier for the inputs has 8-bit precision. The partial-sum memory, however, is 16 bits wide to suppress detection-accuracy degradation in a layer with 1024 channels in the target network. This fixed-point bit precision reduces both the external memory bandwidth and the internal memory capacity. Optimized parallelization across input and output channels reduces the external memory bandwidth to 0.062 billion accesses per 1280 × 384 image with an internal memory capacity of 400 kB. The detection error is 1.9% of that obtained with single-precision floating point. The maximum operating frequency is 500 MHz at a supply voltage of 1 V, with a peak performance of 1.15 TOPS. The maximum energy efficiency is 6.57 TOPS/W at 174 MHz and 0.6 V.
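The precision scheme stated above (8-bit feature-map inputs and weights, 16-bit partial-sum memory) can be illustrated with a minimal numerical sketch. The Python/NumPy snippet below is not the paper's design: the accumulator width, the right-shift amount (psum_shift), and the saturation step are assumptions introduced only to show how products of 8-bit operands can be accumulated and then requantized to a 16-bit partial-sum word.

```python
import numpy as np

# Minimal sketch (not the paper's RTL): 3x3 convolution with 8-bit
# fixed-point inputs/weights and 16-bit partial-sum storage, loosely
# mirroring the precisions stated in the abstract. The shift and
# saturation details are assumptions for illustration only.

Q_PSUM = 16   # partial-sum memory precision (bits)

def saturate(x, bits):
    """Clip to the signed range of the given bit width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(x, lo, hi)

def conv3x3_fixed_point(fmap, weights, psum_shift=8):
    """fmap: (C_in, H, W) int8, weights: (C_out, C_in, 3, 3) int8.
    Accumulates in int32, then right-shifts and saturates to int16
    before writing to partial-sum memory (shift amount is assumed)."""
    c_out, c_in, _, _ = weights.shape
    _, h, w = fmap.shape
    out = np.zeros((c_out, h - 2, w - 2), dtype=np.int16)
    for co in range(c_out):
        acc = np.zeros((h - 2, w - 2), dtype=np.int32)
        for ci in range(c_in):
            for ky in range(3):
                for kx in range(3):
                    patch = fmap[ci, ky:ky + h - 2, kx:kx + w - 2].astype(np.int32)
                    acc += patch * int(weights[co, ci, ky, kx])
        out[co] = saturate(acc >> psum_shift, Q_PSUM).astype(np.int16)
    return out

# Toy example with 16 input channels and 4 output channels.
rng = np.random.default_rng(0)
fm = rng.integers(-128, 128, size=(16, 8, 8), dtype=np.int8)
wt = rng.integers(-128, 128, size=(4, 16, 3, 3), dtype=np.int8)
print(conv3x3_fixed_point(fm, wt).shape)  # (4, 6, 6)
```

In a layer with 1024 input channels, each output pixel accumulates up to 9216 products (1024 × 9 taps) before the requantized value is written back, which is where the 16-bit partial-sum width cited in the abstract matters most.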
Published in: IEEE Journal of Selected Topics in Signal Processing (Volume: 14, Issue: 4, May 2020)
Page(s): 634 - 645
Date of Publication: 13 January 2020

