
A Low-complexity Max Unpooling Architecture for CNNs


Abstract:

In the training of Convolutional Neural Networks (CNNs), the conventional max unpooling architecture introduces significant sparsity through zero-filling during upsampling, leading to additional resource costs in the adjacent deconvolution layers. To exploit the high sparsity of the recovery matrix, this paper proposes a low-complexity hardware architecture. The architecture stores only the input matrix of the max unpooling layer, eliminating the upsampling recovery process, and maps the relative indices stored during forward inference directly to the deconvolution. As a result, the deconvolution processes only non-zero elements, reducing both computational and storage requirements. This paper also derives the distribution of the maximum number of non-zero values per deconvolution window across different layers, demonstrating compatibility with networks of various sizes. Synthesized on an Arria 10 AX115N2F45E1SG for VGG16, the proposed architecture saves 60.7% of ALMs and 68.8% of storage resources, and additionally improves the operating frequency by 8.0%.
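The abstract's core idea can be illustrated in software, though the paper itself describes a hardware design. The sketch below is a minimal illustration (not the paper's architecture), assuming 2×2 max pooling and a stride-1 deconvolution: instead of zero-filling an upsampled recovery matrix and convolving over it, each stored pooled value is scattered through the kernel at its saved index, so only non-zero elements are ever processed. All function names here are hypothetical.

```python
import numpy as np

def maxpool_with_indices(x, k=2):
    """2x2 max pooling that also records the flat index of each maximum,
    as the max unpooling layer would during forward inference."""
    H, W = x.shape
    pooled = np.empty((H // k, W // k))
    idx = np.empty((H // k, W // k), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            win = x[i*k:(i+1)*k, j*k:(j+1)*k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            pooled[i, j] = win[r, c]
            idx[i, j] = (i*k + r) * W + (j*k + c)  # position in the full map
    return pooled, idx

def deconv_dense(up, kernel):
    """Conventional path: stride-1 deconvolution (scatter-add) over the
    zero-filled recovery matrix -- most iterations multiply by zero."""
    H, W = up.shape
    kh, kw = kernel.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            out[i:i+kh, j:j+kw] += up[i, j] * kernel
    return out

def deconv_from_indices(pooled, idx, up_shape, kernel):
    """Index-mapped path: each stored value is scattered through the kernel
    at its saved position; the sparse recovery matrix is never built."""
    kh, kw = kernel.shape
    out = np.zeros((up_shape[0] + kh - 1, up_shape[1] + kw - 1))
    for val, flat in zip(pooled.ravel(), idx.ravel()):
        r, c = divmod(int(flat), up_shape[1])
        out[r:r+kh, c:c+kw] += val * kernel
    return out

rng = np.random.default_rng(0)
x = rng.random((4, 4))
kernel = rng.random((3, 3))
pooled, idx = maxpool_with_indices(x)

# Conventional zero-filled unpooling, for comparison.
up = np.zeros_like(x)
up.flat[idx.ravel()] = pooled.ravel()

# Both paths produce the same output; the index-mapped path touches
# only the 4 non-zero values instead of all 16 positions.
assert np.allclose(deconv_dense(up, kernel),
                   deconv_from_indices(pooled, idx, x.shape, kernel))
```

The saving grows with the pooling factor: for k×k pooling, only 1/k² of the recovery matrix is non-zero, which is what the paper exploits in hardware.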
Date of Conference: 24-27 October 2023
Date Added to IEEE Xplore: 24 January 2024
Conference Location: Nanjing, China
