Abstract:
Convolutional Neural Networks (CNNs) have achieved exciting performance and are widely used in various fields such as industry and cultural heritage protection. However, CNNs are computationally intensive and resource-consuming, which limits their applications, especially on embedded systems. Therefore, this paper proposes a dynamic pruning method for compacting mainstream CNN models. Furthermore, we implement the compacted models on FPGAs for acceleration. Experimental results show that the proposed method compresses up to 80% of parameters and FLOPs, and reduces inference time on FPGAs by up to 80%.
Published in: 2022 19th International SoC Design Conference (ISOCC)
Date of Conference: 19-22 October 2022
Date Added to IEEE Xplore: 07 February 2023
ISBN Information:
Print on Demand(PoD) ISSN: 2163-9612