DOI: 10.1145/3607889.3609090
Work in Progress

Work-in-Progress: QRCNN: Scalable CNNs

Published: 24 January 2024

Abstract

Dropping features/kernels in the convolutional layers of a convolutional neural network (CNN) is a popular variant of structured pruning that reduces computational load, but it comes at the cost of retraining and performance loss. In this work, we propose the QRCNN framework, which degrades gracefully in performance as features are dropped from the convolutional layers, without any retraining. The framework allows the network to be trimmed at inference time, thereby scaling with the available computational power. The proposed method achieves a 1.22–2.28× reduction in the number of MAC computations, with a median of 1.575×. The speedup is measured on three compute platforms: a Raspberry Pi 4B with 8 GB of RAM (embedded platform), an octa-core Intel i7 processor with 64 GB of RAM, and an NVIDIA Quadro K2200 GPU with 4 GB of memory.
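The abstract does not spell out the mechanism, but the framework's name and its retraining-free filter dropping suggest an importance ordering of convolutional filters, plausibly derived from a QR decomposition of the flattened kernel matrix. The sketch below is a minimal illustration under that assumption, not the authors' actual QRCNN algorithm: it ranks the filters of a single layer, keeps only the most important ones at inference time, and reports the resulting per-layer MAC reduction. All names here (rank_filters_qr, trim_layer, conv_macs) are hypothetical.

```python
import numpy as np

def rank_filters_qr(kernels):
    """Order filters by importance. kernels: (kh, kw, c_in, c_out).

    Assumption: flatten each output filter into a column, pre-order
    columns by norm (NumPy's QR has no column pivoting), then use the
    magnitude of R's diagonal as a score for how much new "direction"
    each filter contributes. An illustrative criterion, not the paper's.
    """
    kh, kw, c_in, c_out = kernels.shape
    A = kernels.reshape(-1, c_out)                    # one column per filter
    order = np.argsort(-np.linalg.norm(A, axis=0))    # norm-based pre-ordering
    _, R = np.linalg.qr(A[:, order])
    scores = np.abs(np.diag(R))
    return order[np.argsort(-scores)]                 # most important first

def trim_layer(kernels, keep_fraction):
    """Drop the least important filters at inference time; no retraining."""
    order = rank_filters_qr(kernels)
    k = max(1, int(round(keep_fraction * kernels.shape[-1])))
    return kernels[..., np.sort(order[:k])]

def conv_macs(h_out, w_out, kh, kw, c_in, c_out):
    # Each of the h_out*w_out output pixels of each of the c_out filters
    # performs kh*kw*c_in multiply-accumulate operations.
    return h_out * w_out * kh * kw * c_in * c_out

# Example: a 3x3 conv, 32 input channels, 64 filters, 32x32 output maps.
w = np.random.randn(3, 3, 32, 64)
w_small = trim_layer(w, keep_fraction=0.6)            # keeps 38 of 64 filters
full = conv_macs(32, 32, 3, 3, 32, w.shape[-1])
trimmed = conv_macs(32, 32, 3, 3, 32, w_small.shape[-1])
print(f"{w.shape} -> {w_small.shape}, MAC reduction: {full / trimmed:.2f}x")
```

Dropping filters in one layer also shrinks the input channel count of the next layer, so end-to-end MAC savings compound across layers; the 1.22–2.28× range quoted in the abstract is presumably the net effect over whole networks.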




Published In

CASES '23 Companion: Proceedings of the International Conference on Compilers, Architecture, and Synthesis for Embedded Systems
September 2023, 31 pages
ISBN: 9798400702907
DOI: 10.1145/3607889


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 52 of 230 submissions (23%)

