Research article · DOI: 10.1145/3503161.3547768

Adjustable Memory-efficient Image Super-resolution via Individual Kernel Sparsity

Published: 10 October 2022

Abstract

Though single image super-resolution (SR) has witnessed incredible progress, the increasing model complexity impairs its application on memory-limited devices. To address this problem, prior arts have aimed to reduce the number of model parameters, and sparsity has been exploited; however, these methods usually enforce a group sparsity constraint at the filter level and are therefore not arbitrarily adjustable to satisfy customized memory requirements. In this paper, we propose an individual kernel sparsity (IKS) method for memory-efficient and sparsity-adjustable image SR that aids deep network deployment on memory-limited devices. IKS imposes sparsity at the weight level, implicitly allocating the user-defined target sparsity to each individual kernel. To induce kernel sparsity, a soft-thresholding operation is used as a gating constraint that filters out trivial weights. To achieve adjustable sparsity, a dynamic threshold learning algorithm is proposed, in which the threshold is updated jointly with the network weights during training and is adaptively decayed under the guidance of the desired sparsity. This work essentially provides a dynamic parameter-reassignment scheme under a given resource budget for an off-the-shelf SR model. Extensive experimental results demonstrate that IKS imparts considerable sparsity with a negligible effect on SR quality. The code is available at: https://github.com/RaccoonDML/IKS.
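The two mechanisms named in the abstract — a soft-thresholding gate that zeroes trivial weights, and a threshold that decays toward a user-defined target sparsity — can be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the function names and the multiplicative decay rule are assumptions; see the linked repository for the actual IKS code.

```python
import numpy as np

def soft_threshold(w, t):
    """Soft-thresholding gate: shrink each weight's magnitude by t and
    zero out weights whose magnitude falls below the threshold."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def sparsity(w):
    """Fraction of exactly-zero entries in a kernel."""
    return float(np.mean(w == 0.0))

def decay_threshold(w, t, target_sparsity, decay=0.9):
    """Toy stand-in for the paper's dynamic threshold learning: if the
    gated kernel already overshoots the user-defined target sparsity,
    decay the threshold so that fewer weights are filtered away."""
    if sparsity(soft_threshold(w, t)) > target_sparsity:
        t *= decay
    return t

kernel = np.array([0.50, -0.05, 0.20, 0.02, -0.30])
gated = soft_threshold(kernel, 0.1)  # small weights zeroed, large ones shrunk
```

In a full training loop, the threshold would be updated alongside the network weights each step, so the achieved sparsity converges to the requested budget rather than being fixed by a one-shot pruning pass.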

Supplementary Material

MP4 File (MM22-fp0134.mp4)
Presentation video.


Cited By

  • (2024) A Systematic Survey of Deep Learning-Based Single-Image Super-Resolution. ACM Computing Surveys 56(10), 1–40. DOI: 10.1145/3659100. Online publication date: 13 April 2024.
  • (2023) Hardware-friendly Scalable Image Super Resolution with Progressive Structured Sparsity. In Proceedings of the 31st ACM International Conference on Multimedia, 9061–9069. DOI: 10.1145/3581783.3611875. Online publication date: 27 October 2023.
  • (2023) Memory-Friendly Scalable Super-Resolution via Rewinding Lottery Ticket Hypothesis. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14398–14407. DOI: 10.1109/CVPR52729.2023.01384. Online publication date: June 2023.
  • (2023) RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution. In Pattern Recognition and Computer Vision, 65–78. DOI: 10.1007/978-981-99-8537-1_6. Online publication date: 13 October 2023.


Published In

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022, 7537 pages
ISBN: 9781450392037
DOI: 10.1145/3503161

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. dynamic learnable threshold
    2. image super-resolution
    3. kernel sparsity
    4. memory-efficient
    5. sparsity-adjustable

Conference

MM '22
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


