DOI: 10.1145/3474085.3475288

Boosting Lightweight Single Image Super-resolution via Joint-distillation

Published: 17 October 2021

Abstract

The rise of deep learning has facilitated the development of single image super-resolution (SISR). However, the growing model complexity and memory occupation severely hinder practical deployment on resource-limited devices. In this paper, we propose a novel joint-distillation (JDSR) framework to boost the representation of various off-the-shelf lightweight SR models. The framework comprises two stages: superior-LR generation and joint-distillation learning. The superior LR is obtained from the HR image itself. With fewer than 300K parameters, a peer network taking the superior LR as input can achieve SR performance comparable to large models such as RCAN (15M parameters), so using the superior LR as the peer network's input saves training expense. The joint-distillation learning consists of internal self-distillation and external mutual learning. The internal self-distillation aims at model self-boosting by transferring knowledge from deeper SR outputs to shallower ones. Specifically, each intermediate SR output is supervised by the HR image and by a soft label derived from the subsequent deeper outputs. To shrink the capacity gap between shallow and deep layers, a soft label generator is designed that fuses deeper outputs in a progressive backward manner and employs meta-learning for adaptive weight fine-tuning. The external mutual learning focuses on exchanging information with the peer network during training. Moreover, a curriculum learning strategy and a performance-gap threshold are introduced to balance the convergence rates of the original SR model and its peer network. Comprehensive experiments on benchmark datasets demonstrate that our proposal improves the performance of recent lightweight SR models by a large margin, with the same model architecture and inference expense.
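To make the two distillation terms concrete, below is a minimal PyTorch-style sketch (not the authors' code) of an internal self-distillation loss and a gap-gated mutual-learning loss as described in the abstract. All names are illustrative assumptions: sr_outputs is a list of intermediate SR predictions ordered shallow to deep, soft_labels are the fused targets produced by the meta-learned soft label generator, peer_final is the peer network's prediction, and psnr_gap / gap_threshold implement the performance-gap gating.

import torch.nn.functional as F

def internal_self_distillation_loss(sr_outputs, hr, soft_labels):
    # Each intermediate SR output is supervised by the HR image (hard label)
    # and by a soft label fused from the subsequent deeper outputs.
    loss = F.l1_loss(sr_outputs[-1], hr)  # deepest output: HR supervision only
    for sr, soft in zip(sr_outputs[:-1], soft_labels):
        loss = loss + F.l1_loss(sr, hr)             # hard supervision from HR
        loss = loss + F.l1_loss(sr, soft.detach())  # soft supervision from deeper outputs
    return loss

def external_mutual_loss(sr_final, peer_final, psnr_gap, gap_threshold=0.5):
    # Mutual-learning term between the SR model and its peer network,
    # skipped while the performance gap exceeds the threshold so that
    # the two networks' convergence rates stay balanced.
    if psnr_gap > gap_threshold:
        return sr_final.new_zeros(())  # no mutual term when the gap is too large
    return F.l1_loss(sr_final, peer_final.detach())

In a training step, the total objective would sum these terms with the peer network's own reconstruction loss; the 0.5 default for gap_threshold is an arbitrary placeholder, not a value from the paper.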



    Published In

    MM '21: Proceedings of the 29th ACM International Conference on Multimedia
    October 2021
    5796 pages
    ISBN:9781450386517
    DOI:10.1145/3474085
    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. image super-resolution
    2. meta-learning
    3. mutual learning
    4. self-distillation

    Qualifiers

    • Research-article

    Funding Sources

    • the National Natural Science Foundation of China
    • the National Key Research and Development Program of China

    Conference

MM '21: ACM Multimedia Conference
October 20 - 24, 2021
Virtual Event, China

    Acceptance Rates

    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


    Article Metrics

    • Downloads (Last 12 months)52
    • Downloads (Last 6 weeks)7
    Reflects downloads up to 28 Feb 2025


    Cited By

• (2024) A Systematic Survey of Deep Learning-Based Single-Image Super-Resolution. ACM Computing Surveys 56(10), 1-40. DOI: 10.1145/3659100. Online publication date: 13-Apr-2024
• (2024) A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations. IEEE Transactions on Multimedia 26, 776-788. DOI: 10.1109/TMM.2023.3271019. Online publication date: 1-Jan-2024
• (2024) Multi-grained fusion network with self-distillation for aspect-based multimodal sentiment analysis. Knowledge-Based Systems 293(C). DOI: 10.1016/j.knosys.2024.111724. Online publication date: 7-Jun-2024
• (2024) A unified architecture for super-resolution and segmentation of remote sensing images based on similarity feature fusion. Displays 84, 102800. DOI: 10.1016/j.displa.2024.102800. Online publication date: Sep-2024
• (2023) TAKDSR: Teacher Assistant Knowledge Distillation Framework for Graphics Image Super-Resolution. IEEE Access 11, 112015-112026. DOI: 10.1109/ACCESS.2023.3323273. Online publication date: 2023
• (2023) Hybrid knowledge distillation from intermediate layers for efficient Single Image Super-Resolution. Neurocomputing 554, 126592. DOI: 10.1016/j.neucom.2023.126592. Online publication date: Oct-2023
• (2022) DesnowFormer: an effective transformer-based image desnowing network. 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), 1-5. DOI: 10.1109/VCIP56404.2022.10008815. Online publication date: 13-Dec-2022
