DefocusSR: An Efficient Framework for Defocus Image Super-Resolution Guided by Depth Information


Abstract:

Existing image super-resolution (SR) methods often cause oversharpening, especially in defocus images. However, we found that defocus regions and focus regions differ in how difficult they are to recover, which opens an opportunity for efficiency gains. In this paper, we propose DefocusSR, an efficient SR framework for defocus images. DefocusSR comprises two modules: Depth-guided Segmentation (DGS) and Defocus-Aware Classify Enhance (DCE). In DGS, we prompt MobileSAM with depth-of-field information to accurately segment the input image and generate defocus maps that indicate the locations of defocus areas. In DCE, we crop the defocus map into patches and classify each as defocus or focus based on a threshold. In practice, defocus patches are fed to the Efficient Blur Match SR Network (EBM-SR), which preserves the blur kernel and relieves the computational burden, while focus patches are processed with the more expensive operations. DefocusSR thus combines defocus classification and SR in a unified framework. Experiments demonstrate that DefocusSR can accelerate most SR methods, reducing their FLOPs by approximately 70% while preserving state-of-the-art SR performance.
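
The threshold-based routing described for DCE can be summarized with a short sketch. This is a minimal illustration, not the authors' implementation: the names route_patches, ebm_sr, heavy_sr, blur_threshold, and patch_size are hypothetical. It tiles the low-resolution image, scores each tile by its mean defocus value, and dispatches it to a lightweight or a full-capacity SR branch accordingly.

    # Hedged sketch of DCE-style patch routing; all names are illustrative
    # assumptions, not the paper's actual interface.
    import torch

    def route_patches(lr_image, defocus_map, patch_size=64, blur_threshold=0.5,
                      ebm_sr=None, heavy_sr=None):
        """Tile the LR image and send each patch to a cheap or expensive SR
        branch depending on its mean defocus score (B, C, H, W tensors)."""
        _, _, h, w = lr_image.shape
        outputs = []
        for top in range(0, h, patch_size):
            for left in range(0, w, patch_size):
                patch = lr_image[:, :, top:top + patch_size, left:left + patch_size]
                score = defocus_map[:, :, top:top + patch_size, left:left + patch_size].mean()
                if score > blur_threshold:
                    # Defocus patch: lightweight blur-matched branch
                    # (EBM-SR in the paper).
                    sr_patch = ebm_sr(patch)
                else:
                    # Focus patch: full-capacity SR branch.
                    sr_patch = heavy_sr(patch)
                outputs.append(((top, left), sr_patch))
        return outputs

Because only the focus patches reach the expensive branch, the overall FLOPs fall roughly in proportion to the fraction of the image that is defocused, which is consistent with the reported ~70% reduction.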
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024
Conference Location: Seoul, Republic of Korea

