Regressor Basis Learning for anchored super-resolution


Abstract:

A+, also known as Adjusted Anchored Neighborhood Regression, is a state-of-the-art method for exemplar-based single-image super-resolution with low time complexity at both train and test time. By robustly training a clustered regression model over a low-resolution dictionary, its performance keeps improving with the dictionary size, even when using tens of thousands of regressors. However, this poses a memory issue: the model size can grow to more than a gigabyte, which limits applicability in memory-constrained scenarios. To address this, we propose Regressor Basis Learning (RB), a novel variant of A+ in which we restrict the regressor set to a learned low-dimensional subspace, so that each regressor is coded as a linear combination of a few basis regressors. We learn the regressor basis by alternating between closed-form solutions of the optimal coding of the regressor set (given the basis) and the optimal regressor basis (given the coding). We validate RB on several standard benchmarks and achieve performance comparable to A+ while using orders of magnitude fewer basis regressors, i.e., 32 basis regressors instead of 1024 regressors. This makes our RB method well suited for memory-constrained applications.
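
To illustrate the alternating scheme described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes the trained regressors are flattened and stacked as rows of a matrix R, which is approximated as R ≈ C B, where B holds a small number of basis regressors and C holds the coding coefficients. All names, dimensions, and the iteration count below are illustrative placeholders, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
N, d, k = 1024, 100, 32          # placeholder: 1024 regressors of dimension d, 32 basis regressors
R = rng.standard_normal((N, d))  # stand-in for the flattened regressor set (rows = regressors)

B = rng.standard_normal((k, d))  # initial basis (k basis regressors)

for _ in range(50):
    # closed-form optimal coding C given the basis B (least squares for R ~ C @ B)
    C = R @ np.linalg.pinv(B)
    # closed-form optimal basis B given the coding C
    B = np.linalg.pinv(C) @ R

rel_err = np.linalg.norm(R - C @ B) / np.linalg.norm(R)
print(f"relative reconstruction error: {rel_err:.4f}")

At test time, each regressor would then be stored as its k coding coefficients plus the shared basis B, which is what reduces the memory footprint relative to storing all N full regressors.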
Date of Conference: 04-08 December 2016
Date Added to IEEE Xplore: 24 April 2017
Conference Location: Cancun, Mexico
