Abstract
This paper describes an efficient method for general norm approximation, a problem that appears frequently in computer vision. Although such problems are formulated in diverse ways, many of them reduce to minimizing a sum of weighted norms, i.e., general norm approximation. We therefore extend Iteratively Reweighted Least Squares (IRLS), which was originally designed to minimize a single norm. The proposed method accelerates the least-squares solve in each IRLS iteration by a warm start, using the previous iterate as the initial guess for the next solve. Through numerical tests and applications to computer vision problems, we demonstrate that the proposed method solves general norm approximation efficiently with small errors.
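The warm-start idea can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation; the function names `cg` and `irls_lp` are invented for the example. Each IRLS iteration solves the weighted normal equations with conjugate gradient, initialized at the previous iterate so that later, nearly converged iterations need only a few inner steps.

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxiter=None):
    """Conjugate gradient for a symmetric positive-definite A, warm-started at x0."""
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    stop = (tol * np.linalg.norm(b)) ** 2   # relative residual stopping criterion
    if maxiter is None:
        maxiter = 10 * len(b)
    for _ in range(maxiter):
        if rs <= stop:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irls_lp(A, b, p=1.0, iters=50, eps=1e-6):
    """IRLS for min_x ||Ax - b||_p, warm-starting each inner least-squares solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # plain least-squares init (p = 2)
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2.0)   # standard IRLS weights, clamped by eps
        AtWA = A.T @ (w[:, None] * A)                 # weighted normal equations
        AtWb = A.T @ (w * b)
        x = cg(AtWA, AtWb, x0=x)                      # warm start at the previous iterate
    return x
```

For \(p = 1\) this behaves as a robust fit: rows with large residuals receive small weights, so sparse outliers are effectively ignored.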
Notes
- 1.
While the formulation may remain valid even when \(p < 1\), the problem becomes non-convex in that case; the iterates may therefore be trapped in local minima.
Acknowledgement
This work was partly supported by JSPS KAKENHI Grant Numbers JP16H01732 and JP26540085.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Samejima, M., Matsushita, Y. (2017). Fast General Norm Approximation via Iteratively Reweighted Least Squares. In: Chen, CS., Lu, J., Ma, KK. (eds) Computer Vision – ACCV 2016 Workshops. ACCV 2016. Lecture Notes in Computer Science(), vol 10117. Springer, Cham. https://doi.org/10.1007/978-3-319-54427-4_16
Print ISBN: 978-3-319-54426-7
Online ISBN: 978-3-319-54427-4