Abstract
The density matrix least squares problem arises from quantum state tomography in experimental physics and has many applications in signal processing and machine learning, most notably the phase recovery problem and the matrix completion problem. In this paper, we first reformulate the density matrix least squares problem as an equivalent convex optimization problem and then design an efficient semismooth Newton-based augmented Lagrangian (Ssnal) algorithm for the dual of this equivalent form, in which an inexact semismooth Newton (Ssn) algorithm with superlinear or even quadratic convergence is applied to solve the inner subproblems. Theoretically, the global convergence and the local asymptotically superlinear convergence of the Ssnal algorithm are established under very mild conditions. Computationally, the cost of the Ssn algorithm for solving the subproblems is significantly reduced by fully exploiting the low-rank or high-rank structure of the optimal solutions of the density matrix least squares problem. Numerical experiments on randomly generated quantum state tomography problems and on density matrix least squares problems with real data demonstrate that the Ssnal algorithm is more efficient and robust than the Qsdpnal solver and several state-of-the-art first-order algorithms.
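To fix ideas, the feasible set of this problem is the set of density matrices (Hermitian, positive semidefinite, unit trace), and the first-order baselines mentioned in the abstract rely on projecting onto this set. Below is a minimal sketch, not the paper's Ssnal method, of such a projected-gradient baseline for min over density matrices X of (1/2)‖A(X) − b‖²; the helper names (`project_simplex`, `project_density`, `dmls_projected_gradient`) are our own illustrative choices. The key fact used is that the spectral projection onto the density-matrix set reduces to projecting the eigenvalues onto the unit simplex.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the unit simplex {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                  # sort in decreasing order
    css = np.cumsum(u)
    # largest index j (0-based) with u_j * (j+1) > css_j - 1
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def project_density(X):
    """Project a Hermitian X onto {rho : rho >= 0, tr(rho) = 1} spectrally:
    keep the eigenvectors, project the eigenvalues onto the simplex."""
    w, V = np.linalg.eigh(X)
    w_proj = project_simplex(w)
    return (V * w_proj) @ V.conj().T

def dmls_projected_gradient(A_op, At_op, b, n, step, iters=500):
    """Projected gradient for min_X 0.5*||A(X) - b||^2 over density matrices.
    A_op maps a matrix to a measurement vector; At_op is its adjoint."""
    X = np.eye(n) / n                     # maximally mixed state as start
    for _ in range(iters):
        grad = At_op(A_op(X) - b)
        X = project_density(X - step * grad)
    return X
```

This is only a point of reference: such projected-gradient schemes converge at a first-order rate, whereas the paper's contribution is precisely to replace them with an augmented Lagrangian method whose inner subproblems are solved by a semismooth Newton iteration.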
References
Aravkin, A.Y., Burke, J., Drusvyatskiy, D., Friedlander, M.P., Roy, S.: Level-set methods for convex optimization. Math. Program. 174, 359–390 (2019)
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia (2017)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
Bhatia, R.: Matrix Analysis. Springer-Verlag, New York (2013)
Bot, R.I., Nguyen, D.K.: The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates. Math. Oper. Res. 45, 682–712 (2020)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)
Candès, E.J., Strohmer, T., Voroninski, V.: PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66, 1241–1274 (2013)
Chen, C.H., He, B.S., Yuan, X.M.: Matrix completion via alternating direction methods. IMA J. Numer. Anal. 32, 227–245 (2012)
Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
Cui, Y., Ding, C., Li, X.D., Zhao, X.Y.: Augmented Lagrangian methods for convex matrix optimization problems. J. Oper. Res. Soc. China 10, 305–342 (2022)
Cui, Y., Sun, D.F., Toh, K.-C.: On the asymptotic superlinear convergence of the augmented Lagrangian method for semidefinite programming with multiple solutions. arXiv preprint arXiv:1610.00875 (2016)
Cui, Y., Sun, D.F., Toh, K.-C.: On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming. Math. Program. 178, 381–415 (2019)
Ding, C.: An introduction to a class of matrix optimization problems. Ph.D. thesis, National University of Singapore (2012)
Ding, C., Sun, D.F., Toh, K.-C.: Spectral operators of matrices. Math. Program. 168, 509–531 (2018)
Ding, C., Sun, D.F., Sun, J., Toh, K.-C.: Spectral operators of matrices: semismoothness and characterizations of the generalized Jacobian. SIAM J. Optim. 30, 630–659 (2020)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, New York (2009)
Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
Facchinei, F., Pang, J.-S.: Finite-dimensional Variational Inequalities and Complementarity Problems. Springer-Verlag, New York (2003)
Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference (2001)
Friedlander, M.P., Macêdo, I.: Low-rank spectral optimization via gauge duality. SIAM J. Sci. Comput. 38, A1616–A1638 (2016)
Friedlander, M.P., Macêdo, I., Pong, T.K.: Gauge optimization and duality. SIAM J. Optim. 24, 1999–2022 (2014)
Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2, 17–40 (1976)
Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. Rev. Fr. Autom. Inform. Rech. Opér. Anal. Numér. 9(R2), 41–76 (1975)
Guo, H.: The metric subregularity of KKT solution mappings of composite conic programming. Ph.D. thesis, National University of Singapore (2017)
Hazan, E.: Sparse approximate solutions to semidefinite programs. In: LATIN 2008: Theoretical Informatics, Lecture Notes in Computer Science 4957, pp. 306–316. Springer (2008)
Held, M., Wolfe, P., Crowder, H.P.: Validation of subgradient optimization. Math. Program. 6, 62–88 (1974)
Hiriart-Urruty, J.B., Strodiot, J.J., Nguyen, V.H.: Generalized Hessian matrix and second-order optimality conditions for problems with \(C^{1,1}\) data. Appl. Math. Optim. 11, 43–56 (1984)
Jiang, K.F., Sun, D.F., Toh, K.-C.: Solving nuclear norm regularized and semidefinite matrix least squares problems with linear equality constraints. In: Discrete Geometry and Optimization, Fields Institute Communications 69, pp. 133–162. Springer (2013)
Korpas, G., Marecek, J.: Quantum state tomography as a bilevel problem, utilizing I-Q plane data. arXiv preprint arXiv:2108.03448 (2021)
Kyrillidis, A., Kalev, A., Park, D., Bhojanapalli, S.: Provable compressed sensing quantum state tomography via non-convex methods. npj Quantum Inform. 4, 1–7 (2018)
Lemaréchal, C., Sagastizábal, C.: Practical aspects of the Moreau-Yosida regularization: theoretical preliminaries. SIAM J. Optim. 7, 367–385 (1997)
Lewis, A.S.: Derivatives of spectral functions. Math. Oper. Res. 21, 576–588 (1996)
Li, X.D.: A two-phase augmented Lagrangian method for convex composite quadratic programming. Ph.D. thesis, National University of Singapore (2015)
Li, X.D., Sun, D.F., Toh, K.-C.: A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems. SIAM J. Optim. 28, 433–458 (2018)
Li, X.D., Sun, D.F., Toh, K.-C.: On efficiently solving the subproblems of a level-set method for fused lasso problems. SIAM J. Optim. 28, 1842–1866 (2018)
Li, X.D., Sun, D.F., Toh, K.-C.: QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming. Math. Program. Comput. 10, 703–743 (2018)
Li, X.D., Sun, D.F., Toh, K.-C.: On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope. Math. Program. 179, 419–446 (2020)
Lin, L.Y., Liu, Y.-J.: An efficient Hessian based algorithm for singly linearly and box constrained least squares regression. J. Sci. Comput. 88, 1–21 (2021)
Lin, M.X., Liu, Y.-J., Sun, D.F., Toh, K.-C.: Efficient sparse semismooth Newton methods for the clustered lasso problem. SIAM J. Optim. 29, 2026–2052 (2019)
Löwner, K.: Über monotone Matrixfunktionen. Math. Z. 38, 177–216 (1934)
Mangasarian, O.L.: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 7, 21–26 (1988)
Meng, F.W., Sun, D.F., Zhao, G.Y.: Semismoothness of solutions to generalized equations and the Moreau-Yosida regularization. Math. Program. 104, 561–581 (2005)
Mifflin, R.: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 15, 957–972 (1977)
Moreau, J.J.: Proximité et dualité dans un espace hilbertien. B. Soc. Math. Fr. 93, 273–299 (1965)
Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, Cambridge (2002)
Oustry, F.: A second-order bundle method to minimize the maximum eigenvalue function. Math. Program. 89, 1–33 (2000)
Ouyang, Y., Chen, Y., Lan, G., Pasiliao, E., Jr.: An accelerated linearized alternating direction method of multipliers. SIAM J. Imaging Sci. 8, 644–681 (2015)
Overton, M.L.: Large-scale optimization of eigenvalues. SIAM J. Optim. 2, 88–120 (1992)
Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)
Robinson, S.M.: Some continuity properties of polyhedral multifunctions. Math. Program. Stud. 14, 206–214 (1981)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Qi, L.Q., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993)
Toh, K.-C., Yun, S.W.: An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. J. Optim. 6, 615–640 (2010)
Waldspurger, I., d’Aspremont, A., Mallat, S.: Phase recovery, MaxCut and complex semidefinite programming. Math. Program. 149, 47–81 (2015)
Yang, L.Q., Sun, D.F., Toh, K.-C.: SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Comput. 7, 331–366 (2015)
Yang, J.F., Zhang, Y.: Alternating direction algorithms for \(l_1\)-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250–278 (2011)
Zhang, Y.J., Zhang, N., Sun, D.F., Toh, K.-C.: An efficient Hessian based algorithm for solving large-scale sparse group lasso problems. Math. Program. 179, 223–263 (2020)
Zhao, X.Y., Sun, D.F., Toh, K.-C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)
Acknowledgements
The work of Yong-Jin Liu was supported in part by the National Natural Science Foundation of China (Grants No. 11871153 and 12271097) and the Natural Science Foundation of Fujian Province of China (Grant No. 2019J01644).
Additional information
Communicated by Russell Luke.
About this article
Cite this article
Liu, YJ., Yu, J. A Semismooth Newton-based Augmented Lagrangian Algorithm for Density Matrix Least Squares Problems. J Optim Theory Appl 195, 749–779 (2022). https://doi.org/10.1007/s10957-022-02120-0
Keywords
- Density matrix least squares problems
- Semismooth Newton algorithm
- Augmented Lagrangian algorithm
- Quadratic growth condition