
A diagonally scaled Newton-type proximal method for minimization of the models with nonsmooth composite cost functions

Published in: Computational and Applied Mathematics

Abstract

Based on the memoryless BFGS (Broyden–Fletcher–Goldfarb–Shanno) updating formula of a recent well-structured diagonal approximation of the Hessian, we propose an improved proximal method for minimizing nonsmooth composite cost functions. More precisely, a diagonally scaled matrix is iteratively used to approximate the Hessian of the smooth component of the cost function, which allows the search direction to be determined directly in each iteration. Then, in light of the Zhang–Hager nonmonotone scheme, we devise a nonmonotone line search technique for unconstrained optimization models with composite cost functions. Moreover, we establish convergence of the suggested proximal algorithm. We close the discussion by empirically studying the performance of the proposed algorithm on some large-scale compressive sensing and sparse logistic regression problems.
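The two ingredients the abstract describes — a diagonally scaled proximal step and the Zhang–Hager nonmonotone reference value — can be sketched as follows. This is a minimal illustration only: it assumes an ℓ1-regularized composite objective min f(x) + λ‖x‖₁, and it uses a generic positive diagonal `d` in place of the paper's memoryless-BFGS-based diagonal Hessian approximation, which is not reproduced here. All function names are our own choices, not the authors'.

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def diag_scaled_prox_step(x, grad, d, lam):
    # One diagonally scaled proximal step for min_x f(x) + lam * ||x||_1:
    # minimize grad^T (z - x) + 0.5 (z - x)^T D (z - x) + lam * ||z||_1
    # with D = diag(d) > 0 approximating the Hessian of the smooth part f.
    # The subproblem is separable, so it is solved in closed form.
    return soft_threshold(x - grad / d, lam / d)

def zhang_hager_update(C, Q, f_new, eta=0.85):
    # Zhang-Hager nonmonotone reference value: C tracks a weighted average
    # of past objective values, and the line search accepts sufficient
    # decrease relative to C rather than to the most recent objective value.
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return C_new, Q_new
```

With `d` equal to the all-ones vector, `diag_scaled_prox_step` reduces to the classical proximal gradient (ISTA) step; the diagonal scaling is what injects curvature information into each componentwise update.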


Fig. 1 and Fig. 2 (figures not included in this preview)


Data Availability

The authors confirm that the data supporting the findings of this study are available within the manuscript. Raw data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • Akyildiz IF, Su W, Sankarasubramaniam Y, Cayirci E (2002) Wireless sensor networks: a survey. Comput Netw 38(4):393–422


  • Aminifard Z, Babaie-Kafaki S (2022) An approximate Newton-type proximal method using symmetric rank-one updating formula for minimizing the nonsmooth composite functions. Optim Methods Softw 38:529–542. https://doi.org/10.1080/10556788.2022.2142587


  • Aminifard Z, Babaie-Kafaki S (2023) Diagonally scaled memoryless quasi-Newton methods with application to compressed sensing. J Ind Manag Optim 19(1):437


  • Aminifard Z, Hosseini A, Babaie-Kafaki S (2022) Modified conjugate gradient method for solving sparse recovery problem with nonconvex penalty. Signal Process 193:108424


  • Andrei N (2007) Convex functions. Adv Model Optim 9(2):257–267


  • Babaie-Kafaki S (2015) On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J Optim Theory Appl 167(1):91–101


  • Baraniuk RG, Goldstein T, Sankaranarayanan AC, Studer C, Veeraraghavan A, Wakin MB (2017) Compressive video sensing: algorithms, architectures, and applications. IEEE Signal Process Mag 34(1):52–66


  • Barzilai J, Borwein JM (1988) Two-point stepsize gradient methods. IMA J Numer Anal 8(1):141–148


  • Becker S, Bobin J, Candès EJ (2011) NESTA: a fast and accurate first-order method for sparse recovery. SIAM J Imaging Sci 4(1):1–39


  • Bobin J, Starck JL, Ottensamer R (2008) Compressed sensing in astronomy. IEEE J Sel Top Signal Process 2(5):718–726


  • Bruckstein AM, Donoho DL, Elad M (2009) From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev 51(1):34–81


  • Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52(2):489–509


  • Chang CC, Lin CJ (2020) LIBSVM data: classification, regression, and multi-label. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/

  • Combettes PL, Wajs VR (2005) Signal recovery by proximal forward-backward splitting. Multiscale Model Simul 4(4):1168–1200


  • Dettling M, Bühlmann P (2004) Finding predictive gene groups from microarray data. J Multivar Anal 90(1):106–131


  • Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91(2, Ser. A):201–213


  • Duarte MF, Davenport MA, Takhar D, Laska JN, Sun T, Kelly KF, Baraniuk RG (2008) Single-pixel imaging via compressive sampling. IEEE Signal Process Mag 25(2):83–91


  • Esmaeili H, Shabani S, Kimiaei M (2019) A new generalized shrinkage conjugate gradient method for sparse recovery. Calcolo 56(1):1–38


  • Hale ET, Yin W, Zhang Y (2007) A fixed-point continuation method for \(\ell _1\)-regularized minimization with applications to compressed sensing. CAAM, TR07–07, Rice University 43:44

  • Hale ET, Yin W, Zhang Y (2010) Fixed-point continuation applied to compressed sensing: implementation and numerical experiments. J Comput Math 28(2):170–194


  • Herman M, Strohmer T (2008) Compressed sensing radar. In: 2008 IEEE international conference on acoustics, speech and signal processing, pp 1509–1512

  • Huang Y, Liu H (2015) A Barzilai-Borwein type method for minimizing composite functions. Numer Algorithms 69(4):819–838


  • Kogan S, Levin D, Routledge BR, Sagi JS, Smith NA (2009) Predicting risk from financial reports with regression. In: Proceedings of human language technologies: the 2009 annual conference of the North American Chapter of the Association for Computational Linguistics, pp. 272–280

  • Lichman M (2013) UCI Machine Learning Repository. http://archive.ics.uci.edu/ml

  • Nakayama S, Narushima Y, Yabe H (2021) Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions. Comput Optim Appl 79(1):127–154


  • Oren SS, Luenberger DG (1973) Self-scaling variable metric (SSVM) algorithms. I. Criteria and sufficient conditions for scaling a class of algorithms. Manag Sci 20(5):845–862


  • Oren SS, Spedicato E (1976) Optimal conditioning of self-scaling variable metric algorithms. Math Program 10(1):70–90


  • Panayirci E, Senol H, Uysal M, Poor HV (2015) Sparse channel estimation and equalization for OFDM-based underwater cooperative systems with amplify-and-forward relaying. IEEE Trans Signal Process 64(1):214–228


  • Saucedo A, Lefkimmiatis S, Rangwala N, Sung K (2017) Improved computational efficiency of locally low rank MRI reconstruction using iterative random patch adjustments. IEEE Trans Med Imaging 36(6):1209–1220


  • Sun B, Feng H, Chen K, Zhu X (2016) A deep learning framework of quantized compressed sensing for wireless neural recording. IEEE Access 4:5169–5178


  • Tseng P, Yun S (2009) A coordinate gradient descent method for nonsmooth separable minimization. Math Program 117(1–2):387–423


  • Wright SJ, Nowak RD, Figueiredo M (2009) Sparse reconstruction by separable approximation. IEEE Trans Signal Process 57(7):2479–2493


  • Xiao Y, Wu SY, Qi L (2014) Nonmonotone Barzilai-Borwein gradient algorithm for \(\ell _1\)-regularized nonsmooth minimization in compressive sensing. J Sci Comput 61(1):17–41


  • Zhang H, Hager WW (2004) A nonmonotone line search technique and its application to unconstrained optimization. SIAM J Optim 14(4):1043–1056



Acknowledgements

This research was supported in part by grant no. 4005578 from the Iran National Science Foundation (INSF), and in part by the Research Councils of Semnan University and the Free University of Bozen–Bolzano. The authors thank the anonymous reviewer for valuable comments and suggestions that helped improve the quality of this work.

Author information


Corresponding author

Correspondence to Zohre Aminifard.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by Gabriel Haeser.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Aminifard, Z., Babaie–Kafaki, S. A diagonally scaled Newton-type proximal method for minimization of the models with nonsmooth composite cost functions. Comp. Appl. Math. 42, 353 (2023). https://doi.org/10.1007/s40314-023-02494-5

