KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing

  • Original Article
  • Published in Neural Computing and Applications

Abstract

This work develops a smoothing recurrent neural network (SRNN) based on a smoothing approximation technique and an equivalent formulation of the Karush–Kuhn–Tucker (KKT) condition. The network is designed to handle the \(L_0\hbox {-norm}\) minimization model arising in compressed sensing, after the model is replaced by a nonconvex nonsmooth approximation. The existence, uniqueness and limit behavior of the network's solutions are studied in detail by means of appropriate mathematical tools. Several kinds of nonconvex approximation functions are examined to determine which is best suited for SRNN when recovering sparse signals under different kinds of sensing matrices. Comparative experiments validate that, among the chosen approximation functions, the transformed L1 function (TL1), the logarithm function (Log) and the arctangent penalty function are effective for sparse recovery; SRNN-TL1 is robust and insensitive to the coherence of the sensing matrix, and it is competitive against several existing discrete numerical algorithms and neural network methods for compressed sensing problems.
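To illustrate the kind of nonconvex surrogates for the \(L_0\) norm named in the abstract, the sketch below evaluates three common parameterizations of the TL1, Log and arctangent penalties on a toy compressed-sensing instance. These forms are standard choices from the sparse-recovery literature, not necessarily the exact formulas used in this paper, and all names (`tl1_penalty`, `A`, `x_true`, the parameters `a` and `eps`) are illustrative assumptions.

```python
import numpy as np

def tl1_penalty(x, a=1.0):
    """Transformed L1 (TL1) penalty, one common form: sum (a+1)|x_i| / (a + |x_i|).
    Elementwise it tends to the L0 indicator as a -> 0+ and to (scaled) L1 as a -> inf."""
    ax = np.abs(x)
    return float(np.sum((a + 1.0) * ax / (a + ax)))

def log_penalty(x, eps=0.1):
    """Logarithmic penalty, one common form: sum log(1 + |x_i| / eps)."""
    return float(np.sum(np.log1p(np.abs(x) / eps)))

def atan_penalty(x, a=1.0):
    """Arctangent penalty, one common form: sum arctan(|x_i| / a)."""
    return float(np.sum(np.arctan(np.abs(x) / a)))

# Toy compressed-sensing setup: measurements y = A x_true with a sparse x_true.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                              # measurements, ambient dim, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Each penalty gives a small value on the k-sparse signal, mimicking ||x||_0 = k,
# which is what makes such surrogates useful in place of the discontinuous L0 norm.
print(tl1_penalty(x_true), log_penalty(x_true), atan_penalty(x_true), np.count_nonzero(x_true))
```

Note that each penalty is continuous but nonsmooth at zero, which is exactly why the paper pairs such surrogates with a smoothing approximation before defining the recurrent network dynamics.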



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61563009, the Science and Technology Foundation of Guizhou Province (No. LKQS201314) and the Foundation of Qiannan Normal University for Nationalities (No. 2014ZCSX18).

Author information

Correspondence to Zhuhong Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Wang, D., Zhang, Z. KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing. Neural Comput & Applic 31, 2905–2920 (2019). https://doi.org/10.1007/s00521-017-3239-6
