Abstract
This work studies a smoothing recurrent neural network (SRNN) built on a smoothing approximation technique and an equivalent form of the Karush–Kuhn–Tucker condition. The network is designed to handle the \(L_0\)-norm minimization model arising in compressed sensing, after that model is replaced by a nonconvex nonsmooth approximation. The existence, uniqueness, and limit behavior of the network's solutions are analyzed with standard mathematical tools. Several kinds of nonconvex approximation functions are examined to determine which is best suited for SRNN in sparse signal recovery under different sensing matrices. Comparative experiments validate that, among the chosen approximation functions, the transformed L1 function (TL1), the logarithm function (Log), and the arctangent penalty function are effective for sparse recovery; SRNN-TL1 is robust and insensitive to the coherence of the sensing matrix, and it is competitive with several existing discrete numerical algorithms and neural network methods for compressed sensing problems.
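To make the role of these penalties concrete, the following is a minimal sketch of componentwise nonconvex approximations to the \(L_0\) indicator, using parameterizations common in the sparse-recovery literature (the exact definitions and normalizations used in the paper may differ; the function names and default parameters here are illustrative assumptions):

```python
import numpy as np

def tl1(x, a=1.0):
    """Transformed L1 penalty (a+1)|x| / (a+|x|); tends to 1{x != 0} as a -> 0+.
    This is a commonly used form, assumed here for illustration."""
    ax = np.abs(x)
    return (a + 1.0) * ax / (a + ax)

def log_pen(x, eps=1e-2):
    """Logarithmic penalty, normalized so that log_pen(1) = 1."""
    return np.log1p(np.abs(x) / eps) / np.log1p(1.0 / eps)

def atan_pen(x, eps=1e-2):
    """Arctangent penalty, normalized into [0, 1)."""
    return np.arctan(np.abs(x) / eps) / (np.pi / 2.0)

def l0_surrogate(x, pen=tl1, **kw):
    """Surrogate for ||x||_0: sum of componentwise penalty values."""
    return float(np.sum(pen(np.asarray(x, dtype=float), **kw)))
```

As the smoothing parameter shrinks, each penalty approaches the 0/1 indicator of a nonzero component, so the sum approaches the true \(L_0\) count; this limit behavior is what the network's convergence analysis exploits.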
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant No. 61563009, the Science and Technology Foundation of Guizhou Province (No. LKQS201314) and the Foundation of Qiannan Normal University for Nationalities (No. 2014ZCSX18).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Wang, D., Zhang, Z. KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing. Neural Comput & Applic 31, 2905–2920 (2019). https://doi.org/10.1007/s00521-017-3239-6