
An analog neural network approach for the least absolute shrinkage and selection operator problem

  • ICONIP 2015
Neural Computing and Applications

Abstract

This paper addresses analog optimization for non-differentiable functions. The Lagrange programming neural network (LPNN) approach provides a systematic way to build analog neural networks for constrained optimization problems. Its drawback, however, is that it cannot handle non-differentiable functions. In compressive sampling, one of the key optimization problems is the least absolute shrinkage and selection operator (LASSO), in which the constraint is non-differentiable. This paper adopts the hidden state concept from the local competition algorithm to formulate an analog model for the LASSO problem, thereby overcoming the non-differentiability limitation of the LPNN approach. Under some conditions, the network at equilibrium yields the optimal solution of the LASSO, and we prove that these equilibrium points are stable. Simulations illustrate that the proposed analog model and the traditional digital method have similar mean squared error performance.
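
To give a concrete feel for the hidden-state idea, the following is a minimal numerical sketch of the local competition algorithm (LCA) dynamics from which the paper borrows the hidden-state concept. It is not the authors' LPNN circuit (which handles a constrained LASSO formulation); it simulates the standard LCA ordinary differential equation whose equilibria solve the penalized LASSO, and all names and parameter values (Phi, y, lam, tau, dt) are illustrative assumptions.

# Minimal sketch of LCA dynamics for the penalized LASSO
#     min_a 0.5*||y - Phi a||_2^2 + lam*||a||_1.
# Each neuron keeps a hidden internal state u and outputs a soft-thresholded
# value a; the ODE  tau*du/dt = Phi^T(y - Phi a) - (u - a)  is integrated
# with forward Euler steps. This is an illustration, not the paper's model.
import numpy as np

def soft_threshold(u, lam):
    """Element-wise soft-thresholding: the neuron's activation function."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_lasso(Phi, y, lam=0.1, tau=1.0, dt=0.01, steps=20000):
    """Euler-integrate the LCA ODE and return the equilibrium output a."""
    n = Phi.shape[1]
    u = np.zeros(n)                     # hidden (internal) states
    for _ in range(steps):
        a = soft_threshold(u, lam)      # visible outputs
        du = Phi.T @ (y - Phi @ a) - (u - a)
        u += (dt / tau) * du
    return soft_threshold(u, lam)

# Tiny usage example with a synthetic sparse signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 40, 111]] = [1.0, -0.8, 0.5]
y = Phi @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lca_lasso(Phi, y, lam=0.05)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])

At an equilibrium point, u - a = Phi^T(y - Phi a) together with a = soft_threshold(u, lam) reproduces the LASSO optimality conditions, which is why such continuous-time dynamics are a natural fit for an analog circuit: the hardware integrates the ODE and the optimum is read off at steady state.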




Acknowledgements

This work was partially supported by the Research Grants Council, Hong Kong, under Grant CityU 115612.

Author information


Corresponding author

Correspondence to Chi Sing Leung.

Ethics declarations

Conflict of interest

The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the submitted work.


About this article


Cite this article

Wang, H., Lee, C.M., Feng, R. et al. An analog neural network approach for the least absolute shrinkage and selection operator problem. Neural Comput & Applic 29, 389–400 (2018). https://doi.org/10.1007/s00521-017-2863-5

