Neurocomputing

Volume 64, March 2005, Pages 359–374

An improved neural network for convex quadratic optimization with application to real-time beamforming

https://doi.org/10.1016/j.neucom.2004.11.009

Abstract

This paper develops an improved neural network for solving convex quadratic optimization problems with general linear constraints. Compared with the existing primal–dual neural network and dual neural network for such problems, the proposed neural network has a lower implementation complexity. Unlike the Kennedy–Chua neural network, the proposed neural network converges to an exact optimal solution. Theoretical results and illustrative examples show that the proposed neural network converges rapidly to the optimal solution. Finally, the proposed neural network is effectively applied to real-time beamforming.

Introduction

Consider the following quadratic optimization problem
$$
\begin{aligned}
\text{minimize}\quad & \tfrac{1}{2}x^{T}Qx + c^{T}x\\
\text{subject to}\quad & Bx = b,\quad Ax \le d,\quad l \le x \le h,
\end{aligned}
\tag{1}
$$
where $Q \in \mathbb{R}^{n\times n}$ is a symmetric and positive definite matrix, $B \in \mathbb{R}^{m\times n}$, $A \in \mathbb{R}^{r\times n}$, $c, h, l \in \mathbb{R}^{n}$, $b \in \mathbb{R}^{m}$, and $d \in \mathbb{R}^{r}$. It is well known that quadratic optimization problems arise in a wide variety of scientific and engineering applications, including regression analysis, image and signal processing, parameter estimation, filter design, and robot control [1]. Many of these problems are time-varying in nature and thus have to be solved in real time [6], [16]. Because of the serial nature of digital computers, conventional numerical optimization techniques may not be effective for such real-time applications. Neural networks are composed of many massively connected neurons. The main advantage of the neural network approach to optimization is that the solution procedure is inherently parallel and distributed. Unlike other parallel algorithms, neural networks can be implemented physically in designated hardware such as application-specific integrated circuits, where optimization is carried out in a truly parallel and distributed manner. Because of this inherently parallel and distributed information processing, the convergence rate of the solution process does not decrease as the size of the problem increases. Therefore, the neural network approach can solve optimization problems in running times orders of magnitude faster than the most popular optimization algorithms executed on general-purpose digital computers [3]. Neural networks for optimization have received tremendous interest in recent years [2], [5], [7], [10], [12], [13], [14], [15], [17]. At present, there are several recurrent neural networks for solving quadratic optimization problems of the form (1). Kennedy and Chua [7] presented a primal neural network. Because that network contains a finite penalty parameter, it converges to an approximate solution only. To avoid using a penalty parameter, we proposed a primal–dual neural network and a dual neural network [12], [14]. The primal–dual neural network has a two-layer structure and the dual neural network requires computing the inverse of a matrix; thus, both neural networks have a model-complexity problem. Moreover, none of the existing neural networks for solving (1) can be guaranteed to converge exponentially to the optimal solution of (1). Studying alternative neural networks with low complexity and a fast convergence rate is therefore of importance and significance.
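For concreteness, the following is a minimal numerical sketch of one instance of problem (1), solved with a conventional general-purpose solver (SciPy's SLSQP) of the kind that the neural-network approach aims to outperform in real-time settings. The matrices Q, B, A and the vectors c, b, d, l, h below are illustrative assumptions, not data from the paper.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical small instance of problem (1); all values are illustrative.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])            # symmetric positive definite
    c = np.array([-8.0, -6.0])
    B = np.array([[1.0, 1.0]]); b = np.array([3.0])   # equality constraint  Bx = b
    A = np.array([[1.0, -1.0]]); d = np.array([2.0])  # inequality constraint Ax <= d
    l = np.array([0.0, 0.0]); h = np.array([5.0, 5.0])  # bounds  l <= x <= h

    obj = lambda x: 0.5 * x @ Q @ x + c @ x
    cons = [{"type": "eq",   "fun": lambda x: B @ x - b},
            {"type": "ineq", "fun": lambda x: d - A @ x}]   # SciPy expects g(x) >= 0
    res = minimize(obj, x0=np.array([1.0, 1.0]), method="SLSQP",
                   bounds=list(zip(l, h)), constraints=cons)
    print(res.x)   # optimal solution of the toy instance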

The objective of this paper is to develop an improved neural network for solving (1) with low complexity and fast convergence. The proposed neural network has a one-layer structure and does not require computing the inverse of a matrix. In adaptive antenna systems, the beamforming processor must generate an optimal set of beams to track the mobiles within the coverage area of the base station, and the implemented algorithms have to rapidly enhance the desired signal and suppress noise and interference at the output of an array of sensors; real-time solution algorithms are therefore highly desirable. As another objective of this paper, the proposed neural network is effectively applied to real-time beamforming. Theoretical results and illustrative examples show that the proposed neural network performs well, with a fast convergence rate.

Section snippets

Neural network models

In this section, we reformulate problem (1) and then develop a neural network model for solving (1).
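As background for the model development, the sketch below shows a classical one-layer projection network for the special case of (1) in which only the bound constraints l ≤ x ≤ h are kept; it is a generic model from the projection-network literature simulated with a simple Euler discretization, not the improved network proposed in this paper, and the matrices, gain, and step size are illustrative assumptions.

    import numpy as np

    # Minimal sketch of a one-layer projection network for the bound-constrained
    # special case of (1); a generic model from the literature, not this paper's.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite (assumed)
    c = np.array([-8.0, -6.0])
    l = np.zeros(2); h = 5.0 * np.ones(2)    # bounds  l <= x <= h

    def proj_box(x):
        # Projection onto the box [l, h]
        return np.clip(x, l, h)

    x = np.array([5.0, 5.0])                 # arbitrary initial state
    dt, lam = 1e-2, 1.0                      # Euler step and network gain (assumed)
    for _ in range(5000):
        # state equation:  dx/dt = lam * ( P_box(x - (Qx + c)) - x )
        x = x + dt * lam * (proj_box(x - (Q @ x + c)) - x)

    print(x)   # approximately minimizes 0.5 x'Qx + c'x over the box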

Convergence results

In this section, we prove that the modified neural network is globally convergent to optimal solutions with a fast convergence rate. A definition and two lemmas are first introduced.

Definition 1

A neural network is said to have an exponential convergence to $x^{*}$ if there exists $T_{0} > t_{0}$ such that the output trajectory $x(t)$ of this network satisfies
$$\|x(t) - x^{*}\| = O\!\left(e^{-\eta (t - t_{0})}\right), \qquad \forall\, t \ge T_{0},$$
where $\eta$ is a positive constant independent of the initial point.
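Definition 1 can be checked empirically on a trajectory: under exponential convergence, log‖x(t) − x*‖ eventually decays linearly in t with slope −η. A minimal sketch follows, using a plain gradient-flow trajectory on an assumed unconstrained toy problem (so x* is available in closed form); it is an illustration of the definition, not the paper's network.

    import numpy as np

    # Estimate the rate eta of Definition 1 from a simulated trajectory.
    # Toy dynamics: gradient flow dx/dt = -(Qx + c), unconstrained, so x* is known.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]]); c = np.array([-8.0, -6.0])
    x_star = np.linalg.solve(Q, -c)

    x, dt = np.array([5.0, 5.0]), 1e-2
    errs = []
    for _ in range(2000):
        x = x + dt * (-(Q @ x + c))          # forward-Euler step of the flow
        errs.append(np.linalg.norm(x - x_star))

    t = dt * np.arange(1, len(errs) + 1)
    eta = -np.polyfit(t, np.log(errs), 1)[0]  # slope of log-error gives -eta
    print(eta)   # roughly the smallest eigenvalue of Q for this linear flow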

Lemma 1

(i) For any initial point there exists a unique continuous solution z(t)

Application to real-time beamforming

Adaptive antenna techniques offer the possibility of increasing the performance of wireless communication systems by maximizing directional gain and improving robustness to multipath fading conditions [8]. The beamforming processor is used to generate an optimal set of beams to track the mobiles within the coverage area of the base station. The implemented algorithms have to rapidly enhance the desired signal and suppress noise and interference at the output of an array of sensors; thus, real-time solution algorithms are very desirable.
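To make the connection to (1) concrete, the sketch below shows the classical minimum-variance distortionless-response (MVDR) beamformer, which minimizes the quadratic output power w^H R w subject to the linear constraint a(θ0)^H w = 1, solved here by its well-known closed form rather than by the proposed neural network; the array geometry, angles, and noise level are illustrative assumptions.

    import numpy as np

    # MVDR beamformer as a quadratic program:
    #   minimize  w^H R w   subject to  a(theta0)^H w = 1.
    n, theta0 = 8, np.deg2rad(20.0)          # 8-element half-wavelength ULA (assumed)
    steer = lambda th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))
    a0 = steer(theta0)                       # steering vector of the desired signal

    # Sample covariance: one interferer at 60 degrees plus white noise (assumed)
    rng, snapshots = np.random.default_rng(0), 200
    s_int = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
    X = np.outer(steer(np.deg2rad(60.0)), s_int)
    X += 0.1 * (rng.standard_normal((n, snapshots))
                + 1j * rng.standard_normal((n, snapshots)))
    R = X @ X.conj().T / snapshots

    w = np.linalg.solve(R, a0)
    w /= a0.conj() @ w                       # w = R^{-1} a0 / (a0^H R^{-1} a0)
    print(np.abs(a0.conj() @ w))             # distortionless response: equals 1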

References

  • Z.G. Hou, A hierarchical optimization neural network for large-scale dynamic systems, Automatica (2001).
  • D.P. Bertsekas, Parallel and Distributed Computation: Numerical Methods (1989).
  • A. Bouzerdoum et al., Neural network for quadratic optimization with bound constraints, IEEE Trans. Neural Networks (1993).
  • A. Cichocki et al., Neural Networks for Optimization and Signal Processing (1993).
  • H. Cox et al., Robust adaptive beamforming, IEEE Trans. Acoust. Speech Signal Process. (1987).
  • N. Kalouptsidis, Signal Processing Systems: Theory and Design (1997).
  • M.P. Kennedy et al., Neural networks for nonlinear programming, IEEE Trans. Circuits Syst. (1988).
  • I.S. Reed et al., Rapid convergence rate in adaptive arrays, IEEE Trans. Aerosp. Electron. Syst. (1974).
  • S. Vorobyov et al., Robust adaptive beamforming using worst-case performance optimization: a solution to the signal mismatch problem, IEEE Trans. Signal Process. (2003).

Youshen Xia received the B.S. and M.S. degrees in computational mathematics from Nanjing University, China, in 1982 and 1989, respectively. He received the Ph.D. degree from the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, in 2000. His present research interests include system identification, signal and image processing, and the design and analysis of recurrent neural networks for constrained optimization and their engineering applications.

Gang Feng received the B.Eng. and M.Eng. degrees in Automatic Control (Electrical Engineering) from Nanjing Aeronautical Institute, China, in 1982 and 1984, respectively, and the Ph.D. degree in Electrical Engineering from the University of Melbourne, Australia, in 1992. He has been with City University of Hong Kong since 2000 and was with the School of Electrical Engineering, University of New South Wales, Australia, from 1992 to 1999. He was awarded an Alexander von Humboldt Fellowship in 1997–1998, and was a visiting Fellow at the National University of Singapore (1997) and at Aachen University of Technology, Germany (1997–1998). He has authored and/or coauthored more than 90 refereed international journal papers and numerous conference papers. His current research interests include robust adaptive control, signal processing, piecewise linear systems, and intelligent systems and control. Dr. Feng is an associate editor of IEEE Transactions on Fuzzy Systems, IEEE Transactions on Systems, Man and Cybernetics, Part C, and the Journal of Control Theory and Applications, and was an associate editor of the Conference Editorial Board of the IEEE Control Systems Society.
