Multi-start stochastic competitive Hopfield neural network for frequency assignment problem in satellite communications

https://doi.org/10.1016/j.eswa.2010.06.027

Abstract

The objective of the frequency assignment problem (FAP) is to minimize cochannel interference between two satellite systems by rearranging the frequency assignment. In this paper, we first propose a competitive Hopfield neural network (CHNN) for the FAP. We then propose a stochastic CHNN (SCHNN) by introducing stochastic dynamics into the CHNN to help the network escape from local minima. To further improve the performance of the SCHNN, a multi-start (re-start) mechanism is superimposed on it. This mechanism alternates phases of cooling and reheating the stochastic dynamics, and thus provides a means to achieve an effective dynamic, oscillating balance between intensification and diversification during the search. Furthermore, a dynamic weighting-coefficient setting strategy is adopted in the energy function so that the constraints are satisfied and the objective is improved simultaneously. The proposed multi-start SCHNN (MS-SCHNN) is tested on a set of benchmark problems and a large number of randomly generated instances. Simulation results show that the MS-SCHNN outperforms several typical neural network algorithms, such as the GNN, TCNN, NCNN and NCNN-VT, as well as metaheuristic algorithms such as the hybrid SA.

Introduction

Wireless communication has received considerable attention in recent years due to its many applications, including mobile telephony, TV broadcasting and satellite communications. Frequency assignment problems (FAPs) arise in many different situations in the field of wireless communications. In this paper, we focus on the FAP in satellite communication systems, where the reduction of cochannel interference has become a major factor in system design (Funabiki and Nishikawa, 1997, Liu et al., 2007, Salcedo-Sanz et al., 2004, Salcedo-Sanz and Bousoño-Calzón, 2005, Wang et al., 2008). Furthermore, owing to the necessity of accommodating as many satellites as possible in geostationary orbit, interference reduction has become an even more important issue as the number of geostationary satellites grows. To reduce interference in practical situations, the rearrangement of frequency assignments is considered an effective measure (Funabiki and Nishikawa, 1997, Liu et al., 2007, Salcedo-Sanz et al., 2004, Salcedo-Sanz and Bousoño-Calzón, 2005, Wang et al., 2008).

There are two objectives in the FAP for satellite communications. The primary objective is to minimize the largest interference among the elements selected for the assignment, and the secondary objective is to minimize the total interference over all selected elements (Funabiki & Nishikawa, 1997). The FAP has been proven to be NP-complete (Mizuike & Ito, 1989). Thus heuristic algorithms, especially neural networks, have been proposed to deal with it.
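For concreteness, these two objectives can be written as follows (a sketch in our own illustrative notation, not necessarily the notation used in the paper): let x_ij ∈ {0, 1} indicate that carrier i of system 2 is assigned to segment j, and let e_ij be the cochannel interference incurred by that assignment. Then

\begin{align}
  \text{primary objective:}\quad   & \min \; \max_{i,j}\, e_{ij}\, x_{ij}, \\
  \text{secondary objective:}\quad & \min \; \sum_{i=1}^{N} \sum_{j=1}^{M} e_{ij}\, x_{ij},
\end{align}

where the minimization is taken over assignments x that satisfy the problem constraints.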

Mizuike and Ito (1986, 1989) proposed a segmentation of the frequency band and an exact branch-and-bound algorithm for the FAP. However, the branch-and-bound algorithm may fail on large instances since the FAP is NP-complete.

Kurokawa and Kozuka (1993) first proposed a Hopfield neural network (HNN) (Hopfield, 1982, Hopfield, 1984, Hopfield and Tank, 1985) consisting of M × M neurons for the FAP with N carriers and M segments. However, their network formulates the goal function only as the minimization of the total interference. Moreover, it cannot be applied to practical large-scale systems because of its very low convergence rate and its requirement of a large number of neurons and long computation time (Funabiki and Nishikawa, 1997, Kurokawa and Kozuka, 1993). For the FAP with N carriers and M segments, Funabiki and Nishikawa (1997) proposed a gradual neural network (GNN) consisting of N × M neurons. In the GNN, cost optimization is achieved by a gradual expansion scheme, while a binary neural network is in charge of the constraints. The GNN improves on the network of Kurokawa and Kozuka because it reduces the required number of neurons, but its multiphase search imposes a heavy computational burden, so the GNN cannot be applied to large-scale FAPs (Wang et al., 2008).

Salcedo-Sanz et al. (2004) and Salcedo-Sanz and Bousoño-Calzón (2005) combined a binary HNN (Hopfield, 1982) with simulated annealing (HopSA) and with a genetic algorithm (NG), respectively, for the FAP. Simulation results show that the HopSA (Salcedo-Sanz et al., 2004) outperforms the NG (Salcedo-Sanz & Bousoño-Calzón, 2005). However, both hybrid algorithms fail to obtain solutions for large problems due to excessive computation time (Wang et al., 2008).

To deal with the local minima of the continuous HNN (Hopfield, 1984), Chen and Aihara (1995) proposed a transiently chaotic neural network (TCNN) by adding a negative self-feedback term to the continuous HNN and then gradually decreasing it to stabilize the network. To improve the performance of the TCNN, Wang et al., 2004, Wang and Shi, 2006, Wang and Shi, 2007 proposed a noisy chaotic neural network (NCNN) by adding decaying stochastic noise to the TCNN. Liu et al. (2007) proposed a TCNN for the FAP. More recently, Wang et al. (2008) proposed an NCNN with variable thresholds (NCNN-VT) for the FAP. The NCNN-VT handles only the constraints with the energy function and maps the objective onto variable thresholds (biases) of the neurons. A similar objective mapping scheme was also proposed in Xu, Tang, and Wang (2004). Simulation results show that the NCNN-VT outperforms the earlier algorithms (Wang et al., 2008). However, in the NCNN-VT it is difficult to control and balance the chaotic dynamics, stochastic dynamics and gradient ascent dynamics so that the network converges to a stable equilibrium point corresponding to an acceptably near-optimal solution. Furthermore, the rate of convergence to feasible solutions is not very high, particularly when the problem size becomes large.
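To make these dynamics concrete, the following C fragment sketches a single-neuron update of this kind. It is an illustrative sketch with assumed parameter names, not the exact equations of Chen and Aihara (1995) or Wang et al.: the self-feedback strength z produces the transient chaos and decays every step, while the NCNN-style noise amplitude decays in parallel.

#include <math.h>
#include <stdlib.h>

/* Illustrative single-neuron TCNN/NCNN update (a sketch with assumed parameter
 * names, not the cited papers' exact equations).
 *   y       internal state of the neuron
 *   x       current output of the neuron
 *   net     weighted input received from the other neurons (plus bias)
 *   z       self-feedback strength producing the transient chaos
 *   a_noise amplitude of the injected noise (zero recovers the plain TCNN)   */
typedef struct {
    double k;      /* damping factor of the internal state                  */
    double alpha;  /* scaling of the Hopfield (energy-gradient) input       */
    double i0;     /* positive bias used in the self-feedback term          */
    double eps;    /* steepness parameter of the sigmoid output function    */
    double beta_z; /* decay rate of the self-feedback strength z            */
    double beta_n; /* decay rate of the noise amplitude                     */
} ChaoticParams;

static double sigmoid(double y, double eps)
{
    return 1.0 / (1.0 + exp(-y / eps));
}

/* One update step; returns the new output.  Because z and a_noise decay toward
 * zero, the dynamics anneal from chaotic/stochastic search toward plain
 * Hopfield convergence.                                                      */
static double chaotic_neuron_step(double *y, double x, double net,
                                  double *z, double *a_noise,
                                  const ChaoticParams *p)
{
    double noise = *a_noise * (2.0 * rand() / (double)RAND_MAX - 1.0);
    *y = p->k * (*y) + p->alpha * net - (*z) * (x - p->i0) + noise;
    *z       *= (1.0 - p->beta_z);  /* transient chaos: self-feedback fades  */
    *a_noise *= (1.0 - p->beta_n);  /* NCNN: the noise fades as well         */
    return sigmoid(*y, p->eps);
}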

In this paper, we first propose a competitive HNN (CHNN) for the FAP. Stochastic dynamics are then introduced into the CHNN to help the network escape from local minima. To further improve the performance of the resulting stochastic CHNN (SCHNN), a multi-start (re-start) mechanism is superimposed on it. This mechanism alternates phases of cooling and reheating the stochastic dynamics, thus providing a means to achieve an effective dynamic, oscillating balance between intensification and diversification during the search. Furthermore, a dynamic weighting-coefficient setting strategy is adopted in the energy function so that the constraints are satisfied and the objective is improved simultaneously. The proposed multi-start SCHNN (MS-SCHNN) is tested on a set of benchmark problems and on randomly generated test instances. Simulation results show that the MS-SCHNN is better than, or competitive with, several typical neural network algorithms, including the GNN, TCNN, NCNN and NCNN-VT, and is better than metaheuristic algorithms such as the hybrid SA.

The contributions of this paper are: (1) a novel MS-SCHNN is proposed by introducing stochastic dynamics and a multi-start strategy into the CHNN to help the network escape from local minima; the MS-SCHNN can be applied to a class of combinatorial optimization problems, which is a contribution to the HNN literature; (2) the MS-SCHNN with the dynamic weighting-coefficient setting strategy is applied to the FAP and obtains superior results, which is a contribution to the literature on the FAP; and (3) a comprehensive experimental comparison of the proposed MS-SCHNN with previous methods is provided.

The remaining sections of this paper are organized as follows. Section 2 describes the FAP in detail. Section 3 proposes a CHNN for the FAP. Section 4 proposes the MS-SCHNN for the FAP. Section 5 evaluates the MS-SCHNN on benchmark data sets. The last section concludes the paper.

Section snippets

Problem statement of FAP

In this section, we briefly describe the FAP in satellite communication systems as a combinatorial optimization problem with three constraints and two objectives, as in Funabiki and Nishikawa, 1997, Salcedo-Sanz et al., 2004, Salcedo-Sanz and Bousoño-Calzón, 2005, Liu et al., 2007, Wang et al., 2008.

Given two adjacent satellite systems, the overall objective of the FAP is to reduce the inter-system cochannel interference by rearranging the frequency assignment of the carriers in system 2, while fixing

CHNN for FAP

In this section, we first give the neural network representation for the FAP. Then, we propose a CHNN for the FAP based on this representation.
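As a rough illustration of how a competitive update over an N × M binary neuron array might operate (a sketch under our own assumptions; the actual representation and input function are those defined in the paper), each carrier row computes the input of its M candidate segments and fires only the neuron with the largest input, so that the one-assignment-per-carrier constraint is satisfied by construction.

/* Sketch of a winner-take-all (competitive) update over an N x M array of
 * binary neurons v[i][j], where v[i][j] = 1 means carrier i of system 2 is
 * assigned to segment j.  compute_input() is a placeholder for the network
 * input derived from the FAP energy function; the sizes are illustrative.  */
#define N_CARRIERS 10
#define M_SEGMENTS 32

extern double compute_input(int i, int j, int v[N_CARRIERS][M_SEGMENTS]);

void competitive_update(int v[N_CARRIERS][M_SEGMENTS])
{
    for (int i = 0; i < N_CARRIERS; ++i) {
        int winner = 0;
        double best = compute_input(i, 0, v);
        for (int j = 1; j < M_SEGMENTS; ++j) {
            double u = compute_input(i, j, v);
            if (u > best) {
                best = u;
                winner = j;
            }
        }
        /* Winner-take-all: exactly one neuron fires in each row, so the
         * "one segment per carrier" constraint holds after every update. */
        for (int j = 0; j < M_SEGMENTS; ++j)
            v[i][j] = (j == winner) ? 1 : 0;
    }
}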

Multi-start stochastic CHNN for FAP

In the previous section, a CHNN for the FAP was proposed. However, the CHNN has no mechanism to escape from local minima. In this section, we first propose a stochastic CHNN (SCHNN) that permits temporary energy ascent to help the CHNN escape from local minima by incorporating stochastic dynamics into the CHNN. Then, a multi-start (re-start) mechanism is imposed on the SCHNN in order to further improve its performance. Thus, a multi-start SCHNN (MS-SCHNN) is proposed
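A minimal sketch of such an outer loop is given below (the hooks, schedule and parameter values are our illustrative assumptions, not the paper's actual algorithm): the noise level is cooled step by step to intensify the search, and each new start reheats it to its initial value to diversify again, while the best feasible assignment found so far is retained.

#include <float.h>

/* Illustrative multi-start loop with alternating cooling and reheating of the
 * stochastic dynamics (assumed hooks and schedule, not the paper's algorithm). */
extern void   random_init(void);                /* re-initialise the network     */
extern void   stochastic_chnn_sweep(double t);  /* one noisy competitive sweep   */
extern int    is_feasible(void);                /* all constraints satisfied?    */
extern double solution_cost(void);              /* e.g. the largest interference */
extern void   save_best(void);                  /* record the current assignment */

void ms_schnn_search(int n_starts, int steps_per_start)
{
    const double t_init  = 1.0;   /* assumed initial noise level     */
    const double cooling = 0.99;  /* assumed geometric cooling rate  */
    double best = DBL_MAX;

    for (int s = 0; s < n_starts; ++s) {
        double temp = t_init;     /* re-start: reheat the dynamics   */
        random_init();
        for (int t = 0; t < steps_per_start; ++t) {
            stochastic_chnn_sweep(temp);   /* search intensifies as temp falls */
            temp *= cooling;               /* cooling phase                    */
            if (is_feasible() && solution_cost() < best) {
                best = solution_cost();
                save_best();
            }
        }
    }
}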

Simulation results and discussions

To assess the performance of the MS-SCHNN, simulations were implemented in C on a PC (Pentium Dual 1.6 GHz, 1.0 GB RAM). In this section we describe the benchmark data sets and the parameter settings, and present the results of the MS-SCHNN on these data sets. We also compare the results with those of several typical neural network algorithms, namely the GNN, TCNN, NCNN and NCNN-VT, and with a metaheuristic algorithm, the hybrid SA.

Conclusions and future works

In this paper, we first propose a CHNN for the FAP. Stochastic dynamics are then introduced into the CHNN to help the network escape from local minima. To further improve the performance of the CHNN, a multi-start (re-start) mechanism is introduced into the stochastic CHNN (SCHNN). Furthermore, a dynamic weighting-coefficient setting strategy is adopted in the energy function so that the constraints are satisfied and the objective is improved simultaneously. The proposed

Acknowledgments

The authors thank Wang et al. (2008) for providing the original source code of the NCNN-VT. This work was supported by the National Natural Science Foundation of China (60805026, 60905038), the Specialized Research Fund for the Doctoral Program of Higher Education (20070558052), and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry (2007-1108).

References (28)

  • J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences (1982).
  • J.J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences (1984).
  • J.J. Hopfield et al. Neural computation of decisions in optimization problems. Biological Cybernetics (1985).
  • S. Kirkpatrick et al. Optimization by simulated annealing. Science (1983).