Soft output decision convolutional (SONNA) decoders based on the application of neural networks

https://doi.org/10.1016/j.engappai.2007.03.009

Abstract

The paper investigates the operating principles and BER characteristics of a new soft decision algorithm for decoding convolutional codes that is based on the application of neural networks. The novelty of the algorithm lies in its capability to generate soft output estimates of the encoded message bits. For this purpose the noise energy function, which is defined and used for neural network decoding of convolutional codes, is related to the well-known log-likelihood function, and the soft decision decoding rule is defined and derived. The BER curves obtained for this novel algorithm are compared to those obtained with the Viterbi algorithm and the gradient descent algorithm. Based on the theoretical model, a simulator of the coded communication system has been developed and used to confirm the theoretically expected results. It was found that the performance of the proposed soft decision decoder is comparable to or better than that of the recurrent neural network decoder and of decoders based on the Viterbi algorithm.

Introduction

Artificial neural networks (ANNs) have been applied in various fields of digital communications, primarily due to their non-linear processing, potential for parallel processing and efficient hardware implementations (Ibnkahla, 2000). Recently, substantial efforts have been made to apply ANNs in error control coding, initially for block code decoding (Bruck and Blaum, 1989; Ciocoiu, 1996) and more recently for convolutional (Wicker and Wang, 1996; Hamalainen and Henriksson, 1999a, Hamalainen and Henriksson, 1999b, Hamalainen and Henriksson, 2000; Berber et al., 2005) and turbo code decoding (Buckley and Wicker, 1999). It was shown that the decoding problem can be formulated as a function minimization problem, and the gradient descent algorithm was applied to solve it. The prototyping of neural network decoders was presented in Salcic et al. (2006), and the algorithm was also implemented in hardware using floating-gate MOSFET circuits (Rantala et al., 2001).

It should be noted that the theory developed before 2000 was limited to rate-1/n codes, defined only for small values of n, with various constraint lengths. The general expressions needed for decoder development, presented in the form of the noise energy function, were derived for rate-1/n codes in Secker et al. (2003) and Berber et al. (2005), and for the general rate-k/n code in Berber (2004). In this way the central problem of decoding based on the application of neural networks became solvable, reduced to the minimization of the derived differentiable noise energy function.

The Viterbi algorithm is known to be asymptotically optimal in the maximum likelihood sense, as can be seen from Appendix A, where a brief explanation of this algorithm is presented. However, because its complexity increases exponentially with the constraint length of the encoder, further research was undertaken to reduce this complexity (Zadeh and Soleymani, 2005b). For this purpose an adaptive algorithm, called the adaptive M-algorithm, was developed; it starts decoding with a reduced number of survivor paths and increases the number of paths depending on the error state of the channel, which is monitored by a CRC error detection code. Practical problems related to increasing the decoding speed of the Viterbi decoder are analysed in Tang and Parhi (2005), where three techniques are proposed to overcome them. A new decoder architecture is proposed that combines three existing techniques, look-ahead, sliding-block and parallel processing, to increase the decoding speed at a relatively low clock frequency.

The soft output Viterbi algorithm (SOVA) has proved to be a very effective algorithm that significantly improves on the coding gain achieved by the hard decision Viterbi algorithm. The algorithm is based on calculating the log-likelihood function (LLF) and using it to estimate the message bits. If the LLF is available, it is possible to include estimates of the a priori probabilities and to develop turbo decoders based on the SOVA algorithm.
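For reference, the per-bit log-likelihood value that such soft output algorithms compute can be written in the standard form below (the notation is assumed here: $b(k)$ is the $k$-th message bit and $\mathbf{r}$ the received sequence):

```latex
L\big(b(k)\big) \;=\; \ln\frac{P\{b(k)=+1 \mid \mathbf{r}\}}{P\{b(k)=-1 \mid \mathbf{r}\}}
```

The sign of $L(b(k))$ gives the hard decision and its magnitude the reliability, which is what makes such estimates reusable as a priori information in iterative (turbo) schemes.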

The intention of this paper is to extend the results presented in Berber (2004) and Berber et al. (2005) by developing and simulating a soft output decoding algorithm based on the application of neural networks (SONNA). It will be shown by numerical calculations that the SONNA algorithm can be used to decode convolutional codes (Berber, 2004). For that purpose the LLF is derived, and the algorithm for soft decoding is developed and simulated. According to this algorithm, an iterative decoding procedure is implemented and simulated that starts with a specified set of message bits and finishes when the last estimated set of message bits is identical to the set obtained in the previous iteration step, as sketched below. The algorithm is demonstrated on a 1/2-rate convolutional code.
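As a minimal sketch of this stopping rule (not the authors' exact implementation; `update_fn`, the initial estimate `b_init` and the iteration cap are assumptions made here), the iteration could be organised as follows:

```python
import numpy as np

def iterative_decode(r, update_fn, b_init, max_iter=100):
    """Fixed-point decoding loop matching the stopping rule in the text:
    iterate until the estimated set of message bits is identical to the set
    obtained in the previous step. `update_fn(r, b)` is a hypothetical
    one-step bit-update rule, e.g. a gradient-descent step on the noise
    energy function."""
    b = np.asarray(b_init, dtype=float)
    for _ in range(max_iter):
        b_next = update_fn(r, b)
        # stop when the hard decisions repeat between two successive steps
        if np.array_equal(np.sign(b_next), np.sign(b)):
            return np.sign(b_next)
        b = b_next
    return np.sign(b)
```

A concrete `update_fn` for the example 1/2-rate code is sketched in the numerical example section below.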

Section snippets

Encoder operation

Traditionally, the decoding of convolutional codes has been performed using the Viterbi algorithm (VA), which is asymptotically optimum in the maximum likelihood sense (Viterbi, 1967). Due to the importance of the Viterbi algorithm, a concise explanation of it is presented in Appendix A. It was also shown that the convolutional decoding problem can be expressed as a function minimization problem, where the task of the decoder is to minimize the noise energy function, which represents the Euclidean distance between the received sequence and the possible transmitted code sequences.
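For comparison with the neural approach, a minimal Viterbi decoder for the paper's example rate-1/2, L=3 code can use the same squared Euclidean distance as its branch metric. The sketch below assumes bipolar symbols, an all-ones (binary zero) initial encoder state, and the generator taps implied by the noise energy function given later; it is an illustration, not the decoder evaluated in the paper.

```python
import numpy as np

def viterbi_decode(r):
    """Minimal soft-input Viterbi decoder for the example rate-1/2, L=3 code
    (c1(k) = b(k), c2(k) = b(k)*b(k-2) in bipolar form, taps assumed from the
    noise energy function in the text). `r` has shape (T, 2) and holds the
    received noisy bipolar symbols; the branch metric is the squared
    Euclidean distance, i.e. the same noise energy the neural decoder
    minimises."""
    T = len(r)
    states = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]  # (b(k-1), b(k-2))
    metric = {s: (0.0 if s == (1, 1) else np.inf) for s in states}  # all-ones start
    paths = {s: [] for s in states}
    for k in range(T):
        new_metric, new_paths = {}, {}
        for (s1, s2) in states:
            for b in (-1, 1):                      # candidate message bit b(k)
                c = np.array([b, b * s2])          # encoder output on this branch
                m = metric[(s1, s2)] + float(np.sum((r[k] - c) ** 2))
                ns = (b, s1)                       # next state (b(k), b(k-1))
                if ns not in new_metric or m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[(s1, s2)] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)             # trace back from cheapest state
    return paths[best]
```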

Development of soft decision neural network algorithm (SONNA)

In the following analysis we will assume that the transmission channel is memoryless; thus, the noise affecting a given bit is independent of the noise affecting any preceding or succeeding bit. Assuming that the message sequence is transmitted in the time interval from $t=0$ to $t=T$, the word error probability can be expressed in the form
$$P_W = 1-\int_{\mathbf{r}} p(\mathbf{b},\mathbf{r})\,d\mathbf{r} = 1-\int_{\mathbf{r}} p(\mathbf{b}\mid\mathbf{r})\,p(\mathbf{r})\,d\mathbf{r},$$
where the received codeword is a vector expressed in the form $\mathbf{r}=(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_T)$, with $T$ the number of sets containing $n$ encoded bits.
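Under the memoryless AWGN assumption above, the link between this probabilistic criterion and the noise energy function is the standard step sketched below (notation assumed here: $\mathbf{c}(\mathbf{b})$ is the codeword produced by message $\mathbf{b}$, and equiprobable messages are assumed):

```latex
p(\mathbf{r}\mid\mathbf{b}) \;\propto\; \exp\!\Big(-\tfrac{1}{N_0}\,\lVert\mathbf{r}-\mathbf{c}(\mathbf{b})\rVert^{2}\Big)
\;\;\Longrightarrow\;\;
\hat{\mathbf{b}} \;=\; \arg\max_{\mathbf{b}}\, p(\mathbf{b}\mid\mathbf{r})
\;=\; \arg\min_{\mathbf{b}}\, \lVert\mathbf{r}-\mathbf{c}(\mathbf{b})\rVert^{2}
```

Minimising the word error probability therefore reduces to minimising the Euclidean distance, i.e. the noise energy function.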

Demonstration of the basic algorithm on a numerical example

The expressions for the general case encoder were applied to an example 1/2-rate encoder with constraint length L=3, which is shown in Fig. 2 together with the unit impulse generator matrix obtained from the general form in (1).
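A small sketch of this example encoder in bipolar form, with the two output streams read off the noise energy function given next (the all-ones initial register state is an assumption of this sketch):

```python
import numpy as np

def encode_half_rate(b):
    """Bipolar encoder for the example rate-1/2, L=3 code: c1(k) = b(k),
    c2(k) = b(k) * b(k-2), bits in {-1, +1}. Initial register contents are
    assumed to be +1 (binary zeros)."""
    b = np.asarray(b)
    b_pad = np.concatenate(([1, 1], b))   # prepend assumed b(-2), b(-1)
    c1 = b_pad[2:]                        # c1(k) = b(k)
    c2 = b_pad[2:] * b_pad[:-2]           # c2(k) = b(k) * b(k-2)
    return np.stack([c1, c2], axis=1)     # shape (T, 2), one row per bit interval
```

For example, `encode_half_rate([1, -1, -1, 1])` returns the four bipolar symbol pairs that, after channel noise is added, form the received sequence `r` used by the decoders above.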

For this encoder the noise energy function can be obtained from (4) and expressed as
$$f[b(k)] = \sum_{s=0}^{T}\sum_{j=1}^{2}\Big[r_j(k+s)-\prod_{i=1}^{3}b(s+k+1-i)^{g_{j,i}}\Big]^2 = \sum_{s=0}^{T}\Big\{[r_1(k+s)-b(k+s)]^2+[r_2(k+s)-b(k+s)\,b(k+s-2)]^2\Big\}$$
and the first partial derivative as
$$\frac{\partial f(b(k))}{\partial b(k)} = -2r_1(k)-2r_2(k)\,b(k-2)-2r_2(k+2)\,b(k+2)+6\,b(k).$$
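A direct transcription of this partial derivative, plus one synchronous gradient-descent sweep over all bits, could look as follows (the boundary convention of dropping out-of-range terms and the step size `alpha` are assumptions of this sketch, not taken from the paper):

```python
import numpy as np

def grad_bk(r, b, k):
    """df/db(k) for the example code, following the derivative above:
    -2 r1(k) - 2 r2(k) b(k-2) - 2 r2(k+2) b(k+2) + 6 b(k).
    `r` has shape (T, 2) with columns (r1, r2); `b` is the current bipolar
    estimate. Terms with out-of-range indices are dropped, an assumed
    boundary convention that also slightly alters the constant term."""
    g = -2.0 * r[k, 0] + 6.0 * b[k]
    if k >= 2:
        g -= 2.0 * r[k, 1] * b[k - 2]
    if k + 2 < len(b):
        g -= 2.0 * r[k + 2, 1] * b[k + 2]
    return g

def gradient_step(r, b, alpha=0.1):
    """One synchronous update of every bit estimate; usable as the
    `update_fn` of the iteration loop sketched in the Introduction."""
    b = np.asarray(b, dtype=float)
    return np.array([b[k] - alpha * grad_bk(r, b, k) for k in range(len(b))])
```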

Conclusions

A mathematical model for the soft output decoding algorithm, based on the application of neural networks, is developed and simulated. The log-likelihood function is defined as the difference between the noise energy function of the message bits and the noise energy function of their competitors. The proposed decoding algorithm is demonstrated on a convolutional encoder of traditional structure by calculating the updated values of the message bits. Then, the algorithm is simulated on a traditionally used 1/2-rate convolutional code.

References (21)

  • Berber, S.M., et al., 2005. Theory and application of neural networks for 1/n rate convolutional decoders. Engineering Applications of Artificial Intelligence.
  • Berber, S.M., 2004. Soft Decision Output Decoding (SONNA) Algorithm for Convolutional Codes Based on Artificial Neural...
  • Box, M.J., Davies, D., Swann, W.H., 1969. Non-linear optimisation techniques. Mathematical and statistical techniques...
  • Bruck, J., Blaum, M., 1989. Neural networks, error-correcting codes, and polynomials over the binary n-cube. IEEE Transactions on Information Theory.
  • Buckley, M.E., et al., 1999. A neural network for predicting decoder error in turbo decoders. IEEE Communications Letters.
  • Cichocki, A., et al., 1993. Neural Networks for Optimisation and Signal Processing.
  • Ciocoiu, I.B., 1996. Analog decoding using a gradient-type neural network. IEEE Transactions on Neural Networks.
  • Hamalainen, A., Henriksson, J., 1999a. A Recurrent Neural Decoder for Convolutional Codes. In: Proceedings of 1999 IEEE...
  • Hamalainen, A., Henriksson, J., 1999b. Convolutional Decoding Using Recurrent Neural Networks. In: Proceedings of...
  • Hamalainen, A., Henriksson, J., 2000. Novel Use of Channel Information in a Neural Convolutional Decoder. In:...
