1 Introduction

Since their proposal by Gallager in 1963 [3], Low-Density Parity-Check (LDPC) codes have attracted considerable research attention. In [3], two types of decoding algorithms were proposed. The first type consists of soft decision algorithms, such as the Belief Propagation (BP) algorithm [3], the Min-Sum (MS) algorithm [2] and the Offset Min-Sum (OMS) algorithm [1]. The second type consists of hard decision algorithms, known as Bit-Flipping (BF) algorithms.

The soft decision algorithms offer excellent bit error rate (BER) performance, but at the cost of high computational complexity. An iterative soft decision decoder does not stop until a valid codeword is found by parity checking or a preset maximum number of iterations is reached. In many cases, the maximum number of iterations exceeds one hundred to obtain better BER performance. At low signal-to-noise ratios (SNRs), however, a valid codeword often cannot be found. In these scenarios, the soft decision algorithms keep running until the maximum number of iterations is reached, so a great deal of energy and time is wasted, especially when the maximum number of iterations is large. How to detect and stop the decoding of undecodable blocks at an early stage is therefore worth studying.

Various early stopping criteria for soft decision algorithms have been proposed. In [6], the criterion is based on the convergence of the mean magnitude (CMM) of the log-likelihood ratio messages at the output of each decoding iteration. The CMM criterion detects and stops undecodable blocks very well, but at the cost of high computational complexity. In [8], the proposed stopping criterion is based on the variations of the number of satisfied parity-check constraints in the BP decoder. This criterion has lower complexity than the CMM, but it may cause a performance loss at high SNRs. Similarly, an efficient criterion based on the evolution of the number of reliable variable nodes is proposed in [11]. In [5], a method based on an intelligent combination of check equations and temporary hard decisions is proposed, and it can be pipelined with the updating of the check nodes.

Since soft decision algorithms have high complexity, early stopping criteria for these algorithms have been popular. In comparison, hard decision algorithms such as the BF algorithm [3] are very simple, so the energy and time saved by early stopping hardly seem worth the cost. In recent years, however, many variants of BF algorithms that employ soft information have been investigated to improve the BER performance, such as Gradient Descent Bit Flipping (GDBF) algorithms [10] and Noisy Gradient Descent Bit Flipping (NGDBF) algorithms [9]. These algorithms achieve better BER performance than earlier BF algorithms at the cost of increased complexity. Hence, early stopping for these higher-complexity BF algorithms is worth studying. In [4], an early stopping criterion for the Adaptive Threshold Bit Flipping (ATBF) algorithm, a variant of the GDBF algorithm, has been proposed. It is based on the threshold of the ATBF algorithm, and the resulting scheme is termed the Early Stopping Adaptive Threshold Bit Flipping (ES-ATBF) algorithm.

In this paper, we propose a new early stopping criterion for the multi-bit NGDBF (M-NGDBF) algorithm [9] with a fixed threshold. The M-NGDBF algorithm assumes knowledge of the SNR from an external SNR estimator, but such knowledge is not always available. In the absence of an external SNR estimator, we propose a new early stopping criterion that decides whether to stop the decoder at a certain iteration based on the number of flipped bits in that iteration. As a result, the proposed criterion is very simple to implement and has low complexity. We believe early stopping criteria for BF decoding algorithms should be much simpler than those for BP decoding algorithms, given the lower complexity of BF decoding. Simulation results show that the proposed criterion significantly reduces the number of decoding iterations at low SNRs with an extremely small complexity increase, while at high SNRs only a slight BER performance degradation is incurred.

2 System Model

Let \(\mathbf{H}\) denote a binary parity check matrix with \(m \times n\) dimensions and \(n>m\ge 1\). We consider the set of LDPC codes \(\mathcal {C}\) which is represented by

$$\begin{aligned} \mathcal {C} \triangleq \{\mathbf{c} \in F_{2}^{n} :\mathbf{H} \mathbf{c} =\mathbf{0} \}. \end{aligned}$$

In the present paper, we define the set of bipolar codes

$$\begin{aligned} \mathcal {\hat{C}} \triangleq \{ (( 1-2c_{1} ),( 1-2c_{2} ),\ldots ,( 1-2c_{n} )) : \mathbf{c} \in \mathcal {C} \} \end{aligned}$$

corresponding to \(\mathcal {C}\). Evidently, \(\mathcal {C}\) is mapped to \(\mathcal {\hat{C}}\) by taking binary (0, 1) to bipolar \((+1,-1)\). Assume that codewords are transmitted over a binary-input AWGN channel, with the transmission modeled by

$$\begin{aligned} \mathbf{y} = \mathbf {\hat{c}} + \mathbf{z}, \end{aligned}$$

where \(\mathbf {\hat{c}} \in \mathcal {\hat{C}}\) and \(\mathbf{z}\) is an additive white Gaussian noise vector. Each element of \(\mathbf{z}\) is an independent and identically distributed Gaussian random variable with zero mean and variance \(N_0/2\), where \(N_0\) is the noise power spectral density.

The hard decision vector \(\mathbf{x} \in \{ +1,-1 \}^{n}\) is obtained by

$$\begin{aligned} x_{k} = \mathrm{sign} (y_{k}), \quad k=1,2,...,n. \end{aligned}$$
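As a concrete illustration, the channel model and hard decision above can be sketched in a few lines of numpy; the helper names `transmit` and `hard_decision` are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(c_bin, N0):
    """Map a binary codeword to bipolar form and pass it through a
    binary-input AWGN channel with per-sample noise variance N0/2."""
    c_hat = 1.0 - 2.0 * c_bin            # (0, 1) -> (+1, -1)
    z = rng.normal(0.0, np.sqrt(N0 / 2.0), size=c_bin.shape)
    return c_hat + z

def hard_decision(y):
    """x_k = sign(y_k); the measure-zero tie y_k = 0 is broken toward +1."""
    return np.where(y >= 0.0, 1.0, -1.0)
```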

Let T be the maximum number of iterations specified by the algorithm. In each iteration, one or more bits may be flipped, depending on the algorithm. In this paper, we consider only the M-NGDBF algorithm.

The parity check matrix \(\mathbf{H}\) can also be represented by a Tanner graph with m Check Nodes (CNs) and n Variable Nodes (VNs). Let \(h_{ij}\) be the (i, j)-th element of \(\mathbf{H}\), \(i \in [1,m]\), \(j \in [1,n]\). If \(h_{ij}=1\), the i-th CN is said to be linked with the j-th VN. Let

$$\begin{aligned} N(i) \triangleq \{j \in [1,n]:h_{ij}=1\}, \quad i=1,2,...,m \end{aligned}$$

be the set of VNs linked with the i-th CN. The set of CNs linked with the j-th VN is defined similarly as

$$\begin{aligned} M(j) \triangleq \{i \in [1,m]:h_{ij}=1\}, \quad j=1,2,...,n. \end{aligned}$$

The parity check conditions of the codeword can be written as

$$\begin{aligned} s_i \triangleq \prod _{j \in N(i)} x_j, \quad i=1,2,...,m, \end{aligned}$$

and \(s_i\), \(i=1,\ldots,m\), are called the bipolar syndrome components. A word is a legitimate codeword if and only if every syndrome component equals \(+1\).
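The bipolar syndrome and the validity check can be computed directly from \(\mathbf{H}\). A minimal numpy sketch (function names are ours):

```python
import numpy as np

def bipolar_syndrome(H, x):
    """s_i = prod_{j in N(i)} x_j for each check node i.

    H is an m-by-n binary parity check matrix and x a bipolar
    hard-decision vector in {+1, -1}^n."""
    # Replace entries of x that do not participate in a check with +1,
    # so they do not affect the row product.
    masked = np.where(H == 1, x[np.newaxis, :], 1.0)
    return masked.prod(axis=1)

def is_codeword(H, x):
    """A word is legitimate iff every bipolar syndrome component is +1."""
    return bool(np.all(bipolar_syndrome(H, x) == 1.0))
```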

3 Early Stopping Criterion for Multi-bit Noisy Gradient Descent Bit Flipping Decoding

3.1 Preliminary

In this subsection, we briefly review the Noisy Gradient Descent Bit Flipping (NGDBF) algorithm.

In [10], the GDBF algorithm was proposed by recasting the maximum likelihood (ML) decoding problem as gradient descent optimization of an objective function. The objective function, which employs the syndrome components as a penalty term, is defined by

$$\begin{aligned} f(\mathbf{x})= \sum _{k=1}^{n}x_{k}y_{k} + \sum _{i=1}^{m}s_{i} . \end{aligned}$$
(1)

Thus, the codeword that solves the ML problem is also the one that maximizes the objective function. The local inversion function is obtained by taking the partial derivative with respect to a particular symbol \(x_k\), and is defined by

$$\begin{aligned} E_k = x_k \frac{\partial f(\mathbf{x})}{\partial x_k} = x_k y_k + \sum _{i \in M(k)} s_i. \end{aligned}$$
(2)
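Equations (1) and (2) translate directly into code. The following numpy sketch (function names are ours) evaluates the objective and the local inversion function:

```python
import numpy as np

def objective(H, x, y):
    """f(x) = sum_k x_k y_k + sum_i s_i, the GDBF objective of (1)."""
    s = np.where(H == 1, x[np.newaxis, :], 1.0).prod(axis=1)
    return float(x @ y + s.sum())

def inversion(H, x, y, k):
    """E_k = x_k y_k + sum_{i in M(k)} s_i, the local inversion
    function of (2); a small E_k marks bit k as a flip candidate."""
    s = np.where(H == 1, x[np.newaxis, :], 1.0).prod(axis=1)
    rows = np.flatnonzero(H[:, k])       # M(k): checks touching bit k
    return float(x[k] * y[k] + s[rows].sum())
```

Flipping a bit with a strongly negative \(E_k\) increases \(f(\mathbf{x})\), which is exactly the gradient-ascent intuition behind GDBF.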

To increase the objective function (1), every \(x_k\) whose \(E_k\) falls below the inversion threshold is flipped. To improve the performance of the GDBF algorithm, a noisy variant called NGDBF was proposed in [9]. Its inversion function is defined by

$$\begin{aligned} E_k = x_k y_k + w\sum _{i \in M(k)} s_i + q_k, \end{aligned}$$
(3)

where w is the syndrome weight parameter and \(q_k\), \(k=1,2,\ldots,n\), are independent and identically distributed Gaussian random variables drawn from \(\mathcal {N}(0,\eta ^2 N_0 /2)\), with \(\eta \) the noise scale parameter. The details of the M-NGDBF algorithm are given in Algorithm 1.

Algorithm 1. The M-NGDBF algorithm.

Note that Algorithm 1 is the non-adaptive version of the M-NGDBF algorithm [9]. The adaptive version is similar, except that in Step 4 the inversion threshold is a function of the SNR. Only the non-adaptive version is shown above, since we focus on the scenario without an external SNR estimator, in which the decoder does not know the channel SNR. When the channel SNR is unavailable, an early stopping criterion is meaningful for the M-NGDBF algorithm, as it can prevent the time and energy wasted on running the maximum number of iterations for an undecodable block at low SNRs.
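Since the pseudocode of Algorithm 1 is not reproduced here, the following is a hedged numpy sketch of the non-adaptive M-NGDBF decoder as described above: hard-decide, check the bipolar syndrome, evaluate the inversion function (3) with a Gaussian perturbation, and flip every bit below the threshold \(\theta\). The function and parameter names are ours, and the perturbation variance \(\eta^2\sigma^2\) follows Sect. 3.2, since no SNR estimate is assumed:

```python
import numpy as np

def m_ngdbf(H, y, w=0.75, eta=1.0, sigma=0.8, theta=0.3, T=100, seed=0):
    """Non-adaptive multi-bit NGDBF sketch.

    Returns (bipolar estimate, number of iterations used)."""
    rng = np.random.default_rng(seed)
    x = np.where(y >= 0.0, 1.0, -1.0)          # initial hard decision
    for t in range(1, T + 1):
        # Bipolar syndrome: s_i = prod_{j in N(i)} x_j.
        s = np.where(H == 1, x[np.newaxis, :], 1.0).prod(axis=1)
        if np.all(s == 1.0):                   # legitimate codeword found
            return x, t - 1
        # Inversion function (3): E_k = x_k y_k + w * sum_{i in M(k)} s_i + q_k.
        q = rng.normal(0.0, eta * sigma, size=x.shape)
        E = x * y + w * (H.T @ s) + q
        flip = E < theta                       # multi-bit flip rule
        x[flip] = -x[flip]
    return x, T                                # maximum iterations exhausted
```

With \(\sigma = 0\) the perturbation vanishes and the sketch reduces to a weighted GDBF with fixed threshold \(\theta\); the noise term \(q_k\) is what lets M-NGDBF escape local maxima of the objective (1).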

3.2 Proposed Early Stopping Criterion

In the absence of an external SNR estimator, the variance of \(q_k\), \(k=1,2,\ldots,n\), in Algorithm 1 can no longer depend on \(N_0\). We therefore assume that the i.i.d. Gaussian random variables \(q_k\) follow the distribution \(\mathcal {N}(0,\eta ^2 \sigma ^2)\), where \(\eta \) is the noise scale parameter of [9] and \(\sigma \) is a scale parameter used in the early stopping criterion.

For an early stopping criterion to be meaningful for the M-NGDBF algorithm, simplicity and ease of implementation are key. Figure 1 illustrates the average number of flipped bits for successful and failed M-NGDBF decoding at different iterations and SNRs. As can be seen, the average number of flipped bits in the failed decodings falls more slowly as the iterations proceed and remains higher than in the successful decodings, especially during the first twenty iterations. For example, the average number of flipped bits in the decoding failures at 2 dB hardly drops and almost always exceeds 140 throughout the iterations.

Fig. 1.

Average number of flipped bits for successful and failed M-NGDBF decoding of the PEGReg \(504\times 1008\), rate-\(\frac{1}{2}\) regular LDPC code at different iterations and SNRs, with parameters \(w=0.75\), \(\eta =1\), \(\sigma =0.8\), \(\theta =0.3\) and \(T=100\).

Based on this observation, we use the number of flipped bits at certain iterations to decide whether to stop the decoder. Algorithm 2 presents the details of the proposed early stopping criterion for the M-NGDBF algorithm. The algorithm starts the same way as the M-NGDBF algorithm, and in Step 5 the early stopping criterion is checked. Let \(\mathcal {S}\) denote the set of iterations at which the number of flipped bits is checked. If the current iteration is in \(\mathcal {S}\), we record the number of flipped bits in this iteration and compare it against a predefined threshold \(\lambda \). If the number of flipped bits exceeds \(\lambda \), we deem that the block will not be decoded successfully and stop the algorithm. Otherwise, the algorithm proceeds as in the M-NGDBF algorithm. Hence, \(\mathcal {S}\) and \(\lambda \) are the two parameters of the proposed algorithm, and their values are obtained empirically through simulations.

Algorithm 2. The ES-M-NGDBF algorithm.

We note that the proposed early stopping criterion has very low complexity and runs only a few times, i.e., only when the iteration number is in \(\mathcal {S}\). Hence, the cost of the proposed criterion is extremely small.
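Concretely, the criterion adds only a counter comparison to the decoder loop. A hedged numpy sketch of Algorithm 2, under the same assumptions as the decoder sketch in Sect. 3.1 (all names ours):

```python
import numpy as np

def es_m_ngdbf(H, y, S=frozenset({1, 10, 20}), lam=140, w=0.75,
               eta=1.0, sigma=0.8, theta=0.3, T=100, seed=0):
    """M-NGDBF with the proposed early stopping.

    At each iteration t in S, the number of flip candidates is compared
    with lam; exceeding it declares the block undecodable.
    Returns (bipolar estimate, iterations used, stopped_early)."""
    rng = np.random.default_rng(seed)
    x = np.where(y >= 0.0, 1.0, -1.0)          # initial hard decision
    for t in range(1, T + 1):
        s = np.where(H == 1, x[np.newaxis, :], 1.0).prod(axis=1)
        if np.all(s == 1.0):
            return x, t - 1, False             # decoded successfully
        q = rng.normal(0.0, eta * sigma, size=x.shape)
        E = x * y + w * (H.T @ s) + q          # inversion function (3)
        flip = E < theta
        if t in S and int(flip.sum()) > lam:
            return x, t, True                  # early stop: give up
        x[flip] = -x[flip]
    return x, T, False
```

The only additions relative to the plain decoder are the membership test `t in S` and the comparison `flip.sum() > lam`, which is why the hardware cost of the criterion is negligible.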

4 Simulation Results

Simulation results are based on the PEGReg \(504\times 1008\), regular (3, 6), rate-\(\frac{1}{2}\) LDPC code from MacKay’s online encyclopedia [7], over the AWGN channel with Binary Phase Shift Keying (BPSK) modulation. The Bit Error Rate (BER), Block Error Rate (BLER) and average number of iterations (ANI) of the M-NGDBF algorithm with and without early stopping are compared. As mentioned above, due to the absence of an external SNR estimator, all parameters of the M-NGDBF algorithm are held constant: \(w=0.75\), \(\eta =1\), \(\sigma =0.8\) and \(\theta =0.3\). The maximum number of decoding iterations T is set to 100. The M-NGDBF decoder with the early stopping criterion is denoted ES-M-NGDBF.

Fig. 2.

BER and BLER performance for the M-NGDBF and ES-M-NGDBF with \(\mathcal {S} =\{1,10,20\}\) and \(\lambda =140\).

Fig. 3.

ANI for the M-NGDBF and ES-M-NGDBF with \(\mathcal {S} =\{1,10,20\}\) and \(\lambda =140\).

Figures 2 and 3 present the BER/BLER and ANI of the M-NGDBF and ES-M-NGDBF algorithms. The ES-M-NGDBF algorithm with \(\mathcal {S} =\{1,10,20\}\) and \(\lambda =140\) has almost the same BER and BLER performance as the M-NGDBF algorithm at low to middle SNRs, and slight performance degradation at high SNRs. Since the complexity of the proposed early stopping criterion is extremely low, and it is designed for a BF algorithm, some BER and BLER degradation at high SNRs is acceptable. Note in Fig. 3 that the ANI of ES-M-NGDBF with \(\mathcal {S} =\{1,10,20\}\) and \(\lambda =140\) drops to about 30\(\%\), 50\(\%\) and 80\(\%\) of the ANI of M-NGDBF at 2 dB, 2.5 dB and 3 dB, respectively. The number of iterations hardly decreases at high SNRs, because it is already small there and the probability of decoding failure is quite low.

The ES-M-NGDBF algorithm is sensitive to the precise value of the threshold \(\lambda \), which is found empirically through a numerical search. This search may yield different values for different codes and algorithms. Some examples are shown in Figs. 4 and 5. According to Fig. 4, with the same set \(\mathcal {S}\), the ES-M-NGDBF with \(\lambda = 140\) suffers less BER and BLER performance loss than with \(\lambda = 130\), but the reduction in ANI is also smaller, as shown in Fig. 5. Since the BER performance degradation should not be too large, these results indicate that a good \(\lambda \) typically lies near the average number of flipped bits for failed decoding at the SNR of interest, such as the level of 140 in Fig. 1.
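The numerical search for \(\lambda\) can be organized as a simple sweep. The harness below is entirely our own scaffolding, not taken from the paper: for each candidate threshold it runs a user-supplied decoder over simulated blocks and records the block error rate and the ANI:

```python
def sweep_lambda(decode, channel, n_blocks, lambdas):
    """Toy harness for the empirical threshold search.

    decode(y, lam) -> (ok, iters): run the decoder with threshold lam.
    channel()      -> one received vector y.
    Returns {lam: (block_error_rate, average_iterations)}."""
    results = {}
    for lam in lambdas:
        errors = 0
        total_iters = 0
        for _ in range(n_blocks):
            ok, iters = decode(channel(), lam)
            errors += (not ok)          # count decoding failures
            total_iters += iters
        results[lam] = (errors / n_blocks, total_iters / n_blocks)
    return results
```

Sweeping a grid of candidates (e.g. 120 to 160) at the lowest SNR of interest and picking the largest \(\lambda\) whose BER loss is acceptable reproduces the kind of tradeoff shown in Figs. 4 and 5.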

Fig. 4.

Sensitivity of BER and BLER performance for the ES-M-NGDBF relative to the parameter \(\lambda \).

Fig. 5.

Sensitivity of ANI for the ES-M-NGDBF relative to the parameter \(\lambda \).

As with the threshold \(\lambda \), the set \(\mathcal {S}\) is found through an empirical search. Figures 6 and 7 present the results of an example with \(\lambda = 140\). As the number of elements in \(\mathcal {S}\) increases, the ANI at low SNRs becomes smaller, while the BER performance loss at high SNRs becomes larger. Figure 7 shows that the decrease in ANI at low SNRs is relatively large when the number of elements in \(\mathcal {S}\) grows from 1 to 2, and from 2 to 3; further increases hardly reduce the ANI. There is thus a tradeoff between the size of \(\mathcal {S}\) and the ANI at low SNRs. Since the BER and BLER degradation should not be too large, we set the number of elements in \(\mathcal {S}\) to 3 in the ES-M-NGDBF. The values of the three elements of \(\mathcal {S}\) are obtained through simulation.

Fig. 6.

Sensitivity of BER and BLER performance for the ES-M-NGDBF relative to the parameter \(\mathcal {S}\), with parameter \(\lambda = 140\).

Fig. 7.

Sensitivity of ANI for the ES-M-NGDBF relative to the parameter \(\mathcal {S}\), with parameter \(\lambda = 140\).

Table 1 analyzes the performance of ES-M-NGDBF with \(\mathcal {S} =\{1,10,20\}\) and \(\lambda =140\). The maximum number of error blocks in the simulation is 5000 at 2 dB, 3000 at 2.5 dB, 1000 at 3 dB, 500 at 3.5 dB, 200 at 4 dB and 100 at 4.5 dB. ES\(\_\)isRight\(\_\)counter and ES\(\_\)counter denote the numbers of correct and total early stops, respectively, and ES\(\_\)isRight\(\_\)P and ES\(\_\)P are the corresponding probabilities. ES\(\_\)miss\(\_\)counter denotes the number of decoding failures not stopped by the proposed criterion. ES\(\_\)isRight\(\_\)P is close to 1 at 2 dB, which means the proposed criterion is an efficient filter at low SNRs. ES\(\_\)P and ES\(\_\)isRight\(\_\)P decrease while ES\(\_\)miss\(\_\)counter increases as the SNR rises, which means the probability of decoding failure becomes smaller and undecodable blocks become harder to detect. Hence, there is some BER and BLER performance loss at high SNRs. With high-complexity early stopping criteria this loss might be avoided, but given that the proposed criterion has extremely low complexity, we consider the slight degradation at high SNRs tolerable.

Table 1. Performance analysis of proposed early stopping

5 Conclusion

In this paper, an early stopping criterion based on the number of flipped bits at certain iterations is proposed for the M-NGDBF decoding algorithm. With an extremely small complexity increase, the proposed criterion significantly reduces the number of decoding iterations at low SNRs, while the BER and BLER performance at high SNRs degrades only slightly.