Abstract

This paper presents a novel iterative reliability-based bit flipping (BF) algorithm for decoding low-density parity-check codes. The new decoder is a single-bit BF algorithm called two-round selection-based bit flipping. It introduces the idea of a two-round selection of the flipped bit, based successively on hard and soft received channel values. In the first round, a set of unreliable bits is identified; in the second round, a further selection picks out, among them, the bit to flip. In this second round, the initial belief about the received signals contributes efficiently to selecting the best candidate bit. We demonstrate through simulations over the binary-input additive white Gaussian noise channel and the Rayleigh fading channel that the proposed algorithm exhibits better decoding performance when compared with some well-known soft-decision BF algorithms. A complexity analysis of the proposal and a comparison to other BF decoders are also presented.

1. Introduction

Error-correcting codes (ECC) are used to control errors during data transmission. The fundamental principle of ECC is to add redundant bits to the transmitted data at the emitter, while at the receiver a decoding algorithm is used to detect and correct errors occurring over noisy communication channels. Low-density parity-check (LDPC) codes [1] are currently considered among the best next-generation ECC, allowing data transmission to approach Shannon’s limit [2]. These codes were first introduced by Gallager [1, 3] in his pioneering Ph.D. thesis in 1962. In 1996, LDPC codes were rediscovered by MacKay and Neal [4], who brought them back into prominence. When the sum-product algorithm (SPA) [5] is used for decoding, LDPC codes show near-Shannon-limit performance. Many state-of-the-art communication systems adopt LDPC codes in their standards, such as 5G network systems, second-generation satellite broadcasting systems (DVB-S2), and IEEE 802.11 systems.

LDPC codes can be decoded by many well-known decoding algorithms [6], such as the bit flipping (BF) algorithm and the SPA [7, 8]. The BF algorithm, proposed by Gallager [1, 3], is a hard-decision algorithm that, at each iteration, flips a set of bits based on the values computed by the flipping function (FF). Even though the BF algorithm is much simpler than the probabilistic (soft-decision) SPA, its bit error rate (BER) performance is far from optimal. Therefore, many variants of Gallager’s BF algorithm have been proposed to reduce the performance gap, in some cases at the cost of increased complexity. In this class of decoders we find the candidate bit-based bit flipping (CBBF) [9], the weighted candidate bit-based bit flipping (WCBBF) [10], and the single bit flipping (SBF) [11]. In CBBF, a reliability value of the unsatisfied parity-check equations is calculated in addition to the reliability of each bit. In WCBBF decoding, the authors use a weighted reliability of the parity-check equations, where the weights are fixed integers. The SBF decoder [11] flips a single, carefully chosen bit in each iteration. All of these decoders are hard-decision variants of the original Gallager BF algorithm.

Nevertheless, the hard-decision decoding algorithms show considerable performance degradation compared to the soft-decision algorithms. That is why BF techniques moved toward the category of simplified soft-decision decoding algorithms, which use not only hard information but also soft information during the decoding process. The work on this class of decoding algorithms starts with the weighted bit flipping (WBF) decoder [12]. Algorithms in this class allow improvements in performance without a large increase in complexity. Other algorithms, following the approach of WBF, tried to improve the reliability metric and/or the method of selecting the flipped bit (the FB); they achieve different degrees of enhancement over WBF in performance and convergence rate. In this class we cite the modified weighted bit flipping (MWBF) algorithm [13], the reliability-ratio-based weighted bit flipping (RRWBF) algorithm [14], gradient descent bit flipping (GDBF) [15, 16], and the dynamic weighted bit flipping (DWBF) decoding algorithm [17].

This paper introduces a new reliability-bit based bit-flipping algorithm for decoding LDPC codes called two-round selection-based bit flipping (TRSBF). We show hereafter that the proposed algorithm achieves good tradeoffs between BER performance and decoding complexity.

Our decoder is not in the class of variants of the WBF decoder, but it is indeed a soft decoder. At each iteration, a selection in two rounds is used to pick out the bit to flip. More precisely, first-round selection uses only hard decisions to form a candidate pool. The candidate bits selected in this round are those contained in more than some fixed number of unsatisfied parity-check equations. In the second round, the best candidate bit, which is the closest to the received word, is selected from the candidate pool and flipped. Here, the neighborhood is calculated in terms of Euclidean distance from the received word. At each iteration of our decoder, only a single bit will be flipped.

The remainder of this paper is organized as follows: in Section 2, we will present an overview of the BF algorithm and its variants. Section 3 provides details of the proposed soft information BF algorithm. The simulation results and threshold optimization are presented in Section 4. In Section 5, the decoding complexity comparison is discussed. Finally, conclusions are drawn in Section 6.

2. Bit Flipping Algorithm and Its Variants

2.1. Preliminaries

Let C be a binary LDPC code of length n, defined as the null space of a parity-check matrix H = [h_j,k] with m rows and n columns. The code C is said to be a regular LDPC code if the matrix H has constant column weight dc and constant row weight dr, and is said to be irregular otherwise.

We assume that codewords c = (c_0, c_1, …, c_{n−1}) obtained at the output of the encoder are modulated by a binary phase shift keying (BPSK) modulator (bit 0 mapped to +1, bit 1 to −1) and transmitted over a binary-input AWGN channel with variance σ². The sequence r = (r_0, r_1, …, r_{n−1}) stands for the sequence of soft channel values obtained at the receiver’s output. The hard-decision sequence z = (z_0, z_1, …, z_{n−1}) associated with r is obtained as follows:

z_k = 0 if r_k ≥ 0, and z_k = 1 if r_k < 0, for 0 ≤ k ≤ n−1.

We introduce the sequence x = (x_0, x_1, …, x_{n−1}) as the bipolar value sequence corresponding to the hard-decision sequence z and define it as follows:

x_k = (−1)^{z_k} = 1 − 2z_k, for 0 ≤ k ≤ n−1.

The syndrome s defined by s = z.HT is calculated at the first stage of the decoder. If the syndrome s = z.HT = 0, we can say that z is the most likely transmitted codeword. Otherwise, the decoding process begins.
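As a quick illustration, the syndrome test can be sketched in a few lines of Python. This is our own sketch, not the paper's C implementation, and the small (7, 4) Hamming matrix below is only a stand-in for an LDPC parity-check matrix:

```python
def syndrome(z, H):
    """s = z.H^T over GF(2): s_j is the XOR of the bits z_k with h_{j,k} = 1."""
    return [sum(h & b for h, b in zip(row, z)) % 2 for row in H]

# Stand-in parity-check matrix (a (7, 4) Hamming code, not an LDPC code).
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

c = [1, 0, 1, 1, 1, 0, 0]   # a valid codeword: its syndrome is all-zero
z = c[:]
z[3] ^= 1                   # a single bit error in position 3
# syndrome(c, H) == [0, 0, 0]; syndrome(z, H) equals column 3 of H: [1, 1, 1]
```

A nonzero syndrome is exactly the condition that starts the decoding process described above.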

We denote by N(j) = {k, 0 ≤ k ≤ n−1 : h_j,k = 1} the set of code bits that participate in the jth parity-check equation, and by M(k) = {j, 0 ≤ j ≤ m−1 : h_j,k = 1} the set of checks that contain the kth code bit.

2.2. Bit Flipping Algorithm

A typical BF algorithm (GBF) [1, 3] is a simple hard-decision algorithm that flips the bits involved in a large number of unsatisfied check equations, namely those whose count reaches a threshold T, because such bits are most probably incorrect. The algorithm terminates once all parity-check equations are satisfied, which means a valid codeword has been found, or once the maximum number of iterations pmax is reached. The algorithm requires only integer operations for decoding and can therefore be easily implemented in an electronic circuit.

The main part of the standard BF algorithm is the calculation, for each bit and each iteration, of the flipping metric called the FF. The FF values allow for tentative bit decisions and depend on the binary-valued checksums and on the bits connected to the check equations. For the BF algorithm, the FF can be equivalently expressed in the two following ways:

u_k = Σ_{j∈M(k)} s_j,  (4)

v_k = s · h_k,  (5)

where h_k denotes the kth column of H. The quantity v_k is the scalar product of the syndrome and the kth column of H, and it represents the number of unsatisfied parity checks containing the kth bit. It gives information about the reliability of the kth received bit. It is easy to prove that:

u_k = v_k, for 0 ≤ k ≤ n−1.

The sequence v = (v_0, v_1, …, v_{n−1}) is the so-called reliability profile of the received sequence r (or, more precisely, of its hard version z) [18].

The steps of the standard BF (GBF) algorithm are described in Algorithm 1 as follows:

Step 0: Initialize the parameters: p = 0 (p is the iteration counter) and T (depends on the variant of the algorithm).
Step 1: Compute s = (s_0, s_1, …, s_{m−1}) ← z·H^T. If s = 0, then stop the algorithm.
Step 2: Compute u_k for all indices k.
Step 3: If max_k(u_k) < T (or max_k(v_k) < T), then stop the algorithm.
Step 4: Flip all bits z_k in the sequence z with u_k ≥ T; p ← p + 1.
Step 5: If p > pmax, then stop the algorithm. Else go to Step 1.
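As a concrete illustration, the steps above can be sketched in Python as follows. This is a minimal sketch with our own helper names, using a small (7, 4) Hamming matrix as a stand-in for an LDPC code, and T chosen by hand:

```python
def gallager_bf(z, H, T, p_max=50):
    """Minimal sketch of the standard BF decoder (Algorithm 1)."""
    z, m, n = list(z), len(H), len(H[0])
    for _ in range(p_max):
        # Step 1: syndrome s = z.H^T over GF(2)
        s = [sum(H[j][k] & z[k] for k in range(n)) % 2 for j in range(m)]
        if not any(s):
            return z, True                   # valid codeword found
        # Step 2: u_k = number of unsatisfied parity checks containing bit k
        u = [sum(s[j] for j in range(m) if H[j][k]) for k in range(n)]
        # Step 3: stop if no bit reaches the flipping threshold T
        if max(u) < T:
            return z, False
        # Step 4: flip every bit with u_k >= T
        for k in range(n):
            if u[k] >= T:
                z[k] ^= 1
    return z, False                          # Step 5: iteration limit reached

# Stand-in (7, 4) Hamming parity-check matrix and a codeword with one bit error.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
c = [1, 0, 1, 1, 1, 0, 0]
z = c[:]; z[3] ^= 1
decoded, ok = gallager_bf(z, H, T=3)         # the single error is corrected
```

With T = 3, only bit 3 (which fails all three checks) is flipped in the first iteration, and the second iteration finds a zero syndrome.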
2.3. Single Bit Flipping Algorithm

The single bit flipping (SBF) algorithm [11] is a variant of the standard BF that stays within the scope of hard-decision decoding. In each iteration, SBF flips one carefully elected bit to avoid flipping correct bits, in contrast to the standard BF, which flips many bits per iteration and may therefore need a longer time to converge.

The SBF algorithm uses Equation (5) for computing the FF values. It does not use a threshold but instead finds the maximum of the FF values. Steps 3 and 4 of the SBF are as follows:
(i) Step 3: Find the index k0 = argmax_k (v_k), where k belongs to {0, …, n−1}.
(ii) Step 4: Flip the bit at position k0; p ← p + 1.

The SBF results in better performance than the standard BF and converges toward the final solution with a lower number of iterations [11].
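In code, the modification amounts to replacing the threshold test by an argmax. A small sketch (the function name is ours, and tie-breaking by lowest index is our choice; the paper does not specify a tie rule):

```python
def sbf_flip(z, u):
    """SBF Steps 3-4 (sketch): flip only the single bit with the largest FF value."""
    k0 = max(range(len(u)), key=lambda k: u[k])   # k0 = argmax_k v_k
    z = list(z)
    z[k0] ^= 1                                    # flip exactly one bit
    return z, k0

z2, k0 = sbf_flip([0, 0, 0, 0], [1, 0, 3, 2])     # flips bit 2, the max of u
```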

2.4. Candidate Bit Based Bit Flipping

The CBBF algorithm [9] uses the correlation data between the column vectors of the parity-check matrix and the syndrome vector to decode. It has minimal decoding complexity and does not require soft information.

The algorithm has a new parameter δ > 0, an integer-valued threshold for the decision on candidate bits. We calculate u_k as in Equation (5); if u_k > δ, then bit k is marked as a candidate bit for flipping.

Steps 3–6 of the CBBF are as follows (in the notation of [9]):
(i) Step 3: Find u_max = max_i u_i. If u_max ≤ δ, then terminate the decoding procedure.
(ii) Step 4: Let c_m be the number of candidate bits included in the mth parity-check equation and calculate w_m = c_m − 1 for m = 1, 2, …, M.
(iii) Step 5: For each i with u_i = u_max, calculate the metric r_i defined in [9].
(iv) Step 6: Flip all bits v_i with r_i = min_i r_i and u_i = u_max. Let l ← l + 1. If l > l_max, then terminate the decoding procedure. Go to Step 1.

2.5. Weighted Bit-Flipping Algorithm

The WBF algorithm [12] is a variant of the BF algorithm, and it is considered a soft decoder. Throughout the decoding process, the weights of the checks are determined by the soft received channel values and remain unchanged, because the weights reflect the decoder’s belief in the channel’s behavior. The FF of the WBF algorithm can be expressed by the following general formula:

E_k = Σ_{j∈M(k)} (2s_j − 1) · w_j,

where w_j = min_{i∈N(j)} |r_i| is the minimum absolute soft value among the bits participating in the jth parity-check equation.

The WBF algorithm combines the checksum values and the reliability of received messages to make decisions, therefore, the algorithm yields better decoding performance when compared with the BF algorithm.
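The WBF metric can be sketched as follows, assuming the standard formulation E_k = Σ_{j∈M(k)} (2s_j − 1)·w_j with the bit of largest E_k flipped; the function name and the toy matrix are ours:

```python
def wbf_metrics(r, z, H):
    """WBF flipping metric: E_k = sum_{j in M(k)} (2*s_j - 1) * w_j,
    where w_j = min |r_i| over the bits i participating in check j."""
    m, n = len(H), len(H[0])
    s = [sum(H[j][k] & z[k] for k in range(n)) % 2 for j in range(m)]
    w = [min(abs(r[k]) for k in range(n) if H[j][k]) for j in range(m)]
    return [sum((2 * s[j] - 1) * w[j] for j in range(m) if H[j][k])
            for k in range(n)]

# Toy example: bit 1 has a small magnitude and fails both checks,
# so it receives the largest metric and would be flipped.
H = [[1, 1, 0],
     [0, 1, 1]]
r = [0.9, -0.2, 1.1]
z = [0, 1, 0]                # hard decisions of r
E = wbf_metrics(r, z, H)     # E = [0.2, 0.4, 0.2] here
```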

2.6. Gradient Descent Bit-Flipping Algorithm

The GDBF algorithm [15] derives its FF, given in Equation (7), by computing the gradient of a nonlinear objective function comparable to the log-likelihood function of the bit decisions under the checksum constraints, instead of using a weighted checksum-based FF.

The FF of the GDBF algorithm can be expressed by the following general formula:

Δ_k = x_k·r_k + Σ_{j∈M(k)} ∏_{i∈N(j)} x_i,  (7)

where x is the bipolar hard-decision sequence defined above.

2.7. Noisy Gradient Descent Bit-Flipping Algorithm

In an effort to avoid undesirable local maxima, the noisy GDBF (NGDBF) [16] improves efficiency by adding a random perturbation at each iteration.

The FF of the NGDBF algorithm can be expressed by the following general formula:

Δ_k = x_k·r_k + w · Σ_{j∈M(k)} ∏_{i∈N(j)} x_i + q_k,

where w is a syndrome weight parameter and q_k is a Gaussian-distributed random variable with zero mean and variance σ² = η²N0/2, where 0 < η < 1. All the q_k are independent and identically distributed.
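The NGDBF metric can be sketched as below; the parameter values w, η, and N0 are illustrative, not taken from the paper, and with η = 0 the perturbation vanishes so the metric reduces to the GDBF FF of Equation (7):

```python
import math
import random

def ngdbf_metrics(r, x, H, w=0.75, eta=0.9, n0=1.0):
    """NGDBF inversion metric (sketch): Delta_k = x_k*r_k
    + w * sum over checks j containing k of prod_{i in N(j)} x_i
    + q_k, with q_k ~ N(0, eta^2 * N0 / 2).  Small Delta_k => flip candidate."""
    m, n = len(H), len(H[0])
    sigma = eta * math.sqrt(n0 / 2)
    # bipolar check value: +1 if check j is satisfied, -1 otherwise
    chk = [math.prod(x[k] for k in range(n) if H[j][k]) for j in range(m)]
    return [x[k] * r[k]
            + w * sum(chk[j] for j in range(m) if H[j][k])
            + random.gauss(0.0, sigma)
            for k in range(n)]

# With eta = 0 (no noise) this is the GDBF metric; the unreliable bit 1,
# which fails both checks, gets the smallest value.
H = [[1, 1, 0], [0, 1, 1]]
r = [0.9, -0.2, 1.1]
x = [1, -1, 1]                          # bipolar hard decisions of r
d = ngdbf_metrics(r, x, H, eta=0.0)     # d ~ [0.15, -1.3, 0.35]
```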

3. Proposed Bit-Flipping Algorithm

3.1. Motivation

All soft BF decoding algorithms combine hard metrics with soft metrics to decide which bit to flip. We believe that the magnitude of a received value carries additional information on the reliability of the corresponding bit and can be used separately for the decision about it. We describe hereafter a new algorithm based on a two-round selection strategy: the TRSBF decoder. The latter is made up of two stages; the first-round and second-round selections are made by the first and second stages, respectively. The proposed algorithm works as follows (see Figure 1).

The first stage, consisting of FF processing, is based only on hard information. This stage can be considered a filter for the next processing step. The second stage, on the other hand, is based on soft information. The first stage passes a Set B of unreliable bit positions to the second stage. In the second stage, no FF calculation is done; its processing provides a single bit (the bit at position k0) to be flipped. These two processes constitute a single iteration of the algorithm. In contrast, the known soft BF decoding algorithms combine hard and soft information into a single FF metric.

3.2. The Proposed TRSBF Algorithm

In the first stage, the TRSBF algorithm calculates the check-based value u_k for each symbol z_k using the FF in Equation (4). The value u_k represents the number of unsatisfied parity checks (UPC) containing the bit z_k. Then consider the set of bit positions that satisfy the threshold condition, denoted by:

B = {k, 0 ≤ k ≤ n−1 : u_k ≥ T},  (10)

where B is the set of bits of z with the largest numbers of parity-check failures. These bits are thus the least reliable ones.

The identification of Set B is the goal of the first stage of our algorithm. Set B can be seen as a pool of good candidate bits for a second selection.

A first step in the second stage consists of determining a number of tentative decision sequences z^(k). For every k belonging to the Set B, the sequence z^(k) is obtained from the sequence z by flipping the kth bit:

z^(k)_i = z_i for i ≠ k, and z^(k)_k = 1 − z_k.

Let x^(k) be the bipolar sequence corresponding to z^(k).

Then the TRSBF algorithm calculates the squared Euclidean distance between the received soft sequence r and the sequence x^(k), as shown in Equation (12):

d²(r, x^(k)) = Σ_{i=0}^{n−1} (r_i − x^(k)_i)².  (12)

But minimizing the squared Euclidean distance in Equation (12) over k is the same as minimizing the values μ_k, with μ_k defined as follows:

μ_k = r_k · x_k.  (13)
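This equivalence follows from a short expansion, sketched here in our notation, using (x^(k)_i)² = 1 and the fact that x^(k) differs from x only in position k, where x^(k)_k = −x_k:

```latex
d^2\!\left(r, x^{(k)}\right)
  = \sum_{i=0}^{n-1}\left(r_i - x_i^{(k)}\right)^2
  = \sum_{i=0}^{n-1} r_i^2 + n - 2\sum_{i=0}^{n-1} r_i\, x_i^{(k)}
  = \underbrace{\sum_{i=0}^{n-1} r_i^2 + n - 2\sum_{i=0}^{n-1} r_i\, x_i}_{\text{independent of } k}
    \;+\; 4\, r_k x_k .
```

Hence minimizing d²(r, x^(k)) over k amounts to minimizing r_k·x_k.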

The final step of the second stage of the algorithm consists of finding the index k0 of the tentative sequence nearest to the received sequence, as shown by Equation (14):

k0 = argmin_{k∈B} μ_k.  (14)

The position k0 is the bit position selected to be flipped in the current iteration.

The steps of the TRSBF algorithm are described in Algorithm 2 as follows:

Step 0: Initialize the parameters: p = 0 and T.
Step 1: Compute s = (s_0, s_1, …, s_{m−1}) ← z·H^T. If s = 0, then stop the algorithm.
Step 2: Identify the set B = {k : u_k ≥ T}. If B is empty, then stop the algorithm.
Step 3: Compute μ_k for each index k ∈ B.
Step 4: Find the index k0 = argmin_{k∈B} μ_k and flip the bit at position k0 in z; p ← p + 1.
Step 5: If p > pmax, then stop the algorithm. Else go to Step 1.
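Putting the two rounds together, one possible rendering of the algorithm is sketched below. This is our own Python sketch, not the authors' C implementation; BPSK maps bit 0 to +1 and bit 1 to −1, and the (7, 4) Hamming matrix is a stand-in for an LDPC code:

```python
def trsbf_decode(r, H, T, p_max=50):
    """Sketch of the TRSBF decoder (Algorithm 2); helper names are ours.
    r holds soft channel values under BPSK (bit 0 -> +1, bit 1 -> -1)."""
    m, n = len(H), len(H[0])
    z = [1 if ri < 0 else 0 for ri in r]                 # hard decisions
    for _ in range(p_max):
        # Step 1: syndrome s = z.H^T over GF(2)
        s = [sum(H[j][k] & z[k] for k in range(n)) % 2 for j in range(m)]
        if not any(s):
            return z, True
        # Step 2 (first round, hard): B = {k : u_k >= T}
        u = [sum(s[j] for j in range(m) if H[j][k]) for k in range(n)]
        B = [k for k in range(n) if u[k] >= T]
        if not B:
            return z, False
        # Steps 3-4 (second round, soft): mu_k = r_k * x_k with x_k = (-1)^{z_k};
        # flipping the k0 in B with the smallest mu_k gives the tentative
        # sequence closest to r in Euclidean distance
        k0 = min(B, key=lambda k: r[k] * (1 - 2 * z[k]))
        z[k0] ^= 1
    return z, False

# Stand-in (7, 4) Hamming matrix; the noisy, unreliable position is bit 3.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
r = [-0.8, 0.9, -1.1, 0.3, -0.7, 1.2, 0.6]   # transmitted word: [1,0,1,1,1,0,0]
decoded, ok = trsbf_decode(r, H, T=2)
```

Here the first round yields B = {0, 1, 2, 3}, and the second round picks bit 3 because its soft magnitude 0.3 is the smallest in the pool, which corrects the error in one iteration.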

The first stage of the decoding algorithm consists of Steps 1 and 2 and the second stage consists of Steps 3 and 4. The threshold T is to be optimized for each code (see Section 4.1).

One of the strengths of the proposed decoding algorithm is that it can be used for regular LDPC codes as well as irregular ones.

4. Simulation Results

In order to illustrate the decoding performance of the proposed decoding algorithm, four regular LDPC codes (see Table 1) are considered and used for the simulation: LDPC1, LDPC2, and LDPC4 belong to the difference-set codes (DSC) family [12], while LDPC3 is a pseudorandom (Gallager) code selected from MacKay’s online encyclopedia [19].

We carried out extensive simulations using a communication chain implemented in the C language. The chain contains an AWGN/Rayleigh channel and a BPSK modulator/demodulator. The Monte Carlo method was used for the simulations.

The performance is reported in terms of bit error rate (BER) and block error rate (BLER) as a function of the signal-to-noise ratio (SNR), with the default simulation parameters outlined in Table 2.

4.1. Optimization of the Threshold

The threshold T of the proposed decoding algorithm is optimized using simulation results for the three chosen codes. The criterion for optimization is the BER performance at several SNRs, measured over the AWGN channel with BPSK modulation using the Monte Carlo method.

These SNR values depend on the selected code. The interval in which T lies depends on the column weight dc of the code, but we always have T ∈ [1, dc].

Simulation results illustrate the behavior of the parameter T for the proposed algorithm (see Figures 2–4) for the three codes.

By observing these figures, the best T for each code is chosen. See Table 3 for the optimal values of the threshold parameter for the three codes. These values are used for the rest of this study.

From this observation, the best value of T is determined by:

T = ⌈(dc + 1)/2⌉.  (15)

Equation (15) comes from the following facts:

First, note that dc represents the maximum number of participations of a bit in the parity-check equations, and u_k represents the number of participations of a given bit in the parity-check equations that are not satisfied.

Then the rule that puts an index k into the Set B whenever u_k ≥ T is simply an application of the majority voting rule.
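Under this majority-voting reading, the threshold can be computed as below (a sketch in our notation; the function name is ours):

```python
import math

def majority_threshold(dc):
    """T = ceil((dc + 1) / 2): a bit enters Set B once more than half of the
    dc parity checks it participates in are unsatisfied."""
    return math.ceil((dc + 1) / 2)

# e.g. dc = 4 -> T = 3 (3 of 4 checks must fail); dc = 5 -> T = 3
```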

4.2. Performance Results for AWGN Channel

Figure 5 benchmarks our decoder against some known BF decoders for the LDPC1 code. As shown in Figure 5, our decoder has better BER performance than SBF and a slight advantage over GDBF: it presents coding gains of 0.95 and 0.2 dB at a BER of 3 × 10−5 compared to SBF and GDBF, respectively.

Figure 6 shows results for the LDPC2 code, where our decoder achieves coding gains of 0.5 and 0.7 dB compared to the GDBF and SBF algorithms, respectively, at a BER of 2 × 10−5.

Furthermore, our decoder presents a 0.1 dB coding gain compared to NGDBF for this code, unlike for the LDPC1 code, where we lose 0.6 dB of performance at a BER of 10−5. This observation may imply that longer codes gain a larger improvement over NGDBF.

Figure 7 compares our decoder to some known BF decoders for the LDPC3 code. The figure shows that our decoder outperforms the other algorithms in terms of BER performance: TRSBF achieves coding gains of 2.5, 1.9, 1, and 1 dB compared to BF, SBF, WBF, and CBBF, respectively, at a BER of 2 × 10−5.

Figure 8 benchmarks our decoder against some known BF decoders for the LDPC4 code. As shown in Figure 8, our decoder has better BER performance than SBF and GDBF: it presents coding gains of 1.4 and 0.65 dB at a BER of 10−5 compared to SBF and GDBF, respectively, and a 0.2 dB performance loss against NGDBF.

The results highlight the importance of preserving the reliability values of the received signals during the decoding process, since they constitute the channel’s initial belief about received-signal reliability. In addition, the obtained results show a correlation between code length and the performance of the proposed decoder.

4.3. Performance Results for Rayleigh Fading Channel

In order to evaluate our new decoder, we have simulated its performance over the Rayleigh fading channel and compared it with the performance of GBF and SBF using the four codes in Table 1.

The curves plotted in Figure 9 show that the performance of our TRSBF decoder is better than that of GBF and SBF: it presents coding gains of 4.3 and 2.4 dB at a BER of 10−5 compared to GBF and SBF, respectively.

Figure 10 benchmarks our decoder for the LDPC2 code. As shown in Figure 10, our decoder has better BER performance than GBF and SBF: it presents coding gains of 4 and 2 dB at a BER of 10−5 compared to GBF and SBF, respectively.

Figure 11 shows that our decoder outperforms the other algorithms in terms of BER performance for the LDPC3 code: TRSBF achieves coding gains of 1 and 8.5 dB compared to GBF and SBF, respectively, at a BER of 4 × 10−5.

Figure 12 shows results for the LDPC4 code, where our decoder achieves coding gains of 3 and 6 dB compared to the GDBF and SBF algorithms, respectively, at a BER of 10−5.

Thus, the performance gains over the Rayleigh fading channel are larger than those over the AWGN channel.

5. Complexity Study

5.1. Analytic Complexity

Let C, denoted (n, k)(dr, dc), be a regular binary LDPC code over GF(2). C is the null space of an m × n parity-check matrix H = [h_j,k] that has dc ones in each column and dr ones in each row, with m = n − k. Let Itr denote the average number of iterations.

Table 4 illustrates the complexity analysis of the BF algorithms (FF). The table shows that the complexity is polynomial in n, m, and Itr for all algorithms.

The soft algorithms (except the WBF) are more complex than the hard BF variants. On the other hand, the complexities of the proposed TRSBF, the GDBF, and the NGDBF algorithms are identical; since m and n are fixed parameters, the study of the average number of iterations (Itr) will determine which algorithm is more complex.

5.2. Average Number of Iterations

We analyze the average number of iterations with respect to the SNR in order to perform a numerical convergence analysis of the proposed decoding scheme and compare it to the known BF variants [20]. Let N denote the number of simulated transmitted blocks required to observe at least 200 erroneous decoded words at each SNR, and let Pall denote the total number of iterations used for decoding all N blocks, with Pmax = 50 for each block processed in this study.

The average number of iterations is obtained by the following ratio:

Itr = Pall / N.

The curves of the average number of iterations for the LDPC1, LDPC2, and LDPC4 codes listed in Table 1, with respect to the different SNRs, are shown in Figures 13–15, respectively. In Figure 13, the TRSBF decoder presents lower complexity in terms of the average number of iterations than the SBF, GDBF, and NGDBF decoders. This decoder also has an advantage in BER performance over the SBF and GDBF decoders.

In addition to the gain in BER performance of our TRSBF decoder over the SBF, GDBF, and NGDBF decoders, we can see clearly in Figure 14 that our TRSBF decoder presents a modest advantage in terms of the average number of iterations over the compared decoders except for the SBF.

In Figure 15, we can see that in the entire range of SNRs, the TRSBF decoder needs fewer iterations to achieve convergence than the GDBF and NGDBF decoders.

We can clearly see that the proposed decoding algorithm requires fewer decoding iterations than the other soft variants of BF decoders. Consequently, when the analytic complexity of Table 4 is taken into account, the proposed algorithm is less complex than GDBF and NGDBF; the TRSBF thus achieves fast convergence.

5.3. Cardinality of the Set B

We explore the average size of the Set B with respect to the SNR for different codes (Table 1) in order to investigate the complexity of the second-stage processing of our decoder.

Figure 16 shows the average size of the Set B for each code across the SNRs. The average size of the Set B clearly decreases as the SNR increases. Moreover, for the two codes with the same length and rate, LDPC2 and LDPC3, the average size of the Set B decreases faster for the LDPC2 code than for the LDPC3 code. This observation explains the better BER performance of the LDPC2 code compared to the LDPC3 code and confirms the advantage of the DSC code class over the Gallager code class.

5.4. Computational Complexity

To evaluate the computational complexity of the proposed decoding algorithm for one iteration, we denote by δ = m·dr = n·dc the number of 1-entries in the parity-check matrix H and by β the size of the Set B (see Equation (10)).

Table 5 shows a comparison of the computational complexities of several decoding algorithms for LDPC codes.

To compute the syndrome s of a received vector in Step 1 of the TRSBF algorithm, we need m·(dr − 1) = δ − m binary operations (BO). To identify the Set B in Step 2, we then need n integer comparisons (IC). In Step 3 of the algorithm, the computation of the μ_k requires β real additions (RA) and β real multiplications (RM). Thereafter, in Step 4, finding k0 requires β real comparisons (RC), and flipping the bit requires one binary operation (BO).

In Table 5, we can see that our decoder’s complexity has two parts: the first, like that of the hard decoding algorithms, consists of BO, IA, and IC operations; the second, like that of the soft decoding algorithms, consists of RA, RC, and RM operations. Therefore, to evaluate the complexity of our decoder, we need to study the range of values of the parameter β (the average size of the Set B).

In Figure 16, we have plotted the average size of the Set B obtained for 1,000 erroneous received sequences versus SNRs. Figure 16 also shows the average number of iterations versus SNRs for the LDPC codes.

By considering the real complexity parts (RA, RC, and RM) in Table 5 and the average size of the Set B shown in Figure 16, we can conclude that the complexity of our decoder is lower than that of WBF and its variants, since δ = n·dc ≫ n ≫ β, while our decoder provides better performance in terms of BER and BLER.

6. Conclusion

This paper proposes a new BF algorithm based on the reliability of the received signal: TRSBF. The proposed algorithm yielded better decoding performance than some known BF algorithms for the studied LDPC codes.

The proposed algorithm uses a two-round selection approach to get the bit to be flipped, by separating the hard decision information from the soft one. In the first round, only hard information is used, and its solutions are refined by the second round, which is based on the soft information. An advantage of our decoder is that it can be applied to both regular and irregular LDPC codes.

The proposed algorithm achieves effective tradeoffs between performance and decoding complexity.

The complexity study has shown that our algorithm has low complexity and a fast convergence rate compared to the other soft and hard decoders, in terms of both iterations and computational complexity.

We believe that these findings invite further investigation of the performance of the proposed algorithm for irregular codes and for codes with large block lengths.

Data Availability

The data supporting the findings of this study are available from the authors upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.