Temporally enhanced erasure codes for reliable communication protocols
Introduction
A wide range of applications in the future Internet will require both reliable and timely transmission of data packets between a sender and one or more receivers; examples include network whiteboards and video/audio conferencing systems. Providing both reliability and timeliness in packet transmission is a difficult problem, and several approaches have been proposed and used.
Two different mechanisms exist for error control in network applications: automatic repeat request (ARQ) and forward error correction (FEC). ARQ is based on timeouts and retransmission of packets, and has been adopted in the TCP protocol.
However, ARQ has its limitations, especially when there are timing constraints on packet delivery, as in real-time multimedia applications, or when packets are multicast to many receivers. FEC can be used in those environments in which ARQ cannot be used effectively [3], [4], [6], [7]. The sender transmits additional packets along with the original data packets so that, in case of packet losses, they can be used to recover the original data packets lost during transmission. One advantage of this approach is that no further interaction between sender and receiver is needed as long as the lost packets can be recovered from the received ones. In FEC, the sender performs encoding and the receiver performs decoding to reconstruct the lost data packets [4], [6], [9]. One well-known class of FEC codes is erasure codes [6]. In an erasure code, the stream of data packets is first divided into blocks of k packets, and n−k (>0) encoded repair packets are generated for each block from the original packets. The sender sends n packets per block, and the receiver can recover the k original data packets as long as it receives at least k distinct original or repair packets. Hence, encoding and decoding in such protocols are performed on a block-by-block basis. Tornado codes [9] are also block-based FEC codes, but they are probabilistic codes with a lower algorithmic complexity at the price of some decoding overhead at the receiver.
The FEC technique has been used in many applications because of these advantages. For example, loss recovery using FEC codes has been studied in the context of ATM networks. Shacham and McKenney [10] studied the performance of FEC based on the exclusive-OR (XOR) operation to recover from cell losses in ATM networks. The use of FEC for MPEG-2 video transport in a wireless ATM LAN was studied by Ayanoglu et al. [11], and the performance of a two-level FEC technique for lost-cell recovery in ATM networks was studied in [12]. The applications and performance of FEC have also been studied for multicast, where FEC was shown to be a viable solution approach for reliable multicast transmissions [3], [4], [7], [8].
In this paper, a new mechanism named temporally enhanced FEC (TEFEC) is proposed as an enhancement to existing block-based FEC codes such as erasure and Tornado codes. To show its feasibility, the mechanism is applied to erasure codes to enhance their error-correction capability, yielding new codes named temporally enhanced erasure codes (TEEC), which are developed and presented in this paper. TEFEC can be applied to other FEC codes as well, but erasure codes were chosen as the application platform because of their mathematical simplicity and demonstrated feasibility [6].
The basic idea of TEFEC is that the scopes of encoding and decoding may be expanded beyond block boundaries and may overlap with the scopes of neighboring blocks. The encoding scope factor is defined as the number of blocks from which the encoded packets are generated. For example, Fig. 1 shows how encoding is performed in TEFEC with an encoding scope factor of 2. Fig. 2, Fig. 3, Fig. 4, Fig. 5 show the differences between TEFEC and traditional block-based FEC in terms of error-correcting capability and recovery delay.
In Fig. 2, erasure codes are used to produce one encoded (repair) packet for each block of two data packets. Note that a lost packet in block i is recovered at the receiver from the two packets that did arrive, one original data packet and one repair packet. However, if both data packets in block i are lost, neither of them can be recovered.
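The single-repair-packet case of Fig. 2 reduces to a plain XOR parity, and can be sketched as follows; the packet contents and helper names are illustrative, not from the paper.

```python
def make_repair(block):
    """Repair packet = byte-wise XOR of all data packets in the block."""
    repair = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            repair[i] ^= b
    return bytes(repair)

def recover_single_loss(received, repair):
    """Recover one missing packet: XOR the repair packet with the survivors."""
    lost = bytearray(repair)
    for pkt in received:
        for i, b in enumerate(pkt):
            lost[i] ^= b
    return bytes(lost)

block_i = [b"\x01\x02", b"\x0a\x0b"]   # k = 2 data packets
r = make_repair(block_i)               # n - k = 1 repair packet
# Suppose the second data packet is lost in transit:
assert recover_single_loss([block_i[0]], r) == block_i[1]
```

If both data packets of the block are lost, the single XOR equation is underdetermined, which is exactly the failure case noted above.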
In Fig. 3, the block size is doubled, i.e., four data packets per block, and the erasure codes are used to generate two encoded packets. Up to two packet losses per block can be tolerated in this scheme because of the increased block size. However, the receiver has to wait until all the packets in the enlarged block arrive, whether one packet or two were lost, which means a longer delay until recovery.
In TEFEC, by contrast, as shown in Fig. 4, the loss of three data packets (two in block i plus one in block i+1) may also be recovered, which is impossible with the traditional codes. This is because the three repair packets in blocks i, i+1, and i+2 can be used together to recover the lost packets. Thus, with the same amount of bandwidth spent on encoded packets, TEFEC may achieve a higher error-correcting capability.
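Why overlapping repair packets recover more losses can be sketched as a small linear system. Plain XOR is not enough here (two repair packets covering the same pair of unknowns would give linearly dependent equations), so the sketch below uses arithmetic modulo the prime 257, a common toy stand-in for the finite-field arithmetic of real erasure codes; the block values, coefficients, and scope layout are all invented for illustration and are not the paper's actual construction.

```python
P = 257  # toy prime field standing in for GF(2^8)

def inv(a):  # modular inverse via Fermat's little theorem
    return pow(a, P - 2, P)

def solve_mod(A, b):
    """Gaussian elimination over GF(P) for a square system A x = b."""
    n = len(b)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        s = inv(M[col][col])
        M[col] = [x * s % P for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % P for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

# Data packets shortened to one symbol each; blocks of k = 2.
# The "lost" values are kept only to synthesize what the sender transmitted.
blocks = {
    0: [17, 42],    # block i-1 (fully received)
    1: [99, 7],     # block i   (both packets lost)
    2: [128, 200],  # block i+1 (first packet lost)
    3: [5, 60],     # block i+2 (fully received)
}

# Scope factor v = 2: the repair packet of block j mixes blocks j-1 and j,
# with distinct coefficients so the equations stay independent (assumed).
coef = {1: [1, 2, 3, 4], 2: [5, 6, 7, 8], 3: [9, 10, 11, 12]}
def repair(j):
    return sum(c * s for c, s in zip(coef[j], blocks[j - 1] + blocks[j])) % P

r1, r2, r3 = repair(1), repair(2), repair(3)

# Unknowns x1, x2 (block i) and x3 (block i+1, first packet):
# subtract the known terms from each repair equation.
b1 = (r1 - coef[1][0] * blocks[0][0] - coef[1][1] * blocks[0][1]) % P
b2 = (r2 - coef[2][3] * blocks[2][1]) % P
b3 = (r3 - coef[3][1] * blocks[2][1]
         - coef[3][2] * blocks[3][0] - coef[3][3] * blocks[3][1]) % P

A = [[3, 4, 0], [5, 6, 7], [0, 0, 9]]  # coefficients of the unknowns
x1, x2, x3 = solve_mod(A, [b1, b2, b3])
assert (x1, x2, x3) == (99, 7, 128)    # all three lost packets recovered
```

A traditional block-local code with one repair packet per block could never separate x1 from x2; the extra equations come from neighboring blocks' repair packets, which is the point of Fig. 4.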
Fig. 5 shows another advantage of TEFEC. A one-packet loss can be recovered immediately after block i arrives, and a two-packet loss can be recovered after the arrival of block i+1. In other words, in TEFEC, the loss of a small number of packets may incur smaller recovery delays.
In this paper, the detailed TEEC algorithms are developed and presented, and their performance is compared to that of the block-based erasure codes. TEEC is developed and implemented by extending the erasure codes of Ref. [6]. In addition, to show their applicability, a reliable protocol combining ARQ and TEEC is developed and simulated. It is shown that, in most cases, the error-correcting capability can be enhanced substantially without increasing the average recovery delay at the receiver. The price paid is the encoding/decoding overhead at the sender and the receiver, which increases linearly with the encoding scope factor. However, as shown in Section 5, significant performance improvement is achieved already for small values of the encoding scope factor (⩽5), so the factor does not need to be arbitrarily large in TEEC. In particular, if the end hosts have sufficient computational power to handle the increased encoding and decoding complexity (Section 5), the proposed reliable protocol can be employed effectively, reducing average packet retransmission rates without increasing average recovery delays.
TEEC can be utilized in many applications to enhance error-correcting capability without increasing the bandwidth spent on encoded packets and without affecting the real-time behavior of the applications. It is also useful when the network path has long delays, or when sending back NAK packets is expensive, e.g., because of power constraints in mobile communication devices. For multicast applications, FEC was shown to be a viable solution approach [3], [4], [7], [8]; owing to its enhanced error-correcting capability, TEEC can serve as an even better solution in such applications.
The paper is organized as follows. Section 2 summarizes erasure codes [2], [6]. Section 3 presents the TEEC encoding and decoding algorithms, and Section 4 presents the detailed reliable protocol that combines ARQ and TEEC. Section 5 discusses the run-time complexity of the TEEC encoding and decoding processes. Section 6 presents the simulation settings and results for the reliable protocol with TEEC and for traditional protocols with block-based FEC techniques. Finally, Section 7 concludes the paper.
Introduction to erasure codes
A brief introduction to erasure codes is given in this section, covering their principles and computational complexities. More detailed descriptions of the codes can be found in the literature [1], [2], [5], [6]. The erasure code presented and used in this paper is a linear block code whose principles and implementation techniques are nicely presented in Ref. [6].
In erasure codes the original data packets to be sent by the sender are divided into independent blocks whose size (in terms
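As a concrete toy illustration of such a linear block code, the sketch below encodes k = 3 one-symbol packets into n = 5 using a Vandermonde matrix over the prime field GF(257) and decodes from any k survivors. Ref. [6] uses a systematic construction over GF(2^8), so the field, the non-systematic layout, and all values here are simplifications for illustration, not the paper's actual code.

```python
P = 257  # toy prime field; Ref. [6] actually works over GF(2^8)

# Vandermonde generator: row j encodes with powers of x_j = j + 1.
k, n = 3, 5
data = [11, 22, 33]  # one symbol per packet, for brevity
G = [[pow(j + 1, e, P) for e in range(k)] for j in range(n)]
encoded = [sum(g * d for g, d in zip(row, data)) % P for row in G]

# Receiver gets symbols 0, 2, 4 (two losses): any k rows of a Vandermonde
# matrix are independent, so the k-by-k subsystem is invertible.
idx = [0, 2, 4]
A = [G[j] for j in idx]
b = [encoded[j] for j in idx]

def det3(M):  # 3x3 determinant mod P
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) % P

# Cramer's rule over GF(P): replace each column with b in turn.
Dinv = pow(det3(A), P - 2, P)  # modular inverse of det(A)
recovered = []
for col in range(k):
    Ac = [row[:] for row in A]
    for r in range(k):
        Ac[r][col] = b[r]
    recovered.append(det3(Ac) * Dinv % P)
assert recovered == data  # original packets reconstructed from any k of n
```

A production code would make the first k rows the identity so that the original packets are sent unmodified and decoding is needed only when losses occur.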
Temporally enhanced erasure codes
The erasure code techniques explained in the previous section perform encoding and decoding using blocks as their basic units. The blocks are mutually independent in the sense that encoding and decoding of one block do not affect those of other blocks. The common approach is to divide the entire data packet stream into packet blocks and apply the encoding and decoding algorithms to each block.
A new approach, TEEC, has been developed to reduce the amount of bandwidth required for additional
Reliable protocol using TEEC
In this section, a reliable protocol is presented that uses the TEEC encoding/decoding algorithms to recover packets lost during transmission. The receiver explicitly requests the sender to resend lost data packets (or additional repair packets) when decoding cannot be performed with the packets that have arrived.
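The request-on-failure rule can be sketched as receiver-side logic; the function names, packet identifiers, and triggering convention below are invented for illustration and are not the paper's protocol specification.

```python
def on_block_complete(block_id, received, k, send_nak):
    """Decide what to do once a block's transmission window has passed.

    `received` is the set of distinct original/repair packets that arrived
    for this block's decoding scope. With an erasure code, any k distinct
    packets suffice to decode; otherwise ask the sender for the shortfall.
    """
    missing = k - len(received)
    if missing <= 0:
        return "decode"           # enough packets: reconstruct the block
    send_nak(block_id, missing)   # ARQ fallback: request `missing` more
    return "wait"

naks = []
log = lambda b, m: naks.append((b, m))
assert on_block_complete(7, {"d0", "r0"}, 2, log) == "decode"
assert on_block_complete(8, {"d0"}, 2, log) == "wait"
assert naks == [(8, 1)]           # one NAK sent, asking for one more packet
```

The key design point is that retransmission is a fallback: NAKs are sent only when the FEC layer cannot decode, so in the common case no feedback traffic is generated at all.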
The original data packet stream is divided into blocks that contain k packets, and the sender also sends n−k redundant (encoded) repair packets along with the k data packets for each
Computational complexity of TEEC
In this section the complexity analysis of TEEC is presented. When n−k repair packets are generated by the sender, the complexity of the encoding process is O(v(n−k)kσ), where σ denotes the number of data items (e.g., bytes) in a packet. This follows from the facts that the traditional erasure codes take O((n−k)kσ) time [6] and that, in TEEC, v blocks are used to generate the redundant packets for one block. Hence, the encoding complexity increases by a factor of v. Note that the complexity of
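The O(v(n−k)kσ) encoding cost can be made concrete by counting the multiply-accumulate steps of the loop nest it describes; the loop structure below is a hypothetical rendering of the cost model, not the paper's implementation.

```python
def encode_ops(v, n, k, sigma):
    """Count multiply-accumulate operations for one block's repair packets.

    Each of the n - k repair packets combines all k data packets of each of
    the v blocks in scope, symbol by symbol (sigma symbols per packet).
    """
    ops = 0
    for _repair in range(n - k):
        for _block in range(v):
            for _pkt in range(k):
                for _sym in range(sigma):
                    ops += 1  # one coefficient multiply + one accumulate
    return ops

# The count matches the closed form v * (n - k) * k * sigma:
assert encode_ops(v=2, n=6, k=4, sigma=512) == 2 * 2 * 4 * 512
```

Setting v = 1 recovers the traditional block-based cost (n−k)kσ of Ref. [6], which is why the overhead grows only linearly with the encoding scope factor.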
Simulation of reliable protocol with TEEC
A simulation program was written to test the performance of the reliable protocol utilizing TEEC and ARQ, and the results are compared with those utilizing the traditional erasure codes. The sender continuously sends the initial data and repair packets on a block-by-block basis with the same inter-packet sending time, i.e., the data and repair packets for one block are sent, and then the same process is repeated for the following blocks. Three parameters are used by the sender for encoding, n, k,
Conclusion
A new FEC technique, TEEC, and a new reliable protocol utilizing TEEC have been developed in this paper, together with performance results. The simulation results of the protocol imply that TEEC enhances error-correction capability with small recovery delays. The price paid is the encoding/decoding overhead at the sender and the receiver, which increases linearly with the encoding scope factor. However, as is shown in Section 5 of this paper, we were able to achieve
Seonho Choi is an Assistant Professor of Computer Science at Bowie State University, Bowie, Maryland. Dr. Choi received his BS degree in Computer Science and Statistics from Seoul National University, Seoul, Korea, in 1990, and his Ph.D. degree in Computer Science from the University of Maryland at College Park in 1997. His research interests include computer networks, network security, and real-time systems.
References (12)
R.E. Blahut, Theory and Practice of Error Control Codes, 1984.
S. Lin et al., Error Control Coding: Fundamentals and Applications, 1983.
J. Nonnenmacher, E.W. Biersack, Reliable multicast: where to use forward error correction, Proceedings of the 5th...
J. Nonnenmacher et al., Parity-based loss recovery for reliable multicast transmission, IEEE/ACM Trans. Networking, 1998.
Introduction to Error-Correcting Codes, 1989.
L. Rizzo, Effective erasure codes for reliable computer communication protocols, ACM Comput. Commun. Rev., 1997.